DOI 10.1007/s10107-016-1061-z

FULL LENGTH PAPER

A parametric simplex algorithm for linear vector optimization problems

Birgit Rudloff1 · Firdevs Ulus2 · Robert Vanderbei3

Received: 9 July 2015 / Accepted: 29 July 2016 / Published online: 8 August 2016
© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2016

Abstract In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex (the Evans–Steuer) algorithm (Math Program 5(1):54–72, 1973). Different from it, the proposed algorithm works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a solution in the sense of Löhne (Vector optimization with infimum and supremum. Springer, Berlin, 2011), that is, it finds a subset of efficient solutions that allows to generate the whole efficient frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which originally is designed for solving single objective linear optimization problems, and is modified to solve two objective bounded LVOPs with the positive orthant as the ordering cone in Ruszczyński and Vanderbei (Econometrica 71(4):1287–1297, 2003). The algorithm proposed here works for any dimension, any solid pointed polyhedral ordering cone C and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective space based LVOP algorithm (Benson's algorithm in Hamel et al. (J Global Optim 59(4):811–836, 2014)), which also provides a solution in the sense of Löhne (2011), and with the Evans–Steuer algorithm (1973). The results show that for non-degenerate problems the proposed algorithm outperforms Benson's algorithm and is on par with the Evans–Steuer algorithm. For highly degenerate problems Benson's algorithm (Hamel et al. 2014) outperforms the simplex-type algorithms; however, the parametric simplex algorithm is for these problems computationally much more efficient than the Evans–Steuer algorithm.

Firdevs Ulus
firdevs@bilkent.edu.tr

Birgit Rudloff
birgit.rudloff@wu.ac.at

Robert Vanderbei
rvdb@princeton.edu

1 Institute for Statistics and Mathematics, Vienna University of Economics and Business, 1020 Vienna, Austria
2 Department of Industrial Engineering, Bilkent University, 06800 Ankara, Turkey
3 Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ, USA

Keywords Linear vector optimization · Multiple objective optimization · Algorithms · Parameter space segmentation

Mathematics Subject Classification 90C29 · 90C05 · 90-08

1 Introduction

Vector optimization problems have been studied for decades and many methods have been developed to solve or approximately solve them. In particular, there are a variety of algorithms to solve linear vector optimization problems (LVOPs).

1.1 Related literature

Among the algorithms that can solve LVOPs, some are extensions of the simplex method and work in the variable space. In 1973, Evans and Steuer [15] developed a multi-objective simplex algorithm that finds the set of all 'efficient extreme solutions' and the set of all 'unbounded efficient edges' in the variable space, see also [11, Algorithm 7.1]. Later, several variants of this algorithm were developed, see for instance [1,2,9,10,17,33]. More recently, Ehrgott, Puerto and Rodríguez-Chía [13] developed a primal–dual simplex method that works in the parameter space. This algorithm is not guaranteed to find the set of all efficient solutions; instead, it provides a subset of efficient solutions that is enough to generate the whole efficient frontier in case the problem is 'bounded'. All of these simplex-type algorithms are designed to solve LVOPs with any number of objective functions where the ordering is component-wise. Among these, the Evans–Steuer algorithm [15] is implemented in a software called ADBASE [31]. The idea of decomposing the parameter set is also used to solve multiobjective integer programs, see for instance [26].

In [27], Ruszczyński and Vanderbei developed an algorithm to solve LVOPs with two objectives; the efficiency of this algorithm is equivalent to that of solving a single scalar linear program by the parametric simplex algorithm. Indeed, the algorithm is a modification of the parametric simplex method, and it produces a subset of efficient solutions that generates the whole efficient frontier in case the problem is bounded.

Apart from the algorithms that work in the variable or parameter space, there are algorithms working in the objective space. In [8], Dauer and Liu proposed a procedure to determine the 'maximal' extreme points and edges of the image of the feasible region. Later, Benson [5] proposed an outer approximation algorithm that also works in the objective space. These methods are motivated by the observation that the dimension of the objective space is usually much smaller than the dimension of the variable space, and decision makers tend to choose a solution based on objective values rather than variable values, see for instance [7]. Löhne [19] introduced a solution concept for LVOPs that takes these ideas into account. Accordingly, a solution consists of a set of 'point maximizers (efficient solutions)' and a set of 'direction maximizers (unbounded efficient edges)', which altogether generate the whole efficient frontier. If a problem is 'unbounded', then a solution needs to have a nonempty set of direction maximizers. There are several variants of Benson's algorithm for LVOPs. Some of them can also solve unbounded problems as long as the image has at least one vertex, but only by using an additional Phase 1 algorithm, see for instance [19, Section 5.4]. The algorithms provided in [12,19,29,30] solve in each iteration at least two LPs that are of the same size as the original problem. An improved variant where only one LP has to be solved in each iteration was proposed independently in [6] and [16]. In addition to solving (at least) one LP, these algorithms solve a vertex enumeration problem in each iteration. As can be seen in [6,22,23,29], it is also possible to employ an online vertex enumeration method. In this case, instead of solving a vertex enumeration problem from scratch in each iteration, the vertices are updated after the addition of a new inequality. Recently, Benson's algorithm was extended to approximately solve bounded convex vector optimization problems in [14,21].

1.2 The proposed algorithm

In this paper, we develop a parametric simplex algorithm to solve LVOPs of any size and with any solid pointed polyhedral ordering cone. Although the structure of the algorithm is similar to the Evans–Steuer algorithm, it differs in that the algorithm proposed here works in the parameter space and finds a solution in the sense that Löhne proposed in [19]. In other words, instead of generating the set of all point and direction maximizers, it only finds a subset of them that already allows to generate the whole efficient frontier. More specifically, the difference shows in two places. First, in each iteration, instead of performing a pivot for each 'efficient nonbasic variable', we perform a pivot only for a subset of them. This already decreases the total number of pivots performed throughout the algorithm. In addition, the method of finding this subset of efficient nonbasic variables is more efficient than the method that is needed to find the whole set. Secondly, for an entering variable, instead of performing all possible pivots for all 'efficient basic variables' as in [15], we perform a single pivot by picking only one of them as the leaving variable. In this sense, the algorithm provided here can also be seen as a generalization of the algorithm proposed by Ruszczyński and Vanderbei [27] to unbounded LVOPs with more than two objectives and with more general ordering cones.

In each iteration the algorithm provides a set of parameters which make the current vertex optimal. This parameter set is given by a set of inequalities among which the redundant ones are eliminated. This is an easier procedure than the vertex enumeration problem that is required in some objective space algorithms. Different from the objective space algorithms, the algorithm provided here does not require solving an additional LP in each iteration. Moreover, the parametric simplex algorithm works also for unbounded problems, even if the image has no vertices, and generates direction maximizers at no additional cost.

As in the scalar case, the efficiency of simplex-type algorithms is expected to be better whenever the problem is non-degenerate. In vector optimization problems, one may observe different types of redundancies if the problem is degenerate. The first one corresponds to the 'primal degeneracy' concept in scalar problems. In this case, a simplex-type algorithm may find the same 'efficient solution' for many iterations. That is to say, one remains at the same vertex of the feasible region for more than one iteration. The second type of redundancy corresponds to the 'dual degeneracy' concept in scalar problems. Accordingly, the algorithm may find different 'efficient solutions' which yield the same objective values. In other words, one remains at the same vertex of the image of the feasible region. In addition to these, a simplex-type algorithm for LVOPs may find efficient solutions which yield objective values that are not vertices of the image of the feasible region. Note that these points, which lie on a non-vertex face of the image set, are not necessary to generate the whole efficient frontier. Thus, one can consider these solutions as redundant as well.

The parametric simplex algorithm provided here may also find redundant solutions. However, it will be shown that the algorithm terminates in finite time, that is, there is no risk of cycling. Moreover, compared to the Evans–Steuer algorithm, the parametric simplex algorithm finds far fewer redundant solutions in general.

We provide different initialization methods. One of the methods requires solving two LPs, while a second method can be seen as a Phase 1 algorithm. Both of these methods work for any LVOP. Depending on the structure of the problem, it is also possible to initialize the algorithm without solving an LP or performing a Phase 1 algorithm.

This paper is structured as follows. Section 2 is dedicated to basic concepts and notation. In Sect. 3, the linear vector optimization problem and solution concepts are introduced. The parametric simplex algorithm is provided in Sect. 4. Different methods of initialization are explained in Sect. 4.8. Illustrative examples are given in Sect. 5. In Sect. 6, we compare the parametric simplex algorithm provided here with the different simplex algorithms for solving LVOPs that are available in the literature. Finally, some numerical results regarding the efficiency of the proposed algorithm compared to Benson's algorithm and the Evans–Steuer algorithm are provided in Sect. 7.

2 Preliminaries

For a set A ⊆ R^q, A^C, int A, ri A, cl A, bd A, conv A, and cone A denote the complement, interior, relative interior, closure, boundary, convex hull, and conic hull of it, respectively. If A ⊆ R^q is a non-empty polyhedral convex set, it can be represented as

A = conv{x1, . . . , xs} + cone{k1, . . . , kt},   (1)

where s ∈ N\{0}, t ∈ N, each xi ∈ R^q is a point, and each kj ∈ R^q\{0} is a direction of A, that is, A + cone{kj}\{0} ⊆ A. The set A∞ := cone{k1, . . . , kt} is the recession cone of A. The set of points {x1, . . . , xs} together with the set of directions {k1, . . . , kt} are called the generators of the polyhedral convex set A. We say ({x1, . . . , xs}, {k1, . . . , kt}) is a V-representation of A whenever (1) holds. For convenience, we define cone ∅ := {0}.

A convex cone C is said to be non-trivial if {0} ⊊ C ⊊ R^q, and pointed if it does not contain any line. A non-trivial convex pointed cone C defines a partial ordering ≤_C on R^q:

v ≤_C w :⇔ w − v ∈ C.

For a non-trivial convex pointed cone C ⊆ R^q, a point y ∈ A is called a C-maximal element of A if ({y} + C\{0}) ∩ A = ∅. If the cone C is solid, that is, if it has a non-empty interior, then a point y ∈ A is called weakly C-maximal if ({y} + int C) ∩ A = ∅. The set of all (weakly) C-maximal elements of A is denoted by (w)Max_C(A). The set of (weakly) C-minimal elements is defined by (w)Min_C(A) := (w)Max_{−C}(A). The (positive) dual cone of C is the set C^+ := {z ∈ R^q | ∀y ∈ C: z^T y ≥ 0}. The positive orthant of R^q is denoted by R^q_+, that is, R^q_+ := {y ∈ R^q | y_i ≥ 0, i = 1, . . . , q}.

3 Linear vector optimization problems

We consider a linear vector optimization problem (LVOP) in the following form

maximize P^T x (with respect to ≤_C)   (P)
subject to Ax ≤ b, x ≥ 0,

where P ∈ R^{n×q}, A ∈ R^{m×n}, b ∈ R^m, and C ⊆ R^q is a solid polyhedral pointed ordering cone. We denote the feasible set by X := {x ∈ R^n | Ax ≤ b, x ≥ 0}. Throughout, we assume that (P) is feasible, i.e., X ≠ ∅. The image of the feasible set is defined as P^T[X] := {P^T x ∈ R^q | x ∈ X}.

We consider the solution concept for LVOPs as in [19]. To do so, let us recall the following. A point x̄ ∈ X is said to be a (weak) maximizer for (P) if P^T x̄ is (weakly) C-maximal in P^T[X]. The set of (weak) maximizers of (P) is denoted by (w)Max(P).

The homogeneous problem of (P) is given by

maximize P^T x (with respect to ≤_C)   (Ph)
subject to Ax ≤ 0, x ≥ 0.

The feasible region of (Ph), namely X^h := {x ∈ R^n | Ax ≤ 0, x ≥ 0}, satisfies X^h = X∞, that is, the non-zero points in X^h are exactly the directions of X. A direction k ∈ R^n\{0} of X is called a (weak) maximizer for (P) if the corresponding point k ∈ X^h\{0} is a (weak) maximizer of the homogeneous problem (Ph).

Definition 3.1 [16,19] A set X̄ ⊆ X is called a set of feasible points for (P), and a set X̄^h ⊆ X^h\{0} is called a set of feasible directions for (P).

A pair of sets (X̄, X̄^h) is called a finite supremizer for (P) if X̄ is a non-empty finite set of feasible points for (P), X̄^h is a (not necessarily non-empty) finite set of feasible directions for (P), and

conv P^T[X̄] + cone P^T[X̄^h] − C = P^T[X] − C.   (2)

A finite supremizer (X̄, X̄^h) of (P) is called a solution to (P) if it consists only of maximizers.

The set P := P^T[X] − C is called the lower image of (P). Let y1, . . . , yt be the generating vectors of the ordering cone C. Then ({0}, {y1, . . . , yt}) is a V-representation of the cone C, that is, C = cone{y1, . . . , yt}. Clearly, if (X̄, X̄^h) is a finite supremizer, then (P^T[X̄], P^T[X̄^h] ∪ {−y1, . . . , −yt}) is a V-representation of the lower image P.

Definition 3.2 (P) is said to be bounded if there exists p ∈ R^q such that P ⊆ {p} − C.

Remark 3.3 Note that the recession cone of the lower image, P∞, is equal to the lower image of the homogeneous problem, that is, P∞ = P^T[X^h] − C, see [19, Lemma 4.61]. Clearly, P∞ ⊇ −C, which also implies P∞^+ ⊆ −C^+. In particular, if (P) is bounded, then we have P∞ = −C and X̄^h = ∅.

The weighted sum scalarized problem for a parameter vector w ∈ C^+ is

maximize w^T P^T x   (P1(w))
subject to Ax ≤ b, x ≥ 0,

and the following well-known proposition holds.

Proposition 3.4 ([24, Theorem 2.5]) A point x̄ ∈ X is a maximizer (weak maximizer) of (P) if and only if it is an optimal solution to (P1(w)) for some w ∈ int C^+ (w ∈ C^+\{0}).
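To make the role of (P1(w)) concrete, the following is a minimal sketch (ours, not from the paper) of how the weighted sum scalarization can be solved with scipy; the helper name solve_weighted_sum is introduced here, and the data P, A, b encode Example 5.1 of Sect. 5.

import numpy as np
from scipy.optimize import linprog

def solve_weighted_sum(P, A, b, w):
    # Solve max w^T P^T x  s.t.  A x <= b, x >= 0.  linprog minimizes,
    # so the objective vector P w is negated; status 3 signals that
    # (P1(w)) is unbounded for this weight w.
    res = linprog(-(P @ w), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res.x, res.status

# Data of Example 5.1: P^T x = (x1, x2 - x3, x3), C = R^3_+.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])      # P is n x q; column j is objective j
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, -1.0]])
b = np.array([5.0, 9.0])
x, status = solve_weighted_sum(P, A, b, np.array([1/3, 1/3, 1/3]))

For w strictly inside C^+ = R^3_+, any returned optimal x is a maximizer of (P) by Proposition 3.4.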

Proposition 3.4 suggests that if one could generate optimal solutions, whenever they exist, to the problems (P1(w)) for w ∈ int C^+, then this set of optimal solutions X̄ would be a set of (point) maximizers of (P). Indeed, it will be enough to solve problem (P1(w)) for w ∈ ri W, where

W := {w ∈ C^+ | w^T c = 1},   (3)

for some fixed c ∈ int C. Note that (P1(w)) is not necessarily bounded for all w ∈ ri W. Denote the set of all w ∈ ri W such that (P1(w)) has an optimal solution by W^b. If one can find a finite partition (W^b_i)_{i=1}^s of W^b such that for each i ∈ {1, . . . , s} there exists x^i ∈ X which is an optimal solution to (P1(w)) for all w ∈ W^b_i, then, clearly, X̄ = {x1, . . . , xs} will satisfy (2) provided one can also generate a finite set of (direction) maximizers X̄^h. Trivially, if problem (P) is bounded, then (P1(w)) can be solved optimally for all w ∈ C^+, X̄^h = ∅, and (X̄, ∅) satisfies (2). If problem (P) is unbounded, we will construct in Sect. 4 a set X̄^h by adding certain directions to it whenever one encounters a set of weight vectors w ∈ C^+ for which (P1(w)) cannot be solved optimally. The following proposition will be used to prove that this set X̄^h, together with X̄ = {x1, . . . , xs}, will indeed satisfy (2). It provides a characterization of the recession cone of the lower image in terms of the weighted sum scalarized problems. More precisely, the negative of the dual of the recession cone of the lower image can be shown to consist of those w ∈ C^+ for which (P1(w)) can be optimally solved.

Proposition 3.5 The recession cone P∞ of the lower image satisfies

−P∞^+ = {w ∈ C^+ | (P1(w)) is bounded}.

Proof By Remark 3.3, we have P∞ = P^T[X^h] − C. Using 0 ∈ X^h and 0 ∈ C, we obtain

−P∞^+ = {w ∈ R^q | ∀x^h ∈ X^h, ∀c ∈ C: w^T(P^T x^h − c) ≤ 0}
       = {w ∈ C^+ | ∀x^h ∈ X^h: w^T P^T x^h ≤ 0}.   (4)

Let w ∈ −P∞^+, and consider the weighted sum scalarized problem of (Ph) given by

maximize w^T P^T x   (Ph1(w))
subject to Ax ≤ 0, x ≥ 0.

By (4), x^h = 0 is an optimal solution, which implies by strong duality of the linear program that there exists y* ∈ R^m with A^T y* ≥ Pw and y* ≥ 0. Then, y* is also dual feasible for (P1(w)). By the weak duality theorem, (P1(w)) can not be unbounded.

For the reverse inclusion, let w ∈ C^+ be such that (P1(w)) is bounded, or equivalently, such that an optimal solution exists for (P1(w)), as we assume X ≠ ∅. By strong duality, the dual problem of (P1(w)) has an optimal solution y*, which is also dual feasible for (Ph1(w)). By weak duality, (Ph1(w)) is bounded and has an optimal solution x̃^h. Then, w ∈ −P∞^+ holds. Indeed, assuming the contrary, one can easily find a contradiction to the optimality of x̃^h. □

4 The parametric simplex algorithm for LVOPs

In [15], Evans and Steuer proposed a simplex algorithm to solve linear multiobjective optimization problems. The algorithm moves from one vertex of the feasible region to another until it finds the set of all extreme (point and direction) maximizers. In this paper we propose a parametric simplex algorithm to solve LVOPs where the structure of the algorithm is similar to the Evans–Steuer algorithm. Different from it, the parametric simplex algorithm provides a solution in the sense of Definition 3.1, that is, it finds subsets of extreme point and direction maximizers that generate the lower image. This allows the algorithm to deal with degenerate problems more efficiently than the Evans–Steuer algorithm. A more detailed comparison of the two algorithms can be found in Sect. 6.

In [27], Ruszczyński and Vanderbei generalize the parametric self-dual method, which originally is designed to solve scalar LPs [32], to solve two-objective bounded linear vector optimization problems. This is done by treating the second objective function as the auxiliary function of the parametric self-dual algorithm. The algorithm provided here can be seen as a generalization of this parametric simplex algorithm from biobjective bounded LVOPs to q-objective LVOPs (q ≥ 2) that are not necessarily bounded, where we also allow for an arbitrary solid polyhedral pointed ordering cone C. We first explain the algorithm for problems that have a solution. One can keep in mind that the methods of initialization proposed in Sect. 4.8 will verify whether the problem has a solution or not.

Assumption 4.1 There exists a solution to problem (P).

This assumption is equivalent to having a nontrivial lower image P, that is, ∅ ≠ P ⊊ R^q. Clearly P ≠ ∅ implies X ≠ ∅, which is equivalent to our standing assumption. Moreover, by Definition 3.1 and Proposition 3.4, Assumption 4.1 implies that there exists a maximizer, which guarantees that there exists some w0 ∈ int C^+ such that problem (P1(w0)) has an optimal solution. In Sect. 4.8, we will propose methods to find such a w0. It will be seen that the algorithm provided here finds a solution if one exists.

4.1 The parameter set

Throughout the algorithm we consider the scalarized problem (P1(w)) for w ∈ W, where W is given by (3) for some fixed c ∈ int C. As W is (q − 1)-dimensional, we will transform the parameter set W into a set Λ ⊆ R^{q−1}. Assume without loss of generality that c_q = 1. Indeed, since C was assumed to be a solid cone, there exists some c ∈ int C such that either c_q = 1 or c_q = −1. For c_q = −1, one can consider problem (P) where C and P are replaced by −C and −P.

Let c̃ = (c1, . . . , c_{q−1})^T ∈ R^{q−1} and define the function w(λ): R^{q−1} → R^q and the set Λ ⊆ R^{q−1} as follows:

w(λ) := (λ1, . . . , λ_{q−1}, 1 − c̃^T λ)^T,
Λ := {λ ∈ R^{q−1} | w(λ) ∈ C^+}.

As we assume c_q = 1, c^T w(λ) = 1 holds for all λ ∈ Λ. Then, w(λ) ∈ W for all λ ∈ Λ and, for any w ∈ W, (w1, . . . , w_{q−1})^T ∈ Λ. Moreover, if λ ∈ int Λ, then w(λ) ∈ ri W, and if w ∈ ri W, then (w1, . . . , w_{q−1})^T ∈ int Λ. Throughout the algorithm, we consider the parametrized problem

(Pλ) := (P1(w(λ)))
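In code, the embedding w(λ) and the test λ ∈ Λ are one-liners once C is given through generating vectors. The sketch below is ours (the names w_of_lambda and in_Lambda are not from the paper) and uses that w ∈ C^+ if and only if w^T y_i ≥ 0 for every generating vector y_i of C.

import numpy as np

def w_of_lambda(lam, c_tilde):
    # w(lambda) = (lambda_1, ..., lambda_{q-1}, 1 - c~^T lambda)^T
    return np.append(lam, 1.0 - c_tilde @ lam)

def in_Lambda(lam, c_tilde, Y, tol=1e-12):
    # lambda in Lambda iff w(lambda) in C+, i.e. w(lambda)^T y_i >= 0
    # for all generating vectors y_i of C (the columns of Y).
    return bool(np.all(w_of_lambda(lam, c_tilde) @ Y >= -tol))

Y = np.eye(3)                      # C = R^3_+, generated by the unit vectors
c_tilde = np.array([1.0, 1.0])     # c = (1, 1, 1)^T, so c~ = (1, 1)^T
print(in_Lambda(np.array([0.2, 0.3]), c_tilde, Y))   # True: w = (0.2, 0.3, 0.5)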


4.2 Segmentation of Λ: dictionaries and their optimality region

We will use the terminology for the simplex algorithm as it is used in [32]. First, we introduce slack variables [x_{n+1}, . . . , x_{n+m}]^T to obtain x ∈ R^{n+m} and rewrite (Pλ) as follows:

maximize w(λ)^T [P^T 0] x   (Pλ)
subject to [A I] x = b,
x ≥ 0,

where I is the identity and 0 is the zero matrix, both of the correct sizes. We consider a partition of the variable indices {1, 2, . . . , n + m} into two sets B and N. Variables x_j, j ∈ B, are called basic variables and x_j, j ∈ N, are called nonbasic variables. We write x = [x_B^T, x_N^T]^T and permute the columns of [A I] to obtain B, N satisfying [A I] x = B x_B + N x_N, where B ∈ R^{m×m} and N ∈ R^{m×n}. Similarly, we form matrices P_B ∈ R^{m×q} and P_N ∈ R^{n×q} such that [P^T 0] x = P_B^T x_B + P_N^T x_N. In order to keep the notation simple, instead of writing [P^T 0] x we will occasionally write P^T x, where x then stands for the original decision variables in R^n without the slack variables.

Whenever B is nonsingular, x_B can be written in terms of the nonbasic variables as x_B = B^{-1}b − B^{-1}N x_N. Then, the objective function of (Pλ) is

w(λ)^T [P^T 0] x = w(λ)^T ξ(λ) − w(λ)^T Z_N^T x_N,

where ξ(λ) = P_B^T B^{-1} b and Z_N = (B^{-1}N)^T P_B − P_N.

We say that each choice of basic and nonbasic variables defines a unique dictionary. Denote the dictionary defined by B and N by D. The basic solution that corresponds to D is obtained by setting x_N = 0. In this case, the values of the basic variables become B^{-1}b. Both the dictionary and the basic solution corresponding to this dictionary are said to be primal feasible if B^{-1}b ≥ 0. Moreover, if w(λ)^T Z_N^T ≥ 0, then we say that the dictionary D and the corresponding basic solution are dual feasible. We call a dictionary and the corresponding basic solution optimal if they are both primal and dual feasible.

For j ∈ N, introduce the halfspace

I_j^D := {λ ∈ R^{q−1} | w(λ)^T Z_N^T e_j ≥ 0},

where e_j ∈ R^n denotes the unit column vector with the entry corresponding to the variable x_j being 1. Note that if D is known to be primal feasible, then D is optimal for λ ∈ Λ^D, where

Λ^D := ∩_{j∈N} I_j^D.
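These quantities are straightforward to compute numerically. The following sketch (our naming, under the assumption that the basis matrix is nonsingular) assembles B^{-1}b, B^{-1}N and Z_N from a list of basic column indices of [A I]; the rows of Z_N then encode the halfspaces I_j^D.

import numpy as np

def dictionary(A, b, P, basic):
    # Build the dictionary data for basis 'basic' (0-based indices
    # into the columns of [A I]).
    m, n = A.shape
    W = np.hstack([A, np.eye(m)])                       # [A I]
    Pfull = np.vstack([P, np.zeros((m, P.shape[1]))])   # zero rows for slacks
    nonbasic = [j for j in range(n + m) if j not in basic]
    B, N = W[:, basic], W[:, nonbasic]
    Binv_b = np.linalg.solve(B, b)                      # basic solution values
    Binv_N = np.linalg.solve(B, N)
    ZN = Binv_N.T @ Pfull[basic, :] - Pfull[nonbasic, :]
    return nonbasic, Binv_b, Binv_N, ZN

The dictionary is primal feasible if Binv_b >= 0, and row j of ZN gives the inequality w(λ)^T Z_N^T e_j ≥ 0 of I_j^D.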

Proposition 3.4 already shows that a basic solution corresponding to a dictionary D with Λ^D ∩ Λ ≠ ∅ yields a (weak) maximizer of (P). Throughout the algorithm we will move from dictionary to dictionary and collect their basic solutions into a set X̄. We will later show that this set will be part of the solution (X̄, X̄^h) of (P). The algorithm will yield a partition of the parameter set Λ into optimality regions of dictionaries and regions where (Pλ) is unbounded. The next subsections explain how to move from one dictionary to another and how to detect and deal with unbounded problems.

4.3 The set J^D of entering variables

We call (I_j^D)_{j∈J^D} a defining (non-redundant) collection of half-spaces of the optimality region Λ^D ∩ Λ if J^D ⊆ N satisfies

Λ^D ∩ Λ = ∩_{j∈J^D} I_j^D ∩ Λ   and   Λ^D ∩ Λ ⊊ ∩_{j∈J} I_j^D ∩ Λ, for any J ⊊ J^D.   (5)

For a dictionary D, any nonbasic variable x_j, j ∈ J^D, is a candidate entering variable. Let us call the set J^D an index set of entering variables for dictionary D.

For each dictionary throughout the algorithm, an index set of entering variables is found. This can be done, e.g., by the following two methods. Firstly, using the duality of polytopes, the problem of finding defining inequalities can be transformed into the problem of finding a convex hull of given points. Then, the algorithms developed for this matter, see for instance [3], can be employed. Secondly, in order to check whether j ∈ N corresponds to a defining or a redundant inequality, one can consider the following linear program in λ ∈ R^{q−1}:

maximize w(λ)^T Z_N^T e_j
subject to w(λ)^T Z_N^T e_j̄ ≥ 0, for all j̄ ∈ N\({j} ∪ J_redun),
w(λ)^T Y ≥ 0,

where J_redun is the index set of the redundant inequalities that have already been found and Y = [y1, . . . , yt] is the matrix whose columns are the generating vectors of the ordering cone C. The inequality corresponding to the nonbasic variable x_j is redundant if and only if an optimal solution to this problem yields w(λ)^T Z_N^T e_j ≤ 0. In this case, we add j to the set J_redun. Otherwise, it is a defining inequality for the region Λ^D ∩ Λ and we add j to the set J^D. The set J^D is obtained by solving this linear program successively for each untested inequality against the remaining ones.

Remark 4.2 For the numerical examples provided in Sect. 5, the second method is employed. Note that the number of variables of each linear program is q − 1, which in general is much smaller than the number of variables n of the original problem. Therefore, each linear program can be solved accurately and fast. Thus, this is a reliable and sufficiently efficient method to find J^D.
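A possible scipy-based sketch of this second method (our formulation; is_defining and its arguments are names we introduce) is as follows. Here ZN has one row per nonbasic variable, others lists the indices of the remaining untested inequalities, and Y collects the generating vectors of C as columns.

import numpy as np
from scipy.optimize import linprog

def is_defining(j, ZN, others, Y, c_tilde, tol=1e-9):
    # Maximize w(lambda)^T ZN^T e_j over the region where the other
    # inequalities and w(lambda)^T Y >= 0 hold; w(lambda) = M lam + d.
    # Assumes this region is nonempty (Lambda^D intersected with Lambda).
    q = ZN.shape[1]
    M = np.vstack([np.eye(q - 1), -c_tilde])
    d = np.zeros(q); d[-1] = 1.0
    rows = [ZN[k] for k in others] + list(Y.T)
    A_ub = np.array([-(M.T @ z) for z in rows])        # w(lam)^T z >= 0
    b_ub = np.array([float(d @ z) for z in rows])
    res = linprog(-(M.T @ ZN[j]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (q - 1), method="highs")
    if res.status == 3:                # objective unbounded: defining
        return True
    return -res.fun + d @ ZN[j] > tol  # defining iff the maximum is > 0

The inequality j is redundant exactly when the reported maximum of w(λ)^T Z_N^T e_j is nonpositive, in which case j would be added to J_redun.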

Before applying one of these methods, one can also employ a modified Fourier–Motzkin elimination algorithm as described in [4] in order to decrease the number of redundant inequalities. Note that this algorithm has a worst-case complexity of O(2^{q−1}(q − 1)^2 n^2). Even though it does not guarantee to detect all of the redundant inequalities, it decreases their number significantly.

Note that different methods may yield a different collection of indices as the set J^D might not be uniquely defined. However, the proposed algorithm works with any choice of J^D.

4.4 Pivoting

In order to initialize the algorithm one needs to find a dictionary D0 for the parametrized problem (Pλ) such that the optimality region of D0 satisfies Λ^{D0} ∩ int Λ ≠ ∅. Note that the existence of D0 is guaranteed by Assumption 4.1 and by Proposition 3.4. There are different methods to find an initial dictionary and these will be discussed in Sect. 4.8. For now, assume that D0 is given. By Proposition 3.4, the basic solution x0 corresponding to D0 is a maximizer of (P). As part of the initialization, we find an index set of entering variables J^{D0} as defined by (5).

Throughout the algorithm, for each dictionary D with given basic variables B, optimality region Λ^D ∩ Λ, and index set of entering variables J^D, we select an entering variable x_j, j ∈ J^D, and pick, analogously to the standard simplex method, a leaving variable x_i satisfying

i ∈ argmin_{i∈B, (B^{-1}N)_{ij} > 0} (B^{-1}b)_i / (B^{-1}N)_{ij},   (6)

whenever there exists some i with (B^{-1}N)_{ij} > 0. Here, the indices i, j stand for the entries that correspond to the basic variable x_i and the nonbasic variable x_j, respectively. Note that this rule of picking leaving variables, together with the initialization of the algorithm with a primal feasible dictionary D0, guarantees that each dictionary throughout the algorithm is primal feasible.

If there exists a basic variable x_i with (B^{-1}N)_{ij} > 0 satisfying (6), we perform the pivot x_j ↔ x_i to form the dictionary D̄ with basic variables B̄ = (B ∪ {j})\{i} and nonbasic variables N̄ = (N ∪ {i})\{j}. For dictionary D̄, we have I_i^{D̄} = cl((I_j^D)^C) = {λ ∈ R^{q−1} | w(λ)^T Z_N^T e_j ≤ 0}. If dictionary D̄ is considered at some point in the algorithm, it is known that the pivot x_i ↔ x_j will yield the dictionary D considered above. Thus, we call (i, j) an explored pivot (or direction) for D̄. We denote the set of all explored pivots of dictionary D̄ by E^{D̄}.
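The minimum-ratio choice (6) takes only a few lines of code; the sketch below (ours) works on the quantities Binv_b = B^{-1}b and Binv_N = B^{-1}N from the dictionary sketch above and returns None when no entry of the column is positive, i.e., when no leaving variable exists.

import numpy as np

def leaving_variable(Binv_b, Binv_N, j, tol=1e-12):
    # Return the position (within the basis) of the leaving variable
    # for entering column j, or None if B^{-1} N e_j <= 0.
    col = Binv_N[:, j]
    rows = np.where(col > tol)[0]
    if rows.size == 0:
        return None                   # unbounded case, see Sect. 4.5
    return rows[np.argmin(Binv_b[rows] / col[rows])]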

4.5 Detecting unbounded problems and constructing the set X̄^h

Now, consider the case where there is no candidate leaving variable for an entering variable x_j, j ∈ J^D, of dictionary D, that is, B^{-1}N e_j ≤ 0. As one can not perform a pivot, it is not possible to go beyond the halfspace I_j^D. Indeed, the parametrized problem (Pλ) is unbounded for λ ∉ I_j^D. The following proposition shows that in that case, a direction of the recession cone of the lower image can be found from the current dictionary D, see Remark 3.3.

Proposition 4.3 Let D be a dictionary with basic and nonbasic variables B and N, Λ^D ∩ Λ be its optimality region satisfying Λ^D ∩ int Λ ≠ ∅, and J^D be an index set of entering variables. If for some j ∈ J^D, B^{-1}N e_j ≤ 0, then the direction x^h defined by setting x^h_B = −B^{-1}N e_j and x^h_N = e_j is a maximizer of (P) and P^T x^h = −Z_N^T e_j.

Proof Assume B^{-1}N e_j ≤ 0 for j ∈ J^D and define x^h by setting x^h_B = −B^{-1}N e_j and x^h_N = e_j. By definition, the direction x^h is a maximizer for (P) if and only if it is a (point) maximizer for the homogeneous problem (Ph), see Sect. 3. It holds

[A I] x^h = B x^h_B + N x^h_N = 0.

Moreover, x^h_N = e_j ≥ 0 and x^h_B = −B^{-1}N e_j ≥ 0 by assumption. Thus, x^h is primal feasible for problem (Ph) and also for problem (Ph1(w(λ))) for all λ ∈ Λ, that is, x^h ∈ X^h\{0}. Let λ ∈ Λ^D ∩ int Λ, which implies w(λ) ∈ ri W ⊆ int C^+. Note that by definition of the optimality region, it is true that w(λ)^T Z_N^T ≥ 0. Thus, x^h is also dual feasible for (Ph1(w(λ))) and it is an optimal solution for the parametrized homogeneous problem for λ ∈ Λ^D ∩ int Λ. By Proposition 3.4 (applied to (Ph) and (Ph1(w(λ)))), x^h is a maximizer of (Ph). The value of the objective function of (Ph) at x^h is given by

[P^T 0] x^h = P_B^T x^h_B + P_N^T x^h_N = −Z_N^T e_j.  □

Remark 4.4 If for an entering variable x_j, j ∈ J^D, of dictionary D there is no candidate leaving variable, we conclude that problem (P) is unbounded in the sense of Definition 3.2. Then, in addition to the set of point maximizers X̄, one also needs to find the set of (direction) maximizers X̄^h of (P), which by Proposition 4.3 can be obtained by collecting the directions x^h defined by x^h_B = −B^{-1}N e_j and x^h_N = e_j for every j ∈ J^D with B^{-1}N e_j ≤ 0, over all dictionaries visited throughout the algorithm.
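In code, assembling such a direction maximizer is immediate from the dictionary data; the sketch below (ours, matching the naming of the earlier sketches) returns both x^h, including its slack components, and its image P^T x^h = −Z_N^T e_j.

import numpy as np

def direction_maximizer(Binv_N, ZN, basic, nonbasic, j):
    # j is the position of the entering variable within 'nonbasic'.
    xh = np.zeros(len(basic) + len(nonbasic))
    xh[list(basic)] = -Binv_N[:, j]   # x^h_B = -B^{-1} N e_j
    xh[nonbasic[j]] = 1.0             # x^h_N = e_j
    return xh, -ZN[j]                 # (x^h, P^T x^h)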

For an index set J^D of entering variables of each dictionary D, we denote the set of indices of entering variables with no candidate leaving variable for dictionary D by J_b^D := {j ∈ J^D | B^{-1}N e_j ≤ 0}. In other words, J_b^D ⊆ J^D is such that for any j ∈ J_b^D, (Pλ) is unbounded for λ ∉ I_j^D.

4.6 Partition of Λ: putting it all together

We have seen in the last subsections that basic solutions of dictionaries visited by the algorithm yield (weak) point maximizers of (P) and partition Λ into optimality regions for bounded problems (Pλ), while encountering an entering variable with no leaving variable in a dictionary yields direction maximizers of (P) as well as regions of Λ corresponding to unbounded problems (Pλ). This will be the basic idea to construct the two sets X̄ and X̄^h and to obtain a partition of the parameter set Λ. In order to show that (X̄, X̄^h) produces a solution to (P), one still needs to ensure finiteness of the procedure, that the whole set Λ is covered, and that the basic solutions of dictionaries visited yield not only weak point maximizers of (P), but point maximizers.

Observe that whenever x_j, j ∈ J^D, is the entering variable for dictionary D with Λ^D ∩ Λ ≠ ∅ and there exists a leaving variable x_i, the optimality region Λ^{D̄} for dictionary D̄ after the pivot is guaranteed to be non-empty. Indeed, it is easy to show that

∅ ≠ Λ^{D̄} ∩ Λ^D ⊆ {λ ∈ R^{q−1} | w(λ)^T Z_N^T e_j = 0},

where N is the collection of nonbasic variables of dictionary D. Moreover, the basic solutions read from dictionaries D and D̄ are both optimal solutions to the parametrized problem (Pλ) for λ ∈ Λ^{D̄} ∩ Λ^D. Note that the common optimality region of the two dictionaries has no interior.

Remark 4.5 a. In some cases it is possible that Λ^{D̄} itself has no interior and is a subset of the neighboring optimality regions corresponding to some other dictionaries.

b. Even though it is possible to come across dictionaries with optimality regions having empty interior, for any dictionary D found during the algorithm Λ^D ∩ int Λ ≠ ∅ holds. This is guaranteed by starting with a dictionary D0 satisfying Λ^{D0} ∩ int Λ ≠ ∅ together with the rule of selecting the entering variables, see (5). More specifically, throughout the algorithm, whenever I_j^D corresponds to the boundary of Λ, it is guaranteed that j ∉ J^D. By this observation and by Proposition 3.4, it is clear that the basic solution corresponding to the dictionary D is not only a weak maximizer but a maximizer.

Let us denote the set of all primal feasible dictionaries D satisfying Λ^D ∩ int Λ ≠ ∅ by D. Note that D is a finite collection. Let the set of parameters λ ∈ Λ yielding bounded scalar problems (Pλ) be Λ^b. Then it can easily be shown that

Λ^b := {λ ∈ Λ | (Pλ) has an optimal solution} = ∪_{D∈D} (Λ^D ∩ Λ).   (7)

Note that not all dictionaries in D are required to be known in order to cover Λ^b. First, the dictionaries mentioned in Remark 4.5 a. do not provide a new region within Λ^b. One should keep in mind that the algorithm may still need to visit some of these dictionaries in order to go beyond the optimality region of the current one. Secondly, in case there are multiple possible leaving variables for the same entering variable, instead of performing all possible pivots, it is enough to pick one leaving variable and continue with this choice. Indeed, choosing different leaving variables leads to different partitions of the same subregion within Λ^b.

By this observation, it is clear that there is a subcollection of dictionaries D̄ ⊆ D which defines a partition of Λ^b in the following sense:

∪_{D∈D̄} (Λ^D ∩ Λ) = Λ^b.   (8)

If there is at least one dictionary D ∈ D̄ with J_b^D ≠ ∅, it is known by Remark 4.4 that (P) is unbounded. If further Λ^b is connected, one can show that

∩_{D∈D̄, j∈J_b^D} (I_j^D ∩ Λ) = Λ^b   (9)

holds. Indeed, connectedness of Λ^b is correct, see Remark 4.7 below.

4.7 The algorithm

The aim of the parametrized simplex algorithm is to visit a set of dictionaries D̄ satisfying (8).

In order to explain the algorithm we introduce the following definition.

Definition 4.6 D ∈ D is said to be a boundary dictionary if D and an index set of entering variables J^D are known. A boundary dictionary is said to be visited if the resulting dictionaries of all possible pivots from D are boundary and the index set J_b^D corresponding to J^D (see Remark 4.4) is known.

The motivation behind this definition is to treat the dictionaries as nodes and the possible pivots between dictionaries as the edges of a graph. Note that more than one dictionary may correspond to the same maximizer.

Remark 4.7 The graph described above is not necessarily connected. However, there exists a connected subgraph which includes at least one dictionary corresponding to each maximizer found by visiting the whole graph. The proof for the case C = R^q_+ is given in [28] and it can be generalized easily to any polyhedral ordering cone. Note that this implies that the set Λ^b is connected.

The idea behind the algorithm is to visit a sufficient subset of 'nodes' to cover the set Λ^b. This can be seen as a special online traveling salesman problem. Indeed, we employ the terminology used in [18]. The sets of all 'currently' boundary and visited dictionaries through the algorithm are denoted by BD and VS, respectively.

The algorithm starts with BD = {D0} and VS = ∅, where D0 is the initial dictionary with index set of entering variables J^{D0}. We initialize X̄^h as the empty set and X̄ as {x0}, where x0 is the basic solution corresponding to D0. Also, as there are no explored directions for D0, we set E^{D0} = ∅.

For a boundary dictionary D, we consider each j ∈ J^D and check the leaving variable corresponding to x_j. If there is no leaving variable, we add the direction x^h defined in Proposition 4.3 to the set X̄^h. Otherwise, the corresponding leaving variable x_i is found. If (j, i) ∉ E^D, we perform the pivot x_j ↔ x_i as it has not been explored before. We check if the resulting dictionary D̄ is marked as visited or boundary. If D̄ ∈ VS, there is no need to consider D̄ further. If D̄ ∈ BD, then (i, j) is added to the set of explored directions for D̄. In both cases, we continue by checking some other entering variable of D. If D̄ is neither visited nor boundary, then the corresponding basic solution x̄ is added to the set X̄, an index set of entering variables J^{D̄} is computed, (i, j) is added to the set of explored directions E^{D̄}, and D̄ itself is added to the set of boundary dictionaries. Whenever all j ∈ J^D have been considered, D becomes visited. Thus, D is deleted from the set BD and added to the set VS. The algorithm stops when there are no more boundary dictionaries.

Algorithm 1 Parametric Simplex Algorithm for LVOP

1: Find D0 and an index set of entering variables J^{D0};
2: Initialize BD = {D0}, X̄ = {x0}; VS, X̄^h, E^{D0} = ∅;
3: while BD ≠ ∅ do
4:   Let D ∈ BD with nonbasic variables N and index set of entering variables J^D;
5:   for j ∈ J^D do
6:     Let x_j be the entering variable;
7:     if B^{-1}N e_j ≤ 0 then
8:       Let x^h be such that x^h_B = −B^{-1}N e_j and x^h_N = e_j;
9:       X̄^h ← X̄^h ∪ {x^h};
10:      P^T[X̄^h] ← P^T[X̄^h] ∪ {−Z_N^T e_j};
11:    else
12:      Pick i ∈ argmin_{i∈B, (B^{-1}N)_{ij}>0} (B^{-1}b)_i / (B^{-1}N)_{ij};
13:      if (j, i) ∉ E^D then
14:        Perform the pivot with entering variable x_j and leaving variable x_i;
15:        Call the new dictionary D̄, with nonbasic variables N̄ = (N ∪ {i})\{j};
16:        if D̄ ∉ VS then
17:          if D̄ ∈ BD then
18:            E^{D̄} ← E^{D̄} ∪ {(i, j)};
19:          else
20:            Let x̄ be the basic solution for D̄;
21:            X̄ ← X̄ ∪ {x̄};
22:            P^T[X̄] ← P^T[X̄] ∪ {P^T x̄};
23:            Compute an index set of entering variables J^{D̄} of D̄;
24:            Let E^{D̄} = {(i, j)};
25:            BD ← BD ∪ {D̄};
26:          end if
27:        end if
28:      end if
29:    end if
30:  end for
31:  VS ← VS ∪ {D}, BD ← BD\{D};
32: end while
33: return (X̄, X̄^h): a finite solution of (P); (P^T[X̄], P^T[X̄^h] ∪ {−y1, . . . , −yt}): V-representation of P.

Theorem 4.8 Algorithm 1 returns a solution (X̄, X̄^h) to (P).

Proof Algorithm 1 terminates in a finite number of iterations since the overall number of dictionaries is finite and there is no risk of cycling as the algorithm never performs 'already explored pivots', see line 13. X̄, X̄^h are finite sets of feasible points and directions, respectively, for (P), and they consist only of maximizers by Propositions 3.4 and 4.3 together with Remark 4.5 b. Hence, it is enough to show that (X̄, X̄^h) satisfies (2).

Observe that by construction, the set of all visited dictionaries D̄ := VS at termination satisfies (8). Indeed, there are finitely many dictionaries and Λ^b is a connected set, see Remark 4.7. It is guaranteed by (8) that for any w ∈ C^+ for which (P1(w)) is bounded, there exists an optimal solution x̄ ∈ X̄ of (P1(w)). Then, it is clear that (X̄, X̄^h) satisfies (2) as long as R := cone P^T[X̄^h] − C is the recession cone P∞ of the lower image.

If for all D ∈ D̄ the set J_b^D = ∅, then (P) is bounded, X̄^h = ∅, and trivially R = −C = P∞. For the general case, we show that −P∞^+ = −R^+, which implies P∞ = cone P^T[X̄^h] − C. Assume there is at least one dictionary D ∈ D̄ with J_b^D ≠ ∅. Then, by Remarks 4.4 and 4.7, (P) is unbounded, X̄^h ≠ ∅ and D̄ also satisfies (9). On the one hand, by the definition of I_j^D, we can write (9) as

Λ^b = ∩_{D∈VS, j∈J_b^D} {λ ∈ Λ | w(λ)^T Z_{N^D}^T e_j ≥ 0},   (10)

where N^D is the set of nonbasic variables corresponding to dictionary D. On the other hand, by construction and by Proposition 4.3, we have

R = cone ({−Z_{N^D}^T e_j | j ∈ J_b^D, D ∈ VS} ∪ {−y1, . . . , −yt}),

where {y1, . . . , yt} is the set of generating vectors of the ordering cone C. The dual cone can be written as

R^+ = ∩_{D∈VS, j∈J_b^D} {w ∈ R^q | w^T Z_{N^D}^T e_j ≤ 0} ∩ ∩_{i=1}^{t} {w ∈ R^q | w^T y_i ≤ 0}.   (11)

Now, let w ∈ −P∞^+. By Proposition 3.5, (P1(w)) has an optimal solution. As c^T w > 0, also (P1(w/(c^T w))) = (Pλ̄) has an optimal solution, where λ̄ := (1/(c^T w)) (w1, . . . , w_{q−1})^T and thus w(λ̄) = w/(c^T w). By the definition of Λ^b given by (7), λ̄ ∈ Λ^b. Then, by (10), λ̄ ∈ Λ and w(λ̄)^T Z_{N^D}^T e_j ≥ 0 for all j ∈ J_b^D, D ∈ VS. This holds if and only if w ∈ −R^+, by the definition of Λ and by (11). The other inclusion can be shown symmetrically. □

Remark 4.9 In general, simplex-type algorithms are known to work better if the problem is non-degenerate. If the problem is degenerate, Algorithm 1 may find redundant maximizers. The effects of degeneracy will be discussed in more detail in Sect. 7. For now, let us mention that it is possible to eliminate the redundancies by additional steps in Algorithm 1. There are two types of redundant maximizers.

a. Algorithm 1 may find multiple point (direction) maximizers that are mapped to the same point (direction) in the image space. In order to find a solution that is free of this type of redundant maximizer, one may change line 21 (line 9) of the algorithm such that the current maximizer x (x^h) is added to the set X̄ (X̄^h) only if its image is not already in the current set P^T[X̄] (P^T[X̄^h]).

b. Algorithm 1 may also find maximizers whose image is not a vertex of the lower image. One can eliminate these maximizers from the set X̄ by performing a vertex elimination at the end.

4.8 Initialization

There are different ways to initialize Algorithm 1. We provide two methods, both of which also determine whether the problem has no solution. Note that (P) has no solution if X = ∅ or if the lower image is equal to R^q, that is, if (P1(w)) is unbounded for all w ∈ int C^+. We assume X is nonempty. Moreover, for the purpose of this section, we assume without loss of generality that b ≥ 0. Indeed, if b ≱ 0, one can find a primal feasible dictionary by applying any 'Phase 1' algorithm that is available for the usual simplex method, see [32].

The first initialization method finds a weight vector w0 ∈ int C^+ such that (P1(w0)) has an optimal solution. Then the optimal dictionary for (P1(w0)) is used to construct the initial dictionary D0. There are different ways to choose the weight vector w0. The second method of initialization can be thought of as a Phase 1 algorithm. It finds an initial dictionary as long as one exists.

4.8.1 Finding w0 and constructing D0

The first way to initialize the algorithm requires finding some w0 ∈ int C^+ such that (P1(w0)) has an optimal solution. It is clear that if the problem is known to be bounded, then any w0 ∈ int C^+ works. However, in general this is a nontrivial task. In the following we give two different methods to find such a w0. The first method can also determine if the problem has no solution.

a. The first approach is to extend the idea presented in [15] to any solid polyhedral pointed ordering cone C. Accordingly, finding w0 involves solving the following linear program:

minimize b^T u   (P0)
subject to A^T u − Pw ≥ 0,
Y^T(w − c) ≥ 0,
u ≥ 0,

where c ∈ int C^+, and the columns of Y are the generating vectors of C. Under the assumption b ≥ 0, it is easy to show that (P) has a maximizer if and only if (P0) has an optimal solution. Note that (P0) is bounded. If (P0) is infeasible, then we conclude that the lower image has no vertex and (P) has no solution. In case it has an optimal solution (u*, w*), one can take w0 = w*/(c^T w*) ∈ int C^+ and solve (P1(w0)) optimally. For the randomly generated examples of Sect. 5 we have used this method.
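A scipy-based sketch of this step (our code; find_w0 and the argument layout are assumptions, with the LP variables stacked as (u, w)) could look as follows. Here c_plus ∈ int C^+ enters the constraint Y^T(w − c_plus) ≥ 0, and c ∈ int C is the fixed vector of Sect. 4.1 used for the normalization c^T w0 = 1.

import numpy as np
from scipy.optimize import linprog

def find_w0(A, b, P, Y, c_plus, c):
    # Solve (P0): min b^T u  s.t.  A^T u - P w >= 0,
    # Y^T (w - c_plus) >= 0, u >= 0, and return w0 = w*/(c^T w*).
    m, n = A.shape
    q = P.shape[1]
    t = Y.shape[1]
    A_ub = np.block([[-A.T, P],                   # -(A^T u - P w) <= 0
                     [np.zeros((t, m)), -Y.T]])   # -Y^T w <= -Y^T c_plus
    b_ub = np.concatenate([np.zeros(n), -Y.T @ c_plus])
    cost = np.concatenate([b, np.zeros(q)])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * m + [(None, None)] * q,
                  method="highs")
    if res.status != 0:
        return None       # (P0) infeasible: (P) has no solution
    w = res.x[m:]
    return w / (c @ w)

For Example 5.1 below, Remark 5.2 reports the solution w* = (1, 1, 1)^T of (P0), which gives w0 = (1/3, 1/3, 1/3)^T after this normalization.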

b. Using the idea provided in [27], it might be possible to initialize the algorithm without even solving a linear program. By the structure of a particular problem, one may start with a dictionary which is trivially optimal for some weight w0. In this case, one can start with this choice of w0 and get the initial dictionary D0 without solving an LP. An example is provided in Sect. 5, see Remark 5.3.

In order to initialize the algorithm, w0 can be used to construct the initial dictionary D0. Without loss of generality assume that c^T w0 = 1; indeed, one can always normalize since c ∈ int C implies c^T w0 > 0. Then, clearly w0 ∈ ri W. Let B0 and N0 be the sets of basic and nonbasic variables corresponding to the optimal dictionary D* of (P1(w0)). If one considers the dictionary D0 for (Pλ) with basic variables B0 and nonbasic variables N0, the objective function of D0 will be different from that of D* as it depends on the parameter λ. However, the matrices B0, N0, and hence the corresponding basic solution x0, are the same in both dictionaries. We consider D0 as the initial dictionary for the parametrized problem (Pλ). Note that B0 is a nonsingular matrix as it corresponds to dictionary D*. Moreover, since D* is an optimal dictionary for (P1(w0)), x0 is clearly primal feasible for (Pλ) for any λ ∈ R^{q−1}. Furthermore, the optimality region of D0 satisfies Λ^{D0} ∩ int Λ ≠ ∅ as λ0 := [w0_1, . . . , w0_{q−1}]^T ∈ Λ^{D0} ∩ int Λ. Thus, x0 is also dual feasible for (Pλ) for λ ∈ Λ^{D0}, and x0 is a maximizer of (P).

4.8.2 Perturbation method

The second method of initialization works similarly to the idea presented for Algorithm 1 itself.

Assuming that b ≥ 0, problem (Pλ) is perturbed by an additional parameter μ ∈ R as follows:

maximize (w(λ)^T P^T − μ 1^T) x   (P_{λ,μ})
subject to Ax ≤ b, x ≥ 0,

where 1 is the vector of ones. After introducing the slack variables, consider the dictionary with basic variables x_{n+1}, . . . , x_{n+m} and with nonbasic variables x_1, . . . , x_n. This dictionary is primal feasible as b ≥ 0. Moreover, it is dual feasible if Pw(λ) − μ1 ≤ 0. We introduce the optimality region of this dictionary as

M0 := {(λ, μ) ∈ Λ × R | Pw(λ) − μ1 ≤ 0}.

Note that M0 is not empty as μ can take sufficiently large values.

The aim of the perturbation method is to find an optimality region M such that

M ∩ ri (Λ × {0}) ≠ ∅,   (12)

where Λ × {0} := {(λ, 0) | λ ∈ Λ}. If the current dictionary satisfies (12), then it can be taken as an initial dictionary D0 for Algorithm 1 after deleting the parameter μ. Otherwise, the defining inequalities of the optimality region are found. Clearly, they correspond to the entering variables of the current dictionary. The search for an initial dictionary continues similarly to the original algorithm. Note that if there does not exist a leaving variable for an entering variable, (P_{λ,μ}) is found to be unbounded for some set of parameters. The algorithm continues until we obtain a dictionary for which the optimality region satisfies (12) or until we cover the parameter set Λ × R_+ by the optimality regions and by the regions that are known to yield unbounded problems. At termination, if there exists no dictionary that satisfies (12), then we conclude that there is no solution to problem (P). Otherwise, we initialize the algorithm with D0. See Example 5.1, Remark 5.4.

5 Illustrative examples

We provide some examples and numerical results in this section. The first example illustrates how the different methods of initialization and the algorithm work. The second example shows that Algorithm 1 can find a solution even though the lower image does not have any vertices.

Example 5.1 Consider the following problem:

maximize (x1, x2 − x3, x3)^T with respect to ≤_{R^3_+}
subject to x1 + x2 ≤ 5
x1 + 2x2 − x3 ≤ 9
x1, x2, x3 ≥ 0.

Let c = (1, 1, 1)^T ∈ int R^3_+. Clearly, we have Λ = {λ ∈ R^2 | λ1 + λ2 ≤ 1, λ_i ≥ 0, i = 1, 2}.

Let us illustrate the different initialization methods.

Remark 5.2 (Initializing by solving (P0), see Sect. 4.8.1 a.) A solution of (P0) is found with w* = (1, 1, 1)^T. Then, we take w0 = (1/3, 1/3, 1/3)^T as the initial weight vector. x0 = (5, 0, 0)^T is an optimal solution found for (P1(w0)). The indices of the basic variables of the corresponding optimal dictionary are B0 = {1, 5}. We form dictionary D0 of problem (Pλ) with basic variables B0:

ξ = 5λ1 − λ1 x4 − (λ1 − λ2) x2 − (λ1 + 2λ2 − 1) x3
x1 = 5 − x4 − x2
x5 = 4 + x4 − x2 + x3
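The dictionary D0 can be cross-checked numerically with the formulas of Sect. 4.2; the following sketch (ours) computes B^{-1}b and Z_N for the basis B0 = {x1, x5} and reproduces both the basic solution and the three halfspaces of Λ^{D0}.

import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, -1.0]])
b = np.array([5.0, 9.0])
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
W = np.hstack([A, np.eye(2)])                 # [A I]
Pf = np.vstack([P, np.zeros((2, 3))])         # zero rows for the slacks
basic, nonbasic = [0, 4], [1, 2, 3]           # x1, x5 | x2, x3, x4
B, N = W[:, basic], W[:, nonbasic]
print(np.linalg.solve(B, b))                  # [5. 4.]: x1 = 5, x5 = 4
ZN = np.linalg.solve(B, N).T @ Pf[basic] - Pf[nonbasic]
print(ZN)
# With w = (l1, l2, 1 - l1 - l2), the rows of ZN give the reduced
# costs l1 - l2 (for x2), l1 + 2 l2 - 1 (for x3) and l1 (for x4),
# i.e., exactly the halfspaces I_2, I_3, I_4 of D0 used below.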

Remark 5.3 (Initializing using the structure of the problem, see Sect. 4.8.1 b.) The structure of Example 5.1 allows us to initialize without solving a linear program. Consider w0 = (1, 0, 0)^T. As the objective of (P1(w0)) is to maximize x1 and the most restraining constraint is x1 + x2 ≤ 5 together with x_i ≥ 0, x = (5, 0, 0)^T is an optimal solution of (P1(w0)). The corresponding slack variables are x4 = 0 and x5 = 4. Note that this corresponds to the dictionary with basic variables {1, 5} and nonbasic variables {2, 3, 4}, which yields the same initial dictionary D0 as above. Note that one needs to be careful as w0 ∉ int C^+ but w0 ∈ bd C^+. In order to ensure that the corresponding initial solution is a maximizer and not only a weak maximizer, one needs to check the optimality region of the initial dictionary. If the optimality region has a nonempty intersection with int Λ, which is the case for D0, then the corresponding basic solution is a maximizer. In general, if one can find w ∈ int C^+ such that (P1(w)) has a trivial optimal solution, then this last step is clearly unnecessary.

Remark 5.4 (Initializing using the perturbation method, see Sect. 4.8.2.) The starting dictionary for (P_{λ,μ}) of the perturbation method is given as

ξ = −(μ − λ1) x1 − (μ − λ2) x2 − (μ + λ1 + 2λ2 − 1) x3
x4 = 5 − x1 − x2
x5 = 9 − x1 − 2x2 + x3

This dictionary is optimal for M0 = {(λ, μ) ∈ Λ × R | μ − λ1 ≥ 0, μ − λ2 ≥ 0, μ + λ1 + 2λ2 ≥ 1}. Clearly, M0 does not satisfy (12). The defining halfspaces for M0 correspond to the nonbasic variables x1, x2 and x3. If x1 enters, then the leaving variable is x4 and the next dictionary has the optimality region M1 = {(λ, μ) ∈ Λ × R | −μ + λ1 ≥ 0, λ1 − λ2 ≥ 0, μ + λ1 + 2λ2 ≥ 1}, which satisfies (12). Then, by deleting μ the initial dictionary is found to be D0 as above. Different choices of entering variables in the first iteration might yield different initial dictionaries.

Consider the initial dictionary D0. Clearly, I_4^{D0} = {λ ∈ R^2 | λ1 ≥ 0}, I_2^{D0} = {λ ∈ R^2 | λ1 − λ2 ≥ 0}, and I_3^{D0} = {λ ∈ R^2 | λ1 + 2λ2 ≥ 1}. The defining halfspaces for Λ^{D0} ∩ Λ correspond to the nonbasic variables x2, x3; thus we have J^{D0} = {2, 3}.

The iteration starts with the only boundary dictionary D0. If x2 is the entering variable, x5 is picked as the leaving variable. The next dictionary, D1, has basic variables B1 = {1, 2}, the basic solution x1 = (1, 4, 0)^T, and a parameter region Λ^{D1} = {λ ∈ R^2 | 2λ1 − λ2 ≥ 0, −λ1 + λ2 ≥ 0, 2λ1 + λ2 ≥ 1}. The halfspaces corresponding to the nonbasic variables x_j, j ∈ J^{D1} = {3, 4, 5}, are defining for the optimality region Λ^{D1} ∩ Λ. Moreover, E^{D1} = {(5, 2)} is the set of explored pivots for D1.

From dictionary D0, for entering variable x3 there is no leaving variable according to the minimum ratio rule (6). We conclude that problem (Pλ) is unbounded for λ ∈ R^2 such that λ1 + 2λ2 < 1. Note that

B^{-1}N = [ 1  1  0 ; −1  1 −1 ],

and the third column corresponds to the entering variable x3. Thus, x^h_{B0} = (x^h_1, x^h_5)^T = (0, 1)^T and x^h_{N0} = (x^h_4, x^h_2, x^h_3)^T = e3 = (0, 0, 1)^T. Thus, we add x^h = (x^h_1, x^h_2, x^h_3)^T = (0, 0, 1)^T to the set X̄^h, see Algorithm 1, line 9. Also, by Proposition 4.3, P^T x^h = (0, −1, 1)^T is an extreme direction of the lower image P. After the first iteration, we have VS = {D0} and BD = {D1}.

For the second iteration, we consider D1 ∈ BD. There are three possible pivots with entering variables x_j, j ∈ J^{D1} = {3, 4, 5}. For x5, x2 is found as the leaving variable. As (5, 2) ∈ E^{D1}, this pivot has already been explored and is not necessary. For x3, the leaving variable is found as x1. The resulting dictionary D2 has basic variables B2 = {2, 3}, basic solution x2 = (0, 5, 1)^T, optimality region Λ^{D2} ∩ Λ = {λ ∈ R^2_+ | 2λ1 + λ2 ≤ 1, 2λ1 + 3λ2 ≤ 2, −λ1 − 2λ2 ≤ −1}, and indices of entering variables J^{D2} = {1, 4, 5}. Moreover, we write E^{D2} = {(1, 3)}.

We continue the second iteration by checking the entering variable x4 from D1. The leaving variable is found as x1. This pivot yields a new dictionary D3 with basic variables B3 = {2, 4} and basic solution x3 = (0, 4.5, 0)^T. The indices of the entering variables are found as J^{D3} = {1, 3}. Also, we have E^{D3} = {(1, 4)}. At the end of the second iteration we have VS = {D0, D1} and BD = {D2, D3}.

Consider D2 ∈ BD for the third iteration. For x1, the leaving variable is x3 and we obtain dictionary D1, which is already visited. For x4, the leaving variable is x3. We obtain the boundary dictionary D3 and update its explored pivots as E^{D3} = {(1, 4), (3, 4)}. Finally, for the entering variable x5 there is no leaving variable, and one finds the same x^h that was already found in the first iteration. At the end of the third iteration, VS = {D0, D1, D2} and BD = {D3}.

For the next iteration, D3 is considered. The pivots for both entering variables yield already explored ones. At the end of this iteration there are no more boundary dictionaries and the algorithm terminates with VS = {D0, D1, D2, D3}. Figure 1 shows the optimality regions after the four iterations. The color blue indicates that the corresponding dictionary is visited, yellow stands for boundary dictionaries, and the gray region corresponds to the set of parameters for which problem (Pλ) is unbounded.

The solution to the problem is (X̄, X̄^h), where X̄ = {(5, 0, 0)^T, (1, 4, 0)^T, (0, 5, 1)^T, (0, 4.5, 0)^T} and X̄^h = {(0, 0, 1)^T}. The lower image can be seen in Fig. 2.

Fig. 2 Lower image P of Example 5.1

Fig. 3 Lower image P of Example 5.5

Example 5.5 Consider the following example:

maximize (x1 − x2, x3 − x4)^T with respect to ≤_{R^2_+}
subject to x1 − x2 + x3 − x4 ≤ 1
x1, x2, x3, x4 ≥ 0.

Let c = (1, 1)^T ∈ int R^2_+. Clearly, we have Λ = [0, 1] ⊆ R. Using the method described in Sect. 4.8.1 a. we find w0 = (1/2, 1/2)^T as the initial scalarization parameter. Then, x0 = (1, 0, 0, 0)^T is an optimal solution to (P1(w0)) and the index set of the basic variables of D0 is found as B0 = {1}. Algorithm 1 terminates after two iterations and yields X̄ = {(1, 0, 0, 0)^T, (0, 0, 1, 0)^T}, X̄^h = {(1, 0, 0, 1)^T, (0, 1, 1, 0)^T}. The lower image can be seen in Fig. 3. Note that as it is possible to generate the lower image with only one point maximizer, the second one is redundant, see Remark 4.9 b.

6 Comparison of different simplex algorithms for LVOP

As briefly mentioned in Sect. 1, there are different simplex algorithms to solve LVOPs. Among them, the Evans–Steuer algorithm [15] works very similarly to the algorithm provided here. It moves from one dictionary to another, where each dictionary gives a point maximizer. Moreover, it finds 'unbounded efficient edges', which correspond to the direction maximizers. Even though the two algorithms work in a similar way, they have some differences that affect their efficiency significantly. The main difference is that the Evans–Steuer algorithm finds the set of all maximizers, whereas Algorithm 1 finds only a subset of maximizers that generates a solution in the sense of Löhne [19] and allows to generate the set of all maximal elements of the image of the feasible set. In general, the Evans–Steuer algorithm visits more dictionaries than Algorithm 1, especially if the problem is degenerate.

First of all, in each iteration of Algorithm 1, for each entering variable x_j, only one leaving variable is picked among the set of all possible leaving variables, see line 12. Differently, the Evans–Steuer algorithm performs pivots x_j ↔ x_i for all possible leaving variables i ∈ argmin_{i∈B, (B^{-1}N)_{ij}>0} (B^{-1}b)_i / (B^{-1}N)_{ij}. If the problem is degenerate, this procedure leads the Evans–Steuer algorithm to visit many more dictionaries than Algorithm 1 does. In general, these additionally visited dictionaries yield maximizers that have already been found. In [1,2], it has been shown that using the lexicographic rule to choose the leaving variables would be sufficient to cover all the efficient basic solutions. For the numerical tests that we ran, see Sect. 7, we modified the Evans–Steuer algorithm such that it uses the lexicographic rule.

Another difference between the two simplex algorithms is at the step where the entering variables are selected. In Algorithm 1, the entering variables are the ones which correspond to the defining inequalities of the current optimality region. Different methods to find the entering variables are provided in Sect. 4.3. The method that is employed for the numerical tests of Sect. 7 involves solving sequential LPs with q − 1 variables and at most n + t inequality constraints, where t is the number of generating vectors of the ordering cone. Note that the number of constraints decreases in each LP as one solves them successively. For each dictionary, the total number of LPs to solve is at most n in each iteration. The Evans–Steuer algorithm finds a larger set of entering variables, namely 'efficient nonbasic variables', for each dictionary. In order to find this set, it solves n LPs with n + q + 1 variables, q equality and n + q + 1 nonnegativity constraints. More specifically, for each nonbasic variable j ∈ N it solves

maximize 1^T v
subject to Z_N^T y − δ Z_N^T e_j − v = 0,
y, δ, v ≥ 0,

where y ∈ R^n, δ ∈ R, v ∈ R^q. Only if this program has an optimal objective value of 0 is x_j an efficient nonbasic variable. This procedure is clearly costlier than the one employed in Algorithm 1. In [17], this idea is improved so that it is possible to complete the procedure by solving fewer LPs of the same structure. Further improvements are made in [9]. Moreover, in [1,2] a different method is applied in order to find the efficient nonbasic variables. Accordingly, one needs to solve n LPs with 2q variables, n equality and 2q nonnegativity constraints. Clearly, this method is more efficient than the former ones. Even so, performing pivots for all efficient nonbasic variables clearly yields visiting more redundant dictionaries than Algorithm 1 would visit. Some of these additionally visited dictionaries yield different maximizers that map into already found maximal elements in the objective space, see Example 6.1; while some of them yield non-vertex maximal elements in the objective space, see Example 6.2.
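For completeness, a sketch (our formulation, using scipy) of the efficiency test quoted above reads as follows; starting from the all-zero feasible point, the LP either has optimal value 0, in which case x_j is an efficient nonbasic variable, or is unbounded.

import numpy as np
from scipy.optimize import linprog

def is_efficient_nonbasic(ZN, j):
    # Variables z = (y in R^n, delta in R, v in R^q), all >= 0;
    # constraints ZN^T y - delta ZN^T e_j - v = 0; maximize 1^T v.
    n, q = ZN.shape
    A_eq = np.hstack([ZN.T, -ZN.T[:, [j]], -np.eye(q)])
    b_eq = np.zeros(q)
    cost = np.concatenate([np.zeros(n + 1), -np.ones(q)])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.status == 0 and abs(res.fun) < 1e-9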

Example 6.1 Consider the following simple example taken from [28], in which it has been used to illustrate the Evans–Steuer algorithm:

maximize (3x1 + x2, 3x1 − x2)^T with respect to ≤_{R^2_+}
subject to x1 + x2 ≤ 4
x1 − x2 ≤ 4
x3 ≤ 4
x1, x2, x3 ≥ 0.

If one uses Algorithm 1, the solution is provided right after the initialization. The initial set of basic variables can be found as B0 = {1, 5, 6}, and the basic solution corresponding to the initial dictionary is x0 = (4, 0, 0)^T. One can easily check that x0 is optimal for all λ ∈ Λ. Thus, Algorithm 1 stops and returns the single maximizer. On the other hand, it is shown in [28] that the Evans–Steuer algorithm terminates only after performing another pivot to obtain a new maximizer x1 = (4, 0, 4)^T. This is because, from the dictionary with basic variables B0 = {1, 5, 6}, it finds x3 as an efficient nonbasic variable and performs one more pivot with entering variable x3. Clearly the image of x1 is again the same vertex (12, 12)^T in the image space. Thus, in order to generate a solution in the sense of Definition 3.1, the last iteration is unnecessary.

Example 6.2 Consider the following example:

maximize (−x1 − x3, −x2 − 2x3)^T with respect to ≤_{R^2_+}
subject to −x1 − x2 − 3x3 ≤ −1
x1, x2, x3 ≥ 0.

First, we solve the example by Algorithm 1. Clearly, Λ = [0, 1] ⊆ R. We find an initial dictionary D0 with B0 = {1}, which yields the maximizer x0 = (1, 0, 0)^T. One can easily see that the index set of the defining inequalities of the optimality region can be chosen either as J^{D0} = {2} or J^{D0} = {3}. Note that Algorithm 1 picks one of them and continues with it. In this example we get J^{D0} = {2}, perform the pivot x2 ↔ x1 to get D1 with B1 = {2} and x1 = (0, 1, 0)^T. From D1, there are two choices of sets of entering variables and we set J^{D1} = {1}. As the pivot x1 ↔ x2 is already explored, the algorithm terminates with X̄ = {x0, x1} and X̄^h = ∅.

When one solves the same problem by the Evans–Steuer algorithm, from D0 both x2 and x3 are found as entering variables. When x3 enters from D0, one finds a new maximizer x2 = (0, 0, 1/3)^T. Note that this yields a non-vertex maximal element on the lower image, see Fig. 4.
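A quick numeric check (ours) confirms that the image of x2 is not a vertex: it is a strict convex combination of the images of x0 and x1.

import numpy as np

PT = np.array([[-1.0, 0.0, -1.0],    # first objective:  -x1 - x3
               [0.0, -1.0, -2.0]])   # second objective: -x2 - 2 x3
x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([0.0, 1.0, 0.0])
x2 = np.array([0.0, 0.0, 1/3])
print(PT @ x0, PT @ x1, PT @ x2)     # (-1,0), (0,-1), (-1/3,-2/3)
print(np.allclose(PT @ x2, (PT @ x0) / 3 + 2 * (PT @ x1) / 3))  # True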

Fig. 1 Optimality regions after the first four iterations of Example 5.1

Fig. 4 Lower image P of Example 6.2

Table 1 Run time statistics for randomly generated problems where q = 3. For the first row n = 20, m = 40; for the second row, n = 30, m = 30; for the last row n = 40, m = 20
