The finite criss-cross method for hyperbolic programming


Report 96-103

T. Illés, Á. Szirmai, T. Terlaky

Technische Universiteit Delft / Delft University of Technology

Faculteit der Technische Wiskunde en Informatica / Faculty of Technical Mathematics and Informatics

ISSN 0922-5641

Copyright © 1996 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm, or any other means without permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

Copies of these reports may be obtained from the bureau of the Faculty of Technical Mathematics and Informatics, Julianalaan 132, 2628 BL Delft, phone +31 15 2784568. A selection of these reports is available in PostScript form at the Faculty's anonymous ftp-site. They are located in the directory /pub/publications/tech-reports at ftp.twi.tudelft.nl


Reports of the Faculty of Technical Mathematics and Informatics Nr. 96-103, Delft, October 1996

Tibor Illés, Tamás Terlaky
Delft University of Technology, Faculty of Technical Mathematics and Informatics,
P.O. Box 5031, 2600 GA Delft, The Netherlands.
E-mail: [email protected], [email protected]

Ákos Szirmai
Eötvös Loránd University, Computer Science Department,
Múzeum krt. 6-8., H-1088 Budapest, Hungary.
E-mail: sz [email protected]

Tibor Illés is on leave from Eötvös Loránd University, Operations Research Department, Budapest, Hungary.


Abstract

In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm.

Key words: hyperbolic programming, pivoting, criss-cross method


1 Introduction

The hyperbolic (fractional linear) programming problem is a natural generalization of the linear programming problem. The linear constraints are kept, but the linear objective function is replaced by a quotient of two linear functions. Such fractional linear objective functions arise in economic models when the goal is to optimize profit/allocation type functions (see for instance [12]).

The objective function of the hyperbolic programming problem is neither linear nor convex; nevertheless, there are several efficient solution methods for this class of nonlinear programming problems. The existence of efficient algorithms is due to the following nice properties of the objective function. Martos proved in his early work [13] that the objective function is not only pseudoconvex but also pseudolinear, so any local minimum of it is also global. Another attractive property of the fractional linear problem was identified by Charnes and Cooper [4]. They showed that any hyperbolic programming problem is equivalent to a special linear programming problem (a form of this transformation is displayed at the end of this section). Thus it is not surprising that suitable adaptations of linear programming algorithms, like simplex methods [3, 4, 6] or Karmarkar's interior point algorithm [1, 2], solve the hyperbolic programming problem. A thorough survey on this topic, with nearly 1200 references, was written by Schaible [14].

The criss-cross method offers a new view of pivot methods for optimization problems. Its main features are: (1) it can be initialized with any, not necessarily feasible, basic solution; (2) it solves the problem in one phase, without introducing artificial variables; (3) in a finite number of steps it either solves the problem or shows that the optimization problem is infeasible or unbounded. The first criss-cross methods were designed for linear [15, 16] and oriented matroid linear programming [17, 18]. The generalization to quadratic programming [9], sufficient complementarity problems [7] and oriented matroid linear complementarity problems [5] followed shortly. Up till now, to the best of our knowledge, no criss-cross method has been designed for hyperbolic programming. The goal of this paper is to fill this gap by generalizing the criss-cross method to this important class of problems.

In Section 2, after the formulation of the primal and dual problems, we summarize some of the basic properties of the hyperbolic programming problem pair. We discuss the relations between specific sign structures of pivot tableaus and infeasibility or unboundedness of the problems. Then, in Section 3, after presenting the adaptation of the algorithm, its finiteness is proved under the usual weak assumptions. As we will see, all the basic characteristics of the criss-cross method are preserved. Finally, our hyperbolic criss-cross algorithm is illustrated on two simple examples chosen from Martos' book [13]. These examples show that different basic solutions are visited during the solution procedure compared with other known simplex-type algorithms.
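For reference, the Charnes-Cooper transformation [4] mentioned above can be sketched as follows; it is the same construction that reappears as the auxiliary problem (SP) in Section 3. Assuming $d^Tx > 0$ on the feasible set, substitute $u := x/(d^Tx)$ and $\tau := 1/(d^Tx)$:

\[
\min\ \frac{c^Tx}{d^Tx}\quad \text{s.t.}\ Ax = b,\ x \ge 0
\qquad\Longleftrightarrow\qquad
\min\ c^Tu\quad \text{s.t.}\ Au - \tau b = 0,\ d^Tu = 1,\ u \ge 0,\ \tau \ge 0,
\]

and from a solution of the linear program with $\tau > 0$ the fractional optimum is recovered as $x = u/\tau$.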

2 Basic properties of the hyperbolic (fractional linear) programming problem

The primal hyperbolic programming (PHP) problem in standard form is as follows:

\[
\left.
\begin{aligned}
\min\ & \frac{c^T x}{d^T x} \\
& Ax = b \\
& x \ge 0
\end{aligned}
\right\} \quad (PHP),
\]

where $c, d, x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, $A \in \mathbb{R}^{m \times n}$, $\mathrm{rank}(A) = m$ and $J = \{1, 2, \ldots, n\}$. The set $P := \{x \in \mathbb{R}^n : Ax = b,\ x \ge 0\}$ contains all feasible solutions of the primal problem.

Let us introduce the standard [2] positivity assumption, which is a necessary condition for a dual problem without duality gap. This assumption is used in proving the correctness and finiteness of several algorithms [2, 4].

Assumption 2.1 $P \subseteq \{x \in \mathbb{R}^n : d^T x > 0\}$.

If $P \subseteq \{x \in \mathbb{R}^n : d^T x < 0\}$, then using $-d$ and $-c$ in (PHP) the previous assumption holds. Thus the only case excluded by the assumption above is that $\exists x \in P : d^T x = 0$.

It is well known that the dual of (PHP) is a special linear programming problem:

\[
\left.
\begin{aligned}
\max\ & -y_0 \\
& A^T y + d y_0 \ge -c \\
& b^T y \le 0
\end{aligned}
\right\} \quad (DHP).
\]

As we are going to present a finite pivot algorithm for the hyperbolic programming problem, we need a suitable simplex, or in other words basic, tableau. This tableau differs from the known simplex tableau, so we give a formal definition of it below. Let us denote by $a_j$ the $j$th column of the matrix $A$, and by $t^{(i)T}$ the $i$th row of the matrix $T$.
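The form of (DHP) can be verified by linear programming duality applied to the Charnes-Cooper linearization displayed in Section 1: the LP dual of $\min\{c^Tu : Au - \tau b = 0,\ d^Tu = 1,\ u \ge 0,\ \tau \ge 0\}$ is

\[
\max\ y_0 \quad\text{s.t.}\quad A^T y + d y_0 \le c,\ \ b^T y \ge 0,
\]

and the substitution $(y, y_0) \to (-y, -y_0)$ turns this into exactly the form of (DHP) above.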

\[
\begin{array}{c|c|c}
-c_0 & \bar c^{\,T} & 0 \\\hline
\bar b & T & \begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} \\\hline
-d_0 & \bar d^{\,T} & 1
\end{array}
\]

Figure 1 (the auxiliary vector $p$ is attached to the row of $\bar c^T$, $q$ to the column of $\bar b$, $v_j$ to the $j$th column of $T$, and $u^{(i)T}$ to the $i$th row of $T$)

The following notations are used in Figure 1:

\[
\begin{aligned}
c_0 &:= c_B^T B^{-1} b, & d_0 &:= d_B^T B^{-1} b, \\
\bar b &:= B^{-1} b, & \bar c^T &:= c^T - c_B^T B^{-1} A, \qquad \bar d^T := d^T - d_B^T B^{-1} A, \\
q &:= \tfrac{1}{d_0}\,\bar b, & p &:= \bar c - \tfrac{c_0}{d_0}\,\bar d, \\
u^{(i)T} &:= t^{(i)T} + \tfrac{\bar b_i}{d_0}\,\bar d^T, & v_j &:= B^{-1} a_j + \tfrac{\bar d_j}{d_0}\, B^{-1} b, \\
T &:= B^{-1} A, &&
\end{aligned}
\]

where the matrix $B$ denotes a basis of the linear equation $Ax = b$. Let us denote the index set of the basic vectors by $J_B$, while $J_N = J \setminus J_B$ denotes the index set of the nonbasic vectors.

Definition 2.1 The basic variable $x_i$, $i \in J_B$, is called primal infeasible if $\bar b_i < 0$; the nonbasic variable $x_j$, $j \in J_N$, is called dual infeasible if $\bar c_j < (c_0/d_0)\,\bar d_j$.

The well-known [13] optimality, primal and dual infeasibility criteria for the hyperbolic programming pair (PHP) and (DHP) are summarized in Propositions 2.1-2.3.

Proposition 2.1 If $\bar b \ge 0$ and $p \ge 0$, then we have an optimal solution: the vector $x = (x_B, x_N) = (B^{-1}b, 0)$ is a primal optimal solution and $y_0 = -c_0/d_0$, $y = ???$ are dual optimal solutions.
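To make the notation concrete, here is a minimal NumPy sketch computing the Figure 1 data from $(A, b, c, d)$ and a basis index list; the function name and the dense matrix inverse are our own choices, not part of the paper.

```python
import numpy as np

def hyperbolic_tableau(A, b, c, d, JB):
    """Compute the Figure 1 quantities for the basis index list JB.

    Assumes A[:, JB] is nonsingular and d0 != 0 (cf. Assumption 2.2).
    """
    B_inv = np.linalg.inv(A[:, JB])
    T = B_inv @ A                 # T := B^{-1} A
    bbar = B_inv @ b              # \bar b := B^{-1} b
    cbar = c - c[JB] @ T          # \bar c^T := c^T - c_B^T B^{-1} A
    dbar = d - d[JB] @ T          # \bar d^T := d^T - d_B^T B^{-1} A
    c0 = c[JB] @ bbar             # c_0 := c_B^T B^{-1} b
    d0 = d[JB] @ bbar             # d_0 := d_B^T B^{-1} b
    p = cbar - (c0 / d0) * dbar   # p := \bar c - (c_0/d_0) \bar d
    q = bbar / d0                 # q := (1/d_0) \bar b
    return dict(c0=c0, d0=d0, bbar=bbar, cbar=cbar,
                dbar=dbar, p=p, q=q, T=T)
```

In terms of the values returned here, the auxiliary row $u^{(i)T} = t^{(i)T} + (\bar b_i/d_0)\bar d^T$ is `T[i] + (bbar[i] / d0) * dbar`, and the auxiliary column $v_j = B^{-1}a_j + (\bar d_j/d_0)\bar b$ is `T[:, j] + (dbar[j] / d0) * bbar`.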

Proposition 2.2 If there exists $r \in J_B$ such that $\bar b_r < 0$ and $u^{(r)} \ge 0$, then there is no primal feasible solution.

Proposition 2.3 If there exists $r \in J_N$ such that $p_r < 0$ and $v_r \le 0$, then there is no dual feasible solution.

Propositions 2.1-2.3 serve as the stopping criteria for our algorithm. If we were to generalize the criss-cross method [15, 16] only formally, we would obtain the following pivot rule.

Pivot rule P.

I. (i) If $\bar b \ge 0$ and $p \ge 0$, then the pivot tableau is optimal. STOP.
(ii) If (i) does not hold, then let
\[
k := \min\{\, i : \bar b_i < 0 \ \text{or}\ p_i < 0,\ i = 1, 2, \ldots, n \,\}.
\]

II. (i) If $p_k < 0$ and $v_k \le 0$, then there is no dual feasible solution. STOP.
(ii) If (i) does not hold, then let
\[
s := \min\{\, j \in J_B : v_{jk} > 0 \,\},
\]
make a pivot at position $(s, k)$ and go to I.

III. (i) If $\bar b_k < 0$ and $u^{(k)} \ge 0$, then there is no primal feasible solution. STOP.
(ii) If (i) does not hold, then let
\[
r := \min\{\, i \in J_N : u_{ki} < 0 \,\},
\]
make a pivot at position $(k, r)$ and go to I.

The Pivot rule P. described above has to be modified, because $t_{sk}$ at step II.(ii) or $t_{kr}$ at step III.(ii) can be zero. This is an immediate consequence of the fact that we are using the artificially defined vectors $p, v_k, u^{(k)}$ for finding the leaving and entering variables. Instead of discussing the disadvantages of Pivot rule P., let us explain a useful property of it, which will be very important for the correctness and finiteness of our algorithm.

Proposition 2.4 Let us suppose that $d_0' \ne 0$, where $B'$ is the present basis. Choose the pivot position $(s, r)$ by using the vectors $\bar b, u^{(s)}$ (or $p, v_r$). If $t_{sr}' \ne 0$, then $d_0'' \ne 0$, where $d_0''$ denotes the new value of $d_0$.

Proof: Based on Pivot rule P., we have that $\bar b_s < 0$ and

\[
u_{sr} = t_{sr} + \bar b_s \frac{\bar d_r}{d_0'} < 0 .
\]

Let us suppose to the contrary that $d_0'' = 0$. Then

\[
-d_0'' = -d_0' - \bar d_r \frac{\bar b_s}{t_{sr}} = 0,
\qquad \text{therefore} \qquad
d_0' = -\bar d_r \frac{\bar b_s}{t_{sr}} .
\]

Using the assumptions $d_0' \ne 0$ and $t_{sr} \ne 0$, we obtain

\[
t_{sr} + \bar b_s \frac{\bar d_r}{d_0'} = t_{sr} - t_{sr} = 0,
\]

which is a contradiction. The proof goes analogously for the case $p_r < 0$ and $v_{sr} > 0$. □

If we apply Pivot rule P. starting from a basis with $d_0 \ne 0$, and there is a nonzero element at the pivot position, then by Proposition 2.4 the new tableau has a nonzero entry at the position of $d_0$ as well. We can repeat such pivots until $t_{sk} = 0$ (or $t_{kr} = 0$) occurs, when it is obviously impossible to pivot at position $(s, k)$ (or $(k, r)$, respectively). If we want a pivot rule which uses only the sign structure of the vectors $p, \bar b, v_j$ and $u^{(i)}$, then we should adjust Pivot rule P. to cover this pathological case, i.e., the case $t_{sk} = 0$ (or $t_{kr} = 0$). In such a case it seems reasonable to choose either the position of $-c_0$ or the position of $-d_0$ for the pivot. (We refer to such a pivot as an external pivot.) Considering the definition of the vectors $p, v_j$ and $u^{(i)}$, it is not surprising that the position of $-d_0$ is the only suitable position for an external pivot. Figure 2 shows the changes of the entries after an external pivot. Obviously, the sufficient condition for an external pivot is $d_0 \ne 0$, and this property of the tableau is preserved during the pivot sequence (Assumption 2.1 and Proposition 2.4).

\[
\begin{array}{c|ccc|c}
-c_0 & \cdots & \bar c_j & \cdots & 0 \\\hline
\vdots & & & & 0 \\
\bar b_i & \cdots & t_{ij} & \cdots & \vdots \\
\vdots & & & & 0 \\\hline
-d_0 & \cdots & \bar d_j & \cdots & 1
\end{array}
\quad \Longrightarrow \quad
\begin{array}{c|ccc|c}
0 & \cdots & \bar c_j - \frac{c_0}{d_0}\bar d_j & \cdots & -\frac{c_0}{d_0} \\\hline
\vdots & & & & \vdots \\
0 & \cdots & t_{ij} + \frac{\bar d_j}{d_0}\bar b_i & \cdots & \frac{\bar b_i}{d_0} \\\hline
1 & \cdots & -\frac{\bar d_j}{d_0} & \cdots & -\frac{1}{d_0}
\end{array}
\]

Figure 2
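As a numerical sanity check of Figure 2, an ordinary Gauss-Jordan pivot applied at the entry $-d_0$ of a tiny tableau (laid out as in Figure 1: the $\bar b$-column first, the unit column last) reproduces the updated entries; the concrete numbers are our own illustration.

```python
import numpy as np

def pivot(tab, i, j):
    """Gauss-Jordan pivot on entry (i, j) of a dense tableau."""
    tab = tab.astype(float).copy()
    tab[i] /= tab[i, j]
    for k in range(tab.shape[0]):
        if k != i:
            tab[k] -= tab[k, j] * tab[i]
    return tab

# Tiny instance (m = 1, n = 2): rows are [-c0 | cbar | 0],
# [bbar | T | 0] and [-d0 | dbar | 1], as in Figure 1.
c0, d0 = 3.0, 2.0
tab = np.array([[-c0, 1.0, -4.0, 0.0],
                [-1.0, 2.0,  0.0, 0.0],
                [-d0, 0.5,  1.0, 1.0]])
after = pivot(tab, 2, 0)       # external pivot on the -d0 entry
print(after)
# Row 0 becomes [0, cbar - (c0/d0)*dbar, -c0/d0] = [0, 0.25, -5.5, -1.5],
# row 1 becomes [0, t + (dbar/d0)*bbar, bbar/d0] = [0, 1.75, -0.5, -0.5],
# row 2 becomes [1, -dbar/d0, -1/d0]             = [1, -0.25, -0.5, -0.5],
# matching Figure 2.
```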

After an external pivot our pivot tableau has the following form (Figure 3):

\[
\begin{array}{c|c|c}
0 & p^T & -c_0/d_0 \\\hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & T = V = U & q \\\hline
1 & -\frac{1}{d_0}\bar d^T & -\frac{1}{d_0}
\end{array}
\]

Figure 3

Observe that the new vectors $p$ and $q$ occur in the first row and in the last column, respectively. This means that $\bar b$ is no longer a part of our pivot tableau. If we decide on the pivot position by using the same Pivot rule P., then the pivot position will still be the same position $(s, k)$ (or $(k, r)$). But now at that position we have either

\[
t_{sk} + \frac{\bar d_k}{d_0}\bar b_s = \frac{\bar d_k}{d_0}\bar b_s
\qquad \text{or} \qquad
t_{kr} + \frac{\bar d_r}{d_0}\bar b_k = \frac{\bar d_r}{d_0}\bar b_k .
\]

These values are nonzero, since the selection rule guarantees $v_{sk} > 0$ (respectively $u_{kr} < 0$), i.e., we can make a pivot at position $(s, k)$ (or $(k, r)$) after an external pivot.

Summarizing the above discussion: if the coefficient at the originally chosen position was zero, then after an external pivot it becomes nonzero, thus we can carry out the pivot there. Since this procedure involves two pivots, we speak of a double pivot. An advantage of double pivoting is that afterwards we can decide on the next pivot position by using the same kind of vectors ($p, v_j$ and $u^{(i)}$) without destroying the structure of our pivot tableau. Hence double pivoting has no effect on the sign structure used for pivot selection.

If we adjust Pivot rule P. by making a double pivot when it is necessary, one can easily verify the following lemma.

Lemma 2.1 Double pivoting may occur at most once. □

If our initial basic solution $x^0$ is such that $d_0^0 = d^T x^0 \ne 0$, then after a pivot at any position selected by Pivot rule P. the next basic solution $x'$ has the property $d_0' = d^T x' \ne 0$. The sufficient condition for being able to carry out an external pivot (or double pivot) is just the same, i.e., $d_0 \ne 0$. For this purpose let us make the following assumption.

Assumption 2.2 $\{x \in \mathbb{R}^n : Ax = b\} \cap \{x \in \mathbb{R}^n : d^T x = 0\} = \emptyset$.

Assumption 2.2 is similar to the usual assumptions given in hyperbolic programming [1, 10, 11]. It is easy to check and, in practice, it is not a strong condition.

3 The Criss-cross algorithm

In this section we present our criss-cross algorithm for the fractional linear programming problem. We prove its finiteness and illustrate the solution process by two simple examples. The positivity assumption is necessary to prove the correctness of the algorithm.

Algorithm 3.1

Initialization: Let us suppose that an initial basis $B_0$ is given and that Assumptions 2.1 and 2.2 hold.

Step 1: Compute the values $d_0 = d_B^T B^{-1} b$ and $c_0 = c_B^T B^{-1} b$ and the vectors

\[
p := \bar c - \frac{c_0}{d_0}\bar d \qquad \text{and} \qquad q := \frac{1}{d_0}\bar b .
\]

Let $I := \{i \in J_B : \bar b_i < 0\} \cup \{i \notin J_B : p_i < 0\}$.
If $I = \emptyset$, then one of the following two cases occurs.

(1) The tableau has the sign structure of Figure 4. (Here and below $\oplus$ denotes a nonnegative entry, $\ominus$ a nonpositive entry, $+$ and $-$ strictly positive and negative entries, and $*$ an entry about whose sign no information is available.)

\[
\begin{array}{c|ccc|c}
* & \oplus & \cdots & \oplus & 0 \\\hline
\oplus & & & & 0 \\
\vdots & & & & \vdots \\
\oplus & & & & 0 \\\hline
- & & & & 1
\end{array}
\]

Figure 4

Optimal solution. STOP.

(2) After an external pivot the tableau has the sign structure of Figure 5.

\[
\begin{array}{c|ccc|c}
0 & \oplus & \cdots & \oplus & * \\\hline
0 & & & & \oplus \\
\vdots & & & & \vdots \\
0 & & & & \oplus \\\hline
1 & & & & \ominus
\end{array}
\]

Figure 5

Optimal solution. STOP.

Otherwise let $r := \min\{i : i \in I\}$ and go to Step 2.

Step 2: If $r \in J_B$, then (dual iteration)
compute the vector $u^{(r)}$ and let $K := \{j \notin J_B : u_{rj} < 0\}$.
If $K = \emptyset$, then $P = \emptyset$; STOP.
Otherwise let $s := \min\{j : j \in K\}$.
If $t_{rs} = 0$, then make a double pivot at the position of $-d_0$ and at $(r, s)$;
otherwise pivot at position $(r, s)$.

Otherwise (primal iteration)
compute the vector $v_r$ and let $K := \{j \in J_B : v_{jr} > 0\}$.
If $K = \emptyset$, then $D = \emptyset$; STOP.
Otherwise let $s := \min\{j : j \in K\}$.
If $t_{sr} = 0$, then make a double pivot at the position of $-d_0$ and at $(s, r)$;
otherwise pivot at position $(s, r)$.

Go to Step 1. □

Before verifying the finiteness of the algorithm, let us analyze the stopping situations. The sign structure of Figure 4 corresponds to a primal feasible solution, and $-d_0 < 0$ means that Assumption 2.1 is (also) satisfied. The case $d_0 = 0$ is allowed by Assumption 2.1, but the pivot rule, as proved in Proposition 2.4, excludes it.
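Before examining the remaining sign structures (Figures 6 and 7), here is a compact sketch of Algorithm 3.1. It recomputes the Figure 1 quantities from the current basis in every round rather than updating a tableau, so the external pivot never has to be carried out explicitly; for simplicity it assumes that the zero-pivot situation handled by the double pivot does not occur and that $d_0 \ne 0$ throughout (Assumption 2.2). All names are ours.

```python
import numpy as np

def criss_cross_hp(A, b, c, d, JB, tol=1e-10, max_iter=10_000):
    """Sketch of Algorithm 3.1 for min c.x/d.x s.t. Ax = b, x >= 0.

    JB -- index list of an arbitrary (not necessarily feasible) basis.
    """
    n = A.shape[1]
    JB = list(JB)
    for _ in range(max_iter):
        Binv = np.linalg.inv(A[:, JB])
        T, bbar = Binv @ A, Binv @ b
        cbar, dbar = c - c[JB] @ T, d - d[JB] @ T
        c0, d0 = c[JB] @ bbar, d[JB] @ bbar
        p = cbar - (c0 / d0) * dbar
        # index set I of primal / dual infeasible variables (Step 1)
        I = sorted([i for k, i in enumerate(JB) if bbar[k] < -tol] +
                   [j for j in range(n) if j not in JB and p[j] < -tol])
        if not I:                              # Figures 4 / 5: optimal
            x = np.zeros(n)
            x[JB] = bbar
            return 'optimal', x, c0 / d0
        r = I[0]                               # minimal index rule
        if r in JB:                            # dual iteration: x_r leaves
            k = JB.index(r)
            u = T[k] + (bbar[k] / d0) * dbar   # u^(r)
            K = [j for j in range(n) if j not in JB and u[j] < -tol]
            if not K:
                return 'primal infeasible', None, None
            JB[k] = min(K)                     # minimal-index entering rule
        else:                                  # primal iteration: x_r enters
            v = T[:, r] + (dbar[r] / d0) * bbar
            K = [j for kk, j in enumerate(JB) if v[kk] > tol]
            if not K:
                return 'dual infeasible', None, None
            JB[JB.index(min(K))] = r           # minimal-index leaving rule
    raise RuntimeError('iteration limit exceeded')
```

Constant terms in the objective, as in Examples 3.1 and 3.2 below, can be accommodated by appending a variable that is fixed to the value 1 by an extra constraint.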

\[
\begin{array}{c|ccc|c}
* & \oplus & \cdots & \oplus & 0 \\\hline
\oplus & & & & 0 \\
\vdots & & & & \vdots \\
\oplus & & & & 0 \\\hline
+ & & & & 1
\end{array}
\]

Figure 6

The sign structure shown in Figure 6 is excluded by Assumption 2.1, because $-d_0 > 0$ cannot occur at a primal feasible solution.

Figure 5 deals with the case $-1/d_0 \le 0$. If $-1/d_0 < 0$, then $\bar b \ge 0$ because $q = \frac{1}{d_0}\bar b$, therefore we have a primal feasible solution (and Assumption 2.1 is satisfied). Primal feasibility together with $p \ge 0$ means that our solution is an optimal one [13]. If $-1/d_0 = 0$, then the following result holds.

Lemma 3.1 Let us suppose that at some point after a double pivot the sign structure given by the following tableau occurs (Figure 5, case $-1/d_0 = 0$):

\[
\begin{array}{c|ccc|c}
0 & \oplus & \cdots & \oplus & * \\\hline
0 & & & & \oplus \\
\vdots & & & & \vdots \\
0 & & & & \oplus \\\hline
1 & & & & 0
\end{array}
\]

Then problem (PHP) has a primal feasible solution and the objective function is bounded from below, but there is no optimal solution; the optimal value of the objective function can only be approached in the limit.

Proof: The obtained sign structure says that an optimal solution $(\bar u, \bar\tau) := (q, 0)$ of the problem

\[
\left.
\begin{aligned}
\min\ & c^T u \\
& Au - \tau b = 0 \\
& d^T u = 1 \\
& u \ge 0, \ \tau \ge 0
\end{aligned}
\right\} \quad (SP)
\]

has been found, where the optimal value is $\bar\varphi := c^T \bar u$ and $\bar\tau = 0$. Thus $\bar u$ solves

\[
Au = 0, \qquad d^T u = 1, \qquad u \ge 0 .
\]

In this case the feasible solution set of (PHP) is unbounded, and a direction $\bar u$ is found such that the sequence of solutions defined by

\[
x_k := x^0 + k\bar u \in P
\]

stays feasible, where $x^0 \in P$ and $d^T x^0 = 1$ (so $\varphi^0 = c^T x^0$). Further,

\[
\varphi_{0,k} := \frac{c^T x_k}{d^T x_k} = \frac{c^T x^0 + k\, c^T \bar u}{d^T x^0 + k\, d^T \bar u} = \frac{\varphi^0 + k \bar\varphi}{k + 1} .
\]

It is easy to verify that the values $\varphi_{0,k}$ form a monotonically decreasing sequence of real numbers. If we want to compute the objective function value with accuracy $\varepsilon > 0$, then it suffices to choose

\[
k(x^0, \varepsilon) := \frac{\varphi^0 - \bar\varphi}{\varepsilon} .
\]

It remains to show that in such a case there is no optimal solution. Let us assume to the contrary that there is an optimal solution $x^*$ with objective value $\varphi^*$. But for the sequence of solutions produced in the same way as before (just using $x^0 := x^*$), the objective function value is strictly monotonically decreasing. This contradicts the assumption that $x^*$ is an optimal solution. □
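The monotonicity claim and the choice of $k(x^0, \varepsilon)$ in the proof can be checked directly:

\[
\varphi_{0,k} - \bar\varphi
= \frac{\varphi^0 + k\bar\varphi}{k+1} - \bar\varphi
= \frac{\varphi^0 - \bar\varphi}{k+1},
\]

which, whenever $\varphi^0 > \bar\varphi$, is positive, strictly decreasing in $k$ and tends to $0$; it drops below $\varepsilon$ as soon as $k + 1 > (\varphi^0 - \bar\varphi)/\varepsilon$, which the choice $k(x^0, \varepsilon) = (\varphi^0 - \bar\varphi)/\varepsilon$ guarantees.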

The sign structure of Figure 7 is related to Step 1, case (2) (Figure 5), but it is excluded by Assumption 2.1:

\[
\begin{array}{c|ccc|c}
0 & \oplus & \cdots & \oplus & * \\\hline
0 & & & & * \\
\vdots & & & & \vdots \\
0 & & & & * \\\hline
1 & & & & +
\end{array}
\]

Figure 7

The finiteness of Algorithm 3.1 will be verified by using the pivot tableau described in Figure 3; the only difference is that instead of the vector $q$ we will use the vector $\bar b$. The proof is based on the orthogonality theorem ([8], Theorem 2.3). The double pivot has no influence on the indices of the entering and the leaving variables (Proposition 2.4 and Lemma 2.1), thus we do not need to distinguish between the cases when such a pivot occurs or not.

Our proof follows the main steps of Terlaky's original proof for the linear programming case [15, 16].

Theorem 3.1 The criss-cross method (Algorithm 3.1) for the hyperbolic programming problem is finite.

Proof: Let us assume to the contrary that the algorithm is not finite. The number of all possible bases is finite, thus at least one basis has to occur infinitely many times during the computation. This can happen only if the algorithm is cycling. Let us denote by $I^*$ the index set of those variables which enter the basis during a cycle. (These indices correspond to the variables which leave the basis during a cycle as well.) Let $l := \max_{i \in I^*} i$.

Let us examine when the variable $x_l$ enters/leaves the basis. We have four cases: the variable $x_l$

(a) enters and leaves the basis at primal iterations;
(b) enters the basis at a primal iteration, but leaves at a dual iteration;
(c) enters the basis at a dual iteration, but leaves at a primal iteration;
(d) enters and leaves the basis at dual iterations.

The sign structures of the four cases when the variable $x_l$ enters or leaves the basis are summarized in Figure 8. (The top row of each tableau contains the sign structure of the vector $p$, while the

right-hand side column contains the sign structure of the vector $\bar b$, instead of that of the vector $q$ of Figure 3.) The cases when the variable $x_l$ enters and leaves the basis at primal iterations are shown in tableaus (A) and (B), respectively. The cases when the variable $x_l$ enters and leaves the basis at dual iterations are shown in tableaus (C) and (D), respectively.

[Figure 8: the four sign-structure tableaus (A)-(D); only partially recoverable here. (A): $x_l$ enters at a primal iteration, so $p_l < 0$ and $p_i \ge 0$ for $i < l$. (B): $x_l$ leaves at a primal iteration while $x_s$ enters, so $v_{ls} > 0$. (C): $x_l$ enters at a dual iteration while $x_r$ leaves, so $\bar b_r < 0$ and $u_{rl} < 0$. (D): $x_l$ leaves at a dual iteration, so $\bar b_l < 0$ and $\bar b_i \ge 0$ for $i < l$.]

Figure 8

Let us analyze all possible cases. For simplicity, we use the sign vectors of the rows and (extended) columns as defined in the paper of Klafszky and Terlaky ([8], page 102). The hyperbolic programming problem can be treated as a natural extension of the concept of linear programming, but for verifying the finiteness of Algorithm 3.1, instead of the row space of the matrix

\[
\begin{pmatrix} A & -b \\ c & 0 \end{pmatrix}
\]

as in the case of linear programming, we have to use the row space of the following matrix:

\[
\begin{pmatrix} A & -b & 0 \\ c & 0 & 0 \\ d & 0 & 1 \end{pmatrix} .
\]

To show that the cases (a), (c) and (d) cannot occur is relatively easy, and the arguments are very similar to each other, while case (b) needs a little more attention. Let us start with the easier cases.

(a) The variable $x_l$ enters and leaves the basis at primal iterations. When the variable $x_l$ leaves the basis, the variable $x_s$ enters. Using tableaus (A) and (B) of Figure 8 and the orthogonality theorem ([8], Theorem 2.3), we know that the row vector $p$ of tableau (A) is orthogonal to the (extended) column vector corresponding to the nonbasic variable $x_s$ in tableau (B). The row belonging to the vector $p$ is $t(p) = t(c) - \frac{c_0}{d_0}\, t(d)$. (From this equation it immediately follows that at the positions of $c$ and $d$ we have $1$ and $-\frac{c_0}{d_0}$.) Then the vectors

\[
\begin{array}{l|cc|c|c|c}
 & & l & b & c & d \\\hline
t(p) & \oplus \cdots \oplus & \ominus & 0 & 1 & -c_0/d_0 \\
t_s & \cdots & + & 0 & \bar c_s & \bar d_s
\end{array}
\]

should be orthogonal, but $t_s^T t(p) < 0$ holds, taking into consideration that $p_s = \bar c_s - \frac{c_0}{d_0}\bar d_s < 0$.

(c) The variable $x_l$ enters the basis at a dual iteration while the variable $x_r$ leaves, and $x_l$ leaves the basis at a primal iteration. Using tableaus (C) and (B) of Figure 8, the orthogonality theorem gives that the $r$th row of tableau (C) is orthogonal to the $s$th (extended) column of tableau (B), i.e., the vectors

\[
\begin{array}{l|cc|c|c|c}
 & & l & b & c & d \\\hline
t(r) & \oplus \cdots \oplus & \ominus & \ominus & 0 & 0 \\
t_s & \cdots & + & 0 & * & *
\end{array}
\]

are orthogonal. From the sign structures of these vectors it obviously follows that $t_s^T t(r) < 0$, where $*$ means that we have no information about the sign of the corresponding elements. The coordinates of $t(r)$ at the positions related to the vectors $c$ and $d$ are zeros, because those are row ('basic') vectors.

(d) The variable $x_l$ enters the basis at a dual iteration while the variable $x_r$ leaves, and $x_l$ leaves the basis at a dual iteration, too. Using tableaus (C) and (D) of Figure 8, the orthogonality theorem gives that the $r$th row of tableau (C) is orthogonal to the (extended) column of the vector $b$ of tableau (D), i.e., the vectors

\[
\begin{array}{l|cc|c|c|c}
 & & l & b & c & d \\\hline
t(r) & \oplus \cdots \oplus & \ominus & \ominus & 0 & 0 \\
t_b & \oplus \cdots \oplus & \ominus & -1 & * & *
\end{array}
\]

are orthogonal. The values of $t(r)$ at the positions related to the vectors $c$ and $d$ are zeros because those are row vectors, while the elements of $t_b$ at the same positions are denoted by $*$ because in both cases no information about their values is available. Then $t_b^T t(r) > 0$: the products at the positions before $l$ are nonnegative, at position $l$ and at the position of $b$ they are strictly positive, and at the positions after $l$ one of the two entries is always zero, since those variables do not change their basic/nonbasic status during the cycle. This contradicts the orthogonality of these two vectors.

All the cases (a), (c) and (d) contradict the orthogonality theorem, thus they cannot occur. Finally, let us consider case (b).

(b) The variable $x_l$ enters the basis at a primal iteration and leaves at a dual one. The tableaus (A) and (D) of Figure 8 are used; the vectors of tableaus (A) and (D) are marked with $'$ and $''$, respectively. From both tableaus, the row vector of $p$ and the (extended) column vector of $b$ are produced. The column vectors of $b$ are normalized in such a way that at the position belonging to the vector $d$ there is $-1$. Then vectors with the following sign structures are obtained:

\[
\begin{array}{l|cc|c|c|c}
 & & l & b & c & d \\\hline
t(p)' & \oplus \cdots \oplus & \ominus & 0 & 1 & -c_0'/d_0' \\
t_b'' & \oplus \cdots \oplus & \ominus & -1/d_0'' & -c_0''/d_0'' & -1
\end{array}
\]

and

\[
\begin{array}{l|cc|c|c|c}
 & & l & b & c & d \\\hline
t(p)'' & \oplus \cdots \oplus & 0 & 0 & 1 & -c_0''/d_0'' \\
t_b' & \oplus \cdots \oplus & 0 & -1/d_0' & -c_0'/d_0' & -1
\end{array}
\]

From the orthogonality theorem we have $t_b''^T t(p)' = 0$ and $t_b'^T t(p)'' = 0$, therefore $0 = t_b''^T t(p)' + t_b'^T t(p)''$. But from the sign structures above, the sum of the two scalar products is a positive number: the contributions of the positions of $c$ and $d$, namely $\pm(c_0'/d_0' - c_0''/d_0'')$, cancel in the sum, the products at the positions before $l$ are nonnegative, the products at the positions after $l$ vanish as in case (d), and at position $l$ the product $(t_b'')_l\,(t(p)')_l$ is strictly positive. This leads to a contradiction and completes our proof. □

Finally, let us illustrate the performance of our algorithm on two small examples chosen from the book of Martos [13]. In this way we can immediately compare the sequence of bases produced by our algorithm to those produced in [13]. Martos used the following example ([13], page 170) to illustrate his own algorithm.

Example 3.1 Find the minimum of the function $\varphi$, where

\[
\varphi(x_1, x_2) = \frac{24 x_1 + 6}{5 x_1 + x_2 + 1},
\]

under the constraints

\[
-x_1 + x_2 \le 1, \qquad x_1 - x_2 \le 1, \qquad x_1, x_2 \ge 0 .
\]

The vertices of the feasible solution set are $x^1 = (1, 0)$, $x^2 = (0, 0)$ and $x^3 = (0, 1)$, with objective function values $\varphi_1 = 5$, $\varphi_2 = 6$ and $\varphi_3 = 3$, respectively. Using Martos' method, from the vertex $x^1$ there is no direct way to reach the vertex $x^3$, which corresponds to the optimal basic solution, because there is no other feasible basic solution with objective value smaller than $\varphi_1$ but bigger than $\varphi_3$. In such a case Martos applies a special step in his algorithm, called regularization (for more detail see [13], page 169). The regularization means that a new, special constraint is added to the set of constraints. In our example this constraint is

\[
x_1 + x_2 \le 2,
\]

which generates two more primal feasible basic solutions, $x^4 = (1/2, 3/2)$ and $x^5 = (3/2, 1/2)$. Starting from $x^1$, through $x^5$ and $x^4$, we can reach $x^3$ in three pivots by using Martos' algorithm.

Applying our criss-cross algorithm to the same problem, starting from the same feasible solution $x^1$, we make a double pivot immediately. After that double pivot we arrive at the vertex $x^2$, from which the optimal solution $x^3$ is obtained with a single pivot. Our algorithm used two steps but three pivots to solve the example. The sequence of basic solutions followed by our algorithm is different from that generated by Martos' method.

The second example is also from Martos' book ([13], page 177), where it is used to illustrate the steps of the hyperbolic simplex algorithm.

Example 3.2 Find the minimum of the function $\varphi$, where

\[
\varphi(x_1, x_2) = \frac{-6 x_1 - 5 x_2}{2 x_1 + 7},
\]

under the constraints

\[
x_1 + 2 x_2 \le 3, \qquad 3 x_1 + 2 x_2 \le 6, \qquad x_1, x_2 \ge 0 .
\]

For this problem there is a trivial starting feasible solution, $x_1 = x_2 = 0$. In this case the artificial (slack) variables $y_1$ and $y_2$ are in the basis. Using the hyperbolic simplex algorithm the following bases are obtained: $\{y_1, y_2\}$, $\{x_2, y_2\}$ and $\{x_2, x_1\}$. The last basis is optimal. All the others are primal feasible, and the objective value is monotonically decreasing along this sequence of solutions.
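As a quick check of the numerical data quoted for both examples (with the objective functions as stated above), exact rational arithmetic suffices; the short script below is our own illustration.

```python
from fractions import Fraction as F

phi1 = lambda x1, x2: (24*x1 + 6) / (5*x1 + x2 + 1)    # Example 3.1
phi2 = lambda x1, x2: (-6*x1 - 5*x2) / (2*x1 + 7)      # Example 3.2

# Example 3.1: x1, x2, x3 give 5, 6, 3; the regularization vertices
# x4, x5 give 18/5 and 14/3, so Martos' monotone path
# x1 -> x5 -> x4 -> x3 indeed decreases: 5 > 14/3 > 18/5 > 3.
for v in [(1, 0), (0, 0), (0, 1), (F(1, 2), F(3, 2)), (F(3, 2), F(1, 2))]:
    print(v, phi1(F(v[0]), F(v[1])))

# Example 3.2: the optimal basis {x2, x1} corresponds to the vertex
# (3/2, 3/4), where both constraints are tight; the value is -51/40.
print(phi2(F(3, 2), F(3, 4)))
```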

Applying our criss-cross algorithm to solve this problem, starting from the same basic solution, a different sequence of bases is obtained: $\{y_1, y_2\}$, $\{x_1, y_2\}$ and $\{x_1, x_2\}$. In our case the second basis is dual feasible but primal infeasible! The change of the objective value is not monotone any more.

With both methods Example 3.2 is solved by using only two pivots, but the obtained sequences of bases are different, and all the important characteristics of these algorithms are illustrated.

Acknowledgement. This research was partially supported by the Hungarian National Research Council (grants OTKA T 014302 and OTKA T 019492). The first version of this paper was finished in July 1995, while the first author was visiting DIKU, University of Copenhagen, sponsored by the Hungarian State Eötvös Fellowship. We kindly acknowledge all of this support.

References

[1] Anstreicher, K.M., Analysis of Karmarkar's algorithm for fractional linear programming, Technical Report, November 1985, Yale School of Management, Yale University, New Haven, CT 06520, USA.

[2] Anstreicher, K.M., A monotonic projective algorithm for fractional linear programming, Algorithmica, 1 (1986) No. 4, 483-498.

[3] Bitran, G.R., Novaes, A.G., Linear programming with a fractional objective function, Operations Research, 21 (1973) 22-29.

[4] Charnes, A., Cooper, W.W., Programming with linear fractional functionals, Naval Research Logistics Quarterly, 9 (1962) 181-186.

[5] Fukuda, K., Terlaky, T., Linear complementarity and oriented matroids, Journal of the Operational Research Society of Japan, 35 (1992) No. 1, 45-61.

[6] Gilmore, P.C., Gomory, R.E., A linear programming approach to the cutting stock problem, Part II, Operations Research, 11 (1963) 863-888.

[7] Hertog, D. den, Roos, C., Terlaky, T., The linear complementarity problem, sufficient matrices and the criss-cross method, Linear Algebra and its Applications, 187 (1993) 1-14.

[8] Klafszky, E., Terlaky, T., The role of pivoting in proving some fundamental theorems of linear algebra, Linear Algebra and its Applications, 151 (1991) 97-118.

[9] Klafszky, E., Terlaky, T., Some generalization of the criss-cross method for quadratic programming, Math. Oper. u. Statist. Ser. Optim., 24 (1992) No. 2, 127-139.

[10] Martos, B., Hiperbolikus programozás [Hyperbolic programming], Az MTA Matematikai Kutató Intézetének Közleményei, 5, Budapest (1960) 383-406.

[11] Martos, B., Hyperbolic programming, Naval Research Logistics Quarterly, 11 (1964) 135-155.

[12] Martos, B., Nem-lineáris programozási módszerek hatóköre [The scope of nonlinear programming methods], Az MTA Közgazdaságtudományi Intézetének Közleményei, 20, Budapest, 1966.

[13] Martos, B., Nonlinear Programming: Theory and Methods, Akadémiai Kiadó, Budapest, 1975.

[14] Schaible, S., Fractional programming, in: Pardalos, P.M., Horst, R. (eds.), Handbook of Global Optimization, Kluwer Academic Publishers, 1995.

[15] Terlaky, T., Egy új, véges criss-cross módszer lineáris programozási feladatok megoldására [A new finite criss-cross method for solving linear programming problems], Alkalmazott Matematikai Lapok, 10 (1984) 289-296.

[16] Terlaky, T., A convergent criss-cross method, Math. Oper. u. Statist. Ser. Optim., 16 (1985) No. 5, 683-690.

[17] Terlaky, T., A finite criss-cross method for oriented matroids, Journal of Combinatorial Theory, Ser. B, 42 (1987) 319-327.

[18] Wang, Zh., A conformal elimination free algorithm for oriented matroid programming, Chinese Annals of Mathematics, 8 (1987) Ser. B, No. 1.
