A new greedy approach for facility location problems

[Extended Abstract]

Kamal Jain, Microsoft Research, One Microsoft Way, Redmond, WA 98052. [email protected]

Mohammad Mahdian, Laboratory for Computer Science, M.I.T., Cambridge, MA 02139. [email protected]

Amin Saberi, College of Computing, Georgia Tech, Atlanta, GA 30332. [email protected]

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
STOC'02, May 19-21, 2002, Montreal, Quebec, Canada.
Copyright 2002 ACM 1-58113-495-9/02/0005 ...$5.00.

ABSTRACT

We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61. We use this algorithm to find better approximation algorithms for the capacitated facility location problem with soft capacities and for a common generalization of the k-median and facility location problems. We also prove a lower bound of 1 + 2/e on the approximability of the k-median problem. At the end, we present a discussion of the techniques we have used in the analysis of our algorithm, including a computer-aided method for proving bounds on the approximation factor.

1. INTRODUCTION

In the (uncapacitated) facility location problem, we have a set F of n_f facilities and a set C of n_c cities. For every facility i ∈ F, a nonnegative number f_i is given as the opening cost of facility i. Furthermore, for every facility i ∈ F and city j ∈ C, we have a connection cost (a.k.a. service cost) c_ij between facility i and city j. The objective is to open a subset of the facilities in F and connect each city to an open facility so that the total cost is minimized. We will consider the metric version of this problem, i.e., the connection costs satisfy the triangle inequality.

This problem has many applications in operations research [9, 21], and recently in network design problems such as placement of routers and caches [14, 22], agglomeration of traffic or data [1, 15], and web server replication in a content distribution network (CDN) [19, 26]. In the last decade this problem was studied extensively from the perspective of approximation algorithms [2, 4, 6, 7, 8, 13, 18, 20, 28, 30]. Different approaches such as LP rounding, the primal-dual method, local search, and combinations of these methods with cost scaling and greedy postprocessing have been used to solve the facility location problem and its variants.
At the time of submission of the present paper, the best known approximation algorithm for this problem was a 1.728-approximation algorithm due to Charikar and Guha [4]. This algorithm marginally improves an LP-rounding-based algorithm of Chudak and Shmoys [7, 8] using the ideas of cost scaling, greedy augmentations, and a primal-dual algorithm of Jain and Vazirani [18]. The drawback of LP-rounding-based algorithms is that they need to solve large linear programs and therefore have a high running time. Charikar and Guha [4] also present an O(n³) algorithm with approximation ratio 1.853. Mahdian et al. [23] show that a simple greedy algorithm (similar to the greedy set-cover algorithm of Hochbaum [16]) achieves an approximation ratio of 1.861 in O(n² log n) time. For the case of sparse graphs, Thorup [30] gives a faster (3 + o(1))-approximation algorithm. Regarding hardness results, Guha and Khuller [13] proved that it is impossible to get an approximation guarantee of 1.463 for the metric facility location problem, unless NP ⊆ DTIME[n^O(log log n)]. Shmoys [27] provides a survey of the problem.

In this paper, we present a simple and natural heuristic algorithm for the facility location problem achieving an approximation factor of 1.61 with running time O(n³). This algorithm is an improvement of the greedy algorithm of Mahdian et al. [23]. We use the method of dual fitting for the analysis of this algorithm. In this method, the algorithm computes a solution to the problem together with an infeasible dual solution of the same value. The approximation factor of the algorithm can be computed as the factor by which we need to shrink the dual solution to make it feasible. In order to compute this factor, we express the constraints imposed by the problem statement and our algorithm as linear inequalities. This allows us to bound the factor by solving a particular series of linear programs, which we call factor-revealing LPs.
A more detailed treatment of these techniques will appear in Jain et al. [17]. The technique of factor-revealing LPs is similar to the idea of LP bounds in coding theory: LP bounds give the best known bounds on the minimum distance of a code with a given rate by bounding the solution of a linear program (cf. McEliece et al. [25]). In the context of approximation algorithms, Goemans and Kleinberg [11] use a similar method in the analysis of their algorithm for the minimum latency problem.

The factor-revealing LP enables us to compute the approximation ratio of the algorithm empirically, and provides a straightforward way to prove a bound on the approximation ratio. In the case of our algorithm, this technique also enables us to compute the tradeoff between the approximation ratio of the facility costs and the approximation ratio of the connection costs. The algorithm, its analysis, and a discussion of this tradeoff are presented in Sections 2, 3, and 4, respectively.

Among all known algorithms for the facility location problem, the primal-dual algorithm of Jain and Vazirani [18] is perhaps the most versatile, in that it can be used to obtain algorithms for other variants of the problem. This versatility is partly due to a property of the algorithm which makes it possible to apply the Lagrangian relaxation technique. We call this property the Lagrangian multiplier preserving property. We will prove in Section 5 that our algorithm also has this property, with an approximation factor better than the primal-dual algorithm. This enables us to obtain algorithms for some variants of the facility location problem, such as the k-facility location problem and the capacitated facility location problem with soft capacities. In the k-facility location problem, an instance of the facility location problem and an integer k are given, and the objective is to find the cheapest solution that opens at most k facilities. This problem is a common generalization of the facility location and k-median problems. The k-median problem is studied extensively [2, 4, 5, 18], and the best known approximation algorithm for it, due to Arya et al. [2], achieves a factor of 3 + ε. The k-facility location problem has also been studied in operations research [9], and the best previously known approximation factor for this problem was 6 [18]. In this paper, we present a 4-approximation algorithm for this problem. We will also give a 3-approximation algorithm for a capacitated version of the facility location problem, in which we are allowed to open more than one facility at any location. We will refer to this problem as the capacitated facility location problem with soft capacities.
The best previously known approximation algorithm for this problem has a factor of 3.46, and is based on the facility location algorithm of Charikar and Guha [4] together with the observation that any α-approximation algorithm for the uncapacitated facility location problem yields a 2α-approximation algorithm for the capacitated facility location problem with soft capacities.

In Section 6, we will state some lower bound results. We prove that the k-median problem cannot be approximated within a factor strictly less than 1 + 2/e, unless NP ⊆ DTIME[n^O(log log n)]. This is an improvement over a lower bound of 1 + 1/e due to Guha [12]. This result shows that k-median is a strictly harder problem to approximate than the facility location problem. We will also see a lower bound on the best tradeoff we can hope to achieve between the approximation factors for the facility cost and the connection cost in the facility location problem.

In Section 7 we will see a general discussion of the method used to analyze the algorithms in this paper. The important feature of this technique is that the most difficult part of the analysis, which is proving a bound on the solution of the factor-revealing LP, can be done almost automatically using a computer. We will use the set cover problem as an example to illustrate the technique of using factor-revealing LPs.

Since the submission of the present paper, two new algorithms have been proposed for the facility location problem. The first algorithm, due to Sviridenko [29], uses the LP-rounding method to achieve an approximation factor of 1.58. The second algorithm, due to Mahdian, Ye, and Zhang

[24], combines our algorithm with the idea of cost scaling to achieve an approximation factor of 1.52.

2. THE ALGORITHM

The facility location problem can be captured by a commonly known integer program due to Balinski [3]. For the sake of convenience, we give another equivalent formulation of the problem. Let us say that a star consists of one facility and several cities. The cost of a star is the sum of the opening cost of the facility and the connection costs between the facility and all the cities in the star. Let S be the set of all stars. The facility location problem can then be thought of as picking a minimum-cost set of stars such that each city is in at least one star. This problem can be captured by the following integer program, in which x_S is an indicator variable denoting whether star S is picked, and c_S denotes the cost of star S.

    minimize   Σ_{S∈S} c_S·x_S                                  (1)
    subject to ∀j ∈ C:  Σ_{S: j∈S} x_S ≥ 1
               ∀S ∈ S:  x_S ∈ {0, 1}

The LP-relaxation of this program is:

    minimize   Σ_{S∈S} c_S·x_S                                  (2)
    subject to ∀j ∈ C:  Σ_{S: j∈S} x_S ≥ 1
               ∀S ∈ S:  x_S ≥ 0

The dual program is:

    maximize   Σ_{j∈C} α_j                                      (3)
    subject to ∀S ∈ S:  Σ_{j∈S∩C} α_j ≤ c_S
               ∀j ∈ C:  α_j ≥ 0

We can think of the variable α_j in the dual program as the share of city j toward the total expenses. Now, suppose we have an algorithm that finds a solution for the facility location problem of cost T, together with values α_j for j ∈ C such that Σ_{j∈C} α_j = T and, for every star S, Σ_{j∈S∩C} α_j ≤ γ·c_S, where γ ≥ 1 is a fixed number. Then the approximation ratio of the algorithm is at most γ: if, for every facility i that is opened in the optimal solution, with A the collection of cities connected to it, we write the inequality Σ_{j∈A} α_j ≤ γ(f_i + Σ_{j∈A} c_ij) and add up these inequalities, we obtain that the cost of our solution is at most γ times the cost of the optimal solution. Another way of looking at this is from the perspective of LP-duality: the inequality Σ_{j∈S∩C} α_j ≤ γ·c_S implies that if we shrink the α_j's by a factor of γ, we obtain a feasible dual solution.
The value of this feasible solution for the dual, which is Σ_{j∈C} α_j/γ = T/γ, is a lower bound on the cost of the optimum.

This method, which is called dual fitting, can be considered a primal-dual type method. The only difference is that

in primal-dual algorithms, we usually relax the complementary slackness conditions to obtain a solution for the primal and a solution for the dual so that the ratio of the values of the objective functions for these two solutions is bounded by the approximation factor of the algorithm. In the dual fitting scheme, however, we relax the inequalities in the dual program. The algorithm therefore finds a solution for the primal and an infeasible solution for the dual with the same value of the objective function. The amount by which the dual inequalities are relaxed (in other words, the amount by which we must shrink the dual solution so that it fits the dual) gives a bound on the approximation factor of the algorithm. This fact is the basis of our analysis. See Jain et al. [17] or Mahdian et al. [23] for a more detailed discussion of this technique.

Algorithm 1

1. We introduce a notion of time. The algorithm starts at time 0. At this time, all cities are unconnected, all facilities are unopened, and the budget of every city j, denoted by B_j, is initialized to 0. At every moment, each city j offers some money from its budget to each unopened facility i. The amount of this offer is computed as follows: if j is unconnected, the offer is equal to max(B_j − c_ij, 0) (i.e., if the budget of j is more than the cost that it has to pay to get connected to i, it offers to pay this extra budget to i); if j is already connected to some other facility i′, then its offer to facility i is equal to max(c_i′j − c_ij, 0) (i.e., the amount that j offers to pay to i is equal to the amount j would save by switching its facility from i′ to i).

2. While there is an unconnected city, increase the time, and simultaneously increase the budget of each unconnected city at the same rate (i.e., every unconnected city j has B_j = t at time t), until one of the following events occurs. If multiple events occur at the same time, process them in an arbitrary order.

   (a) For some unopened facility i, the total offer that it receives from cities is equal to the cost of opening i. In this case, we open facility i, and for every city j (connected or unconnected) which has a non-zero offer to i, we connect j to i. The amount that j had offered to i is now called the contribution of j toward i, and j is no longer allowed to decrease this contribution.

   (b) For some unconnected city j and some facility i that is already open, the budget of j is equal to the connection cost between j and i. In this case, we connect city j to facility i. The contribution of j toward i is zero.

3. For every city j, set α_j (the share of j of the total expenses) equal to the budget of j at the end of the algorithm. Notice that this value is also equal to the time at which j first gets connected.

At any time during the execution of this algorithm, the budget of each connected city is equal to its current connection cost plus its total contribution toward open facilities. Lemma 1 below follows immediately from this observation.
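Since every offer grows piecewise-linearly in t between events, Algorithm 1 can be simulated exactly with a discrete event loop. The sketch below is ours (the paper specifies the algorithm only in prose, so all function and variable names are our own convention): at each step it computes the earliest event of type (a) or (b) and processes it.

```python
def jms_facility_location(f, c):
    """Event-driven sketch of Algorithm 1.
    f[i]   : opening cost of facility i
    c[i][j]: connection cost between facility i and city j
    Returns (open facilities, city -> facility assignment, alpha values)."""
    nf, nc = len(f), len(c[0])
    opened = [False] * nf
    conn = [None] * nc        # facility each city is currently connected to
    alpha = [0.0] * nc        # time at which each city first connects
    t = 0.0

    def open_event_time(i):
        # Earliest time >= t at which the total offer to unopened facility i
        # reaches f[i]; the offer is piecewise linear and nondecreasing, so
        # sweep its breakpoints (the distances of unconnected cities).
        base = sum(max(c[conn[j]][j] - c[i][j], 0.0)
                   for j in range(nc) if conn[j] is not None)
        ds = sorted(c[i][j] for j in range(nc) if conn[j] is None)
        if base >= f[i]:
            return t
        lo, k, pre = t, 0, 0.0
        for idx, hi in enumerate(ds + [float('inf')]):
            if k > 0:
                t_star = (f[i] - base + pre) / k
                if lo <= t_star <= hi:
                    return max(t_star, t)
            lo = max(lo, hi)
            if idx < len(ds):
                pre += ds[idx]
                k += 1
        return float('inf')

    while any(conn[j] is None for j in range(nc)):
        events = [(open_event_time(i), 'open', i)
                  for i in range(nf) if not opened[i]]
        events += [(c[i][j], 'connect', (i, j))
                   for j in range(nc) if conn[j] is None
                   for i in range(nf) if opened[i]]
        t, kind, arg = min(events, key=lambda e: e[0])
        if kind == 'open':                 # event (a)
            opened[arg] = True
            for j in range(nc):
                gain = (t - c[arg][j] if conn[j] is None
                        else c[conn[j]][j] - c[arg][j])
                if gain > 0:               # non-zero offer: connect or switch j
                    if conn[j] is None:
                        alpha[j] = t
                    conn[j] = arg
        else:                              # event (b)
            i, j = arg
            conn[j] = i
            alpha[j] = t
    return [i for i in range(nf) if opened[i]], conn, alpha
```

By construction, the sum of the returned α values equals the facility cost of the opened facilities plus the total connection cost, which is exactly the statement of Lemma 1 below.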

Lemma 1. The total cost of the solution found by the above algorithm is equal to the sum of the α_j's.

The above algorithm is similar to the greedy algorithm of Mahdian et al. [23]. The only difference is that in [23], cities stop offering money to facilities as soon as they get connected to a facility, whereas in our algorithm they continue to offer some money (the amount that they could save by switching their facility) to other facilities. As a result, our algorithm finds a solution that cannot be improved just by opening new facilities, and therefore it cannot be improved by the greedy augmentation procedure of Charikar and Guha [4], whereas the solution found by the algorithm of Mahdian et al. [23] does not possess this property. As we will see in the next section, this change reduces the approximation factor of the algorithm from 1.86 to 1.61.

3. ANALYSIS OF THE ALGORITHM

In this section we compute the approximation ratio of Algorithm 1. By the comments before Algorithm 1, we know that in order to prove an approximation guarantee of γ, it is enough to show that for every star S, the sum of the α_j's of the cities in S is at most γ times the cost of S. In order to compute such a γ, in Section 3.1 we will define an optimization program (called the factor-revealing LP) whose solution gives the value of γ. In Section 3.2 we will use the factor-revealing LP to prove an upper bound of 1.61 on the approximation ratio of Algorithm 1. A discussion of this technique is presented in Section 7.

3.1 Deriving the factor-revealing LP

In this section, we express various constraints that are imposed by the problem or by the structure of the algorithm as inequalities, so that we can get a bound on the value of γ defined above by solving a series of linear programs.

Consider a star S consisting of a facility of opening cost f (with a slight abuse of notation, we call this facility f) and k cities numbered 1 through k. Let d_j denote the connection cost between facility f and city j, and let α_j denote the share of j of the expenses, as defined in Algorithm 1. We may assume without loss of generality that

    α_1 ≤ α_2 ≤ ... ≤ α_k.                                      (4)

We need more variables to capture the execution of Algorithm 1. For every i (1 ≤ i ≤ k), consider the situation of the algorithm at time t = α_i − ε, where ε is very small, i.e., just a moment before city i gets connected for the first time. At this time, each of the cities 1, 2, ..., i−1 might be connected to a facility. For every j < i, if city j is connected to some facility at time t, let r_{j,i} denote the connection cost between this facility and city j; otherwise, let r_{j,i} := α_j. The latter case occurs if and only if α_i = α_j. It turns out that these variables (f, the d_j's, the α_j's, and the r_{j,i}'s) are enough to write down inequalities that bound the ratio of the sum of the α_j's to the cost of S (i.e., f + Σ_{j=1}^{k} d_j).

First, notice that once a city gets connected to a facility, its budget remains constant and it cannot revoke its contribution to a facility, so it can never get connected to another facility with a higher connection cost. This implies that for every j,

    r_{j,j+1} ≥ r_{j,j+2} ≥ ... ≥ r_{j,k}.                      (5)

Now, consider time t = α_i − ε. At this time, the amount city j offers to facility f is equal to

    max(r_{j,i} − d_j, 0)  if j < i, and
    max(t − d_j, 0)        if j ≥ i.

Notice that by the definition of r_{j,i}, this holds even if j < i and α_i = α_j. It is clear from Algorithm 1 that the total offer of cities to a facility can never become larger than the opening cost of the facility. Therefore, for all i,

    Σ_{j=1}^{i−1} max(r_{j,i} − d_j, 0) + Σ_{j=i}^{k} max(α_i − d_j, 0) ≤ f.    (6)

The triangle inequality is another important constraint that we need to use. Consider cities i and j with j < i at time t = α_i − ε, and let f′ be the facility that j is connected to at time t. By the triangle inequality and the definition of r_{j,i}, the connection cost c_{f′i} between city i and facility f′ is at most r_{j,i} + d_i + d_j. Furthermore, c_{f′i} cannot be less than t: otherwise our algorithm could have connected city i to facility f′ at a time earlier than t, a contradiction. (We need to be careful with the special case α_i = α_j: there f′ need not be open at time t, but since r_{j,i} = α_j = α_i, the desired inequality holds trivially. If α_i ≠ α_j, the facility f′ is open at time t, and city i can get connected to it if it can pay the connection cost.) Therefore, for every 1 ≤ j < i ≤ k,

    α_i ≤ r_{j,i} + d_i + d_j.                                  (7)

The above inequalities yield the following optimization program, which we call the factor-revealing LP:

    maximize   (Σ_{i=1}^{k} α_i) / (f + Σ_{i=1}^{k} d_i)        (8)
    subject to ∀ 1 ≤ i < k:     α_i ≤ α_{i+1}
               ∀ 1 ≤ j < i < k: r_{j,i} ≥ r_{j,i+1}
               ∀ 1 ≤ j < i ≤ k: α_i ≤ r_{j,i} + d_i + d_j
               ∀ 1 ≤ i ≤ k:     Σ_{j=1}^{i−1} max(r_{j,i} − d_j, 0) + Σ_{j=i}^{k} max(α_i − d_j, 0) ≤ f
               ∀ 1 ≤ j ≤ i ≤ k: α_j, d_j, f, r_{j,i} ≥ 0

Notice that although the above optimization program is not written in the form of a linear program, it is easy to change it into a linear program by introducing new variables and inequalities.

Lemma 2. If z_k denotes the solution of the factor-revealing LP, then for every star S consisting of a facility and k cities, the sum of the α_j's of the cities in S in Algorithm 1 is at most z_k·c_S.

Proof. Inequalities 4, 5, 6, and 7 derived above imply that the values α_j, d_j, f, r_{j,i} that we get by running Algorithm 1 constitute a feasible solution of the factor-revealing LP. Thus, the value of the objective function for this solution is at most z_k.

Lemmas 1 and 2 imply the following.

    k     max_{i≤k} z_i
    10    1.54147
    20    1.57084
    50    1.58839
    100   1.59425
    200   1.59721
    300   1.59819
    400   1.59868
    500   1.59898

    Table 1: Solution of the factor-revealing LP

Lemma 3. Let z_k be the solution of the factor-revealing LP, and γ := sup_k {z_k}. Then Algorithm 1 solves the metric facility location problem with an approximation factor of γ.

3.2 Solving the factor-revealing LP

As mentioned earlier, the optimization program 8 can be written as a linear program. This enables us to use an LP solver to solve the factor-revealing LP for small values of k, in order to compute the numerical value of γ. Table 1 shows a summary of the results obtained by solving the factor-revealing LP using CPLEX. The experimental results suggest that z_k is an increasing sequence that converges to some number close to 1.6, and hence γ ≈ 1.6.

By solving the factor-revealing LP for any particular value of k, we get a lower bound on the value of γ. In order to prove an upper bound on γ, we need to present a general solution to the dual of the factor-revealing LP. Unfortunately, this is not an easy task in general. (For example, performing a tight asymptotic analysis of the LP bound is still an open question in coding theory.) However, here empirical results can help us: we can solve the dual of the factor-revealing LP for small values of k to get an idea of what the general optimal solution looks like. Using this, it is usually possible (although sometimes tedious) to prove a close-to-optimal upper bound on the value of z_k. We have used this technique to prove an upper bound of 1.61 on γ. The proof of this upper bound is presented in Appendix A. Also, we can use the optimal solution of the factor-revealing LP to construct an example on which our algorithm performs at least z_k times worse than the optimum. The proof of this fact is omitted here. These results imply the following.

Theorem 4. Algorithm 1 solves the facility location problem in time O(n³), where n = max(n_f, n_c).
Its approximation ratio is equal to the supremum of the solutions of the maximization program 8, which is less than 1.61 and more than 1.598.

4. THE TRADEOFF BETWEEN FACILITY AND CONNECTION COSTS

We defined the cost of a solution in the facility location problem as the sum of the facility cost (i.e., the total cost of opening facilities) and the connection cost. We proved in the previous section that Algorithm 1 achieves an overall performance guarantee of 1.61. However, sometimes it is useful to get different approximation guarantees for the facility and connection costs. The following theorem gives such a guarantee. The proof is similar to the proof of Lemma 3.

Theorem 5. Let γ_f ≥ 1 and γ_c := sup_k {z_k}, where z_k is the solution of the following optimization program.

[Figure 1: The tradeoff between γ_f and γ_c. The thick line plots the estimate of γ_c obtained from program 9 with k = 100 for γ_f between 1 and 3; the dashed line is the hardness lower bound from Section 6.]

    maximize   (Σ_{i=1}^{k} α_i − γ_f·f) / (Σ_{i=1}^{k} d_i)    (9)
    subject to ∀ 1 ≤ i < k:     α_i ≤ α_{i+1}
               ∀ 1 ≤ j < i < k: r_{j,i} ≥ r_{j,i+1}
               ∀ 1 ≤ j < i ≤ k: α_i ≤ r_{j,i} + d_i + d_j
               ∀ 1 ≤ i ≤ k:     Σ_{j=1}^{i−1} max(r_{j,i} − d_j, 0) + Σ_{j=i}^{k} max(α_i − d_j, 0) ≤ f
               ∀ 1 ≤ j ≤ i ≤ k: α_j, d_j, f, r_{j,i} ≥ 0

Then for every instance I of the facility location problem, and for every solution SOL for I with facility cost F_SOL and connection cost C_SOL, the cost of the solution found by Algorithm 1 is at most γ_f·F_SOL + γ_c·C_SOL.

We have computed the solution of the optimization program 9 for k = 100 and several values of γ_f between 1 and 3, to get an estimate of the corresponding γ_c's. The result is shown in the diagram in Figure 1. Every point (γ_f, γ_c) on the thick line in this diagram represents a value of γ_f and the corresponding estimate for the value of γ_c. The dashed line shows a lower bound that holds unless NP ⊆ DTIME[n^O(log log n)], and is stated in Section 6. Similar tradeoff problems are considered by Charikar and Guha [4]. However, an important advantage that we get here is that all the inequalities ALG ≤ γ_f·F_SOL + γ_c·C_SOL are satisfied by a single algorithm. In the next section, we will use the point γ_f = 1 of this tradeoff to design algorithms for other variants of the facility location problem. Other points of this tradeoff can also be useful in designing algorithms based on ours. For example, Mahdian, Ye, and Zhang [24] use the point γ_f = 1.1 of this tradeoff to obtain a 1.52-approximation algorithm for the metric facility location problem, which is currently the best known algorithm for this problem.

5. VARIANTS OF THE PROBLEM

The k-median problem differs from the facility location problem in two respects: there is no cost for opening facilities, and there is an upper bound k, supplied as part of the input, on the number of facilities that can be opened.
The k-facility location problem is a common generalization of the k-median and facility location problems. In this problem, we have an upper bound k on the number

of facilities that can be opened, as well as costs for opening facilities. Jain and Vazirani [18] reduced the k-median problem to the facility location problem in the following sense. Suppose A is an approximation algorithm for the facility location problem. Consider an instance I of the problem with optimum cost OPT, and let F and C be the facility and connection costs of the solution found by A. Algorithm A is called a Lagrangian Multiplier Preserving α-approximation (or LMP α-approximation for short) if for every instance I, C ≤ α(OPT − F). Jain and Vazirani [18] show that an LMP α-approximation algorithm for the metric facility location problem gives rise to a 2α-approximation algorithm for the metric k-median problem. They have noted that this result also holds for the k-facility location problem.

Lemma 6. [18] An LMP α-approximation algorithm for the facility location problem gives a 2α-approximation algorithm for the k-facility location problem.

In this section, we give an LMP 2-approximation algorithm for the metric facility location problem based on Algorithm 1. This results in a 4-approximation algorithm for the metric k-facility location problem, whereas the best previously known factor was 6 [18].

In the capacitated facility location problem, every facility has one more parameter, which indicates the capacity of the facility, i.e., the number of cities it can serve. We will refer to the version of this problem in which we are allowed to open each facility more than once as the capacitated facility location problem with soft capacities. Jain and Vazirani [18] show that their facility location algorithm gives rise to a 4-approximation algorithm for the metric capacitated facility location problem with soft capacities. One can easily generalize their result to the following lemma. This lemma, together with our LMP 2-approximation facility location algorithm, gives a 3-approximation algorithm for the metric capacitated facility location problem with soft capacities.

Lemma 7.
An LMP α-approximation algorithm for the metric uncapacitated facility location problem leads to an (α + 1)-approximation algorithm for the metric capacitated facility location problem with soft capacities.

Now we show that there is an LMP 2-approximation algorithm for the metric facility location problem. The proof is based on Theorem 5 together with the scaling technique of Charikar and Guha [4]. We prove the following lemma using this technique.

Lemma 8. Assume there is an algorithm A for the metric facility location problem such that for every instance I and every solution SOL for I, A finds a solution of cost at most F_SOL + α·C_SOL, where F_SOL and C_SOL are the facility and connection costs of SOL, and α is a fixed number. Then there is an LMP α-approximation algorithm for the metric facility location problem.

Proof. Consider the following algorithm: construct another instance I′ of the problem by multiplying the facility opening costs by α, run A on this modified instance I′, and output its answer. It is easy to see that this algorithm is an LMP α-approximation.

Now we only need to prove the following theorem. The proof follows the general scheme that is explained in Section 7.

Theorem 9. For every instance I and every solution SOL for I, Algorithm 1 finds a solution of cost at most F_SOL + 2·C_SOL, where F_SOL and C_SOL are the facility and connection costs of SOL.

Proof. By Theorem 5, we only need to prove that the solution of the factor-revealing LP 9 with γ_f = 1 is at most 2. We first write the maximization program 9 as the following equivalent linear program:

    maximize   Σ_{i=1}^{k} α_i − f                              (10)
    subject to Σ_{i=1}^{k} d_i = 1
               ∀ 1 ≤ i < k:     α_i − α_{i+1} ≤ 0
               ∀ 1 ≤ j < i < k: r_{j,i+1} − r_{j,i} ≤ 0
               ∀ 1 ≤ j < i ≤ k: α_i − r_{j,i} − d_i − d_j ≤ 0
               ∀ 1 ≤ j < i ≤ k: r_{j,i} − d_j − g_{i,j} ≤ 0
               ∀ 1 ≤ i ≤ j ≤ k: α_i − d_j − h_{i,j} ≤ 0
               ∀ 1 ≤ i ≤ k:     Σ_{j=1}^{i−1} g_{i,j} + Σ_{j=i}^{k} h_{i,j} − f ≤ 0
               ∀ i, j:          α_j, d_j, f, r_{j,i}, g_{i,j}, h_{i,j} ≥ 0

We need to prove an upper bound of 2 on the solution of the above LP. Since this program is a maximization program, it is enough to prove the upper bound for any relaxation of it. Numerical results (for a fixed value of k, say k = 100) suggest that removing the second, third, and seventh inequalities of the above program does not change its solution. Therefore, we can relax the program by removing these inequalities. Now it is a simple exercise to write down the dual of the relaxed linear program and compute its optimal solution. This solution corresponds to multiplying the third, fourth, fifth, and sixth inequalities of the linear program 10 by 1/k, and the first one by (2 − 1/k), and adding up these inequalities. This gives an upper bound of 2 − 1/k on the value of the objective function. Thus, for γ_f = 1, we have γ_c ≤ 2. In fact, γ_c is precisely equal to 2, as shown by the following solution for the program 9:

    α_i = 2 − 1/k  for i = 1;   α_i = 2  for 2 ≤ i ≤ k
    d_i = 1        for i = 1;   d_i = 0  for 2 ≤ i ≤ k
    r_{j,i} = 1    for j = 1;   r_{j,i} = 2  for 2 ≤ j ≤ k
    f = 2(k − 1)

This example shows that the above analysis of the factor-revealing LP is tight.

Lemma 8 and Theorem 9 provide an LMP 2-approximation algorithm for the metric facility location problem.
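As a sanity check on this proof, LP (10) can be assembled and solved numerically for a fixed k (the paper used CPLEX; the sketch below uses scipy's linprog instead, and the variable-index bookkeeping is our own). Its optimal value should come out to 2 − 1/k.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp10(k):
    """Solve linear program (10) (the gamma_f = 1 factor-revealing LP)
    for a given k; by Theorem 9 its optimum equals 2 - 1/k."""
    idx, n = {}, 0
    def var(name):                 # allocate one LP variable per key
        nonlocal n
        if name not in idx:
            idx[name] = n
            n += 1
        return idx[name]
    # Variables: alpha_i, d_i, f, r_{j,i} (j<i), g_{i,j} (j<i), h_{i,j} (i<=j).
    for i in range(k):
        var(('a', i)); var(('d', i))
    var('f')
    for i in range(k):
        for j in range(i):
            var(('r', j, i)); var(('g', i, j))
        for j in range(i, k):
            var(('h', i, j))

    rows, rhs = [], []
    def row(terms):                # one inequality: sum(coef * var) <= 0
        r = np.zeros(n)
        for name, coef in terms:
            r[var(name)] += coef
        rows.append(r); rhs.append(0.0)

    for i in range(k - 1):                      # alpha_i <= alpha_{i+1}
        row([(('a', i), 1), (('a', i + 1), -1)])
    for i in range(k - 1):                      # r_{j,i+1} <= r_{j,i}
        for j in range(i):
            row([(('r', j, i + 1), 1), (('r', j, i), -1)])
    for i in range(k):
        for j in range(i):                      # alpha_i <= r_{j,i} + d_i + d_j
            row([(('a', i), 1), (('r', j, i), -1),
                 (('d', i), -1), (('d', j), -1)])
            row([(('r', j, i), 1), (('d', j), -1), (('g', i, j), -1)])
        for j in range(i, k):                   # alpha_i - d_j <= h_{i,j}
            row([(('a', i), 1), (('d', j), -1), (('h', i, j), -1)])
        terms = [(('g', i, j), 1) for j in range(i)]
        terms += [(('h', i, j), 1) for j in range(i, k)]
        row(terms + [('f', -1)])                # total offers to f bounded by f

    A_eq = np.zeros((1, n))
    A_eq[0, [var(('d', i)) for i in range(k)]] = 1      # sum d_i = 1
    obj = np.zeros(n)                           # maximize sum(alpha) - f
    for i in range(k):
        obj[var(('a', i))] = -1
    obj[var('f')] = 1
    res = linprog(obj, A_ub=np.vstack(rows), b_ub=rhs,
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None), method='highs')
    return -res.fun
```

The tight solution displayed in the proof gives the lower bound 2 − 1/k, and the dual combination gives the matching upper bound, so the solver should report exactly that value for every k.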
This LMP 2-approximation result improves all of the results in Jain and Vazirani [18], and gives straightforward algorithms for some other problems considered by Charikar et al. [6].
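The scaling step in the proof of Lemma 8 is mechanical, and can be sketched as a wrapper around any facility location solver. Everything below is illustrative: the (f, c) → (open facilities, assignment) interface is our own convention, and `naive_solver` is a stand-in that simply opens the single cheapest facility overall, not the algorithm of this paper.

```python
def lmp_scale(solver, alpha_factor, f, c):
    """Lemma 8's scaling reduction: run `solver` on the instance whose
    facility costs are multiplied by alpha_factor, then evaluate the
    returned solution against the ORIGINAL costs.
    f[i]: opening cost of facility i; c[i][j]: connection cost."""
    scaled = [alpha_factor * fi for fi in f]
    opens, assign = solver(scaled, c)
    fac = sum(f[i] for i in opens)                       # original facility cost
    con = sum(c[assign[j]][j] for j in range(len(assign)))
    return opens, assign, fac, con

def naive_solver(f, c):
    # Toy stand-in: open the one facility minimizing opening cost
    # plus total connection cost, and assign every city to it.
    best = min(range(len(f)), key=lambda i: f[i] + sum(c[i]))
    return [best], [best] * len(c[0])
```

Per Lemma 8, if the wrapped `solver` guarantees cost at most F_SOL + α·C_SOL against every solution SOL, then the wrapper is an LMP α-approximation; the wrapper itself only performs the cost scaling and re-evaluation.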

6. LOWER BOUNDSIn this se tion we explore some impossibility results. Our�rst result is the following theorem, whi h together withFeige's result on the hardness of set- over [10℄ shows thatthere is no (1+ 2e��)-approximation algorithm for k-median,unless NP � DTIME[nO(log log n)℄. The proof is similar tothe one used by Guha and Khuller [13℄ to prove the hardnessof the metri fa ility lo ation problem, and is omitted in thisextended abstra t.Theorem 10. Metri k-median problem annot be approx-imated within a fa tor stri tly smaller than 1+ 2e unless min-imum set- over an be approximated within a fa tor of lnnfor < 1.This theorem improves a lower bound of 1+ 1e due to Guha[12℄. Noti e that the above theorem proves that k-medianis a stri tly harder problem to approximate than the fa ilitylo ation problem be ause the latter an be approximatedwithin a fa tor of 1.61.We also adapt the proof of Charikar and Guha [13℄ toshow the following lower bound on the tradeo� results. Thedashed line in Figure 1 shows the lower bound provided bythe following theorem.Theorem 11. Let f and be onstants with < 1 +2e� f . Assume there is an algorithm A su h that for everyinstan e I of the metri fa ility lo ation problem, A �nds asolution whose ost is not more than fFSOL + CSOL forevery solution SOL for I with fa ility and onne tion ostsFSOL and CSOL. Then minimum set- over an be approxi-mated within a fa tor of lnn for < 1.The above theorem shows that �nding an LMP (1 + 2e ��)-approximation for the metri fa ility lo ation problem ishard. Also, the integrality gap examples found by Guha [12℄show that Lemma 6 is tight. This shows that one annotuse Lemma 6 as a bla k box to obtain a smaller fa tor than2+ 4e for k-median problem. Note that 3+ � approximationis already known [2℄ for the problem. Hen e if one wants tobeat this fa tor using the Lagrangian relaxation te hniquethen it will be ne essary to look into the underlying LMPalgorithm as already been done by Charikar and Guha [4℄.7. 
7. THE FACTOR-REVEALING LP TECHNIQUE

In this section, we elaborate on the technique of using factor-revealing LPs, which we used to analyze the algorithms in this paper. We demonstrate this technique by applying it, in combination with dual fitting, to a classical greedy algorithm for the set cover problem. We also explain how we can use computers to predict and prove bounds on the solution to the factor-revealing LP. Similar methods are used in Mahdian et al. [23] and Goemans and Kleinberg [11].

A re-statement of the greedy algorithm for the set cover problem is as follows. All uncovered elements raise their dual variables until a new set S goes tight (i.e., its cost equals the sum of the values of the dual variables of its elements). At this point, the set S is picked. Newly covered elements pay for the cost of S with their dual values. In doing so, they withdraw their contributions offered towards the cost of any other set. This ensures that at the end of the algorithm the total contribution of the elements

is equal to the sum of the costs of the picked sets. However, we might not get a feasible dual solution. To make the dual solution feasible, we look for the smallest positive number Z such that when the dual solution is shrunk by a factor of Z, it becomes feasible. An upper bound on the approximation factor of the algorithm is obtained by maximizing Z over all possible instances. This technique is called dual fitting and is explained in detail in Mahdian et al. [23]. In this section, we focus on the factor-revealing LP technique, which is used to estimate the value of Z.

Clearly Z is also the maximum factor by which any set is over-tight. Consider any set S. We want to see what is the worst factor, over all sets and over all possible instances of the problem, by which a set S is over-tight. Let the elements in S be 1, 2, ..., k. Let x_i be the dual variable corresponding to the element i at the end of the algorithm. Without loss of generality we may assume that x_1 ≤ x_2 ≤ ... ≤ x_k. It is easy to see that just before time t = x_i, the total dual offered to S is at least (k − i + 1)x_i. Therefore, this value cannot be greater than the cost of the set S (denoted by c_S). So, the optimum solution of the following mathematical program gives an upper bound on the value of Z. (Note that c_S is a variable, not a constant.)

    maximize    (Σ_{i=1}^{k} x_i) / c_S                       (11)
    subject to  x_i ≤ x_{i+1}            for all 1 ≤ i < k
                (k − i + 1) x_i ≤ c_S    for all 1 ≤ i ≤ k
                x_i ≥ 0                  for all 1 ≤ i ≤ k
                c_S ≥ 1

The above optimization program can be turned into a linear program by adding the constraint c_S = 1 and changing the objective function to Σ_{i=1}^{k} x_i. We call this linear program the factor-revealing LP. Notice that the factor-revealing LP has nothing to do with the LP formulation of the set cover problem; it is only used in order to analyze this particular algorithm.
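Program (11) can be solved mechanically for any fixed k. The sketch below is our own illustration (the function name is ours, and it assumes SciPy is available): it normalizes c_S = 1 as described and solves the resulting LP. Its optimum is the harmonic number H_k, recovering the familiar ln n guarantee for greedy set cover.

```python
import numpy as np
from scipy.optimize import linprog

def set_cover_factor_revealing_lp(k):
    """Solve LP (11) with c_S = 1:
    maximize sum_i x_i subject to x_i <= x_{i+1} and
    (k - i + 1) x_i <= 1, x_i >= 0 (for 1-indexed i)."""
    c = -np.ones(k)                    # linprog minimizes, so negate
    A, b = [], []
    for i in range(k - 1):             # ordering: x_i - x_{i+1} <= 0
        row = np.zeros(k)
        row[i], row[i + 1] = 1.0, -1.0
        A.append(row)
        b.append(0.0)
    for i in range(k):                 # tightness: (k - i) x_i <= 1 (0-indexed i)
        row = np.zeros(k)
        row[i] = k - i
        A.append(row)
        b.append(1.0)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * k)
    return -res.fun
```

The optimal point is x_i = 1/(k − i + 1), whose objective value is H_k; e.g., k = 4 gives 25/12.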
This is the important distinction between the factor-revealing LP technique and other LP-based techniques in approximation algorithms.

Once we formulate the analysis of the algorithm as a factor-revealing LP, we can use computers to empirically compute the upper bound given by the factor-revealing LP on the approximation ratio of the algorithm. This is very useful, since if the empirical results suggest that the factor-revealing LP does not give us a good approximation ratio, we can try adding other inequalities to it. For this we might need to introduce new variables to capture the execution of the algorithm more accurately; e.g., we needed to introduce the variables r_{j,i} in Section 3.1 in order to get a good bound on the approximation ratio of the algorithm.

The next step is to analyze the factor-revealing LP and derive an upper bound on the value of its solution. For the set cover example above, this step is trivial, since the factor-revealing LP associated with the algorithm is quite simple. However, in general this can be the most difficult step of the proof (as it is in the case of our algorithm and the algorithm of Mahdian et al. [23]). Here we can use computers to get ideas about the proof, as explained below. Proving Theorem 4 would have been very difficult without

using these techniques.

Since the factor-revealing LP provides an upper bound on the approximation ratio of the algorithm, we can relax some of the constraints of this LP to make it simpler. After each relaxation, we can use computers to verify that the relaxation does not change the value of the objective function drastically. After simplifying the factor-revealing LP in this way, we can find an upper bound on its solution by finding a feasible solution for its dual for every k. Again, here we can use a computer to solve the dual linear program for a couple hundred values of k, to observe a trend in the values of the optimal dual solution. After guessing a sequence of dual solutions, one has to theoretically verify their feasibility. For complicated linear programs, it is usually a good idea to throw in a few parameters (like p_1 and p_2 in the proof of Theorem 4 in Appendix A), guess a general dual solution in terms of these parameters, and optimize over the choice of these parameters at the end.

Note that in general this technique does not guarantee the tightness of the analysis, because sometimes the algorithm performs well not because of local structures but for some global reasons. Still, in many cases one may get a tight example from a feasible solution of the factor-revealing LP. For example, from any feasible solution x of the factor-revealing LP 11, one can construct the following instance: there are k elements 1, ..., k, a set S = {1, ..., k} of cost 1 + ε which is the optimal solution, and sets S_i = {i} of cost x_i for i = 1, ..., k. It is easy to verify that our algorithm performs Σ_i x_i times worse than the optimum on this instance. This means that the approximation ratio of the set cover algorithm is precisely equal to the solution of the factor-revealing LP, which is H_n.
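The tight instance described above is easy to check in code. The following sketch is our own illustration (names are ours): it builds the instance from the extreme feasible point x_i = 1/(k − i + 1) and runs the classical greedy rule; greedy pays H_k while the optimal set S costs only 1 + ε.

```python
def greedy_cover_cost(costs_and_sets, universe):
    """Classical greedy set cover: repeatedly pick the set minimizing
    cost per newly covered element; return the total cost paid."""
    uncovered = set(universe)
    total = 0.0
    while uncovered:
        cost, s = min(
            ((c, s) for c, s in costs_and_sets if s & uncovered),
            key=lambda cs: cs[0] / len(cs[1] & uncovered),
        )
        total += cost
        uncovered -= s
    return total

k, eps = 8, 1e-9
universe = set(range(1, k + 1))
instance = [(1 + eps, frozenset(universe))]           # the optimal set S
instance += [(1.0 / (k - i + 1), frozenset({i}))      # singletons S_i of cost x_i
             for i in range(1, k + 1)]
```

On this instance greedy always prefers the next singleton (cost-effectiveness 1/(k − i + 1) beats that of S), so its total cost is Σ_i x_i = H_k against an optimum of 1 + ε.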
8. CONCLUDING REMARKS

A large fraction of the theory of approximation algorithms, as we know it today, is built around the theory of linear programming, which offers the two fundamental algorithm design techniques of rounding and the primal-dual schema (see Vazirani [31]). The technique of using dual fitting with the factor-revealing LP appears to be a third emerging technique.

The technique that we used in this paper seems to be a useful tool for analyzing greedy, heuristic, and local search algorithms. For many algorithms, the proof of the approximation ratio is mainly based on combining several inequalities (usually linear inequalities) to derive a bound on the approximation ratio. It might be possible to automate such proofs using a method similar to the one used in this paper. It would be interesting to find other examples that apply this method.

When analyzing a problem using this method, we usually encounter the problem of finding the limit of the solution of a sequence of linear programs. This seems to be a very difficult task in general. It would be nice to develop a general method for solving such problems. One possible idea is to consider the limit of the values of the variables in the optimal solution as a continuous function, and derive functional equations from the inequalities.

We have implemented our facility location algorithm and run it on several randomly generated test cases. The test cases were generated by considering the shortest-distance metric in a complete bipartite graph with random weights, or the Euclidean distance metric among a set of random points, as

the connection costs. In all instances, the solution given by our algorithm was at most a factor of 1.05 away from the lower bound obtained by solving the LP relaxation of the problem. This shows that in practice our algorithm works much better than the guaranteed approximation ratio. A theoretical explanation of this fact would be interesting.

Acknowledgment. Section 7 was developed over time, while the last two authors were also working on another paper [23] with Vijay V. Vazirani and Evangelos Markakis. We would like to thank Vijay Vazirani, Evangelos Markakis, Michel Goemans, Nicole Immorlica, and Milena Mihail for their helpful comments.

9. REFERENCES
[1] M. Andrews and L. Zhang. The access network design problem. In Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science, 1998.
[2] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit. Local search heuristics for k-median and facility location problems. In Proceedings of the 33rd ACM Symposium on Theory of Computing, 2001.
[3] M.L. Balinski. On finding integer solutions to linear programs. In Proc. IBM Scientific Computing Symposium on Combinatorial Problems, pages 225-248, 1966.
[4] M. Charikar and S. Guha. Improved combinatorial algorithms for facility location and k-median problems. In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, pages 378-388, October 1999.
[5] M. Charikar, S. Guha, E. Tardos, and D.B. Shmoys. A constant-factor approximation algorithm for the k-median problem. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing, pages 1-10, May 1999.
[6] M. Charikar, S. Khuller, D. Mount, and G. Narasimhan. Facility location with outliers. In 12th Annual ACM-SIAM Symposium on Discrete Algorithms, Washington DC, January 2001.
[7] F.A. Chudak. Improved approximation algorithms for uncapacitated facility location. In R.E. Bixby, E.A. Boyd, and R.Z.
Ríos-Mercado, editors, Integer Programming and Combinatorial Optimization, volume 1412 of Lecture Notes in Computer Science, pages 180-194. Springer, Berlin, 1998.
[8] F.A. Chudak and D. Shmoys. Improved approximation algorithms for the uncapacitated facility location problem. Unpublished manuscript, 1998.
[9] G. Cornuejols, G.L. Nemhauser, and L.A. Wolsey. The uncapacitated facility location problem. In P. Mirchandani and R. Francis, editors, Discrete Location Theory, pages 119-171. John Wiley and Sons Inc., 1990.
[10] U. Feige. A threshold of ln n for approximating set cover. J. ACM, 45:634-652, 1998.
[11] M. Goemans and J. Kleinberg. An improved approximation ratio for the minimum latency problem. Mathematical Programming, 82:111-124, 1998.
[12] S. Guha. Approximation algorithms for facility location problems. PhD thesis, Stanford University, 2000.

[13] S. Guha and S. Khuller. Greedy strikes back: improved facility location algorithms. Journal of Algorithms, 31:228-248, 1999.
[14] S. Guha, A. Meyerson, and K. Munagala. Hierarchical placement and network design problems. In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science, 2000.
[15] S. Guha, A. Meyerson, and K. Munagala. Improved combinatorial algorithms for single sink edge installation problems. Technical Report STAN-CS-TN00-96, Stanford University, 2000.
[16] D.S. Hochbaum. Heuristics for the fixed cost median problem. Mathematical Programming, 22(2):148-162, 1982.
[17] K. Jain, M. Mahdian, E. Markakis, A. Saberi, and V.V. Vazirani. Approximation algorithms for facility location via dual fitting with factor-revealing LP. Submitted to Combinatorica.
[18] K. Jain and V.V. Vazirani. Primal-dual approximation algorithms for metric facility location and k-median problems. In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, pages 2-13, October 1999.
[19] S. Jamin, C. Jin, Y. Jin, D. Raz, Y. Shavitt, and L. Zhang. On the placement of internet instrumentations. In Proceedings of IEEE INFOCOM'00, pages 26-30, March 2000.
[20] M.R. Korupolu, C.G. Plaxton, and R. Rajaraman. Analysis of a local search heuristic for facility location problems. In Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1-10, January 1998.
[21] A.A. Kuehn and M.J. Hamburger. A heuristic program for locating warehouses. Management Science, 9:643-666, 1963.
[22] B. Li, M. Golin, G. Italiano, X. Deng, and K. Sohraby. On the optimal placement of web proxies in the internet. In Proceedings of IEEE INFOCOM'99, pages 1282-1290, 1999.
[23] M. Mahdian, E. Markakis, A. Saberi, and V.V. Vazirani. A greedy facility location algorithm analyzed using dual fitting.
In Proceedings of the 5th International Workshop on Randomization and Approximation Techniques in Computer Science, volume 2129 of Lecture Notes in Computer Science, pages 127-137. Springer-Verlag, 2001.
[24] M. Mahdian, Y. Ye, and J. Zhang. Improved approximation algorithms for metric facility location problems. Unpublished manuscript, 2002.
[25] R.J. McEliece, E.R. Rodemich, H. Rumsey Jr., and L.R. Welch. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities. IEEE Transactions on Information Theory, 23:157-166, 1977.
[26] L. Qiu, V.N. Padmanabhan, and G. Voelker. On the placement of web server replicas. In Proceedings of IEEE INFOCOM'01, Anchorage, AK, USA, April 2001.
[27] D.B. Shmoys. Approximation algorithms for facility location problems. In K. Jansen and S. Khuller, editors, Approximation Algorithms for Combinatorial Optimization, volume 1913 of Lecture Notes in

Computer Science, pages 27-33. Springer, Berlin, 2000.
[28] D.B. Shmoys, E. Tardos, and K.I. Aardal. Approximation algorithms for facility location problems. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 265-274, 1997.
[29] M. Sviridenko. A 1.582-approximation algorithm for the metric uncapacitated facility location problem. To appear in the Ninth Conference on Integer Programming and Combinatorial Optimization, 2002.
[30] M. Thorup. Quick k-median, k-center, and facility location for sparse graphs. In Automata, Languages and Programming, 28th International Colloquium, Crete, Greece, volume 2076 of Lecture Notes in Computer Science, pages 249-260, 2001.
[31] V.V. Vazirani. Approximation Algorithms. Springer-Verlag, Berlin, 2001.

APPENDIX

A. UPPER BOUND ON THE SOLUTION OF THE FACTOR-REVEALING LP

In this appendix we prove an upper bound of 1.61 on the solution of the factor-revealing LP 8. This proof is obtained using the techniques explained in Section 7. We start by proving the following lemma, which allows us to concentrate on the case when k is sufficiently large.

Lemma 12. If z_k denotes the solution to the factor-revealing LP, then for every k, z_k ≤ z_{2k}.

Proof. Let (α_j, d_j, f, r_{j,i}) be the optimum solution of the factor-revealing LP for k. We construct a feasible solution (α'_j, d'_j, f', r'_{j,i}) for 2k by duplicating everything as follows: α'_{2j−1} = α'_{2j} = α_j, r'_{2j−1,2i−1} = r'_{2j−1,2i} = r'_{2j,2i−1} = r'_{2j,2i} = r_{j,i}, d'_{2j−1} = d'_{2j} = d_j, and f' = 2f. It is easy to see that this is a feasible solution for 2k with an objective value of z_k. Thus, z_{2k} ≥ z_k.

Lemma 13. Let z_k be the solution to the factor-revealing LP. Then for every sufficiently large k, z_k ≤ 1.61.

Proof. Consider a feasible solution of the factor-revealing LP. Let x_{j,i} := max(r_{j,i} − d_j, 0). The fourth inequality of the factor-revealing LP implies that for every i ≤ i',

    (i' − i + 1) α_i ≤ Σ_{j=i}^{i'} d_j + f − Σ_{j=1}^{i−1} x_{j,i}.    (12)

Now, we define l_i as follows:

    l_i = p_2 k   if i ≤ p_1 k,
    l_i = k       if i > p_1 k,

where p_1 and p_2 are two constants (with p_1 < p_2) that will be fixed later. Consider Inequality 12 for every i ≤ p_2 k and i' = l_i, and divide both sides of this inequality by (l_i − i + 1). By adding up these inequalities we obtain

    Σ_{i=1}^{p_2 k} α_i ≤ Σ_{i=1}^{p_2 k} Σ_{j=i}^{l_i} d_j/(l_i − i + 1) + (Σ_{i=1}^{p_2 k} 1/(l_i − i + 1)) f − Σ_{i=1}^{p_2 k} Σ_{j=1}^{i−1} x_{j,i}/(l_i − i + 1).    (13)

Now for every j ≤ p_2 k, let y_j := x_{j, p_2 k}. The second inequality of the factor-revealing LP implies that x_{j,i} ≥ y_j for every j < i ≤ p_2 k, and x_{j,i} ≤ y_j for every i > p_2 k. Also, let θ := Σ_{i=1}^{p_2 k} 1/(l_i − i + 1). Therefore, inequality 13 implies

    Σ_{i=1}^{p_2 k} α_i ≤ Σ_{i=1}^{p_2 k} Σ_{j=i}^{l_i} d_j/(l_i − i + 1) + θ f − Σ_{i=1}^{p_2 k} Σ_{j=1}^{i−1} y_j/(l_i − i + 1).    (14)

Consider the index ℓ ≤ p_2 k for which 2d_ℓ + y_ℓ is minimum (i.e., for every j ≤ p_2 k, 2d_ℓ + y_ℓ ≤ 2d_j + y_j). The third inequality of the factor-revealing LP implies that for i = p_2 k + 1, ..., k,

    α_i ≤ r_{ℓ,i} + d_i + d_ℓ ≤ x_{ℓ,i} + 2d_ℓ + d_i ≤ d_i + 2d_ℓ + y_ℓ.    (15)

By adding Inequality 15 for i = p_2 k + 1, ..., k with Inequality 14 we obtain

    Σ_{i=1}^{k} α_i ≤ Σ_{i=1}^{p_2 k} Σ_{j=i}^{l_i} d_j/(l_i − i + 1) + (2d_ℓ + y_ℓ)(1 − p_2)k + Σ_{j=p_2 k+1}^{k} d_j − Σ_{i=1}^{p_2 k} Σ_{j=1}^{i−1} y_j/(l_i − i + 1) + θ f

      = Σ_{j=1}^{p_2 k} θ d_j − Σ_{j=1}^{p_2 k} Σ_{i=j+1}^{p_2 k} (d_j + y_j)/(l_i − i + 1) + Σ_{j=p_2 k+1}^{k} (1 + Σ_{i=p_1 k+1}^{p_2 k} 1/(k − i + 1)) d_j + (2d_ℓ + y_ℓ)(1 − p_2)k + θ f

      ≤ Σ_{j=1}^{p_2 k} θ d_j + Σ_{j=p_2 k+1}^{k} (1 + Σ_{i=p_1 k+1}^{p_2 k} 1/(k − i + 1)) d_j + θ f + (2d_ℓ + y_ℓ) ((1 − p_2)k − (1/2) Σ_{j=1}^{p_2 k} Σ_{i=j+1}^{p_2 k} 1/(l_i − i + 1)),

where the last inequality is a consequence of the inequality 2d_ℓ + y_ℓ ≤ 2d_j + y_j ≤ 2d_j + 2y_j for j ≤ p_2 k. Now, let θ' := 1 + Σ_{i=p_1 k+1}^{p_2 k} 1/(k − i + 1) and δ := (1 − p_2) − (1/2k) Σ_{j=1}^{p_2 k} Σ_{i=j+1}^{p_2 k} 1/(l_i − i + 1). Therefore, the above inequality can be written as follows:

    Σ_{i=1}^{k} α_i ≤ Σ_{j=1}^{p_2 k} θ d_j + Σ_{j=p_2 k+1}^{k} θ' d_j + θ f + δ (2d_ℓ + y_ℓ) k,    (16)

where

    θ = Σ_{i=1}^{p_2 k} 1/(l_i − i + 1) = ln( p_2 (1 − p_1) / ((p_2 − p_1)(1 − p_2)) ) + o(1),    (17)

    θ' = 1 + Σ_{i=p_1 k+1}^{p_2 k} 1/(k − i + 1) = 1 + ln( (1 − p_1)/(1 − p_2) ) + o(1),    (18)

    δ = 1 − p_2 − (1/2k) Σ_{j=1}^{p_2 k} Σ_{i=j+1}^{p_2 k} 1/(l_i − i + 1)
      = (1/2) ( 2 − p_2 − p_2 ln( p_2/(p_2 − p_1) ) − ln( (1 − p_1)/(1 − p_2) ) ) + o(1).    (19)

Now if we choose p_1 and p_2 such that δ < 0, and let γ := max(θ, θ'), then inequality 16 implies that

    Σ_{i=1}^{k} α_i ≤ (γ + o(1)) (f + Σ_{j=1}^{k} d_j).

Using equations 17, 18, and 19, it is easy to see that, subject to the condition δ < 0, the value of γ is minimized when p_1 ≈ 0.439 and p_2 ≈ 0.695, which gives γ < 1.61.
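As a sanity check on the closed forms (17)-(19), one can evaluate θ, θ' and δ numerically, dropping the o(1) terms. The snippet below is our own check, not part of the proof: the substitution p_1 = p_2(1 − 1/e), which makes θ = θ' exactly, is a convenience we introduce here, and p_2 = 0.6954 is one concrete choice near the reported optimum at which δ < 0 and γ < 1.61 both hold.

```python
from math import log, e

def theta(p1, p2):        # eq. (17), without the o(1) term
    return log(p2 * (1 - p1) / ((p2 - p1) * (1 - p2)))

def theta_prime(p1, p2):  # eq. (18), without the o(1) term
    return 1 + log((1 - p1) / (1 - p2))

def delta(p1, p2):        # eq. (19), without the o(1) term
    return 0.5 * (2 - p2 - p2 * log(p2 / (p2 - p1))
                  - log((1 - p1) / (1 - p2)))

p2 = 0.6954
p1 = p2 * (1 - 1 / e)     # makes p2 / (p2 - p1) = e, hence theta == theta'
gamma = max(theta(p1, p2), theta_prime(p1, p2))
# here delta(p1, p2) is slightly negative and gamma is just below 1.61
```

With these values γ ≈ 1.6097, consistent with the stated bound of 1.61.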