(0, ±1) IDEAL MATRICES*

Paolo Nobili†    Antonio Sassano‡

February 16, 1996

* This work was partially supported by MURST, Roma, Italy.
† Istituto di Analisi dei Sistemi ed Informatica del CNR, Viale Manzoni 30, 00185 Roma, Italy.
‡ Università di Roma "La Sapienza", Dipartimento di Informatica e Sistemistica, via Buonarroti 12, 00185 Roma, Italy.

Abstract

A (0, 1) matrix A is said to be ideal if all the vertices of the polytope Q(A) = {x : Ax ≥ 1, 0 ≤ x ≤ 1} are integral. The issue of finding a satisfactory characterization of those matrices which are minimally non-ideal is a well-known open problem. An outstanding result toward the solution of this problem, due to Alfred Lehman, is the description of crucial properties of minimally non-ideal matrices. In this paper we consider the extension of the notion of ideality to (0, ±1) matrices. By means of a standard transformation, we associate with any (0, ±1) matrix A a suitable (0, 1) matrix D(A). Then we introduce the concept of disjoint completion A⁺ of a (0, ±1) matrix A and we show that A is ideal if and only if D(A⁺) is ideal. Moreover, we introduce a suitable concept of a minimally non-ideal (0, ±1) matrix and we prove a Lehman-type characterization of minimally non-ideal (0, ±1) matrices.


1 Introduction

Let A be a (0, ±1) matrix whose sets of columns and rows are, respectively, N = {1, ..., n} and M = {1, ..., m}. We define the generalized set covering polytope associated with the matrix A as the solution set Q(A) of the following system:

    ∑_{j∈P_i} x_j − ∑_{j∈R_i} x_j ≥ 1 − |R_i|,   i ∈ M,                (1)
    0 ≤ x_j ≤ 1,   j ∈ N,

where, for each row i of the matrix A, P_i is the set of columns which have a +1 entry in row i and R_i is the set of columns which have a −1 entry in row i. Evidently, P_i ∩ R_i = ∅. With a slight abuse of notation, we will sometimes use the term "row i" to refer to the set P_i ∪ R_i, thus allowing statements like "row i contains column j" or "row i intersects row k". The matrix A is said to be the constraint matrix of the system (1).

A column of A is said to be monotone if it does not contain both a 1 and a −1. The matrix A′ obtained from A by multiplying by −1 all the entries of a column j is said to be a switching of A. We also say that A′ is obtained by switching column j. Observe that switching column j corresponds to replacing variable x_j by variable y_j = 1 − x_j in (1).

A zero vector, denoted by 0, is a vector with all components equal to zero. A one vector, denoted by 1, is a vector with all components equal to 1.

In what follows we will denote by P the set ∪_{i=1}^m P_i, by R the set ∪_{i=1}^m R_i and by β(A) the vector whose i-th component is |R_i|. This notation will allow us to write system (1) in the more compact form

    Ax ≥ 1 − β(A),
    0 ≤ x ≤ 1.

The columns in the set P, that is, columns having some positive entry, will be called positive columns. Analogously, the columns in the set R, which have some negative entry, will be called negative columns. Observe that, in the case of a (0, 1) matrix A, system (1) defines the usual set covering polytope associated with A.

As in the (0, 1) case, a matrix A is said to be ideal if and only if every vertex of Q(A) is integral.
Evidently, A is ideal if and only if every switching of A is ideal.

A non-trivial constraint of (1) associated with a row i ∈ M is dominated in (1) if there exists a row k ∈ M − {i} such that P_k ⊆ P_i and R_k ⊆ R_i. Any row of A which corresponds to a dominated constraint is also said to be dominated.
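The row, switching and domination notions above are easy to make concrete. The following is a minimal illustrative sketch (the list-of-lists matrix representation and the function names are our own, not the paper's):

```python
def switch(A, j):
    """Switching of A on column j: multiply every entry of column j by -1.
    This corresponds to substituting y_j = 1 - x_j in system (1)."""
    return [[-a if c == j else a for c, a in enumerate(row)] for row in A]

def constraint(row):
    """Return (coefficients, rhs) of the generalized covering constraint
    sum_{j in P_i} x_j - sum_{j in R_i} x_j >= 1 - |R_i| for this row."""
    return row, 1 - sum(1 for a in row if a == -1)

def dominated_by(row_i, row_k):
    """True when row k's constraint implies row i's, i.e. P_k is contained
    in P_i and R_k is contained in R_i (entrywise: every non-zero of row k
    appears with the same sign in row i)."""
    return all(b == 0 or b == a for a, b in zip(row_i, row_k))
```

Switching a column twice gives the matrix back, and a row with a single −1 entry yields a constraint with right-hand side 0, as in (1).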

Definition 1.1 Let A be a (0, ±1) matrix and j a column of A. Then

(i) the deletion of j in A is the submatrix A\{j} obtained by removing all rows with a 1 in column j, removing column j and, finally, removing any dominated row;

(ii) the contraction of j in A is the submatrix A/{j} obtained by removing all rows with a −1 in column j, removing column j and, finally, removing any dominated row. □

Observe that deletion and contraction, as defined above, generalize the corresponding operations for (0, 1) matrices (clutters), and so we use the standard notation for them. Deletion and contraction operations are commutative. Hence, for each pair C, D of subsets of N such that C ∩ D = ∅, we can contract the columns in C and delete the columns in D in any order, obtaining the submatrix A′ = A\D/C of A. The matrix A′ is said to be a minor of A.

Moreover, as in the (0, 1) case, the system (A\{j})x ≥ 1 − β(A\{j}) is obtained from Ax ≥ 1 − β(A) by fixing the variable x_j at 1. Similarly, the system (A/{j})x ≥ 1 − β(A/{j}) is obtained from Ax ≥ 1 − β(A) by fixing the variable x_j at 0. Consequently, again as in the (0, 1) case, it can be shown that every minor of an ideal matrix is ideal.

Following Guenin [2], we also define an elimination operation on a (0, ±1) matrix A, as follows:

Definition 1.2 Let A be a (0, ±1) matrix and j a column of A. Then the elimination of j in A is the submatrix A − {j} obtained by removing all rows with a non-zero entry in column j and removing column j. □

Let i, k be two rows in M such that P_i ∩ R_k = {l} and P_k ∩ R_i = ∅. The following constraint is said to be an implication of the system (1):

    ∑_{j∈(P_i∪P_k)−{l}} x_j − ∑_{j∈(R_i∪R_k)−{l}} x_j ≥ 1 − |(R_i ∪ R_k) − {l}|.      (2)

Observe that the constraint (2) has the same structure as the constraints in (1). Rows i and k are said to be the antecedents of (2).

If (P_i ∪ R_i) ∩ (P_k ∪ R_k) = {l}, we say that (2) is a disjoint implication.
Observe that in this case the constraint (2) is a linear consequence of system (1).

The system obtained starting from (1), recursively adding all implications and finally removing all dominated rows is said to be the logical completion of (1). The constraint matrix of such a system is also called the logical completion of A and denoted by A*. It is well known that Q(A*) ∩ {0, 1}^n = Q(A) ∩ {0, 1}^n.

Analogously, the system obtained by recursively adding all disjoint implications and removing all dominated constraints is called the disjoint completion of (1). Its constraint matrix is called the disjoint completion of A and denoted by A⁺; evidently, Q(A) =

Q(A⁺). A matrix A which coincides with its disjoint completion will be called d-complete.

In the rest of the paper we will refer to a row of A associated with a dominated constraint (with an implication) as a dominated row (an implication).

The main purpose of this paper is to establish a connection between the properties of (0, ±1) ideal matrices and those of (0, 1) ideal matrices. Such a connection will allow us to define a sound concept of minimally non-ideal matrix for the (0, ±1) case and to give a characterization of the minimally non-ideal (0, ±1) matrices.

A crucial step toward this goal is the description of a standard transformation which maps the polytope Q(A) associated with a (0, ±1) matrix A to a face of a suitable set covering polytope Q(B), associated with a (0, 1) matrix B. To describe this transformation, we first rewrite the system (1) as follows:

    ∑_{j∈P_i} x_j + ∑_{j∈R_i} (1 − x_j) ≥ 1,   i ∈ M,                  (3)
    0 ≤ x_j ≤ 1,   j ∈ N.

Now, if we let y_j = x_j for each j ∈ P and z_j = 1 − x_j for each j ∈ R, we can rewrite system (3) as follows:

    ∑_{j∈P_i} y_j + ∑_{j∈R_i} z_j ≥ 1,   i ∈ M,                        (4)
    y_j + z_j = 1,   j ∈ P ∩ R,
    y_j ≥ 0,   j ∈ P,
    z_j ≥ 0,   j ∈ R.

Let D(A) be the incidence matrix of the constraints in (4). If d = |P ∩ R| is the number of non-monotone columns of A, then D(A) is a (0, 1) matrix with m + d rows and n + d columns. The first m rows of D(A) correspond to the constraints of (1); the remaining d rows correspond to the coupling constraints y_j + z_j = 1 in (4). The set of columns of D(A) is partitioned into two sets: each column in the first set corresponds to a variable y_j (j ∈ P) and is derived from the j-th column of A by turning the −1 entries to zero; each column in the second set corresponds to a variable z_j (j ∈ R) and is derived from the j-th column of A by first switching it and then turning the −1 entries to zero. To make clear the connection of a column of D(A) with the column of A from which it derives, we will call the columns in the first (second) set positive (negative).
For a set of columns J ⊆ N we shall denote by p(J) and r(J) the corresponding

subsets of positive and negative columns of D(A), with p(J) = ∅ if J ∩ P = ∅, r(J) = ∅ if J ∩ R = ∅, p(j) = p({j}) and r(j) = r({j}).

As one can promptly see, the system (4) defines a face F(A) of the set covering polytope Q(D(A)). Moreover, there is a one-to-one correspondence between solutions of (1) and solutions of (4). In particular, if P = N then such a correspondence is defined by y = x, z = 1 − x, implying that Q(A) is the projection onto R^n of F(A).

We call D(A) the (0, 1)-extension of A. Moreover, if x is a vector in Q(A), we call the extension of x the vector (y, z) ∈ Q(D(A)), where y_i = x_i for i ∈ P and z_i = 1 − x_i for i ∈ R.

2 A characterization of ideal (0, ±1) matrices

In [3] John Hooker defines a set covering submatrix of a (0, ±1) matrix A as a maximal row submatrix such that every column is monotone. Using this concept, he was able to give the following characterization of the ideal (0, ±1) matrices that coincide with their logical completion.

Theorem 2.1 (Hooker) If A = A* then Q(A) is integral if and only if, for every set covering submatrix B of A, Q(B) is integral. □

In the following theorem, we give a more general characterization of ideal (0, ±1) matrices.

Theorem 2.2 Let A be a (0, ±1) matrix. Then A is ideal if and only if D(A⁺) is ideal.

Proof. If part. Assume that A is non-ideal, that is, there exists some non-integral vertex x of Q(A) = Q(A⁺). Let (y, z) be the extension of x. Obviously, (y, z) is an extreme point of the face F(A⁺) of Q(D(A⁺)) and, hence, it is an extreme point of Q(D(A⁺)). This proves that D(A⁺) is non-ideal.

Only if part. Assume now that D(A⁺) is non-ideal, that is, there exists some non-integral vertex (y, z) of Q(D(A⁺)). Consider first the case A⁺ = A*.

If (y, z) ∈ F(A) then y is a (non-integral) vertex of Q(A), proving that A is non-ideal. Assume, conversely, that (y, z) ∉ F(A).
Hence there exists a column h ∈ N such that

    y_h + z_h > 1.                                                     (5)

Since (y, z) is an extreme point of Q(D(A⁺)), y_h and z_h cannot be decreased without violating some constraint defining Q(D(A⁺)). In other words, there exist two rows u and v of D(A⁺) which satisfy:

    ∑_{j∈P_u} y_j + ∑_{j∈R_u} z_j = 1,                                 (6)

    ∑_{j∈P_v} y_j + ∑_{j∈R_v} z_j = 1,                                 (7)

with h ∈ P_u ∩ R_v. Let P′ = P_u ∪ P_v − {h} and R′ = R_u ∪ R_v − {h}. By definition of logical completion, there exists some row in D(A*) = D(A⁺) whose sets of positive and negative columns are contained in P′ and R′, respectively. If P′ ∪ R′ = ∅, then D(A*) contains the empty row and hence Q(D(A*)) is empty, contradicting the hypothesis that it contains the vertex (y, z). It follows that P′ ∪ R′ ≠ ∅ and that the inequality

    ∑_{j∈P′} y_j + ∑_{j∈R′} z_j ≥ 1                                    (8)

is satisfied by all the points in Q(D(A⁺)) = Q(D(A*)). On the other hand, from (5), (6) and (7) it follows that (8) is not satisfied by (y, z), a contradiction.

Assume now that A* ≠ A⁺.

Claim 1. If A* ≠ A⁺ then Q(A*) ⊊ Q(A⁺).

We prove the claim by exhibiting a fractional point x̄ in Q(A⁺) − Q(A*). Since A* ≠ A⁺, there exist rows of A* that are not rows of A⁺. Let w be one of these rows such that k = |P_w ∪ R_w| has the minimum value. By switching columns in R_w, we can assume, without loss of generality, that R_w = ∅.

Let A′ be the submatrix of A⁺ obtained by elimination of the columns in C_0 = N − P_w. Let x̄′ be the vector of Q(A′) whose components are all equal to 1/(k+1). Observe that x̄′ violates the constraint

    ∑_{j∈P_w} x_j ≥ 1                                                  (9)

associated with row w of A*. However, we claim that x̄′ satisfies the constraint defined by any row u of A′. In fact, we have that |R_u| ≥ 2, otherwise either u or the implication of u and w would dominate row w in A*. The inequality associated with row u can be written as follows:

    ∑_{j∈P_u} x_j + ∑_{j∈R_u} (1 − x_j) ≥ 1.                           (10)

Since |R_u| ≥ 2 and each component of x̄′ has value 1/(k+1), we have that (10) is satisfied by x̄′.

Hence, the vector x̄′ is in Q(A′) = Q(A⁺ − C_0) but violates inequality (9). Moreover, x̄′ satisfies the following property:

(a) every component has a value between 1/(k+1) and k/(k+1).

Let C′ be a smallest subset of C_0 such that there exists a vector x̄′ ∈ Q(A⁺ − C′) which violates inequality (9) and satisfies property (a). The theorem will be proved by showing that C′ = ∅.

Assume, conversely, that |C′| ≥ 1, let h be an index in C′ and let C″ = C′ − {h}. To complete the proof we now show that there exists a fractional point x̄″ ∈ Q(A⁺ − C″) which violates inequality (9) and satisfies property (a), thus contradicting the assumption of minimality of C′.

To this purpose, let x̄″ be defined as follows:

    x̄″_j = x̄′_j,   j ∈ N − C′;
    x̄″_h = λ.

Evidently, for any choice of λ the vector x̄″ violates inequality (9). We will show that the value λ can be chosen so that 1/(k+1) ≤ λ ≤ k/(k+1) and every constraint associated with a row of A″ = A⁺ − C″ is satisfied by x̄″.

If this is not the case, then there exist two rows r and s of A″ (and hence of A⁺) whose associated constraints can be written as follows:

    ∑_{j∈P_r} x_j + ∑_{j∈R_r} (1 − x_j) ≥ 1,                           (11)
    ∑_{j∈P_s} x_j + ∑_{j∈R_s} (1 − x_j) ≥ 1,                           (12)

such that h ∈ P_r ∩ R_s and, for any value λ ∈ [1/(k+1), k/(k+1)], at least one of the two constraints is violated by x̄″. Observe that |P_s ∪ R_s| ≥ 2 and |P_r ∪ R_r| ≥ 2, otherwise the disjoint implication of r and s would dominate both r and s in A⁺. Consequently, by property (a), letting λ = 1/(k+1) the vector x̄″ satisfies inequality (12) and, hence, violates inequality (11). On the other hand, letting λ = k/(k+1) the vector x̄″ satisfies inequality (11). It follows that there exists a value λ̄ ∈ [1/(k+1), k/(k+1)] such that, letting λ = λ̄, the vector x̄″ satisfies:

    ∑_{j∈P_r} x̄″_j + ∑_{j∈R_r} (1 − x̄″_j) = 1,                        (13)
    ∑_{j∈P_s} x̄″_j + ∑_{j∈R_s} (1 − x̄″_j) < 1.                        (14)

Let P′ = P_r ∪ P_s − {h} and R′ = R_r ∪ R_s − {h}. Summing up inequalities (13) and (14) and observing that x̄″_j = x̄′_j for j ≠ h, we get the following relation:

    ∑_{j∈P′} x̄′_j + ∑_{j∈R′} (1 − x̄′_j) + ∑_{j∈P_r∩P_s} x̄′_j + ∑_{j∈R_r∩R_s} (1 − x̄′_j) < 1.     (15)

Observe that P′ ∩ R′ = ∅, otherwise inequality (15) would be contradicted. It follows that rows r and s are the antecedents of an implication. Suppose now that there exists some row t in A⁺ such that P_t ⊆ P′ and R_t ⊆ R′. Observe that t is also a row in A″. Hence the inequality

    ∑_{j∈P′} x_j + ∑_{j∈R′} (1 − x_j) ≥ 1                              (16)

is satisfied by all the points in Q(A″), contradicting inequality (15).

Consequently, rows r and s are the antecedents of a non-disjoint implication which is not dominated by any row in A⁺. Then there exists some row t of A* which is not in A⁺ such that P_t ⊆ P′ and R_t ⊆ R′. Since |P_t ∪ R_t| ≥ k, we have |P′ ∪ R′| ≥ k. Moreover, we have |(P_r ∩ P_s) ∪ (R_r ∩ R_s)| ≥ 1. Hence, by property (a) we again get a contradiction to inequality (15).

We can conclude that there exists a value λ ∈ [1/(k+1), k/(k+1)] such that the point x̄″ belongs to Q(A″) and violates inequality (9). End of Claim 1.

Claim 2. If A* ≠ A⁺ then A is non-ideal.

Observe that, by Claim 1, Q(A*) ⊊ Q(A⁺). Moreover, Q(A*) ∩ {0, 1}^n = Q(A⁺) ∩ {0, 1}^n, and so Q(A⁺) = Q(A) has a fractional vertex. Consequently, A is non-ideal. End of Claim 2.

Claim 2 concludes the proof of the theorem. □

Recently [2], Guenin was able to prove a related result. Given a (0, ±1) matrix A, he defined the matrix Â to be the matrix obtained from A by adding every (0, ±1) row such that the associated constraint of the form (1) is valid for Q(A).

Theorem 2.3 (Guenin) Let A be a (0, ±1) matrix. Then A is ideal if and only if D(Â) is ideal. □

In general A⁺ ≠ Â, as the following example shows:

    A = [ -1  1  1  0  0 ]
        [ -1  0  0  1  1 ]                                             (17)
        [  1  1  0  1  0 ]
        [  1  0  1  0  1 ]

Observe that A = A⁺ ≠ Â, since the row (0 1 1 1 1) belongs to Â but not to A⁺.
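The implication operation of Section 1, which underlies the completions compared in this example, can be sketched as follows (an illustrative sketch; the representation and function names are our own):

```python
def implication(ri, rk):
    """Implication (2) of rows i and k of system (1): defined when
    P_i ∩ R_k = {l} and P_k ∩ R_i is empty; returns the implied row,
    or None when the rows are not antecedents."""
    Pi = {j for j, a in enumerate(ri) if a == 1}
    Ri = {j for j, a in enumerate(ri) if a == -1}
    Pk = {j for j, a in enumerate(rk) if a == 1}
    Rk = {j for j, a in enumerate(rk) if a == -1}
    if len(Pi & Rk) != 1 or (Pk & Ri):
        return None
    (l,) = Pi & Rk
    row = [0] * len(ri)
    for j in (Pi | Pk) - {l}:   # positive part (P_i ∪ P_k) − {l}
        row[j] = 1
    for j in (Ri | Rk) - {l}:   # negative part (R_i ∪ R_k) − {l}
        row[j] = -1
    return row

def disjoint(ri, rk):
    """The implication is disjoint when the antecedents share only column l."""
    return sum(1 for a, b in zip(ri, rk) if a and b) == 1
```

For instance, rows three and one of the matrix in (17) are antecedents of the implication (0 1 1 1 0), but they share two columns, so the implication is not disjoint; this is consistent with A = A⁺ in the example above.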

Evidently, Q(D(A)) ⊇ Q(D(A⁺)) ⊇ Q(D(A*)) ⊇ Q(D(Â)). Moreover, if A is ideal then, by Claim 2 of Theorem 2.2, we have that A⁺ = A* = Â and so D(A⁺) = D(A*) = D(Â). Then, by Theorem 2.2, D(Â) is ideal. Conversely, if A is non-ideal then F(A) = F(A⁺) = F(Â) has a fractional vertex, and so D(Â) is non-ideal.

It follows that Guenin's theorem is a corollary of Theorem 2.2. On the contrary, Hooker's theorem (2.1) is not an immediate consequence of Theorem 2.2. In fact, the discussion of the relationship between our result and Hooker's theorem requires some more definitions and will be one of the subjects of the next section.

3 Minimally non-ideal (0, ±1) matrices

A (0, 1) matrix is said to be minimally non-ideal if it is non-ideal but every proper minor is ideal. Alfred Lehman [4] showed that minimally non-ideal (0, 1) matrices have certain regularity properties except for one infinite family. In order to describe this result, we need to introduce some definitions.

We denote by J_s the square ((s+1) × (s+1)) (0, 1) matrix whose rows are the characteristic vectors over the set {1, ..., s+1} (s ≥ 2) of the following sets: {2, 3, ..., s+1} and {1, i} for i = 2, ..., s+1. Moreover, given a (0, 1) (m × n) matrix A, let a_i denote the i-th row of A and let α(A) = min{a_i^T 1 : i = 1, ..., m}. Let τ(A) denote the minimum of x^T 1 over all the (0, 1) vectors in Q(A) (the covering number of A). Finally, let E be the matrix of all ones and I the identity matrix of appropriate dimension.

Theorem 3.1 (Lehman) If A is a minimally non-ideal (0, 1) matrix with n columns, then either

(a) A is J_{n−1}; or

(b) there are exactly n linearly independent rows of A, r_1, ..., r_n, with α(A) ones, and n linearly independent (0, 1) vectors of Q(A), b_1, ..., b_n, with τ(A) ones. Moreover, denoting by R and B the square matrices whose rows are the vectors r_i and b_i, respectively, the following hold:

(b1) RB^T = E + (α(A)τ(A) − n)I;

(b2) each column of R has α(A) ones and each column of B has τ(A) ones.
□

An easy corollary of Lehman's theorem is the existence of a unique fractional vertex of Q(A) when A is minimally non-ideal.

Corollary 3.2 If A is minimally non-ideal, then Q(A) has a unique fractional vertex x̄. In particular, if A ≅ J_{n−1} then x̄_1 = (n−2)/(n−1) and x̄_i = 1/(n−1) for i = 2, ..., n. If, conversely, A ≇ J_{n−1}, then x̄_i = 1/α(A) for i = 1, ..., n. □
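The family J_s and the fractional vertex of Corollary 3.2 can be checked directly. The sketch below is illustrative (the representation and function names are ours):

```python
from fractions import Fraction

def lehman_J(s):
    """Lehman's (s+1) x (s+1) matrix J_s: the first row is the
    characteristic vector of {2,...,s+1}; row i is that of {1,i},
    for i = 2,...,s+1."""
    n = s + 1
    rows = [[0] + [1] * s]
    for i in range(1, n):
        row = [0] * n
        row[0] = row[i] = 1
        rows.append(row)
    return rows

def fractional_vertex_J(s):
    """The fractional vertex of Q(J_s) given by Corollary 3.2.  Here the
    number of columns is n = s+1, so J_s = J_{n-1}: x_1 = (n-2)/(n-1) = (s-1)/s
    and x_i = 1/(n-1) = 1/s for the remaining components."""
    return [Fraction(s - 1, s)] + [Fraction(1, s)] * s
```

Every covering constraint of J_s is tight at this point: the long row sums s copies of 1/s, and each row {1, i} sums (s−1)/s + 1/s.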

It is natural to extend the notion of a minimally non-ideal matrix to the (0, ±1) case. In particular, we would like to define an appropriate generalization of this notion which preserves the nice regularity properties expressed by Lehman's result. However, if we define a minimally non-ideal matrix A as a matrix which is non-ideal but all of whose proper minors are ideal, we lose the fundamental property that Q(A) has a unique fractional vertex.

In fact, the following matrix M has the property that any deletion or contraction produces an ideal submatrix, but Q(M) has two fractional vertices, namely x̄_1 = (2/3, 1/3, 1/3) and x̄_2 = (1/3, 2/3, 2/3):

    M = [ -1  1  1 ]
        [  1  1  0 ]
        [  1  0  1 ]                                                   (18)
        [  1 -1 -1 ]
        [ -1 -1  0 ]
        [ -1  0 -1 ]

In [2] Bertrand Guenin shows that deletion, contraction and elimination are commutative and calls a minor of a (0, ±1) matrix any submatrix obtained by performing a sequence of the three operations. Unfortunately, the matrix M above is also minimally non-ideal with respect to this alternative definition of minor. It follows that we need a different concept of "minor" to overcome the above difficulty. To this purpose, we augment the list of minor-taking operations by defining two more operations.

Definition 3.3 Let A be a (0, ±1) matrix and J a subset of N. Then

(i) the semi-deletion of J in A is the submatrix A\\J obtained by removing all rows with a 1 in some column in J and removing all zero columns;

(ii) the semi-contraction of J in A is the submatrix A//J obtained by removing all rows with a −1 in some column in J and removing all zero columns. □

The operations of deletion, contraction, semi-deletion and semi-contraction performed on A have a clear counterpart in the (0, 1)-extension D(A). In fact, we have the following proposition, whose easy proof is left to the reader.

Proposition 3.4 Let J be a subset of N. We have that:

    D(A\J)  = D(A)\p(J)/r(J);
    D(A/J)  = D(A)/p(J)\r(J);
    D(A\\J) = D(A)\p(J);
    D(A//J) = D(A)\r(J). □
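The (0, 1)-extension D(A) of Section 1, which Proposition 3.4 relates to the minor-taking operations, can be sketched as follows (columns ordered as y_j for j ∈ P followed by z_j for j ∈ R; the list-of-lists representation and the function name are our own):

```python
def extension(A):
    """(0,1)-extension D(A) of a (0,±1) matrix A: row i of A becomes a row
    with ones on {y_j : j in P_i} and {z_j : j in R_i}; one coupling row
    y_j + z_j is appended for every non-monotone column j (j in P ∩ R)."""
    n = len(A[0])
    P = [j for j in range(n) if any(row[j] == 1 for row in A)]
    R = [j for j in range(n) if any(row[j] == -1 for row in A)]
    ycol = {j: k for k, j in enumerate(P)}            # y-columns come first
    zcol = {j: len(P) + k for k, j in enumerate(R)}   # then z-columns
    D = []
    for row in A:
        d = [0] * (len(P) + len(R))
        for j, a in enumerate(row):
            if a == 1:
                d[ycol[j]] = 1
            elif a == -1:
                d[zcol[j]] = 1
        D.append(d)
    for j in P:
        if j in zcol:                                 # non-monotone column
            d = [0] * (len(P) + len(R))
            d[ycol[j]] = d[zcol[j]] = 1
            D.append(d)
    return D
```

On the matrix A of example (17), which has d = 1 non-monotone column, this produces an (m+1) × (n+1) = 5 × 6 matrix whose last row is the coupling row y_1 + z_1.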

We define a weak minor of A to be any submatrix of A obtained by performing a series of deletions, contractions, semi-deletions and semi-contractions. By Proposition 3.4, if A′ is a weak minor of A then D(A′) is a minor of D(A). It follows that the order in which the series of operations is performed does not affect the result. More specifically, letting N_1, N_2, N_3, N_4 be (possibly empty) subsets of N such that N_1 ∩ N_i = ∅ (i = 2, 3, 4) and N_2 ∩ N_i = ∅ (i = 1, 3, 4), we have that the weak minor obtained by deleting N_1, contracting N_2, semi-deleting N_3 and semi-contracting N_4, in any order, coincides with the weak minor

    A′ = A\N_1/N_2\\N_3//N_4.                                          (19)

Of course a minor is a weak minor, while the converse does not hold in general. Moreover, the elimination of a column corresponds to performing both a semi-deletion and a contraction (or a semi-contraction and a deletion). Hence a minor according to Guenin (obtained by deletions, contractions and eliminations) is also a weak minor, obtained by choosing N_3 = N_4 in (19).

Finally, the concept of weak minor also generalizes the concept of set covering submatrix [3] described at the beginning of Section 2. In fact, any set covering submatrix of A has the form (19) where N_1 ∪ N_2 = ∅, N_3 ∩ N_4 = ∅ and N_3 ∪ N_4 is the index set of non-monotone columns. In particular, by switching the columns with index in N_3, we get a (0, 1) matrix from A′.

We can now define the following concept of a minimally non-ideal matrix.

Definition 3.5 A minimally non-ideal (0, ±1) matrix is a non-ideal matrix with the property that every proper weak minor is ideal. □

We will now derive a characterization of minimally non-ideal (0, ±1) matrices in the spirit of Lehman's theorem.
A consequence of our characterization will be that if A is a minimally non-ideal matrix then Q(A) has a unique fractional vertex.

Observe that the class of minimally non-ideal (0, ±1) matrices includes all the (0, 1) minimally non-ideal matrices and their switchings.

Another class of minimally non-ideal matrices is given by the minimally non-balanced matrices defined by Truemper [7]. A minimally non-balanced matrix is a square (0, ±1) matrix with two non-zero entries per row and per column and with the property that the sum of its entries is not a multiple of 4. It is easy to construct minimally non-balanced (0, ±1) matrices which are not switchings of (0, 1) matrices. Moreover, we can observe that A is minimally non-balanced if and only if D(A) is the node-edge incidence matrix of an odd cycle.

Finally, for each integer s ≥ 2, there exists a minimally non-ideal (0, ±1) matrix which is neither a switching of a (0, 1) matrix nor minimally non-balanced. We will

call such a matrix J̄_s. It is obtained from the special matrix J_s defined by Lehman by simply turning the first component of the first row from 0 to −1.

    J̄_3 = [ -1  1  1  1 ]
          [  1  1  0  0 ]
          [  1  0  1  0 ]
          [  1  0  0  1 ]

By analogy, we denote by J̄_1 the (2 × 2) matrix with the first component of the first row equal to −1 and the other entries equal to 1. Observe that D(J̄_s) = J_{s+1}.

The following theorem and the subsequent corollary characterize minimally non-ideal (0, ±1) matrices in terms of their (0, 1) counterparts and provide a description of their structure. In particular, we will show that a (0, ±1) minimally non-ideal matrix A which is not a switching of a (0, 1) matrix is either a switching of some J̄_s or contains a minimally non-balanced row submatrix R. In the latter case, every row of A not in R has at least three non-zero entries.

Theorem 3.6 Let A be a (0, ±1) matrix. Then A is minimally non-ideal if and only if D(A) is minimally non-ideal.

Proof. Let N denote the set of columns of A, with |N| = n.

Claim 1. If D(A) is minimally non-ideal, then A is minimally non-ideal.

Since D(A) is minimally non-ideal, every proper minor of D(A) is ideal and, hence, every proper weak minor of A is ideal. Consequently, to show that A is minimally non-ideal, it is enough to prove that A is non-ideal.

If no column j ∈ N exists such that both p(j) and r(j) belong to D(A), we have that A is monotone, that is, a switching of a (0, 1) matrix (namely D(A)). In this case, obviously, A is non-ideal.

If, on the other hand, there exists some j ∈ N such that both p(j) and r(j) belong to D(A), we have that D(A) contains a row with only two non-zero elements (namely those corresponding to p(j) and r(j)). It follows that α(D(A)) = 2. By Lehman's theorem, either D(A) is J_n or D(A) contains as a row submatrix the node-edge incidence matrix R′ of an odd cycle, and all other rows have at least three non-zero elements. In the first case, A is J̄_{n−1}, which is non-ideal.
In the second case, A has a row submatrix R with the property that D(R) = R′ and R is minimally non-balanced. Since Q(R) has the (unique) fractional vertex (1/2, ..., 1/2), which satisfies all the constraints associated with the other rows of A, it follows that A is again non-ideal. End of Claim 1.

Assume now that A is minimally non-ideal but D(A) is not minimally non-ideal. Since A is non-ideal, we have that D(A) is non-ideal, and there exists a proper minor B of D(A) which is minimally non-ideal. But then there exists a proper weak minor B′ of A such that B = D(B′) and, by Claim 1, B′ is non-ideal, contradicting the minimality of A. Hence, the theorem follows. □

From the above theorem the following corollary can be easily derived.
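The matrices J̄_s and the fractional point that witnesses their non-ideality can be checked directly. The sketch below is illustrative (representation and names are ours); the point used is x̄_1 = (n−1)/n, x̄_i = 1/n with n = s+1, as in Corollary 3.8(ii):

```python
from fractions import Fraction

def Jbar(s):
    """J̄_s: Lehman's J_s with the first entry of the first row turned
    from 0 to -1 (an (s+1) x (s+1) (0,±1) matrix)."""
    n = s + 1
    rows = [[-1] + [1] * s]
    for i in range(1, n):
        row = [0] * n
        row[0] = row[i] = 1
        rows.append(row)
    return rows

def slack(row, x):
    """Left-hand side minus right-hand side of the generalized covering
    constraint (1) for this row, evaluated at the point x."""
    rhs = 1 - sum(1 for a in row if a == -1)
    return sum(a * xi for a, xi in zip(row, x)) - rhs
```

All s+1 constraints of J̄_s are tight at this fractional point, which is how it arises as a vertex of Q(J̄_s).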

Corollary 3.7 Let A be a (0, ±1) matrix with n columns. Then A is minimally non-ideal if and only if A is a switching of J̄_{n−1}, or a switching of a minimally non-ideal (0, 1) matrix, or A contains a row submatrix R which is minimally non-balanced and every row of A not in R has at least three non-zero entries. □

The above corollary and Lehman's characterization of the unique fractional vertex of a minimally non-ideal (0, 1) matrix immediately imply the following corollary.

Corollary 3.8 If A is a minimally non-ideal matrix, then Q(A) has a unique fractional vertex. In particular, there exists a switching A′ of A such that A′ and the unique fractional vertex x̄ of Q(A′) satisfy one of the following:

(i) A′ ≅ J_{n−1} and x̄_1 = (n−2)/(n−1), x̄_i = 1/(n−1) for i = 2, ..., n;

(ii) A′ ≅ J̄_{n−1} and x̄_1 = (n−1)/n, x̄_i = 1/n for i = 2, ..., n;

(iii) A′ contains a minimally non-balanced row submatrix R, every row of A′ not in R has at least three non-zero entries, and x̄_i = 1/2 for i = 1, ..., n;

(iv) A′ is a minimally non-ideal (0, 1) matrix different from J_{n−1} and x̄_i = 1/α(A′) for i = 1, ..., n. □

In the case of minimally non-ideal d-complete matrices, it is possible to prove a sharper version of Corollary 3.7.

Corollary 3.9 A d-complete (0, ±1) matrix with n columns is minimally non-ideal if and only if it is a switching of J̄_{n−1} or a switching of a minimally non-ideal (0, 1) matrix.

Proof. Suppose that A is a minimally non-ideal d-complete matrix with n columns which is neither a switching of J̄_{n−1} nor a switching of a minimally non-ideal (0, 1) matrix. It follows that P ∩ R ≠ ∅, where P and R are the sets of positive and negative columns of A, respectively.
Moreover, by Corollary 3.7, the matrix A contains a row submatrix R which is minimally non-balanced, and every row of A not in R has at least three non-zero entries. It follows that D(A) contains a square row submatrix B = D(R) which is the incidence matrix of an odd cycle of length q ≥ 3.

Let i_1, ..., i_q be the columns of D(A) met by B, ordered in such a way that, for each pair of consecutive indices i_h, i_{h+1} (with i_{q+1} ≡ i_1), there exists a row in B with ones in columns i_h, i_{h+1} and zeroes elsewhere. Let j ∈ P ∩ R and assume, without loss of generality, that p(j) ≡ i_2 and r(j) ≡ i_3. Suppose that q ≥ 5. Since A is d-complete, we must have that D(A) contains a row with two ones in columns i_1, i_4 and zeroes elsewhere, contradicting the assumption that every row not in B has cardinality at least 3. It follows that q = 3. Hence, D(A) ≅ J_2 and A is a switching of J̄_1, a contradiction. □
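Truemper's condition used in this proof is directly checkable. The following is a minimal sketch (the function name is ours):

```python
def minimally_non_balanced(A):
    """Truemper's condition: a square (0,±1) matrix with exactly two
    non-zero entries in every row and every column, whose entry sum is
    not a multiple of 4."""
    n = len(A)
    if any(len(row) != n for row in A):
        return False
    two_per_row = all(sum(1 for a in row if a) == 2 for row in A)
    two_per_col = all(sum(1 for row in A if row[j]) == 2 for j in range(n))
    return two_per_row and two_per_col and sum(map(sum, A)) % 4 != 0
```

The all-ones incidence matrix of a 3-cycle (entry sum 6) meets the condition, the 4-cycle (entry sum 8) does not, and J̄_1 (entry sum 2) meets it, consistent with its role as the q = 3 case above.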

The description of minimally non-ideal matrices given by Corollary 3.7 does not provide a characterization of (0, ±1) ideal matrices in terms of forbidden weak minors. In fact, while any non-ideal matrix contains a minimally non-ideal weak minor, the following example shows that an ideal matrix can also contain a (minimally) non-ideal weak minor (obtained, in this case, by contracting the first two columns and eliminating the fifth):

    A = [ 1  0  1  0  1 ]
        [ 0  1  0  0 -1 ]
        [ 1  0  1  1  0 ]
        [ 0  1  1 -1  0 ]

Nevertheless, since a (0, ±1) matrix A is ideal if and only if its disjoint completion A⁺ is ideal, we can try to characterize non-ideal matrices in terms of forbidden weak minors of their disjoint completions. Indeed, thanks to Theorem 2.2, we can prove the following theorem.

Theorem 3.10 A (0, ±1) matrix A is ideal if and only if every weak minor of A⁺ is ideal.

Proof. The if part is obvious. Hence, assume that A is an ideal matrix. It follows, by Theorem 2.2, that D(A⁺) is ideal. Let B be a weak minor of A⁺ which is non-ideal. It follows that the extension of a fractional vertex of Q(B) is a fractional vertex of Q(D(B)), and so D(B) is non-ideal. Since D(B) is a minor of D(A⁺) and all the minors of D(A⁺) are ideal, we have a contradiction. □

The above theorem implies that a (0, ±1) matrix A is ideal if and only if its disjoint completion A⁺ does not contain a minimally non-ideal weak minor whose structure is described by Corollary 3.7.

In the rest of this section, we will derive a sharper forbidden weak minor characterization of (0, ±1) ideal d-complete matrices by using the result stated in Corollary 3.9. Namely, we will show that the only forbidden weak minors are the switchings of J̄_s and of (0, 1) minimally non-ideal matrices.
In proving the above result, we will also derive some crucial properties of logical and disjoint completions of (0, ±1) matrices which are interesting in their own right.

Observe first that Corollary 3.9 and Theorem 3.10 do not immediately imply the sharper forbidden weak minor characterization stated above. In fact, it is not always true that every weak minor of a d-complete matrix is d-complete. In other words, the property of being d-complete is not hereditary with respect to taking weak minors. Indeed, it can easily be checked that J̄_s = J̄_s⁺, but the proper (weak) minor of J̄_s obtained by contracting any column in the set {2, ..., s+1} does not coincide with its disjoint completion. Moreover, the matrix J̄_s also has the property that its disjoint completion does not coincide with its logical completion.

As a matter of fact, the following Theorem 3.11 shows that the coincidence of the disjoint and the logical completion of a (0, ±1) matrix A is a necessary and sufficient condition for all the weak minors of A⁺ to be d-complete. Moreover, Theorem 3.12 will show that the matrices J̄_s (s ≥ 1) are (up to switching) the only matrices that one needs to forbid as weak minors of a d-complete matrix in order to guarantee the coincidence of disjoint and logical completions and, hence, to turn d-completeness into a hereditary property.

Theorem 3.11 Let A be a (0, ±1) matrix. Then A⁺ ≠ A* if and only if there exists a weak minor B of A⁺ satisfying B ≠ B⁺.

Proof. If part. Let B be a weak minor of A⁺ such that B ≠ B⁺. We have that there exist two rows of B, say q and t, and a column j such that (P_q ∪ R_q) ∩ (P_t ∪ R_t) = P_t ∩ R_q = {j}, and with the property that the disjoint implication w of q and t is not dominated by any row of B. Let q′ and t′ be the rows of A⁺ whose restrictions to B are q and t, respectively. Since B is a weak minor of A⁺, the signs of the entries of rows q′ and t′ must agree in every column outside B. It follows that rows q′ and t′ are the antecedents of an implication w′. Assume that A⁺ = A*. Then a row dominating w′ is in A⁺, and the restriction of such a row to B is dominated by a row w″ of B. But then w″ dominates w, a contradiction. It follows that A⁺ ≠ A*.

Only if part. Assume now that A⁺ ≠ A*. Then there exist two rows of A⁺, say q and t, and a column j such that P_t ∩ R_q = {j}, P_q ∩ R_t = ∅, (P_q ∩ P_t) ∪ (R_q ∩ R_t) ≠ ∅, and with the property that the implication of q and t is not dominated by any row in A⁺. Let B be the weak minor of A⁺ defined by B = A⁺/(P_q ∩ P_t)\(R_q ∩ R_t). B contains the restrictions of rows q and t, say q′ and t′, which are the antecedents of a disjoint implication. Moreover, B does not contain any row which dominates such a disjoint implication.
In other words, B is different from B+ and the theorem is proved. □

In the following we shall say that a matrix A is a minimal matrix with respect to a given property if A satisfies the property but none of its (proper) weak minors does.

Theorem 3.12 Let A be a (0,±1) d-complete matrix which is minimal with respect to the property that A+ ≠ A−. Then A is a switching of Js.

Proof. Since A = A+ ≠ A−, there exist two columns in A, say j and k, and two rows, say q and t, such that Pt ∩ Rq = {j}, Pq ∩ Rt = ∅, k ∈ (Pq ∩ Pt) ∪ (Rq ∩ Rt), and with the property that the implication of q and t is not dominated by any row in A. We can assume that the columns of A have been switched in such a way that Rq = {j}, Rt = ∅ and hence k ∈ Pq ∩ Pt. Let N denote the set of columns of A.

Claim 1. Pq ∪ Pt = N.

In fact, assume that there exists a column i ∉ Pq ∪ Pt (observe that i must be different from j and k). Then the weak minor B = A − {i} contains the restrictions of rows q and t, but does not contain any row which dominates their implication. This contradicts the hypothesis that B satisfies B+ = B−.

Claim 2. For any column l ∈ N − {j, k}, there exists a row rl of A whose restriction to A/{l} dominates the restriction of either row q or row t and such that l ∈ Prl.

In fact, the weak minor B = A/{l} satisfies B+ = B−. Hence, there exists a row rl of A whose restriction in B dominates the restriction of one of the two rows. Evidently, since row rl dominates neither row q nor row t, row rl has a 1 in column l.

Claim 3. For any column l ∈ N − {j, k}, row rl has a non-zero in column j, a 0 in column k and non-negative values in all the other entries.

Assume first that the restriction of row rl to B = A/{l} dominates the restriction of row q. As a consequence, row q has a 0 in column l. Moreover, since Rq = {j}, the only possible negative entry of row rl is in column j. It follows that row rl has a non-zero in column j, otherwise rl would dominate the implication of q and t, a contradiction. In particular, rl has a −1 in column j, since its restriction to B dominates the restriction of q.

Finally, assume that row rl has a 1 in column k. Since Pq ∪ Pt = N and l ∉ Pq, we have l ∈ Pt. It follows that no row in B can dominate the restriction of row t, and hence the restrictions to B of rows t and rl form the antecedents of a non-disjoint implication. Since B+ = B−, there exists a row r′ in A whose restriction in B dominates such an implication. But then row r′ dominates the implication of rows q and t in A, a contradiction. It follows that row rl has a 0 in column k.

An analogous argument proves the claim in the case in which the restriction of row rl to B dominates the restriction of row t.

Claim 4. There do not exist rows rl and rm with the property that j ∈ Prl ∩ Rrm.

Suppose conversely that such rows exist.
Consider the weak minor B = A − {k} and observe that every row of B+ is the restriction of a corresponding row of A. Since, by Claim 3, both rows rl and rm have a 0 in column k, their restrictions belong to B and are the antecedents of an implication w. Moreover, since B+ = B−, w is dominated by a row w′ of B+. But then w′ is the restriction to B+ of a row of A which dominates the implication of rows q and t, a contradiction.

It follows that we can assume without loss of generality that column j has been switched (possibly interchanging rows q and t) in such a way that j ∈ Prl for any l ∈ N − {j, k}. Hence we have the following.

Claim 5. All the entries of row rl (l ∈ N − {j, k}) are non-negative.

Claim 6. For any l ∈ N − {j, k}, the restriction of row rl to A/{l} dominates the restriction of row t.

In fact, observe that the restriction of row rl to the minor A/{l} cannot dominate the restriction of row q (since row q has a −1 in column j) and hence, by Claim 2, it must dominate the restriction of row t.

Claim 7. Row t has a 1 in columns j and k and a 0 elsewhere.

Assume, conversely, that row t has a non-zero entry in a column l ∈ N − {j, k}. By assumption, such an entry is a 1. Since row rl does not dominate row t, there exists a column i ∈ N − {j, k, l} with a 1 in row rl and a 0 in row t. But then the restriction of rl to A/{l} does not dominate the restriction of t, contradicting Claim 6.

Claim 8. For any l ∈ N − {j, k}, row rl has a 1 in columns j and l and a 0 elsewhere.

In fact, by Claim 7, this is the only possibility to ensure that the restriction of rl in A/{l} dominates the restriction of t.

Observe that, by Claims 1 and 7, we have Pq = N − {j}. It follows, also using Claim 5, that rows q, t, rl (l ∈ N − {j, k}) form the matrix Js (s = |N| − 1).

To conclude the proof we have to show that no other row exists in A. Assume conversely that there exists in A a row r′ ∉ {q, t} ∪ {rl : l ∈ N − {j, k}}. Then there exists a column i ≠ j such that i ∈ Rr′. In fact, otherwise, either row r′ dominates row q (if j ∈ Rr′), or it dominates row t (if Pr′ = {j}), or it is dominated by some row in {t} ∪ {rl : l ∈ N − {j, k}}. In any case we get a contradiction. It follows that the weak minor B = A/{i} is a proper minor of A with the property that B+ ≠ B−, a contradiction. Hence, the theorem follows. □

Corollary 3.13 Let A be a (0,±1) d-complete matrix. If A does not contain a switching of Js as a weak minor, then every weak minor B of A is d-complete.

Proof. By Theorem 3.12, if A does not contain a switching of Js as a weak minor, then A = A+ = A−. Hence, by Theorem 3.11, every weak minor B of A is d-complete. □

As a consequence of the above result, Corollary 3.9 and Theorem 3.10 immediately imply the following characterization of ideal matrices in terms of forbidden weak minors of their disjoint completions.

Corollary 3.14 Let A be a (0,±1) matrix. Then A is ideal if and only if A+ does not contain, as a weak minor, a switching of Js or a switching of a minimally non-ideal (0,1) matrix.

If A = A−, then we have the following interesting corollary.

Corollary 3.15 If A = A−, then A is ideal if and only if A does not contain, as a weak minor, a switching of a minimally non-ideal (0,1) matrix.

Proof. Since A = A− implies A = A+, by Corollary 3.14 we have simply to show that A does not contain a switching of Js as a weak minor. Assume, conversely, that a weak minor B of A is a switching of Js, and denote by S the columns of A not in B.
Let t and q be the rows of A corresponding to the first two rows of B. Evidently, no column in S has entries of opposite sign in rows t and q. It follows that t and q are the antecedents of a logical implication w in A. Since A = A−, A contains a row w′ which dominates w. But, since B contains the restrictions of rows t and q, it also contains a row which dominates the restriction of w′. Such a row also dominates the first row of B, a contradiction. □

Observe that Hooker's theorem, as stated in the previous section, is an immediate consequence of the above corollary.

Acknowledgements

We wish to thank Bertrand Guenin, who provided a better statement and a shorter proof for Theorem 3.12, and whose suggestions and comments were invaluable for improving the presentation of this paper.

References

[1] G. CORNUÉJOLS and B. NOVICK, Ideal 0,1 matrices, J. Combinatorial Theory Ser. B, 60, 145-157 (1994).

[2] B. GUENIN, Perfect and ideal (0,±1) matrices, Technical Report, GSIA, Carnegie Mellon University (1994).

[3] J. HOOKER, Resolution and integrality of satisfiability problems, Technical Report, GSIA, Carnegie Mellon University (1994).

[4] A. LEHMAN, On the width-length inequality, Mathematical Programming, 17, 403-413 (1979).

[5] M.W. PADBERG, Lehman's forbidden minor characterization of ideal 0,1 matrices, Working Paper No. 334, École Polytechnique, Laboratoire d'Économétrie, Paris, France (1990).

[6] P.D. SEYMOUR, On Lehman's width-length characterization, in "Polyhedral combinatorics" (W. Cook and P.D. Seymour, eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 1, 107-117 (1990).

[7] K. TRUEMPER, On balanced matrices and Tutte's characterization of regular matroids, preprint (1978).