An Extension of Sums of Squares Relaxations to Polynomial Optimization Problems Over Symmetric Cones


Research Reports on Mathematical and Computing Sciences, Series B: Operations Research

Department of Mathematical and Computing Sciences, Tokyo Institute of Technology

2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152-8552 Japan

An Extension of Sums of Squares Relaxations to Polynomial Optimization Problems over Symmetric Cones

Masakazu Kojima† and Masakazu Muramatsu⋆

Research Report B-406, April 2004

Abstract.

This paper is based on a recent work by Kojima which extended sums of squares relaxations of polynomial optimization problems to polynomial semidefinite programs. Let E and E+ be a finite dimensional real vector space and a symmetric cone embedded in E; examples of E and E+ include a pair of the N-dimensional Euclidean space and its nonnegative orthant, a pair of the N-dimensional Euclidean space and N-dimensional second order cones, and a pair of the space of m×m real symmetric (or complex Hermitian) matrices and the cone of their positive semidefinite matrices. Sums of squares relaxations are further extended to a polynomial optimization problem over E+, i.e., a minimization of a real valued polynomial a(x) in the n-dimensional real variable vector x over a compact feasible region {x : b(x) ∈ E+}, where b(x) denotes an E-valued polynomial in x. It is shown under a certain moderate assumption on the E-valued polynomial b(x) that optimal values of a sequence of sums of squares relaxations of the problem, which are converted into a sequence of semidefinite programs when they are numerically solved, converge to the optimal value of the problem.

Key words.

Polynomial Optimization Problem, Conic Program, Symmetric Cone, Euclidean Jordan Algebra, Sum of Squares, Global Optimization, Semidefinite Program

† Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1-W8-29 Oh-Okayama, Meguro-ku, Tokyo 152-8552 Japan. [email protected]

⋆ Department of Computer Science, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585 Japan. [email protected]

1 Introduction

This paper concerns an extension of the framework of the sums of squares (SOS) relaxation and semidefinite programming (SDP) relaxation for polynomial optimization problems, developed by Lasserre [9] and Parrilo [12], to polynomial optimization problems over symmetric cones. A symmetric cone is an extension of the positive orthant and has played important roles in extending interior-point methods from linear programming to convex programming. To date, the most general cone over which primal-dual interior-point methods are proved to be polynomially convergent is the symmetric cone ([1, 2, 10, 11]).

Let E be an N-dimensional Euclidean Jordan algebra and E+ be its associated symmetric cone. Let a(x) be a real valued polynomial in the variable vector x = (x1, x2, . . . , xn) ∈ R^n and b(x) an E-valued polynomial in x (precise definitions of Euclidean Jordan algebras, symmetric cones, and E-valued polynomials are given in Section 2). We consider a polynomial optimization problem over E+:

〈POP〉   minimize a(x) subject to b(x) ∈ E+.   (1)

Throughout the paper, we assume that the feasible region K = { x ∈ R^n : b(x) ∈ E+ } of 〈POP〉 is nonempty and compact.

A fundamental example of symmetric cones is the nonnegative orthant R^N_+ in the N-dimensional Euclidean space R^N. In this case, 〈POP〉 becomes a (standard) polynomial optimization problem (over R^N_+)

minimize a(x) subject to b(x) ≥ 0,   (2)

which includes various important optimization models such as nonconvex quadratic programs and 0-1 integer programs. For this case, Lasserre proposed a numerical method [9] and a software package [4] based on SDP relaxations. His method generates a sequence of SDP problems which serve as convex relaxations of the problem (2). Under a mild assumption on the R^N-valued polynomial b(x), optimal values of the SDP relaxations converge to the optimal value of the problem (2) (Theorem 4.2 of [9]). This method is also viewed as SOS relaxations [12, 13] of the problem (2) from the dual side. In fact, Lasserre [9] proved the convergence of optimal values of the SDP relaxations from the dual side using a fundamental lemma by Putinar, Lemma 4.1 of [14], on a representation of a positive polynomial on a compact semi-algebraic set of the form { x ∈ R^n : b(x) ≥ 0 } in terms of a sum of squares of polynomials.

Another important example of symmetric cones is the cone of positive semidefinite matrices in the space of real symmetric matrices. In this case, 〈POP〉 turns out to be a polynomial SDP, which covers a bilinear matrix inequality as a special case. Kojima [6] recently extended the SDP relaxation by Lasserre [9] and the SOS relaxations by Parrilo [12] to this case. In this extension, a polynomial penalty function, which was originally proposed for a polynomial optimization problem (over R^N_+) in the paper [5], played an essential role.

This paper may be regarded as a continuation of the paper [6]; we extend the framework of the SDP and SOS relaxations mentioned above further to 〈POP〉, a polynomial optimization problem over a general symmetric cone E+. We remark that besides polynomial SDP problems, 〈POP〉 covers polynomial second-order cone optimization problems, where we take a direct product of second-order cones for E+, and polynomial complex SDP problems, where we take the cone of Hermitian positive semidefinite matrices for E+. The main theoretical result is an extension of Putinar's lemma (Lemma 4.1 of [14]) to a representation of a positive polynomial on the nonempty and compact feasible region K of 〈POP〉. This extension is constructed in such a way as to exploit the sparsity of b. The proof relies on an extension of the polynomial penalty function [5] to K and the original Putinar's lemma. Based on the extended lemma, an SOS relaxation for 〈POP〉 is proposed to generate a sequence of SOS optimization problems whose optimal objective values converge to the optimal value of 〈POP〉 under a mild assumption similar to the one made in Theorem 4.2 of [9]. The sequence of SOS optimization problems can be regarded as a dual of a sequence of SDP relaxations obtained by extending Lasserre's SDP relaxation [9] to 〈POP〉.

In Section 2, we introduce definitions, notation, and basic properties of Euclidean Jordan algebras and E-valued polynomials, which are used throughout the paper. Section 3 presents a generalization of Putinar's lemma (Lemma 4.1 of [14]) and its proof. Section 4 is devoted to our main results; we derive several SOS and SDP relaxations of 〈POP〉 based on the dual approach in Section 4.1, and SDP relaxations based on the primal approach in Section 4.2. Convergence of optimal values of the relaxations to the optimal value of 〈POP〉 and some relationships among the relaxations are also shown there. Section 5 discusses a relationship between a cone of sums of squares of polynomial matrices proposed in [6] for polynomial SDPs and our cone of sums of squares of polynomial symmetric matrices.

2 Preliminaries

2.1 Euclidean Jordan algebras

Here we give a brief introduction to Euclidean Jordan algebras, a basic tool used extensively in this paper. Because the statements in this subsection are standard in the field of Jordan algebras, our presentation is concise and omits proofs. For more details and proofs, see textbooks on Jordan algebras, for example, Faraut and Koranyi [3].

We denote by R the field of real numbers. A finite dimensional vector space E over R is called a Jordan algebra if a bilinear mapping (multiplication) ◦ : E × E → E is defined satisfying

(J1) x ◦ y = y ◦ x,

(J2) [L(x^2), L(x)] = O,

where x^2 = x ◦ x, L(x) is the linear transformation of E defined by L(x)y = x ◦ y, and [A, B] = AB − BA for every pair of linear transformations A and B on E. Note that associativity does not hold for ◦, i.e., x ◦ (y ◦ z) ≠ (x ◦ y) ◦ z in general.

A Jordan algebra E is Euclidean if an associative inner product • is defined, i.e., (x ◦ y) • z = x • (y ◦ z) holds for every x, y, z ∈ E. Throughout the paper, we assume that E is a Euclidean Jordan algebra having an identity element e; it holds that e ◦ x = x ◦ e = x for all x ∈ E. Such an identity element is unique.

An element c ∈ E is called idempotent if c ◦ c = c. Idempotents c and c′ are orthogonal if c ◦ c′ = 0. When an idempotent c cannot be expressed as a sum of two other non-zero idempotents, c is primitive. We cannot choose an arbitrary number of primitive orthogonal idempotents in E; in fact, the number is bounded by N, the dimension of E. To show this, assume to the contrary that we can choose N + 1 orthogonal idempotents c1, . . . , c_{N+1}. Because N is the dimension of E, these idempotents are linearly dependent; without loss of generality, we assume that c_{N+1} = ∑_{j=1}^N λj cj holds for some λ1, . . . , λN which are not all zeros. For every i ∈ {1, . . . , N}, however, we have 0 = c_{N+1} ◦ ci = λi ci, which implies λi = 0. This is a contradiction.

We denote the maximum possible number of primitive orthogonal idempotents by m, which is called the rank of E. The rank of E is in general different from the dimension of E. We say that a set of idempotents {c1, . . . , cm} is orthonormal if they are mutually orthogonal and ‖cj‖ = √(cj • cj) = 1 for every j = 1, . . . , m. The following theorem is fundamental.

Theorem 1 (Spectral Decomposition) For f ∈ E, there exist a set of orthonormal primitive idempotents {c1, . . . , cm} and real numbers {λ1, . . . , λm} such that f = ∑_{i=1}^m λi ci.

This is Theorem III.1.2 of Faraut and Koranyi [3]. The real numbers λ1, . . . , λm and the set {c1, . . . , cm} are called the eigenvalues and the Jordan frame of f, respectively. The eigenvalues are continuous functions of f. It is also known that

∑_{j=1}^m cj = e   (3)

if {c1, . . . , cm} is a Jordan frame. When i ≠ j, we have

ci • cj = (e ◦ ci) • cj = e • (ci ◦ cj) = 0,   (4)

thus they are also orthogonal with respect to the inner product •. Define f^1 = f and f^k = f ◦ f^{k−1} (k = 2, 3, . . . ) recursively. For any positive integer k, it is easy to see that

f^k = (λ1 c1 + · · · + λm cm)^k = λ1^k c1 + · · · + λm^k cm,   (5)

because c1, . . . , cm are primitive orthogonal idempotents. Naturally, we have

f^0 = c1 + · · · + cm = e.

For E, the set E+ = { f^2 : f ∈ E } is called the symmetric cone associated with E. In view of (5), E+ is also characterized as the set of elements whose eigenvalues are all nonnegative. An important property of the symmetric cone E+ is self-duality; the dual cone of E+, defined by { g : g • f ≥ 0 for all f ∈ E+ }, is E+ itself.

The simplest example of E is the space of real numbers R. If we define

f ◦ g = f • g = fg for every f, g ∈ R,

where fg is the standard multiplication of real numbers, then R becomes a Euclidean Jordan algebra. Both the dimension and the rank are 1 in this case. The symmetric cone associated with R is then R+ = { f ∈ R : f ≥ 0 }.

Another important and more interesting example of E is the space of m × m real symmetric matrices S^{m×m}. S^{m×m} becomes a Euclidean Jordan algebra if we define, for every F, G ∈ S^{m×m},

F ◦ G = (FG + GF)/2 and F • G = trace(FG),

where FG is the standard multiplication of two matrices. The rank of S^{m×m} is m, while the dimension of S^{m×m} is m(m + 1)/2. It is easily verified that the symmetric cone S^{m×m}_+ associated with S^{m×m} is the cone of positive semidefinite matrices.
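For readers who want a computational check of these definitions (an illustrative sketch added to this transcript, not part of the original report), the following Python fragment verifies the Jordan axioms for S^{m×m} on random symmetric matrices, using the equivalent form x^2 ◦ (x ◦ y) = x ◦ (x^2 ◦ y) of (J2), and confirms Theorem 1 by assembling a Jordan frame from an eigendecomposition:

    import numpy as np

    rng = np.random.default_rng(0)
    m = 4
    def sym():
        A = rng.standard_normal((m, m))
        return (A + A.T) / 2

    def jordan(F, G):
        # Jordan product on S^{m x m}: F o G = (FG + GF)/2
        return (F @ G + G @ F) / 2

    X, Y = sym(), sym()
    X2 = jordan(X, X)
    print(np.allclose(jordan(X, Y), jordan(Y, X)))      # (J1) commutativity
    print(np.allclose(jordan(X2, jordan(X, Y)),
                      jordan(X, jordan(X2, Y))))        # (J2), equivalent form

    # Theorem 1: eigenpairs of X give eigenvalues and a Jordan frame c_i = v_i v_i^T
    lam, V = np.linalg.eigh(X)
    frame = [np.outer(V[:, i], V[:, i]) for i in range(m)]
    print(np.allclose(sum(l * c for l, c in zip(lam, frame)), X))  # True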

2.2 Polynomials over Euclidean Jordan Algebras

We denote by Z and Z+ the sets of integers and nonnegative integers, respectively. Let G ⊆ Z^n_+ be a finite set, and for each α ∈ G assume that a vector fα ∈ E is given. Then an E-valued polynomial f : R^n → E is defined by f(x) = ∑_{α∈G} fα x^α, where x^α = x_1^{α_1} · · · x_n^{α_n}. The set of E-valued polynomials is denoted by E[x]. For example, when E = R, R[x] is the set of real-valued polynomials.

The support of f is defined by supp f = { α ∈ G : fα ≠ 0 }. Then f can be expressed uniquely as f(x) = ∑_{α∈supp f} fα x^α. When # supp f = 1, that is, when supp f consists of just a single element, f is called an (E-valued) monomial. The degree of f is defined by deg(f) = max{ ∑_{i=1}^n α_i : α ∈ supp f }.

For r ∈ Z+, we denote by E[x]_r the finite dimensional linear subspace of E-valued polynomials whose degree is less than or equal to r: E[x]_r = { f ∈ E[x] : deg(f) ≤ r }. Specifically, we assume that E[x]_0 = E. Let G_r = { α ∈ Z^n_+ : ∑_{i=1}^n α_i ≤ r }, and let {g1, . . . , gN} be a basis of E. Then the set of monomials B_r = { gi x^α : i = 1, . . . , N, α ∈ G_r } forms a basis of E[x]_r; any f ∈ E[x]_r can be represented as a unique linear combination of monomials in B_r. Since #G_r = C(n+r, r), where C(·, ·) denotes the binomial coefficient, we know dim E[x]_r = N · C(n+r, r).

2.3 Sums of Squares of E-valued polynomials

For f, g ∈ E[x], we define a bilinear mapping ◦ by

(f ◦ g)(x) = ( ∑_{α∈supp f} fα x^α ) ◦ ( ∑_{β∈supp g} gβ x^β ) = ∑_{α∈supp f} ∑_{β∈supp g} (fα ◦ gβ) x^{α+β},

where ◦ on the right-hand side is the multiplication of the Jordan algebra E. We denote by e the identity function: e(x) = e for every x ∈ R^n. Then for any f ∈ E[x], e ◦ f = f ◦ e = f.

Let D be a linear subspace of E[x]. Using ◦, we define the set of sums of squares of E-valued polynomials in D by

D^2 = { ∑_{i=1}^q fi ◦ fi : q ≥ 1 an integer, fi ∈ D }.

It is easy to verify that D^2 is a convex cone. Notice that when D = E[x], we have the sums of squares of E-valued polynomials

E[x]^2 = { ∑_{i=1}^q fi ◦ fi : q ≥ 1 an integer, fi ∈ E[x] },

and that when D = R[x], we have the sums of squares of real-valued polynomials

R[x]^2 = { ∑_{i=1}^q fi ◦ fi : q ≥ 1 an integer, fi ∈ R[x] }.

Lemma 2 Let A be a basis of a finite dimensional linear subspace D of E[x]. Then

D^2 = { ∑_{f∈A} ∑_{g∈A} V_{fg} f ◦ g : V ∈ S^{#A×#A}_+ }.

Proof: Because V is a positive semidefinite matrix, we have a decomposition V = ∑_{j=1}^q rj rj^T where rj ∈ R^{#A} (j = 1, . . . , q), or V_{fg} = ∑_{j=1}^q r_{j,f} r_{j,g} for all f, g ∈ A. Therefore, we have

∑_{f∈A} ∑_{g∈A} V_{fg} f ◦ g = ∑_{f∈A} ∑_{g∈A} ∑_{j=1}^q r_{j,f} r_{j,g} f ◦ g = ∑_{j=1}^q ( ∑_{f∈A} r_{j,f} f ) ◦ ( ∑_{g∈A} r_{j,g} g ) = ∑_{j=1}^q ( ∑_{f∈A} r_{j,f} f )^2.

This implies that ∑_{f∈A} ∑_{g∈A} V_{fg} f ◦ g is expressed as a sum of squares of E-valued polynomials in D. Hence we have shown D^2 ⊇ { ∑_{f∈A} ∑_{g∈A} V_{fg} f ◦ g : V ∈ S^{#A×#A}_+ }.

The other inclusion relation is almost obvious from the same equality, because any hj ∈ D is expressed as hj(x) = ∑_{f∈A} r_{j,f} f for some rj ∈ R^{#A}. □
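As a concrete scalar-case illustration of Lemma 2 (a sketch added to this transcript; with E = R, ◦ is ordinary multiplication), the decomposition V = ∑_j rj rj^T used in the proof can be computed numerically from a positive semidefinite Gram matrix, recovering an explicit SOS decomposition:

    import numpy as np

    # Basis A = (1, x, x^2) of D = R[x]_2; V encodes p(x) = (1 + x^2)^2.
    V = np.array([[1.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0],
                  [1.0, 0.0, 1.0]])

    w, Q = np.linalg.eigh(V)            # V = Q diag(w) Q^T with w >= 0 (V is PSD)
    r_vectors = [np.sqrt(l) * Q[:, i] for i, l in enumerate(w) if l > 1e-12]
    for r in r_vectors:
        # each r contributes one square (r[0] + r[1] x + r[2] x^2)^2
        print(r)                        # here: a single vector, +-(1, 0, 1)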

Using ◦, we can define powers of f by f^0 = e and f^k = f ◦ f^{k−1} for every positive integer k. Because the power is well-defined on E (see (5)), we have f^k(x) = f(x)^k for every x ∈ R^n.

For every f, g ∈ E[x], a real-valued polynomial f • g is defined by

(f • g)(x) = ( ∑_{α∈supp f} fα x^α ) • ( ∑_{β∈supp g} gβ x^β ) = ∑_{α∈supp f} ∑_{β∈supp g} (fα • gβ) x^{α+β}.

3 A Generalization of Putinar’s Lemma

In this section, we prove an extension of Putinar's lemma (Lemma 4.1 of [14]) to polynomials over a Euclidean Jordan algebra. This lemma will be used to show our main results in Section 4.

We define two linear subspaces of E[x] induced from b ∈ E[x] by

E[x; b]_r = { ∑_{j=0}^r gj b^j ∈ E[x] : gj ∈ R[x]_r, j deg(b) + deg(gj) ≤ r, j = 0, 1, 2, . . . , r },

E[x; b] = { f ∈ E[x] : ∃r ≥ 1, f ∈ E[x; b]_r }.

By definition E[x; b]_r ⊆ E[x]_r, hence dim E[x; b]_r ≤ dim E[x]_r. Furthermore, we can rewrite E[x; b]_r as the linear space spanned by

{ x^α b^j : 0 ≤ j ≤ r, α ∈ Z^n_+, j deg(b) + ∑_{i=1}^n α_i ≤ r }.

Hence, if deg(b) ≥ 1 and k = ⌊r/deg(b)⌋, then

dim E[x; b]_r ≤ ∑_{j=0}^k C(n + r − j deg(b), r − j deg(b)) ≤ (k + 1) C(n + r, r).

Therefore

dim E[x; b]_r / dim E[x]_r ≤ min{ 1, (k + 1)/N }.   (6)

Lemma 3 (Generalized Putinar's Lemma) For b ∈ E[x], define the cone C = R[x]^2 + b • E[x; b]^2, and suppose that the set K = { x ∈ R^n : b(x) ∈ E+ } is compact. Then every polynomial that is positive on K belongs to the cone C if and only if there is a p ∈ C such that { x : p(x) ≥ 0 } is compact.

To prove this lemma, we introduce a polynomial penalty function ψr and show some basic properties of it in the following two lemmas.

Lemma 4 Let b, K, C be as defined in the assumption of Lemma 3. Also, assume that there exists a p ∈ C such that B = { x : p(x) ≥ 0 } is compact. We put

M = sup{ maximum absolute eigenvalue of b(x) : x ∈ B },

which is finite because B is compact, and

ψr = −b • (e − b/M)^{2r} ∈ −b • E[x; b]^2

for every nonnegative integer r.

1. For any ε > 0, there exists a nonnegative integer r̄ such that ψr(x) ≥ −ε for every x ∈ B and every r ≥ r̄.

2. For given x̄ ∈ B − K and κ > 0, there exist a positive number δ and a nonnegative integer r̄ such that ψr(x) ≥ κ for every x ∈ B(x̄, δ) ∩ B and every r ≥ r̄, where B(x̄, δ) = { x ∈ R^n : ‖x − x̄‖ < δ }.

Proof: Since p ∈ C, there exist g1, . . . , gq ∈ R[x] and f1, . . . , f_{q̄} ∈ E[x; b] such that p = ∑_{i=1}^q gi^2 + b • ∑_{j=1}^{q̄} fj ◦ fj. It follows that, for every x ∈ K,

p(x) = ∑_{i=1}^q gi(x)^2 + b(x) • ∑_{j=1}^{q̄} fj(x) ◦ fj(x) ≥ 0,

because fj(x) ◦ fj(x) ∈ E+ and b(x) ∈ E+ (recall the self-duality of the symmetric cone). Hence the compact set B contains K.

Next we derive an equality which will be used to show both statements 1 and 2. Let x ∈ B, and let b(x) = ∑_{i=1}^m λi ci be the spectral decomposition. Then we have

ψr(x) = −b(x) • (e − b(x)/M)^{2r}
      = −∑_{j=1}^m λj cj • ( e − ∑_{i=1}^m (λi/M) ci )^{2r}
      = −∑_{j=1}^m λj cj • ( ∑_{i=1}^m (1 − λi/M) ci )^{2r}      (use (3))
      = −∑_{j=1}^m λj cj • ∑_{i=1}^m (1 − λi/M)^{2r} ci          (use (5))
      = −∑_{i=1}^m λi (1 − λi/M)^{2r}                            (use (4))
      = −∑_{λi>0} λi (1 − λi/M)^{2r} − ∑_{λi<0} λi (1 − λi/M)^{2r}.   (7)

Because 1 − λi/M ≥ 0, we have

ψr(x) ≥ −∑_{λi>0} λi (1 − λi/M)^{2r} = −M ∑_{λi>0} (λi/M)(1 − λi/M)^{2r}.

Let λ(r) = max{ (1 − ξ)^{2r} ξ : ξ ∈ [0, 1] }. Then for an arbitrary positive number ε there exists a positive integer r̄ such that λ(r) ≤ ε/(mM) for r ≥ r̄. For such r, we have ψr(x) ≥ −ε. Since the choice of r̄ does not depend on the choice of x ∈ B, this proves the first statement.

Next we prove the second statement. Notice that (7) implies that

ψr(x) ≥ −∑_{λi>0} λi − ∑_{λi<0} λi (1 − λi/M)^{2r} ≥ −mM − ∑_{λi<0} λi (1 − λi/M)^{2r}   (8)

for every x ∈ B. Suppose that x̄ ∈ B − K and κ > 0 are given. Since x̄ ∈ B − K, the minimum eigenvalue of b(x̄) is negative. The continuity of eigenvalues implies that there exist δ > 0 and λ̄ < 0 such that if x ∈ B(x̄, δ), then the minimum eigenvalue of b(x) is not greater than λ̄. Then (8) can be further evaluated as ψr(x) ≥ −mM − λ̄ (1 − λ̄/M)^{2r} for every x ∈ B(x̄, δ) ∩ B. Because 1 − λ̄/M > 1 and λ̄ < 0, there exists a positive integer r̄ such that ψr(x) ≥ κ for every r ≥ r̄ and x ∈ B(x̄, δ) ∩ B. □


Lemma 5 Let b, K, C be as defined in the assumption of Lemma 3, and let p, B, M, ψr be as defined in Lemma 4. If a ∈ R[x] is positive on K, then there exists a positive integer r̄ such that a + ψr is positive on B for every r ≥ r̄.

Proof: Let x^r be a minimizer of a + ψr on the compact set B. We show the lemma by proving that there exists a positive integer r̄ such that a(x^r) + ψr(x^r) > 0 for every r ≥ r̄.

Suppose to the contrary that for any r̄ > 0 there exists r ≥ r̄ such that a(x^r) + ψr(x^r) ≤ 0. Because r̄ is arbitrary, the set L = { r : a(x^r) + ψr(x^r) ≤ 0 } is infinite. Since { x^r : r ∈ L } ⊆ B, we can take an accumulation point x* ∈ B of { x^r : r ∈ L } and a subsequence { x^r : r ∈ L′ } (L′ ⊆ L) which converges to x*.

In the following, we prove that there exist r̄ > 0 and δ > 0 such that a(x) + ψr(x) > 0 for every x ∈ B(x*, δ) ∩ B and r ≥ r̄. Because x^r ∈ B(x*, δ) ∩ B for sufficiently large r ∈ L′, this contradicts the definition of L, which establishes the lemma.

We first consider the case where x* ∈ K. Since K is compact, we can take a positive number ε such that a(x) ≥ ε for every x ∈ K. Then there exists a positive number δ such that a(x) ≥ ε/2 for every x ∈ B(x*, δ). On the other hand, statement 1 of Lemma 4 implies that there exists r̄ > 0 such that ψr(x) ≥ −ε/4 for every r ≥ r̄ and x ∈ B. Therefore, if r ≥ r̄ and x ∈ B(x*, δ) ∩ B, then a(x) + ψr(x) ≥ ε/4 > 0.

Next we consider the case where x* ∈ B − K. Let κ* = inf{ −a(x) : x ∈ B } + 1, which is finite because B is compact. Then statement 2 of Lemma 4 implies that there exist a positive number δ and a positive integer r̄ such that ψr(x) ≥ κ* for every x ∈ B(x*, δ) ∩ B and r ≥ r̄. For such x and r, we have a(x) + ψr(x) ≥ 1 > 0. □

Proof of Generalized Putinar's Lemma:

“only if” part: Since K is compact, we can take a positive number R such that R − x^T x > 0 for every x ∈ K. Define p(x) = R − x^T x for every x ∈ R^n. Then p ∈ R[x] is a polynomial positive on K, hence it belongs to C by the assumption of the “only if” part. By construction, the set { x ∈ R^n : R − x^T x ≥ 0 } is compact in R^n.

“if” part: Let p ∈ C be such that B = { x : p(x) ≥ 0 } is compact. Due to Lemma 5, we can choose ψ ∈ −b • E[x; b]^2 such that a(x) + ψ(x) > 0 for every x ∈ B. Applying the original Putinar's lemma (Lemma 4.1 of [14]), we obtain a + ψ ∈ R[x]^2 + p R[x]^2. Since ψ ∈ −b • E[x; b]^2, it readily follows that a ∈ R[x]^2 + b • E[x; b]^2 once we show p R[x]^2 ⊆ R[x]^2 + b • E[x; b]^2. To show this, take p̃ ∈ p R[x]^2. Expressing p = ∑_{i=1}^q gi^2 + b • ∑_{j=1}^{q̂} fj ◦ fj with some q > 0, q̂ > 0, g1, . . . , gq ∈ R[x], f1, . . . , f_{q̂} ∈ E[x; b], and p̃ = p ∑_{k=1}^{q̄} ḡk^2 with some q̄ > 0 and ḡ1, . . . , ḡ_{q̄} ∈ R[x], we have

p̃ = ( ∑_{i=1}^q gi^2 + b • ∑_{j=1}^{q̂} fj ◦ fj ) ∑_{k=1}^{q̄} ḡk^2 = ∑_{i=1}^q ∑_{k=1}^{q̄} (gi ḡk)^2 + b • ∑_{j=1}^{q̂} ∑_{k=1}^{q̄} (ḡk fj) ◦ (ḡk fj).

Because ḡk fj ∈ E[x; b], we conclude p̃ ∈ R[x]^2 + b • E[x; b]^2. □

4 Main Results

In this section, we derive various SOS and SDP relaxations of 〈POP〉 whose relations are illustrated in Figure 1. Here “equiv.”, “relax.” and “rest.” are abbreviations of “equivalent”, “relaxation” and “restriction”, respectively.

[Figure 1: SOS and SDP relaxations of 〈POP〉 and their relations. The diagram links 〈POP〉 and its dual 〈D〉 to the SOS problems 〈SOS/D〉r and 〈SOS′/D〉r, the polynomial SDPs 〈SDP/POP〉r and 〈SDP′/POP〉r, their linearizations 〈LR/SDP/POP〉r and 〈LR/SDP′/POP〉r, and the equivalent SDPs 〈SDP/SOS/D〉r and 〈SDP/SOS′/D〉r.]

There are two ways to derive these relaxations: a dual approach by [12] and a primal approach by [9]. In the dual approach, we first construct a dual 〈D〉 of 〈POP〉, which then induces SOS relaxations 〈SOS/D〉r and 〈SOS′/D〉r of 〈POP〉; 〈SDP/SOS/D〉r and 〈SDP/SOS′/D〉r are semidefinite programs equivalent to 〈SOS/D〉r and 〈SOS′/D〉r, respectively. On the other hand, 〈LR/SDP/POP〉r and 〈LR/SDP′/POP〉r are semidefinite programming relaxations of 〈POP〉 derived directly by the primal approach: we first construct a nonlinear semidefinite program 〈SDP/POP〉r, which is equivalent to 〈POP〉, and a subproblem 〈SDP′/POP〉r of 〈SDP/POP〉r, and these are then linearized into 〈LR/SDP/POP〉r and 〈LR/SDP′/POP〉r. The subscript r represents the degree of the polynomials and E-valued polynomials used in each optimization problem; as we take a larger r, the relaxation becomes tighter, although the problem size grows. The difference between SOS and SOS′ lies in whether we exploit the structure and sparsity of the E-valued polynomial b involved in the constraint of 〈POP〉; the latter utilizes them more effectively to reduce its size. The same comment applies to the difference between SDP and SDP′.

4.1 A Dual Approach

We consider the optimization problem

〈D〉   maximize ζ subject to a(x) − ζ ≥ 0 for all x ∈ K.

Because K is compact, the optimal value ζ* of 〈D〉 is finite and equals that of 〈POP〉. 〈D〉 can be regarded as the problem of finding the supremum of ζ for which a − ζ is a nonnegative polynomial on K.

Let ra = ⌈deg(a)/2⌉ and rb = ⌈deg(b)/2⌉. For r ≥ max(ra, rb), we consider an SOS optimization problem corresponding to 〈D〉:

〈SOS/D〉r   maximize ζ subject to a − b • w − ζ = w̄, w ∈ E[x]_{r−rb}^2 and w̄ ∈ R[x]_r^2.

Here the nonnegative integers r − rb and r, which appear as the subscripts of E[x]_{r−rb} and R[x]_r, respectively, have been chosen so that the degrees of the polynomials on both sides of the equality constraint a − b • w − ζ = w̄ are balanced. We denote the optimal value of 〈SOS/D〉r by ζr. When 〈SOS/D〉r has no feasible solution, ζr = −∞. If (w, w̄, ζ) is feasible for 〈SOS/D〉r, then a(x) − ζ = w̄(x) + b(x) • w(x) ≥ 0 for every x ∈ K. Therefore, we have ζ ≤ ζ* and hence ζr ≤ ζ*.
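For the simplest case E = R with n = 1, 〈SOS/D〉r is a small semidefinite program that can be assembled by hand. The sketch below (added to this transcript; it assumes the cvxpy modeling package and an SDP-capable solver such as SCS, none of which appear in the original report) solves 〈SOS/D〉1 for a(x) = x and b(x) = 1 − x^2, so K = [−1, 1]: here w reduces to a nonnegative scalar s, and w̄ to an SOS of degree 2 with a 2 × 2 Gram matrix G in the basis (1, x).

    import cvxpy as cp

    G = cp.Variable((2, 2), PSD=True)   # Gram matrix of w-bar in basis (1, x)
    s = cp.Variable(nonneg=True)        # w in E[x]^2_0: a nonnegative constant
    zeta = cp.Variable()

    # coefficient matching in x - zeta - s*(1 - x^2) = w-bar(x)
    constraints = [G[0, 0] == -zeta - s,   # constant term
                   2 * G[0, 1] == 1,       # coefficient of x
                   G[1, 1] == s]           # coefficient of x^2
    cp.Problem(cp.Maximize(zeta), constraints).solve()
    print(zeta.value)   # approximately -1.0, the true minimum of x on [-1, 1]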

Restricting w ∈ E[x]_{r−rb}^2 to E[x; b]_{r−rb}^2 in 〈SOS/D〉r, we obtain another SOS problem:

〈SOS′/D〉r   maximize ζ subject to a − b • w − ζ = w̄, w ∈ E[x; b]_{r−rb}^2 and w̄ ∈ R[x]_r^2.

Since E[x; b]_{r−rb} ⊆ E[x]_{r−rb}, the optimal value ζ′r of 〈SOS′/D〉r is obviously less than or equal to ζr and ζ*. The following theorem establishes a relationship between 〈D〉 and 〈SOS′/D〉r.

Theorem 6 If there exists a p ∈ R[x]^2 + b • E[x; b]^2 such that { x : p(x) ≥ 0 } is compact, then ζ′r → ζ* as r → ∞.

Proof: Let ε > 0. By Lemma 3, there exist polynomials w ∈ E[x; b]^2 and w̄ ∈ R[x]^2 such that a − (ζ* − ε) = b • w + w̄. Take rε such that w ∈ E[x; b]_{rε−rb}^2 and w̄ ∈ R[x]_{rε}^2. Then w ∈ E[x; b]_{r−rb}^2 and w̄ ∈ R[x]_r^2 for every r ≥ rε. Hence ζ* − ε ≤ ζ′r ≤ ζ* for every r ≥ rε. □

Corollary 7 If there exists a p ∈ R[x]^2 + b • E[x]^2 such that { x : p(x) ≥ 0 } is compact, then ζr → ζ* as r → ∞.

Recall that if deg(b) ≥ 1 and k = ⌊r/deg(b)⌋, then the inequality (6) holds between the dimensions of E[x]_{r−rb} and E[x; b]_{r−rb}. Hence the size of 〈SOS′/D〉r is smaller than the size of 〈SOS/D〉r whenever k + 1 < N. 〈SOS′/D〉r has a further advantage in numerical computation. Let A be a basis of the linear subspace E[x; b]_{r−rb} of E[x]_{r−rb}. Using Lemma 2, we rewrite the constraints of 〈SOS′/D〉r as

a − b • ( ∑_{f∈A} ∑_{g∈A} V_{fg} f ◦ g ) − ζ ∈ R[x]_r^2 and V ∈ S^{#A×#A}_+.   (9)

When the polynomials a and b are sparse, we expect the left-hand polynomial in the inclusion relation above to inherit the sparsity. In such a case, we can apply the method proposed in the paper [8] for exploiting the sparsity of sums of squares of polynomials to efficiently solve the maximization of ζ subject to (9), which is equivalent to 〈SOS′/D〉r. Details are omitted here.

4.2 A Primal Approach

Let r ≥ max(ra, rb) be fixed throughout this section. We construct from 〈POP〉 a #B_{r−rb} × #B_{r−rb} symmetric matrix Ur of polynomials whose (f, g)th element (f, g ∈ B_{r−rb}) is defined by

(Ur)_{fg} = (f ◦ g) • b ∈ R[x]_{2r}.

Also, denote by ur the vector of all monomials x^α (α ∈ Gr), and let Mr = ur ur^T. Using these matrices, we consider a polynomial SDP:

〈SDP/POP〉r   minimize a(x) subject to Ur(x) ∈ S^{#B_{r−rb}×#B_{r−rb}}_+ and Mr(x) ∈ S^{#Gr×#Gr}_+.

Theorem 8 〈SDP/POP〉r is equivalent to 〈POP〉.

Proof: Since both problems 〈POP〉 and 〈SDP/POP〉r share the polynomial objective function a, it suffices to show that x is a feasible solution of the former problem if and only if it is a feasible solution of the latter, i.e.,

b(x) ∈ E+ if and only if Ur(x) ∈ S^{#B_{r−rb}×#B_{r−rb}}_+.

(Notice that Mr(x) ∈ S^{#Gr×#Gr}_+ holds for any x ∈ R^n; this constraint can be ignored here.)

“only if” part: Suppose that b(x) ∈ E+. Then, for every c ∈ R^{#B_{r−rb}},

∑_{f∈B_{r−rb}} ∑_{g∈B_{r−rb}} cf cg (Ur)_{fg}(x) = ∑_{f∈B_{r−rb}} ∑_{g∈B_{r−rb}} cf cg (f ◦ g)(x) • b(x) = ( ∑_{f∈B_{r−rb}} cf f(x) )^2 • b(x) ≥ 0.

Here the last inequality holds since ( ∑_{f∈B_{r−rb}} cf f(x) )^2 ∈ E+ and b(x) ∈ E+. Hence Ur(x) is positive semidefinite.

“if” part: Suppose that Ur(x) ∈ S^{#B_{r−rb}×#B_{r−rb}}_+. We will show that

(d ◦ d) • b(x) ≥ 0 for every d ∈ E,

which implies b(x) ∈ E+ by the self-duality of E+. Take an arbitrary d ∈ E. Because d is a (constant) E-valued polynomial, we have an expression d = ∑_{f∈B_{r−rb}} df f where df ∈ R for f ∈ B_{r−rb}. Hence

(d ◦ d) • b(x) = ( ∑_{f∈B_{r−rb}} df f ◦ ∑_{g∈B_{r−rb}} dg g ) • b(x) = ∑_{f∈B_{r−rb}} ∑_{g∈B_{r−rb}} df dg (Ur)_{fg}(x) ≥ 0,

because Ur(x) is positive semidefinite by assumption. □
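Theorem 8 can be observed on a tiny instance (a sketch added to this transcript; it assumes the sympy package): take E = S^{2×2}, n = 1, r = rb = 1, and b(x) = [[1, x], [x, 1]], which lies in E+ exactly when |x| ≤ 1. Since r − rb = 0, B_0 is simply a basis of E, and U1(x) is the 3 × 3 polynomial matrix below; its determinant 2 − 2x^2 is nonnegative precisely on that interval.

    import sympy as sp

    x = sp.symbols('x')
    b = sp.Matrix([[1, x], [x, 1]])            # b(x) in E+ iff |x| <= 1
    basis = [sp.Matrix([[1, 0], [0, 0]]),      # a basis of E = S^{2x2}
             sp.Matrix([[0, 0], [0, 1]]),
             sp.Matrix([[0, 1], [1, 0]])]

    jordan = lambda F, G: (F * G + G * F) / 2  # F o G
    inner = lambda F, G: (F * G).trace()       # F . G = trace(FG)

    # (U_r)_{fg} = (f o g) . b
    U = sp.Matrix(3, 3, lambda i, j: sp.expand(inner(jordan(basis[i], basis[j]), b)))
    print(U)                    # Matrix([[1, 0, x], [0, 1, x], [x, x, 2]])
    print(sp.expand(U.det()))   # 2 - 2*x**2, nonnegative iff |x| <= 1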

Since Ur ∈ S^{#B_{r−rb}×#B_{r−rb}}[x]_{2r}, we can write

Ur(x) = ∑_{α∈supp(Ur)−{0}} Dα x^α − D0

using some Dα ∈ S^{#B_{r−rb}×#B_{r−rb}} for α ∈ supp(Ur). Similarly, Mr(x) can be expressed as

Mr(x) = ∑_{α∈supp(Mr)−{0}} D̄α x^α − D̄0

using some D̄α ∈ S^{#Gr×#Gr} for α ∈ supp(Mr). Substituting yα ∈ R for each monomial x^α appearing in 〈SDP/POP〉r, we obtain a linear relaxation of 〈SDP/POP〉r:

〈LR/SDP/POP〉r   minimize ∑_{α∈supp(a)} aα yα
                subject to ∑_{α∈supp(Ur)−{0}} Dα yα − D0 ∈ S^{#B_{r−rb}×#B_{r−rb}}_+ and
                           ∑_{α∈supp(Mr)−{0}} D̄α yα − D̄0 ∈ S^{#Gr×#Gr}_+.

If we put Fr = supp(a) ∪ supp(Ur) ∪ supp(Mr) − {0}, then 〈LR/SDP/POP〉r can be rewritten as

minimize ∑_{α∈Fr} aα yα
subject to ∑_{α∈Fr} Dα yα − D0 ∈ S^{#B_{r−rb}×#B_{r−rb}}_+ and ∑_{α∈Fr} D̄α yα − D̄0 ∈ S^{#Gr×#Gr}_+,

where aα = 0 if α ∉ supp(a), Dα = O if α ∉ supp(Ur) − {0}, and D̄α = O if α ∉ supp(Mr) − {0}.
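Continuing the toy instance a(x) = x, b(x) = 1 − x^2 with E = R and r = 1 (a sketch added to this transcript, again assuming cvxpy), the linearization replaces x by y1 and x^2 by y2; U1 becomes the scalar constraint 1 − y2 ≥ 0 and M1 becomes the 2 × 2 moment matrix [[1, y1], [y1, y2]]. The optimal value −1 matches the bound from 〈SOS/D〉1 above, as the duality in Figure 1 suggests.

    import cvxpy as cp

    M = cp.Variable((2, 2), symmetric=True)   # linearized moment matrix M_1
    constraints = [M[0, 0] == 1,              # y_0 = 1 (constant monomial)
                   1 - M[1, 1] >= 0,          # linearized U_1: 1 - y2 >= 0
                   M >> 0]                    # moment matrix must stay PSD
    cp.Problem(cp.Minimize(M[0, 1]), constraints).solve()
    print(M[0, 1].value)   # approximately -1.0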

The dual of 〈LR/SDP/POP〉r is

〈SDP/SOS/D〉r   maximize D0 • X + D̄0 • X̄
               subject to Dα • X + D̄α • X̄ = aα (α ∈ Fr),
                          X ∈ S^{#B_{r−rb}×#B_{r−rb}}_+ and X̄ ∈ S^{#Gr×#Gr}_+.

We gave this problem the name 〈SDP/SOS/D〉r because it is nothing but an SDP version of 〈SOS/D〉r.

Theorem 9 〈SOS/D〉r is equivalent to 〈SDP/SOS/D〉r.

Lemma 10 Assume that X ∈ S^{#B_{r−rb}×#B_{r−rb}}_+ and X̄ ∈ S^{#Gr×#Gr}_+. Then (X, X̄) is feasible for 〈SDP/SOS/D〉r with the objective value ζ = D0 • X + D̄0 • X̄ if and only if

a(x) − X • Ur(x) − X̄ • Mr(x) = ζ for every x ∈ R^n.   (10)

Proof: If (X, X̄) is feasible, the constraints of 〈SDP/SOS/D〉r imply

∑_{α∈Fr} (Dα • X) x^α + ∑_{α∈Fr} (D̄α • X̄) x^α = ∑_{α∈Fr} aα x^α

for every x ∈ R^n. Adding this to ζ = D0 • X + D̄0 • X̄, we obtain (10).

Conversely, if (10) holds, then it is an identity between polynomials, so the coefficients on both sides must coincide; this yields the equality constraints of 〈SDP/SOS/D〉r together with ζ = D0 • X + D̄0 • X̄. □

Proof of Theorem 9: Suppose that (X, X̄) is a feasible solution of 〈SDP/SOS/D〉r with objective value ζ = D0 • X + D̄0 • X̄. By Lemma 2, we know that

w = ∑_{f∈B_{r−rb}} ∑_{g∈B_{r−rb}} X_{fg} f ◦ g ∈ E[x]_{r−rb}^2,
w′ = X̄ • Mr = ∑_{α∈Gr} ∑_{β∈Gr} X̄_{αβ} x^{α+β} ∈ R[x]_r^2.

We then see that

a − b • w − ζ = a − b • ( ∑_{f∈B_{r−rb}} ∑_{g∈B_{r−rb}} X_{fg} f ◦ g ) − ζ = a − X • Ur − ζ = X̄ • Mr (by Lemma 10) = w′.

Thus (w, w′, ζ) is a feasible solution of 〈SOS/D〉r with the objective value ζ.

Conversely, suppose that (w, w′, ζ) ∈ E[x]_{r−rb}^2 × R[x]_r^2 × R is a feasible solution of 〈SOS/D〉r with objective value ζ. By Lemma 2, there exist positive semidefinite matrices X ∈ S^{#B_{r−rb}×#B_{r−rb}}_+ and X̄ ∈ S^{#Gr×#Gr}_+ such that

w = ∑_{f∈B_{r−rb}} ∑_{g∈B_{r−rb}} X_{fg} f ◦ g and w′ = ∑_{α∈Gr} ∑_{β∈Gr} X̄_{αβ} x^{α+β},

respectively. Then b • w = X • Ur and w′ = X̄ • Mr, hence a(x) − X • Ur(x) − X̄ • Mr(x) = ζ for every x ∈ R^n. Now Lemma 10 implies that (X, X̄) is feasible for 〈SDP/SOS/D〉r with the objective value ζ = D0 • X + D̄0 • X̄. □

Now we briefly mention how to derive the SDP relaxation 〈LR/SDP′/POP〉r of Figure 1, which corresponds to a dual of 〈SDP/SOS′/D〉r. We first choose a basis A of the linear subspace E[x; b]_{r−rb} of E[x]_r. We then construct from 〈POP〉 a #A × #A symmetric matrix U′r of polynomials whose (f, g)th element (f, g ∈ A) is defined by

(U′r)_{fg} = (f ◦ g) • b ∈ R[x]_{2r}.

We consider the following polynomial SDP:

〈SDP′/POP〉r   minimize a(x) subject to U′r(x) ∈ S^{#A×#A}_+ and Mr(x) ∈ S^{#Gr×#Gr}_+.

We note that 〈SDP′/POP〉r is not necessarily equivalent to 〈POP〉 but is a relaxation of 〈POP〉 in general; we can prove that any feasible solution x of 〈POP〉 is feasible in 〈SDP′/POP〉r, but the converse is not true. Applying the linearization to 〈SDP′/POP〉r in the same way as discussed above, we obtain a semidefinite relaxation 〈LR/SDP′/POP〉r of 〈POP〉. We can further construct its dual 〈SDP/SOS′/D〉r, which is equivalent to 〈SOS′/D〉r. The derivation of 〈SDP/SOS′/D〉r and the proof of its equivalence to 〈SOS′/D〉r are similar to those for 〈SDP/SOS/D〉r, and the details are omitted.

5 Connections to Another Class of Sums of Squares in S^{m×m}[x]

In [6], Kojima introduced a class of “sums of squares” of polynomial matrices by

Σ^{m×m}_r = { A ∈ S^{m×m}[x]_{2r} : ∃q > 0, A = ∑_{i=1}^q Gi Gi^T, Gi ∈ R^{m×m}[x]_r },

where R^{m×m}[x]_r is the set of m × m polynomial matrices whose degree is less than or equal to r. Using Σ^{m×m}_r, he showed relationships between SOS and SDP problems similar to ours.

Because S^{m×m}[x]_r^2 is obtained by restricting each Gi in the definition of Σ^{m×m}_r to be symmetric, S^{m×m}[x]_r^2 ⊆ Σ^{m×m}_r. Interestingly, Σ^{m×m}_r ⊄ S^{m×m}[x]^2 in general. To see this, consider the case:

m = 2, n = 1,

G0 = [[0, 0], [1, 1]],   G1 = [[1, 0], [0, 0]],   G(x) = G0 + G1 x,

F(x) = G(x) G(x)^T = ( [[0, 0], [1, 1]] + [[1, 0], [0, 0]] x ) ( [[0, 1], [0, 1]] + [[1, 0], [0, 0]] x ) = [[0, 0], [0, 2]] + [[0, 1], [1, 0]] x + [[1, 0], [0, 0]] x^2.
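The computation of F can be reproduced symbolically (a check added to this transcript, assuming sympy):

    import sympy as sp

    x = sp.symbols('x')
    G0 = sp.Matrix([[0, 0], [1, 1]])
    G1 = sp.Matrix([[1, 0], [0, 0]])
    G = G0 + G1 * x
    print((G * G.T).expand())   # Matrix([[x**2, x], [x, 2]])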

By construction, F ∈ Σ^{2×2}_1. We show F ∉ S^{2×2}[x]^2. Assume to the contrary that F ∈ S^{2×2}[x]^2. Then there exist Aij ∈ S^{2×2} (i = 1, 2, . . . , p, j = 0, 1, 2, . . . , r) such that

F(x) = ∑_{i=1}^p ( ∑_{j=0}^r Aij x^j )^2.

Assume that r ≥ 2. It follows that

[[0, 0], [0, 2]] + [[0, 1], [1, 0]] x + [[1, 0], [0, 0]] x^2 = ∑_{i=1}^p ( ∑_{j=0}^{r−1} Aij x^j )^2 + ∑_{i=1}^p Air ( ∑_{j=0}^{r−1} Aij x^j ) x^r + ∑_{i=1}^p ( ∑_{j=0}^{r−1} Aij x^j ) Air x^r + ∑_{i=1}^p Air^2 x^{2r}

for every x ∈ R, which implies the identities Air = O (i = 1, 2, . . . , p). By induction, we can prove that Aij = O (i = 1, 2, . . . , p, j = 2, 3, . . . , r). Thus the above identity turns out to be

[[0, 0], [0, 2]] + [[0, 1], [1, 0]] x + [[1, 0], [0, 0]] x^2 = ∑_{i=1}^p ( Ai0^2 + Ai1 Ai0 x + Ai0 Ai1 x + Ai1^2 x^2 )

for every x ∈ R. Comparing the constant, linear and quadratic terms in x on both sides, we have the identities

[[0, 0], [0, 2]] = ∑_{i=1}^p Ai0^2,   [[1, 0], [0, 0]] = ∑_{i=1}^p Ai1^2,   [[0, 1], [1, 0]] = ∑_{i=1}^p (Ai1 Ai0 + Ai0 Ai1).

The first identity implies that each Ai0 ∈ S^{2×2} is represented as Ai0 = [[0, 0], [0, ai]] for some ai ∈ R, and the second identity implies Ai1 = [[bi, 0], [0, 0]] for some bi ∈ R. Hence Ai1 Ai0 = Ai0 Ai1 = O. This contradicts the last identity above.

On the other hand, we can express Σ^{m×m}_r in terms of S^{(m+1)×(m+1)}[x]_r^2. Let Π : S^{(m+1)×(m+1)}[x] → S^{m×m}[x] be the map such that Π(A) is the m × m leading principal submatrix of A ∈ S^{(m+1)×(m+1)}[x]. Then

Σ^{m×m}_r = Π( S^{(m+1)×(m+1)}[x]_r^2 ).   (11)

Σm×mr =

{

A ∈ Sm×m[x]2r : ∃q > 0, A =

q∑

i=1

fifTi , fi ∈ R

m×1[x]r

}

.

Since both Σm×mr and Π

(

S(m+1)×(m+1)[x]2r)

are convex cones in Sm×m[x]2r and the map

Π : S(m+1)×(m+1) → Sm×m is linear, it suffices to show that

ffT ∈ Π(

S(m+1)×(m+1)[x]2r)

if f ∈ Rm×1[x]r, and (12)

Π(

G2)

∈ Σm×mr if G ∈ S(m+1)×(m+1)[x]r. (13)

Given f ∈ R^{m×1}[x]_r, let

G = [[O, f], [f^T, 0]] ∈ S^{(m+1)×(m+1)}[x]_r.

Then Π( G^2 ) = f f^T, thus we have shown (12). Next suppose that G ∈ S^{(m+1)×(m+1)}[x]_r is partitioned into its first m rows and its last row as

G = [[Ḡ], [g^T]], Ḡ ∈ R^{m×(m+1)}[x]_r, g^T ∈ R^{1×(m+1)}[x]_r.

Then Π( G^2 ) = Π( G G^T ) = Ḡ Ḡ^T ∈ Σ^{m×m}_r. Hence we have shown (13).

Assume that we are given a polynomial SDP

〈SDP〉   minimize a(x) subject to B(x) ∈ S^{m×m}_+,

where a ∈ R[x] and B ∈ S^{m×m}[x]. There are two ways to construct SOS and SDP relaxations for this problem: Kojima's method [6] and our method with E = S^{m×m}. Because Σ^{m×m}_r ⊇ S^{m×m}[x]_r^2 and Σ^{m×m}_r ≠ S^{m×m}[x]_r^2, the SOS relaxation in [6] gives a tighter bound than ours. However, (11) shows that the same SOS relaxation is obtained by applying our method to the equivalent optimization problem

minimize a(x) subject to [[B(x), 0], [0^T, 0]] ∈ S^{(m+1)×(m+1)}_+.

It is known that any E-valued polynomial constraint b(x) ∈ E+ can be reduced to a polynomial semidefinite constraint B(x) ∈ S^{N×N}_+, where N denotes the dimension of E. Let {g1, g2, . . . , gN} be a basis of E. Define B ∈ S^{N×N}[x] by

Bij(x) = (gi ◦ gj) • b(x) for every x ∈ R^n (i, j = 1, 2, . . . , N).

Then we can verify that b(x) ∈ E+ if and only if B(x) ∈ S^{N×N}_+. In this way 〈POP〉 can be cast into 〈SDP〉. The conversion from 〈POP〉 to 〈SDP〉, however, is not attractive in practice because it would destroy the structure and the sparsity of the original 〈POP〉.

References

[1] L. Faybusovich, “Jordan Algebras, Symmetric Cones and Interior Point Methods”, Technical report, Department of Mathematics, Notre Dame University, 1995.

[2] L. Faybusovich, “Linear Systems in Jordan Algebras and Primal-Dual Interior-Point Algorithms”, Journal of Computational and Applied Mathematics, 86 (1997) 149–175.

[3] J. Faraut and A. Koranyi, Analysis on Symmetric Cones, Oxford University Press, New York, NY, 1994.

[4] D. Henrion and J. B. Lasserre, “GloptiPoly: Global optimization over polynomials with Matlab and SeDuMi”, Laboratoire d'Analyse et d'Architecture des Systemes, Centre National de la Recherche Scientifique, 7 Avenue du Colonel Roche, 31077 Toulouse Cedex 4, France, February 2002.


[5] S. Kim, M. Kojima and H. Waki, “Generalized Lagrangian duals and sums of squares relaxation of sparse polynomial optimization problems”, Research Report B-395, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo 152-8552, September 2003.

[6] M. Kojima, “Sums of squares relaxations of polynomial semidefinite programs”, Research Report B-397, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo 152-8552, November 2003.

[7] M. Kojima, S. Kim and H. Waki, “A general framework for convex relaxation of polynomial optimization problems over cones”, Journal of the Operations Research Society of Japan, 46 (2) (2003) 125–144.

[8] M. Kojima, S. Kim and H. Waki, “Sparsity in sums of squares of polynomials”, Research Report B-391, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo 152-8552, June 2003. Revised August 2003.

[9] J. B. Lasserre, “Global optimization with polynomials and the problem of moments”, SIAM Journal on Optimization, 11 (2001) 796–817.

[10] M. Muramatsu, “On a Commutative Class of Search Directions for Linear Programming over Symmetric Cones”, Journal of Optimization Theory and Applications, 112 (3) (2002) 595–625.

[11] Y. E. Nesterov and M. J. Todd, “Primal-dual interior-point methods for self-scaled cones”, SIAM Journal on Optimization, 8 (1998) 324–364.

[12] P. A. Parrilo, “Semidefinite programming relaxations for semialgebraic problems”, Mathematical Programming, 96 (2003) 293–320.

[13] S. Prajna, A. Papachristodoulou and P. A. Parrilo, “SOSTOOLS: Sum of Squares Optimization Toolbox for MATLAB – User's Guide”, Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125 USA, 2002.

[14] M. Putinar, “Positive polynomials on compact semi-algebraic sets”, Indiana University Mathematics Journal, 42 (1993) 969–984.
