Hopf-Lax formulas for semicontinuous data


Transcript of Hopf-Lax formulas for semicontinuous data

Hopf-Lax Formulas for Semicontinuous Data

O. Alvarez, E. N. Barron, H. Ishii

July 2, 1998. Revised July 14, 1999.

Abstract. The equations $u_t + H(Du) = 0$ and $u_t + H(u, Du) = 0$, with initial condition $u(0,x) = g(x)$, have an explicit solution when the hamiltonian is convex in the gradient variable (Lax formula) or the initial data is convex, or quasiconvex (Hopf formula). This paper extends these formulas to initial functions $g$ which are only lower semicontinuous (lsc), and possibly infinite. It is proved that the Lax formulas give a lsc viscosity solution, and the Hopf formulas result in the minimal supersolution. A level set approach is used to give the most general results.

O. Alvarez: UPRESA 60-85, Université de Rouen, 76821 Mont Saint Aignan Cedex, France, [email protected]. Supported in part by the TMR Network "Viscosity Solutions and Applications".
E. N. Barron: Department of Mathematical Sciences, Loyola University Chicago, Chicago, IL 60626, USA, [email protected]. Supported in part by NSF grant DMS-9532030 and a grant from Loyola University.
H. Ishii: Department of Mathematics, Tokyo Metropolitan University, Tokyo, Japan, [email protected]. Supported in part by Grant-in-Aid for Scientific Research, No. 09440067, Ministry of Education, Science, Sports and Culture.

Contents

1 Introduction
2 Hopf formula for lsc data
3 The Hopf formula for $u_t + H(u, Du) = 0$ and $g$ lsc and quasiconvex
4 The lsc Hopf formula for $u_t + H(u, Du) = 0$, $H$ nonconvex
5 The Lax formula for lsc data
6 Level Sets
7 Appendix: Quasiconvexity


1 Introduction

This paper is concerned with finding an explicit solution of the Hamilton-Jacobi equation

$$u_t + H(u, D_x u) = 0, \quad (t,x) \in (0,\infty) \times \mathbb{R}^n; \qquad u(0,x) = g(x), \quad x \in \mathbb{R}^n. \tag{1.1}$$

Under the assumptions that $H = H(p)$ is independent of $u$ and convex, and $g$ is at least continuous, the Lax formula gives the explicit solution

$$u(t,x) = \inf_{y \in \mathbb{R}^n} \left[ g(y) + t\, H^*\!\left(\frac{x-y}{t}\right) \right],$$

where $*$ denotes the Legendre-Fenchel conjugate. This formula comes from consideration of an associated optimal control problem. A more difficult problem results when one wants to move the convexity off of $H$ and onto the initial data $g$, since then the associated control problem is a differential game. Nevertheless, assuming that $H$ is at least continuous and $g$ is convex and finite results in the Hopf formula

$$u(t,x) = \left[ g^*(p) + t H(p) \right]^*(x).$$

These results are proved in several places, but in the context of viscosity solutions under assumptions leading to a continuous solution $u$; refer to Bardi and Evans [2]. We call these formulas the classical Hopf and Lax formulas.

It was the purpose of the series of papers [9]-[12] to extend these formulas to equation (1.1) with $u$ dependence and to allow more general initial data, namely the class of quasiconvex functions. This class of functions is a vast generalization of convex functions, since a quasiconvex function is defined by the property that it has convex level sets. Obviously, every convex function is quasiconvex, but the reverse is false. Indeed, $x^3$ is quasiconvex, and so is the Heaviside function $\chi_{(0,\infty)}(x)$. Under the assumption that $g$ is at least continuous and $H(\gamma, p)$ is continuous, nondecreasing in $\gamma$, and convex and positively homogeneous of degree one in $p$, [9] obtained the Lax-type formula

$$u(t,x) = \min_{y \in \mathbb{R}^n} \left[ g(y) \vee H^\#\!\left(\frac{x-y}{t}\right) \right],$$

which uses the (second) quasiconvex conjugate $\#$ of $H(\gamma, p)$, given by

$$H^\#(z) = \inf\Big\{\gamma \;\Big|\; \sup_{p \in \mathbb{R}^n}[\,p \cdot z - H(\gamma, p)\,] \le 0\Big\}.$$

This is derived from an associated control problem in $L^\infty$ (see [9]).
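As a concrete illustration of this Lax-type formula and the conjugate $H^\#$ (the example is ours, not taken from the paper), take $H(\gamma,p) = \gamma|p|$ and $g(x) = |x|$, which satisfy the hypotheses above:

```latex
% Illustration (ours, not from the paper): H(\gamma,p)=\gamma|p|, g(x)=|x|.
% Step 1: the second quasiconvex conjugate of H. For \gamma \ge |z| (and
% \gamma \ge 0) one has \sup_p [p\cdot z - \gamma|p|] = 0, and the sup is
% +\infty otherwise, so
\[
  H^{\#}(z)=\inf\Big\{\gamma \,\Big|\, \sup_{p}[\,p\cdot z-\gamma|p|\,]\le 0\Big\}=|z|.
\]
% Step 2: the Lax-type formula becomes
\[
  u(t,x)=\min_{y}\Big(|y|\vee\frac{|x-y|}{t}\Big)=\frac{|x|}{1+t},
\]
% the minimum being attained at the point y of the segment [0,x] where the
% two terms balance: |y| = (|x|-|y|)/t.
% Check: u_t = -|x|/(1+t)^2 and u\,|D_x u| = (|x|/(1+t))\cdot(1/(1+t)), so
% u_t + u|D_x u| = 0, i.e. u solves u_t + H(u,D_x u)=0 with u(0,x)=|x|.
```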
Moving the convexity off of $H$ and onto $g$, in the weakened form of quasiconvexity, resulted [10] in the Hopf-type formula

$$u(t,x) = \left[ g^\#(\gamma, p) + t H(\gamma, p) \right]^\#(x),$$

where the first quasiconvex conjugate of the function $g$ is

$$g^\#(\gamma, p) = \sup\{p \cdot x \mid x \in E(\gamma, g)\}, \qquad E(\gamma, g) = \{x \in \mathbb{R}^n \mid g(x) \le \gamma\}.$$
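For the same kind of data, the Hopf-type formula can be evaluated by hand; the following computation is ours (not from the paper) and uses the description of the outer $\#$-conjugate as an infimum over levels, made precise in section 3.

```latex
% Illustration (ours): g(x)=|x|, H(\gamma,p)=\gamma|p|. The level sets of g
% are balls, E(\gamma,g)=\bar B(0,\gamma) for \gamma\ge 0 (empty for
% \gamma<0), so the first quasiconvex conjugate is the support function
\[
  g^{\#}(\gamma,p)=\sup_{|x|\le\gamma}p\cdot x=\gamma|p|\qquad(\gamma\ge 0).
\]
% The Hopf-type formula then gives
\[
  u(t,x)=\inf\Big\{\gamma\ge 0 \,\Big|\, \sup_{p}[\,p\cdot x-\gamma|p|-t\gamma|p|\,]\le 0\Big\}
        =\inf\{\gamma\ge 0 \mid |x|\le\gamma(1+t)\}
        =\frac{|x|}{1+t},
\]
% which one checks directly solves u_t + u|D_x u| = 0 with u(0,x)=|x|.
```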

This formula is again associated with a differential game control problem, but now in $L^\infty$ (see [10] and [12]). The assumptions used to derive this formula involved $H(\gamma, p)$ continuous, nondecreasing in $\gamma$, positively homogeneous of degree one in $p$, and $g$ at least continuous and quasiconvex. One of the main points of these new formulas is that with $u$ dependence of the hamiltonian, one must have homogeneity of $H$ in $p$. We frequently refer to the formulas for $u_t + H(u, Du) = 0$ where quasiconvexity is involved as the quasiconvex formulas.

The purpose of the present paper is to extend all of these considerations to initial data which is only semicontinuous and possibly infinite. The assumptions on the hamiltonian remain unchanged. This extension results in very weak and natural assumptions on $g$. It encompasses the important case where the initial data is the indicator function of a closed set $A$, defined by

$$\delta(x \mid A) = \begin{cases} 0, & \text{if } x \in A, \\ +\infty, & \text{if } x \notin A. \end{cases}$$

When we are given initial data $g$, we may recover the quasiconvex Hopf and Lax formulas by choosing $A$ to be a level set of $g$. In the case of the classical Hopf formula, we note that continuity of the initial data is not natural, since it does not guarantee continuity and finiteness of the solution.
For instance, when $g(x) = \frac{|x|^2}{2}$ and $H(p) = -\frac{|p|^2}{2}$, the Hopf formula gives

$$u(t,x) = \frac{|x|^2}{2(1-t)} \ \text{ for } t < 1, \qquad u(1,x) = \delta(x \mid \{0\}), \qquad u(t,x) = +\infty \ \text{ for } t > 1.$$

Furthermore, in optimal control problems in which one desires the trajectories to reach a given set $A$ at the terminal time, it is natural to have data such as $\delta(x \mid A)$.

In the Lax formula case, when we assume a hamiltonian convex in $p$, for either $H(p)$ or $H(\gamma, p)$, we may use the theory of lsc viscosity solutions introduced in [5] and [6], which extended the classical Crandall-Lions definition to lsc, possibly infinite functions. Precisely, a lsc function $u$ with values in $\mathbb{R} \cup \{+\infty\}$ is a lsc solution of an equation $u_t + F(t, x, u, Du) = 0$ if

$$p_t + F(t, x, u, p_x) = 0 \qquad \forall\, (p_t, p_x) \in D^- u(t,x), \ \text{ when } u(t,x) < +\infty.$$

There is an equivalent definition using smooth test functions (see below), and we refer to the comprehensive book by Bardi and Capuzzo-Dolcetta [3] for the precise results and simplified proofs. Using this notion of solution we can prove that the Lax formulas result in a lsc solution of the problem with lsc initial data.

In the more difficult case of the Hopf assumptions, when no convexity of the hamiltonian is assumed, we do not have a good characterization of what it means to be a lsc solution, in the sense that uniqueness is still an open problem. What we can do is characterize the Hopf formulas as yielding the minimal supersolution of the equation. There is no good notion of subsolution for lsc functions, since one cannot in general touch a lsc function from above.

The contents of this paper are as follows. In section 2 we start by extending the Hopf formula for $u_t + H(Du) = 0$ and $u(0,x) = g(x)$ to lsc convex data $g$. It is established that

the Hopf formula yields the lsc solution of the problem when we assume as well that $H(p)$ is convex. If we do not assume that $H$ is convex, we prove that the Hopf formula gives the minimal supersolution, but that on the interior of the domain of $u$ Hopf gives us a (continuous) Crandall-Lions solution.

In sections 3 and 4, we consider the quasiconvex Hopf formula for $u_t + H(u, Du) = 0$ with initial data which is quasiconvex and lsc. In section 3, we add the assumption that $H(\gamma, p)$ is convex in $p$, and we prove that the Hopf formula again gives us the lsc solution of the problem. To this end, we introduce a modified infimal convolution of a function $f$, defined in the simplest case by

$$f_\varepsilon(x) = \inf_{y \in \mathbb{R}^n} \left( f(y) \vee \frac{|x-y|}{\varepsilon} \right).$$

This convolution enjoys many of the same properties as the classical convolution, but it is compatible with the max operator $\vee$, just as the classical convolution is compatible with the $+$ operator.

Section 4 drops the assumption of convexity of $p \mapsto H(\gamma, p)$. It is proved that $u = (g^\# + tH)^\#$ is still characterized as the minimal lsc supersolution. The proof we give uses a new quasiconvex conjugate $\varrho$ defined by

$$g^\varrho(\gamma, p) = \sup_{\{x \,\mid\, g(x) \le \gamma\}} [\,p \cdot x - g(x)\,]$$

and

$$g^{\varrho\varrho}(x) = \sup_{p, \gamma}\, [\,p \cdot x - g^\varrho(\gamma, p)\,] \wedge \gamma.$$

These are very reminiscent of the Legendre-Fenchel conjugates and are very similar to the conjugates in [19], [21], etc., where these authors develop a complete theory of quasiconvex duality. We use only a small part of that theory and, to make this paper self-contained, we prove the results we need (see also [7]). In particular, we show that in calculating the Hopf formula we may use either conjugate, $\varrho$ or $\#$.
That is, $(g^\# + tH)^\# = (g^\varrho + tH)^\varrho$.

Section 5 turns to the Lax formula for lsc data, for both $H = H(p)$ and $H = H(\gamma, p)$. Since the Lax formula requires a hamiltonian convex in $p$, we prove that the classical Lax formula and the quasiconvex Lax formula yield the lsc solution of the equation.

In section 6 we introduce a new approach to developing the quasiconvex Hopf and Lax formulas that weakens the assumptions of the preceding sections. This approach is based on consideration of the level sets of geometric PDEs. First-order geometric-type PDEs like $u_t + F(t, x, u, D_x u) = 0$ are those which are homogeneous of degree one in the gradient. Second-order geometric PDEs have an additional condition on the second derivatives. The main idea, introduced in [12], is that when we freeze the dependence on the $u$ variable, say by looking at $w^\gamma_t + F(t, x, \gamma, D_x w^\gamma) = 0$, then $u$ and $w^\gamma$ have the same $\gamma$-level sets. We extend this observation in the following way. We use the classical Hopf and Lax formulas to get the solution of $w^\gamma_t + H(\gamma, D_x w^\gamma) = 0$ and then we prove that the function $u = \inf\{\gamma \mid w^\gamma \le \gamma\}$ solves $u_t + H(u, D_x u) = 0$. This results in the sharpest quasiconvex formulas. We conclude this section with some examples.

In section 7, an appendix, we gather together some useful, although peripheral, results on quasiconvex functions. We also prove there that we may replace infimum by minimum in the quasiconvex Hopf formulas.

Acknowledgement. The first two authors would like to thank the Department of Mathematics of Tokyo Metropolitan University for its very generous hospitality.

2 Hopf formula for lsc data

We begin by extending the Hopf formula for lsc convex data. Recall that the Legendre-Fenchel conjugate of a function $g$ is given by

$$g^*(p) = \sup_{x \in \mathbb{R}^n} [\,p \cdot x - g(x)\,].$$

If $g$ is a lsc convex function with values in $\mathbb{R} \cup \{+\infty\}$, then $g^{**}(x) = g(x)$. Note that, in addition to proper functions, we allow the function $g \equiv +\infty$.

Assume that $g : \mathbb{R}^n \to \mathbb{R}$ is convex and continuous, that $H : \mathbb{R}^n \to \mathbb{R}$ is continuous, and that

$$\lim_{|p| \to \infty} \frac{g^*(p) + t H(p)}{|p|} = +\infty \quad \text{uniformly for $t$ bounded}. \tag{2.1}$$

Then the classical Hopf formula [2], [18],

$$u(t,x) = \left(g^*(p) + t H(p)\right)^*(x) = \sup_{p \in \mathbb{R}^n} [\,p \cdot x - g^*(p) - t H(p)\,], \tag{2.2}$$

gives a convex function with domain all of $[0,\infty) \times \mathbb{R}^n$ that is a continuous viscosity solution of the Hamilton-Jacobi equation

$$u_t + H(D_x u) = 0, \quad (t,x) \in (0,\infty) \times \mathbb{R}^n; \qquad u(0,x) = g(x), \quad x \in \mathbb{R}^n. \tag{2.3}$$

If $g$ is uniformly continuous on $\mathbb{R}^n$, then $g$ is Lipschitz and so $D_x u \in L^\infty((0,T) \times \mathbb{R}^n)$ for any $T > 0$. In this case, $u$ is the unique viscosity solution of (2.3).

We will extend these considerations to lsc, possibly infinite initial data. First we note that the Hopf formula gives a lsc convex function in $(t,x)$. The function has values in $\mathbb{R} \cup \{+\infty\}$ because $g^* \not\equiv +\infty$ and $H$ is finite. When $g$ is proper, $u$ is proper, but, of course, it may happen that $u(t,x) = +\infty$ for all $(t,x) \in (0,\infty) \times \mathbb{R}^n$.

Under the assumption that the hamiltonian is convex we have the following result.

Theorem 2.1 Let $g : \mathbb{R}^n \to (-\infty, \infty]$ be a lower semicontinuous, proper, and convex function. Assume $H(p)$ is convex and continuous. Then $u : [0,\infty) \times \mathbb{R}^n \to (-\infty, \infty]$, defined by $u(t,x) = (g^* + tH)^*(x)$, is the unique lower semicontinuous viscosity solution of (2.3) that is bounded from below by a function of linear growth.

Throughout the paper, we say that a function $u$, defined on $[0,+\infty) \times \mathbb{R}^n$, is bounded from below by a function of linear growth if, for every $T > 0$, there is a constant $C > 0$ for which we have

$$u(t,x) \ge -C(1 + |x|) \qquad \forall\, (t,x) \in [0,T] \times \mathbb{R}^n.$$

Remark 2.2 Recall ([5]) that a lsc function $u$ with values in $(-\infty, \infty]$ is a lsc viscosity solution of (2.3) if for any smooth $\varphi \in C^1((0,\infty) \times \mathbb{R}^n)$ for which $u - \varphi$ has a (zero) minimum at $(t_0, x_0)$ (with $u(t_0, x_0) < \infty$), we have $\varphi_t(t_0, x_0) + H(D_x \varphi(t_0, x_0)) = 0$. Also, recall that if $u$ is continuous (and finite), and $H$ is convex, then it is a (Crandall-Lions) viscosity solution if and only if it is a lsc viscosity solution. Without convexity of the hamiltonian this is no longer true (see below).

Remark 2.3 A lsc function achieves the initial data, and we write $u(0,x) = g(x)$, if

$$g(x) = \inf\Big\{\liminf_{n \to \infty} u(t_n, x_n) \;\Big|\; (t_n, x_n) \to (0+0, x)\Big\} = \liminf_{(s,y) \to (0+0,\,x)} u(s,y),$$

as a weak lower Barles-Perthame limit (see [4]); this is the same as epi-convergence ([1]).

Proof. Let $\varepsilon > 0$. Consider the Moreau-Yosida inf-convolution of $g$,

$$g_\varepsilon(x) = \inf_{y \in \mathbb{R}^n} \left( g(y) + \frac{|x-y|^2}{2\varepsilon} \right).$$

Then, since $g$ is lsc, proper and convex, it is known (see [1, Thm. 3.24], [13]) that

(i) $g_\varepsilon$ is convex and continuously differentiable; in fact, $0 \le D^2 g_\varepsilon \le C/\varepsilon$ and $g_\varepsilon \in W^{2,\infty}_{\mathrm{loc}}(\mathbb{R}^n)$;

(ii) $g_\varepsilon \uparrow g$ as $\varepsilon \downarrow 0$ pointwise;

(iii) $(g_\varepsilon)^*(p) = g^*(p) + \frac{\varepsilon}{2}|p|^2$, and $(g_\varepsilon)^*(p)$ is strictly convex.

Now we set

$$u_\varepsilon(t,x) = \left((g_\varepsilon)^*(p) + t H(p)\right)^*(x).$$

Since $g_\varepsilon$ satisfies all of the classical hypotheses for the Hopf formula (including (2.1)), we know that $u_\varepsilon$ is the unique continuous viscosity solution, bounded from below by a function of linear growth, of

$$(u_\varepsilon)_t + H(D_x u_\varepsilon) = 0, \quad (t,x) \in (0,\infty) \times \mathbb{R}^n; \qquad u_\varepsilon(0,x) = g_\varepsilon(x), \quad x \in \mathbb{R}^n. \tag{2.4}$$

We need the following lemma.

Lemma 2.4 $u(t,x) = \sup_{\varepsilon > 0} u_\varepsilon(t,x)$.

Proof. We have from property (iii)

$$u_\varepsilon(t,x) = \sup_{p \in \mathbb{R}^n} [\,p \cdot x - (g_\varepsilon)^*(p) - t H(p)\,] = \sup_{p \in \mathbb{R}^n} \Big[\,p \cdot x - g^*(p) - \frac{\varepsilon}{2}|p|^2 - t H(p)\,\Big].$$

Then

$$\sup_{\varepsilon > 0} u_\varepsilon(t,x) = \sup_{\varepsilon > 0} \sup_{p \in \mathbb{R}^n} \Big[\,p \cdot x - g^*(p) - \frac{\varepsilon}{2}|p|^2 - t H(p)\,\Big]$$
$$= \sup_{p \in \mathbb{R}^n} \sup_{\varepsilon > 0} \Big[\,p \cdot x - g^*(p) - \frac{\varepsilon}{2}|p|^2 - t H(p)\,\Big]$$
$$= \sup_{p \in \mathbb{R}^n} [\,p \cdot x - g^*(p) - t H(p)\,] = u(t,x).$$

Now, since $\{u_\varepsilon\}_{\varepsilon > 0}$ is nondecreasing as $\varepsilon \downarrow 0$, $u_\varepsilon \uparrow u$. Hence,

$$u(t,x) = \sup_{\varepsilon > 0} u_\varepsilon(t,x) = \liminf_{(s,y,\varepsilon) \to (t,x,0)} u_\varepsilon(s,y),$$

as a weak Barles-Perthame limit. Since $u_\varepsilon$ is a continuous (Crandall-Lions) viscosity solution of (2.4), it is also a lsc viscosity solution of (2.4) (see [5]). But since the weak limit of lsc solutions is a lsc solution, we conclude that $u$ is a lsc solution of (2.3) which is bounded from below by a function of linear growth.

Furthermore, by property (ii) above,

$$g(x) = \sup_{\varepsilon > 0} g_\varepsilon(x) = \sup_{\varepsilon > 0} u_\varepsilon(0,x) = u(0,x).$$

Hence, $\liminf_{(s,y) \to (0+0,x)} u(s,y) \ge g(x)$. Also, if we choose $\xi \in \mathbb{R}^n$ and $\eta \in \mathbb{R}$ so that $H(p) \ge p \cdot \xi - \eta$, then

$$u(s, x + s\xi) \le \sup_{p \in \mathbb{R}^n} [\,p \cdot (x + s\xi) - g^*(p) - s\, p \cdot \xi + s\eta\,] = g(x) + s\eta.$$

Hence, $\liminf_{(s,y) \to (0+0,x)} u(s,y) \le g(x)$, and we conclude that $u(0,x) = g(x)$ in the sense of lsc solutions.

The uniqueness of a lsc solution to (2.3) is a simple adaptation of the results of [5] and [6]. The difference lies in the fact that we allow $g$ and the solution to be $+\infty$ and to be bounded from below by a function of linear growth (instead of finite and bounded from below). The technique for relaxing the lower bound to linear growth is standard in the theory of classical viscosity solutions (see [3], for instance), so we omit the details. To treat the case where the solution may be $+\infty$, we first note that we may assume, without loss of generality, that $H(0) = 0$ (since $u' = u + H(0)t$ solves $u'_t + H'(Du') = 0$ with $H' = H - H(0)$). Given a constant $\gamma$, we recall the trivial identity

$$D^-(u \wedge \gamma)(t,x) \subset \{0\} \cup D^- u(t,x).$$

Hence, if $u$ and $v$ are two solutions of (2.3), $u \wedge \gamma$ and $v \wedge \gamma$ are two lsc solutions of

$$u_t + H(D_x u) = 0, \quad (t,x) \in (0,\infty) \times \mathbb{R}^n; \qquad u(0,x) = g(x) \wedge \gamma, \quad x \in \mathbb{R}^n. \tag{2.5}$$

We can now use the classical uniqueness result of [5] to deduce that $u \wedge \gamma = v \wedge \gamma$. Since $\gamma$ is arbitrary, this gives $u = v$.

This theorem required the convexity of the hamiltonian in order to apply the theory of lsc viscosity solutions in [5] (see also [4] and [3]). Whenever $H$ is not convex but is bounded from below by a function of linear growth (so that (2.1) still holds for $g_\varepsilon$), the preceding argument adapts and shows that $u$ is a supersolution of (2.3).

The next theorem generalizes this observation to the case where the hamiltonian is not convex and not bounded from below. The argument is direct. Of course, the notion of lsc solution is not applicable here, so we can only say that the Hopf formula gives the smallest supersolution.

Theorem 2.5 Let $g : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ be a lsc convex function and $H : \mathbb{R}^n \to \mathbb{R}$ be a continuous function. The function $u : [0,\infty) \times \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ defined by the Hopf formula

$$u(t,x) = \left(g^*(p) + t H(p)\right)^*(x)$$

is the minimal viscosity supersolution of the problem (2.3) among all supersolutions that are bounded from below by a function with linear growth.

In particular, it is the minimal convex supersolution. Furthermore, it is a Crandall-Lions viscosity solution in $\mathrm{int}(\mathrm{dom}(u))$.

Proof. For $p \in \mathrm{dom}(g^*)$ arbitrary, the function $f(t,x) := p \cdot x - t H(p) - g^*(p)$ is a $C^1$ solution of the Hamilton-Jacobi equation with the affine function $f(0,x) = p \cdot x - g^*(p)$ as initial data. Since $u$ is the sup of such functions, its usc envelope, call it $\tilde u$, is a subsolution in $U := \{\tilde u < +\infty\}$ (see [3], for instance). But since $u$ is convex, $U = \mathrm{int}(\mathrm{dom}(u))$ and $\tilde u = u$ on $U$ (cf. Rockafellar [22, Thm. 10.1]). Hence $u$ is a subsolution in $\mathrm{int}(\mathrm{dom}(u))$.

Let $v$ be a supersolution that is bounded from below by a function with linear growth. We show that $v \ge u$. Since for all $p \in \mathrm{dom}(g^*)$ one has $g(x) \ge p \cdot x - g^*(p)$, the function $p \cdot x - t H(p) - g^*(p)$ is a subsolution of the equation (2.3) with $g$ as initial data. By the comparison principle for functions with linear growth (see [4], [3]), one gets that $v(t,x) \ge p \cdot x - t H(p) - g^*(p)$ for $(t,x) \in [0,+\infty) \times \mathbb{R}^n$. By taking the sup over $p$, we conclude that $v \ge u$. Hence $u$ is smaller than any supersolution.

It remains to prove that $u$ is itself a supersolution. Let $(t,x) \in (0,\infty) \times \mathbb{R}^n$ be such that $u(t,x) < +\infty$ and take $(p_t, p_x) \in D^- u(t,x)$. Since $u$ is convex, $(p_t, p_x)$ belongs to the subdifferential of $u$ in the sense of convex analysis (see [3]). Therefore, by definition of the convex subdifferential,

$$u(s,y) \ge u(t,x) + p_t(s - t) + p_x \cdot (y - x), \qquad \forall\, (s,y) \in [0,\infty) \times \mathbb{R}^n.$$

Set $s = 0$ and rearrange terms to obtain

$$t(p_t + H(p_x)) \ge u(t,x) + t H(p_x) + p_x \cdot (y - x) - g(y).$$

Take the sup in $y$ to get

$$t(p_t + H(p_x)) \ge u(t,x) - p_x \cdot x + t H(p_x) + g^*(p_x).$$

By the definition of $u$, the right-hand side is $\ge 0$. Hence, since $t > 0$, we conclude that $p_t + H(p_x) \ge 0$ and so $u$ is a supersolution.

Remark 2.6 When $H$ is not convex, $u$ is not necessarily a lsc solution of the equation, because it may happen that $p_t + H(p_x) > 0$ for some $(p_t, p_x) \in D^- u(t,x)$. For example, the unique continuous viscosity solution to $u_t - |Du| = 0$ in $(0,\infty) \times \mathbb{R}^n$ with $u(0,x) = |x|$ in $\mathbb{R}^n$ is $u(t,x) = t + |x|$ (it is given by the classical Hopf formula). But, for every $t > 0$, we have that $(p_t, p_x) = (1, 0) \in D^- u(t,0)$. This shows that for a nonconvex hamiltonian the equivalence of continuous viscosity solutions and lsc viscosity solutions fails.

Remark 2.7 Suppose that $g$ is continuous. Then the effective domain of $u$ is a convex set containing the hyperplane $\{0\} \times \mathbb{R}^n$. This implies trivially the existence of a blow-up time $T \in [0,+\infty]$ so that

$$[0,T) \times \mathbb{R}^n \subset \mathrm{dom}\, u \subset [0,T] \times \mathbb{R}^n.$$

The preceding theorem then says that $u$ is a continuous viscosity solution in $[0,T) \times \mathbb{R}^n$ and, by definition of $T$, that $u \equiv +\infty$ on $(T,+\infty) \times \mathbb{R}^n$.

We show in the next paragraph that the blow-up time $T$ is given by the formula

$$-\frac{1}{T} = \liminf_{|p| \to \infty} \frac{H^-(p)}{g^*(p)}$$

(with $a^- = a \wedge 0$). In particular, when $g$ is Lipschitz continuous or when $H$ has at most linear growth from below ($H(p) \ge -C(1 + |p|)$), one recovers the classical result that $T = +\infty$.

To obtain the formula for the blow-up time, we use the classical observation that a (not necessarily convex) function $\phi$ defined on $\mathbb{R}^n$ is coercive (i.e. $\liminf_{|p| \to \infty} \phi(p)/|p| = +\infty$) if and only if $\mathrm{dom}\, \phi^* = \mathbb{R}^n$. Therefore, for every $t < T$, we must have

$$\liminf_{|p| \to \infty} \frac{g^*(p)}{|p|}\left(1 + t\, \frac{H(p)}{g^*(p)}\right) = \liminf_{|p| \to \infty} \frac{g^*(p) + t H(p)}{|p|} = +\infty.$$

As $g^*$ is coercive, we deduce that $\liminf_{|p| \to \infty} \big(1 + t\, \frac{H(p)}{g^*(p)}\big) \ge 0$, or $\frac{1}{t} + \liminf_{|p| \to \infty} \frac{H(p)}{g^*(p)} \ge 0$. This gives the inequality $-\frac{1}{T} \le \liminf_{|p| \to \infty} \frac{H(p)}{g^*(p)}$. Conversely, repeating the argument for $t > T$ (provided $T < +\infty$), we obtain $\frac{1}{t} + \liminf_{|p| \to \infty} \frac{H(p)}{g^*(p)} \le 0$, and this gives the reverse inequality.
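The blow-up time formula can be checked on the example from the introduction; the following verification is ours.

```latex
% Verification (ours) of the blow-up time formula on the example
% g(x) = |x|^2/2, H(p) = -|p|^2/2 from the introduction.
\[
  g^*(p)=\frac{|p|^2}{2}, \qquad
  \liminf_{|p|\to\infty}\frac{H^-(p)}{g^*(p)}
  =\liminf_{|p|\to\infty}\frac{-|p|^2/2}{|p|^2/2}=-1,
\]
% so -1/T = -1, i.e. T = 1. Directly from the Hopf formula, for t < 1,
\[
  u(t,x)=\sup_{p}\Big[\,p\cdot x-(1-t)\frac{|p|^2}{2}\,\Big]
        =\frac{|x|^2}{2(1-t)},
\]
% (the sup is attained at p = x/(1-t)), which indeed blows up as t -> 1.
```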

We end this section with a general theorem on the uniqueness of convex solutions. Consider the Cauchy problem

$$u_t + H(D_x u) = 0 \ \text{ in } (0,\infty) \times \mathbb{R}^n; \qquad u(0,x) = g(x) \quad (x \in \mathbb{R}^n), \tag{2.6}$$

where $H \in C(\mathbb{R}^n)$ and $g \in C(\mathbb{R}^n)$ is convex.

Lemma 2.8 If $u \in C([0,T) \times \mathbb{R}^n)$, with $T > 0$, is a viscosity subsolution of (2.6) and convex, then for each $(t,x) \in (0,T) \times \mathbb{R}^n$ there is a $(q,p) \in D^- u(t,x)$ such that $q + H(p) \le 0$.

Proof. Since $u$ is almost everywhere (or just densely super-) differentiable and locally Lipschitz continuous in $(0,T) \times \mathbb{R}^n$, we see that there is a sequence $(t_k, x_k) \in (0,T) \times \mathbb{R}^n$ such that $u$ is superdifferentiable (hence differentiable) at $(t_k, x_k)$, such that if $(q_k, p_k) = Du(t_k, x_k)$ then $q_k + H(p_k) = 0$, and such that $(q_k, p_k) \to (q,p)$ as $k \to \infty$ for some $(q,p) \in \mathbb{R} \times \mathbb{R}^n$. We have $q + H(p) \le 0$. Since $(q_k, p_k) \in D^- u(t_k, x_k)$, by the convexity of $u$ we see that $(q,p) \in D^- u(t,x)$.

This lemma leads us to the comparison theorem we are after.

Theorem 2.9 Let $u \in C([0,T] \times \mathbb{R}^n)$ be convex and a subsolution of (2.6), and let $v \in C([0,T] \times \mathbb{R}^n)$ be a supersolution of (2.6) bounded from below by a function with linear growth. Then we have $u \le v$.

Proof. Fix $(\bar t, \bar x) \in (0,T) \times \mathbb{R}^n$ and choose $(q,p) \in D^- u(\bar t, \bar x)$ so that $q + H(p) \le 0$. Define $w(t,x) = p \cdot (x - \bar x) + q(t - \bar t) + u(\bar t, \bar x)$. Then $w$ is a subsolution of (2.6), $w \le u$, and $w(\bar t, \bar x) = u(\bar t, \bar x)$. By the standard comparison theorem, we have $w \le v$ in $[0,T] \times \mathbb{R}^n$. Hence, $u \le v$ in $[0,T] \times \mathbb{R}^n$.

Let us finally remove the assumption that $g$ is a real-valued function. Instead we assume that $g : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is convex, lower semicontinuous, and proper.

Definition 2.10 A function $u : [0,T) \times \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$, with $T > 0$, is called a convex viscosity subsolution of (2.6) if it is convex in $[0,T) \times \mathbb{R}^n$, $u(0,x) \le g(x)$ for all $x \in \mathbb{R}^n$, and for each $(\bar t, \bar x) \in (0,T) \times \mathbb{R}^n$ there is a sequence $(q_k, p_k, r_k) \in \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}$ such that

$$q_k + H(p_k) \le 0,$$
$$r_k \to u(\bar t, \bar x) \ \text{ as } k \to \infty,$$
$$p_k \cdot (x - \bar x) + q_k(t - \bar t) + r_k \le u(t,x) \ \text{ for } (t,x) \in [0,T) \times \mathbb{R}^n.$$

Of course, if $u(\bar t, \bar x) = \infty$, the condition $r_k \to u(\bar t, \bar x)$ means that $r_k \to \infty$.

Theorem 2.11 Let $u : [0,T) \times \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ be a convex (viscosity) subsolution of (2.6) and $v : [0,T) \times \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ be a supersolution of (2.6) bounded from below by a function with linear growth. Then we have $u \le v$.

Proof. Arguments similar to those of Theorem 2.9 work in this case as well.
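Before leaving this section, here is a simple illustration (ours, not from the paper) of the Hopf formula with infinite lsc data; Theorem 2.1 applies since the hamiltonian below is convex, and $\delta(\cdot \mid A)$ is the indicator function from the introduction.

```latex
% Illustration (ours): lsc infinite data handled by the extended Hopf formula.
% Take g(x) = \delta(x | {0}) and H(p) = |p|. Since g^*(p) = 0,
\[
  u(t,x)=\sup_{p}\,[\,p\cdot x-t|p|\,]
        =\begin{cases} 0, & |x|\le t,\\ +\infty, & |x|>t,\end{cases}
  \qquad\text{i.e. } u(t,\cdot)=\delta(\cdot\mid\bar B(0,t)).
\]
% The Lax formula gives the same function: with H^*(q)=\delta(q\mid\bar B(0,1)),
\[
  u(t,x)=\inf_{y}\Big[\,g(y)+t\,H^{*}\Big(\frac{x-y}{t}\Big)\Big]
        =\inf_{|x-y|\le t} g(y).
\]
% The solution is lsc, nonnegative (hence bounded from below by a function of
% linear growth), and infinite off an expanding ball.
```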

3 The Hopf formula for $u_t + H(u, Du) = 0$ and $g$ lsc and quasiconvex

Assume that $g : \mathbb{R}^n \to \mathbb{R}$ is quasiconvex and continuous, and that $H : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$, $H = H(\gamma, p)$, is nondecreasing in $\gamma$, positively homogeneous of degree one in $p$, continuous in $(\gamma, p)$, and Lipschitz continuous in $p$ for every $\gamma$. Then in [10] it is proved that a continuous viscosity solution of the Hamilton-Jacobi equation

$$u_t + H(u, D_x u) = 0, \quad (t,x) \in (0,\infty) \times \mathbb{R}^n; \qquad u(0,x) = g(x), \quad x \in \mathbb{R}^n, \tag{3.1}$$

is given by the quasiconvex Hopf formula

$$u(t,x) = \left(g^\#(\gamma, p) + t H(\gamma, p)\right)^\#(x). \tag{3.2}$$

Hence, the quasiconvex conjugates replace the Fenchel conjugates for equation (3.1).

In this section, we will extend the formula to lsc initial data with convex hamiltonians. First we recall some terms and definitions. For any function $g$ with values in $\mathbb{R} \cup \{\infty\}$, we recall that the $\gamma$-level set of $g$ is

$$E(\gamma, g) = \{x \in \mathbb{R}^n : g(x) \le \gamma\}, \qquad \gamma \in \mathbb{R}.$$

The function $g$ is quasiconvex if all of its level sets are convex. The first quasiconvex conjugate of a function $g$ depends on two variables, $\gamma \in \mathbb{R}$ and $p \in \mathbb{R}^n$, and is given by

$$g^\#(\gamma, p) = \sup_{x \in E(\gamma, g)} p \cdot x.$$

It is nondecreasing in $\gamma$ and positively homogeneous of degree one in $p$. The second quasiconvex conjugate of $g$ is defined by

$$g^{\#\#}(x) = \inf\Big\{\gamma \in \mathbb{R} : \sup_{p \in \mathbb{R}^n}[\,p \cdot x - g^\#(\gamma, p)\,] \le 0\Big\}.$$

Analogously, the second quasiconvex conjugate in formula (3.2) is defined by

$$\left(g^\#(\gamma, p) + t H(\gamma, p)\right)^\#(x) = \inf\Big\{\gamma : \sup_{p \in \mathbb{R}^n}[\,p \cdot x - g^\#(\gamma, p) - t H(\gamma, p)\,] \le 0\Big\}. \tag{3.3}$$

In the appendix it is shown that the inf and sup may be interchanged in the definition of the second conjugate. Moreover, it is shown that $g^{\#\#} = g$ if and only if $g$ is lsc and quasiconvex. Refer also to the appendix for some results and further definitions on quasiconvex duality.

The first theorem extends a result of [10] to lsc quasiconvex initial data. The argument adapts the proof of Theorem 2.1 by inf-convolution to the quasiconvex case.
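Before the theorem, here is a one-dimensional computation of ours (not from the paper) that makes these conjugates concrete and verifies $g^{\#\#} = g$ for a quasiconvex function that is not convex.

```latex
% Example (ours), n=1: the lsc Heaviside function g(x)=0 for x\le 0, g(x)=1
% for x>0 is quasiconvex (its level sets are E(\gamma,g)=\emptyset for
% \gamma<0, (-\infty,0] for 0\le\gamma<1, and \mathbb{R} for \gamma\ge 1)
% but not convex. For 0\le\gamma<1,
\[
  g^{\#}(\gamma,p)=\sup_{x\le 0}px=
  \begin{cases} 0, & p\ge 0,\\ +\infty, & p<0,\end{cases}
  \qquad\text{so}\quad
  \sup_{p}[\,px-g^{\#}(\gamma,p)\,]=\sup_{p\ge 0}px=
  \begin{cases} 0, & x\le 0,\\ +\infty, & x>0.\end{cases}
\]
% For \gamma\ge 1, g^{\#}(\gamma,p)=\sup_{x\in\mathbb{R}}px, which is 0 at
% p=0 and +\infty otherwise, so \sup_p[px-g^{\#}(\gamma,p)]=0 for every x.
% Hence g^{\#\#}(x)=0 for x\le 0 and g^{\#\#}(x)=1 for x>0, i.e.
% g^{\#\#}=g, consistent with the characterization in the appendix.
```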

Theorem 3.1 Let $g : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a lsc, proper and quasiconvex function. Assume that $g$ is bounded from below. Let $H(\gamma, p)$ be continuous, nondecreasing in $\gamma \in \mathbb{R}$, and convex and homogeneous of degree one in $p \in \mathbb{R}^n$. Then $u(t,x) = (g^\# + tH)^\#(x)$ in (3.2) is a lsc viscosity solution of (3.1).

Proof. Since $g$ is assumed bounded from below, we may as well assume it is $\ge 0$. Let $\varepsilon > 0$. Consider the inf-convolution of $g$,

$$g_\varepsilon(x) = \inf_{y \in \mathbb{R}^n} \left( g(y) \vee \frac{|x-y|}{\varepsilon} \right).$$

Then, since $g$ is nonnegative, lsc, proper, and quasiconvex,

(i) $g_\varepsilon$ is uniformly Lipschitz continuous and quasiconvex. Indeed, since

$$E(\gamma, g_\varepsilon) = E(\gamma, g) + \bar B(0, \varepsilon \gamma),$$

we see that $E(\gamma, g_\varepsilon)$ is convex for every $\gamma \in \mathbb{R}$.

(ii) $g_\varepsilon \uparrow g$ as $\varepsilon \downarrow 0$ pointwise.

(iii) $(g_\varepsilon)^\#(\gamma, p) = g^\#(\gamma, p) + \varepsilon \gamma |p|$. This follows from the fact that the support function of the sum of two convex sets is the sum of the support functions.

Now we set

$$u_\varepsilon(t,x) = \left((g_\varepsilon)^\#(\gamma, p) + t H(\gamma, p)\right)^\#(x).$$

Since $g_\varepsilon$ satisfies all of the hypotheses for the classical Hopf formula with $u$ dependence of [10], we know that $u_\varepsilon$ is a continuous viscosity solution of

$$(u_\varepsilon)_t + H(u_\varepsilon, D_x u_\varepsilon) = 0, \quad (t,x) \in (0,\infty) \times \mathbb{R}^n; \qquad u_\varepsilon(0,x) = g_\varepsilon(x), \quad x \in \mathbb{R}^n. \tag{3.4}$$

We verify that $u(t,x) = \sup_{\varepsilon > 0} u_\varepsilon(t,x)$. We have

$$u_\varepsilon(t,x) = \sup_{p \in \mathbb{R}^n} \inf\{\gamma : p \cdot x - (g_\varepsilon)^\#(\gamma, p) - t H(\gamma, p) \le 0\} = \sup_{p \in \mathbb{R}^n} \inf\{\gamma : p \cdot x - g^\#(\gamma, p) - \varepsilon \gamma |p| - t H(\gamma, p) \le 0\}.$$

Then

$$\sup_{\varepsilon > 0} u_\varepsilon(t,x) = \sup_{\varepsilon > 0} \sup_{p \in \mathbb{R}^n} \inf\{\gamma : p \cdot x - g^\#(\gamma, p) - \varepsilon \gamma |p| - t H(\gamma, p) \le 0\}$$
$$= \sup_{p \in \mathbb{R}^n} \inf\Big\{\gamma : \sup_{\varepsilon > 0} \big[\,p \cdot x - g^\#(\gamma, p) - \varepsilon \gamma |p| - t H(\gamma, p)\,\big] \le 0\Big\}$$
$$= \sup_{p \in \mathbb{R}^n} \inf\{\gamma : p \cdot x - g^\#(\gamma, p) - t H(\gamma, p) \le 0\} = u(t,x).$$

The interchange of the supremum in $\varepsilon$ and the infimum in the second line is easy to verify by considering level sets (cf. the appendix).

The proof that $u$ is a lsc viscosity solution of (3.1) is now completed exactly as in the preceding section.

In order to allow for initial data which is not necessarily bounded from below, we may use the same argument with a different infimal convolution. This modified convolution is given in the next theorem.

Theorem 3.2 Let $g : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a lsc proper function. For every $\varepsilon > 0$, put

$$g_\varepsilon(x) = \inf_{y \in \mathbb{R}^n} \left\{ g(y) \vee \ln\!\left(\frac{|x-y|}{\varepsilon}\right) \right\}.$$

Then $g_\varepsilon$ is locally Lipschitz continuous. Moreover, we have that $g_\varepsilon \uparrow g$ as $\varepsilon \downarrow 0$. Finally, if $g$ is quasiconvex, then $g_\varepsilon$ is quasiconvex and, for all $\gamma \in \mathbb{R}$, we have $g_\varepsilon^\#(\gamma, p) = g^\#(\gamma, p) + \varepsilon e^\gamma |p|$.

Proof. We only show that the function $g_\varepsilon$ is locally Lipschitz continuous, because the rest of the argument is as in the preceding proof. Let $K$ be a compact subset of $\mathbb{R}^n$. Since $g$ is proper, there is a $y_0$ for which $g(y_0) < +\infty$. Hence, we have $g_\varepsilon(x) \le g(y_0) \vee \ln(|x - y_0|/\varepsilon)$. Choose a constant $C$ so that $g_\varepsilon \le C$ on $K$. Put $K' = K + \bar B(0, \varepsilon e^C)$, so that for all $x \in K$ we have $g_\varepsilon(x) = g(y) \vee \ln(|x-y|/\varepsilon)$ for some $y \in K'$. Set $C' = \min\{g(y) \mid y \in K'\}$ and $C'' = \max\{\ln(|x-y|/\varepsilon) \mid x \in K, \ y \in K'\}$, so that $C'' \ge C \ge C'$. Define in $[0,+\infty)$ the function $h(r) = (\ln r \vee C') \wedge C''$. Of course, we have

$$g(y) \vee \ln\frac{|x-y|}{\varepsilon} = g(y) \vee h\!\left(\frac{|x-y|}{\varepsilon}\right), \qquad \forall\, x \in K, \ \forall\, y \in K'.$$

But $h$ is Lipschitz continuous with some constant $k > 0$. Consequently, if for every $x, x' \in K$ we choose $y \in K'$ such that $g_\varepsilon(x) = g(y) \vee \ln(|x-y|/\varepsilon)$, we obtain

$$g_\varepsilon(x) = g(y) \vee h\!\left(\frac{|x-y|}{\varepsilon}\right) \ge g(y) \vee h\!\left(\frac{|x'-y|}{\varepsilon}\right) - \frac{k}{\varepsilon}|x - x'| = g(y) \vee \ln\!\left(\frac{|x'-y|}{\varepsilon}\right) - \frac{k}{\varepsilon}|x - x'| \ge g_\varepsilon(x') - \frac{k}{\varepsilon}|x - x'|.$$

Hence, $g_\varepsilon(x) \ge g_\varepsilon(x') - \frac{k}{\varepsilon}|x - x'|$. Exchanging $x$ and $x'$, we conclude that

$$|g_\varepsilon(x) - g_\varepsilon(x')| \le \frac{k}{\varepsilon}|x - x'|, \qquad \forall\, x, x' \in K.$$

Remark 3.3 Here is another example of regularization. This one is globally Lipschitz continuous, but it does not converge increasingly. Put

$$g_\varepsilon(x) = \inf_{y \in \mathbb{R}^n} \left\{ g(y) \vee \left(\frac{|x-y|}{\varepsilon^2} - \frac{1}{\varepsilon}\right) \right\}.$$

Then one has

$$\{g_\varepsilon \le \gamma\} = \{g \le \gamma\} + \bar B(0, \varepsilon(1 + \varepsilon \gamma))$$

(where the right-hand ball is $\emptyset$ for $\gamma < -1/\varepsilon$ and $\{0\}$ when $\gamma = -1/\varepsilon$). We still have that $g_\varepsilon$ converges pointwise to $g$. But we do not have $g = \sup_{\varepsilon \le \varepsilon_0} g_\varepsilon$ for some $\varepsilon_0 > 0$, because $g_\varepsilon \ge -1/\varepsilon$.

4 The lsc Hopf formula for $u_t + H(u, Du) = 0$, $H$ nonconvex

In this section we drop the assumption that the hamiltonian is convex. Our goal is to eventually show that the Hopf formula given in (3.2) is the minimal supersolution of (3.1). We begin by establishing that the Hopf formula gives a supersolution with lsc quasiconvex data.

Theorem 4.1 Let $g : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ be a lsc quasiconvex function. Let $H : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ be a continuous function such that $H(\gamma, p)$ is nondecreasing in $\gamma$ and positively homogeneous of degree 1 in $p$. Set

$$u(t,x) = [g^\#(\gamma, p) + t H(\gamma, p)]^\#(x).$$

Then $u : [0,\infty) \times \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is a lsc quasiconvex function. Moreover, it is a supersolution of the Hamilton-Jacobi equation (3.1).

Proof. It is proved in the appendix that, for every $\gamma \in \mathbb{R}$, the $\gamma$-level set of the function $u$ given by the Hopf formula is

$$E(\gamma, u) = \Big\{(t,x) \;\Big|\; \sup_{p \in \mathbb{R}^n}[\,p \cdot x - g^\#(\gamma, p) - t H(\gamma, p)\,] \le 0\Big\}.$$

Since the function $\sup_{p \in \mathbb{R}^n}[\,p \cdot x - g^\#(\gamma, p) - t H(\gamma, p)\,]$ is lsc and convex in $(t,x)$, we deduce that $E(\gamma, u)$ is closed and convex. Therefore, $u$ is quasiconvex and lsc in $(t,x)$. We also note that $u(0,x) = g^{\#\#}(x) = g(x)$ for all $x \in \mathbb{R}^n$ (see the appendix).

In what follows we will use the fact that $\delta^*(p \mid A) = \sup_{x \in A} p \cdot x$ is the support function of the set $A$.

Let us check that $u$ has values in $\mathbb{R} \cup \{\infty\}$. One has to prove that

$$\bigcap_{\gamma \in \mathbb{R}} E(\gamma, u) = \emptyset.$$

We first show that if $(t,x) \in E(\gamma, u)$ then $x \in E(\gamma, g) + \bar B(0, R_\gamma t)$, where $R_\gamma = \sup\{H(\gamma, p) \mid |p| \le 1\}$. When $u(t,x) \le \gamma$ we have

$$p \cdot x \le t H(\gamma, p) + g^\#(\gamma, p) \le R_\gamma t |p| + g^\#(\gamma, p), \qquad \forall\, p \in \mathbb{R}^n. \tag{4.1}$$

Since $\delta^*(p \mid \bar B(0, R_\gamma t)) = R_\gamma t |p|$ (this is the support function of the ball), we obtain from (4.1) and the definition of $g^\#$

$$p \cdot x \le \delta^*(p \mid E(\gamma, g)) + \delta^*(p \mid \bar B(0, R_\gamma t)) = \delta^*(p \mid E(\gamma, g) + \bar B(0, R_\gamma t)), \qquad \forall\, p \in \mathbb{R}^n.$$

Since the set $E(\gamma, g) + \bar B(0, R_\gamma t)$ is closed and convex, we conclude by duality that $x \in E(\gamma, g) + \bar B(0, R_\gamma t)$, as claimed (see Rockafellar [22, Chap. 13]).

Next, the quantity $R_\gamma$ is nondecreasing with $\gamma$; hence, for any fixed $\gamma_0$,

$$\Big\{x \;\Big|\; (t,x) \in \bigcap_{\gamma \le \gamma_0} E(\gamma, u)\Big\} \subset \bigcap_{\gamma \le \gamma_0} \big(E(\gamma, g) + \bar B(0, R_{\gamma_0} t)\big).$$

But $g$ has values in $\mathbb{R} \cup \{\infty\}$, and so $\bigcap_\gamma E(\gamma, g) = \emptyset$. Since every $E(\gamma, g)$ is closed and $\gamma \mapsto E(\gamma, g)$ is nondecreasing, a simple compactness argument gives that $\bigcap_\gamma \big(E(\gamma, g) + \bar B(0, R_{\gamma_0} t)\big) = \emptyset$. Hence, $\bigcap_\gamma E(\gamma, u) = \emptyset$ and $u$ has range in $\mathbb{R} \cup \{\infty\}$.

It remains to prove that $u$ is a viscosity supersolution. Let $(t,x) \in (0,\infty) \times \mathbb{R}^n$ be such that $u(t,x) \in \mathbb{R}$ and take $(p_t, p_x) \in D^- u(t,x)$. Since $u$ is quasiconvex,

$$p_t(s - t) + p_x \cdot (y - x) \le 0 \quad \text{when } u(s,y) \le u(t,x). \tag{4.2}$$

To see this, note that for every $\lambda \in (0,1)$ we have

$$u(\lambda s + (1-\lambda)t, \, \lambda y + (1-\lambda)x) \le u(t,x).$$

Hence, since $(p_t, p_x) \in D^- u(t,x)$, $\lambda(p_t(s-t) + p_x \cdot (y-x)) + o(\lambda) \le 0$. Then we get (4.2) by sending $\lambda \to 0$.

Since $u(0,y) = g(y)$, we set $s = 0$ in (4.2) and obtain

$$t(p_t + H(u(t,x), p_x)) \ge t H(u(t,x), p_x) + p_x \cdot y - p_x \cdot x$$

when $g(y) \le u(t,x)$. Taking the sup in $y$ over $E(u(t,x), g)$, we deduce that

$$t(p_t + H(u(t,x), p_x)) \ge t H(u(t,x), p_x) + g^\#(u(t,x), p_x) - p_x \cdot x.$$

But, by the definition of $u$ (see (7.1)), the right-hand term is $\ge 0$. We conclude, since $t > 0$, that $p_t + H(u(t,x), p_x) \ge 0$, and so $u$ is a supersolution.

The next theorem characterizes the lsc function $u$ as the minimal supersolution of (3.1). We shall need the following assumption (cf. [20]) on the initial data $g$:

    for every $\gamma < \sup g$, there is a continuous affine function which is a minorant of $g$ on $E(\gamma, g)$.   (4.3)

Theorem 4.2 Assume that $g$ satisfies (4.3). Under the assumptions of Theorem 4.1, the function $u$ in (3.2) is the minimal viscosity supersolution of (3.1).

Condition (4.3) (which will be disposed of in section 6) is satisfied if, for example, $g$ is bounded from below by an affine function. It was introduced by Martinez-Legaz in [19], [20], and it is natural when regarding a quasiconvex function as the envelope of quasi-affine functions (of the form $a \wedge \gamma$ for $a$ an affine function and $\gamma$ a constant). This characterization of quasiconvex functions is needed here to follow the proof of Theorem 2.5 that the function given by the classical Hopf formula is the minimal supersolution. This approach uses the $\varrho$-conjugate for quasiconvex functions, which was introduced in [11] and is frequently easier to compute than the $\#$-conjugate. We shall first prove that the Hopf formulas with the $\varrho$ and $\#$ conjugates coincide. Then we will prove that the Hopf formula with the $\varrho$ conjugate gives the minimal supersolution.

We briefly recall the definition of the $\varrho$-conjugates and characterize the level sets of the second conjugate. This will actually be the key point in the proof.

The first $\varrho$-conjugate of $g$ is defined as

$$g^\varrho(\gamma, p) = [g + \delta(\cdot \mid E(\gamma, g))]^*(p) = \sup\{p \cdot x - g(x) \mid x \in E(\gamma, g)\};$$

it is clearly nondecreasing in $\gamma$.

The second $\varrho$-conjugate of the function $(\gamma, p) \mapsto g^\varrho(\gamma, p) + t H(\gamma, p)$ is defined by

$$(g^\varrho(\gamma, p) + t H(\gamma, p))^\varrho(x) = \sup_{\gamma, p} \big\{[\,p \cdot x - g^\varrho(\gamma, p) - t H(\gamma, p)\,] \wedge \gamma\big\}.$$

For every $\gamma \in \mathbb{R}$, we therefore have

$$E(\gamma, [g^\varrho + tH]^\varrho) = \Big\{(t,x) \;\Big|\; \sup_{p, \mu}\, [\,p \cdot x - g^\varrho(\mu, p) - t H(\mu, p)\,] \wedge \mu \le \gamma\Big\}$$
$$= \bigcap_{p, \, \mu > \gamma} \{(t,x) \mid p \cdot x - g^\varrho(\mu, p) - t H(\mu, p) \le \gamma\}$$
$$= \bigcap_{p} \{(t,x) \mid p \cdot x - g^\varrho(\gamma + 0, p) - t H(\gamma, p) \le \gamma\}$$
$$= \Big\{(t,x) \;\Big|\; \sup_{p \in \mathbb{R}^n}[\,p \cdot x - g^\varrho(\gamma + 0, p) - t H(\gamma, p)\,] \le \gamma\Big\}.$$

Hence

$$E(\gamma, [g^\varrho + tH]^\varrho) = \{(t,x) \mid [g^\varrho(\gamma + 0, p) + t H(\gamma, p)]^*(x) \le \gamma\}.$$

Remark 4.3 For later reference, we rewrite this identity as

$$[g^\varrho(\gamma, p) + t H(\gamma, p)]^\varrho(x) = \inf\{\gamma \mid [g^\varrho(\gamma, p) + t H(\gamma, p)]^*(x) \le \gamma\}.$$

If $g$ is bounded from above and $\sup g \le \gamma$, we have $g^\varrho(\gamma, p) = g^*(p) \ge -\gamma + \delta(p \mid \{0\})$; hence $[g^\varrho(\gamma, p) + t H(\gamma, p)]^*(x) \le \gamma$. Therefore, we can write the preceding as

$$[g^\varrho(\gamma, p) + t H(\gamma, p)]^\varrho(x) = \inf\{\gamma \le \sup g \mid [g^\varrho(\gamma, p) + t H(\gamma, p)]^*(x) \le \gamma\}.$$
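For concreteness, here is a small computation of ours (not in the paper) with these conjugates.

```latex
% Example (ours): g(x)=|x| on R^n. For \gamma\ge 0, E(\gamma,g)=\bar B(0,\gamma), so
\[
  g^{\varrho}(\gamma,p)=\sup_{|x|\le\gamma}[\,p\cdot x-|x|\,]=\gamma(|p|-1)^{+},
\]
% the sup being attained at x=0 when |p|\le 1 and at |x|=\gamma (x parallel
% to p) when |p|>1. The second conjugate recovers g: taking \gamma=|x| and
% p=\lambda x/|x| with \lambda>1 gives
\[
  [\,p\cdot x-g^{\varrho}(|x|,p)\,]\wedge|x|
  =[\,\lambda|x|-|x|(\lambda-1)\,]\wedge|x|=|x|,
\]
% while for every (\gamma,p) one checks
% [p\cdot x-g^{\varrho}(\gamma,p)]\wedge\gamma \le |x|. Hence
% g^{\varrho\varrho}(x)=|x|=g(x), as Lemma 4.4 below asserts for lsc
% quasiconvex g satisfying (4.3) (here g is bounded below by 0).
```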

The next lemma guarantees that the biconjugate g%% coincides with g under (4.3). This is a known result ([20] and [12]), but we provide a proof for the reader's convenience.

Lemma 4.4 Let g : Rⁿ → R ∪ {∞} be a lsc quasiconvex function satisfying (4.3). Then g%% = g.

Proof. We have to prove that E(γ, g%%) = E(γ, g) for all γ. If γ ≥ sup g, then E(γ, g) = Rⁿ, g%(γ, ·) = g* and g** ≤ γ. Therefore E(γ, g%%) = {g** ≤ γ} = Rⁿ, so the result is proved in this case.

We now suppose that γ < sup g. Let a(·) be an affine function below g + δ(· | E(γ, g)). We have the inequalities

a(x) + δ(x | E(γ, g)) ≤ (g + δ(· | E(γ, g)))**(x) = g%*(γ, x) ≤ γ + δ(x | E(γ, g)). (4.4)

If g(x) ≤ γ, then g%*(γ + 0, x) ≤ g%*(γ, x) ≤ γ, hence x ∈ E(γ, g%%). Conversely, if x ∈ E(γ, g%%), then g%*(γ + 0, x) ≤ γ. Since g%* is nonincreasing in γ, we get g%*(γ′, x) ≤ γ′ for every γ < γ′ < sup g. The first inequality in (4.4) implies that x ∈ E(γ′, g) for every γ < γ′ < sup g, hence x ∈ E(γ, g).

Remark 4.5 For later reference, we note that, under (4.3), inequality (4.4) yields the identity

E(γ, g) = E(γ, (g + δ(· | E(γ, g)))**)

whenever γ < sup g.

We are now ready to prove that the Hopf formula can be obtained from either the #-conjugate or the %-conjugate.

Theorem 4.6 Let g : Rⁿ → R ∪ {∞} be a lsc quasiconvex function satisfying (4.3). Let H : R × Rⁿ → R be a continuous function such that H(γ, p) is nondecreasing in γ and positively homogeneous of degree 1 in p. Then the equality

[g#(γ, p) + tH(γ, p)]# = [g%(γ, p) + tH(γ, p)]%

between the two second quasiconvex conjugates holds.

Proof. We shall prove that, for every γ, the γ-level sets of both second conjugates coincide. Thus, we fix γ ∈ R and recall that

E(γ, [g#(γ, p) + tH(γ, p)]#) = {(t, x) | sup_p [p · x − g#(γ, p) − tH(γ, p)] ≤ 0},

and

E(γ, [g%(γ, p) + tH(γ, p)]%) = {(t, x) | sup_p [p · x − g%(γ + 0, p) − tH(γ, p)] ≤ γ}.

The equality of the level sets amounts to the equivalence

p · x − tH(γ, p) − g#(γ, p) ≤ 0, ∀ p   ⟺   p · x − tH(γ, p) − g%(γ + 0, p) ≤ γ, ∀ p. (4.5)

When g is bounded from above and γ ≥ sup g, one computes g#(γ, p) = δ(p | {0}) and g%(γ, p) = g*(p), hence g%(γ, 0) = −inf g and g%(γ, p) = +∞ if p ≠ 0. In this case, both sides of (4.5) are trivially true.

We therefore assume that γ < sup g. By the assumptions on g, there are constants C and q₀ independent of γ′ (provided it is bounded away from sup g) such that

γ′ ≥ g(x) ≥ −C + q₀ · x, ∀ x ∈ E(γ′, g).

Taking the Fenchel conjugate, we obtain the inequalities

−γ′ + g#(γ′, p) ≤ g%(γ′, p) ≤ C + g#(γ′, p − q₀), ∀ p. (4.6)

Choosing γ′ = γ and noting that g%(γ, p) ≤ g%(γ + 0, p), the first inequality in (4.6) immediately yields the forward implication in the equivalence (4.5).

For the converse, we suppose that

p · x − tH(γ, p) − g%(γ + 0, p) ≤ γ, ∀ p.

After sending γ′ ↓ γ, the second inequality in (4.6) gives p · x − tH(γ, p) − g#(γ + 0, p − q₀) ≤ γ + C, ∀ p. Taking p = q₀ + λp′ for λ > 0 and p′ arbitrary, dividing by λ and using the positive homogeneity in p, we get

(p′ + q₀/λ) · x − tH(γ, p′ + q₀/λ) − g#(γ + 0, p′) ≤ (γ + C)/λ, ∀ p′.

Since H is continuous in p, we may send λ → +∞ and deduce that

p′ · x − tH(γ, p′) − g#(γ + 0, p′) ≤ 0, ∀ p′.

But it is proved in the appendix that (g#)*(γ, ·) = (g#)*(γ + 0, ·). Therefore p′ · x − tH(γ, p′) − g#(γ, p′) ≤ 0, ∀ p′, and (4.5) is proved.

Now we are ready to give the proof of Theorem 4.2.

Proof (of Theorem 4.2). We know by the preceding theorem that u is also given by the formula

u(t, x) = sup_{p, γ} [p · x − g%(γ, p) − tH(γ, p)] ∧ γ,   where   g%(γ, p) = sup_{x ∈ E(γ, g)} [p · x − g(x)].

Let v be any supersolution, with v(0, x) ≥ g(x). Fix γ ∈ R and p ∈ dom(g%(γ, ·)). Then set

k(t, x) = p · x − g%(γ, p) − tH(γ, p)   and   h(t, x) = k(t, x) ∧ γ.

We claim that h is a subsolution of (3.1). First,

h(0, x) = (p · x − g%(γ, p)) ∧ γ ≤ g%%(x) = g(x),

by Lemma 4.4. Next, suppose h − φ achieves a strict, zero maximum at (t₀, x₀), with φ a smooth function. If k(t₀, x₀) < γ, then h(t₀, x₀) = k(t₀, x₀) and indeed h(t, x) = k(t, x) in a neighborhood of (t₀, x₀). Since k is a smooth function, φ_t = −H(γ, D_xφ) at (t₀, x₀) and so

0 = φ_t + H(γ, D_xφ) ≥ φ_t + H(h(t₀, x₀), D_xφ),

and h is a subsolution. If k(t₀, x₀) > γ, a similar but easier argument gives the same result. Finally, if k(t₀, x₀) = γ, we use a smooth, strictly monotone approximation β_ε to β(x) = x ∧ γ, as in the proof of Proposition 6.1 below (see also Corollary 6.2), and the homogeneity of p ↦ H(γ, p) to get the result. Hence h is a subsolution of (3.1) in all cases. By comparison, we have

v(t, x) ≥ h(t, x) = (p · x − g%(γ, p) − tH(γ, p)) ∧ γ.

Taking the supremum over p ∈ dom(g%(γ, ·)) and γ ∈ R, we are done.

To justify the use of the comparison principle, we observe that the subsolution h is bounded from above and Lipschitz continuous. So we may assume without loss of generality that the hamiltonian is uniformly continuous in p. The comparison principle holds because H(γ, p) is homogeneous in p. Indeed, we may proceed as in Section 6 below and replace the supersolution v by v^γ = γ where v ≤ γ, v^γ = +∞ otherwise. Then v^γ is a supersolution which is now bounded from below, so that v^γ ≥ h by the standard comparison principle for a Lipschitz subsolution. Then v = inf_γ v^γ ≥ h.

5 The Lax formula for lsc data

In this section we turn our attention to the Lax formula for initial data which is lsc. The Lax formula requires that the hamiltonian be convex, and therefore we may use the theory of lsc viscosity solutions to characterize the solution. We will treat both (2.3) and (3.1).

Under the assumption that H : Rⁿ → R is convex and g : Rⁿ → R is uniformly Lipschitz continuous, we have (see [2]) that

u(t, x) = min_{y ∈ Rⁿ} [g(y) + tH*((x − y)/t)]

is the Lipschitz continuous viscosity solution of (2.3).
We want to extend this result to lsc initial data g. In what follows we will use the assumption:

there is a constant C > 0 such that g(x) ≥ −C(|x| + 1), x ∈ Rⁿ. (5.1)
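The classical Lax formula above is easy to sanity-check numerically. The following sketch (ours, not from the paper) takes H(p) = ½p², so H*(q) = ½q², and the Lipschitz datum g(x) = |x|; the discretized infimum is compared with the known closed-form viscosity solution of u_t + ½|u_x|² = 0.

```python
import numpy as np

def lax(t, x, ys):
    # u(t,x) = min_y [ g(y) + t * Hstar((x - y)/t) ], with g(y)=|y|, Hstar(q)=q^2/2
    return np.min(np.abs(ys) + (x - ys) ** 2 / (2.0 * t))

def exact(t, x):
    # known viscosity solution for u_t + |u_x|^2/2 = 0, u(0,x) = |x|
    return abs(x) - t / 2.0 if abs(x) >= t else x * x / (2.0 * t)

ys = np.linspace(-5.0, 5.0, 200001)       # fine grid for the infimum in y
for t, x in [(0.5, 2.0), (1.0, 0.3), (2.0, -1.0)]:
    assert abs(lax(t, x, ys) - exact(t, x)) < 1e-4
```

The agreement degrades only through the grid resolution, as expected from the formula's definition as a pointwise infimum.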

Remark 5.1 By a simple change of variables we may also write the Lax formula as

u(t, x) = min_{y ∈ Rⁿ} (g(x − yt) + tH*(y)).

From this expression it is clear that u(0, x) = g(x).

Theorem 5.2 Let g : Rⁿ → (−∞, ∞] be lsc and satisfy (5.1). Let H be continuous, finite, and convex. Set

u(t, x) := inf_{y ∈ Rⁿ} [g(y) + tH*((x − y)/t)].

Then u : [0, ∞) × Rⁿ → (−∞, ∞] is the unique lsc viscosity solution of (2.3) that is bounded from below by a function of linear growth.

Proof. We first verify that

lim inf_{(t,x) → (0+0, x̄)} u(t, x) = g(x̄). (5.2)

To see (5.2), we first observe that, for any r ≥ 0,

v(t, x) := tH*(x/t) = sup_p (p · x − tH(p)) ≥ r|x| − t max_{|p| ≤ r} H(p).

Since

u(t, x) = inf_{y ∈ Rⁿ} (g(y) + v(t, x − y)),

we have from (5.1)

u(t, x) ≥ inf_y (g(y) + r|x − y| − t max_{|p| ≤ r} H)
≥ inf_y (−C|y| − C + r|x − y| − t max_{|p| ≤ r} H)
≥ inf_y ((r − C)|x − y| − C|x| − C − t max_{|p| ≤ r} H), (5.3)

for any r ≥ 0. Fix ε > 0. Since g is lsc, there is δ > 0 such that if |y − x̄| ≤ 2δ, then

g(y) ≥ g(x̄) − ε.

Choose r > C so that

(r − C)δ − C(|x̄| + δ) − C ≥ g(x̄),

and then σ > 0 so that σ max_{|p| ≤ r} |H| ≤ ε. We see that if 0 < t ≤ σ and x ∈ B(x̄, δ), then

(r − C)|x − y| − C|x| − C − t max_{|p| ≤ r} H ≥ g(x̄) − ε   (|y − x̄| > 2δ)

and so, for any d ≥ 0,

u(t, x) ≥ min{g(x̄) − ε, inf_{y ∈ B(x̄, 2δ)} (g(y) + d|x − y| − t max_{|p| ≤ d} H)}.

In particular, we have

u(t, x) ≥ min{g(x̄) − ε, g(x̄) − ε − tH(0)}, x ∈ B(x̄, δ), 0 < t ≤ σ.

We may assume that σ|H(0)| ≤ ε. Then we have

u(t, x) ≥ g(x̄) − 2ε, x ∈ B(x̄, δ), 0 < t ≤ σ.

This shows that lim inf_{(t,x) → (0+0, x̄)} u(t, x) ≥ g(x̄).

Next we observe that, since H(p) ≥ −a − b · p for some a ∈ R and b ∈ Rⁿ,

v(t, x) = sup_p (p · x − tH(p)) ≤ sup_p [p · (x + tb) + at].

Therefore we have

u(t, x̄ − tb) ≤ inf_y sup_p [g(y) + p · (x̄ − tb − y + tb) + at] ≤ g(x̄) + at, ∀ t ≥ 0,

and hence

lim inf_{(0,∞) × Rⁿ ∋ (t,x) → (0, x̄)} u(t, x) ≤ g(x̄).

So (5.2) is proved.

It follows at once from (5.3) that the function u is bounded from below by a function of linear growth. Moreover, u is lsc. To see this, we fix γ ∈ R and show that the γ-level set of u is closed. Let (t_k, x_k) be a sequence converging to some (t, x) such that u(t_k, x_k) ≤ γ. By (5.2), we know that u(t, x) ≤ γ when t = 0, so we may assume that t > 0. For k fixed, we deduce from (5.3) with r = C + 1 that the infimum in y in the definition of u(t_k, x_k) is taken in the ball B(x_k, γ + C|x_k| + C + t_k max_{|p| ≤ C+1} H). Since g and H* are lsc, the infimum is therefore achieved for some y_k. Moreover, the sequence (y_k) lies in a fixed compact set. Extracting a subsequence (y_{k′}) that converges to some y, we use the lower semicontinuity of g and H* again to obtain

γ ≥ lim inf u(t_{k′}, x_{k′}) = lim inf [g(y_{k′}) + t_{k′} H*((x_{k′} − y_{k′})/t_{k′})] ≥ g(y) + tH*((x − y)/t) ≥ u(t, x).

We conclude that u is lsc.

We finally show that u is a lsc solution of (2.3). We present a way to regularize the hamiltonian, closely related to inf-convolutions. This procedure converts the hamiltonian into a strictly convex hamiltonian, and this smooths out the initial data.

Fix ε > 0 and define

v_ε(t, x) := sup_p (p · x − t(H(p) + ε|p|²)) = (t(H(p) + ε|p|²))*(x).

Then

v_ε(0, x) = { +∞ if x ≠ 0; 0 if x = 0 }

and v_ε(t, x) = t v_ε(1, x/t) if t > 0. Observe that

v_ε(1, x) = sup_p [p · x − H(p) − ε|p|²]
= sup_p inf_y [p · y − H(p) + |y − x|²/(4ε)]
= inf_y sup_p [p · y − H(p) + |y − x|²/(4ε)]
= inf_y [H*(y) + |y − x|²/(4ε)].

We can switch the inf and sup because of concave-convexity. The function v_ε(1, x) is therefore the inf-convolution of the convex function H*(x) with a quadratic, and so is in C^{1,1}(Rⁿ). It is easily seen that, as ε ↓ 0,

v_ε(t, x) ↑ v(t, x) = sup_p (p · x − tH(p)).

It is clear that the function (t, x) ↦ p · x − t(H(p) + ε|p|²) is a classical solution of

u_t + H(Du) + ε|Du|² = 0. (5.4)

Therefore, the function v_ε has the property that if φ is a test function and v_ε − φ attains a minimum at (t₀, x₀), then

φ_t(t₀, x₀) + H(Dφ(t₀, x₀)) + ε|Dφ(t₀, x₀)|² = 0.

So it is a lsc viscosity solution of (5.4). Now, the monotone convergence of v_ε to v guarantees that v is also an lsc solution of (2.3). Moreover, since the lsc function satisfying (5.2)

u(t, x) = inf_y (g(y) + v(t, x − y))

is the infimum of lsc solutions, we conclude by stability that the function u is also an lsc solution of (2.3).

Uniqueness was proved in Theorem 2.1 (where the convexity of the initial data was not used).

The following example shows that, without a lower bound on the initial data, the Lax formula need not even produce an lsc function. Consider the equation

u_t + ½|Du|² = 0, x ∈ Rⁿ, t > 0

with initial data u(0, x) = g(x). We choose g(x) = −½|x|². The Lax formula tells us that the solution should be

u(t, x) = inf_y sup_p (−½|y|² + p · (x − y) − ½ t|p|²).

However, since g is not bounded from below in any good sense, we have the following.

Lemma 5.3 u is not lower semicontinuous.

Proof. Indeed, we have

u(1, 0) = inf_y sup_p ½(−|y|² − 2y · p − |p|²) = inf_y sup_p (−½|y + p|²) = 0,

and if x ≠ 0, then

u(1, x) = inf_y sup_p ½(−|y|² + 2(x − y) · p − |p|²)
= inf_y ½(−|y|² + |x − y|²)
= inf_y ½(|x|² − 2x · y) = −∞.

Hence u(1, 0) = 0 and lim inf_{(t,x) → (1,0)} u(t, x) = −∞. This shows that u is not lsc.

For later sections we need to state a simple corollary of Theorem 5.2 (see [3] for the equivalence between lsc solutions and minimal supersolutions).

Corollary 5.4 Under the assumptions of the theorem, the function u given by the Lax formula is the minimal supersolution of (2.3) among the supersolutions that are bounded from below by a function of linear growth.

Now we consider the Lax formula for equation (3.1) with u dependence. The next result illustrates how the modified inf-convolution extends the results of [9] for bounded Lipschitz continuous initial data to lsc functions. As in Theorem 3.1 we assume, to simplify, that g is bounded from below. We could relax this assumption with the help of Theorem 3.2, but we wait until the next section to give the general assumptions under which the quasiconvex Lax formula holds.

Theorem 5.5 Let g be a lower semicontinuous, proper function. Assume that g is bounded from below. Let H(γ, p) be continuous, nondecreasing in γ ∈ R, convex and positively homogeneous of degree one in p ∈ Rⁿ. Set

u(t, x) := inf_{y ∈ Rⁿ} [g(y) ∨ H#((x − y)/t)].

Then u is a lsc viscosity solution of (3.1).
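The failure of semicontinuity in Lemma 5.3 is easy to observe numerically. In this sketch (ours), the inner supremum over p is evaluated in closed form as −½|y|² + ½|x−y|², exactly as in the proof, so only the outer infimum over a widening window [−R, R] is discretized.

```python
import numpy as np

def u1(x, R, n=400001):
    # u(1,x) = inf_y [ -|y|^2/2 + |x-y|^2/2 ] = inf_y (x^2 - 2*x*y)/2, y in [-R, R]
    ys = np.linspace(-R, R, n)
    return np.min((x * x - 2.0 * x * ys) / 2.0)

assert abs(u1(0.0, 100.0)) < 1e-9          # u(1,0) = 0
assert u1(0.5, 100.0) < -40.0              # unbounded below for x != 0 ...
assert u1(0.5, 1000.0) < u1(0.5, 100.0)    # ... and keeps decreasing as R grows
```

So u(1, 0) = 0 while u(1, x) = −∞ for x ≠ 0, which is exactly the jump that destroys lower semicontinuity at (1, 0).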

Remark 5.6 We may rewrite the Lax formula as follows:

u(t, x) := inf_{y ∈ Rⁿ} [g(x − yt) ∨ H#(y)].

We see that u(0, x) = inf_{y ∈ Rⁿ} [g(x) ∨ H#(y)]. But inf_y H#(y) = −∞, and so u(0, x) = g(x).

Proof. We first show that u is lsc by proving that its level sets are closed. Let γ ∈ R and (t_k, x_k) be a sequence that converges to some point (t, x) with u(t_k, x_k) ≤ γ for all k. Observe that for R_{γ+1} = sup{H(γ + 1, p) | |p| ≤ 1}, we get H(γ + 1, p) ≤ R_{γ+1}|p| for all p. Hence, when H#(z) ≤ γ + 1, we have 0 = H*(γ + 1, z) ≥ δ(z | B(0, R_{γ+1})), so that z ∈ B(0, R_{γ+1}). This implies that, in the definition of u(t_k, x_k), the infimum in y is taken in the compact set B(x_k, t_k R_{γ+1}) and is therefore achieved for some y_k. If t = 0, then x_k → x and y_k → x. But u(t_k, x_k) ≥ g(y_k), and therefore γ ≥ g(x) = u(0, x). If t > 0, the sequence (y_k), being bounded, converges along a subsequence (y_{k′}) to some y. Therefore,

γ ≥ lim inf u(t_{k′}, x_{k′}) = lim inf g(y_{k′}) ∨ H#((x_{k′} − y_{k′})/t_{k′}) ≥ g(y) ∨ H#((x − y)/t) ≥ u(t, x).

Hence u(t, x) ≤ γ. We conclude that u is lsc.

We now show that the initial condition holds in the lsc sense:

lim inf_{t′ → 0+0, x′ → x} u(t′, x′) = g(x).

Fix x ∈ Rⁿ and assume that g(x) ∈ R. Since inf_z H#(z) = −∞, there is z₀ so that H#(z₀) ≤ g(x). For every t > 0, we have u(t, x + tz₀) ≤ g(x) ∨ H#(z₀) = g(x). Hence lim inf_{t′ → 0+0, x′ → x} u(t′, x′) ≤ g(x). The reverse inequality holds by lower semicontinuity. Also observe that, since H#(0) = −∞, we have u(t, x) ≤ g(x), ∀ x ∈ Rⁿ, ∀ t > 0. Since u is lsc, the initial condition lim inf_{t′ → 0+0, x′ → x} u(t′, x′) = g(x), ∀ x ∈ Rⁿ, holds in the lsc sense.

Since g is bounded from below, we may assume without loss of generality that g ≥ 0, hence u ≥ 0. We first suppose that g is bounded from above. We take the modified ε-inf convolution of g, say g_ε, and set u_ε as the solution of (3.1) corresponding to g_ε.
Since g_ε is bounded and Lipschitz continuous, we know from the results of [9] that the function u_ε is given by the Lax formula

u_ε(t, x) = min_{y ∈ Rⁿ} [g_ε(y) ∨ H#((x − y)/t)].

We compute, setting w = x − (y − z),

u_ε(t, x) = inf_{y,z ∈ Rⁿ} [g(z) ∨ (|y − z|/ε) ∨ H#((x − y)/t)]
= inf_{w,z ∈ Rⁿ} [g(z) ∨ (|x − w|/ε) ∨ H#((w − z)/t)]
= inf_{w ∈ Rⁿ} [u(t, w) ∨ (|x − w|/ε)].
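The modified ε-inf convolution used here, g_ε(x) = inf_z [g(z) ∨ (|x − z|/ε)], can be checked on a simple lsc datum. In the sketch below (ours, not from the paper) we take g = χ_{(0,∞)}, for which one finds by hand that g_ε(x) = min(x⁺/ε, 1): a (1/ε)-Lipschitz minorant of g that increases to g as ε ↓ 0.

```python
import numpy as np

def g(z):
    # characteristic function of (0, infinity): lsc, bounded, with values {0, 1}
    return (z > 0).astype(float)

def g_eps(x, eps, zs):
    # modified inf-convolution: g_eps(x) = inf_z max( g(z), |x - z| / eps )
    return np.min(np.maximum(g(zs), np.abs(x - zs) / eps))

zs = np.linspace(-5.0, 5.0, 200001)
for eps in (0.5, 0.1):
    for x in (-1.0, 0.05, 0.2, 2.0):
        closed_form = min(max(x, 0.0) / eps, 1.0)
        assert abs(g_eps(x, eps, zs) - closed_form) < 1e-3
```

Unlike the usual quadratic inf-convolution, this modified form commutes with the lattice operation ∨, which is exactly what the change of variables in the proof exploits.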

The last expression is the modified inf-convolution in x of u(t, ·). Since u is lsc, we get u = sup_ε u_ε. By stability, we conclude that u is a lsc viscosity solution of (3.1).

For a general g, we consider, for every γ ∈ R, the function u^γ given by the Lax formula with initial data g ∧ γ. It is immediate to check that u = sup_γ u^γ. Moreover, by the preceding paragraph, the function u^γ is a lsc solution of u_t + H(u, Du) = 0 in (0, +∞) × Rⁿ. Hence u is a lsc solution of (3.1).

Remark 5.7 The first part of the proof did not use the assumption that g is bounded from below. Therefore, when g is lsc and has values in R ∪ {+∞}, the function u given by the Lax formula is also lsc and has values in R ∪ {+∞} (because the infimum in y in the definition of u is achieved provided u(t, x) < +∞).

6 Level Sets

In this section, we recover the Hopf and Lax quasiconvex formulas (Theorems 4.1, 4.2, 4.6 and 5.5) by considering the level set approach. This method, which applies to pdes of geometric type, will sharpen the previous results and also clarify the role played by the quasiconvex conjugates. Furthermore, this method exhibits the fundamental property that the level sets of solutions of geometric pdes evolve in time from the level sets of the initial data, independently of the particular initial function.

In order to motivate the level set approach, we first derive basic properties of solutions to the second order fully nonlinear geometric parabolic pde

u_t + F(t, x, u, D_xu, D²_xu) = 0, (t, x) ∈ (0, ∞) × Rⁿ;   u(0, x) = g(x), x ∈ Rⁿ. (6.1)

These properties extend some results of [12].

To say that (6.1) is of geometric type means that F satisfies the condition

F(t, x, γ, λp, λM + σ p ⊗ p) = λ F(t, x, γ, p, M), ∀ λ ≥ 0, σ ∈ R. (6.2)

In addition, (6.1) is a parabolic equation if

F(t, x, r, p, X) ≤ F(t, x, s, p, Y), for r ≤ s, Y ≤ X,

where X and Y are any symmetric n × n matrices. An example of a geometric equation is the motion by mean curvature equation.
See Evans and Spruck [14], Chen, Giga, and Goto [15], and Ishii and Souganidis [16] for related results on second order geometric pdes. Geometric first order pdes are precisely those which are positively homogeneous of degree one in p.

Proposition 6.1 (c.f. [12]) Assume (6.2). Given L > 0, define

β(r) = { L if r ≥ L; r if |r| ≤ L; −L if r ≤ −L }

and set w = β(u), where u is the viscosity solution of (6.1). Then w is a bounded viscosity solution of

w_t + F(t, x, w, D_xw, D²_xw) = 0, (t, x) ∈ (0, ∞) × Rⁿ;   w(0, x) = β(g(x)), x ∈ Rⁿ.

The proposition shows, in particular, that the solution of a geometric pde may be assumed to be bounded.

This proposition for the first order case was proved in [12]. The proof for the second order case is virtually the same once we use the condition that the equation is geometric. For the reader's convenience we provide the modified proof of this important result.

Proof. We will only prove that w is a subsolution.

Let w − φ achieve a unique, zero maximum at (t₀, x₀), t₀ > 0, with φ a smooth function. If u(t₀, x₀) > L, then u > L in a neighborhood of (t₀, x₀) because u is continuous, and so w(t, x) = L in this neighborhood. Hence φ achieves a minimum at (t₀, x₀), and so φ_t(t₀, x₀) = |D_xφ(t₀, x₀)| = 0 and M = D²_xφ(t₀, x₀) ≥ 0. Since F(t, x, γ, 0, M) ≤ 0, we have

φ_t(t₀, x₀) + F(t₀, x₀, w(t₀, x₀), D_xφ(t₀, x₀), D²_xφ(t₀, x₀)) ≤ 0.

If u(t₀, x₀) ≤ −L, then φ(t₀, x₀) = −L and φ ≥ −L, so again φ_t(t₀, x₀) = |D_xφ(t₀, x₀)| = 0 and D²_xφ(t₀, x₀) ≥ 0. If |u(t₀, x₀)| < L, then w is a subsolution because w = u in a neighborhood of (t₀, x₀). Finally, we are reduced to the case u(t₀, x₀) = L.

Suppose that β_ε is a smooth approximation to β, which is strictly monotone increasing and satisfies β_ε(r) = r when |r| < L, β_ε linear when |r| > L + ε, and β_ε → β locally uniformly as ε → 0. Let w_ε = β_ε(u). Then w_ε − φ_ε achieves a zero local maximum at (t_ε, x_ε), where φ_ε is at most a linear translation of φ, and (t_ε, x_ε) → (t₀, x₀). Also, w_ε(t_ε, x_ε) → w(t₀, x₀) = L as ε → 0. But then u − β_ε⁻¹(φ_ε) achieves a zero local maximum at (t_ε, x_ε).
Since u is a subsolution, this implies at (t_ε, x_ε) that

(1/β_ε′)(φ_ε)_t(t_ε, x_ε) + F(t_ε, x_ε, u(t_ε, x_ε), (1/β_ε′) D_xφ_ε, (1/β_ε′) D²_xφ_ε − (β_ε″/(β_ε′)³) D_xφ_ε ⊗ D_xφ_ε) ≤ 0,

where β_ε′ and β_ε″ are evaluated at β_ε⁻¹(φ_ε(t_ε, x_ε)). Now we use the geometric property of F and the fact that β_ε′ > 0 to conclude that

(φ_ε)_t(t_ε, x_ε) + F(t_ε, x_ε, u(t_ε, x_ε), D_xφ_ε, D²_xφ_ε) ≤ 0.

Letting ε → 0, we get

φ_t(t₀, x₀) + F(t₀, x₀, u(t₀, x₀), D_xφ(t₀, x₀), D²_xφ(t₀, x₀))
= φ_t(t₀, x₀) + F(t₀, x₀, w(t₀, x₀), D_xφ(t₀, x₀), D²_xφ(t₀, x₀)) ≤ 0,

and so w is a subsolution of (6.1). Since w(0, x) = β(g(x)) is bounded, we are done.

Proposition 6.2 If u is a viscosity solution of (6.1), then u ∨ γ is a subsolution and u ∧ γ is a supersolution of

ϑ_t + F(t, x, γ, D_xϑ, D²_xϑ) = 0, (6.3)

for any γ ∈ R, with the initial data u(0, x) ∨ γ = g(x) ∨ γ and u(0, x) ∧ γ = g(x) ∧ γ, respectively. Furthermore, if w^γ denotes the solution of (6.3) with w^γ(0, x) = g(x), then w^γ ∨ γ and w^γ ∧ γ are solutions of (6.3) with initial data g(x) ∨ γ and g(x) ∧ γ, respectively.

This proposition, whose proof is similar to that of Proposition 6.1, implies several important results concerning the level sets of solutions to geometric pdes. In particular, the following result is a generalization of a result of [12] to geometric pdes.

Theorem 6.3 Assume that γ ↦ F(t, x, γ, p, M) is nondecreasing in γ ∈ R and that F is geometric. For each γ ∈ R, let w^γ denote the continuous viscosity solution of

w^γ_t(t, x) + F(t, x, γ, D_xw^γ(t, x), D²_xw^γ(t, x)) = 0, w^γ(0, x) = g(x).

Let u denote the continuous solution of

u_t(t, x) + F(t, x, u(t, x), D_xu(t, x), D²_xu(t, x)) = 0, u(0, x) = g(x).

Assume that F is such that any subsolution lies below any supersolution. Then

W_γ = {(t, x) : w^γ(t, x) ≤ γ} = {(t, x) : u(t, x) ≤ γ} = U_γ (6.4)

and hence w^γ and u have the same level sets. Furthermore,

u(t, x) = inf{γ : w^γ(t, x) ≤ γ}. (6.5)

Proof (c.f. [12]). Set u_γ(t, x) = u(t, x) ∨ γ and ŵ_γ(t, x) = w^γ(t, x) ∨ γ. By Proposition 6.2, u_γ is a subsolution and ŵ_γ is a solution of

ϑ_t + F(t, x, γ, D_xϑ, D²_xϑ) = 0, ϑ(0, x) = g(x) ∨ γ.

By comparison, we conclude that u_γ(t, x) ≤ ŵ_γ(t, x) everywhere. Consequently, {ŵ_γ = γ} ⊂ {u_γ ≤ γ}. But then

{w^γ ≤ γ} = {ŵ_γ = γ} ⊂ {u_γ ≤ γ} = {u ≤ γ}.

For the opposite inclusion we now define ũ_γ(t, x) = u(t, x) ∧ (γ + ε) and w̃_{γ+ε}(t, x) = w^{γ+ε}(t, x) ∧ (γ + ε), where ε > 0 is fixed. By Proposition 6.2, ũ_γ is a supersolution and w̃_{γ+ε} is a solution of

ϑ_t + F(t, x, γ + ε, D_xϑ, D²_xϑ) = 0, ϑ(0, x) = g(x) ∧ (γ + ε).

By comparison we may again conclude that ũ_γ(t, x) ≥ w̃_{γ+ε}(t, x) everywhere. Then, if u(t, x) ≤ γ, we have ũ_γ(t, x) = u(t, x) ≥ w̃_{γ+ε}(t, x) = w^{γ+ε}(t, x) ∧ (γ + ε), and so γ ≥ u(t, x) ≥ w^{γ+ε}(t, x), ∀ ε > 0. But we claim that w^γ = sup_{ε>0} w^{γ+ε}. If this is true, then we have γ ≥ u ≥ w^γ, and the opposite inclusion obtains. To see that the claim is true, we note that w^{γ+ε} increases as ε ↓ 0 and therefore

w⁰(t, x) := sup_{ε>0} w^{γ+ε}(t, x) = lim inf_{(ε,s,y) → (0+0,t,x)} w^{γ+ε}(s, y).

Hence w⁰ is a supersolution of w⁰_t + F(t, x, γ, D_xw⁰, D²_xw⁰) = 0 with w⁰(0, x) = g(x). By comparison, w⁰ ≥ w^γ. But w^{γ+ε} ≤ w^γ for every ε > 0, since w^{γ+ε} is a subsolution of the equation for w^γ, and so w⁰ ≤ w^γ as well.

Finally, set v(t, x) = inf{γ : w^γ(t, x) ≤ γ}. Suppose that u(t, x) ≤ v(t, x) − ε for some ε > 0. By the definition of v as the smallest such γ, we must have w^{v−ε} > v − ε at (t, x). But (t, x) ∈ {u ≤ v − ε} = {w^{v−ε} ≤ v − ε}, and this is a contradiction. Hence u(t, x) = v(t, x).

In the second order case the assumptions needed on F to ensure a comparison principle are quite complicated, and we refer to [16] or [15] for examples of cases where they can be verified. In the first order case we can be explicit, and we will do so in what follows.

We want to use extensions of Theorem 6.3 in the first order case to show how the quasiconvex Hopf and Lax formulas can be easily derived. The main tool, Theorem 6.7 below, constructs the solution of a geometric equation from its level sets by freezing the u variable in the equation. This is motivated by the preceding general result. To simplify the exposition, we specialize again to the first order equation

u_t + H(u, Du) = 0 in (0, ∞) × Rⁿ and u(0, x) = g(x) on Rⁿ. (6.6)

We shall need the characterization of the solution as the minimal supersolution, since we will be dealing with lsc and infinite data.

We begin with a lemma.

Lemma 6.4 Let g : Rⁿ → R ∪ {∞} be a lsc function and H : R × Rⁿ → R be a continuous function such that H(γ, p) is nondecreasing in γ and positively homogeneous of degree 1 in p. If u is a supersolution to (6.6), then, for γ ∈ R arbitrary, the function u^γ(t, x) = γ + δ((t, x) | E(γ, u)) is a supersolution to

u^γ_t + H(γ, Du^γ) = 0 in (0, ∞) × Rⁿ and u^γ(0, x) = g^γ(x) on Rⁿ, (6.7)

where g^γ(x) = γ + δ(x | E(γ, g)). The function u^γ is also a supersolution of (6.6), since it has values in {γ, +∞}.

The proof uses the following characterization of the subdifferential of the indicator function of a level set. This lemma, which is of independent interest, is a slight variant of a result in [5].

Lemma 6.5 Let w : Rⁿ → R ∪ {∞} be lsc and consider the 0-level set E(0, w) of w. Let x ∈ E(0, w) and p ∈ D⁻δ(x | E(0, w)). Then there are sequences x_m ∈ Rⁿ, λ_m > 0 and p′_m ∈ D⁻w(x_m) such that x_m → x, lim sup_{m→∞} w(x_m) ≤ 0 and λ_m p′_m → p.

Proof (of Lemma 6.4). To prove that u^γ is a supersolution, we have to show that, for every (t, x) ∈ E(γ, u) with t > 0 and (p_t, p_x) ∈ D⁻δ((t, x) | E(γ, u)), we have p_t + H(γ, p_x) ≥ 0. By Lemma 6.5, there are sequences (t_m, x_m), λ_m > 0 and (p′_{t,m}, p′_{x,m}) ∈ D⁻u(t_m, x_m) such that (t_m, x_m) → (t, x), lim sup_{m→∞} u(t_m, x_m) ≤ γ and λ_m(p′_{t,m}, p′_{x,m}) → (p_t, p_x). Since u is a supersolution, this implies that p′_{t,m} + H(u(t_m, x_m), p′_{x,m}) ≥ 0. Multiplying by λ_m and using the positive homogeneity of H in p_x, we get that λ_m p′_{t,m} + H(u(t_m, x_m), λ_m p′_{x,m}) ≥ 0. Since H is nondecreasing in u and continuous, we obtain, after passing to the limit, that p_t + H(γ, p_x) ≥ 0.

Now we prove Lemma 6.5. The argument is very close to that of [5].

Proof (of Lemma 6.5). Let x ∈ E(0, w) and p ∈ D⁻δ(x | E(0, w)). We know that there is a smooth function φ assuming a strict maximum on E(0, w) at x such that φ(x) = 0 and Dφ(x) = p. We construct a nondecreasing sequence of increasing continuous functions f_m : R → [−1/m, +∞] that are C¹ in R with f′_m > 0 and satisfy f_m(0) = 0 and f_m(1/m) = m. This implies, of course, that f_m ↑ δ(· | (−∞, 0]) pointwise.

Fix a nonempty open ball B centered at x, and let x_m be a minimum point in B̄ of the lsc function f_m ∘ w − φ. Since f_m(w(x_m)) − φ(x_m) ≤ f_m(w(x)) − φ(x) ≤ 0, we deduce from the properties of f_m that φ(x_m) ≥ −1/m and that f_m(w(x_m)) ≤ max_{B̄} φ, whence w(x_m) ≤ 1/m, provided max_{B̄} φ ≤ m. Sending m → ∞, we obtain that lim sup w(x_m) ≤ 0. Moreover, any limit point x′ of x_m must satisfy φ(x′) ≥ 0 and w(x′) ≤ 0. Since φ < 0 on E(0, w) ∖ {x}, we deduce that x′ = x. Hence the whole sequence x_m converges to x. When m is so large that x_m ∈ B, we have that Dφ(x_m) ∈ D⁻(f_m ∘ w)(x_m). Since f′_m > 0, it is not hard to see that D⁻(f_m ∘ w)(x_m) = f′_m(w(x_m)) D⁻w(x_m). Thus Dφ(x_m) = λ_m p′_m with λ_m = f′_m(w(x_m)) > 0 and p′_m ∈ D⁻w(x_m). Since Dφ(x_m) → Dφ(x) = p, we get the required result.

Remark 6.6 In Lemma 6.5, the subdifferential D⁻δ(x | E(0, w)) at x ∈ E(0, w) is actually the normal cone to E(0, w) at x, defined as the set of p such that p · (x′ − x) ≤ o(|x′ − x|) as E(0, w) ∋ x′ → x.

The main result of this section is the following theorem. It is the precise result for first order geometric pdes describing the evolution of the level sets and connecting the equation with u dependence to the equation with u fixed at γ.

Theorem 6.7 Let g : Rⁿ → R ∪ {∞} be a lsc function. Let H : R × Rⁿ → R be a continuous function such that H(γ, p) is nondecreasing in γ and positively homogeneous of degree 1 in p. For every γ ∈ R, let g^γ : Rⁿ → R ∪ {∞} be a lsc function such that

E(γ, g) = E(γ, g^γ)

and let u^γ : [0, +∞) × Rⁿ → R ∪ {∞} be the minimal lsc supersolution of

u^γ_t + H(γ, Du^γ) = 0 in (0, ∞) × Rⁿ and u^γ(0, x) = g^γ(x) on Rⁿ, (6.8)

among the supersolutions that are bounded from below by a function of linear growth. Define the function

u(t, x) = inf{γ | u^γ(t, x) ≤ γ}.

Then u is lsc with values in R ∪ {∞}, with level sets

E(γ, u) = E(γ, u^γ), ∀ γ.

It is the minimal supersolution of (6.6).

Proof. For every γ ∈ R, we set

v^γ = γ + δ(· | E(γ, u^γ)).

We first note that, by the definition of u, we have

u = inf_γ v^γ.

Indeed, it is immediate that u ≤ v^γ, ∀ γ. If we set w = inf_γ v^γ and we suppose that u(t, x) < w(t, x) for some (t, x), we can choose γ₀ ∈ R so that u^{γ₀}(t, x) ≤ γ₀ < w(t, x). We see that

v^{γ₀}(t, x) = γ₀ + δ((t, x) | E(γ₀, u^{γ₀})) = γ₀.

Therefore w(t, x) ≤ γ₀, a contradiction.

Now we check that u is locally bounded from below. For every γ ∈ R, we put R_γ = max{H(γ, p) | |p| ≤ 1}, so that H(γ, p) ≤ R_γ|p|. Because of the finite speed of propagation of the initial data, we have that

(t, x) ∈ E(γ, v^γ) ⟹ x ∈ E(γ, g) + B̄(0, R_γ t). (6.9)

To see this, observe that v^γ is a supersolution of

u_t + R_γ|Du| = 0 in (0, +∞) × Rⁿ and u(0, ·) = g^γ on Rⁿ.

The minimal supersolution of the above equation is readily obtained by the Lax formula. We deduce the inequality

v^γ(t, x) ≥ γ + δ(x | E(γ, g) + B̄(0, R_γ t)).

But this is exactly (6.9).

Now fix a compact subset K of Rⁿ and T ≥ 0. Since g has values in R ∪ {∞}, we know that ∩_γ E(γ, g) = ∅. The increasing family of compact subsets K_γ = E(γ, g) ∩ (B̄(0, R_γ T) + K) therefore has empty intersection. So one can find γ₀ in such a way that K_γ = ∅ whenever γ ≤ γ₀. But this means that the set (E(γ, g) + B̄(0, R_γ T)) ∩ K is empty. By the result of the preceding paragraph, this implies E(γ, v^γ) ∩ ([0, T] × K) = ∅ whenever γ ≤ γ₀, or, equivalently, that u ≥ γ₀ on [0, T] × K.

Since every v^γ is a supersolution of (6.6) and u is locally bounded from below, we deduce from the stability results of viscosity solutions that u_* is a supersolution of (6.6).

We now show that u is below every supersolution. Let v be any lsc supersolutionof (6.6). We have to show that u � v. Because E( ; g) = E( ; g ), we deduce fromLemma 6.4 for the Hamiltonian H that the functionv = + �(� j E( ; v))is a supersolution of (6.7) that is bounded from below. Since g � g , v is a supersolutionof (6.8). From the minimal property of u , we obtain u � v . This implies thatE( ; v) = E( ; v ) � E( ; u ): (6.10)Since v(t; x) = inff j (t; x) 2 E( ; v)g and u(t; x) = inff j (t; x) 2 E( ; u )g, weconclude that u � v.Applying the preceding result to the supersolution u�, we deduce that u is lsc. Toget the level sets of u, we �rst apply (6.10) to v = u and deduce that E( ; u) � E( ; u ).Since the reverse inclusion E( ; u) � E( ; u ) follows immediately from the de�nition ofu, we conclude that E( ; u) = E( ; u ); 8 .Remark 6.8 This theorem shows that the evolution of the �level sets of the initial datag is independent of the actual function g. Of course this property is false in general, butfor geometric type pdes it is true.Now we will show that all of the quasiconvex Hopf and Lax formulas when we have udependence may be obtained by the use of Theorem 6.7 and the corresponding classicalconvex formulas using lsc data. The results of the preceding sections are recovered, undermore general assumptions.1. Hopf formula for ut +H(u;Du) = 0Theorem 6.9 Let g : Rn ! R [ f1g be a lsc quasiconvex function. Let H : R �Rn ! R be a continuous function such that H( ; p) is nondecreasing in and positivelyhomogeneous of degree 1 in p.Then the function given by Hopf quasiconvex #-formulau(t; x) = �g#( ; p) + tH( ; p)�#is the minimal lsc supersolution of (6.6).Proof . We apply Theorem 6.7 to the functiong (x) := g (x) = + �(x j E( ; g)):The function g is lsc, has values in R [ f1g and satis�es E( ; g) = E( ; g ). Moreover,since g is quasiconvex, g is convex. By the de�nition of the �rst quasiconvex #-conjugate31

of g, the Fenchel conjugate of g_γ is (g_γ)*(q) = −γ + g#(γ, q). The minimal supersolution u_γ of (6.8) is therefore given by the convex Hopf formula with lsc data of Theorem 2.5. Hence,

    u_γ(t, x) = [(g_γ)*(p) + tH(γ, p)]*(x)
              = γ + [g#(γ, p) + tH(γ, p)]*(x)
              = γ + sup_{p ∈ R^n} [p · x − g#(γ, p) − tH(γ, p)]
              = γ + δ((t, x) | E(γ, (g#(γ, p) + tH(γ, p))#)),

where the last equality follows from the appendix. The minimal supersolution u of (6.6) given by Theorem 6.7 is therefore

    u(t, x) = inf{γ | u_γ(t, x) ≤ γ} = [g#(γ, p) + tH(γ, p)]#(x).

2. Both # and % give the same Hopf formula

Theorem 6.10 Let g : R^n → R ∪ {∞} be a lsc quasiconvex function satisfying condition (4.3). Let H : R × R^n → R be a continuous function such that H(γ, p) is nondecreasing in γ and positively homogeneous of degree 1 in p.

Then the function given by the Hopf quasiconvex %-formula

    u(t, x) = [g%(γ, p) + tH(γ, p)]%(x)

is the minimal lsc supersolution of (6.6). In particular, u(t, x) = [g#(γ, p) + tH(γ, p)]#(x).

Proof. We apply Theorem 6.7 to the function

    g_γ(x) = { [g + δ(· | E(γ, g))]**(x)   if γ < sup g,
             { γ                           if γ ≥ sup g.

The function g_γ is lsc and satisfies E(γ, g) = E(γ, g_γ) by Remark 4.5. Because of condition (4.3), g + δ(· | E(γ, g)) lies above an affine function when γ < sup g. Hence g_γ has values in R ∪ {+∞}. Moreover, g_γ is convex and its Fenchel conjugate is given by (g_γ)*(p) = g%(γ, p) if γ < sup g and (g_γ)*(p) = −γ + δ(p | {0}) if γ ≥ sup g.

The minimal supersolution u_γ of (6.8) is therefore given by the convex Hopf formula with lsc data of Theorem 2.5. Hence,

    u_γ(t, x) = [(g_γ)*(p) + tH(γ, p)]*(x) = [g%(γ, p) + tH(γ, p)]*(x)

if γ < sup g, and u_γ(t, x) = γ if γ ≥ sup g. The minimal supersolution u of Theorem 6.7 is therefore

    u(t, x) = inf{γ ≤ sup g | [g%(γ, p) + tH(γ, p)]*(x) ≤ γ} = [g%(γ, p) + tH(γ, p)]%(x),

by the definition of the second %-conjugate (see Remark 4.3).

3. Lax formula for u_t + H(u, Du) = 0 and lsc data.

Theorem 6.11 Let g : R^n → R ∪ {∞} be a lsc function. Let H : R × R^n → R be a continuous function such that H(γ, p) is nondecreasing in γ and convex, positively homogeneous of degree 1 in p.

Then the function given by the Lax quasiconvex formula

    u(t, x) := inf_{y ∈ R^n} g(y) ∨ H#((x − y)/t)

is the minimal lsc supersolution of (6.6).

Proof. We apply Theorem 6.7 to the function

    g_γ(x) = γ + δ(x | E(γ, g)).

This function is lsc and satisfies E(γ, g) = E(γ, g_γ). Since g_γ is bounded from below and H(γ, ·) is convex, the minimal supersolution u_γ of (6.8) is given by the convex Lax formula with lsc data of Theorem 5.2. Hence,

    u_γ(t, x) = inf_{y ∈ R^n} { γ + δ(y | E(γ, g)) + tH*(γ, (x − y)/t) }.

By the definition of the second quasiconvex #-conjugate of H, we know that H*(γ, ·) = δ(· | E(γ, H#)) (see the appendix). Using these facts, we deduce that u_γ(t, x) ≤ γ if and only if there is y for which g(y) ≤ γ and H#((x − y)/t) ≤ γ. The minimal supersolution u in Theorem 6.7 is therefore given by

    u(t, x) = inf{γ | ∃ y with g(y) ∨ H#((x − y)/t) ≤ γ} = inf_{y ∈ R^n} { g(y) ∨ H#((x − y)/t) }.

Examples. We conclude this section with some examples of the use of the formulas.

1. Consider the problem u_t + |u_x| = 0 on (0, ∞) × R with u(0, x) = g(x) = χ_(0,∞)(x). This initial function is the characteristic function of (0, ∞) and is quasiconvex, lsc, and bounded below. The hamiltonian H(p) = |p| is convex and positively homogeneous of degree one. We may apply either the convex lsc Lax formula or the quasiconvex Hopf formula. Using the Lax formula, we have

    H*(y) = { +∞   if |y| > 1,
            { 0    if |y| ≤ 1,

and

    u(t, x) = inf_z { χ_(0,∞)(z) + tH*((x − z)/t) } = inf_{|z| ≤ 1} χ_(0,∞)(x − zt),

which becomes

    u(t, x) = { 1   if x > t,
              { 0   otherwise.

2. Consider the problem u_t + u|u_x| = 0, with u(0, x) = g(x) = χ_(0,∞)(x). Here we use the quasiconvex Hopf formula,

    u(t, x) = [g#(γ, p) + tH(γ, p)]#(x) = inf{γ | sup_p [p · x − g#(γ, p) − tH(γ, p)] ≤ 0},

with

    g#(γ, p) = { +∞   if γ ≥ 1, p ≠ 0,
               { +∞   if 0 ≤ γ < 1, p < 0,
               { 0    if 0 ≤ γ < 1, p ≥ 0,
               { −∞   if γ < 0, p ≠ 0,

and H(γ, p) = γ|p|. A computation gives the solution

    u(t, x) = min{(x/t)^+, 1},   t > 0.

Observe that this function is continuous for t > 0, so we have a smoothing effect.

3. Consider the problem u_t + (u^+)² u_x^+ = 0, with

    u(0, x) = g(x) = { x    if x ≥ 0,
                     { +∞   if x < 0.

We use the quasiconvex Hopf formula. Then

    g#(γ, p) = { γ p^+   if γ ≥ 0,
               { −∞      if γ < 0,

and

    u(t, x) = [g#(γ, p) + tH(γ, p)]#(x) = { (−1 + √(1 + 4tx))/(2t)   if t > 0, x ≥ 0,
                                          { +∞                       if x < 0.

The function u attains the initial data, u(0, x) = g(x), in the sense of lsc solutions. Finally, u is a smooth solution on {x > 0}.
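As a sanity check on Examples 1 and 2, the closed forms above can be compared against direct numerical evaluation of the formulas. The sketch below is ours, not part of the paper: it discretizes z and p over ad hoc grids and scans γ over [0, 2], using the Lax formula for Example 1 and the quasiconvex Hopf formula (with the conjugate table of Example 2) for Example 2.

```python
# Numerical sanity check (illustrative only; grids and tolerances are ad hoc).
# Example 1: u_t + |u_x| = 0, g = chi_{(0,inf)}. Lax formula:
#   u(t,x) = inf_{|z|<=1} g(x - z t), since H*(y) = 0 for |y| <= 1, +inf otherwise.
# Example 2: u_t + u|u_x| = 0, same g. Quasiconvex Hopf formula:
#   u(t,x) = inf{gamma : sup_p [p x - g#(gamma,p) - t gamma |p|] <= 0}.

def heaviside(x):
    # characteristic function of (0, inf)
    return 1.0 if x > 0 else 0.0

def lax_example1(t, x, n=2001):
    # minimize g(x - z t) over a uniform grid of z in [-1, 1]
    zs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return min(heaviside(x - z * t) for z in zs)

def g_sharp(gamma, p):
    # first quasiconvex conjugate of chi_{(0,inf)}, per the table in Example 2
    if gamma < 0:
        return float("-inf")
    if gamma < 1:
        return float("inf") if p < 0 else 0.0
    return float("inf") if p != 0 else 0.0

def hopf_example2(t, x):
    # smallest gamma on the scan grid for which the Hopf sup-condition holds
    ps = [0.1 * i for i in range(-50, 51)]
    for k in range(0, 201):                     # scan gamma over [0, 2]
        gamma = 0.01 * k
        if all(p * x - g_sharp(gamma, p) - t * gamma * abs(p) <= 1e-9 for p in ps):
            return gamma
    return float("inf")

if __name__ == "__main__":
    # Example 1: u(t,x) = 1 if x > t, else 0
    for t, x in [(1.0, 2.0), (1.0, 0.5), (2.0, -1.0), (0.5, 0.49)]:
        expected = 1.0 if x > t else 0.0
        assert lax_example1(t, x) == expected, (t, x)
    # Example 2: u(t,x) = min{(x/t)^+, 1}
    for t, x in [(1.0, 0.5), (1.0, 3.0), (2.0, -1.0), (2.0, 1.0)]:
        expected = min(max(x / t, 0.0), 1.0)
        assert abs(hopf_example2(t, x) - expected) < 0.02, (t, x)
    print("closed forms match the Lax/Hopf formulas on the test grids")
```

The grid scan also makes the smoothing effect of Example 2 visible: the recovered γ varies continuously in x for t > 0, while the Lax solution of Example 1 keeps the jump of the initial data.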

4. Point source. In this example we first look at the problem u_t + |u_x| = 0 with u(0, x) = g(x) = δ(x | {x₀}). Applying the lsc Hopf formula we get, since g*(p) = p · x₀, that

    u(t, x) = { 0    if |x − x₀| ≤ t,
              { +∞   if |x − x₀| > t.

On the other hand, if we change the equation to u_t + u|u_x| = 0, we now need to apply the quasiconvex Hopf formula with

    g#(γ, p) = { p · x₀   if γ ≥ 0,
               { −∞       if γ < 0,

and we obtain

    u(t, x) = |x − x₀|/t,   t > 0,

a drastic smoothing of the initial data.

7 Appendix : Quasiconvexity

In this section we present some results on quasiconvex functions used in the paper. Quasiconvex functions have been studied for many years; the theory of quasiconvex duality was initiated by Crouzeix [23] and developed by Martinez-Legaz [19], [20], [21], among others.

A quasiconvex function g : R^n → R ∪ {∞} has all of its level sets E(γ, g) convex, γ ∈ R. Equivalently,

    g(λx + (1 − λ)y) ≤ max{g(x), g(y)},   0 < λ < 1, x, y ∈ R^n.

The first quasiconvex conjugate of a function g is given by (see, for instance, [23])

    g#(γ, p) = sup_{x ∈ E(γ, g)} p · x.

This is the support function of the γ-level set of g. The second quasiconvex conjugate is defined by

    g##(x) = inf{γ ∈ R : sup_{p ∈ R^n} [p · x − g#(γ, p)] ≤ 0}.

Since g#(γ, p) is nondecreasing in γ, we can write this identity as

    g##(x) = min{γ ∈ R : sup_{p ∈ R^n} [p · x − g#(γ + 0, p)] ≤ 0}

whenever g##(x) ∈ R. In this appendix we shall assume that g has values in R ∪ {∞} and that it is lsc and quasiconvex. This will imply that g## = g.

We first establish the alternate formula

    g##(x) = sup_{p ∈ R^n} inf{γ ∈ R : p · x − g#(γ, p) ≤ 0}.

To see that this is true, let g₁(x) denote the right-hand side of the preceding formula and let α ∈ R be arbitrary. Then, since g#(γ, p) is homogeneous of degree one in p and nondecreasing in γ,

    {x : g##(x) ≤ α} = {x : sup_{|p| ≤ 1} (p · x − g#(α + 0, p)) ≤ 0}
                     = ∩_{|p| ≤ 1} {x : p · x − g#(α + 0, p) ≤ 0}
                     = ∩_{|p| ≤ 1} ∪_{γ ≤ α} {x : p · x − g#(γ + 0, p) ≤ 0}
                     = {x : g₁(x) ≤ α}.

Since α was arbitrary, we conclude that g##(x) = g₁(x).

The purpose of the rest of the appendix is to show that, in the Hopf formula, the inf can be replaced by a min, or, more precisely, that in (3.3) we can write

    u(t, x) = min{γ ∈ R | sup_{p ∈ R^n} [p · x − g#(γ, p) − tH(γ, p)] ≤ 0}   when u(t, x) ∈ R.   (7.1)

In terms of the level sets, this means that

    {x | (t, x) ∈ E(γ, u)} = {x | sup_{p ∈ R^n} [p · x − g#(γ, p) − tH(γ, p)] ≤ 0}.

Choosing t = 0, this proves in particular that for all γ we have

    E(γ, g##) = {x | (g#)*(γ, x) ≤ 0} = {x | δ(x | E(γ, g)) ≤ 0} = E(γ, g),

because E(γ, g) is a closed convex set. Hence g## = g.

To establish (7.1), the first step is to show that g# has the following regularity in γ:

    (g#)*(γ, x) = (g#)*(γ + 0, x)   for all γ ∈ R, x ∈ R^n.

Here (g#)*(γ + 0, ·) is the Fenchel conjugate of the function g#(γ + 0, ·). Since g#(γ + 0, p) = inf_{γ' > γ} g#(γ', p) and (g#)*(γ', x) = δ(x | E(γ', g)), we get, for all x ∈ R^n,

    (g#)*(γ + 0, x) = sup_{γ' > γ} (g#)*(γ', x) = sup_{γ' > γ} δ(x | E(γ', g))
                    = δ(x | ∩_{γ' > γ} E(γ', g)) = δ(x | E(γ, g)) = (g#)*(γ, x).

A simple interpretation of this result is the following. Taking the Legendre-Fenchel conjugate in the identity, we obtain (g#)**(γ + 0, ·) = g#(γ, ·). But the function g#(γ + 0, ·) is convex, because it is the limit of a decreasing sequence of convex functions. By a classical result in convex analysis (see [22, Thm. 12.2]), we deduce that g#(γ + 0, ·) is proper if and only if g#(γ, ·) is proper. Consequently, when E(γ, g) ≠ ∅, g#(γ, ·) is proper and it is the lsc envelope of g#(γ + 0, ·). On the contrary, when E(γ, g) = ∅, g#(γ + 0, ·) is not proper; hence there is p ∈ R^n such that g#(γ + 0, p) = −∞.
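The identity g## = g for lsc quasiconvex data can also be observed numerically. The sketch below is ours, not from the paper: it takes g(x) = |x| (convex, hence quasiconvex), evaluates the support function g#(γ, p) of the level set E(γ, g) on a grid, and scans γ to recover g##; grid sizes and the tolerance are ad hoc.

```python
# Grid-based illustration (not from the paper) that g## = g for a lsc
# quasiconvex function, here g(x) = |x| on a bounded sample of R.

XS = [-2.0 + 0.01 * i for i in range(401)]   # sample points for x
PS = [-3.0 + 0.05 * i for i in range(121)]   # sample slopes p

def g(x):
    return abs(x)

def g_sharp(gamma, p):
    # first quasiconvex conjugate: support function of the level set E(gamma, g)
    level = [x for x in XS if g(x) <= gamma]
    return float("-inf") if not level else max(p * x for x in level)

def g_sharp_sharp(x):
    # second conjugate: inf{gamma : sup_p [p x - g#(gamma, p)] <= 0}
    for k in range(0, 126):
        gamma = 0.02 * k                      # scan gamma over [0, 2.5]
        if all(p * x - g_sharp(gamma, p) <= 1e-9 for p in PS):
            return gamma
    return float("inf")

if __name__ == "__main__":
    for x in (-1.5, -0.4, 0.0, 0.8, 2.0):
        assert abs(g_sharp_sharp(x) - g(x)) < 0.05, x
    print("g## agrees with g on the test points (up to grid resolution)")
```

When γ is below min g, the level set is empty and g_sharp returns −∞, so the sup-condition fails for every p, matching the convention sup over the empty set = −∞ used in the appendix.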

Remark 7.1 When the level sets of g are compact, one can prove that g# is right continuous in γ (see [7]), so that the above property is trivial. But it is not hard to construct examples for which the right continuity in γ of g# fails, and this is why the result takes a more complicated form.

We now consider a finite continuous hamiltonian H(γ, p) that is nondecreasing in γ and positively homogeneous of degree one in p. Given t ≥ 0, we put k(γ, p) = g#(γ, p) + tH(γ, p). We claim that

    k*(γ, x) = k*(γ + 0, x)   for all γ ∈ R, x ∈ R^n.

Indeed, when E(γ, g) ≠ ∅, then g#(γ, ·) is the lsc envelope of g#(γ + 0, ·), and consequently k(γ, ·) is the lsc envelope of k(γ + 0, ·) because of the continuity of H. When E(γ, g) = ∅, then, for some p, g#(γ + 0, p) = −∞, hence k(γ + 0, p) = −∞. In both cases, we get that k**(γ, p) = k**(γ + 0, p), or k*(γ, x) = k*(γ + 0, x).

By the definition of the second conjugate, we have that

    u(t, x) = k#(x) = inf{γ | k*(γ, x) ≤ 0}.

Since the function γ ↦ k*(γ, x) is nonincreasing, we get

    inf{γ | k*(γ, x) ≤ 0} = min{γ | k*(γ + 0, x) ≤ 0}   when u(t, x) ∈ R.

But k*(γ, x) = k*(γ + 0, x), and therefore we obtain that

    u(t, x) = min{γ | k*(γ, x) ≤ 0}   when u(t, x) ∈ R.

This is (7.1).

References

[1] H. Attouch, Variational Convergence for Functions and Operators, Pitman, London, 1984.

[2] M. Bardi and L.C. Evans, On Hopf's formulas for solutions of Hamilton-Jacobi equations, Nonlinear Anal. TMA, 8 (1984), pp. 1373–1381.

[3] M. Bardi and I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhäuser, Boston, 1997.

[4] G. Barles, Solutions de Viscosité des Équations de Hamilton-Jacobi, Mathématiques & Applications 17, Springer-Verlag, New York, 1994.

[5] E.N. Barron and R. Jensen, Semicontinuous viscosity solutions for Hamilton-Jacobi equations with convex Hamiltonians, Comm. PDE, 15 (1990), pp. 1713–1742.

[6] E.N. Barron and R. Jensen, Optimal control and semicontinuous viscosity solutions, Proc. AMS, 113 (1991), pp. 397–402.

[7] E.N. Barron and W. Liu, Calculus of variations in L∞, Appl. Math. & Optimization, 35 (1997), pp. 237–263.

[8] E.N. Barron and W. Liu, Semicontinuous solutions for Hamilton-Jacobi equations and the L∞ control problem, Appl. Math. & Optimization, 34 (1996), pp. 325–360.

[9] E.N. Barron, R. Jensen, and W. Liu, Hopf-Lax formula for u_t + H(u, Du) = 0, J. Diff. Eqs., 126 (1996), pp. 48–61.

[10] E.N. Barron, R. Jensen, and W. Liu, Hopf-Lax formula for u_t + H(u, Du) = 0, II, Comm. PDE, 22 (1997), pp. 1141–1160.

[11] E.N. Barron, R. Jensen, and W. Liu, Explicit solution of some first order pde's, J. Dynamical and Control Systems, 3 (1997), pp. 1–16.

[12] E.N. Barron, R. Jensen, and W. Liu, Applications of the Hopf-Lax formula for u_t + H(u, Du) = 0, SIAM J. Math. Anal., 29 (1998), no. 4, pp. 1022–1039.

[13] F. Clarke, Y. Ledyaev, R. Stern, and P. Wolenski, Nonsmooth Analysis and Control Theory, Springer, New York, 1998.

[14] L.C. Evans and J. Spruck, Motion of level sets by mean curvature. I, J. Differential Geom., 33 (1991), pp. 635–681.

[15] Y.G. Chen, Y. Giga, and S. Goto, Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations, J. Differential Geom., 33 (1991), pp. 749–786.

[16] H. Ishii and P. Souganidis, Generalized motion of noncompact hypersurfaces with velocity having arbitrary growth on the curvature tensor, Tohoku Math. J. (2), 47 (1995), pp. 227–250.

[17] E. Hopf, Generalized solutions of nonlinear equations of first order, J. Math. Mech., 14 (1965), pp. 951–973.

[18] P.-L. Lions and J.-C. Rochet, Hopf formula and multitime Hamilton-Jacobi equations, Proc. AMS, 96 (1986), pp. 79–84.

[19] J.E. Martinez-Legaz, On lower subdifferentiable functions, in Trends in Mathematical Optimization, K.H. Hoffman, J.B. Hiriart-Urruty, C. Lemaréchal, and J. Zowe, eds., International Series Numer. Math., 84 (1988), pp. 197–232.

[20] J.E. Martinez-Legaz, Quasiconvex duality theory by generalized conjugation methods, Optimization, 19 (1988), pp. 603–652.

[21] J.-P. Penot and M. Volle, On quasiconvex duality, Math. Oper. Res., 15 (1990), pp. 597–625.

[22] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[23] S. Schaible and W.T. Ziemba, eds., Generalized Concavity in Optimization and Economics, Academic Press, New York, 1981; J.-P. Crouzeix, Continuity and differentiability properties of quasiconvex functions on R^n, pp. 109–130; A duality framework in quasiconvex programming, pp. 207–226.

[24] A.I. Subbotin, Generalized Solutions of First Order PDEs, Birkhäuser, Boston, 1995.
