Optimal Control Of A Bioreactor With Modal Switching


Mathematical Models and Methods in Applied Sciences (World Scientific Publishing Company)

OPTIMAL CONTROL OF A BIOREACTOR WITH MODAL SWITCHING

SUZANNE M. LENHART
Department of Mathematics, University of Tennessee, Knoxville, TN 37996, USA
e-mail: [email protected]

THOMAS I. SEIDMAN
Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD 21250, USA
e-mail: [email protected]

JIONGMIN YONG
Laboratory of Mathematics for Nonlinear Sciences and Department of Mathematics, Fudan University, Shanghai 200433, China
e-mail: [email protected]

ABSTRACT: We consider a model for bioremediation of a pollutant by bacteria in a well-stirred bioreactor. A key feature is the inclusion of dormancy for the bacteria, which occurs when the level of a critical nutrient falls below a threshold. This feature gives a discrete component to the system, due to the change in dynamics (governed by a system of ordinary differential equations between transitions) at switches to/from dormancy. After setting the problem in an appropriate state space, we take the control to be the rate of injection of the critical nutrient; the functional to be minimized combines the pollutant level at the final time and the amount of nutrient added. We prove the existence of an optimal control and discuss the transitions between the dormant and active states.

Key Words: Optimal control, existence, bioremediation, modal switching, hybrid system.

Mathematics Subject Classification: 49J15, 93A30, 93C15.

1. Introduction

The concept of bioremediation has become increasingly significant in the treatment of environmental pollutant problems. The particular class of problems we now have in mind fits the following scenario:

Some quantity of an undesirable pollutant is initially located with difficult access; e.g., an oil spill has soaked into the subsoil.

Already present in this location is a bacterial population which can potentially metabolize* the pollutant, but which initially is in a dormant state, lacking some critical nutrient. The introduction of this nutrient is expected to re-activate the bacteria and so to induce the desired removal of the pollutant.

For a mathematical model, the relevant continuous state variables here are:

    σ = available amount of the critical nutrient,
    β = bacterial population (biomass),
    π = amount of pollutant remaining.

Note that we must also include the distinction between 'dormant' and 'active' states for the biomass, say, by a (Boolean) indicator δ = {1 if 'active'; 0 if 'dormant'}. This produces a hybrid state space, involving both discrete and continuous components. This discontinuous alteration of states is the 'modal switching' of our title.

We defer consideration of a Distributed Parameter System (DPS) model treating the spatial variation which is likely to be significant for a realistic in situ bioremediation problem; see the related work of Butera, Fitzpatrick and Wypasek [3] on contaminant transport. Instead, for our present analysis, we treat a situation that is closer to a well-stirred tank bioreactor, in which the pollutant and the bacteria are present in spatially uniform, time-varying concentrations. Thus σ, β, π will be functions of t only, and the dynamics will involve ordinary differential equations between the state transitions. We note three recent papers involving optimal control of bioreactors, [8], [16], [9]. Two of these papers treat DPS models; the interesting feature of "well clogging" is treated in [16]. None of these papers considers the feature of dormancy, which is a principal concern here. A paper by Bruni and Koch [2] treats a related idea of modeling the cell cycle with quiescent compartments. See the work by Bellomo and De Angelis [1] and by Firmani, Guerri, and Preziosi [5] for modeling of the cellular interactions involved in immunology models of tumor dynamics. The work by Fister and Panetta [6] involves optimal control and cell interaction in cancer chemotherapy.

It is clear from the above that determination of the rate u(·) of injection of the critical nutrient as a function of time may be viewed as an optimal control problem, namely, balancing the costs of this procedure against the benefit of reduced pollutant. Feedback mechanisms are not easily implemented and we consider the problem in 'open loop'.

One notes that the modelling is somewhat uncertain, due to the complexity of the structures involved and to parameters which are difficult to estimate. For our present purposes we will, somewhat artificially, select a plausible structure which exemplifies the rather interesting technical details which may arise.

* Footnote: This phrasing suggests that bacteria are directly consuming the pollutant, which, in turn, might suggest that the growth rate of the bacteria would depend on the availability of pollutant. While consistently using this language here, we note that in our scenario the bacteria do not actually consume the pollutant directly, but that the consumption of other nutrients by the bacteria leads to the transformation of the pollutant into a less hazardous substance; this type of bioremediation is more properly called "cometabolism." We note, for example, that paraxylene is degraded by metabolism while trichloroethylene is degraded by cometabolism [14], [17].

Chief among these are:

- the nature of the relevant cost functional, which might plausibly be a weighted sum of the amount of pollutant remaining at the end of the time considered and the total amount of nutrient added (effectively, the L1-norm of the control function u), so one is working in a context involving a non-reflexive Banach space, and

- the discontinuous transitions between dormant and active states, which dominate our treatment of the model under consideration.

Our primary goal in this paper is to prove existence of an optimal control in an application. In Section 2, we model the problem in greater detail, giving the precise definition of solutions to our control system; existence of solutions is shown in Section 3. Section 4 is devoted to a relevant compactness result; the existence of an optimal control is then a corollary to that analysis. We proceed in Section 5 to comment on some characterization of such optimal controls. Finally, in Section 6 we comment on some results and conjectures for related problems.

2. Modelling the problem

We are considering a finite horizon problem with a specified time interval [0, T] and with a given initial amount of pollutant. We seek to minimize the cost functional

    J := ∫_0^T u(t) dt + b π(T).    (2.1)

We assume that the biomass may, at any time, be in either of two conditions (modes): active or dormant. For present purposes we model the transitions as governed entirely by the nutrient level and occurring 'instantly':

- if σ drops below σ_*, then the bacteria become dormant,
- if σ rises above σ^*, then the bacteria are reactivated,

where σ_* and σ^* are given constants with 0 < σ_* < σ^* (a small illustrative sketch of this two-threshold rule appears below). We will discuss in more detail later such limit cases as, e.g., σ(t) falling to σ_* at t = τ and then rising or, perhaps, remaining at σ_* on an interval [τ, τ′].

We note [13], [7] that exogenous dormancy is imposed by an unfavorable chemical or physical environment: the uptake and the metabolism of organic compounds sometimes stops in such settings. On the other hand [4], [10], the resulting spores will then germinate immediately on change to a favorable environment. We are assuming here that the bioreactor is rich in all nutrients required by the bacteria for growth, with a single exception, whose supply we may then identify as 'critical.' The addition of quantities of this nutrient will then constitute the controllable aspect of the dynamics. Implicit in this is that the pollutant will never itself be a 'critical nutrient' for the bacteria.
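Since the activation/deactivation rule just described is a simple two-threshold (hysteresis) rule, a minimal sketch may help fix ideas. The function below is illustrative only; the names sigma_lo and sigma_hi are hypothetical stand-ins for σ_* and σ^* and are not taken from the paper.

```python
def update_mode(active, sigma, sigma_lo, sigma_hi):
    """Two-threshold (hysteresis) rule for the dormancy indicator.

    active   -- current mode: True = active, False = dormant
    sigma    -- current nutrient level
    sigma_lo -- lower threshold sigma_* (dropping below it forces dormancy)
    sigma_hi -- upper threshold sigma^* (rising above it forces reactivation)
    """
    if active and sigma < sigma_lo:
        return False          # bacteria become dormant
    if (not active) and sigma > sigma_hi:
        return True           # bacteria are reactivated
    return active             # otherwise the mode is unchanged
```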

We assume that in a 'full transition cycle' (active → dormant → active) a (fixed) fraction (1 - α) of the bacteria fail to survive: the new value β(τ′+) right after reactivation at t = τ′ ≥ τ is αβ(τ-), where β(τ-) is the original value before dormancy. For convenience, we will allocate this loss of biomass entirely to the moment of transition to dormancy and take β(τ+) = αβ(τ-), noting that the value β(·) is irrelevant for our control problem during the dormant interval until reactivation.

We are assuming here that there is a flux through the bioreactor with given flow rate. We assume the bacteria and the pollutant are insoluble and remain spatially fixed, but that the additive nutrient is soluble and is introduced by this flux (nominally, at rate u = control) and also is carried in the outflow. When the biomass is dormant, the dynamics takes a very simple form:

    dσ/dt = -λσ + u;    β, π constant,    (2.2)

where λ > 0 is a constant determined by the flow rate in relation to the volume of the bioreactor. The constancy of β and π here is, of course, the meaning of 'dormant'. When the bacteria are active, on the other hand, we have bacterial growth with rate μ and metabolism at rates φ, ψ of the critical nutrient and of the pollutant, respectively, so

    dσ/dt = -λσ - φβ + u,    dβ/dt = μ,    dπ/dt = -ψβ.    (2.3)

The injection rate u(·) is necessarily non-negative; we do note that it is quite plausible (certainly as an idealization) to permit consideration of the injection of a bolus of nutrient, so u(·) may contain δ-functions. This is very much a question of time scales in the modelling: any model we consider is an idealization of reality, and a fixed amount of injected nutrient will appear as a δ-function if it is injected in a time which is short compared to the time scale of normal interest. The instantaneity of the state transitions is to be viewed similarly.

The (net) bacterial growth rate μ which we consider is actually a balance between multiplication and death. It is plausibly an increasing function of the concentration of the critical nutrient: negative for σ near the transition level σ_*, but positive when σ would reach σ^*. For simplicity we take μ = γβ with γ = γ(σ). This is consistent with our underlying assumption that all other requirements for bacterial growth are abundantly met (biomass much smaller than the environmental carrying capacity), but we note that the alternative, saturation of μ as the biomass gets closer to the carrying capacity, would not significantly affect our arguments. The dormancy phenomenon ensures that we need consider μ only for σ ≥ σ_* > 0, and it will follow from the equations for σ that σ ≤ σ_0 + ∫_0^t u, implying boundedness. One then easily gets a bound for β for finite T; a Monod model [11] or a term μ = γ(σ)β/(k_1 + β) would, of course, bound β a priori.

It is plausible to take φ = φ(σ) and ψ = ψ(σ) (notationally suppressing the implicit specification that the bacteria cannot consume nonexistent pollutant: ψ = 0 when π = 0), with φ(σ), ψ(σ) > 0 for σ ≥ σ_*, π > 0.
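As a rough numerical illustration of the mode-dependent dynamics (2.2)-(2.3), the sketch below integrates the system with a crude Euler step and the two-threshold switching rule, including the biomass loss factor α at each transition to dormancy. All constitutive functions (gamma, phi, psi) and all numerical values are hypothetical placeholders chosen only for illustration, and an ordinary rate function u is assumed (a bolus of nutrient would instead be handled as a jump of the cumulative control introduced in Section 3).

```python
def simulate(u, T=10.0, dt=1e-3, lam=0.5, alpha=0.6,
             sigma_lo=0.2, sigma_hi=1.0,
             sigma0=0.0, beta0=0.1, pi0=1.0):
    """Euler integration of the hybrid dynamics (2.2)-(2.3).

    u        -- nutrient injection rate, a function of time
    lam      -- washout rate lambda (flow rate / reactor volume)
    alpha    -- fraction of the biomass surviving a full dormancy cycle
    sigma_lo, sigma_hi -- thresholds sigma_* and sigma^*
    """
    # illustrative constitutive functions (placeholders, not from the paper)
    gamma = lambda s: 0.8 * (s - sigma_lo)   # net growth rate, mu = gamma(sigma)*beta
    phi   = lambda s: 0.3 * s                # nutrient uptake per unit biomass
    psi   = lambda s: 0.2 * s                # pollutant degradation per unit biomass

    sigma, beta, pi = sigma0, beta0, pi0
    active = False                           # dormant at t = 0, by convention
    t, nutrient_added = 0.0, 0.0
    while t < T:
        # two-threshold switching rule
        if active and sigma < sigma_lo:
            active = False
            beta *= alpha                    # biomass loss charged at this moment
        elif (not active) and sigma > sigma_hi:
            active = True
        if active:                           # active dynamics (2.3)
            dsigma = -lam * sigma - phi(sigma) * beta + u(t)
            dbeta  = gamma(sigma) * beta
            dpi    = -psi(sigma) * beta if pi > 0.0 else 0.0
        else:                                # dormant dynamics (2.2)
            dsigma = -lam * sigma + u(t)
            dbeta  = 0.0
            dpi    = 0.0
        sigma += dt * dsigma
        beta  += dt * dbeta
        pi     = max(0.0, pi + dt * dpi)
        nutrient_added += dt * u(t)          # running value of the integral of u
        t += dt
    return sigma, beta, pi, nutrient_added

# example: inject at a constant rate during the first two time units
print(simulate(lambda t: 1.5 if t < 2.0 else 0.0))
```

With b as in (2.1), the cost of such a run would be nutrient_added plus b times the final value of pi.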

For simplicity, we assume the functions μ, φ and ψ are smooth on the relevant domains (except for the treatment of ψ at π = 0), although simple continuity (or weaker!) would be sufficient for the existence arguments of Section 3. Note that with σ(0) ≥ 0, β(0) > 0, π(0) > 0 and λ, u ≥ 0, we can expect to have σ(t), β(t), π(t) ≥ 0 for all time, and we will show this as a mathematical consequence below.

3. The State Equation

Our goal in this section is to investigate our state equation, proving the existence of a solution and establishing some properties of the solutions. First, let us note that the cost functional we consider is coercive for the L1-norm and, as has already been mentioned, we intend to admit δ-functions in u(·). It is therefore convenient to reformulate the dynamics in terms of

    U(t) = cumulative nutrient provided = σ_0 + ∫_0^t u(s) ds,
    c(t) = nutrient consumed = U(t) - σ(t),
    δ    = {0 during dormancy; 1 when active}.

Thus, an admissible control function U(·) will be any monotone (nondecreasing) function on [0, T]. We denote the set of all admissible controls by 𝒰. For any given U(·) ∈ 𝒰, we expect c(·), β(·), π(·) to be continuous, except for the jumps in β(·) at times of transition from active to dormant.

In terms of the new variables, we now have, between modal transitions:

    dc/dt = λ(U - c) + δφβ,    dβ/dt = δμ,    dπ/dt = -δψβ,    (3.1)

with δ indicating the mode as above and with

    μ = μ(U - c, β),    φ = φ(U - c),    ψ = ψ(U - c).    (3.2)

We assume that μ(·, ·), φ(·), ψ(·) are smooth (e.g., C1) functions of their arguments for σ ≥ σ_* > 0 and β > 0, with φ and ψ positive there. It would be reasonable to expect that the constitutive functions μ, φ, ψ would each be nondecreasing in σ, since this only means that consumption and reproduction would not become more difficult (per bacterium) if more of the critical nutrient is available; we will find it convenient later to require, for Theorem 5, that μ be nondecreasing and ψ increasing as functions of σ, but will not impose this as a requirement for the time being. We require that

    γ̄ := sup{ |μ(σ, β)/β| : σ_* ≤ σ ≤ U(T) }    (3.3)

is finite and set φ̄ := max{ φ(σ) : σ_* ≤ σ ≤ U(T) }. One might also permit some other dependencies, but we will simply take these as in (3.2).

We must now be somewhat more precise about the transition rules; compare [15]. Consider, first, the case in which we are in the 'dormant' mode at time τ-, so U(τ-) - c(τ) =: σ(τ-) ≤ σ^*. We will see that c(·) is nondecreasing with 0 = c(0) ≤ c(t) ≤ U(t), so σ can increase only by control action: increasing U faster than c, perhaps admitting a jump in U. If τ is such that σ(τ+) ≥ σ^*, then we permit a transition to the active mode (with (c, β, π) continuous at τ), with the transition mandatory if σ(τ+) > σ^* or if continuation of the dormant mode would have σ(t) increasing above σ^* for t near τ+. Although we will later see that, for the cost functionals we consider, it can never be desirable to do so, it is admissible in principle that we might have σ(τ+) = σ^* and then control so that σ(t) ≡ σ^* on some interval by taking u(t) = dU/dt = λσ^* on this interval; we then consider it a further aspect of control to determine the length of this interval before either making the transition or having σ again drop strictly below σ^*.

Next, consider the case in which we are in the 'active' mode at time τ-, with U(τ-) - c(τ) =: σ(τ-) = σ_*. We now permit a transition to dormancy and require this transition if σ would fall below σ_* as t increases past τ. At such a transition time, c, π are to remain continuous, but β(τ+) = αβ(τ-) with α ∈ (0, 1) given. Again, it would be admissible to control so that σ(t) ≡ σ_* on some interval, by taking the control

    u(t) = dU/dt = λσ_* + φ(σ_*) β̂(t)

on this interval (obtaining β̂ by solving dβ̂/dt = μ(σ_*, β̂)), and, as above, we then take the possibility and timing of a transition to dormancy to be a further element of control. Note that if we have a transition to dormancy at time τ, then it is admissible (if one were to have a jump of at least (σ^* - σ_*) in U) to have an immediate reactivation; this transition 'back' to the active mode is formally taken as subsequent to the transition to dormancy (although the transition times are numerically the same), and we do impose the loss of biomass associated with the full cycle.

Summarizing, a 'solution' of the system for a given admissible control function U ∈ 𝒰 consists of a (possibly empty, but finite) set of transition times (0 ≤ τ_1 ≤ τ_2 ≤ … ≤ τ_N ≤ T), alternately to active and dormant modes (for definiteness, we always assume that the system is dormant at t = 0, so τ_1 is a transition time to the active mode), together with the functions c, β, π, subject to the defining conditions:

[C1] On each open interval (0, τ_1), …, (τ_{N-1}, τ_N), (τ_N, T), the functions c, β, π satisfy the differential system (3.1).

[C2] The functions c, β, π are continuous with c(0) = 0, β(0) = β_0 > 0, π(0) = π_0 > 0, except that for a transition from active to dormant we have β(τ_{2k}+) = αβ(τ_{2k}-).

[C3] At a transition time τ, we have

    U(τ) - c(τ) =: σ(τ) ≥ σ^*      (activation: τ = τ_{2k+1}),
    U(τ-) - c(τ) =: σ(τ-) = σ_*    (transition to dormancy: τ = τ_{2k}).

[C4] While active we have σ(t) ≥ σ_*; during dormancy, σ(t) ≤ σ^*.

Our main result of this section is the following.

THEOREM 1: Let U(·) ∈ 𝒰 be any admissible control. Then the system has a solution in the sense above, which, moreover, satisfies

    0 ≤ c(t) ≤ U(t)  for all t ∈ [0, T],    (3.4)

    c(t) nondecreasing,  π(t) nonincreasing,  β(t), π(t) > 0,  t ∈ [0, T],    (3.5)

and

    τ_{2k+2} - τ_{2k+1} ≥ Δ := (σ^* - σ_*) / (λ U(T) + φ̄ β_0 e^{γ̄ T}),    (3.6)

implying N ≤ 1 + [T/Δ].

Proof: For a given admissible control U(·) and some T′ ≤ T, suppose (c, β, π) with transition times 0 ≤ τ_0 ≤ τ_1 ≤ … ≤ τ_k is a 'restricted solution' of the dynamic system, i.e., satisfying the differential equations (3.1) and the switching rules on the restricted segment [0, T′]. We first prove the a priori properties (3.4), (3.5), and (3.6) for such a solution. The existence of a solution on [0, T] will then follow easily by continuation in t and induction on k.

When the mode is 'active', we have σ(t) := U(t) - c(t) ≥ σ_* > 0. Thus, c(t) ≤ U(t) on any active interval and at the beginning t = τ- of any interval of dormancy (noting also that c(0) = 0 ≤ U(0) for the case τ = 0). Suppose there were some t̄ at which c(t̄) > U(t̄). Then there would be some t_0 < t̄ with c(t_0) = U(t_0) and with c(t) > U(t) for t ∈ (t_0, t̄]. Since U(·) is nondecreasing, we must then have c(t̄) > U(t̄) ≥ U(t_0) = c(t_0) while, on the other hand,

    c(t̄) - c(t_0) = ∫_{t_0}^{t̄} (dc/dt) dt = λ ∫_{t_0}^{t̄} [U(t) - c(t)] dt < 0.

This contradiction proves (3.4). This result is hardly surprising: it merely says that one cannot consume more nutrient than has already been put in.

Next, observe that β > 0 always: we have β(0) > 0 and |(dβ/dt)/β| ≤ γ̄ between modal transitions (while, at the finitely many transitions to dormancy, we have a jump reduction in β, but always by a fixed fraction, so β remains positive there as well). It follows that c is nondecreasing (as dc/dt = λ(U - c) + δφβ ≥ 0), so c(t) ≥ c(0) = 0, and it then follows that

    0 ≤ U(t) - c(t) =: σ(t) ≤ U(t) ≤ U(T).

By the definition of γ̄, we have dβ/dt ≤ γ̄β, so β ≤ β_0 e^{γ̄ T}. If we are to have an activation at time τ_n and a transition back to dormancy at τ_{n+1}, then (since U is nondecreasing and σ(τ_n+) ≥ σ^*, σ(τ_{n+1}) ≤ σ_*) we must have

    σ^* - σ_* ≤ c(τ_{n+1}) - c(τ_n)
              = ∫_{τ_n}^{τ_{n+1}} { λ[U(s) - c(s)] + φ(U(s) - c(s)) β(s) } ds
              ≤ [λ U(T) + φ̄ β_0 e^{γ̄ T}] (τ_{n+1} - τ_n).

Condition (3.6) follows, bounding N as noted; the monotonicity of π(·) is clear.

With these observations in hand, we obtain existence of a solution by proceeding stepwise, terminating when t = T. To show existence it is sufficient, and simplest, to describe the particular solution which is obtained by always making a modal transition as soon as possible (a rough computational transcription of this construction is sketched at the end of this section). We may have U(·) beginning with an immediate jump to (at least) σ^*, giving τ_1 = 0; otherwise there is an initial interval of dormancy until the (first) moment τ_1 at which, whether by a jump or by having the increase of U be more rapid than the increase of c, one has σ = U - c ≥ σ^*, if this ever does occur. The system then satisfies the 'active' equations in (3.1) until a time τ_2 at which U - c = σ_*, and one switches to 'dormant' mode with a jump decrease in β, and so on. We have already seen that the number of transitions is necessarily finite, so this provides a solution in the sense defined earlier.

From Theorem 1 we see that for any U(·) ∈ 𝒰, if (c, β, π) is a corresponding solution, then any active interval (except possibly the last) is nontrivial: the length of such an interval is at least Δ > 0. Note that if σ^* were equal to σ_*, we would lose the significance of (3.6) in providing a finite bound for N.

A key feature of our treatment of the transitions is that the rules allow some non-uniqueness of the solutions, so one would not have well-posedness in the usual sense. Indeed, in this section we might have taken the constitutive functions μ, φ and ψ only to be continuous, and one would then have the additional possibility of non-uniqueness through bifurcation within the modal intervals. One does, however, have a closure property: the limit of solutions is a solution as one varies the data or the control function; compare the more detailed discussion in [15]. Aspects of this dominate the discussion in [15], etc., and this property is essential to our principal result on the existence of optimal controls. We will discuss this issue in the following section.
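The following sketch is a crude computational transcription of the 'switch as soon as possible' construction used in the proof above, written in the (U, c) variables of (3.1). It is only illustrative: the constitutive functions and numbers are hypothetical placeholders, the control is supplied as a nondecreasing cumulative function U (so a bolus appears as a jump), and the fixed-step integration ignores the sensitivity issues discussed in Section 6.

```python
def solve_state(U, T=10.0, dt=1e-3, lam=0.5, alpha=0.6,
                sigma_lo=0.2, sigma_hi=1.0, beta0=0.1, pi0=1.0):
    """Construct the 'switch as soon as possible' solution of (3.1) for a
    given nondecreasing cumulative control U (jumps in U are allowed)."""
    gamma = lambda s: 0.8 * (s - sigma_lo)   # placeholder: mu(sigma, beta) = gamma(sigma)*beta
    phi   = lambda s: 0.3 * s                # placeholder uptake rate
    psi   = lambda s: 0.2 * s                # placeholder degradation rate

    c, beta, pi = 0.0, beta0, pi0
    active = False                           # dormant at t = 0, by convention
    transitions = []                         # recorded transition times tau_1 <= tau_2 <= ...
    t = 0.0
    while t < T:
        sigma = U(t) - c                     # available nutrient, sigma = U - c
        # make a modal transition as soon as [C3]-[C4] permit it
        if (not active) and sigma >= sigma_hi:
            active = True
            transitions.append(t)
        elif active and sigma <= sigma_lo:
            active = False
            beta *= alpha                    # biomass loss for the full cycle
            transitions.append(t)
        delta = 1.0 if active else 0.0
        dc    = lam * sigma + delta * phi(sigma) * beta   # dynamics (3.1)
        dbeta = delta * gamma(sigma) * beta
        dpi   = -delta * (psi(sigma) * beta if pi > 0.0 else 0.0)
        c    += dt * dc
        beta += dt * dbeta
        pi    = max(0.0, pi + dt * dpi)
        t    += dt
    return transitions, c, beta, pi

# example: a single bolus of nutrient at t = 1 (a jump in U), nothing afterwards
print(solve_state(lambda t: 0.0 if t < 1.0 else 2.0))
```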

4. Existence of Optimal Controls

The standard argument for existence of optima is to restrict consideration to a set of controls for which one has compactness in some sense (such that some minimizing sequence 'converges') and then, from this, to deduce adequate convergence of the state and of the cost functional. Here, the compactness will be given by Helly's Theorem (cf., e.g., [12]), but the nature of our dynamics (specifically, the possibility of discontinuous modal transitions) makes the subsequent treatment somewhat delicate.

THEOREM 2: Let U_ν ∈ 𝒰 be a pointwise convergent sequence of controls, so U_ν(t) → Ū(t) for each t ∈ [0, T] with Ū ∈ 𝒰. Let (c_ν, β_ν, π_ν) be corresponding solutions of the system with transition times 0 ≤ τ^ν_0 ≤ τ^ν_1 ≤ … ≤ τ^ν_{N_ν} ≤ T. Then there exist a subsequence, again indexed simply by ν, and limits (c̄, β̄, π̄) with transition times 0 = τ̄_0 ≤ … ≤ τ̄_N ≤ T such that, as ν → ∞,

    N_ν = N for some fixed N ≥ 0,
    τ^ν_n → τ̄_n for each n = 1, …, N,
    (c_ν, π_ν) → (c̄, π̄) uniformly on [0, T],
    β_ν → β̄ uniformly on compact sets in each (τ̄_n, τ̄_{n+1}).    (4.1)

[If desired, this subsequence can be chosen so that the convergence τ^ν_n → τ̄_n is monotone for each n.] Finally, (c̄, β̄, π̄) is a solution corresponding to the control Ū(·).

Proof: By the Dominated Convergence Theorem, we also have L1 convergence of U_ν to Ū; as U_ν(T) → Ū(T), these are uniformly bounded, so all the estimates obtained in the proof of Theorem 1 hold uniformly in ν. In particular, one has a bound on N_ν (the number of transitions), so we may again extract a subsequence with each N_ν = N for some fixed N. Since [0, T] is compact, we may extract a further subsequence such that, for each given n = 1, 2, …, N, either τ^ν_n ↓ τ̄_n or τ^ν_n ↑ τ̄_n. Next, note that we have bounds (uniformly in ν, t) for dc_ν/dt, dβ_ν/dt and dπ_ν/dt on the modal intervals. Thus {c_ν, β_ν, π_ν} is equicontinuous there, so, for some further subsequence, we have uniform convergence of c_ν → c̄ and π_ν → π̄ on [0, T], and convergence of β_ν → β̄ uniformly on each compact sub-interval of (τ̄_n, τ̄_{n+1}). [We have abused notation slightly by continuing to index simply by ν through all the subsequence extractions.]

We must now verify that {τ̄_1, …, τ̄_N}, together with the limit functions (c̄, β̄, π̄), satisfy the set of conditions defining a solution of the system associated with the limit control function Ū.

To verify [C1], we note that in integrated form we have, e.g., for [s, t] within any active interval such as (τ̄_1, τ̄_2),

    c_ν(t) = c_ν(s) + ∫_s^t { λ[U_ν(r) - c_ν(r)] + φ(U_ν(r) - c_ν(r)) β_ν(r) } dr    (4.2)

for τ̄_1 < s < t < τ̄_2 and ν ≥ ν̄, taking ν̄ large enough to ensure that this implies also that τ^ν_1 < s < t < τ^ν_2. Observe that, by our construction, c_ν, β_ν converge uniformly to c̄, β̄ on [s, t] and U_ν → Ū (pointwise, hence in L1), with the integrand bounded uniformly in ν. Thus, (4.2) gives, in the limit,

    c̄(t) = c̄(s) + ∫_s^t { λ[Ū(r) - c̄(r)] + φ(Ū(r) - c̄(r)) β̄(r) } dr,    (4.3)

which, in view of the arbitrariness of s, t, gives the appropriate differential equation for c̄ on (τ̄_1, τ̄_2). Essentially the same argument can be used to verify the equations for β̄ and for π̄ on this interval, and again for each of these functions on each of the relevant intervals (where nonempty) as appropriate.

Condition [C2] concerning the initial conditions is immediate, as the initial conditions are independent of ν.

To look at the correct jumps for β̄ at t = τ̄_{2k}, we introduce

    b_ν(t) = α^{-k} β_ν(t)  for  τ^ν_{2k} < t ≤ τ^ν_{2k+2},

so that b_ν is continuous across the transitions to dormancy. Thus, for some subsequence and some b, we have b_ν → b uniformly on any closed interval [τ̄_{2k} - ε, τ̄_{2k} + ε] with ε > 0 small enough that τ̄_{2k-1} < τ̄_{2k} - ε. Note that

    β̄(t) = α^k b(t)  for  τ̄_{2k} < t ≤ τ̄_{2k+2},

so the jumps in β̄ at τ̄_{2k} are correct.

The most delicate (and original) aspect of this analysis is the treatment of [C3], [C4]. To verify [C3] for n odd, note that for arbitrary ε > 0 we have

    U_ν(τ^ν_n) ≥ c_ν(τ^ν_n) + σ^* ≥ c̄(τ̄_n) + σ^* - ε

for large enough ν, since the uniform convergence c_ν → c̄ together with τ^ν_n → τ̄_n gives c_ν(τ^ν_n) → c̄(τ̄_n). By assumption, Ū is continuous to the right, so we also have

    Ū(τ̄_n) ≥ Ū(τ̄_n + η) - ε  for 0 < η ≤ η(ε),

and U_ν(τ^ν_n) ≤ U_ν(τ̄_n + η) by the monotonicity of each U_ν, taking ν large enough that τ^ν_n ≤ τ̄_n + η. Combining these gives

    c̄(τ̄_n) + σ^* - ε ≤ U_ν(τ^ν_n) ≤ U_ν(τ̄_n + η) → Ū(τ̄_n + η) ≤ Ū(τ̄_n) + ε,

so Ū(τ̄_n) ≥ c̄(τ̄_n) + σ^* - 2ε for arbitrarily small ε > 0, whence Ū(τ̄_n) - c̄(τ̄_n) ≥ σ^*, as desired.

The argument for n even is similar, but slightly more delicate, as we need equality. Given ε > 0, we have, for large ν,

    U_ν(τ^ν_n -) = c_ν(τ^ν_n) + σ_* ≤ c̄(τ̄_n) + σ_* + ε.

By monotonicity, U_ν(τ̄_n - η) ≤ U_ν(τ^ν_n -) for any η > 0 with ν so large that τ^ν_n > τ̄_n - η. Thus, letting ν → ∞, we have Ū(τ̄_n - η) ≤ c̄(τ̄_n) + σ_* + ε for η > 0, so letting η → 0 we have Ū(τ̄_n -) ≤ c̄(τ̄_n) + σ_* + ε for arbitrary ε > 0, whence Ū(τ̄_n -) - c̄(τ̄_n) ≤ σ_*. To show equality, we choose η > 0 so that for large ν one has τ̄_n - η < τ^ν_n, so the ν-th system is 'active' at τ̄_n - η, whence by [C4] for that system we have

    U_ν(τ̄_n - η) - c_ν(τ̄_n - η) ≥ σ_*.

Now first taking ν → ∞ and then η → 0+ (noting the continuity of c̄), we obtain Ū(τ̄_n -) - c̄(τ̄_n) ≥ σ_*, and so [C3].

The argument for [C4] is much like the final part of the argument above. For any (fixed) t during what is to be an 'active' interval for the limit problem (so τ̄_n < t < τ̄_{n+1}, with n odd), we will have τ^ν_n < t < τ^ν_{n+1} (so the ν-th system is also active) for large enough ν. By [C4] for that system we then have U_ν(t) - c_ν(t) ≥ σ_*, and letting ν → ∞ gives [C4] here. Similarly, for t in the interior of a dormant interval, we have the ν-th system dormant at t for large ν, whence U_ν(t) - c_ν(t) ≤ σ^*, and we get [C4] here as well when ν → ∞.

This completes our verification of [C1]-[C4] in the limit.

This theorem provides the requisite form of 'well-posedness' under pointwise convergence of control functions, and the existence of an optimizer for J is then immediate.

THEOREM 3: There is an optimal control, i.e., J attains its minimum J̄ over admissible control functions U(·) and corresponding solutions, subject to the system dynamics as described above.

Proof: Take a minimizing sequence {U_ν} with associated solutions {(c_ν, β_ν, π_ν)} (and transition times {τ^ν_n}). Since U_ν(T) ≤ σ_0 + J_ν with J_ν → J̄, and each U_ν(·) is nondecreasing, Helly's Theorem gives the existence of a subsequence such that U_ν(t) → Ū(t) pointwise everywhere on [0, T] with Ū admissible, nondecreasing, and Ū(0) = σ_0. By Theorem 2, we can extract a further subsequence for which U_ν, c_ν, β_ν, π_ν are suitably convergent to Ū, c̄, β̄, π̄, in such a way that (c̄, β̄, π̄) (and {τ̄_n}) provide a solution for the admissible control Ū. Clearly, the convergence of π_ν(T) → π̄(T) and of U_ν(T) → Ū(T) ensures that, in the limit, J = J̄, i.e., that Ū (together with auxiliary choices, if any, used to get this solution) is an optimal control.

5. Characterization

In this section, we derive necessary conditions satisfied by the controls we consider. These are not intended to be the full set of 'first order (necessary) conditions for optimality', which we hope to discuss in more detail in a subsequent paper, but only some preliminary comments on the characterization of optimal controls and optimally controlled solutions, especially with regard to the 'switching structure'. In what follows, we let (Ū, c̄, β̄, π̄) be optimal, with transition times satisfying

    τ̄_0 ≡ 0 ≤ τ̄_1 ≤ τ̄_2 ≤ … ≤ τ̄_N ≤ T.

[By our convention, the bacteria are dormant in [τ̄_{2k}, τ̄_{2k+1}) and are active in [τ̄_{2k+1}, τ̄_{2k+2}) for each k ≥ 0.] We also denote σ̄ = Ū - c̄.

Our first result is that the optimal control Ū is continuous at t = T, i.e., Ū(T-) = Ū(T). [To see this, suppose Ū(T-) < Ū(T). We then define Ũ(t) to be the same as Ū(t), except redefined at t = T so that Ũ(T) = Ū(T-). Then Ũ ∈ 𝒰 and (c̄, β̄, π̄) is a solution of the state equation under the control Ũ. Clearly,

    J(Ũ) = J(Ū) - [Ū(T) - Ū(T-)] < J(Ū),

which contradicts the assumed optimality of Ū.] This result is not very surprising, since a possible jump at time t = T in the control U does not play any role in reducing the pollutant π at t = T, so an optimal control Ū will not incur the cost of such a jump. The following result is more interesting.

THEOREM 4: Let the situation be as described above. Then (except possibly at the beginning or end) there are no non-trivial dormancy sub-intervals, i.e.,

    τ̄_{2k} = τ̄_{2k+1}  for all k ≥ 1 with 2k < N.    (5.1)

In addition, if τ̄_1 > τ̄_0 := 0, then Ū(t) ≡ Ū(0) on [0, τ̄_1) (no nutrient is added before the first activation), and if N = 2M with τ̄_N < T, then Ū(t) ≡ Ū(τ̄_N) on [τ̄_N, T].

Proof: Suppose for some k ≥ 1 with 2k < N we have τ̄_{2k} < τ̄_{2k+1}. Note that the bacteria are dormant in [τ̄_{2k}, τ̄_{2k+1}) and active on [τ̄_{2k-1}, τ̄_{2k}) and [τ̄_{2k+1}, τ̄_{2k+2}). Pick any τ ∈ (τ̄_{2k}, τ̄_{2k+1}). One has

    c̄(τ) < c̄(τ̄_{2k+1}),    β̄(τ) = β̄(τ̄_{2k}),    π̄(τ) = π̄(τ̄_{2k}).

Now define

    Ũ(t) = Ū(t)                                        on [0, τ),
           Ū(t - τ + τ̄_{2k+1}) - c̄(τ̄_{2k+1}) + c̄(τ)    on [τ, T + τ - τ̄_{2k+1}),
           Ū(T) - c̄(τ̄_{2k+1}) + c̄(τ)                   on [T + τ - τ̄_{2k+1}, T],

    c̃(t) = c̄(t)                                        on [0, τ),
           c̄(t - τ + τ̄_{2k+1}) - c̄(τ̄_{2k+1}) + c̄(τ)    on [τ, T + τ - τ̄_{2k+1}),

    β̃(t) = β̄(t)                                        on [0, τ),
           β̄(t - τ + τ̄_{2k+1})                          on [τ, T + τ - τ̄_{2k+1}),

    π̃(t) = π̄(t)                                        on [0, τ),
           π̄(t - τ + τ̄_{2k+1})                          on [τ, T + τ - τ̄_{2k+1}).

Note that on [T + τ - τ̄_{2k+1}, T] we can solve the differential equations with U = Ũ to obtain c̃(·), β̃(·) and π̃(·). Thus (Ũ, c̃, β̃, π̃) is admissible, and

    Ũ(T) = Ū(T) - c̄(τ̄_{2k+1}) + c̄(τ) < Ū(T),
    π̃(T) ≤ π̃(T + τ - τ̄_{2k+1}) = π̄(T),

which contradicts the optimality of (Ū, c̄, β̄, π̄):

    J(Ũ) = [Ũ(T) - σ_0] + b π̃(T) < [Ū(T) - σ_0] + b π̄(T) = J(Ū).

In the case τ̄_1 > τ̄_0 = 0, if Ū(t̄) > Ū(0) for some t̄ ∈ [0, τ̄_1), we take τ ∈ (t̄, τ̄_1) and repeat this argument to obtain a similar contradiction. In the case N = 2M and τ̄_N < T, if Ū(τ̄_N) < Ū(t̄) for some t̄ ∈ (τ̄_N, T] (as Ū(T-) = Ū(T), we may assume t̄ < T), then we take τ ∈ (t̄, T) and repeat the argument to get a contradiction again. This completes our proof.
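The excision used in this proof is constructive, so it can also be written down directly. The sketch below is only an illustration of the displayed formulas: it assumes (hypothetical) callables Ubar and cbar for the optimal control and the consumed-nutrient trajectory, a chosen time tau in the dormant interval, and a reactivation time tau_act standing for τ̄_{2k+1}.

```python
def excise_dormancy(Ubar, cbar, tau, tau_act, T):
    """Build the comparison control of Theorem 4: the dormant piece
    [tau, tau_act) is cut out, the later control action is shifted earlier
    (corrected by the nutrient already consumed), and the control is held
    constant on the leftover terminal piece [T + tau - tau_act, T]."""
    offset = cbar(tau) - cbar(tau_act)               # c(tau) - c(tau_act) < 0
    def Utilde(t):
        if t < tau:
            return Ubar(t)                           # unchanged before the excised interval
        if t < T + tau - tau_act:
            return Ubar(t - tau + tau_act) + offset  # shifted and corrected
        return Ubar(T) + offset                      # constant tail
    return Utilde
```

Since c̄(τ) < c̄(τ̄_{2k+1}), the offset is negative, so Ũ(T) < Ū(T); this is exactly the cost saving exploited in the proof.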

THEOREM 5: Suppose we additionally assume that μ is nondecreasing and ψ is increasing as functions of the argument σ. Then, under optimal control, a transition to dormancy can only (possibly) occur to give a terminal dormant interval: an 'internal' transition to dormancy can never occur.

Proof: Suppose, to the contrary, that we had an optimally controlled solution with one or more internal transitions to dormancy; by the previous theorem, these occur necessarily with immediate transition back to activity by an impulsive control. Let τ be the last such pair of transition times, τ = τ̄_{2k} = τ̄_{2k+1} (with k ≥ 1, so this is preceded by an active interval), and let T_1 be the end of the active sub-interval initiated at τ, i.e., T_1 = T if the system remains active until the final time, but T_1 = τ̄_{2k+2} if there would be a final interval of dormancy.

We wish to compare the presumed optimal (Ū, c̄, β̄, π̄) with what would occur if we were to use 'the same control' (in a sense to be described below) on [τ+, T_1] without the transition cycle to dormancy at τ with immediate re-activation; by our definition of the control structure, this alternative would also constitute an admissible control. As in the earlier arguments, if we can show that the alternative would lead to a decrease of J, then we will have shown that this last transition pair could not have occurred under optimal control, and so, recursively, that there are no internal transition pairs.

It is easiest to make the comparison after making a change of variable for time during the active sub-interval [τ+, T_1], introducing a new 'time variable' s = s(t) by setting ds/dt = β(t), so

    s(t) = ∫_τ^t β(r) dr    (5.2)

for this sub-interval. We set s^* = s(T_1), using β̄ in (5.2), so the sub-interval has become [0, s^*]; abusing notation slightly, we continue to write (U, σ, …) for the control and state variables, now considered as functions of s ∈ [0, s^*], and, similarly, (Ū, σ̄, …) for the optimal set.

We now use a subscript s to indicate differentiation with respect to s. In this context, on the active sub-interval (0, s^*] the dynamics is given by

    (σ - U)_s = -φ(σ) - λσ/β,    β_s = μ(σ, β)/β,    π_s = -ψ(σ),    (5.3)

both for the optimal controlled solution and for our comparison solution, which we denote by (σ̃, β̃, π̃). Note that we are taking the control function Ū to be the same in each case as a function of s; although, since β̃ ≠ β̄ in (5.2), the relation of s to t will be different and the control function Ũ will not generally be the same as the original putative optimal control Ū as a function of t. What is really different in the comparison case, however, is the initial condition at s = 0, corresponding to the optimal controlled solution at t = τ+. If we set

    σ̂ = σ̄|_{t=τ},    β̂ = β̄|_{t=τ-},    π̂ = π̄|_{t=τ},

then the initial conditions for (5.3) will be

    σ̄(0) = σ̂,    β̄(0) = αβ̂,    π̄(0) = π̂;
    σ̃(0) = σ̂,    β̃(0) = β̂,    π̃(0) = π̂.    (5.4)

From (5.3) with (5.4) we now observe that β̃ > β̄ and σ̃ > σ̄ on (0, s^*], whence π̃ < π̄ on (0, s^*].

To see this, we first note that β̄(0) < β̃(0), because of the assumed failure of a fraction (1 - α) of the bacteria to survive the switching cycle (supposedly) involved in the optimal controlled solution, so there is a maximal s̄ in [0, s^*] with β̃ > β̄ (and λ/β̃ < λ/β̄) on [0, s̄). Using this in the first equation of (5.3) then gives† σ̃ > σ̄ on (0, s̄]. It follows that γ(σ̃) ≥ γ(σ̄), i.e., β̃_s ≥ β̄_s, on [0, s̄], so β̃ > β̄ there, whence, by the maximality of s̄, we must have s̄ = s^*. Since we have assumed that ψ(·) is (strictly) increasing, we then have -ψ(σ̃) < -ψ(σ̄) on (0, s^*], so π̃ < π̄ there; in particular, π̃ < π̄ at the end of the interval.

† Footnote: This would be easy if Ū were smooth. One could set ξ(s) := β̃(s) - β̄(s) > 0, set β(s, θ) := β̄(s) + θξ(s) for 0 ≤ θ ≤ 1, solve the differential equation using β = β(·, θ) to get σ(s, θ), and differentiate with respect to θ to see that z = ∂σ/∂θ satisfies

    z_s = -[φ′(σ) + λ/β] z + (λσ/β²) ξ,    z(0) = 0,

so z > 0 for s > 0, with a uniform positive lower bound since we consider φ′(σ) on a compact set. We can then get the present case by a limit argument.

We can now obtain the 'actual' alternative control function Ũ (as a function of t) by solving ds̃/dt = β̃(s̃) with s̃(τ) = 0 and then setting

    Ũ(t) = Ū(t)           for 0 ≤ t ≤ τ,
           Ū(τ + s̃(t))     for τ ≤ t ≤ T_2,

where s̃(T_2) = s^*. Since β̃ > β̄ on the interval, a comparison with (5.2) shows that T_2 < T_1 ≤ T. We extend Ũ as constant (equal to Ū(T), of course) on [T_2, T] and note that

    π̃|_{t=T} ≤ π̃|_{t=T_2} < π̄|_{s=s^*} = π̄|_{t=T},

so (2.1) gives J̃ < J̄. This shows that the original scenario could not have been optimal, as assumed.

We have shown that, for this particular problem, the switching structure under optimal control is fairly trivial, and one might well ask why it was important to consider at all the full set of complications attendant on the treatment of this as a hybrid system. We may note that more complicated structures seem likely to occur (as noted in the next section) for related problems, but a more direct answer would be that our treatment is strongly affected by the possibility of repeated transitions, which could not be ruled out even here without an argument embedded in this general hybrid setting.
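The comparison at the heart of this proof can also be checked numerically. The toy sketch below integrates the rescaled dynamics (5.3) from the two initial biomass values β̂ and αβ̂ of (5.4), with the control entering through a prescribed derivative dU/ds; the constitutive functions and all numbers are hypothetical placeholders satisfying the monotonicity assumptions of Theorem 5, not values from the paper.

```python
def rescaled_run(beta_init, s_star=2.0, ds=1e-3, lam=0.2,
                 sigma_init=1.0, pi_init=1.0, dU_ds=lambda s: 1.0):
    """Integrate the s-parametrized active dynamics (5.3):
       d(sigma)/ds = dU/ds - phi(sigma) - lam*sigma/beta,
       d(beta)/ds  = gamma(sigma),           (taking mu = gamma(sigma)*beta)
       d(pi)/ds    = -psi(sigma)."""
    gamma = lambda x: 0.8 * (x - 0.2)   # placeholder, nondecreasing in sigma
    phi   = lambda x: 0.3 * x           # placeholder
    psi   = lambda x: 0.2 * x           # placeholder, increasing in sigma
    sigma, beta, pi = sigma_init, beta_init, pi_init
    s = 0.0
    while s < s_star:
        sigma += ds * (dU_ds(s) - phi(sigma) - lam * sigma / beta)
        beta  += ds * gamma(sigma)
        pi    -= ds * psi(sigma)
        s     += ds
    return sigma, beta, pi

alpha, beta_hat = 0.6, 1.0
with_cycle    = rescaled_run(alpha * beta_hat)   # optimal candidate: only a fraction alpha survives
without_cycle = rescaled_run(beta_hat)           # comparison run: dormancy cycle skipped
print(with_cycle, without_cycle)                 # expect more pollutant left after the first run
```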

6. Further remarks

We begin by noting that several variants of (2.1) might also be of interest. For example, if one considers termination of nutrient injection at the time T (so u(t) ≡ 0 for t > T) but permits the system to continue until, at some time T′ ≥ T, one has a transition to permanent dormancy, one might use π(T′) rather than π(T) in (2.1). A further variant would be the use of an increasing function ω : R_+ → R_+ of the amount of residual pollutant as a generalization of the term bπ(T).

Somewhat more interesting for consideration is the time-optimal problem: fix an 'acceptable pollutant level' π^* > 0 and, defining T as the (first) time at which π(T) ≤ π^*, determine the control so as to minimize

    J = J_2 := ∫_0^T u(t) dt + aT + b ∫_0^T π(t) dt,    subject to π(T) ≤ π^*,    (6.1)

with a, b ≥ 0 given. Note that the first term in J_2 is, in this case as for (2.1), the cost of adding nutrient, taken as simply proportional to the total amount added.

For this case we can again prove existence of optimal controls, by an argument rather similar to that given for Theorem 3.

COROLLARY: There is an optimal control, i.e., J_2 attains its minimum over admissible control functions U(·) and corresponding solutions, subject to the system dynamics as described above.

Proof: For the cost functional J_2 we must modify the previous argument slightly, since we now have a terminal time T_ν varying with ν instead of the fixed T. The simplest approach is to fix a T^* bounding the T_ν for some minimizing sequence {[U_ν, T_ν] : ν = 1, 2, …} and to define U_ν(t) := U_ν(T_ν) for T_ν < t ≤ T^*, so that {U_ν(T^*)} is bounded. By Theorem 1 we may extend the solutions {c_ν, β_ν, π_ν} to [0, T^*] and then proceed with our previous argument, now with the prior extraction of a subsequence so that T_ν → T̄. The argument ensures that we have a solution in the limit, as a solution on [0, T^*], but this certainly means that the restriction to [0, T̄] is a solution there. It is then clear that J_2(U_ν) → J_2(Ū). Hence J_2 attains its minimum using this limit control.

We must still verify satisfaction of the constraint. The uniform convergence and uniform continuity imply π_ν(T_ν) → π̄(T̄), so the constraint π̄(T̄) ≤ π^* will also be satisfied in the limit, provided it is satisfied for each ν. We conclude by returning to show that it was, indeed, possible in this setting to have a minimizing sequence as assumed. To see this, a single exemplar satisfying the constraint will be sufficient. We proceed by choosing some fixed σ̄ > σ^* and defining the control U as starting with an initial jump at t = 0 to σ̄ (so τ_1 = 0) and then keeping σ(t) ≡ σ̄ (i.e., U(t) = c(t) + σ̄) while solving the 'active' equations in (3.1) forward 'as long as necessary'. Note that this gives γ ≡ γ(σ̄), so β(t) = β_0 e^{γ(σ̄) t}, and we have dc/dt = λσ̄ + φ(σ̄)β, so there is no difficulty with this construction. As this gives ψ(σ̄) > 0, we have dπ/dt bounded from above strictly below 0 here (as long as π > 0), and so we are assured that π must decrease to π^* at some finite stopping time. This completes the proof.
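For the exemplar just constructed (σ held at the fixed level σ̄ > σ^*), the pollutant trajectory and the stopping time can be written in closed form, since π(t) = π_0 - ψ(σ̄)β_0 (e^{γ(σ̄)t} - 1)/γ(σ̄). The snippet below evaluates the resulting stopping time for hypothetical values of γ(σ̄), ψ(σ̄), β_0, π_0 and π^*; it is only a sanity check, assuming γ(σ̄) > 0 (as in the construction above) and π^* < π_0.

```python
import math

def exemplar_stopping_time(gamma_bar, psi_bar, beta0, pi0, pi_star):
    """First time at which pi(t) = pi0 - psi_bar*beta0*(exp(gamma_bar*t) - 1)/gamma_bar
    reaches the acceptable level pi_star (assumes gamma_bar > 0 and pi_star < pi0)."""
    drop = pi0 - pi_star                                  # required reduction of pollutant
    return math.log(1.0 + gamma_bar * drop / (psi_bar * beta0)) / gamma_bar

# hypothetical values: gamma(sigma_bar) = 0.4, psi(sigma_bar) = 0.2, beta_0 = 0.1,
# pi_0 = 1.0 and acceptable level pi^* = 0.05
print(exemplar_stopping_time(0.4, 0.2, 0.1, 1.0, 0.05))
```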

It would also be of interest to consider an infinite horizon problem with the continued introduction of further pollutant (at some deterministic or stochastic rate) and a corresponding modification of the equations. We conjecture that optimal control for that problem would be asymptotically periodic, with repeated transitions between the dormant and the active states.

Also of possible future interest would be the consideration of a class of models in which the flow rate through the reactor (determining λ in the system dynamics) would also be controllable. One might also have coupling with a holding tank for incoming pollutant prior to the bioreactor, with mobility (some solubility, ...) for the pollutant and then a capacity limit for the holding tank and a cost for any pollutant in the outflow. We defer consideration of these and other possibilities to a future time.

Finally, we remark on the possibility of constructing a discretization scheme to compute approximate solutions of the system. The difficulty in this is closely related to the possibility of bifurcation in the solution: there are situations in which arbitrarily small perturbations of the dynamics (comparable, e.g., to discretization error) can lead to a significant change in some transition and so to significant changes in the subsequent evolution. The best one can hope for is that limits of solutions of the discretization will be solutions of our system and that, conversely, every solution of the original system can be obtained in this way. It is, in fact, possible to construct such a scheme, but consideration of the details will also be deferred.

Acknowledgement

The first author was supported in part by the National Science Foundation under grant #DMS9704199. The third author was supported in part by the NSFC under grant #79790130, by the National Distinguished Youth Science Foundation of China under grant #19725106, by the Chinese State Education Commission Science Foundation under grant #97024607, and by the Li Foundation. Part of the work was done while this author was visiting the Department of Mathematics and Statistics, University of Maryland Baltimore County.

References

[1] N. Bellomo and E. De Angelis, Strategies of Applied Mathematics Towards an Immuno-Mathematical Theory on Tumors and Immune System Interactions, Mathematical Models and Methods in Applied Sciences 8 (1998), 1403-1429.

[2] C. Bruni and G. Koch, Modeling the Cell Cycle with Quiescent Compartments, Mathematical Models and Methods in Applied Sciences 2 (1992), 53-78.

[3] J.V. Butera, B.G. Fitzpatrick, and C.J. Wypasek, Dispersion Modeling and Simulation in Subsurface Contaminant Transport, Mathematical Models and Methods in Applied Sciences 8 (1998), 1183-1198.

[4] E. Dawes, Growth and Survival of Bacteria, pp. 67-93 in Bacteria in Nature, vol. 2 (J. Poindexter and E.R. Leadbetter, editors), Plenum Publishers, New York, 1988.

[5] B. Firmani, L. Guerri and L. Preziosi, Tumor/Immune System Competition with Medically Induced Activation/Deactivation, Mathematical Models and Methods in Applied Sciences 9 (1999), 491-512.

[6] K.R. Fister and J.C. Panetta, Optimal Control Applied to Cell-Cycle-Specific Cancer Chemotherapy, SIAM J. Appl. Math. 60 (2000), 1059-1072.

[7] D.M. Griffith, A comparison of the roles of bacteria and fungi, pp. 221-255 in Bacteria in Nature, vol. 1 (J. Poindexter and E.R. Leadbetter, editors), Plenum Publishers, New York, 1985.

[8] A. Heinricher, S. Lenhart, and A. Solomon, The application of optimal control methodology to a well-stirred bioreactor, Natural Resource Modeling 9 (1996), 61-80.

[9] N. Handagama and S. Lenhart, Optimal control of a PDE/ODE system modeling a gas-phase bioreactor, in Proc. Conf. on Math. Models in Medical and Health Sciences, to appear.

[10] E.R. Leadbetter, Prokaryotic Diversity: Form, Ecophysiology, and Habitat, pp. 14-24 in Manual of Environmental Microbiology (C.J. Hunt, editor), American Society of Microbiology Press, Washington, D.C., 1997.

[11] J. Monod, La technique des cultures continues: Théorie et applications, Ann. Inst. Pasteur 79 (1950), 390-410.

[12] I.P. Natanson (transl. L. Boron), Theory of Functions of a Real Variable, F. Ungar, New York, 1955.

[13] National Research Council, In situ Bioremediation: When does it work?, National Academy Press, 1993.

[14] R.D. Norris et al., Handbook of Bioremediation, Lewis Publishers, Boca Raton, 1994.

[15] T.I. Seidman, Switching systems, I, Control and Cybernetics 19 (1990), 63-92.

[16] M.S. Shouche, J.N. Peterson, and R.S. Skeen, Use of a mathematical model for prediction of optimum feeding strategy for in situ bioremediation, Applied Biochemistry and Biotechnology 39/40 (1993), 763-770.

[17] E. Venkataramani and R. Ahlert, Role of cometabolism in biological oxidation of synthetic compounds, Biotechnology and Bioengineering 27 (1985), 1306-1311.