Bulletin of Mathematical Biology (2009) 71: 1463–1481, DOI 10.1007/s11538-009-9409-7

ORIGINAL ARTICLE

Multi-Objective Evolutionary Optimization of Biological Pest Control with Impulsive Dynamics in Soybean Crops

Rodrigo T.N. Cardoso (a), André R. da Cruz (b), Elizabeth F. Wanner (c), Ricardo H.C. Takahashi (b,∗)

(a) Department of Physics and Mathematics, Centro Federal de Educação Tecnológica de Minas Gerais, Belo Horizonte, Brazil

(b) Department of Mathematics, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
(c) Department of Mathematics, Universidade Federal de Ouro Preto, Ouro Preto, Brazil

Received: 7 August 2007 / Accepted: 10 February 2009 / Published online: 7 March 2009
© Society for Mathematical Biology 2009

Abstract  Biological pest control in agriculture, an environment-friendly practice, maintains the density of pests below an economic injury level by releasing a suitable quantity of their natural enemies. This work proposes a multi-objective numerical solution to biological pest control for soybean crops, considering both the cost of application of the control action and the cost of economic damages. The system model is nonlinear with impulsive control dynamics, in order to cope more effectively with the actual control action to be applied, which should be performed in a finite number of discrete time instants. The dynamic optimization problem is solved using the NSGA-II, a fast and trustworthy multi-objective genetic algorithm. The results suggest a dual pest control policy, in which the relative price of the control action versus the associated additional harvest yield determines the usage of either a low control action strategy or a higher one.

Keywords  Biological control · Predator–prey · Impulsive control · Multiobjective optimization · Genetic algorithms · Soy farming · Dynamic optimization

1. Introduction

Biological control is the reduction of pest populations through the addition of other living organisms such as parasitoids, predators, and pathogens, often called natural enemies or beneficial species (DeBach, 1964, 1991; Caltagirone and Doutt, 1989; Rafikov and Balthazar, 2005). Another approach, employed nowadays by most commercial agriculture, is the chemical control of pests, using pesticides. There is another view (Tang et al., 2005; Liu et al., 2006) that defends hybrid control (integrating the biological control with

∗Corresponding author. E-mail addresses: [email protected] (Rodrigo T.N. Cardoso), [email protected] (André R. da Cruz), [email protected] (Elizabeth F. Wanner), [email protected] (Ricardo H.C. Takahashi).


the chemical one); this has been shown to be cheaper and more efficient than applying only one of the other methods separately.

However, in the last decades, a growing global awareness of the necessity of conservation of the environment and of the intelligent usage of the available natural resources has been observed. In this context, some benefits of biological control are the production of high-quality food without pesticides, while keeping a potential positive effect on the preservation of biodiversity. A critical issue for reaching full economic viability of biological control is the determination of enhanced control management policies that lead to better economic results from the invested financial resources (Griffiths et al., 2008). This article deals with the problem of synthesis of optimal policies for biological control in soybean crops.

There are three types of biological control to consider (DeBach, 1964):

(1) Classical Biological Control: it consists of investigating the origin of the pest to discover its natural enemies. Individuals of the enemy species are imported from one place to another, so that they can naturally reproduce and attack the pests.

(2) Augmentation Control: a more recent method which consists of producing the natural enemies of the pests in the laboratory. A certain amount of these natural enemies is then released. This method needs frequent management and does not provide a continuous solution.

(3) Conservation Control: it consists of manipulating aspects of the ecosystem to conserve and to increase populations of the natural enemies in order to reduce the problems with pests.

The type of control considered in this work fits in categories 1 and 2 above. This work proposes to apply pest control considering the relationship between the soybean caterpillar (Anticarsia gemmatalis) and its natural enemies, such as wasps and spiders. This biological example has already been examined from an optimal control point of view in Feltrin and Rafikov (2002) and Rafikov and Balthazar (2005), in which the authors determined, based on the dynamic programming approach via the HJB (Hamilton–Jacobi–Bellman) equation, a continuous optimal control that allows the pest density to stabilize below an economic injury level representing an "acceptable" damage to the crop. The work of Bor (2003) uses the adaptive control concept in uncertain optimal control problems, such as chemical and biological controls.

The main difficulty of such an approach is that the mathematical formulation of the problem artificially constrains the objective function to be quadratic, and it can hardly deal with problem constraints on state variables and/or control variables. Also, such an approach usually leads to continuous-time control laws that must be discretized, in some way, to be applied. In fact, if a continuous control action is prescribed for solving the particular problem of pest control, it should be discretized somehow before being applied: to continuously release a continuously varying density of predators, with this action lasting several months, would not be possible.

The previously published papers on the subject have also considered both the damage to the crop and the cost of control as monetary costs that could simply be added into a single objective function. However, as long as the relative prices of both items vary over time, the conclusion gained from a single-objective optimization is restricted to some specific market situation that is expected to vary along the period from sowing to harvest. Even if the user runs the single-objective dynamic programming optimization


each time a market change occurs, this approach does not lead to a general insight about the problem structure that underlies the application of such techniques in a varying-market environment.

The contributions of this article are twofold:

(i) The control of pests is studied as an impulsive dynamic programming problem, with nonlinear dynamics. An impulsive control action is accomplished by the release, at a single time instant, of a large quantity of the pests' natural enemies. The idea, in this article, is to optimize pest control through the insertion of predators in the crop farm at a series of time instants separated by discrete fixed intervals of time, with the system being described by a differential equation (continuous in time) that is valid between each pair of successive instants. The problem of controlling such hybrid dynamics is called an impulsive control problem, and the associated dynamic system is called an impulsive system. The resulting control action comes directly in discrete form. From the practical point of view, the result produced by impulsive dynamic optimization, as suggested here, is easier to implement in crop farms than the result obtained by continuous-time dynamic optimization: prescribing control actions that are performed, for instance, once a week, is more natural than prescribing control actions that have a continuous nature.

(ii) The problem is defined here as a bi-objective one, instead of being stated as a single-objective problem. The optimization consists of minimizing the cost of the control action and the cost of the damage caused by pests, in a multi-objective sense. A high-performance multi-objective genetic algorithm, the NSGA-II, is employed for solving the problem. The multi-objective analysis is shown to be more informative, allowing the definition of pest management policies that depend on soybean futures market prices, or on soybean sale price forecasts.

The issue of impulsive control for pest management has been studied in Tang et al. (2005) and Zhang et al. (2007). Both works have investigated the stability of pest-control periodic solutions. An important difference between those works and the present paper is that they do not perform the synthesis of an optimal control law, but only the maintenance of the pests below an economic injury level. In Tang et al. (2005), the economic consequences are analyzed taking into account the cost of the control action when a steady-state regime is reached. The final discussion in Zhang et al. (2007) indicates that the scheduling of the impulsive predator release action, in any real case, should be performed using an optimal control design method (which was not done in that work). The present paper performs such a study, which had not been considered in the former works. A very flexible methodology for the synthesis of the optimal control law, which can be adapted to different predator–prey interaction models, is presented here.

This article is structured as follows: Section 2 presents the Lotka–Volterra mathematical model. Section 3 presents the multi-objective impulsive dynamic programming formulation for this problem. Section 4 gives a brief review of genetic algorithms and especially of the NSGA-II (Nondominated Sorting Genetic Algorithm II). This section is intended to provide some background for end-users who are not experts in evolutionary algorithms, allowing them to read this paper. Section 5 comments on the adaptations introduced in the problem formulation for solving it via the NSGA-II and presents the numerical results. Section 6 discusses the economic consequences of the results that have been found. Finally, Section 7 presents the conclusions.


2. Mathematical model

The interaction between the pests (prey) and their enemies (predators) can be modeled by the Lotka–Volterra differential equations, a well-known nonlinear dynamical system (Freeman and Primbs, 1996):

    ẋ(t) = x(a − γx − αy) + ν,
    ẏ(t) = y(−b + βx),                    (1)

where a, b, α, β, γ, and ν are positive constants with known values. The variables x and y denote, respectively, the densities of prey and predators per square meter. The products ax and −by model, respectively, the prey growth rate and the predator mortality rate. The terms −αxy and βxy represent the interactions between the two populations in each equation. Each interaction tends to be favorable for the predators, increasing the number of predators and decreasing the number of prey. The term −γx² models the unspecific competition among the prey.

The parameter ν is introduced here in order to represent a small steady flux of soybean caterpillar (Anticarsia gemmatalis) pests that come from the outer environment to the farm. This flux, although insufficient to significantly affect the evolution of the system state variables when the endogenous population density is not too small, prevents the rise of some small-population effects, such as the Allee effect. In fact, the persistence of such a steady flux implies that an infestation of pests will eventually occur, even in the case of initially clean farms, if there is no pest control. This corresponds to the situation of areas in which the pests are endemic, or naturally present in the surrounding environment. The opposite effect of some flux of pests going from the farm to the outer environment is included in the term −γx². The condition γ > 0 is sufficient for the global stability of the system, but this equilibrium can be unacceptable from the practical point of view. Notice that an important advantage of the biological control over the chemical control approach is its robustness to such an exogenous flux of pests, which is usually unavoidable.

In this work, the relationship between the soybean caterpillar (Anticarsia gemmatalis) and its enemies (Nabis spp., Geocoris, arachnids, etc.) is considered. The parameter values used in this work are shown in Table 1 and were based on data extracted from Feltrin and Rafikov (2002), except the value of ν. This value has been chosen by the criterion that the overall dynamic behavior of the system should not be altered by its effect, except for very small values of x.

The system shown in Equation (1) describes the relationship between prey and predators with no control action. Although not entirely realistic, this model has been studied as a simple reference point of departure throughout the decision process. For the purpose of describing the system behavior when the state vector assumes values that are expected to occur during most of the soy plant's development, the resulting equation (1), although

Table 1  Constant values for Problem (1) considering the relationship between the soybean caterpillar (Anticarsia gemmatalis) and its enemies

    Constant   a      b      α      β        γ       ν
    Value      0.16   0.19   0.02   0.0029   0.001   0.0001
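The behavior of system (1) under the Table 1 parameters is easy to reproduce numerically. The following minimal Python sketch (not from the paper; the function names are ours) integrates the uncontrolled system with a classical fourth-order Runge–Kutta scheme, the same family of method the authors later use for evaluating the dynamics:

```python
# Uncontrolled prey-predator system (1) with the Table 1 parameter values.
def field(x, y, a=0.16, b=0.19, alpha=0.02, beta=0.0029,
          gamma=0.001, nu=0.0001):
    """Right-hand side of system (1): prey density x, predator density y."""
    dx = x * (a - gamma * x - alpha * y) + nu
    dy = y * (-b + beta * x)
    return dx, dy

def rk4_step(x, y, dt):
    """One classical fourth-order Runge-Kutta step for system (1)."""
    k1x, k1y = field(x, y)
    k2x, k2y = field(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = field(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = field(x + dt * k3x, y + dt * k3y)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0)

def simulate(x0, y0, t_end, dt=0.1):
    """Integrate system (1) from (x0, y0) over [0, t_end]; return the final state."""
    x, y = x0, y0
    for _ in range(int(round(t_end / dt))):
        x, y = rk4_step(x, y, dt)
    return x, y
```

Starting from x0 = y0 = 0.01, the trajectory settles at the coexistence equilibrium (x̄, ȳ) ≈ (65.517, 4.722) of Table 2; note that x̄ = b/β follows directly from setting ẏ = 0 in (1).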


Table 2  Initial conditions for Problem (1) and the resulting equilibrium points

    Condition   x0         y0         x̄         ȳ
    1           Not null   Not null   65.517    4.722
    2           Not null   Null       160.001   0

Fig. 1  Behavior of the prey–predator system (1) without any external agent, for the initial condition x0 = 0.01 and y0 = 0.01.

simple, is not worse than more intricate models, and it has the advantage of having relatively few unknown parameters to be adjusted. It should also be noticed that the methodology for the synthesis of optimal impulsive control presented here does not depend on this specific model structure: any dynamic model could replace the model of Equation (1) without changing the synthesis procedure.

The equilibrium points of this system, obtained by simulations for each type of possible initial condition, are shown in Table 2. The first condition represents most of the cases, in which the system tends to a (stable) non-zero fixed point. The second one describes the situation without any predators, in which the equilibrium is determined by unspecific competition. Figure 1 shows an example of the evolution of this system, without any control action, for the initial condition x0 = 0.01 and y0 = 0.01 (a low density of both prey and predators at the beginning). Note that the system approaches the equilibrium point shown in Table 2.

The soybean caterpillar density corresponds to the prey density variable (x) in the considered system, and its stable equilibrium for non-null initial conditions has a mean value equal to x̄ = 65.517. At this equilibrium point, the economic damage caused by the pests to the soybean crop is significant (Feltrin and Rafikov, 2002). Therefore, a control action should be performed in order to avoid such damage; this can be done through the insertion of the caterpillar's natural enemies. A mathematical model of biological control, presented in the next section, considers the control action of inserting caterpillar enemies at discrete time instants.

In this paper, it is assumed that there is a threshold X (the economic injury level) of pest density above which the damage grows fast (Feltrin and Rafikov, 2002; Tang et al., 2005). This means that there must be, at least, a control action that keeps the pest density below such a threshold. Below this threshold, there is a trade-off between the cost of performing the control action and the damage caused by the pests. The proposed methodology studies the optimal control policies to be applied in the range of validity of such a trade-off.

3. Proposed multi-objective impulsive optimal control

In this work, a biological pest control is proposed using an impulsive dynamic optimization methodology. The idea is to solve impulsive control problems (Yang, 1999, 2001) through discrete-time dynamic programming algorithms. There are several alternatives for performing this: exact, approximate, or heuristic ones (Bertsekas, 1995). The chosen method is the open-loop continuous-variable approach discussed in Bertsekas (2005) and Cardoso et al. (2009). In this work, the multi-objective solutions are found employing an evolutionary algorithm.

Let T ∈ R be the optimization horizon. This time horizon corresponds to the period from sowing to harvest, which lasts from 120 to 200 days, depending on the soybean variety and on climate conditions. The impulsive approach discretizes the time interval [0, T] into N stages and acts on the dynamic system impulsively, with the impulsive control being executed at the instants in which the time stage changes. Define a set of control instants

    Θ = {τ0, . . . , τN},

such that:

    τk < τk+1,    τ0 = 0,    and    τN = T.

These intervals do not need to be equidistant. The time instant τk⁺ is defined as a time instant "just after" the impulsive action at τk. Given any very small δ > 0, τk⁺ is formally defined as any number that fulfills:

    τk⁺ = τk + ε,    0 < ε < δ.

The discrete-time variables of the problem are:

(a) x[k] = x(τk) is the density of prey at the beginning of stage k;
(b) y[k] = y(τk) is the density of predators at the beginning of stage k; and
(c) u[k], for each stage k, is the density of predators to be launched at the beginning of such stage.


Consider given initial densities, x[0] = x0 and y[0] = y0, of prey and predators, respectively. Since the control variable acts directly only on the number of predators, then:

    x[k⁺] = x(τk⁺) = x[k],                    (2)

for every stage k, and the new predator density, at each stage k, will be:

    y[k⁺] = y(τk⁺) = y[k] + u[k].                    (3)

A bi-objective problem is formulated taking into account the need to control the density of pests at a minimal cost. To this end, two objective functions are considered: minimizing the total additive cost of the damage to the crop due to pests (measured in terms of soybean loss per square meter), and minimizing the total additive cost of the insertion of predators (measured in terms of the number of predators inserted per square meter). These functions are assumed to be linear, with the parameters ck (corresponding to the pests) and dk (corresponding to the predators) as positive constants. The decision variables of the problem, u[0], . . . , u[N − 1], are the densities of predators to be launched at each discrete time stage.

The multi-objective impulsive dynamic programming problem for this biological control can be formulated as (4):

    min over u[0], . . . , u[N−1] of:

        J1 = Σ_{k=0}^{N} ck x[k],
        J2 = Σ_{k=0}^{N−1} dk u[k],

    subject to:

        ẋ(t) = x(a − γx − αy) + ν;
        ẏ(t) = y(−b + βx);
        x[k] = x(τk),  y[k] = y(τk);
        x(τk⁺) = x[k];
        y(τk⁺) = y[k] + u[k];
        x[k] ≥ 0,  y[k] ≥ 0,  u[k] ≥ 0;
        x[k] ≤ X;
        k = 0, 1, . . . , N − 1;
        x[0] = x0  and  y[0] = y0.                    (4)

Note that, for the calculation of the variables x[k] and y[k] at each stage k, it is necessary to solve the IVP (initial value problem) shown in Equation (5):

    ẋ(t) = x(a − γx − αy) + ν;
    ẏ(t) = y(−b + βx);
    t ∈ (τk⁺, τk+1].                    (5)


This IVP is valid during the time in which there is no control action on the system. A new initial condition for the system is established at each stage, due to the control action, according to (2) and (3), linking each stage with the next one. The initial condition of the system must be known beforehand.
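The stagewise evaluation just described (integrate IVP (5) within each stage, apply the jumps (2) and (3) at the stage boundaries, and accumulate the objectives of problem (4)) can be sketched as follows. This is a minimal Python sketch, not the authors' code, assuming the Table 1 parameters and ck = dk = 1 as adopted later in Section 5:

```python
# Stagewise evaluation of the objectives of problem (4) for a release
# schedule U = [u[0], ..., u[N-1]], assuming c_k = d_k = 1.
def field(x, y, a=0.16, b=0.19, alpha=0.02, beta=0.0029,
          gamma=0.001, nu=0.0001):
    # Right-hand side of system (1).
    return x * (a - gamma * x - alpha * y) + nu, y * (-b + beta * x)

def integrate_stage(x, y, stage_len, dt=0.1):
    # Solve IVP (5) over one stage with classical fourth-order Runge-Kutta.
    for _ in range(int(round(stage_len / dt))):
        k1x, k1y = field(x, y)
        k2x, k2y = field(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = field(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = field(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
    return x, y

def evaluate(U, x0=0.01, y0=0.01, stage_len=20.0):
    """Return (J1, J2): accumulated pest damage and total predators released."""
    x, y = x0, y0
    J1 = x                  # c_0 * x[0]
    for u in U:
        y += u              # impulsive release, Eq. (3); x unchanged, Eq. (2)
        x, y = integrate_stage(x, y, stage_len)
        J1 += x             # x[k] sampled at the beginning of the next stage
    J2 = sum(U)             # d_k = 1: total predator density released
    return J1, J2
```

A schedule that releases no predators leaves J2 = 0 but a large pest damage J1; a heavy schedule inverts the situation. This is exactly the tension that the bi-objective formulation exposes.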

In multi-objective optimization problems such as (4), there is no single solution that is the best, or globally optimal, with respect to all objectives. The presence of multiple objectives in a problem usually gives rise to a family of nondominated solutions which form the Pareto-optimal set. When comparing two solutions within this set, if a solution A has a better objective component than another solution B, then B must be better than A in some other objective component. In a minimization problem with vector objective function J ∈ R^m, if y ∈ R^n denotes the decision variable vector of the problem and D ⊂ R^n is the set of all feasible y, the set of nondominated solutions Y* ⊂ D is characterized by:

    Y* = { y* ∈ D | ∄ y ∈ D : Ji(y) ≤ Ji(y*), ∀ i = 1, . . . , m;  J(y) ≠ J(y*) }.                    (6)

Since none of the solutions in the nondominated set is absolutely better than any other one, none of them should be discarded a priori. The whole set of Pareto-optimal solutions should be presented to the human decision-maker, who will make a choice employing not only the problem objective functions, but also some exogenous information (often, this additional information comes from some subjective judgment).
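Definition (6) is straightforward to operationalize on a finite set of candidate solutions. A minimal Python sketch of the nondomination test and of the extraction of the nondominated subset of a list of objective vectors (function names are ours):

```python
def dominates(Ja, Jb):
    """True if objective vector Ja dominates Jb (minimization): Ja is no
    worse in every component and differs from Jb in at least one."""
    return all(a <= b for a, b in zip(Ja, Jb)) and tuple(Ja) != tuple(Jb)

def nondominated(points):
    """Return the nondominated subset of a list of objective vectors, per Eq. (6)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For the finite populations handled by a genetic algorithm, this pairwise filter is all that is needed conceptually; the NSGA-II, reviewed in Section 4, organizes the same test into a faster sorting procedure.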

In the case of the problem posed in this paper, such additional information comes from the variation of relative prices, which is not available at the time of the optimization procedure. The Pareto-optimal set, in our case, is employed not only for the purpose of making a decision, but also for evidencing which decisions constitute reasonable choices, and under what circumstances each such decision should be taken. This paper shows that only a small subset of the Pareto-set solutions constitute, in fact, solutions that should be considered.

Dynamic programming techniques have been applied to impulsive control problems from a theoretical viewpoint. For instance, in Dar'in et al. (2005) the solution has been obtained via the Hamilton–Jacobi–Bellman (HJB) equation. On the other hand, the specific problem of pest control, using Lotka–Volterra models, has been considered in Feltrin and Rafikov (2002), in a continuous control framework: the HJB approach has been used for the control synthesis in that case too. To the authors' knowledge, no optimal control synthesis method for pest management with impulsive control action, and no multi-objective formulation of the pest control problem, has been published up to now.

In this paper, the impulsive control sequence that solves problem (4) is obtained by a multi-objective version of the open-loop continuous-variable discrete-time dynamic programming approach discussed in Bertsekas (2005). A multi-objective formulation has been presented in Cardoso et al. (2009) for systems with linear dynamics, and it is extended here to the case of nonlinear dynamics. A core component of the approach proposed here is the multi-objective optimization algorithm used for finding the solutions of problem (4). A multi-objective genetic algorithm, the NSGA-II, is employed here for this purpose.

4. Genetic algorithms and the NSGA-II

The naturalist Charles Darwin defined Natural Selection as the preservation of favorable individual differences and variations, and the destruction of those that are injurious (Darwin, 1882). In nature, individuals better adapted to their environment have a better chance to compete, survive, and reproduce. Along the evolution of a species, the features that make it weaker are generally eliminated. Such features are controlled by units called genes, which form sets called chromosomes. Over subsequent generations, not only do the fittest individuals survive, but so do their fittest genes, which are transmitted to their descendants during the sexual recombination process called crossover. These analogies between the mechanism of natural selection and a learning process led to the development of the so-called Genetic Algorithms (GAs).

Genetic algorithms (GAs), introduced by John Holland in the 1960s (Holland, 1962a, 1962b), are computational search methods based on the mechanisms of genetics and natural evolution. This class of algorithms is gaining increasing importance in many fields of optimization. In a GA, a set of tentative solutions (a population) evolves according to probabilistic rules inspired by biological metaphors: in each generation, the individuals tend to improve as the evolution process proceeds. In general, GAs have the following characteristics (Goldberg, 1989; Holland, 1975):

• GAs work on a set of points (a population);
• GAs work in a codified search space;
• GAs need only the information about the objective function for each member of the population; and
• GAs use probabilistic transitions.

In any GA, some common operators are initialization and evaluation (of the population), fitness function evaluation, selection, mutation, and recombination. In others, there may also exist (depending on the objective of the application) local search, niching, or decimation, for instance. Goldberg (1989) stated four reasons that make GAs attractive for applications:

• GAs can solve hard problems quickly and reliably;
• The interface between GAs and existing models is usually simple to build;
• GAs are extensible; and
• GAs are easy to hybridize.

In this work, a well-known multi-objective GA is employed: the NSGA-II, proposed in Deb et al. (2000). The idea of the Nondominated Sorting Genetic Algorithm (NSGA) was suggested by Goldberg (1989) and implemented by Srinivas and Deb (1994). The main characteristic of the nondominated sorting procedure is that a ranking selection method is used to emphasize good points, and a niching method is used to maintain a stable subpopulation of good points. The main difference between the NSGA and a simple genetic algorithm is in the structure of the selection operator: in the NSGA, multiple objectives are reduced to a single fitness measure by the definition of the ordinal number of fronts, sorted according to nondomination. The other operators can be the usual ones. In 2000, Deb et al. (2000) reported a new version, called NSGA-II, which introduced a fast nondominated sorting procedure, an elitism-preserving approach, and a parameterless niching operator for diversity preservation (the crowding distance comparison operator), leading to an improved computational complexity. NSGA-II also incorporates a simple and efficient penalty-parameterless approach for handling constraints.
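The crowding distance mentioned above can be made concrete. The sketch below is a textbook rendition in Python, not the authors' code: each solution in a nondominated front accumulates, per objective, the normalized span between its two nearest neighbors, and boundary solutions receive infinite distance so that they are always preferred.

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in a nondominated front,
    as used by NSGA-II's diversity-preserving comparison operator."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for i in range(m):
        # Sort the front along objective i.
        order = sorted(range(n), key=lambda j: front[j][i])
        lo, hi = front[order[0]][i], front[order[-1]][i]
        # Boundary solutions are never crowded out.
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        # Interior solutions accumulate the normalized neighbor span.
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][i]
                               - front[order[k - 1]][i]) / (hi - lo)
    return dist
```

In tournament selection, ties in nondomination rank are broken in favor of the larger crowding distance, which pushes the population to spread along the Pareto front.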


5. Solving the multi-objective prey–predator problem via NSGA-II

The multi-objective dynamical problem (4) is solved here using the NSGA-II. The problem statement is: when the time of each stage finishes, a density of natural enemies (predators) per square meter should be launched in the soybean crops to reduce the number of pests (prey). The algorithm is used to find this optimal control action sequence.

Using the open-loop dynamic optimization methodology discussed in Bertsekas (2005) and Cardoso et al. (2009), the objective function and the constraints in formulation (4) must be considered as functions of the initial state and of the control action sequence, formally replacing each x[k] and y[k] in those functions by the values calculated numerically from the IVP shown in Equation (5). For evaluating this nonlinear differential equation dynamic system, the Runge–Kutta method (Burden and Faires, 2003) is used.

Each individual in the NSGA-II is a structure which contains a vector of the predator densities to be launched at each time stage k. The vector of control variables U is composed of the predator densities to be launched from the initial stage to the last-but-one stage:

    U = [ u[0]  · · ·  u[N − 1] ].                    (7)

This is the decision variable of the problem; each individual is characterized by its vector U. Given U, the following variables are computed for each individual:

(1) the number of prey in each stage;
(2) the number of predators in each stage;
(3) the vector of evaluated objective functions;
(4) the vector of constraint violations;

which, in turn, are used in the computation of the following values, for each individual:

(1) the sum of constraint violations;
(2) the crowding distance of the individual;
(3) the rank, or level, of the individual in the Pareto set.

These three last values are used in the computation of the fitness value of the individual, which is used in the selection operation inside the genetic algorithm.
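The way these three values combine in selection follows NSGA-II's constrained crowded-comparison: feasibility first, then nondomination rank, then crowding distance. A minimal Python sketch of that comparison (names and the tuple layout are ours, for illustration):

```python
def better(ind_a, ind_b):
    """Constrained crowded-comparison used in binary-tournament selection.
    Each individual is a tuple (violation, rank, crowding).  A feasible
    individual beats an infeasible one; among infeasible individuals, the
    smaller total violation wins; among feasible ones, the lower rank wins,
    with ties broken by the larger crowding distance."""
    va, ra, ca = ind_a
    vb, rb, cb = ind_b
    if va != vb:        # prefer smaller total constraint violation
        return va < vb
    if ra != rb:        # prefer the better (lower) nondomination rank
        return ra < rb
    return ca > cb      # prefer the less crowded individual
```

This is the penalty-parameterless constraint handling of NSGA-II: no penalty weight has to be tuned, because infeasible individuals are ordered purely by their total violation.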

In this work, the NSGA-II was adapted to work with continuous-variable encoding, selection by binary tournament, and polynomial mutation. The algorithm parameter values are listed below:

• Number of generations: 200
• Crossover rate: 0.70
• Index of distribution for crossover: 10
• Mutation rate: 0.30
• Index of distribution for mutation: 10

The parameters of the optimization are:

• Initial density of prey per square meter: 0.01
• Initial density of predators per square meter: 0.01
• Search space for the density of predators launched in each stage: [0; 40]


• Constraint on the maximum number of prey per square meter in each stage: X = 20
• Horizon (days): 200
• Time for each stage (days): 20

The positive constants of the objective functions, ck and dk, are all assumed equal to 1, which means that the total number of prey counted (J1) and the total number of predators released (J2) are considered as proxies of the total monetary costs that will be incurred by crop damage due to pests and by the predator release control action, respectively.

An execution of the NSGA-II generates a Pareto front as shown in Fig. 2. The front is well defined and describes the cost of launched predators versus the damage due to pests. All the resulting Pareto-set solutions obey the specified threshold. Moreover, the smaller the damage due to the pests, the larger the cost of predators.

Figures 3, 4 and 5 show the time evolution of the system variables and the number of predators released during the development of the pest control for some Pareto-optimal solutions. The first example, corresponding to point A in Fig. 2, represents a low control policy, in which the number of predators to be launched per stage is small, leaving a high prey density. The objective function values of this point are (J1, J2) = (158.98, 186.85). The third example corresponds to point C in Fig. 2. It represents a high control policy, with a high total number of predators to be launched, leaving a small density of prey. The objective function values of this point are (J1, J2) = (0.89, 277.96). Note the trade-off: the damage due to pests is high in the first example, but the cost of launched predators is small; in the third example, the opposite occurs. The second example, corresponding to point B in Fig. 2, shows an intermediate solution between these extreme cases, with objective function values (J1, J2) = (18.93, 234.36).
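A quick consistency check on the three quoted solutions: since both objectives are minimized, points drawn from the same Pareto front must be mutually non-dominated. The dominance predicate below is the standard one for two-objective minimization.

```python
# Dominance check for minimization of (J1, J2). The three points are the
# solutions A, B, and C quoted in the text.

def dominates(p, q):
    """True if p is at least as good as q in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

A, B, C = (158.98, 186.85), (18.93, 234.36), (0.89, 277.96)
pairs = [(A, B), (A, C), (B, C)]
# No pair dominates the other: the three points are mutually non-dominated.
print(any(dominates(p, q) or dominates(q, p) for p, q in pairs))  # False
```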

Fig. 2 Pareto-front obtained by NSGA-II in problem (4). The axes correspond to the number of prey and the number of predators to be released, per square meter, as measured at the beginning of each time interval and summed along the time horizon. Three solutions, A, B, and C, are emphasized.


Fig. 3 Solution A: Time evolution diagram, and the number of predators launched per stage, during the development of the pest control under a low control action policy. The damage due to pests is high, while the cost of launched predators is small.

5.1. Continuous-variable control

It is worth comparing the proposed control with the traditional methodology of continuous-variable optimal control. Figure 6 shows the time evolution of the system variables and the number of released predators for the continuous-time strategy proposed in Feltrin and Rafikov (2002), in two versions. In the first version, the continuous-variable control action is executed as such: the action of releasing predators takes place at every instant. It should be noticed that such a continuous control action is not practical. In the second version, the same control action is discretized, in order to generate impulsive control actions to be performed at discrete time instants, which is more adequate for implementation in practice. This discretization is performed by integrating the control variable over each interval between two control actions and releasing the resulting number of predators at the beginning of the interval. The time for each discrete stage is taken as 20 days.

Fig. 4 Solution B: Time evolution diagram, and the number of predators launched per stage, during the development of the pest control under an intermediate-level control action policy. The damage due to pests and the cost of launched predators assume intermediate values.
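The discretization procedure described above can be sketched as follows. The decaying release rate u(t) below is an arbitrary stand-in for the continuous-time control of Feltrin and Rafikov (2002), whose actual form is not reproduced here; the function names are illustrative.

```python
import math

# Sketch of the discretization described in the text: the continuous release
# rate u(t) is integrated over each 20-day stage, and the result is released
# as one impulse at the start of the stage.

def impulses_from_rate(u, horizon=200.0, stage=20.0, n=1000):
    """Trapezoidal integration of u(t) over each stage of length `stage`;
    returns one impulse per stage."""
    out = []
    t0 = 0.0
    while t0 < horizon - 1e-9:
        h = stage / n
        area = sum((u(t0 + i * h) + u(t0 + (i + 1) * h)) * h / 2.0
                   for i in range(n))
        out.append(area)
        t0 += stage
    return out

u = lambda t: 2.0 * math.exp(-t / 50.0)   # ASSUMED continuous release rate
imp = impulses_from_rate(u)
print(len(imp))                            # 10 impulses over 200 days
# Total releases match the integral of u over the horizon:
print(abs(sum(imp) - 100.0 * (1 - math.exp(-4.0))) < 1e-3)  # True
```

Note that this preserves the total number of released predators, but not the timing of the releases within each stage, which is exactly why the discretized control can behave differently from the continuous one.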

As a conclusion of this experiment, it can be stated that an impulsive control should be calculated directly in discrete-time form, since the computation in continuous-variable form followed by a discretization procedure can lead to an impulsive control that no longer resembles the original continuous version. As a consequence, the optimality of the continuous control is no longer valid after discretization.

Fig. 5 Solution C: Time evolution diagram, and the number of predators launched per stage, during the development of the pest control under a high control action policy. The damage due to pests is small, while the cost of launched predators is high.

6. Discussion

An interesting conclusion can be drawn from Fig. 2: there are two distinct regions in the Pareto set, one from point A to point B, and the other one from point B to point C. In the first region, in which the control action objective function varies from about 185 to about 235 predators launched per square meter (total objective function along the cultivation period), the marginal gain of launching predators in the soybean crop stays constant within most of the control variable range (approximately 2.3 prey/predator). In this region, an almost affine relation between the cost of control action and the cost of damage to the crop holds. Let this region be called region I.

Fig. 6 Time evolution of the number of prey and predators, and the number of predators launched, for a continuous-time control action policy and its discretized counterpart. The curves of continuous control are represented by continuous curves, the curves of discretized control by dashed curves, and the discretized control action by discrete points.


In the other region, there is a saturation of the benefits obtained by further control action: the damage to the crop presents a decreasing marginal gain. Let this region be called region II.

Ecologically, the tendency of region II to become vertical as the control action becomes stronger is related to a quick collapse of the prey population when too many predators are released.

From an economic viewpoint:

• The employment of control action with objective function over 240 would be inefficient. This would be the maximum value to be recommended in practice.

• The strategy to be employed would probably be chosen from a set of two alternatives: the control action with predator objective function about 190, and the control action with predator objective function about 240.

• There would exist a critical soybean price for which any control action with predator objective function between 190 and 240 would be acceptable. For this price, the economic cost of launching more predators (augmenting the control effort) would be exactly compensated by the economic gain obtained from the additional soybean that would be harvested.

• Otherwise, the relative prices of the soybean and of the control action would define which strategy would be chosen, necessarily one of the two extreme strategies of region I. A low market price for soybean would lead to the control action with predator objective function 190: in this case, the economic gain due to the additional harvest yield that would be obtained if more predators were launched does not cover the cost of the control action. On the other hand, a high market price for soybean would lead to the control action with objective function 240, since the cost of launching up to 240 predators is covered by the economic gain of selling the additional soybean that is harvested.

• Launching more than 240 predators would present no economic gain: the additional yield would be small, since the system would operate in region II.

The critical relation between soybean market price and control action cost that separates the two strategies is given by the slope of the line segment fitted to region I. From the viewpoint of the soybean farmer, this relation in fact defines the control policy to be adopted. From the viewpoint of the predator producer, however, this critical relation defines a price threshold to be observed: a small variation in price can change the demand for the product by a factor of about 25%.
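The figures above can be checked with a short calculation using the quoted objective values of solutions A = (158.98, 186.85) and B = (18.93, 234.36) as the endpoints of region I. Note that the secant slope between the endpoints gives the average trade-off over the region, which is somewhat larger than the ~2.3 prey/predator local marginal gain quoted in the text for most of the range.

```python
# Endpoint values of region I, as quoted for solutions A and B.
j1_a, j2_a = 158.98, 186.85
j1_b, j2_b = 18.93, 234.36

# Average trade-off over region I: prey avoided per extra predator released.
secant = (j1_a - j1_b) / (j2_b - j2_a)
print(round(secant, 2))                   # 2.95 (average over the region)

# Demand swing for the predator producer between the two candidate
# strategies (about 190 vs. about 240 predators per square meter):
print(round((240.0 - 190.0) / 190.0, 2))  # 0.26, i.e. roughly 25%
```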

Figure 7 represents the Pareto set that would be obtained without imposing the constraint that the economic injury level of pests (X = 20 pests per square meter) must never be attained: this extends the Pareto curve up to points in which the number of predators released becomes almost zero and the number of prey becomes very high. The Pareto curve is also extended at the other extreme, in which the number of predators grows while the number of prey remains near zero. The economic injury level X defines the right extreme of the Pareto curve of Fig. 2 that is employed in the economic analysis. It should be noticed that, beyond the economic injury level, the damage to the crop would no longer fit a linear relation with the number of pests, which means that the Pareto set of Fig. 7 is not meaningful outside the region of economic analysis.

It is also remarkable that:

• The strategy known as optimal control is not suitable for practical purposes, because it relies on a continuous action of introduction of predators. An alternative could be to discretize the control action in time, but the operation of discretization distorts the resulting control, causing the loss of optimality.

Fig. 7 Pareto-front obtained by NSGA-II in problem (4), when no constraint is considered. The Pareto-front of Fig. 2 is superimposed, represented by larger points. The axes correspond to the number of prey and the number of predators to be released, per square meter, as measured at the beginning of each time interval and summed along the time horizon.

• A fixed non-null predator insertion cost per stage would describe more accurately the actual cost of control. This modification of the objective function formulation can be easily performed within the proposed methodology, in contrast with the continuous-time optimal control, which cannot accommodate such a cost associated with a discrete action of predator insertion.

Finally, it should be mentioned that the proposed strategy can be run once per time stage, in order to adapt to eventual changes in system variables due to external disturbances and/or model inaccuracies. The time needed to run the NSGA-II for solving the problem is of the order of some minutes on a standard laptop, which is much smaller than the interval between two control actions, which is of the order of two weeks. Although it generates open-loop control actions, the proposed strategy becomes virtually closed-loop with the periodic updating of the empirically observed problem conditions in new runs.
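This per-stage replanning scheme can be sketched as a receding-horizon loop. Both helper functions below are placeholders of our own: `optimize_schedule` stands in for a full NSGA-II run started from the observed state, and `observe_populations` stands in for a field measurement that may disagree with the model prediction.

```python
# Sketch of the "virtually closed-loop" use of the method: before each stage,
# re-optimize from the currently observed populations and apply only the
# first impulse of the resulting plan. Both helpers are PLACEHOLDERS.

def optimize_schedule(x_obs, y_obs, stages_left):
    """Placeholder for an NSGA-II run over the remaining horizon, started
    from the observed state; here it just returns a constant-release plan."""
    return [5.0] * stages_left

def observe_populations(x_pred, y_pred):
    """Placeholder for a field measurement; the model prediction is
    perturbed to mimic disturbances and model inaccuracy."""
    return x_pred * 1.1, y_pred * 0.9

applied = []
x, y = 0.01, 0.01
for k in range(10):                      # 10 stages of 20 days
    plan = optimize_schedule(x, y, 10 - k)
    applied.append(plan[0])              # apply only the first impulse
    x, y = observe_populations(x, y + plan[0])
print(len(applied))  # 10
```

This is the same idea as model predictive control: optimality is computed open-loop, but feedback enters through the repeated re-optimization from measured states.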

7. Conclusion

There is a growing consensus about the need for environmental care in human activity. This work is inserted within this effort, studying the issue of implementing the biological control of pests in agriculture. The economic viability of such a pest control policy is a key factor for its adoption, and the results of this study reveal an interesting structure underlying this problem: there is a dual strategy, related to the price relationship between predator production cost and soybean selling price, that determines whether the biological control policy can be employed on a large scale.

The proposed dynamic model, with nonlinear continuous-time dynamics and impulsive control, is directly applicable to the problem. The multi-objective genetic algorithm NSGA-II has proved suitable for solving this problem, with a reasonably small computational effort. The proposed methodology, thanks both to the dynamic model and to the multi-objective genetic algorithm, is flexible for dealing with different constraints, objective functions, external disturbances, etc.

Future research will include a more detailed analysis of the effect of system model uncertainties on the resulting control policies. It is expected that the main conclusions of this paper, in their general lines, will remain valid.

Acknowledgements

The authors would like to thank the Brazilian agencies CAPES, CNPq and FAPEMIG for the financial support. The authors are also grateful to the anonymous reviewers for their insightful comments.

References

Bertsekas, D.P., 1995. Dynamic Programming and Optimal Control. Athena Scientific, Nashua.

Bertsekas, D.P., 2005. Dynamic programming and suboptimal control: a survey from ADP to MPC. Eur. J. Control 11(4–5), 310–334.

Bor, Y.J., 2003. Uncertain control of dynamic economic threshold in pest management. Agric. Syst. 78, 105–118.

Burden, R.L., Faires, J.D., 2003. Numerical Analysis. Thomson, Belmont.

Caltagirone, L.E., Doutt, R.L., 1989. The history of the vedalia beetle importation to California and its impact on the development of biological control. Ann. Rev. Entomol. 34, 1–16.

Cardoso, R.T.N., Takahashi, R.H.C., Fonseca, C.M., 2009. An open-loop invariant-set approach for multi-objective dynamic programming problems. In: Proceedings of the IFAC Workshop on Control Applications of Optimization, Jyvaskyla, Finland, May 2009. IFAC.

Dar'in, A.N., Kurzhanskii, A.B., Selesznev, A.V., 2005. The dynamic programming method in impulsive control synthesis. Ordinary Differ. Equ. 41(11), 1491–1500.

Darwin, C.R., 1882. The Variation of Animals and Plants under Domestication. Murray, London.

Deb, K., Agrawal, S., Pratab, A., Meyarivan, T., 2000. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Proceedings of the VI Conference on Parallel Problem Solving from Nature. Lecture Notes in Computer Science, vol. 1917, pp. 849–858. Springer, Berlin.

DeBach, P., 1964. Biological Control of Insect Pests and Weeds. Van Nostrand-Reinhold, New York.

DeBach, P., 1991. Biological Control by Natural Enemies. Cambridge University Press, Cambridge.

Feltrin, C.C., Rafikov, M., 2002. Aplicação da função de Lyapunov num problema de controle ótimo de pragas. Tend. Mat. Apl. Comput. 3(2), 83–92 (in Portuguese).

Freeman, R., Primbs, J., 1996. Control Lyapunov functions: new ideas from an old source. In: Proceedings of the 35th IEEE Conference on Decision and Control, pp. 3926–3931.

Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading.

Griffiths, G.J.K., Holland, J.M., Bailey, A., Thomas, M.B., 2008. Efficacy and economics of shelter habitats for conservation biological control. Biol. Control 45(2), 200–209.

Holland, J.H., 1962a. Concerning efficient adaptive systems. In: Yovits, M.C., Jacobi, G.T., Goldstein, G.D. (Eds.), Self-Organizing Systems, pp. 215–230. Spartan Books, Washington.

Holland, J.H., 1962b. Outline for a logical theory of adaptive systems. J. Assoc. Comput. Mach. 9, 297–314.

Holland, J.H., 1975. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor.

Liu, B., Teng, Z., Chen, L., 2006. Analysis of a predator–prey model with Holling II functional response concerning impulsive control strategy. J. Comput. Appl. Math. 193(2), 347–362.

Rafikov, M., Balthazar, J.M., 2005. Optimal pest control problem in population dynamics. Comput. Appl. Math. 24(1), 65–81.

Srinivas, N., Deb, K., 1994. Multi-objective optimization using nondominated sorting in genetic algorithms. Evol. Comput. 2(3), 221–248.

Tang, S., Xiao, Y., Chen, L., Cheke, R., 2005. Integrated pest management models and their dynamical behavior. Bull. Math. Biol. 67, 115–135.

Yang, T., 1999. Impulsive control. IEEE Trans. Autom. Control 44(5), 1081–1083.

Yang, T., 2001. Impulsive Control Theory. Springer, New York.

Zhang, H., Jiao, J., Chen, L., 2007. Pest management through continuous and impulsive control strategies. BioSystems 90, 350–361.