IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 51, NO. 2, FEBRUARY 2006 183
Relatively Optimal Control With Characteristic Polynomial Assignment and Output Feedback
Franco Blanchini, Senior Member, IEEE, and Felice Andrea Pellegrino
Abstract—A relatively optimal control is a stabilizing controller that, if initialized at its zero state, produces the optimal (constrained) behavior for the nominal initial condition of the plant (without feedforwarding and tracking the optimal trajectory). In this paper, we prove that a relatively optimal control can be obtained under quite general constraints and objective functions, in particular without imposing zero terminal constraints as previously done. The main result is that stability of the closed-loop system can be achieved by assigning an arbitrary stable closed-loop characteristic polynomial to the plant. An explicit solution is provided. We also show how to choose the characteristic polynomial in such a way that the constraints (which are enforced on a finite horizon) can be globally or ultimately satisfied (i.e., satisfied from a certain time on). We provide conditions to achieve strong stabilization (stabilization by means of a stable compensator); precisely, we show how to assign both the compensator and the closed-loop poles. We consider the output feedback problem, and we show that it can be successfully solved by means of a proper observer initialization (based on output measurements only). We discuss several applications of the technique and provide experimental results on a cart-pendulum system.
Index Terms—Characteristic polynomial assignment, constrained control, linear control, optimal control.
I. INTRODUCTION
In this paper, we consider the problem of determining
an optimal control for linear discrete-time systems under
general constraints and cost functions. Except for very special
cases, determining an optimal control in a feedback form, under
output or input constraints is a computationally hard task. The
techniques based on dynamic programming [4], [14], which
provide general solutions, are known to be effective only for
systems of small dimensions. There have been several attempts
in the literature to approach optimal constrained control prob-
lems such as those considering the minimum-time problem
[8], [9] or the linear quadratic constrained problem [2], [3].
However, these techniques are based on set-computation in
the state space, and therefore may encounter some difficulties
when applied to high-dimensional systems. Optimal control
can also be addressed via the receding-horizon approach (see
the surveys [7], [11], [12]), whose main drawback is the need
to solve an optimization problem online.
In [6], we have proposed a different approach named rela-
tively optimal control motivated by the observation that in many
Manuscript received June 10, 2004; revised October 4, 2005. Recommended by Associate Editor M. Rotea. This work was supported by MURST, Italy.
The authors are with the Dipartimento di Matematica e Informatica, Università di Udine, 33100 Udine, Italy (e-mail: blanchini@uniud.it; fapellegrino@units.it).
Digital Object Identifier 10.1109/TAC.2005.863493
problems the requirement of optimality and constraint satisfaction
is essential only for some special initial conditions. Indeed, there
are many examples of plants (such as lifts, bascule bridges,
automatic gates, floodgates, cranes, etc.) which are explicitly
built to perform, under normal circumstances, a specific operation
along a specific trajectory with known initial and final states.
Moreover, in many problems the initial state is determined by
perturbations, such as impulses, and therefore has privileged
directions given by the columns of the disturbance input matrix.
Optimality from a specific initial condition can be achieved
by a feedforward plus feedback compensator. However, the
feedforward solution is not practical, since the system performs
appropriately only for the nominal operation, while for
nonnominal initial conditions the behavior is inadequate. In [6],
a pure feedback control was proposed which is stabilizing but is
required to satisfy the constraints and to be optimal only for a
given initial condition (or a set of given initial conditions). However,
a zero terminal-state constraint was assumed in the optimization
problem, and this restriction led to the achievement of
dead-beat compensators only.
In this paper, we still work with general convex (integral and
pointwise) constraints and costs, but without a zero terminal state.
Removing the assumption of finite-horizon arrival to the origin
raises again the question of how stability can be assured for the
closed-loop system. Indeed, even for the nominal initial state,
which results in the optimal trajectory, the problem of the global
transient (precisely, the evolution after the optimization horizon)
remains open.
As a first result, we show that closed-loop stability can indeed
be assured by assigning the closed-loop characteristic
polynomial (and, therefore, the closed-loop poles). Assigning
the closed-loop poles is desirable since it allows one to impose
suitable modes on the closed-loop plant. We then further investigate
the choice of the closed-loop polynomial
and its influence on global constraint satisfaction. We also
solve the output feedback problem, left open in the previous
work. The main contributions of the paper can be summarized
in the following points.
• The problem of determining a relatively optimal con-
trol is generalized by removing the assumption of zero
terminal constraint. The consequence is that there is no
restriction to FIR closed-loop systems.
• In the state feedback case, the relatively optimal con-
troller, if initialized at its zero state, produces the op-
timal open-loop constrained trajectory if the plant ini-
tial state is the nominal one. The compensator is linear
time-invariant dynamic, and its order is equal to the
time horizon minus the order of the plant.
0018-9286/$20.00 © 2006 IEEE
• The controller is stabilizing for all initial conditions
and the closed-loop polynomial can be assigned by
means of an additional linear constraint in the opti-
mization problem.
• In the output feedback case, the problem can be solved
by means of an observer which has to be properly ini-
tialized. In this case, the compensator order is equal
to the open-loop optimization horizon. It is shown that
this initialization (based on the measured output only)
is necessary to achieve the same open-loop performance.
• It is shown how to choose the characteristic polynomial
to ensure that, provided the optimal finite-horizon
trajectory satisfies the constraints, they remain satisfied
at all times.
• A solution to the problem of ultimate constraints sat-
isfaction (the ultimate boundedness in a target) is pro-
posed. Again this reduces to an appropriate choice of
the characteristic polynomial.
• The approach can be applied to several optimization
problems such as optimal arrival to a target, optimiza-
tion of the feedback matrix for multiple-input pole as-
signment and impulse response optimization.
• An application to the time-optimal arrival to a target set
under constraints for a cart-pendulum system is pro-
posed. Experimental results are provided to show the
efficacy of the approach and the behavior for nonnom-
inal initial states.
II. PROBLEM STATEMENT
Consider the discrete-time reachable system
(1)
where is the state, is the control input,
is the performance output, is the mea-
sured output, and , , , , and are matrices of appro-
priate dimensions. We assume that the system is detectable from
the measured output . For this system, we consider the
following locally bounded convex functions:
(we assume them 0-symmetric, i.e., ,
and ) and the constraint
(2)
where is a 0-symmetric convex and closed set (as well as all
the sets we will introduce in the following). Consider also the
final constraint
(3)
Fix and fix any Schur-stable (i.e., having roots of
modulus strictly less than one) polynomial
(4)
Fig. 1. Feedforward + feedback and pure feedback solution.
and consider the next optimization problem with assigned initial
condition (we assume as initial time)
(5)
(6)
(7)
(8)
(9)
(10)
(11)
(12)
(13)
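The displayed equations of problem (5)–(13) are not legible in this copy. As a rough numeric illustration of a finite-horizon convex program of this general type (minimum control effort under input bounds, with a terminal box in place of a zero terminal state), the following sketch uses an illustrative double-integrator plant; all matrices, bounds, and names here are our assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative double-integrator plant (sampling time 1), not the paper's model.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
n, N = 2, 10
x0 = np.array([5.0, 0.0])
u_max, eps = 1.0, 1e-3          # input bound and terminal-box half-width

# x(N) = A^N x0 + G u, where column i of G is A^(N-1-i) B.
G = np.hstack([np.linalg.matrix_power(A, N - 1 - i) @ B for i in range(N)])
xN_free = np.linalg.matrix_power(A, N) @ x0

# LP variables z = [u(0..N-1), t(0..N-1)]; minimize sum t with |u(i)| <= t(i).
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.vstack([
    np.hstack([I, -I]),                  #  u - t <= 0
    np.hstack([-I, -I]),                 # -u - t <= 0
    np.hstack([G, np.zeros((n, N))]),    #  G u <= eps - A^N x0
    np.hstack([-G, np.zeros((n, N))]),   # -G u <= eps + A^N x0
])
b_ub = np.concatenate([np.zeros(2 * N), eps - xN_free, eps + xN_free])
bounds = [(-u_max, u_max)] * N + [(0, None)] * N
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
u_opt = res.x[:N]
```

Here the 1-norm cost makes the program a linear program (via the auxiliary variables t); quadratic or other convex costs and pointwise state constraints fit the same finite-horizon template.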
The formulation of the problem is essentially that of [6], the
only substantial difference being the last two constraints. The
constraint (12) is a generalization of the terminal constraint
which can be considered as a special case by assuming
Note that we have imposed a constraint on the final part of the
horizon from to , where . This allows
for solving the problem of ultimate constraint satisfaction for
and suitably chosen, as we will show in Section V.
Note also that the cost includes a weight on the terminal state
. The problem of achieving the optimal trajectory can
be solved by means of a stabilizing feedback plus feedforward
control, as in Fig. 1 (left), according to the control law
(14)
where and denote the Z-transformed optimal input sequence
and the measured output sequence corresponding to the
optimal trajectory, respectively. It is obvious that, if the initial
condition is , the corresponding trajectory is the optimal
one, by definition of . As already discussed in [6], this
scheme has several disadvantages. In particular, the main
drawback is that the controller basically tracks the optimal
trajectory originating in ; therefore, if is far from (for instance,
the opposite state ), the resulting transient will be completely
unsatisfactory. By contrast, in this work we consider
a feedback scheme as in Fig. 1 (right). The basic requirement for this
scheme is that the system be stable and that the transient be
optimal if the initial condition is . As we will see, this kind
of scheme yields many advantages. We
are now in a position to formalize the problem. We initially
consider state feedback, namely .
Problem 1: Find a state feedback compensator having a pure
feedback structure (Fig. 1, right) such that for both
and the control and state trajectories are optimal
and which is stabilizing (possibly with constraint violations)
for all initial conditions.
In the formulation of the problem in [6], we assumed that the
compensator state is not initialized, namely we assumed
. In the new formulation, we remove this assump-
tion. Actually, the condition is maintained in the state
feedback case, while for the measurement feedback we have to
suitably initialize the compensator. Also, the optimality requirement
for both and (which is redundant in the state feedback
case, since the provided compensator is linear) is introduced
to exclude a trivial solution of the output feedback problem.
Finally, we note that the constraints of the problem have to be
regarded as "soft" design specifications, whose violation
implies only a performance loss.
III. RELATIVELY OPTIMAL CONTROL WITH CHARACTERISTIC
POLYNOMIAL ASSIGNMENT—THE STATE FEEDBACK CASE
In this section, we show how to solve Problem 1. We intro-
duce the following technical assumption (which can be easily
removed, see [6]).
Assumption 1: The initial state does not belong to any
proper (A, B)-invariant subspace¹ of the state space.
Denote by
(15)
the matrix containing the optimal state trajectory and by
(16)
the matrix containing the optimal input sequence. By
construction, and satisfies
(12). This means that matrices and satisfy the next equation
(17)
where the square matrix is the Frobenius matrix associated
with
(18)
which is, therefore, a stable matrix. In view of Assumption 1,
the next Lemma holds [6].
Lemma 3.1: The matrix has rank , namely has full row
rank.
¹A subspace S is said to be (A, B)-invariant if for all x ∈ S there exists u(x) such that Ax + Bu(x) ∈ S [1]. It is said to be proper if S ≠ ℝⁿ.
First consider the case . Let us consider any
matrix of the form
(19)
(note that the first column is zero) such that the next square
matrix is invertible
(20)
Clearly, finding in such a way that the matrix is invertible
is always possible (see [6, App. A]). Now, denote by
(21)
Consider the linear compensator
(22)
(23)
where , , , and are achieved as the unique solution of
the linear equation
(24)
The next theorem states that such a compensator is stabilizing
and, denoting by , , the generic solution of the
closed-loop system, if we take and , then
the resulting trajectory is the optimal one, namely ,
, and .
Theorem 3.1: The compensator given by (22)–(24) is a solu-
tion of Problem 1.
Proof: By combining (17) and (21), we derive the fol-
lowing equation:
(25)
The closed-loop system with compensator (22) and (23) has
state matrix
(26)
By (24) and (25), we have that
(27)
Since is invertible, the matrix , so that it is
similar to , hence is stable. If we consider the expression of
the th column in (27) and the expression of in (20), denoting
by the th column of , we see that if we take the initial
condition
then the solution
(28)
is such that
and
and, denoting by the th column of
(29)
and, therefore, the state sequence is the optimal one. The corre-
sponding control sequence is achieved by considering (22) and
(23) and, in particular
so that, from (24), the control sequence is ,
, the optimal one.
In the special case , is square and no augmentation
is necessary. In this case we get the linear static compensator
with .
Remark 3.1: We point out that the role of constraint (12) is
essential. Precisely, if we formulate the problem without (12)
and seek a compensator which is both relatively optimal
and produces the closed-loop characteristic polynomial ,
then such a compensator must be constructed on a trajectory
which satisfies constraint (12) (as does any other closed-loop
unforced trajectory).
A. Compensator Eigenvalue Assignment
In this section we present a synthesis procedure which allows
for the simultaneous assignment of both the closed-loop and the
compensator eigenvalues.
Procedure: Simultaneous placement of closed-loop and
compensator poles.
• Fix a stable (whose eigenvalues are those of the com-
pensator) and a stable
(whose roots are the closed-loop poles).
• Consider the linear equation
(30)
where is the th row of and
columns
• Each of the infinitely many solutions of (30) provides a pair
, . If the
resulting matrix is invertible, compute
and and from (24).
It is not difficult to see that the previous equation corresponds to
, and, according
to (27)
where the last equality holds since .
IV. OUTPUT FEEDBACK RELATIVELY OPTIMAL CONTROL WITH
OBSERVER INITIALIZATION
The relatively optimal control proposed in this paper is
conceived for state feedback, although it can be used for output
feedback provided that the problem allows for the initialization
of an observer. As mentioned in the Introduction, our compensator
is suitable for special problems, such as point-to-point
operations, in which such initialization is reasonable. Basically, in
this way we obtain a compensator which is not time-invariant.
Let us reconsider the output equation
with . We introduce the following technical as-
sumption.
Assumption 2: The vector and the matrix are such that
We will comment on this assumption later. Consider any ob-
server with initialization2
(31)
(32)
where is Schur-stable. Then the compensator can be
computed exactly as shown in the previous sections with the
²The trivial initialization x̂(1) = x̄ is not feasible, since we require optimality from both x̄ and −x̄.
difference that now the feedback of the estimated state is con-
sidered instead of (22) and (23)
(33)
(34)
We formalize the result in the next theorem.
Theorem 4.1: Consider any matrix such that
is a Schur matrix. Then, the output feedback compensator (33)
and (34) with the observer (32) is stabilizing and, if initialized
as in (31), then it is optimal for the initial state . If the
cost and the constraints are symmetrical, then it is optimal also
for and satisfies the constraints for all
for . If the problem is unconstrained and the functions
and are homogeneous then it is optimal for all the initial
conditions , .
Proof: The fact that the compensator is stabilizing is an
immediate consequence of the fact that if we take into account
the error we have that
, so that we can write the closed-loop system as
which is stable as long as is Schur. Furthermore, if
the observer is initialized as (31), as long as , we have
so that . Then, for .
As far as the observer initialization is concerned, we can
easily see that it cannot be avoided in a relative optimality con-
text for the following reason. If we take as a fictitious
input matrix and we consider the augmented system
then the closed-loop output impulse response with our control
turns out to be . The corresponding
closed-loop transfer function is
Unfortunately, such a closed-loop transfer function can
be the result of unstable pole-zero cancellations, as can be seen
in very simple examples. Indeed, the admissible closed-loop
impulse responses are subject to well-known restrictions [10],
[16], precisely, that they must preserve the unstable open-loop
zeros to assure internal stability and these constraints are not
included in the original optimization problem to compute
and . As we have seen, this problem disappears if we allow
for an observer initialization since this basically means that we
are considering a time-varying compensator (although of a very
special type and very simple to implement).
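Since the observer equations (31)–(34) are not legible in this copy, the following is a minimal Luenberger-observer sketch consistent with the requirement stated below that the estimation-error matrix be Schur; the plant, the observer gain design, and the assigned observer poles are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative detectable pair (A, C), not the paper's cart-pole model.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# Dual pole placement: choose L so that A - L C has the assigned Schur poles.
L = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T

x = np.array([1.0, -0.5])   # true (autonomous) plant state
xh = np.zeros(2)            # observer state, started at zero
for _ in range(50):
    y = C @ x                        # measured output
    xh = A @ xh + L @ (y - C @ xh)   # observer update
    x = A @ x                        # plant update (u = 0)
```

The estimation error obeys e(k+1) = (A − LC) e(k), so it converges to zero for any observer initialization; the role of the special initialization (31) in the paper is only to recover the optimal trajectory exactly from the nominal initial state.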
Finally, we would like to comment on Assumption 2.
If we introduce the fictitious matrix , this is a "relative
degree" assumption. If it fails, then solving the relatively
optimal control problem is hopeless. Indeed, if , we
also have ; therefore, the controller cannot discriminate
between the two situations and , and thus
cannot (except in trivial cases) be optimal for both, as required
in the problem statement. To summarize, we make the following
points.
points.
• As long as we have state feedback, any open-loop (op-
timal) trajectory from can be achieved by a dynamic
pure feedback compensator.
• In the output feedback case the trajectory can be
achieved by means of a proper observer initialization
(under Assumption 2).
• If the observer initialization is not possible, achieving
relative optimality might not be compatible with internal
stability.
V. GLOBAL AND ULTIMATE CONSTRAINTS SATISFACTION WITH
APPLICATIONS
The assumption of final zero state considered in [6] is a re-
striction and may have undesirable consequences. The provided
scheme removes such a restriction but leaves open the problem
of what happens after the first steps. We have seen that the
closed-loop modes are fixed by the characteristic polynomial
. In brief, we have to face two problems.
• How can we assure that the point-wise constraints im-
posed on the finite horizon are globally satisfied?
• How can we conveniently fix ?
Interestingly, we can address both problems simultaneously, as
shown in the sequel.
A. Constraint Satisfaction Via Polynomial Assignment
Let us consider the constraint
To assure that this constraint is satisfied even after the horizon
, one can take in such a way that
(35)
Indeed we can state the following result.
Theorem 5.1: Assume that is chosen according to (35)
and that the optimization problem is feasible. Then the closed-
loop system is stable and, if , satisfies the constraints
(9) for all .
Proof: The fact that is a Schur-stable polynomial
follows immediately from (35). The constraint (9)
holds by construction for . We show that it holds
also for . Since is the closed-loop matrix, after
reaching the end of the optimal trajectory the system evolves
according to the following:
By construction and
, hence the previous equations can
be rewritten as
which are equivalent to the following:
By setting , we obtain
(36)
The previous expressions for and can be substituted
in (23), leading to
(37)
Since , from (36) and (37), we get:
therefore, after steps, the output is a weighted sum of
the preceding outputs. Since the weights satisfy (35), the
following hold for :
Hence, since is convex and 0-symmetric and
by construction, it follows that
, .
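Condition (35) is not legible in this copy. A coefficient condition of this kind is, for a monic polynomial z^N + p_{N-1} z^{N-1} + ... + p_0, the bound |p_0| + ... + |p_{N-1}| < 1: a root with |z| >= 1 would give |z|^N <= (sum_i |p_i|) |z|^{N-1} < |z|^{N-1} <= |z|^N, a contradiction, so every root lies strictly inside the unit circle. A quick numeric check of this sufficient condition (our example, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
# Random lower coefficients, rescaled so their absolute sum is 0.99 < 1.
p = rng.standard_normal(N)
p *= 0.99 / np.abs(p).sum()

# Coefficients of z^N + p_{N-1} z^{N-1} + ... + p_0, highest degree first.
coeffs = np.concatenate([[1.0], p[::-1]])
roots = np.roots(coeffs)
```

For |z| < 1 the same bound gives |z|^N <= 0.99, so here the roots are in fact confined to the disk of radius 0.99^(1/6).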
We can also solve a generalized version of the problem,
namely ultimate constraint satisfaction. Let us replace the
constraints with (13) and choose in such a way that
and (38)
We have the following.
Theorem 5.2: Assume that is chosen according to (38)
and that the optimization problem is feasible. Then, if ,
the closed-loop system satisfies the constraints (13) for all
.
Proof: To prove the theorem, set . Then, it is
enough to notice that in view of (38), expression (36) becomes
(39)
Then, the proof proceeds exactly as in the previous theorem by
replacing by .
B. Ultimate Arrival to a Target
A possible alternative way to impose the pointwise con-
straints over the infinite horizon is to impose the arrival of the
final state to a target set as follows:
(40)
where must be suitably chosen.
A first possibility is that (or a subset of it) is controlled
invariant, that is, there exists a local control that renders it
positively invariant and such that the constraints are satisfied for all
initial conditions inside the set [5]. In this case, one can switch
to the local controller as soon as the condition is
satisfied. This idea is well-known in the receding horizon frame-
work (see [15]).
A second possibility is the following. Consider the convex
hull of the vectors of the optimal sequence and their opposites,
and assume that is small enough to be included in
(41)
Then the inclusion (40) implies the condition
(42)
which means that (27) is satisfied with characteristic polynomial
. Therefore, instead of
arbitrarily fixing one can proceed as follows.
• Remove the constraint (12) and take a sufficiently
small final set in such a way that (41) is satisfied
for the optimal sequence.
• Compute the vector such that and
.
• Determine the compensator as previously shown.
The first of these steps might require iteration: if
condition (41) is not satisfied, one should reduce (possibly
increasing ) and repeat the computation.
Remark 5.1: In the "ultimate confinement" problem, one
can leave free and solve a minimum-time
problem of reaching the target. This is what we do in the
experiments of Section VI.
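To make the minimum-time computation in this remark concrete, here is a hedged sketch on an illustrative double integrator (all matrices, bounds, and the small terminal box standing in for the target set are our assumptions): the minimum horizon is found by scanning N and checking, via an LP feasibility problem, whether some bounded input sequence drives the final state into the target.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative double-integrator plant, not the paper's model.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
x0 = np.array([5.0, 0.0])
u_max, eps = 1.0, 1e-3


def reachable_in(N):
    """LP feasibility: can some |u(k)| <= u_max put x(N) in the eps-box?"""
    G = np.hstack([np.linalg.matrix_power(A, N - 1 - i) @ B for i in range(N)])
    xN_free = np.linalg.matrix_power(A, N) @ x0
    A_ub = np.vstack([G, -G])                 # |G u + A^N x0| <= eps
    b_ub = np.concatenate([eps - xN_free, eps + xN_free])
    res = linprog(np.zeros(N), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-u_max, u_max)] * N)
    return res.success


# Smallest horizon for which the target box is reachable.
N_min = next(N for N in range(1, 50) if reachable_in(N))
```

Once the minimum horizon and the corresponding optimal open-loop sequence are available, the compensator is built from them exactly as in the state-feedback construction.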
C. Optimal Pole Assignment
A known problem in pole assignment via static state feedback
for multiple-input systems is how to exploit the degrees of freedom
offered by the multiple choices of the feedback matrix [13]. This
problem can be addressed in the present context as a special case for
. We can fix the first column of as , and solve
the optimization problem to determine the remaining columns
of and the columns of . Matrix turns out to be square
Fig. 2. Cart-pole system.
invertible as long as Assumption 1 holds. From the equation
, well-known in the eigenvalue assignment
context, we have the static linear control
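The formula itself is not legible in this copy. As a numeric sketch of this static design (an illustrative single-input example; producing a pair X, U with A X + B U = X P by solving a Sylvester equation is our own route, a standard device in eigenvalue assignment, and the particular matrices are assumptions):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop eigenvalues 1 and -2
B = np.array([[0.0], [1.0]])
# Companion (Frobenius) matrix of the desired p(z) = z^2 - 0.5 z + 0.06,
# whose roots (the target closed-loop poles) are 0.2 and 0.3.
P = np.array([[0.0, -0.06], [1.0, 0.5]])

U = np.array([[1.0, 1.0]])                # a free choice; any generic row works
# Solve A X - X P = -B U, i.e. A X + B U = X P (unique since A and P share
# no eigenvalues; X is generically invertible).
X = solve_sylvester(A, -P, -B @ U)
K = U @ np.linalg.inv(X)                  # static gain: A + B K = X P X^{-1}
poles = np.sort(np.linalg.eigvals(A + B @ K))
```

Since A + BK = X P X^{-1}, the closed-loop matrix is similar to the companion matrix P, so its eigenvalues are exactly the roots of the desired polynomial; for multiple inputs, the free entries of U are the degrees of freedom to be optimized.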
Just as an example, consider the problem of finding the con-
troller which places the poles in the desired positions and that
minimizes the maximum control effort, in terms of a proper
norm , in the transient starting from . The natural problem
to be solved is the following:
(43)
(44)
where carries the desired polynomial as previously shown.
This problem is a convex program, and it can be
reduced to a linear program if one chooses the
-norm for . Clearly, the solution does not assure that the
condition is satisfied after the transient, even
for the initial condition . However, the condition can
be assured for any initial condition if we consider a characteristic
polynomial subject to the constraint (35).
VI. EXPERIMENTAL RESULTS
In this section, we report some experimental results obtained
by applying a relatively optimal controller to the cart-pole
system shown in Fig. 2. The system has one input (the current
given to the electrical motor, proportional to the force applied
to the cart) and two outputs (the position of the cart and the
angle of the pole , measured by means of two encoders). The
system has the following state vector: . A scheme
of the system is reported in Fig. 3. The continuous-time model,
linearized around a stable equilibrium point is
where is the acceleration of gravity, the length of the pole,
and the friction coefficients for the pole and the cart, re-
spectively, and a coefficient that takes into account the whole
Fig. 3. Scheme of the cart-pole system.
Fig. 4. Open-loop optimal trajectory.
mass of the cart and the pole and the proportionality between the
input and the force applied to the cart (note that the dynamics of
the pole does not influence that of the cart). By substituting the
proper values ( , and having been determined experimen-
tally) and performing a zero-order-hold sampling of the system
(sampling time: 0.125 s) we get the state and input matrices:
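The identified state and input matrices are not legible in this copy. As a hedged sketch of the zero-order-hold sampling step, the following uses a linearized cart-pole model with assumed parameter values (the paper's friction and gain coefficients were identified experimentally and are not reproduced here).

```python
import numpy as np
from scipy.signal import cont2discrete

# Assumed parameter values, for illustration only.
g, l = 9.81, 0.5              # gravity [m/s^2], pole length [m]
b_c, b_p, k = 0.1, 0.2, 1.0   # cart friction, pole friction, input gain

# State [cart pos, cart vel, pole angle, pole angular vel]; the cart rows
# have zeros in the pole columns, matching the paper's remark that the
# pole dynamics do not influence the cart.
Ac = np.array([
    [0.0, 1.0,      0.0,   0.0],
    [0.0, -b_c,     0.0,   0.0],
    [0.0, 0.0,      0.0,   1.0],
    [0.0, b_c / l, -g / l, -b_p],
])
Bc = np.array([[0.0], [k], [0.0], [-k / l]])
C = np.eye(4)
D = np.zeros((4, 1))

# Zero-order-hold discretization at the paper's sampling time of 0.125 s.
Ad, Bd, Cd, Dd, dt = cont2discrete((Ac, Bc, C, D), dt=0.125, method='zoh')
```

Because Ac is block lower-triangular, the discretized Ad inherits the zero block in its first two rows: the sampled cart subsystem is likewise unaffected by the pole states.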
Now, we solve the following open-loop minimum-time con-
strained problem: Find the shortest input sequence ,
that drives the system from to a
neighborhood of the origin under the constraints
and . We stress that these constraints are not hard:
they can be violated without particular consequences. They have
to be considered as design specifications, limiting the acceleration
of the cart and the angle of the pole along the optimal trajectory.
As a final constraint, we impose
For , , , , ,
we get the optimal
open-loop trajectory reported, for the cart position and the pole
angle, in Fig. 4. A phase-plane representation of such trajectory
(which is steps long) is shown in Fig. 5.
By choosing a that ensures invertibility of , we get a
relatively optimal state-feedback dynamic controller of order
(note that being , the
Fig. 5. Phase-plane representation of the open-loop optimal trajectory.
Fig. 6. Actual trajectory (squares) and the optimal one (points).
Fig. 7. Trajectory of the controlled system from x̄ = [0 0 −0.3 0]ᵀ.
characteristic polynomial assigned to the closed-loop system is
). Fig. 6 shows a comparison
between the optimal trajectory (dot-marked line) and that ob-
tained by means of the proposed compensator (square-marked
line) starting from the nominal initial condition. Notwithstanding
the extremely simple model (which neglects the strong
static friction and some dependence of the parameters on the
cart position), the controlled system behaves well
and the actual trajectory is very close to the optimal one. As
expected, the proposed feedback scheme produces a scaled
version of the optimal trajectory when the (nonnominal) initial
condition is aligned with the nominal one: Fig. 7 shows the
trajectory of the system from the nonnominal initial condi-
tion . In Fig. 8, we report the behavior
of the controlled system for the nonnominal initial condition
. The relatively optimal controller employed
Fig. 8. Trajectory of the controlled system from x̄ = [−0.51 0 0 0]ᵀ.
Fig. 9. Comparison between a state-feedback trajectory (continuous line) and an output-feedback trajectory (dashed line) obtained by means of a reduced-order observer.
for the previous experiments is in practice a state-feedback
controller, because the cart and pole velocities are computed by
means of second-order filters based on the position encoders,
read every 1 ms. As discussed in Section IV, whenever it is possible
to initialize an observer, an output-feedback relatively optimal
controller can be implemented. As an example, Fig. 9 shows the
state-feedback trajectory (continuous line) and the output-feedback
trajectory (dashed line) from the same initial condition
(aligned with the nominal one). We employed a second-order
discrete-time (reduced) observer having the same sampling time
as the discretized system, i.e., 0.125 s, according to the scheme
in [1]. In this case, the reduced observer has to be initialized at
its zero state when the controller is activated. The behavior of the
system under the output-feedback relatively optimal controller
is close (although slightly worse) to that obtained with
state feedback. Movies of the experimental results reported
here are available at http://www.dimi.uniud.it/pellegri.
VII. CONCLUSION
In this paper, we have shown that a relatively optimal con-
trol (i.e., a state feedback controller which produces the optimal
transient from a “nominal initial condition” and that is stabi-
lizing, possibly with constraint violation, from any other ini-
tial state) can be achieved without considering a zero terminal
constraint. As a result, we obtained a closed-loop system which
is not a finite-impulse response (FIR) but whose characteristic
polynomial can be arbitrarily assigned. We have also shown that we
can skip the assignment and still achieve stability by imposing suitable
terminal constraints on the open-loop optimization problem. The
problem of strong stabilization has been considered as well; precisely,
we have addressed the simultaneous assignment of the
closed-loop and compensator eigenvalues. In the output feedback
case, a relatively optimal control can be achieved by an
observer that has to be properly initialized. The experimental
results provided show that the presented technique performs
well on a real problem.
The proposed approach lends itself to further investigation.
Among the possible significant extensions, we mention the goal
of determining a (nonlinear) static relatively optimal control,
instead of a linear dynamic one. Furthermore, the constraints
considered in this paper are "soft" performance constraints.
If hard constraints, such as input saturations, are included in the
problem, then stabilization may clearly not be achievable
from every initial condition. Combining the proposed design
methodology with a suitable (possibly maximal) domain of
attraction is a subject of future research.
REFERENCES
[1] G. Basile and G. Marro, Controlled and Conditioned Invariants in Linear System Theory. Englewood Cliffs, NJ: Prentice-Hall, 1992.
[2] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, no. 1, pp. 3–20, Jan. 2002.
[3] A. Bemporad, F. Borrelli, and M. Morari, "Model predictive control based on linear programming—The explicit solution," IEEE Trans. Autom. Control, vol. 47, no. 12, pp. 1974–1985, Dec. 2002.
[4] D. P. Bertsekas, Dynamic Programming and Optimal Control. Belmont, MA: Athena Scientific, 2000.
[5] F. Blanchini, "Set invariance in control—A survey," Automatica, vol. 35, no. 11, pp. 1747–1767, 1999.
[6] F. Blanchini and F. A. Pellegrino, "Relatively optimal control and its linear implementation," IEEE Trans. Autom. Control, vol. 48, no. 12, pp. 2151–2162, Dec. 2003.
[7] C. E. Garcia, D. M. Prett, and M. Morari, "Model predictive control theory and practice—A survey," Automatica, vol. 25, no. 3, pp. 335–348, 1989.
[8] P. O. Gutman and M. Cwikel, "An algorithm to find maximal state constraint sets for discrete-time linear dynamical systems with bounded controls and states," IEEE Trans. Autom. Control, vol. AC-32, no. 3, pp. 251–254, Mar. 1987.
[9] S. S. Keerthi and E. G. Gilbert, "Computation of minimum-time feedback control laws for discrete-time systems with state-control constraints," IEEE Trans. Autom. Control, vol. AC-32, no. 5, pp. 432–435, May 1987.
[10] V. Kucera, Analysis and Design of Discrete Linear Control Systems, ser. Prentice-Hall International Series in Systems and Control Engineering. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[11] D. Q. Mayne, "Control of constrained dynamic systems," Eur. J. Control, vol. 7, pp. 87–99, 2001.
[12] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, vol. 36, pp. 789–814, 2000.
[13] P. Petkov, N. D. Christov, and M. M. Konstantinov, Computational Methods for Linear Control Systems. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[14] A. P. Sage and C. C. White, Optimum Systems Control. Upper Saddle River, NJ: Prentice-Hall, 1997.
[15] M. Sznaier and M. Damborg, "Suboptimal control of linear systems with state and control inequality constraints," in Proc. 26th Conf. Decision and Control, Los Angeles, CA, Dec. 1987, pp. 761–762.
[16] D. C. Youla, H. A. Jabr, and J. J. Bongiorno, "Modern Wiener–Hopf design of optimal controllers—Part II: The multivariable case," IEEE Trans. Autom. Control, vol. AC-21, no. 3, pp. 319–338, Mar. 1976.
Franco Blanchini (M’92–SM’05) was born in Leg-nano (MI), Italy, on December 29, 1959. He receivedthe Laurea degree in electrical engineering from theUniversity of Trieste, Trieste, Italy, in 1984.
He is a Full Professor with the Engineering Faculty of the University of Udine, Udine, Italy, where he teaches dynamic system theory and automatic control. He is a member of the Department of Mathematics and Computer Science of the same university, and is the Director of the System Dynamics Laboratory.
Dr. Blanchini has been an Associate Editor of Automatica since 1996. He was a member of the Program Committees of the 36th IEEE Conference on Decision and Control, San Diego, CA (1997), the 38th IEEE Conference on Decision and Control, Phoenix, AZ (1999), the 40th IEEE Conference on Decision and Control, Orlando, FL (2001), and the 42nd IEEE Conference on Decision and Control, Maui, HI (2003). He was Chairman of the 2002 IFAC Workshop on Robust Control, Cascais, Portugal, and Program Vice-Chairman of the Joint Conference on Decision and Control-ECC, Seville, Spain, December 2005. He is the recipient of the 2001 ASME Oil and Gas Application Committee Best Paper Award as a coauthor of the article "Experimental evaluation of a high-gain control for compressor surge instability," and of the 2002 IFAC prize survey paper award as the author of the article "Set invariance in control—A survey," Automatica, November 1999.
Felice Andrea Pellegrino was born in Conegliano, Italy, in 1974. He received the Laurea degree (magna cum laude) in engineering from the University of Udine, Udine, Italy, in 2000, and the Ph.D. degree from the same university, in 2005, with a thesis on constrained and optimal control.
From 2001 to 2003, he was a Research Fellow at the International School for Advanced Studies, Trieste, Italy. Currently, he is a Research Fellow with the Department of Mathematics and Computer Science, University of Udine. His research interests include control theory, computer vision, and pattern recognition.