Hybrid Impulsive State Feedback Control of Markovian Switching Linear Stochastic Systems
Communications in Applied Analysis 16 (2012), no. 4, 665–686
S. SATHANANTHAN1 , N.J. JAMESON2, M. KNAP3, AND L.H. KEEL4
1Department of Mathematics & Center of Excellence in ISEM
Tennessee State University, Nashville, TN 37209 USA
E-mail: [email protected]
2Department of Mechanical and Manufacturing Engineering &
Center of Excellence in ISEM
Tennessee State University, Nashville, TN 37209 USA
E-mail: [email protected]
3Department of Mathematics & Center of Excellence in ISEM
Tennessee State University, Nashville, TN 37209 USA
E-mail: [email protected]
4Department of Electrical and Computer Engineering & Center of Excellence
in ISEM, Tennessee State University, Nashville, TN 37209 USA
E-mail: [email protected]
ABSTRACT. Motivated by Markovian switching rational expectation (MSRE) models in economics, the problem of state feedback stabilization of discrete-time linear Markovian switching stochastic systems with multiplicative noise is considered. Under appropriate assumptions, stability of such a system under pure impulsive control is established, and the state feedback stabilization problem under impulsive control is then investigated. The Markovian switching is modeled by a discrete-time Markov chain, and the control input is applied simultaneously to both the rate vector and the diffusion term. Sufficient conditions for stochastic stability, based on linear matrix inequalities (LMIs), are obtained. The robustness of the LMI-based stability and stabilization results against all admissible uncertainties is also investigated; the parameter uncertainties considered here are norm bounded. Examples are given to demonstrate the obtained results.
AMS (MOS) Subject Classification. 99Z00.
Received November 29, 2011. 1083-2564 $15.00 © Dynamic Publishers, Inc.

1. INTRODUCTION

Naturally, there are many evolution processes that experience abrupt changes of state at certain moments of time. In most such systems, the duration of these short-term perturbations is negligible in comparison with the duration of the entire process. Consequently, for modeling purposes it is sufficient to assume that these perturbations act instantaneously, that is, in the form of impulses. It is known, for example, that many biological phenomena involving thresholds, bursting rhythm models in
medicine and biology, optimal control models in economics, pharmacokinetics and
frequency modulated systems, do exhibit impulsive behavior [1, 2, 3]. Control of dy-
namical systems with impulsive effects, studied by the control community since the
introduction of modern control theory, has been gaining more attention in the last
few years. Control concepts based on impulsive and switched systems have proven to be an effective methodology in the sense that they allow stabilization of a complex system by using only small control impulses in different modes, even though the nominal system behavior may follow unpredictable patterns [4]. Moreover, this methodology presents
an efficient design approach to dealing with various dynamic systems, such as hybrid
systems, chaotic systems, communication networks, switching systems and networked
controlled systems [5, 6, 7, 8, 9, 10].
On the other hand, switched systems are an important class of hybrid dynamical systems consisting of a family of subsystems together with a logical rule, such as a Markov chain, that controls the switching among them. Markovian jump linear systems (MJLS) are dynamical systems subject to abrupt variations during operation. Since MJLS naturally represent
dynamical systems that are often inherently vulnerable to component failures, sudden
disturbances, change of internal interconnections, and abrupt variations in operating
conditions, they are an important class of stochastic dynamical systems ([11, 12, 13,
14, 15] and the references therein).
Discrete-time Markovian switching models are playing a significant role in eco-
nomic problems. For example, reduced form Markovian switching models have been
widely used to study economic fluctuations and monetary policy transmission mech-
anisms. Forward looking rational expectation models, which are generally called
the New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models, have
been developed and have been in use for more than fifteen years to study economic fluctuations. For recent research combining Markovian switching with forward-looking rational expectation models (MSRE), see [16, 17, 18, 19, 20] and the references therein. In most of these works, (i) impulsive control analysis was not investigated, (ii) robustness of the sufficient conditions for stability and stabilizability was not considered, (iii) the control input is applied only to the rate vector, not simultaneously to both the rate vector and the diffusion term, and (iv) the
sufficient conditions resulted in a set of coupled algebraic Riccati equations, which
are in general very difficult to solve.
In this paper, motivated by rational expectation models in economics, sufficient
conditions for stability and stabilization of a class of discrete-time stochastic systems
with multiplicative noise under Markovian switching are obtained. We use pure im-
pulsive control to achieve the stabilization of this system. A technique to design a
HYBRID IMPULSIVE STATE FEEDBACK CONTROL 667
state feedback stabilizing controller that achieves stochastic stability under impulsive
control for such discrete-time stochastic systems is provided. The results are extended
to deal with the problem of robust stability and stabilizability of uncertain systems
under impulsive control laws. Further, the design of a robust state feedback stabilizing controller under impulsive control is provided. Only a few results on stochastic impulsive control have been reported in the literature [21, 22, 23], and these analyses do not cover our specific problem of interest. Our sufficient conditions are written in
matrix forms which are determined by solving linear matrix inequalities (LMIs) [12].
Examples are given for illustration.
2. MOTIVATION
In reduced form, a law of motion of an economy [16] with a control action can
be written as
x_t = G_{xx}(s_t)x_{t−1} + G_{xu}(s_t)u_{t−1} + H_x(s_t)ε_t   (2.1)
The problem is to find a control law
u_t = −F(s_t)x_t   (2.2)
that stabilizes the system (2.1), where x_t is a vector of variables of the economy that depends on lags and leads, and G_{xx}, G_{xu}, F, and H_x are matrices of appropriate dimensions which depend on the regime s_t = 1, 2, . . . , N. Here ε_t is a vector of stochastic shocks satisfying E[ε_t | I_t] = 0, with I_t the information set at time t; the shocks ε_t are uncorrelated with I_t.
The regime s_t, which is observable at time t, is assumed to be a Markov chain with transition probability matrix (p_{ij}), i, j = 1, . . . , N, in which
p_{ij} = P[s_{t+1} = j | s_t = i],   ∑_{j=1}^N p_{ij} = 1,   i = 1, . . . , N,
is the probability of moving from state i at time t to state j at time t + 1. The main focus is on developing simple methods for working
out the best interest rate response to shocks in an evolving economy in a Markovian
switching framework. It is also assumed that in this economy the private sector forms
so-called rational expectations. That is, in forming their views about the future they
understand what the transmission mechanism is in the different regimes and they
also understand how policy makers set the interest rates in response to shocks. Such
forward looking Markovian switching rational expectation models have been widely
used to study economic problems in which there are occasional structural shifts in
fundamentals (see [16, 17, 18, 19, 20] and the references therein). The formulation
(2.1) is general enough to capture different types of random changes in an economic
system, and therefore it incorporates different sources of model uncertainty.
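The law of motion (2.1) under the feedback rule (2.2) is easy to simulate. The sketch below does so for a hypothetical two-regime economy; all matrices, the transition matrix P, the horizon, and the seed are illustrative choices, not values taken from any cited model:

```python
import numpy as np

def simulate_msre(Gxx, Gxu, Hx, F, P, x0, T, rng):
    """Simulate x_t = G_xx(s_t) x_{t-1} + G_xu(s_t) u_{t-1} + H_x(s_t) eps_t
    under the feedback u_t = -F(s_t) x_t, with the regime s_t a Markov chain."""
    s = 0                                   # initial regime (0-based index)
    x = np.asarray(x0, dtype=float)
    u = -F[s] @ x
    traj = [x]
    for _ in range(T):
        s = rng.choice(len(P), p=P[s])      # draw the next regime from row s of P
        eps = rng.standard_normal(Hx[s].shape[1])
        x = Gxx[s] @ x + Gxu[s] @ u + Hx[s] @ eps
        u = -F[s] @ x                       # policy reacts to the current state
        traj.append(x)
    return np.array(traj)

# Illustrative two-regime scalar economy.
Gxx = [np.array([[0.5]]), np.array([[0.9]])]
Gxu = [np.array([[1.0]]), np.array([[1.0]])]
Hx  = [np.array([[0.1]]), np.array([[0.1]])]
F   = [np.array([[0.2]]), np.array([[0.4]])]
P   = np.array([[0.9, 0.1], [0.2, 0.8]])
traj = simulate_msre(Gxx, Gxu, Hx, F, P, [1.0], 100, np.random.default_rng(0))
```

With contractive regime dynamics as above, sample paths remain bounded around zero; the same routine exhibits divergence when a regime matrix is expansive and the feedback is too weak.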
3. PROBLEM FORMULATION
All systems which undergo regime shifting, such as (2.1), can be modeled by a general discrete-time stochastic iterative system under Markovian switching with a control input:
x(k + 1) = A_1(η(k))x(k) + u(k) + ( A_2(η(k))x(k) + u(k) )ξ(k + 1).   (3.1)
Here x(k) ∈ ℜ^n is the state of the stochastic system at step k ∈ I(k_0) = {k_0, k_0 + 1, . . .}, and u(k) ∈ ℜ^m is the control input. Let ξ(k + 1) be a sequence of i.i.d. normal random variables defined on the complete probability space (Ω, F, P), independent of the x(k)'s. Let F_k be an increasing family of σ-algebras, F_k ⊆ F, k ∈ I(k_0), such that ξ(k) is F_k-measurable for k ∈ I(k_0). A_1(·) and A_2(·) are matrices of appropriate dimensions. For k ∈ I(k_0), η(k) is a Markov chain with a finite number of states, that is, η(k) ∈ I[1, s] = {1, 2, . . . , s}. It is also assumed that η(k) is F_k-measurable, and moreover, ξ(k + 1) and η(k) are mutually independent for every k ∈ I(k_0).
The Markov chain η(k) has transition probabilities π_{ij} defined by
π_{ij} = P[η(k + 1) = j | η(k) = i] ≥ 0,   ∑_{j=1}^s π_{ij} = 1.
We construct a hybrid controller u = u_1 + u_3 for system (3.1), acting on the rate vector, with inputs u_1 and u_3 defined as:
u_1(k) = ∑_{k=k_0}^∞ B_1(η(k))u_c(k)l_i(k),
u_3(k) = ∑_{k=k_0}^∞ (C_k − I)x(k)δ(k − N_k).   (3.2)
Also, a hybrid controller u = u_2 + u_3 for system (3.1), acting on the diffusion term, with inputs u_2 and u_3 defined as:
u_2(k) = ∑_{k=k_0}^∞ B_2(η(k))u_c(k)l_i(k),
u_3(k) = ∑_{k=k_0}^∞ (C_k − I)x(k)δ(k − N_k),   (3.3)
where B_1(η(k)), B_2(η(k)) are known real matrices and u_c(k) ∈ ℜ^m is the continuous control input. l_i(k) = 1 for N_k^+ ≤ k ≤ N_{k+1} and l_i(k) = 0 otherwise, with discontinuity points satisfying
(i) k_0 = N_0 < N_1 < · · · < N_k < N_{k+1} < · · · , lim_{k→∞} N_k = ∞;
(ii) N_{k+1} − N_k ≥ 2.
C_k are matrices to be determined at each k, and δ(·) is the Dirac impulse. This implies that the controller u_3(k) has the effect of suddenly changing the states of (3.1) at the points N_k; that is, u_3(k) is an impulsive control, while u_1(k) and u_2(k) are switching controls. Without loss of generality, it is assumed that
x(N_k) = x(N_k^−) = lim_{h→0^+} x(N_k − h).
Under the controls (3.2) and (3.3), the system (3.1) becomes
x(k + 1) = A_1(η(k))x(k) + B_1(η(k))u_c(k) + ( A_2(η(k))x(k) + B_2(η(k))u_c(k) )ξ(k + 1),   N_k^+ ≤ k ≤ N_{k+1},   (3.4)
x(N_k^+) = C_k x(N_k),   k = 1, 2, . . .
The objective of this paper is to establish sufficient conditions for robust stabilization of a linear stochastic uncertain discrete-time Markovian switching system (3.1)
under impulsive control. To proceed, we first introduce the following definition of
stability criteria.
Definition 3.1. The Markovian switching linear stochastic system (3.1) with u(k) ≡ 0 is said to be stochastically stable if there exists a constant T(η(k_0), x(k_0)) such that
E[ ∑_{k=k_0}^∞ x^T(k)x(k) | (η(k_0), x(k_0)) ] ≤ T(η(k_0), x(k_0)).   (3.5)
Remark 3.2. This definition is in line with those of stochastic stability and stochastic
stabilizability of discrete-time Markovian switching linear systems [12, 15].
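The criterion (3.5) can be probed empirically by truncating the sum and averaging over sample paths. The sketch below does this for (3.1) with u(k) ≡ 0 in a single-mode illustrative case; the matrices, horizon, path count, and seed are arbitrary choices, not data from the paper:

```python
import numpy as np

def stability_sum(A1, A2, P, x0, T, n_paths, seed=0):
    """Monte Carlo estimate of E[sum_k x(k)^T x(k)] in (3.5), truncated at
    T steps, for x(k+1) = A1(eta)x(k) + A2(eta)x(k) xi(k+1) with u = 0."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x, eta = np.asarray(x0, dtype=float), 0
        for _ in range(T):
            total += x @ x
            x = A1[eta] @ x + (A2[eta] @ x) * rng.standard_normal()
            eta = rng.choice(len(P), p=P[eta])   # next Markov mode
    return total / n_paths

P = np.array([[1.0]])                            # single mode for illustration
stable   = stability_sum([0.3 * np.eye(2)], [0.3 * np.eye(2)], P, [1.0, 1.0], 30, 50)
unstable = stability_sum([1.5 * np.eye(2)], [0.3 * np.eye(2)], P, [1.0, 1.0], 30, 50)
```

For the contractive pair the truncated sum settles near a small finite value, while for the expansive pair it grows rapidly with the horizon, which is the behavior Definition 3.1 rules out.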
4. STABILITY AND STABILIZATION CRITERIA
In this section, we establish stability criteria of (3.1) under pure impulsive control
with u_c(k) ≡ 0. In this case, the system (3.1) becomes
x(k + 1) = A_1(η(k))x(k) + A_2(η(k))x(k)ξ(k + 1),   N_k^+ ≤ k ≤ N_{k+1},   (4.1)
x(N_k^+) = C_k x(N_k),   k = 1, 2, . . .
The following theorem establishes sufficient conditions for the stochastic stability of
the Markovian switching linear stochastic system (4.1) under pure impulsive control.
Theorem 4.1. If there exist symmetric, positive-definite matrices
Q = diag(Q(1), Q(2), . . . , Q(s)) > 0,   G(i) = ∑_{j=1}^s π_{ij}Q(j),
satisfying the algebraic Riccati inequalities (ARIs)
A_1^T(i)G(i)A_1(i) + A_2^T(i)G(i)A_2(i) − Q(i) ≡ Ω(i) < 0,   (4.2)
and
C_k^T Q(i)C_k − Q(i) ≤ 0,   (4.3)
or satisfying the LMIs
[ −Q(i)    J_1^T(i)   J_2^T(i)
  J_1(i)   −Q         0
  J_2(i)   0          −Q       ] < 0,   (4.4)
and
[ −Q(i)  C_k^T Q(i); Q(i)C_k  −Q(i) ] < 0,   (4.5)
for i = 1, 2, . . . , s, where
J_1^T(i) = [ √π_{i1} A_1^T(i)Q(1), . . . , √π_{is} A_1^T(i)Q(s) ],   (4.6)
J_2^T(i) = [ √π_{i1} A_2^T(i)Q(1), . . . , √π_{is} A_2^T(i)Q(s) ],   (4.7)
then the system (3.1) with u_c(k) ≡ 0 is stochastically stable.
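For given data, conditions (4.2)-(4.3) can be checked numerically through eigenvalues. A minimal sketch (the two single-mode test systems below are illustrative, not from the paper):

```python
import numpy as np

def thm41_conditions_hold(A1, A2, Q, pi, Ck, tol=1e-9):
    """Check the sufficient conditions of Theorem 4.1 for given data:
    Omega(i) = A1(i)^T G(i) A1(i) + A2(i)^T G(i) A2(i) - Q(i) < 0   (4.2)
    and Ck^T Q(i) Ck - Q(i) <= 0                                    (4.3),
    with G(i) = sum_j pi[i, j] Q(j)."""
    for i in range(len(Q)):
        G = sum(pi[i, j] * Q[j] for j in range(len(Q)))
        Omega = A1[i].T @ G @ A1[i] + A2[i].T @ G @ A2[i] - Q[i]
        if np.max(np.linalg.eigvalsh(Omega)) >= -tol:              # need Omega(i) < 0
            return False
        if np.max(np.linalg.eigvalsh(Ck.T @ Q[i] @ Ck - Q[i])) > tol:  # need <= 0
            return False
    return True

I2, pi1 = np.eye(2), np.array([[1.0]])     # single-mode illustration
ok  = thm41_conditions_hold([0.5 * I2], [0.5 * I2], [I2], pi1, 0.5 * I2)
bad = thm41_conditions_hold([1.2 * I2], [0.5 * I2], [I2], pi1, 0.5 * I2)
```

In the first case Ω = −0.5I < 0, so the check passes; in the second Ω = 0.69I, so it fails.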
Proof. Without loss of generality, we assume that the ξ(k)'s are standard N(0, 1) random variables (see [15]). Consider the Lyapunov function candidate
V(k, x, η(k)) = x^T(k)Q(η(k))x(k),
where Q(i), i = 1, 2, . . . , s, are positive-definite matrices. Then the difference operator can be written as
V_i(k, x) = E[V(k + 1, x, η(k + 1))] − V(k, x, i),
and is given by
V_i(k, x) = x^T(k)[ ∑_{j=1}^s π_{ij}( A_1^T(i)Q(j)A_1(i) + A_2^T(i)Q(j)A_2(i) ) − Q(i) ]x(k).   (4.8)
Let G(i) = ∑_{j=1}^s π_{ij}Q(j); then the above equation can be written as
V_i(k, x) = x^T(k)( A_1^T(i)G(i)A_1(i) + A_2^T(i)G(i)A_2(i) − Q(i) )x(k).   (4.9)
Letting α = inf_i λ_min(−Ω(i)), with Ω(i) as in Theorem 4.1, we get
E[V(x(k + 1), η(k + 1))] − E[V(x(k), η(k))] ≤ −αE[x^T(k)x(k)].   (4.10)
If k = N_l, then we obtain
V(N_l^+, x(N_l^+), η(N_l)) − V(N_l, x(N_l), η(N_l))
= x^T(N_l^+)Q(η(N_l))x(N_l^+) − x^T(N_l)Q(η(N_l))x(N_l)
= x^T(N_l)C_{N_l}^T Q(η(N_l))C_{N_l} x(N_l) − x^T(N_l)Q(η(N_l))x(N_l)
= x^T(N_l)[ C_{N_l}^T Q(η(N_l))C_{N_l} − Q(η(N_l)) ]x(N_l) ≤ 0.   (4.11)
Summing these differences from k = k_0 to T, for T ≥ 1, we get
∑_{k=k_0}^T ( E[V(x(k + 1), η(k + 1))] − E[V(x(k), η(k))] )
= E[V(x(k_0 + 1), η(k_0 + 1))] − E[V(x(k_0), η(k_0))]
+ E[V(x(k_0 + 2), η(k_0 + 2))] − E[V(x(k_0 + 1), η(k_0 + 1))]
+ · · · + E[V(x(N_1), η(N_1))] − E[V(x(N_1 − 1), η(N_1 − 1))]
+ E[V(x(N_1 + 1), η(N_1 + 1))] − E[V(x(N_1^+), η(N_1))]
+ · · · + E[V(x(N_l), η(N_l))] − E[V(x(N_l − 1), η(N_l − 1))]
+ E[V(x(N_l + 1), η(N_l + 1))] − E[V(x(N_l^+), η(N_l))]
+ · · · + E[V(x(T + 1), η(T + 1))] − E[V(x(T), η(T))]
≥ E[V(x(T + 1), η(T + 1))] − E[V(x(k_0), η(k_0))],   (4.12)
where the inequality follows from (4.11), since V(N_l^+, x(N_l^+), η(N_l)) ≤ V(N_l, x(N_l), η(N_l)) at each impulse point.
Thus,
E[V(x(T + 1), η(T + 1))] − E[V(x(k_0), η(k_0))]
≤ ∑_{k=k_0}^T ( E[V(x(k + 1), η(k + 1))] − E[V(x(k), η(k))] )
≤ −αE[ ∑_{k=k_0}^T x^T(k)x(k) ].   (4.13)
This in turn leads to the inequality
E[ ∑_{k=k_0}^T x^T(k)x(k) ] ≤ (1/α)( E[V(x(k_0), η(k_0))] − E[V(x(T + 1), η(T + 1))] ).   (4.14)
Letting T → ∞, this inequality leads to
E[ ∑_{k=k_0}^∞ x^T(k)x(k) ] ≤ (1/α)E[V(x(k_0), η(k_0))] < ∞,   (4.15)
which establishes the stochastic stability of (3.1) with u_c(k) ≡ 0.
We now consider the problem of synthesizing a state feedback controller
u(k) = K(η(k))x(k) (4.16)
that stochastically stabilizes the Markovian switching linear stochastic system (3.1).
The following theorem gives a stabilizability condition.
Theorem 4.2. If there exist symmetric, positive-definite matrices
X = diag(X_1, X_2, . . . , X_s) > 0,
and matrices
Y = (Y_1, Y_2, . . . , Y_s)
satisfying the LMIs
[ −X_i     J_1^T(i)   J_2^T(i)
  J_1(i)   −X         0
  J_2(i)   0          −X       ] < 0,   (4.17)
and
[ −X_i  X_i C_k^T; C_k X_i  −X_i ] < 0,   (4.18)
for i = 1, 2, . . . , s, where
J_1^T(i) = [ √π_{i1}(A_1(i)X_i + B_1(i)Y_i)^T, . . . , √π_{is}(A_1(i)X_i + B_1(i)Y_i)^T ],   (4.19)
J_2^T(i) = [ √π_{i1}(A_2(i)X_i + B_2(i)Y_i)^T, . . . , √π_{is}(A_2(i)X_i + B_2(i)Y_i)^T ],   (4.20)
then the controller
u(k) = K(η(k))x(k)   (4.21)
with
K(i) = Y_i X_i^{−1},   i = 1, 2, . . . , s,
stochastically stabilizes the Markovian switching linear stochastic system (3.1).
Proof. Substituting (4.21) into (3.1) yields the dynamics of the closed-loop system described by
x(k + 1) = [A_1(η(k)) + B_1(η(k))K(η(k))]x(k) + [A_2(η(k)) + B_2(η(k))K(η(k))]x(k)ξ(k + 1)
= Ā_1(η(k))x(k) + Ā_2(η(k))x(k)ξ(k + 1).   (4.22)
Then, from Theorem 4.1, it suffices to show that there exist symmetric, positive-definite matrices
Q = diag(Q(1), . . . , Q(s)) > 0,   G(i) = ∑_{j=1}^s π_{ij}Q(j),
such that
Ā_1^T(i)G(i)Ā_1(i) + Ā_2^T(i)G(i)Ā_2(i) − Q(i) < 0,   (4.23)
where
Ā_1(η(k)) = A_1(η(k)) + B_1(η(k))K(η(k)),
Ā_2(η(k)) = A_2(η(k)) + B_2(η(k))K(η(k)),   (4.24)
and
C_k^T Q(i)C_k − Q(i) ≤ 0.   (4.25)
Let X_i = Q^{−1}(i). Pre- and post-multiplying (4.23) by X_i yields
X_i Ā_1^T(i)G(i)Ā_1(i)X_i + X_i Ā_2^T(i)G(i)Ā_2(i)X_i − X_i < 0.   (4.26)
Letting Y_i = K(i)X_i and using the Schur complement, the inequality (4.26) is equivalent to the LMI (4.17), and (4.25) is equivalent to (4.18). This completes the proof.
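The last step above rests on the Schur complement lemma: for Q, R > 0, the block matrix [−Q, S^T; S, −R] is negative definite exactly when −Q + S^T R^{−1} S < 0. A minimal numeric sketch of that equivalence (the scalar values are illustrative):

```python
import numpy as np

def neg_def(M, tol=1e-9):
    """Negative definiteness of a symmetric matrix via its largest eigenvalue."""
    return bool(np.max(np.linalg.eigvalsh(M)) < -tol)

def schur_verdicts(Q, R, S):
    """Return (block-LMI verdict, Schur-complement verdict); for Q, R > 0
    the two agree, which is what turns (4.26) into the LMI (4.17)."""
    block = np.block([[-Q, S.T], [S, -R]])
    reduced = -Q + S.T @ np.linalg.inv(R) @ S
    return neg_def(block), neg_def(reduced)

small = schur_verdicts(np.eye(1), np.eye(1), np.array([[0.5]]))  # feasible case
large = schur_verdicts(np.eye(1), np.eye(1), np.array([[1.5]]))  # infeasible case
```

Both verdicts are True in the first case (eigenvalues −0.5 and −1.5; Schur complement −0.75) and both are False in the second.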
5. ROBUST STABILITY AND STABILIZATION CRITERIA
In the previous section, we investigated the stability and stabilizability of the
discrete-time system or iterative processes with Markovian switching given by (3.1).
The conditions given are under the assumption that no uncertainties are present in the system or system parameters. In this section, we consider the case when the
plant parameters are subject to perturbations. Under this consideration, we study
the conditions for robust stability and stabilization of the discrete-time system with
Markovian switching.
Consider the system (3.1) with uncertainties:
x(k + 1) = Ā_1(η(k))x(k) + B̄_1(η(k))u(k) + [ Ā_2(η(k))x(k) + B̄_2(η(k))u(k) ]ξ(k + 1),   k = k_0, k_0 + 1, . . . ,   (5.1)
where we define the uncertain matrices
Ā_1(η(k)) = A_1(η(k)) + ΔA_1(η(k)),
B̄_1(η(k)) = B_1(η(k)) + ΔB_1(η(k)),   (5.2)
Ā_2(η(k)) = A_2(η(k)) + ΔA_2(η(k)),
B̄_2(η(k)) = B_2(η(k)) + ΔB_2(η(k)),
with
ΔA_1(η(k)) = D(η(k))Δ(η(k))E_{a1}(η(k)),
ΔB_1(η(k)) = D(η(k))Δ(η(k))E_{b1}(η(k)),   (5.3)
ΔA_2(η(k)) = D(η(k))Δ(η(k))E_{a2}(η(k)),
ΔB_2(η(k)) = D(η(k))Δ(η(k))E_{b2}(η(k)).
Note that A_1(η(k)), B_1(η(k)), A_2(η(k)), B_2(η(k)), D(η(k)), E_{a1}(η(k)), E_{b1}(η(k)), E_{a2}(η(k)), E_{b2}(η(k)) are known matrices of appropriate dimensions. We say that the uncertainty Δ(η(k)) is admissible if all its elements are Lebesgue measurable and if it satisfies the following condition:
Δ^T(η(k))Δ(η(k)) ≤ I.   (5.4)
Before we state the condition for robust stability, we consider the following lemma
which will be used to prove the result.
Lemma 5.1 ([12]). Let A, D, Δ, E be real matrices of appropriate dimensions with ‖Δ‖ ≤ 1. Then, we have
(i) for any matrix P > 0 and scalar ε > 0 satisfying εI − EPE^T > 0,
(A + DΔE)P(A + DΔE)^T ≤ APA^T + APE^T(εI − EPE^T)^{−1}EPA^T + εDD^T;
(ii) for any matrix P > 0 and scalar ε > 0 satisfying P − εDD^T > 0,
(A + DΔE)^T P^{−1}(A + DΔE) ≤ A^T(P − εDD^T)^{−1}A + (1/ε)E^T E.
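Lemma 5.1(i) can be sanity-checked numerically: for any admissible Δ (‖Δ‖ ≤ 1) and any ε with εI − EPE^T > 0, the difference between the right- and left-hand sides is positive semidefinite. A sketch on random data (the dimensions and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((3, 3))
D = rng.standard_normal((3, 2))
E = rng.standard_normal((2, 3))
M = rng.standard_normal((3, 3))
P = M @ M.T + np.eye(3)                           # P > 0

Delta = rng.standard_normal((2, 2))
Delta /= max(1.0, np.linalg.norm(Delta, 2))       # enforce ||Delta|| <= 1

eps = 2.0 * np.linalg.norm(E @ P @ E.T, 2) + 1.0  # ensures eps*I - E P E^T > 0

lhs = (A + D @ Delta @ E) @ P @ (A + D @ Delta @ E).T
mid = np.linalg.inv(eps * np.eye(2) - E @ P @ E.T)
rhs = A @ P @ A.T + A @ P @ E.T @ mid @ E @ P @ A.T + eps * D @ D.T

gap_min_eig = np.min(np.linalg.eigvalsh(rhs - lhs))  # expected >= 0 by Lemma 5.1(i)
```

Repeating the experiment with other seeds and dimensions gives the same verdict, since the bound holds for every admissible Δ.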
We now state the LMI-based sufficient condition for the linear stochastic system (5.1) to be robustly stochastically stable under the pure impulsive control with u_c(k) ≡ 0.
Theorem 5.2. If, for two given sets of scalars ε_{1i} > 0, i = 1, 2, . . . , s, and ε_{2i} > 0, i = 1, 2, . . . , s, there exists a set of symmetric, positive-definite matrices
Q = diag(Q(1), Q(2), . . . , Q(s)) > 0,   ε_{1i}I − D^T(i)G(i)D(i) > 0,   ε_{2i}I − D^T(i)G(i)D(i) > 0,
satisfying the LMIs
[ J_0(i)              A_1^T(i)G(i)D(i)   A_2^T(i)G(i)D(i)
  D^T(i)G(i)A_1(i)    J_1(i)             0
  D^T(i)G(i)A_2(i)    0                  J_2(i)           ] < 0,   (5.5)
and
[ −Q(i)  C_k^T Q(i); Q(i)C_k  −Q(i) ] < 0,   (5.6)
for every i ∈ S, where
J_0(i) = A_1^T(i)G(i)A_1(i) + A_2^T(i)G(i)A_2(i) + ε_{1i}E_{a1}^T(i)E_{a1}(i) + ε_{2i}E_{a2}^T(i)E_{a2}(i) − Q(i),
J_1(i) = −ε_{1i}I + D^T(i)G(i)D(i),   (5.7)
J_2(i) = −ε_{2i}I + D^T(i)G(i)D(i),
and G(i) = ∑_{j=1}^s π_{ij}Q(j), then the Markovian switching linear stochastic system (5.1) is robustly stochastically stable when u_c(k) ≡ 0.
Proof. Using the sufficient condition of Theorem 4.1, for the robust stochastic stability of the linear stochastic system (5.1) it suffices to show that there exist symmetric, positive-definite matrices
Q = diag(Q(1), . . . , Q(s)) > 0,   G(i) = ∑_{j=1}^s π_{ij}Q(j),   i = 1, 2, . . . , s,
satisfying
Ā_1^T(i)G(i)Ā_1(i) + Ā_2^T(i)G(i)Ā_2(i) − Q(i) ≡ Ω̄(i) < 0.   (5.8)
Using Lemma 5.1, given ε_{1i} > 0 with ε_{1i}I − D^T(i)G(i)D(i) > 0, we have
Ā_1^T(i)G(i)Ā_1(i) ≤ A_1^T(i)G(i)A_1(i) − A_1^T(i)G(i)D(i)J_1^{−1}(i)D^T(i)G(i)A_1(i) + ε_{1i}E_{a1}^T(i)E_{a1}(i).   (5.9)
Similarly, given ε_{2i} > 0 with ε_{2i}I − D^T(i)G(i)D(i) > 0, we have
Ā_2^T(i)G(i)Ā_2(i) ≤ A_2^T(i)G(i)A_2(i) − A_2^T(i)G(i)D(i)J_2^{−1}(i)D^T(i)G(i)A_2(i) + ε_{2i}E_{a2}^T(i)E_{a2}(i).   (5.10)
Thus, the inequality (5.8) becomes
J_0(i) − A_1^T(i)G(i)D(i)J_1^{−1}(i)D^T(i)G(i)A_1(i) − A_2^T(i)G(i)D(i)J_2^{−1}(i)D^T(i)G(i)A_2(i) < 0,   (5.11)
which in turn yields the LMI (5.5).
The following theorem provides an LMI-based sufficient condition for the lin-
ear uncertain stochastic system (5.1) to be robustly stochastically stable with the
feedback
u(k) = K(η(k))x(k). (5.12)
Theorem 5.3. If there exist a set of symmetric, positive-definite matrices X = diag(X_1, X_2, . . . , X_s) > 0, a set of matrices Y = (Y_1, Y_2, . . . , Y_s), and scalars ε_{1i} > 0, ε_{2i} > 0 satisfying the LMIs, for every i ∈ S,
[ −X_i     0          0          U_{1i}^T   U_{2i}^T   U_{4i}^T   U_{5i}^T
  0        −ε_{1i}I   0          U_{3i}^T   0          0          0
  0        0          −ε_{2i}I   0          U_{3i}^T   0          0
  U_{1i}   U_{3i}     0          −X         0          0          0
  U_{2i}   0          U_{3i}     0          −X         0          0
  U_{4i}   0          0          0          0          −ε_{1i}I   0
  U_{5i}   0          0          0          0          0          −ε_{2i}I ] < 0,   (5.13)
and
[ −X_i  X_i C_k^T; C_k X_i  −X_i ] < 0,   (5.14)
where
U_{1i}^T = [ √π_{i1}Ξ_1(i), . . . , √π_{is}Ξ_1(i) ],   Ξ_1(i) = (A_1(i)X_i + B_1(i)Y_i)^T,
U_{2i}^T = [ √π_{i1}Ξ_2(i), . . . , √π_{is}Ξ_2(i) ],   Ξ_2(i) = (A_2(i)X_i + B_2(i)Y_i)^T,
U_{3i}^T = [ √π_{i1}D^T(i), . . . , √π_{is}D^T(i) ],
U_{4i}^T = [ E_{a1}(i)X_i + E_{b1}(i)Y_i ]^T,
U_{5i}^T = [ E_{a2}(i)X_i + E_{b2}(i)Y_i ]^T,   (5.15)
and G(i) = ∑_{j=1}^s π_{ij}Q(j), then system (5.1) is robustly stochastically stable with the feedback u(k) = K(η(k))x(k), where
K(i) = Y_i X_i^{−1},   i = 1, 2, . . . , s.   (5.16)
Proof. Consider the dynamics of the closed-loop system described by
x(k + 1) = ( Ā_1(η(k)) + B̄_1(η(k))K(η(k)) )x(k) + ( Ā_2(η(k)) + B̄_2(η(k))K(η(k)) )x(k)ξ(k + 1).
Using the sufficient condition of Theorem 4.2, for the stochastic stabilizability of the linear uncertain stochastic system (5.1) it suffices to prove that there exist symmetric, positive-definite matrices Q = diag(Q(1), . . . , Q(s)) > 0, G(i) = ∑_{j=1}^s π_{ij}Q(j), satisfying
∑_{j=1}^s π_{ij}( Ã_1^T(i)Q(j)Ã_1(i) + Ã_2^T(i)Q(j)Ã_2(i) ) − Q(i) < 0,   (5.17)
where
Ã_1(i) = Ā_1(i) + ΔĀ_1(i),
Ã_2(i) = Ā_2(i) + ΔĀ_2(i),
Ā_1(i) = A_1(i) + B_1(i)K(i),   (5.18)
ΔĀ_1(i) = D(i)Δ(i)( E_{a1}(i) + E_{b1}(i)K(i) ),
Ā_2(i) = A_2(i) + B_2(i)K(i),
ΔĀ_2(i) = D(i)Δ(i)( E_{a2}(i) + E_{b2}(i)K(i) ).
Letting G(i) = ∑_{j=1}^s π_{ij}Q(j), i = 1, 2, . . . , s, the inequality (5.17) can be written as
Ã_1^T(i)G(i)Ã_1(i) + Ã_2^T(i)G(i)Ã_2(i) − Q(i) ≡ Ω̃(i) < 0.   (5.19)
Using Lemma 5.1, given ε_{1i} > 0 with ε_{1i}I − D^T(i)G(i)D(i) > 0, we have
Ã_1^T(i)G(i)Ã_1(i) ≤ Ā_1^T(i)G(i)Ā_1(i) + Ā_1^T(i)G(i)D(i)J_2^{−1}(i)D^T(i)G(i)Ā_1(i) + ε_{1i}( E_{a1}(i) + E_{b1}(i)K(i) )^T( E_{a1}(i) + E_{b1}(i)K(i) ),   (5.20)
where J_2(i) = ε_{1i}I − D^T(i)G(i)D(i). Similarly, given ε_{2i} > 0 with ε_{2i}I − D^T(i)G(i)D(i) > 0, we have
Ã_2^T(i)G(i)Ã_2(i) ≤ Ā_2^T(i)G(i)Ā_2(i) + Ā_2^T(i)G(i)D(i)J_3^{−1}(i)D^T(i)G(i)Ā_2(i) + ε_{2i}( E_{a2}(i) + E_{b2}(i)K(i) )^T( E_{a2}(i) + E_{b2}(i)K(i) ),   (5.21)
where J_3(i) = ε_{2i}I − D^T(i)G(i)D(i). By using Schur complements, we end up with the following LMI:
[ J_1(i)             Ā_1^T(i)G(i)D(i)   Ā_2^T(i)G(i)D(i)
  D^T(i)G(i)Ā_1(i)   −J_2(i)            0
  D^T(i)G(i)Ā_2(i)   0                  −J_3(i)          ] < 0,   (5.22)
for every i ∈ S, where
J_1(i) = −Q(i) + Ξ_3(i)G(i)Ξ_3^T(i) + Ξ_4(i)G(i)Ξ_4^T(i) + ε_{1i}( E_{a1}(i) + E_{b1}(i)K(i) )^T( E_{a1}(i) + E_{b1}(i)K(i) ) + ε_{2i}( E_{a2}(i) + E_{b2}(i)K(i) )^T( E_{a2}(i) + E_{b2}(i)K(i) ),   (5.23)
with
Ξ_3(i) = ( A_1(i) + B_1(i)K(i) )^T,
Ξ_4(i) = ( A_2(i) + B_2(i)K(i) )^T.
Let X_i = Q^{−1}(i) and K(i) = Y_i X_i^{−1}. Pre- and post-multiplying (5.22) by diag(X_i, I, I), we obtain
[ X_i J_1(i)X_i      Ξ_1(i)G(i)D(i)   Ξ_2(i)G(i)D(i)
  D^T(i)G(i)Ξ_1^T(i)   −J_2(i)          0
  D^T(i)G(i)Ξ_2^T(i)   0                −J_3(i)        ] < 0.   (5.24)
The expression X_i J_1(i)X_i can be written as
X_i J_1(i)X_i = −X_i + Ξ_1(i)G(i)Ξ_1^T(i) + Ξ_2(i)G(i)Ξ_2^T(i) + ε_{1i}χ_1(i)χ_1^T(i) + ε_{2i}χ_2(i)χ_2^T(i),
where
χ_1(i) = ( E_{a1}(i)X_i + E_{b1}(i)Y_i )^T,
χ_2(i) = ( E_{a2}(i)X_i + E_{b2}(i)Y_i )^T.
By Schur complements, the LMI in (5.24) can be written as (5.13).
Example 5.4. In this example, we demonstrate the proposed Markovian switching approach using a two-state Markov chain, under the assumption that no uncertainties are present in the system or system parameters. Consider the Markovian switching linear discrete stochastic system
x(k + 1) = A_1(η(k))x(k) + B_1(η(k))u(k) + [ A_2(η(k))x(k) + B_2(η(k))u(k) ]ξ(k + 1),   (5.25)
where η(k) ∈ S = {1, 2} is a Markov chain with two states, u(k) = K(η(k))x(k), and ξ(k + 1) is a sequence of independent N(0, 1) random variables, independent of the x(k)'s. The plant parameters are given as
A_1(1) = [−0.6951  2.2456; −1.1594  −0.1335],   A_1(2) = [−0.3653  1.1803; −0.6094  −0.0702],
A_2(1) = [−1.2594  −0.1242; −0.6596  0.7764],   A_2(2) = [−0.2817  −0.0278; −0.1475  0.1737].
The input matrices are
B_1(1) = [0.5366; 1.3406],   B_1(2) = [0.5324; 1.3300],
B_2(1) = [0.3861; 0.6449],   B_2(2) = [0.0034; 0.0057].
We take
C_k(1) = C_k(2) = · · · = [0.5892  0.5706; 0.2496  0.3688].
The transition probability matrix is given by
(π_{ij})_{2×2} = [0.4  0.6; 0.3  0.7].
The switching sequence for the system is shown in Figure 1.
Figure 1. Sample Switching Sequence
When there is no controller applied to the system, i.e. K(η(k)) = 0, which implies
u(k) = 0, the open-loop response is shown in Figure 2. The objective is to design a Markovian switching feedback controller for the unstable open-loop system such that the closed-loop system is stochastically stable. For this purpose, we need to find
Figure 2. Response of the Open-Loop System with No Uncertainty
symmetric, positive-definite matrices Q(1) > 0 and Q(2) > 0, and feedback gains K(1) and K(2) satisfying the algebraic Riccati inequality (ARI)
∑_{j=1}^2 ( A_j(i) + B_j(i)K(i) )^T G(i)( A_j(i) + B_j(i)K(i) ) − Q(i) ≡ Ω(i) < 0,   (5.26)
where G(i) = ∑_{j=1}^s π_{ij}Q(j) and Q(i) = X_i^{−1}.
To solve the LMIs and find the values of Q(1) and Q(2) and the controller gains K(1) and K(2), we used the LMI toolbox within Matlab. With the given system, we find that
Q(1) = [0.2468  −0.0697; −0.0697  1.4428],   Q(2) = [0.1572  0.0043; 0.0043  0.3830].
For the system without uncertainties (5.25), the corresponding controller gains are
K(1) = [0.9328  −0.2223],   K(2) = [0.4655  −0.0176].
The results demonstrate that stochastic stabilization under Markovian switching can be achieved using an appropriate transition probability matrix and Lyapunov functional. Figure 3 graphically demonstrates the control achieved with the calculated controller gains.
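The closed-loop behavior reported in this example can be reproduced with a short simulation of (5.25) under the state feedback u(k) = K(η(k))x(k); the plant matrices and gains below are copied from the example, while the initial state, seed, and horizon are arbitrary choices:

```python
import numpy as np

# Plant data and gains from Example 5.4 (modes indexed 0 and 1 here).
A1 = [np.array([[-0.6951, 2.2456], [-1.1594, -0.1335]]),
      np.array([[-0.3653, 1.1803], [-0.6094, -0.0702]])]
A2 = [np.array([[-1.2594, -0.1242], [-0.6596, 0.7764]]),
      np.array([[-0.2817, -0.0278], [-0.1475, 0.1737]])]
B1 = [np.array([[0.5366], [1.3406]]), np.array([[0.5324], [1.3300]])]
B2 = [np.array([[0.3861], [0.6449]]), np.array([[0.0034], [0.0057]])]
K  = [np.array([[0.9328, -0.2223]]), np.array([[0.4655, -0.0176]])]
P  = np.array([[0.4, 0.6], [0.3, 0.7]])          # transition probabilities

rng = np.random.default_rng(0)
x, mode = np.array([1.0, -1.0]), 0               # arbitrary initial condition
traj = [x]
for k in range(50):
    u = K[mode] @ x                              # state feedback u(k) = K(mode) x(k)
    xi = rng.standard_normal()
    x = A1[mode] @ x + B1[mode] @ u + (A2[mode] @ x + B2[mode] @ u) * xi
    traj.append(x)
    mode = rng.choice(2, p=P[mode])              # next Markov mode
traj = np.array(traj)
```

Since each run draws its own switching sequence and noise, the trajectory varies with the seed; the qualitative behavior is the contraction toward the origin seen in Figure 3.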
Example 5.5. In this example, we demonstrate the proposed Markovian switching approach using a two-state Markov chain applied to a system whose plant parameters are subject to perturbations.
Figure 3. Response of the Closed-Loop System with No Uncertainty
Consider the following Markovian switching linear discrete stochastic system, with uncertainties present in the plant parameters:
x(k + 1) = Ā_1(η(k))x(k) + B̄_1(η(k))u(k) + [ Ā_2(η(k))x(k) + B̄_2(η(k))u(k) ]ξ(k + 1),   (5.27)
where η(k) ∈ S = {1, 2} is a Markov chain with two states, u(k) = K(η(k))x(k), and ξ(k + 1) is a sequence of independent N(0, 1) random variables, independent of the x(k)'s.
Keeping the same plant parameters and transition probability matrix as in the previous example and adding perturbations according to equations (5.2) and (5.3),
D(1) = [−0.1414; −0.0384],   Δ(1) = 0.4608,   E_{a1}(1) = [0.1259  0.0858],   E_{b1}(1) = 0.0494,
Ā_1(1) = [−0.6964  2.2447; −1.1598  −0.1338],   B̄_1(1) = [0.5361; 1.3405].
The matrices D(1), Δ(1), E_{a1}(1), and E_{b1}(1) were randomly generated. Similarly, the matrices D(2), Δ(2), E_{a2}(1), E_{b2}(1), E_{a2}(2), E_{b2}(2) were randomly generated and used to produce the remaining plant parameters:
Ā_2(1) = [−1.2600  −0.1226; −0.6598  0.7769],   Ā_1(2) = [−0.3631  1.1806; −0.6080  −0.0699],
Ā_2(2) = [−0.2815  −0.0273; −0.1474  0.1739],   B̄_2(1) = [0.3860; 0.6449],
B̄_1(2) = [0.5333; 1.3306],   B̄_2(2) = [0.0040; 0.0060].
The switching sequence is shown in Figure 4.
Figure 4. Sample Switching Sequence
When there is no controller applied to the system, i.e. K(η(k)) = 0, which implies
u(k) = 0, the open-loop response is shown in Figure 5.
Figure 5. Response of the Open-Loop System with Uncertainty
The objective is to design a Markovian switching feedback controller for the unstable open-loop system such that the closed-loop system is robustly stochastically stable. For this purpose, we need to find symmetric, positive-definite matrices Q(1) > 0 and Q(2) > 0, and feedback gains K(1) and K(2) satisfying the algebraic Riccati inequality (ARI)
∑_{j=1}^2 ( Ā_j(i) + B̄_j(i)K(i) )^T G(i)( Ā_j(i) + B̄_j(i)K(i) ) − Q(i) ≡ Ω(i) < 0,   (5.28)
where G(i) = ∑_{j=1}^s π_{ij}Q(j) and Q(i) = X_i^{−1}.
We used the LMI toolbox within Matlab to solve the LMIs. With the given system, we find that
Q(1) = [0.2405  −0.0663; −0.0663  1.4092],   Q(2) = [0.1565  0.0029; 0.0029  0.3769].
For the system with perturbations (5.27), the corresponding controller gains are
K(1) = [0.9302  −0.2252],   K(2) = [0.4604  −0.0193].
The results demonstrate that robust stochastic stabilization under Markovian switching can be achieved using an appropriate transition probability matrix and Lyapunov functional. Figure 6 graphically demonstrates the control achieved with the calculated controller gains.
Figure 6. Response of the Closed-Loop System with Uncertainty
6. CONCLUDING REMARKS
Motivated by Markovian Switching Rational Expectation Models (MSRE) in eco-
nomics, we investigated a problem of robust stability analysis and stabilization with
impulsive control of a discrete-time stochastic system with multiplicative noise un-
der Markovian switching. The control input is simultaneously applied to both the
rate vector and the random diffusion term which explicitly distinguishes our method
compared to the existing works in such MSRE models. Our attention is focused on
the design of state feedback stabilization controllers with impulsive control, which
guarantee that the closed-loop discrete-time systems under Markovian switching is
stochastically stable. Sufficient conditions for robustness of such state feedback sta-
bilization with impulsive control under admissible perturbations are also established.
The results of this paper could be easily extended to Markovian switching time-delay
systems.
ACKNOWLEDGMENTS
This work was supported in part by Department of Defense Grant W911NF-08-
0514 and National Science Foundation Grant CMMI-0927664.
REFERENCES
[1] V. Lakshmikantham, D. D. Bainov, and P. S. Simeonov. Theory of Impulsive Differential Equations. World Scientific, 1989.
[2] D. D. Bainov and V. Covachev. Impulsive Differential Equations with a Small Parameter.
World Scientific, 1995.
[3] P. S. Simeonov and D. D. Bainov. Impulsive Differential Equations: Asymptotic Properties of
the Solutions. World Scientific, 1995.
[4] Z. H. Guan, D. J. Hill and X. Shen. On Hybrid Impulsive and Switching Systems and Application to Nonlinear Control. IEEE Transactions on Automatic Control, Vol. 50, No. 7, 1058–1062, July 2005.
[5] J. P. Aubin, J. Lygeros, M. Quincampoix, S. Sastry, and N. Seube. Impulsive Differential Inclusions: A Viability Approach to Hybrid Systems. IEEE Transactions on Automatic Control, Vol. 47, No. 1, 1–20, January 2001.
[6] D. Chen, J. Sun, and Q. Wu. Impulsive Control and its Application to Lu’s Chaotic Systems.
Chaos, Solitons, Fractals, Vol.21, :1135–1142, 2004.
[7] Z. H. Guan, G. Chen and T. Ueta. On Impulsive Control of a Periodically Forced Chaotic Pendulum System. IEEE Transactions on Automatic Control, Vol. 45, No. 9, 1724–1727, Sept. 2000.
[8] A. Khadra, X. Liu, and X. Shen. Application of Impulsive Synchronization to Communication
Security. IEEE Transactions on Circuits and Systems, Vol. 50, No. 3, 341–350, March 2003.
[9] B. Liu, X. Liu, G. Chen and H. Wang. Robust Impulsive Synchronization of Uncertain Dy-
namical Networks. IEEE Transactions on Circuits and Systems I, Vol. 52, No. 7, 1431–1441,
July 2005.
[10] W. Haddad, V. Chellaboina and S. G. Nersesov. Impulsive and Hybrid Dynamical Systems.
Princeton Series in Applied Mathematics, 2006.
[11] M. Mariton. Jump Linear Systems. Marcel Dekker, New York, 1990.
[12] E. K. Boukas and Z. K. Liu. Deterministic and Stochastic Time Delay Systems. Birkhauser,
Boston, 2002.
[13] S. Sathananthan, O. Adetona, C. Beane, and L. H. Keel. Feedback Stabilization of Markov
jump linear systems with time-varying delay. Stochastic Analysis and Applications, 26, 577–
594, 2008.
[14] S. Sathananthan, O. Adetona, C. Beane, and L. H. Keel. Delay-dependent stability criteria
for Markovian switching networks with time-varying delay. Proceedings of the IFAC World
Congress, 2008, Seoul, Korea, July 6–11, 2625–2630, 2008.
[15] S. Sathananthan, M. J. Knap, and L. H. Keel. Stochastic Robust Control of Discrete-time
Systems with State and Controller Dependent Noise under Markovian Switching. Proceedings
of the American Control Conference, 2010, pp. 904–909, 2010.
[16] A. Blake and F. Zampolli. Optimal Monetary Policy in Markov-Switching Models with Ratio-
nal Expectations Agents. Working Paper No.298, Bank of England, June 2006.
[17] R. E. A. Farmer, D. F. Waggoner and T. Zha. Understanding Markov Switching Rational
Expectation Models. Journal of Economic Theory, Vol. 144, No. 5, 1849–1867, September
2009.
[18] R. E. A. Farmer, D. F. Waggoner and T. Zha. Indeterminacy in a Forward Looking Regime
Switching Model. International Journal of Economic Theory, Vol. 144, No. 5, 1849–1867,
September 2009.
[19] L. E. Svensson and N. Williams. Monetary Policy with Model Uncertainty: Distribution Forecast Targeting. Manuscript, Princeton University, 2007.
[20] E. M. Leeper and T. Zha. Modest Policy Interventions. Journal of Monetary Economics, Vol.
50, No. 8, 1673–1700, 2003.
[21] S. J. Wu and J. Sun. P-th Moment Stability of Stochastic Differential Equations with Impulsive
Jumps and Markovian Switching. Automatica, Vol. 42, No. 10, 1753–1759, 2006.
[22] Y. Dong and J. T. Sun. On Hybrid Control of a Class of Stochastic Non-linear Markovian
Switching Systems. Automatica, Vol. 44, No. 2, 990–995, 2008.
[23] L. Hou and A. N. Michel. Moment Stability of Discontinuous Stochastic Dynamical Systems.
IEEE Transactions on Automatic Control, Vol. 46, No. 6, 938–943, June 2001.