Passivity and passification for a class of singularly perturbed nonlinear systems via neural networks
Shiping Wen, Zhigang Zeng, and Tingwen Huang
Abstract: This paper is concerned with the problem of passivity and passification for a class of singularly perturbed nonlinear systems (SPNS) via neural network. By constructing a proper functional and employing the linear matrix inequality (LMI) technique, some novel sufficient conditions are derived to make the SPNS passive. The allowable perturbation bound ε* can be determined via certain algebraic inequalities, and the proposed neural-network-based controller makes the SPNS passive for all ε ∈ (0, ε*). Finally, a numerical example is given to illustrate the theoretical results.
Keywords: Passivity and passification; SPNS; Neural network
I. INTRODUCTION
In many practical systems, the presence of small masses, moments of inertia, and resistances gives rise to two-time-scale (singularly perturbed) systems. Singularly perturbed systems, represented by slow and fast subsystems, have been studied by many researchers [1]-[3]. Using the singular perturbation method, the stability of a high-dimensional system can be analyzed on the basis of its lower-order slow and fast subsystems. Recently, several works have considered stability analysis and stabilization of singularly perturbed systems with the stability bound ε* [4]-[5].
There are many studies on linear singularly perturbed systems [6]-[8] and on SPNS [9]-[14]. On the other hand, the passivity and passification problems for practical systems have been attracting great attention. Passivity theory plays an important role in circuits, networks, systems and control, and it provides a useful way to deal with delay systems [15], fuzzy systems [16], networked control systems [17], hybrid systems [18], etc.
In the past decades, neural networks have been extensively studied [19]-[26] and successfully applied in many areas such as combinatorial optimization, signal processing, image processing and pattern recognition, and neural-network-based adaptive control can be found in many works [27]-[29]. However, results on neural-network-based passivity analysis and passification for SPNS are very few. Therefore, the purpose of this paper is to fill this gap.
Manuscript received December 15, 2011. The work is supported by the Natural Science Foundation of China under Grants 60974021 and 61125303, the 973 Program of China under Grant 2011CB710606, and the Fund for Distinguished Young Scholars of Hubei Province under Grant 2010CDA081.
Shiping Wen and Zhigang Zeng are with the Department of Control Science and Engineering, Huazhong University of Science and Technology, and the Key Laboratory of Image Processing and Intelligent Control of the Education Ministry of China, Wuhan, Hubei, 430074, China (Email: [email protected]).
Tingwen Huang is with Texas A&M University at Qatar, Doha 5825, Qatar (Email: [email protected]).
Motivated by the above discussion, in this paper we investigate the problems of passivity and passification of SPNS via neural networks. The main contributions of this paper can be summarized as follows: (i) the passivity analysis is first extended to SPNS; (ii) a novel Lyapunov functional combined with matrix analysis techniques is developed to obtain sufficient conditions under which the closed-loop system is globally passive in the sense of expectation. These sufficient conditions are given in the form of LMIs that can be solved numerically.
The rest of the paper is organized as follows. In Section II, the system studied in this paper is introduced and some preliminaries are given. In Section III, we address the neural network adaptive controller design scheme in detail; the passivity analysis of the closed-loop system and the passification bound of the singular perturbation parameter are also carried out in this section. In Section IV, an illustrative example is constructed to demonstrate the effectiveness and usefulness of the acquired results and finally, conclusions are drawn in Section V.
Notation. The notation used throughout the paper is fairly standard. ℕ is the set of natural numbers and ℕ+ stands for the set of nonnegative integers; ℝ^n and ℝ^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The notation P > 0 (≥ 0) means that P is positive definite (positive semi-definite). In symmetric block matrices or complex matrix expressions, we use an asterisk (∗) to represent a term that is induced by symmetry, and diag{· · ·} stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations. E{x} stands for the expectation of the stochastic variable x. L2[0, +∞) is the space of square integrable vector functions. The notation ||·|| stands for the usual L2 norm, while |·| refers to the Euclidean vector norm. Sometimes, when no confusion arises, the dimensions of a function or a matrix will be omitted for convenience.
II. PRELIMINARIES
In this paper, we aim to consider the following SPNS:

ẋ(t) = f1(x(t), z(t)) + g1(x(t), z(t))u(t) + E1 w(t),
εż(t) = f2(x(t), z(t)) + g2(x(t), z(t))u(t) + E2 w(t),
y1(t) = G11 x(t) + G12 w(t),
y2(t) = G21 z(t) + G22 w(t),    (1)
978-1-4673-1490-9/12/$31.00 Β©2012 IEEE
WCCI 2012 IEEE World Congress on Computational Intelligence June, 10-15, 2012 - Brisbane, Australia IJCNN
where x(t) = [x1(t) x2(t) · · · xn(t)]^T ∈ ℝ^{n×1} and z(t) = [z1(t) z2(t) · · · zm(t)]^T ∈ ℝ^{m×1} denote the state vectors, y1(t) ∈ ℝ^{n×1} is the output associated with x(t), y2(t) ∈ ℝ^{m×1} is the output associated with z(t), f1(x(t), z(t)) ∈ ℝ^{n×1}, f2(x(t), z(t)) ∈ ℝ^{m×1}, g1(x(t), z(t)) ∈ ℝ^{n×q} and g2(x(t), z(t)) ∈ ℝ^{m×q} are nonlinear functions, u(t) ∈ ℝ^{q×1} is the control input, w(t) ∈ ℝ^{p×1} is the disturbance, which is assumed to be an arbitrary signal in L2[0, ∞), G11 ∈ ℝ^{n×n}, G21 ∈ ℝ^{m×m}, E1, G12 ∈ ℝ^{n×p} and E2, G22 ∈ ℝ^{m×p} are constant matrices, and ε is the singular perturbation parameter, while

f1(x(t), z(t)) = A11 x(t) + A12 z(t) + Δf11(t)x(t) + Δf12(t)z(t),
f2(x(t), z(t)) = A21 x(t) + A22 z(t) + Δf21(t)x(t) + Δf22(t)z(t),
g1(x(t), z(t)) = B1 ρ(t), g2(x(t), z(t)) = B2 ρ(t),

with A11 ∈ ℝ^{n×n}, A12 ∈ ℝ^{n×m}, A21 ∈ ℝ^{m×n}, A22 ∈ ℝ^{m×m}, Δf11 ∈ ℝ^{n×n}, Δf12 ∈ ℝ^{n×m}, Δf21 ∈ ℝ^{m×n}, Δf22 ∈ ℝ^{m×m}, B1 ∈ ℝ^{n×q}, B2 ∈ ℝ^{m×q} and ρ(t) ∈ ℝ^{q×q}. Then SPNS (1) can be rearranged in the following form:
Λ(ε)ζ̇(t) = Aζ(t) + Δf(x(t), z(t))ζ(t) + Bρ(x(t), z(t))u(t) + Ew(t),
Y(t) = G1ζ(t) + G2w(t),    (2)

where Λ(ε) = diag{In, εIm}, ζ(t) = [x^T(t) z^T(t)]^T, Y(t) = [y1^T(t) y2^T(t)]^T, and A, B, E, G1, G2 are known constant matrices with A ∈ ℝ^{(n+m)×(n+m)}, B ∈ ℝ^{(n+m)×q} and E ∈ ℝ^{(n+m)×p}, defined as

A = [A11 A12; A21 A22], B = [B1^T B2^T]^T, E = [E1^T E2^T]^T,
G1 = diag{G11, G21}, G2 = diag{G12, G22}.

Here ρ(x(t), z(t)) ∈ ℝ^{q×q} is a known matrix function whose determinant is not equal to zero, and Δf(x(t), z(t)) ∈ ℝ^{(n+m)×(n+m)} is a matrix function with a nonlinear term of some state variables in every element. In this paper, Δf(x(t), z(t)) is assumed to be a continuously differentiable matrix function which can be approximated by a neural network as follows:
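As a concrete illustration of this rearrangement, the following NumPy sketch (the function name is ours; the n = m = 1 data in the test below are taken from the circuit example of Section IV) assembles Λ(ε), A, B, E, G1 and G2 from the subblocks of (1):

```python
import numpy as np

def assemble_spns(eps, A11, A12, A21, A22, B1, B2, E1, E2, G11, G21, G12, G22):
    """Assemble the matrices of the compact form (2) from the subblocks of (1)."""
    n, m = A11.shape[0], A22.shape[0]
    Lam = np.block([[np.eye(n), np.zeros((n, m))],
                    [np.zeros((m, n)), eps * np.eye(m)]])  # Lambda(eps) = diag{I_n, eps*I_m}
    A = np.block([[A11, A12], [A21, A22]])
    B = np.vstack([B1, B2])                                # B = [B1^T B2^T]^T
    E = np.vstack([E1, E2])                                # E = [E1^T E2^T]^T
    G1 = np.block([[G11, np.zeros((G11.shape[0], G21.shape[1]))],
                   [np.zeros((G21.shape[0], G11.shape[1])), G21]])  # diag{G11, G21}
    G2 = np.block([[G12, np.zeros((G12.shape[0], G22.shape[1]))],
                   [np.zeros((G22.shape[0], G12.shape[1])), G22]])  # diag{G12, G22}
    return Lam, A, B, E, G1, G2
```

With the circuit data of Section IV this reproduces the matrices listed there, e.g. A = [[-10, -1], [10, 0.2]] and Λ(0.9) = diag{1, 0.9}.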
Δf(x(t), z(t)) = BWσ(x(t), z(t)) + δ(x(t), z(t)),    (3)

where W ∈ ℝ^{q×l} denotes the optimal unknown constant weighting matrix, σ(x(t), z(t)) ∈ ℝ^{l×(n+m)} is a given basis function such that each component of σ(x(t), z(t)) takes values between 0 and 1, and δ(x(t), z(t)) ∈ ℝ^{(n+m)×(n+m)} is the approximation error. To this end, the following assumption is given.

Assumption 1. ||δ(x(t), z(t))δ^T(x(t), z(t))|| ≤ δ̄1, where δ̄1 is a real constant.
Before formulating the main problem, we first give the following definition.

Definition 1. System (2) is said to be passive if there exists a scalar γ > 0 such that

2E{∫_0^t Y^T(s)w(s) ds} ≥ −γE{∫_0^t w^T(s)w(s) ds}    (4)

for all t > 0 under the zero initial condition.
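Definition 1 can be checked numerically along sampled trajectories, since both sides of (4) become finite sums. A minimal sketch (the helper name and the toy signals are ours; a rectangle-rule approximation of the integrals):

```python
import numpy as np

def passivity_margin(Y, w, gamma, dt):
    """Left side minus right side of (4) for sampled signals Y, w of shape (N, p).

    (4) holds at the final time iff the returned value is nonnegative.
    """
    supply = 2.0 * float(np.sum(Y * w)) * dt      # 2 * int_0^t Y^T(s) w(s) ds
    dissip = gamma * float(np.sum(w * w)) * dt    # gamma * int_0^t w^T(s) w(s) ds
    return supply + dissip

# toy check: the memoryless map Y = w is passive, so (4) holds for any gamma > 0
t = np.linspace(0.0, 10.0, 1001)
w = np.stack([np.cos(0.1 * np.pi * t), np.cos(0.2 * np.pi * t)], axis=1)
assert passivity_margin(w, w, gamma=0.1, dt=t[1] - t[0]) >= 0.0
```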
Remark 1. The concept of system passivity relates the output signal to the external input signal. The physical meaning of a passive system is that the increment of the nonlinear system energy from s = 0 to s = t is always less than or equal to the energy supplied from outside. This means that the motion of a passive system is always accompanied by energy dissipation.
To obtain our results, the following lemmas will be employed.

Lemma 1 (Schur Complement). Given constant matrices S1, S2 and S3, where S1 = S1^T and S2 = S2^T > 0, then S1 + S3^T S2^{-1} S3 < 0 if and only if

[S1 S3^T; S3 −S2] < 0, or [−S2 S3; S3^T S1] < 0.

Lemma 2. For any positive constant η and any matrices X and Y,

X^T Y + Y^T X ≤ ηX^T X + η^{-1}Y^T Y.    (5)
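Lemma 2 is the standard completion-of-squares bound: the gap ηX^TX + η^{-1}Y^TY − X^TY − Y^TX equals (√η X − Y/√η)^T(√η X − Y/√η) and is therefore positive semi-definite. A quick randomized sanity check (not a proof; helper name is ours):

```python
import numpy as np

def lemma2_gap(X, Y, eta):
    """Smallest eigenvalue of eta*X^T X + (1/eta)*Y^T Y - X^T Y - Y^T X.

    Lemma 2 says this matrix is PSD, since it factors as
    (sqrt(eta)*X - Y/sqrt(eta))^T (sqrt(eta)*X - Y/sqrt(eta)).
    """
    G = eta * X.T @ X + Y.T @ Y / eta - X.T @ Y - Y.T @ X
    return float(np.min(np.linalg.eigvalsh((G + G.T) / 2.0)))

rng = np.random.default_rng(0)
for _ in range(100):
    X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
    assert lemma2_gap(X, Y, float(rng.uniform(0.1, 10.0))) >= -1e-8
```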
The objective of the following section is to design a neural network controller such that SPNS (2) is passive. Define the matrices P_ε = P + εQ with P_ε > 0, where

P = [P11 0; P12^T P22], Q = [0 P12; 0 0],

with P11 = P11^T > 0 and P22 = P22^T > 0. Furthermore, we assume δ̄1 = λ.
III. MAIN RESULTS
In this section, we will present a sufficient condition in terms of LMIs under which SPNS (2) is passive.
Theorem 1. If there exist positive-definite matrices P, R, a real matrix M, and positive constants η, α, λ, γ such that

[Θ P^T P^T; ∗ −λI 0; ∗ ∗ −αI] < 0,    (6)

[αG1G1^T − 2G2 − γI E^T; ∗ −ηI] < 0,    (7)

where Θ = P^TA^T + AP + M^TB^T + BM + ηI + R, then with the neural network adaptive controller and the adaptive law defined as

u(t) = ρ(x(t), z(t))^{-1}[Kζ(t) − Ŵσ(x(t), z(t))ζ(t)],
dŴ/dt = σ(x(t), z(t))ζ(t)ζ^T(t)P_ε^TB,    (8)

with W̃ = Ŵ − W and K = MP, SPNS (2) is globally passive for all ε ∈ (0, ε*), where ε* is a positive solution of the inequality

θ2 ε² + θ1 ε − θ0 < 0,    (9)

and

θ2 = ||ηQ^TQ||,
θ1 = ||(A + BK)^TQ + Q^T(A + BK) + η(P^TQ + Q^TP)||,
θ0 = λmin(P^{-T}RP^{-1}).
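Since (9) is a scalar quadratic in ε with θ0, θ1, θ2 ≥ 0, the bound ε* is simply its positive root; a small helper (hypothetical θ values, for illustration only):

```python
import math

def eps_star(theta0, theta1, theta2):
    """Positive root of theta2*eps^2 + theta1*eps - theta0 = 0, so that (9)
    holds for all eps in (0, eps*). Returns inf when theta1 = theta2 = 0
    (the case Q = 0 of Remark 2)."""
    if theta2 == 0.0:
        return theta0 / theta1 if theta1 > 0.0 else math.inf
    return (-theta1 + math.sqrt(theta1 ** 2 + 4.0 * theta2 * theta0)) / (2.0 * theta2)

# hypothetical values: eps^2 + eps - 6 < 0 exactly on (0, 2)
assert math.isclose(eps_star(6.0, 1.0, 1.0), 2.0)
assert eps_star(1.0, 0.0, 0.0) == math.inf   # Q = 0  =>  eps* -> infinity
```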
Proof. Choose the functional V: ℝ^{n+m} × ℝ+ → ℝ+ as

V(t) = ζ^T(t)P_ε^TΛ(ε)ζ(t) + tr(W̃^TW̃).    (10)
Let L denote the weak infinitesimal generator and, for brevity, write σ = σ(x(t), z(t)) and Δf = Δf(x(t), z(t)). Substituting the controller (8) into (2) and using (3), we obtain

LV(t) = 2ζ^T(t)P_ε^TΛ(ε)ζ̇(t) + 2 tr(W̃^T dŴ/dt)

= ζ^T(t)[(A + BK)^TP_ε + P_ε^T(A + BK)]ζ(t)
+ ζ^T(t)[(Δf − BŴσ)^TP_ε + P_ε^T(Δf − BŴσ)]ζ(t)
+ ζ^T(t)P_ε^TEw(t) + w^T(t)E^TP_εζ(t)
+ tr(B^TP_εζ(t)ζ^T(t)σ^TW̃) + tr(W̃^Tσζ(t)ζ^T(t)P_ε^TB)

= ζ^T(t)[(A + BK)^TP_ε + P_ε^T(A + BK)]ζ(t)
+ ζ^T(t)[(Δf − BWσ)^TP_ε + P_ε^T(Δf − BWσ)]ζ(t)
+ ζ^T(t)P_ε^TEw(t) + w^T(t)E^TP_εζ(t)
+ tr(B^TP_εζ(t)ζ^T(t)σ^TW̃) + tr(W̃^Tσζ(t)ζ^T(t)P_ε^TB)
− ζ^T(t)[(BW̃σ)^TP_ε + P_ε^TBW̃σ]ζ(t)

≤ ζ^T(t)[(A + BK)^TP_ε + P_ε^T(A + BK)]ζ(t)
+ ζ^T(t)[(Δf − BWσ)^TP_ε + P_ε^T(Δf − BWσ)]ζ(t)
+ ηζ^T(t)P_ε^TP_εζ(t) + η^{-1}w^T(t)E^TEw(t)
+ tr(B^TP_εζ(t)ζ^T(t)σ^TW̃) + tr(W̃^Tσζ(t)ζ^T(t)P_ε^TB)
− ζ^T(t)[(BW̃σ)^TP_ε + P_ε^TBW̃σ]ζ(t),

where the disturbance cross terms have been bounded by Lemma 2 with constant η.
Since tr[Π1Π2] = tr[Π2Π1] for any matrices Π1 and Π2 with appropriate dimensions,

ζ^T(t)σ^TW̃^TB^TP_εζ(t) = tr(B^TP_εζ(t)ζ^T(t)σ^TW̃),
ζ^T(t)P_ε^TBW̃σζ(t) = tr(W̃σζ(t)ζ^T(t)P_ε^TB),

so the trace terms generated by the adaptive law (8) cancel the W̃ terms above.
Combined with Assumption 1 (with δ̄1 = λ) and the expansion P_ε = P + εQ, it follows that

LV(t) ≤ ζ^T(t){[(A + BK)^TP + P^T(A + BK) + ηP^TP + λ^{-1}I]
+ ε[(A + BK)^TQ + Q^T(A + BK) + η(P^TQ + Q^TP)]
+ ε²[ηQ^TQ]}ζ(t) + η^{-1}w^T(t)E^TEw(t).    (11)
Consider the following index:

J(t) = LV(t) − 2Y^T(t)w(t) − γw^T(t)w(t)
≤ LV(t) + α^{-1}ζ^T(t)ζ(t) + w^T(t)(αG1G1^T − 2G2 − γI)w(t)

≤ ζ^T(t){[(A + BK)^TP + P^T(A + BK) + ηP^TP + λ^{-1}I + α^{-1}I]
+ ε[(A + BK)^TQ + Q^T(A + BK) + η(P^TQ + Q^TP)]
+ ε²[ηQ^TQ]}ζ(t)
+ w^T(t)[η^{-1}E^TE + αG1G1^T − 2G2 − γI]w(t)

≤ ζ^T(t){P^{-T}[P^T(A + BK)^T + (A + BK)P + ηI + λ^{-1}P^TP + α^{-1}P^TP]P^{-1}
+ ε||(A + BK)^TQ + Q^T(A + BK) + η(P^TQ + Q^TP)||I + ε²||ηQ^TQ||I}ζ(t)
+ w^T(t)[η^{-1}E^TE + αG1G1^T − 2G2 − γI]w(t)

≤ ζ^T(t){P^{-T}[P^TA^T + AP + M^TB^T + BM + ηI + λ^{-1}P^TP + α^{-1}P^TP]P^{-1}
+ εθ1 I + ε²θ2 I}ζ(t)
+ w^T(t)[η^{-1}E^TE + αG1G1^T − 2G2 − γI]w(t).    (12)
If

P^TA^T + AP + M^TB^T + BM + ηI + λ^{-1}P^TP + α^{-1}P^TP < −R,

which by Lemma 1 is exactly condition (6), then

J(t) < ζ^T(t)[−P^{-T}RP^{-1} + εθ1 I + ε²θ2 I]ζ(t)
+ w^T(t)[η^{-1}E^TE + αG1G1^T − 2G2 − γI]w(t)

≤ ζ^T(t)[−θ0 I + εθ1 I + ε²θ2 I]ζ(t)
+ w^T(t)[η^{-1}E^TE + αG1G1^T − 2G2 − γI]w(t).    (13)
From condition (7) (again by Lemma 1) together with (9), it is easy to obtain, for all ε ∈ (0, ε*),

LV(t) − 2Y^T(t)w(t) − γw^T(t)w(t) < 0,    (14)

which means

2E{∫_0^t Y^T(s)w(s) ds} ≥ E{V(t) − γ∫_0^t w^T(s)w(s) ds} ≥ −γE{∫_0^t w^T(s)w(s) ds}.    (15)

Therefore, SPNS (2) is globally passive.
Remark 2. If the solution of (6) has P = diag{P11, P22}, then P12 = 0 and therefore Q = 0, so θ1 = θ2 = 0. According to (9), the stability bound is then ε* → ∞.
Since a polytopic uncertainty description can be used to characterize uncertain parameters, we now suppose that the system matrices contain partially unknown parameters and reside in a given polytope. Assume that

Ω ≜ (A, B, E, G1, G2) ∈ ℜ,

where ℜ is a given convex-bounded polyhedral domain described by s vertices,

ℜ = {Ω(μ) | Ω(μ) = Σ_{i=1}^{s} μi Ωi; Σ_{i=1}^{s} μi = 1, μi ≥ 0},    (16)

where Ωi = (Ai, Bi, Ei, G1i, G2i) denote the vertices of the polytope. Since LMIs (6) and (7) in Theorem 1 are affine in the system matrices, the theorem can be directly applied to the passivity and passification problem on the basis of the quadratic stability notion. Therefore, we present the following corollary without proof.
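The vertex argument rests on a simple fact: an affine matrix-valued map evaluated at a convex combination of vertices equals the same convex combination of its vertex values, and a convex combination of negative definite matrices stays negative definite. A small numerical illustration (random "vertex" LMI values of our own making):

```python
import numpy as np

def max_eig(S):
    """Largest eigenvalue of the symmetric part of S."""
    return float(np.max(np.linalg.eigvalsh((S + S.T) / 2.0)))

rng = np.random.default_rng(1)
# two "vertex" LMI values F(Omega_1), F(Omega_2): negative definite matrices
X = rng.standard_normal((4, 4)); F1 = -(X @ X.T) - 0.1 * np.eye(4)
X = rng.standard_normal((4, 4)); F2 = -(X @ X.T) - 0.1 * np.eye(4)

# any convex combination stays negative definite, which is why checking
# (6)-(7) at the s vertices suffices for LMIs affine in the system matrices
for mu in np.linspace(0.0, 1.0, 11):
    assert max_eig(mu * F1 + (1.0 - mu) * F2) < 0.0
```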
Fig. 1. An electronic circuit with parasitic capacitor and nonlinear resistor.
Corollary 1. Suppose system (2) contains the polytopic uncertainty described in (16). There exists a neural network adaptive controller (8) such that system (2) is globally passive if there exist positive-definite matrices P, R, a real matrix M, and positive constants η, α, λ satisfying (6) and (7) for i = 1, 2, · · · , s, where the matrices A, B, E, G1, G2 are replaced with Ai, Bi, Ei, G1i, G2i, respectively.
IV. AN ILLUSTRATIVE EXAMPLE
In this section, we present an example to illustrate theeffectiveness of the proposed approach. Consider a nonlinearcircuit [10] illustrated in Fig. 1 with nonlinear resistor. TheππΆ β πΌπ characteristics of the resistor is πΌπ = 1
5π3πΆ β 1
5ππΆ .Applying the Kirchoffβs voltage and current laws, we canobtain the state equation as
πΏπΌπΏ(π‘) = βπΌπΏπ β ππΆ + π1π’+ π1(π‘),
πΆοΏ½ΜοΏ½πΆ(π‘) = πΌπΏ β 1
5(π 3
πΆ β ππΆ) + π2π’+ π2(π‘), (17)
where π = πΆ, π1 = 0.6, π2 = 0.8, π = 1Ξ© and πΏ = 0.1π» .Let π₯(π‘) = πΏπΌπΏ and π§(π‘) = ππΆ . π1(π‘) and π2(π‘) areexternal noises. Let π1(π‘) = 0.4 cos(0.1ππ‘) and π2(π‘) =0.6 cos(0.2ππ‘), then the state equation (17) can be rewrittenby the following SPNS:
ẋ(t) = −10x(t) − z(t) + 0.6u(t) + w1(t),
εż(t) = 10x(t) − 0.2(z^2(t) − 1)z(t) + 0.8u(t) + w2(t).    (18)

Consider the outputs

y1(t) = 0.2x(t) + 0.2 cos(0.1πt),
y2(t) = 0.1z(t) + 0.1 cos(0.2πt);
then

A = [−10 −1; 10 0.2], Δf(x(t), z(t)) = [0 0; 0 −0.2z^2(t)],
B = [0.6; 0.8], ρ(x(t), z(t)) = 1,
E = [0.4 0; 0 0.6], w(t) = [cos(0.1πt); cos(0.2πt)],
G1 = [0.2 0; 0 0.1], G2 = [0.2 0; 0 0.1].
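As a sanity check, these matrices reproduce the right-hand side of (18): at any state, (A + Δf(x, z))ζ + Bρu + Ew must equal the vector field [ẋ, εż] written out directly. A quick numerical check (function names are ours):

```python
import numpy as np

def rhs_compact(x, z, u, t):
    """Right-hand side [x_dot, eps*z_dot] of (2) with the example data."""
    A = np.array([[-10.0, -1.0], [10.0, 0.2]])
    Df = np.array([[0.0, 0.0], [0.0, -0.2 * z ** 2]])
    B = np.array([0.6, 0.8])
    E = np.array([[0.4, 0.0], [0.0, 0.6]])
    w = np.array([np.cos(0.1 * np.pi * t), np.cos(0.2 * np.pi * t)])
    zeta = np.array([x, z])
    return (A + Df) @ zeta + B * u + E @ w   # rho = 1 here

def rhs_circuit(x, z, u, t):
    """Right-hand side of (18) written out directly."""
    w1 = 0.4 * np.cos(0.1 * np.pi * t)
    w2 = 0.6 * np.cos(0.2 * np.pi * t)
    return np.array([-10.0 * x - z + 0.6 * u + w1,
                     10.0 * x - 0.2 * (z ** 2 - 1.0) * z + 0.8 * u + w2])

for x, z, u, t in [(6.0, -5.0, 0.0, 0.0), (1.5, 2.0, -3.0, 0.7)]:
    assert np.allclose(rhs_compact(x, z, u, t), rhs_circuit(x, z, u, t))
```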
Fig. 2. Evolution of the states x(t), z(t) of SPNS (18) with the neural network adaptive controller (19) and the adaptive law (20), with ε = 0.9.
For ε = 0.9, the desired solution can be determined from (6) and (7) by the LMI Toolbox:

M = [−1.2923 −17.2209],
P = [0.7973 0; 0 0.6129],
R = [8.8561 −0.2280; −0.2280 8.9891],
K = [−1.0304 −10.5547],
α = 9.1601, γ = 9.1128, η = 8.9226, ε* → ∞.
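The listed solution can be sanity-checked numerically: reading the diagonal matrix as P and the row vector as M (our reading of the garbled symbols, confirmed by the digits), the gain satisfies K = MP, and P is positive definite, so P_ε = P + εQ = P > 0 with Q = 0 (Remark 2):

```python
import numpy as np

M = np.array([[-1.2923, -17.2209]])
P = np.array([[0.7973, 0.0], [0.0, 0.6129]])
K = np.array([[-1.0304, -10.5547]])

assert np.allclose(M @ P, K, atol=1e-3)       # K = M P, to the printed precision
assert np.all(np.linalg.eigvalsh(P) > 0.0)    # P (hence P_eps, with Q = 0) is positive definite
```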
Taking

σ(x(t), z(t)) = [1/(1 + e^{−0.3x(t)}) 1/(1 + e^{−0.7z(t)})],

the neural network adaptive controller and the adaptive law are obtained as follows:

u(t) = −1.0304x(t) − 10.5547z(t) − Ŵx(t)/(1 + e^{−0.3x(t)}) − Ŵz(t)/(1 + e^{−0.7z(t)}),    (19)

dŴ/dt = (x(t)/(1 + e^{−0.3x(t)}) + z(t)/(1 + e^{−0.7z(t)})) × (0.4784x(t) + 0.4903z(t)),    (20)

where Ŵ ∈ ℝ^{1×1}. The proposed neural network adaptive controller (19) and adaptive law (20) are applied to SPNS (18). Simulation results for the states x(t), z(t), the controller u(t), and the adaptive parameter Ŵ are shown in Figs. 2-4 with x(0) = 6, z(0) = −5 and λ = 10.
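The reported simulation can be reproduced with a simple forward-Euler integration of (18) under (19) and (20); the initial value Ŵ(0) below is our own guess for illustration (the paper only specifies x(0) and z(0)):

```python
import numpy as np

def simulate(eps=0.9, x0=6.0, z0=-5.0, w_hat0=2.2, dt=1e-4, T=2.0):
    """Forward-Euler simulation of SPNS (18) with controller (19) and law (20)."""
    x, z, w_hat = x0, z0, w_hat0
    for k in range(int(T / dt)):
        t = k * dt
        s1 = x / (1.0 + np.exp(-0.3 * x))     # components of sigma * zeta
        s2 = z / (1.0 + np.exp(-0.7 * z))
        u = -1.0304 * x - 10.5547 * z - w_hat * (s1 + s2)        # controller (19)
        w1 = 0.4 * np.cos(0.1 * np.pi * t)
        w2 = 0.6 * np.cos(0.2 * np.pi * t)
        dx = -10.0 * x - z + 0.6 * u + w1                        # slow dynamics of (18)
        dz = (10.0 * x - 0.2 * (z ** 2 - 1.0) * z + 0.8 * u + w2) / eps  # fast dynamics
        dw = (s1 + s2) * (0.4784 * x + 0.4903 * z)               # adaptive law (20)
        x, z, w_hat = x + dt * dx, z + dt * dz, w_hat + dt * dw
    return x, z, w_hat

xT, zT, wT = simulate()
assert all(np.isfinite(v) for v in (xT, zT, wT))   # trajectories do not blow up
```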
For ε = 0.1, the desired solution can be determined from (6) and (7) by the LMI Toolbox:

M = [−69.1965 −107.7597],
P = [0.7973 0; 0 0.6129],
R = [46.5857 55.1923; 55.1923 78.7812],
Fig. 3. Evolution of the neural network adaptive controller (19) with ε = 0.9.
Fig. 4. Evolution of the adaptive parameter Ŵ under the adaptive law (20) with ε = 0.9.
K = [−55.1704 −66.0459],
α = 106.6095, γ = 117.1773, η = 9.9363, ε* → ∞.
Then the neural network adaptive controller and the adaptive law are obtained as follows:

u(t) = −55.1704x(t) − 66.0459z(t) − Ŵx(t)/(1 + e^{−0.3x(t)}) − Ŵz(t)/(1 + e^{−0.7z(t)}),    (21)

dŴ/dt = (x(t)/(1 + e^{−0.3x(t)}) + z(t)/(1 + e^{−0.7z(t)})) × (0.4784x(t) + 0.4903z(t)).    (22)

Simulation results for the states x(t), z(t), the controller u(t), and the adaptive parameter Ŵ are shown in Figs. 5-7 with the same initial values as above.
Remark 3. From the illustrative example, we can see that the larger the approximation error is, the larger the absolute value of the control gain required to make the system passive.
V. CONCLUSIONS
In this paper, passivity and passification have been studied for a class of SPNS via neural networks. A new functional has been used to design a neural network adaptive controller such that the closed-loop system is globally
Fig. 5. Evolution of the states x(t), z(t) of SPNS (18) with the neural network adaptive controller (21) and the adaptive law (22), with ε = 0.1.
Fig. 6. Evolution of the adaptive parameter Ŵ under the adaptive law (22) with ε = 0.1.
passive, and the controller parameters can be obtained by solving certain LMIs. An illustrative example has been used to show the effectiveness of the proposed method.
REFERENCES
[1] P. V. Kokotovic, H. K. Khalil, and J. O'Reilly, Singular Perturbation Methods in Control: Analysis and Design. Orlando, FL: Academic, 1986.
[2] D. S. Naidu, Singular Perturbation Methodology in Control Systems. London: Peter Peregrinus, 1988.
[3] H. K. Khalil, Nonlinear Systems, 3rd ed. Upper Saddle River, NJ: Prentice Hall, 2002.
[4] B. S. Chen and C. L. Lin, "On the stability of singularly perturbed systems," IEEE Trans. Automat. Control, vol. 35, pp. 1265-1270, 1990.
Fig. 7. Evolution of the neural network adaptive controller (21) with ε = 0.1.
[5] S. J. Chen and J. L. Lin, "Maximal stability bounds of singularly perturbed systems," J. Franklin Inst., pp. 1209-1218, 1999.
[6] H. Kando and T. Iwazumi, "Stabilizing feedback controllers for singularly perturbed systems," IEEE Trans. Syst., Man, Cybern., vol. SMC-14, no. 6, pp. 903-911, 1984.
[7] J. S. Chiou, F. C. Kung, and T. H. Li, "Robust stabilization of a class of singularly perturbed discrete bilinear systems," IEEE Trans. Autom. Control, vol. 45, no. 6, pp. 1187-1191, 2000.
[8] F. Sun, Y. Hu, and H. Liu, "Stability analysis and robust controller design for uncertain discrete-time singularly perturbed systems," Dyn. Contin. Discrete Impuls. Syst. Ser. B Appl. Algorithms, vol. 12, no. 5-6, pp. 849-865, 2005.
[9] W. Assawinchaichote and S. K. Nguang, "H∞ fuzzy control design for nonlinear singularly perturbed systems with pole placement constraints: an LMI approach," IEEE Trans. Syst., Man, Cybern., vol. 34, pp. 579-588, 2004.
[10] T. S. Li and K. J. Lin, "Stabilization of singularly perturbed fuzzy systems," IEEE Trans. Fuzzy Syst., vol. 12, pp. 579-595, 2004.
[11] K. J. Lin, "Neural network based observer and adaptive control design for a class of singularly perturbed nonlinear systems," in Proc. ASCC 2011, pp. 1176-1180, 2011.
[12] K. J. Lin, "Composite observer-based feedback design for singularly perturbed systems via LMI approach," in Proc. SICE 2010, pp. 3065-3061, 2010.
[13] K. J. Lin and T. S. Li, "Stabilization of uncertain singularly perturbed systems with pole-placement constraints," IEEE Trans. Circuits Syst. II, vol. 53, pp. 916-920, 2006.
[14] T. S. Li and K. J. Lin, "Stabilization of singularly perturbed fuzzy systems," IEEE Trans. Fuzzy Syst., vol. 12, pp. 579-595, 2004.
[15] M. S. Mahmoud and A. Ismail, "Passivity and passification of time-delay systems," J. Math. Anal. Appl., vol. 292, pp. 247-258, 2004.
[16] C. Li, H. Zhang, and X. Liao, "Passivity and passification of fuzzy systems with time delays," Comput. Math. Appl., vol. 52, pp. 1067-1078, 2006.
[17] H. Gao, T. Chen, and T. Chai, "Passivity and passification for networked control systems," SIAM J. Control Optim., vol. 46, pp. 1299-1322, 2007.
[18] A. Bemporad, G. Bianchini, and F. Brogi, "Passivity analysis and passification of discrete-time hybrid systems," IEEE Trans. Autom. Control, vol. 53, pp. 1004-1009, 2008.
[19] Z. G. Zeng, T. W. Huang, and W. X. Zheng, "Multistability of recurrent neural networks with time-varying delays and the piecewise linear activation function," IEEE Trans. Neural Netw., vol. 21, no. 8, pp. 1371-1377, 2010.
[20] Z. G. Zeng and J. Wang, "Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates," Neural Netw., vol. 22, pp. 651-657, 2009.
[21] Z. G. Zeng and J. Wang, "Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks," IEEE Trans. Syst. Man Cybern. B, vol. 38, no. 6, pp. 1525-1536, 2008.
[22] Z. G. Zeng and J. Wang, "Global exponential stability of recurrent neural networks with time-varying delays in the presence of strong external stimuli," Neural Netw., vol. 19, no. 10, pp. 1528-1537, 2006.
[23] Z. G. Zeng and J. Wang, "Multiperiodicity of discrete-time delayed neural networks evoked by periodic external inputs," IEEE Trans. Neural Netw., vol. 17, no. 5, pp. 1141-1151, 2006.
[24] Z. G. Zeng and J. Wang, "Improved conditions for global exponential stability of recurrent neural networks with time-varying delays," IEEE Trans. Neural Netw., vol. 17, no. 3, pp. 623-635, 2006.
[25] Z. G. Zeng and J. Wang, "Complete stability of cellular neural networks with time-varying delays," IEEE Trans. Circuits Syst. I, vol. 53, no. 4, pp. 944-955, 2006.
[26] Z. G. Zeng, J. Wang, and X. X. Liao, "Global asymptotic stability and global exponential stability of neural networks with unbounded time-varying delays," IEEE Trans. Circuits Syst. II, vol. 52, no. 3, pp. 168-173, 2005.
[27] Y. C. Chang and B. S. Chen, "A nonlinear adaptive H∞ tracking control design in robotic systems via neural networks," IEEE Trans. Control Syst. Technol., vol. 5, pp. 13-29, 1997.
[28] F. Abdollahi, H. A. Talebi, and R. V. Patel, "A stable neural network-based observer with application to flexible-joint manipulators," IEEE Trans. Neural Networks, vol. 17, pp. 118-129, 2006.
[29] T. Hayakawa, W. M. Haddad, and N. Hovakimyan, "Neural network adaptive control for a class of nonlinear systems," IEEE Trans. Neural Networks, vol. 19, pp. 80-89, 2008.