Passivity and passification for a class of singularly perturbed nonlinear systems via neural networks

Shiping Wen, Zhigang Zeng, and Tingwen Huang

Abstract—This paper is concerned with the problem of passivity and passification for a class of singularly perturbed nonlinear systems (SPNS) via neural networks. By constructing a proper functional and employing the linear matrix inequality (LMI) technique, some novel sufficient conditions are derived to make the SPNS passive. The allowable perturbation bound ξ* can be determined via certain algebraic inequalities, and the proposed neural-network-based controller makes the SPNS passive for all ξ ∈ (0, ξ*). Finally, a numerical example is given to illustrate the theoretical results.

Keywords: Passivity and passification; SPNS; Neural network

I. INTRODUCTION

In many practical systems, the presence of small masses, moments of inertia, and resistances gives rise to two-time-scale (singularly perturbed) systems. Singularly perturbed systems, represented by slow and fast subsystems, have been studied by many researchers [1]-[3]. Using the singular perturbation method, the stability of a high-dimensional system can be analyzed based on the lower-order slow and fast subsystems. Recently, several works have considered stability analysis and stabilization of singularly perturbed systems with the stability bound ξ* [4]-[5].

There are many studies about linear singularly perturbed systems [6]-[8] and SPNS [9]-[14]. On the other hand, the passivity and passification problems for practical systems have been attracting great attention. Passivity theory plays an important role in circuits, networks, systems and control. It provides a useful way to deal with delay systems [15], fuzzy systems [16], networked control systems [17], hybrid systems [18], etc.

In the past decades, neural networks have been extensively studied [19]-[26] and successfully applied in many areas such as combinatorial optimization, signal processing, image processing and pattern recognition. Adaptive control based on neural networks has also been reported in many works [27]-[29]. However, neural-network-based passivity analysis and passification results for SPNS are very few. Therefore, the purpose of this paper is to fill this gap.

Manuscript received December 15, 2011. The work is supported by the Natural Science Foundation of China under Grants 60974021 and 61125303, the 973 Program of China under Grant 2011CB710606, and the Fund for Distinguished Young Scholars of Hubei Province under Grant 2010CDA081.

Shiping Wen and Zhigang Zeng are with the Department of Control Science and Engineering, Huazhong University of Science and Technology, and the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, Hubei, 430074, China (Email: [email protected]).

Tingwen Huang is with Texas A & M University at Qatar, Doha 5825, Qatar (Email: [email protected]).

Motivated by the above discussion, in this paper we investigate the problems of passivity and passification of SPNS via neural networks. The main contributions of this paper can be summarized as follows: (i) the passivity analysis is first extended to the SPNS; (ii) a novel Lyapunov functional combined with matrix analysis techniques is developed to obtain sufficient conditions under which the closed-loop system is globally passive in the sense of expectation. These sufficient conditions are given in the form of LMIs that can be solved numerically.

The rest of the paper is organized as follows. In Section II, the system studied in this paper is proposed and some preliminaries are given. In Section III, we address the neural network adaptive controller design scheme in detail. Passivity analysis under the neural network adaptive controller and the passification bound on the singular perturbation parameter are also carried out in this section. In Section IV, an illustrative example is constructed to demonstrate the effectiveness and usefulness of the acquired results, and finally, conclusions are drawn in Section V.

Notation. The notation used throughout the paper is fairly standard. ℕ is the set of natural numbers and ℕ⁺ stands for the set of nonnegative integers; ℝ^n and ℝ^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The notation P > 0 (≥ 0) means that P is positive definite (positive semi-definite). In symmetric block matrices or complex matrix expressions, we use an asterisk (∗) to represent a term that is induced by symmetry, and diag{···} stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations. 𝔼{x} stands for the expectation of the stochastic variable x. l_2[0, +∞) is the space of square integrable vectors. The notation ||·|| stands for the usual l_2 norm, while |·| refers to the Euclidean vector norm. Sometimes, when no confusion would arise, the dimensions of a function or a matrix will be omitted for convenience.

II. PRELIMINARIES

In this paper, we aim to consider the following SPNS:

  ẋ(t) = f_1(x(t), z(t)) + g_1(x(t), z(t)) u(t) + E_1 ω(t),
  ξ ż(t) = f_2(x(t), z(t)) + g_2(x(t), z(t)) u(t) + E_2 ω(t),
  y_1(t) = G_11 x(t) + G_12 ω(t),
  y_2(t) = G_21 z(t) + G_22 ω(t),                                  (1)


where π‘₯(𝑑) =[π‘₯1(𝑑) π‘₯2(𝑑) β‹… β‹… β‹… , π‘₯π‘š(𝑑)

]𝑇 ∈ β„π‘šΓ—1

and 𝑧(𝑑) =[𝑧1(𝑑) 𝑧2(𝑑) β‹… β‹… β‹… , 𝑧𝑛(𝑑)

]𝑇 ∈ ℝ𝑛×1 denote

the state vectors, 𝑦1(𝑑) ∈ β„π‘šΓ—1 is the output of π‘₯(𝑑),

𝑦2(𝑑) ∈ ℝ𝑛×1 is the output of 𝑧(𝑑), 𝑓1(π‘₯(𝑑), 𝑧(𝑑)) ∈

β„π‘šΓ—1, 𝑓2(π‘₯(𝑑), 𝑧(𝑑)) ∈ ℝ

𝑛×1, 𝑔1(π‘₯(𝑑), 𝑧(𝑑)) ∈ β„π‘šΓ—π‘,

𝑔2(π‘₯(𝑑), 𝑧(𝑑)) ∈ ℝ𝑛×𝑝 are nonlinear functions, 𝑒(𝑑) ∈ ℝ

𝑝×1

is the control input, πœ”(𝑑) ∈ β„π‘žΓ—1 is the disturbance,

which is assumed to an arbitrary signal in 𝑙2[0,∞). 𝐺11 βˆˆβ„

π‘šΓ—π‘š, 𝐺21 ∈ ℝ𝑛×𝑛, 𝐸1, 𝐺12 ∈ ℝ

π‘šΓ—π‘ž, 𝐸2, 𝐺22 ∈ β„π‘›Γ—π‘ž

are constant matrices, and πœ‰ is the singular perturbationparameter. While

𝑓1(π‘₯(𝑑), 𝑧(𝑑)) = 𝐴11π‘₯(𝑑) +𝐴12𝑧(𝑑) + Δ𝑓11(𝑑)π‘₯(𝑑)

+ Δ𝑓12(𝑑)𝑧(𝑑),

𝑓2(π‘₯(𝑑), 𝑧(𝑑)) = 𝐴21π‘₯(𝑑) +𝐴22𝑧(𝑑) + Δ𝑓21(𝑑)π‘₯(𝑑)

+ Δ𝑓22(𝑑)𝑧(𝑑),

𝑔1(π‘₯(𝑑), 𝑧(𝑑)) = 𝐡1𝑔(𝑑), 𝑔2(π‘₯(𝑑), 𝑧(𝑑)) = 𝐡2𝑔(𝑑),

𝐴11 ∈ β„π‘šΓ—π‘š, 𝐴12ℝ

π‘šΓ—π‘›, 𝐴21β„π‘›Γ—π‘š, 𝐴22 ∈ ℝ

π‘šΓ—π‘›, Δ𝑓11 βˆˆβ„

π‘šΓ—π‘š, Δ𝑓12 ∈ β„π‘šΓ—π‘›, Δ𝑓21 ∈ ℝ

π‘›Γ—π‘š, Δ𝑓22 ∈ ℝ𝑛×𝑛,

𝐡1 ∈ β„π‘šΓ—π‘, 𝐡2 ∈ ℝ

𝑛×𝑝, 𝑔(𝑑) ∈ ℝ𝑝×𝑝, then SPNS (1) can

be rearranged as the following form,⎧⎨⎩

Ξ”(πœ‰)οΏ½Μ‡οΏ½(𝑑) = 𝐴𝑋(𝑑) + Δ𝑓(π‘₯(𝑑), 𝑧(𝑑))𝑋(𝑑)+𝐡𝑔(π‘₯(𝑑), 𝑧(𝑑))𝑒(𝑑) + πΈπœ”(𝑑),

π‘Œ (𝑑) = 𝐺1𝑋(𝑑) +𝐺2πœ”(𝑑),(2)

where Ξ”(πœ‰) =

[πΌπ‘š 00 πœ‰πΌπ‘›

], 𝑋(𝑑) =

[π‘₯𝑇 (𝑑) 𝑧𝑇 (𝑑)

]𝑇,

π‘Œ (𝑑) =[𝑦𝑇1 (𝑑) 𝑦𝑇2 (𝑑)

]𝑇, 𝐴,𝐡, and 𝐸,𝐺1, 𝐺2 are

known constant matrices with 𝐴 ∈ ℝ(𝑛+π‘š)(𝑛+π‘š), 𝐡 ∈

ℝ(𝑛+π‘š)𝑝 and 𝐸 ∈ ℝ

(𝑛+π‘š)π‘ž which are defined as 𝐴 =[𝐴11 𝐴12

𝐴21 𝐴22

], 𝐡 =

[𝐡𝑇

1 𝐡𝑇2

], 𝐸 =

[𝐸𝑇

1 𝐸𝑇2

],

𝐺1 = diag{𝐺11, 𝐺21}, 𝐺2 = diag{𝐺12, 𝐺22}. And𝑔(π‘₯(𝑑), 𝑧(𝑑)) ∈ ℝ

𝑝×𝑝 is a known matrix function andthe determinant of 𝑔(π‘₯(𝑑), 𝑧(𝑑)) is not equal to zero,Δ𝑓(π‘₯(𝑑), 𝑧(𝑑)) ∈ ℝ

(𝑛+π‘š)(𝑛+π‘š) is a matrix function withnonlinear term of some state valuables in every element. Inthis paper, the function Δ𝑓(π‘₯(𝑑), 𝑧(𝑑)) is a continuously anddifferentiable matrix function which can be approximated bya neural network as the following,

Δ𝑓(π‘₯(𝑑), 𝑧(𝑑)) = π΅π‘Šπ‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)) + πœ‘(π‘₯(𝑑), 𝑧(𝑑)) (3)

where π‘Š ∈ β„π‘Γ—π‘Ÿ denotes optimal unknown constant

weighting matrix, πœ™(π‘₯(𝑑), 𝑧(𝑑)) ∈ β„π‘Ÿ(𝑛+π‘š) is a given basis

function such that each component of πœ™(π‘₯(𝑑), 𝑧(𝑑)) takesvalues between 0 and 1. πœ‘(π‘₯(𝑑), 𝑧(𝑑)) ∈ ℝ

(𝑛+π‘š)(𝑛+π‘š) is anapproximation error. To this end, the following assumptionis given

Assumption 1. ||φ^T(x(t), z(t)) φ(x(t), z(t))|| ≤ σ^{-1}, where σ^{-1} is a real constant.

Before formulating the main problem, we first give the following definition.

Definition 1. System (2) is said to be passive if there exists a scalar β > 0 such that

  2 𝔼{ ∫_0^t ω^T(s) Y(s) ds } ≥ −β 𝔼{ ∫_0^t ω^T(s) ω(s) ds }       (4)

for all t > 0 under the zero initial condition.

Remark 1. The concept of system passivity is related to the output signal and the external input signal. The physical meaning of a passive system is that the increment of the nonlinear system energy from s = 0 to s = t is always less than or equal to the energy supplied from outside. This means that the motion of a passive system is always accompanied by energy dissipation.
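For readers who want to check Definition 1 numerically on simulated data, the following small sketch (ours, not from the paper) evaluates both sides of inequality (4) by trapezoidal integration; the function name passivity_check and the sampled-trajectory interface are assumptions.

```python
import numpy as np

def passivity_check(t, omega, Y, beta):
    """Evaluate both sides of inequality (4) on sampled trajectories.

    t     : (N,) increasing time samples
    omega : (N, q) samples of the disturbance w(t_k)
    Y     : (N, q) samples of the output Y(t_k) (here q = n + m)
    beta  : the scalar beta > 0 from Definition 1
    Returns (lhs, rhs); (4) holds on this horizon iff lhs >= rhs.
    """
    def trapz(f):  # trapezoidal rule for the integral of f over [0, t]
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

    lhs = 2.0 * trapz(np.sum(omega * Y, axis=1))         # 2 * int w^T Y ds
    rhs = -beta * trapz(np.sum(omega * omega, axis=1))   # -beta * int w^T w ds
    return lhs, rhs
```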

To obtain our results, the following lemmas will be employed.

Lemma 1. (Schur Complement) Given constant matrices S_1, S_2 and S_3, where S_1 = S_1^T and S_2 = S_2^T > 0, then S_1 + S_3^T S_2^{-1} S_3 < 0 if and only if

  [S_1  S_3^T;  S_3  −S_2] < 0,   or   [−S_2  S_3;  S_3^T  S_1] < 0.

Lemma 2. It is known that for any positive constant ρ and any matrices W and V of appropriate dimensions,

  W^T V + V^T W ≤ ρ W^T W + ρ^{-1} V^T V.                           (5)

The objective of the following section is to design the neural network controller such that SPNS (2) is passive. Define the matrices P_ξ = P + ξT with P_ξ > 0, where

  P = [P_11  0;  P_12^T  P_22],   T = [0  P_12;  0  0],

with P_11 = P_11^T > 0 and P_22 = P_22^T > 0. Furthermore, we denote P^{-1} = Q.

III. MAIN RESULTS

In this section, we will present a sufficient condition in terms of LMIs, under which SPNS (2) is passive.

Theorem 1. If there exist positive-definite matrices Q, R, a real matrix M, and positive constants σ, α, ε such that

  [Λ  Q^T  Q^T;  ∗  −σI  0;  ∗  ∗  −αI] < 0,                        (6)

  [αG_1G_1^T − 2G_2 − βI  E^T;  ∗  −εI] < 0,                        (7)

where Λ = Q^T A^T + A Q + M^T B^T + B M + εI + R, then, with the neural network adaptive controller and the adaptive law defined as

  u(t) = g(x(t), z(t))^{-1} [K X(t) − W̄^T ϕ(x(t), z(t)) X(t)],
  dW̄/dt = ϕ(x(t), z(t)) X(t) X^T(t) P_ξ^T B,                        (8)

where Ŵ = W̄ − W and K = M P, SPNS (2) is globally passive for all ξ ∈ (0, ξ*), where ξ* is a positive solution of the inequality

  μ_2 ξ² + μ_1 ξ − μ_0 < 0,                                          (9)

with

  μ_2 = ||ε T^T T||,
  μ_1 = ||(A + BK)^T T + T^T(A + BK) + ε(T^T P + P^T T)||,
  μ_0 = ||λ_min(P^T R P)||.
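Once feasible Q, R, M, σ, α, ε have been found, the bound ξ* in (9) is simply the positive root of μ_2 ξ² + μ_1 ξ − μ_0 = 0 (and ξ* → ∞ when T = 0, see Remark 2 below). A minimal sketch of this computation follows; it is ours, not the authors' code, and the helper name perturbation_bound is an assumption.

```python
import numpy as np

def perturbation_bound(A, B, K, P, T, R, eps):
    """Return xi* as the positive root of mu2*xi^2 + mu1*xi - mu0 = 0 from (9).

    mu2 = ||eps T^T T||, mu1 = ||(A+BK)^T T + T^T(A+BK) + eps (T^T P + P^T T)||,
    mu0 = |lambda_min(P^T R P)|; the spectral norm is used for ||.||.
    """
    Acl = A + B @ K
    mu2 = np.linalg.norm(eps * T.T @ T, 2)
    mu1 = np.linalg.norm(Acl.T @ T + T.T @ Acl + eps * (T.T @ P + P.T @ T), 2)
    mu0 = abs(np.linalg.eigvalsh(P.T @ R @ P).min())
    if mu2 < 1e-12:                       # T = 0: the quadratic term vanishes
        return np.inf if mu1 < 1e-12 else mu0 / mu1
    return (-mu1 + np.sqrt(mu1**2 + 4.0 * mu2 * mu0)) / (2.0 * mu2)
```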

Proof. Choose the functional V(·) ∈ ν : ℝ^{n+m} × ℝ_+ → ℝ_+ as

  V(t) = X^T(t) P_ξ^T Δ(ξ) X(t) + tr(Ŵ^T Ŵ).                        (10)

Let π’œ be the weak infinitesimal generator, then

π’œπ‘‰ (𝑑) = 2𝑋𝑇 (𝑑)π‘ƒπ‘‡πœ‰ Ξ”(πœ‰)οΏ½Μ‡οΏ½(𝑑) + 2π‘‘π‘Ÿ(�̂�𝑇 Λ™Μ‚

π‘Š ),

= 𝑋𝑇 (𝑑)[(𝐴+𝐡𝐾)π‘‡π‘ƒπœ‰ + 𝑃𝑇

πœ‰ (𝐴+𝐡𝐾)]𝑋(𝑑)

+𝑋𝑇 (𝑑)[(𝑓(π‘₯(𝑑), 𝑧(𝑑)) βˆ’π΅οΏ½Μ„οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))π‘‡π‘ƒπœ‰

+ π‘ƒπ‘‡πœ‰ (𝑓(π‘₯(𝑑), 𝑧(𝑑)) βˆ’π΅οΏ½Μ„οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))

]𝑋(𝑑)

+𝑋𝑇 (𝑑)π‘ƒπ‘‡πœ‰ πΈπœ”(𝑑) + πœ”π‘‡ (𝑑)πΈπ‘‡π‘ƒπœ‰π‘‹(𝑑)

+ tr(π΅π‘‡π‘ƒπœ‰π‘‹(𝑑)𝑋𝑇 (𝑑)πœ™π‘‡ (π‘₯(𝑑), 𝑧(𝑑))οΏ½Μ‚οΏ½ )

+ tr(οΏ½Μ‚οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑))𝑋(𝑑)𝑋𝑇 (𝑑)π‘ƒπ‘‡πœ‰ 𝐡)

= 𝑋𝑇 (𝑑)[(𝐴+𝐡𝐾)π‘‡π‘ƒπœ‰ + 𝑃𝑇

πœ‰ (𝐴+𝐡𝐾)]𝑋(𝑑)

+𝑋𝑇 (𝑑)[(𝑓(π‘₯(𝑑), 𝑧(𝑑)) βˆ’π΅π‘Šπ‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))π‘‡π‘ƒπœ‰

+ π‘ƒπ‘‡πœ‰ (𝑓(π‘₯(𝑑), 𝑧(𝑑)) βˆ’π΅π‘Šπ‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))

]𝑋(𝑑)

+𝑋𝑇 (𝑑)π‘ƒπ‘‡πœ‰ πΈπœ”(𝑑) + πœ”π‘‡ (𝑑)πΈπ‘‡π‘ƒπœ‰π‘‹(𝑑)

+ tr(π΅π‘‡π‘ƒπœ‰π‘‹(𝑑)𝑋𝑇 (𝑑)πœ™π‘‡ (π‘₯(𝑑), 𝑧(𝑑))οΏ½Μ‚οΏ½ )

+ tr(οΏ½Μ‚οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑))𝑋(𝑑)𝑋𝑇 (𝑑)π‘ƒπ‘‡πœ‰ 𝐡)

βˆ’π‘‹π‘‡ (𝑑)[(π΅οΏ½Μ‚οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))π‘‡π‘ƒπœ‰

+ π‘ƒπ‘‡πœ‰ 𝐡�̂�

π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑))]𝑋(𝑑)

≀ 𝑋𝑇 (𝑑)[(𝐴+𝐡𝐾)π‘‡π‘ƒπœ‰ + 𝑃𝑇

πœ‰ (𝐴+𝐡𝐾)]𝑋(𝑑)

+𝑋𝑇 (𝑑)[(𝑓(π‘₯(𝑑), 𝑧(𝑑)) βˆ’π΅π‘Šπ‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))π‘‡π‘ƒπœ‰

+ π‘ƒπ‘‡πœ‰ (𝑓(π‘₯(𝑑), 𝑧(𝑑)) βˆ’π΅π‘Šπ‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))

]𝑋(𝑑)

+ πœ€π‘‹π‘‡ (𝑑)π‘ƒπ‘‡πœ‰ π‘ƒπœ‰π‘‹(𝑑) + πœ€βˆ’1πœ”π‘‡ (𝑑)πΈπ‘‡πΈπœ”(𝑑)

+ tr(π΅π‘‡π‘ƒπœ‰π‘‹(𝑑)𝑋𝑇 (𝑑)πœ™π‘‡ (π‘₯(𝑑), 𝑧(𝑑))οΏ½Μ‚οΏ½ )

+ tr(οΏ½Μ‚οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑))𝑋(𝑑)𝑋𝑇 (𝑑)π‘ƒπ‘‡πœ‰ 𝐡)

βˆ’π‘‹π‘‡ (𝑑)[(π΅οΏ½Μ‚οΏ½π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑)))π‘‡π‘ƒπœ‰

+ π‘ƒπ‘‡πœ‰ 𝐡�̂�

π‘‡πœ™(π‘₯(𝑑), 𝑧(𝑑))]𝑋(𝑑)

Since tr[Π_1 Π_2] = tr[Π_2 Π_1] for any matrices Π_1 and Π_2 with appropriate dimensions, we have

  X^T(t) ϕ^T(x(t), z(t)) Ŵ B^T P_ξ X(t) = tr(X^T(t) ϕ^T(x(t), z(t)) Ŵ B^T P_ξ X(t)) = tr(B^T P_ξ X(t) X^T(t) ϕ^T(x(t), z(t)) Ŵ),

  X^T(t) P_ξ^T B Ŵ^T ϕ(x(t), z(t)) X(t) = tr(X^T(t) P_ξ^T B Ŵ^T ϕ(x(t), z(t)) X(t)) = tr(Ŵ^T ϕ(x(t), z(t)) X(t) X^T(t) P_ξ^T B).

Combining this with Assumption 1, we obtain

  𝒜V(t) ≤ X^T(t){[(A + BK)^T P + P^T(A + BK) + ε P^T P + σ^{-1} I]
          + ξ[(A + BK)^T T + T^T(A + BK) + ε(T^T P + P^T T)]
          + ξ²[ε T^T T]} X(t) + ε^{-1} ω^T(t) E^T E ω(t).            (11)

Consider the following index:

𝐽(𝑑) = π’œπ‘‰ (𝑑) βˆ’ 2πœ”π‘‡ (𝑑)π‘Œ (𝑑) βˆ’ π›½πœ”π‘‡ (𝑑)πœ”(𝑑)

≀ π’œπ‘‰ (𝑑) + π›Όβˆ’1𝑋𝑇 (𝑑)𝑋(𝑑)

+ πœ”π‘‡ (𝑑)(𝛼𝐺1𝐺

𝑇1 βˆ’ 2𝐺2 βˆ’ 𝛽𝐼

)πœ”(𝑑)

≀ 𝑋𝑇 (𝑑){[

(𝐴+𝐡𝐾)𝑇𝑃 + 𝑃𝑇 (𝐴+𝐡𝐾)

+ πœ€π‘ƒπ‘‡π‘ƒ + πœŽβˆ’1𝐼 + π›Όβˆ’1𝐼]

+ πœ‰[(𝐴+𝐡𝐾)𝑇𝑇 + 𝑇𝑇 (𝐴+𝐡𝐾)

+ πœ€(𝑇𝑇𝑃 + 𝑃𝑇𝑇 )]+ πœ‰2

[πœ€π‘‡π‘‡π‘‡

]}𝑋(𝑑)

+ πœ”π‘‡ (𝑑)[πœ€βˆ’1𝐸𝑇𝐸 + 𝛼𝐺1𝐺

𝑇1

βˆ’ 2𝐺2 βˆ’ 𝛽𝐼]πœ”(𝑑)

≀ 𝑋𝑇 (𝑑){𝑃𝑇

[𝑄𝑇 (𝐴+𝐡𝐾)𝑇 + (𝐴+𝐡𝐾)𝑄

+ πœ€πΌ +π‘„π‘‡πœŽβˆ’1𝑄+π‘„π‘‡π›Όβˆ’1𝑄]𝑃

+ πœ‰βˆ£βˆ£(𝐴+𝐡𝐾)𝑇𝑇 + 𝑇𝑇 (𝐴+𝐡𝐾)

+ πœ€(𝑇𝑇𝑃 + 𝑃𝑇𝑇 )∣∣ + πœ‰2βˆ£βˆ£πœ€π‘‡π‘‡π‘‡ ∣∣}𝑋(𝑑)

+ πœ”π‘‡ (𝑑)[πœ€βˆ’1𝐸𝑇𝐸 + 𝛼𝐺1𝐺

𝑇1

βˆ’ 2𝐺2 βˆ’ 𝛽𝐼]πœ”(𝑑)

≀ 𝑋𝑇 (𝑑){𝑃𝑇

[𝑄𝑇𝐴𝑇 +𝐴𝑄+𝑀𝑇𝐡𝑇 +𝐡𝑀

+ πœ€πΌ +π‘„π‘‡πœŽβˆ’1𝑄+π‘„π‘‡π›Όβˆ’1𝑄]𝑃

+ πœ‰βˆ£βˆ£(𝐴+𝐡𝐾)𝑇𝑇 + 𝑇𝑇 (𝐴+𝐡𝐾)

+ πœ€(𝑇𝑇𝑃 + 𝑃𝑇𝑇 )∣∣ + πœ‰2βˆ£βˆ£πœ€π‘‡π‘‡π‘‡ ∣∣}𝑋(𝑑)

+ πœ”π‘‡ (𝑑)[πœ€βˆ’1𝐸𝑇𝐸 + 𝛼𝐺1𝐺

𝑇1

3189

βˆ’ 2𝐺2 βˆ’ 𝛽𝐼]πœ”(𝑑). (12)

If

  Q^T A^T + A Q + M^T B^T + B M + ε I + Q^T σ^{-1} Q + Q^T α^{-1} Q < −R,

then

  J(t) < X^T(t)[−P^T R P + ξ μ_1 I + ξ² μ_2 I] X(t)
         + ω^T(t)[ε^{-1} E^T E + α G_1 G_1^T − 2 G_2 − β I] ω(t)
       ≤ X^T(t)[−μ_0 I + ξ μ_1 I + ξ² μ_2 I] X(t)
         + ω^T(t)[ε^{-1} E^T E + α G_1 G_1^T − 2 G_2 − β I] ω(t).    (13)

From conditions (6), (7) and (9), it is easy to obtain

  𝒜V(t) − 2 ω^T(t) Y(t) − β ω^T(t) ω(t) < 0,                         (14)

which means

  2 𝔼{ ∫_0^t ω^T(s) Y(s) ds } ≥ 𝔼{ V(t) − β ∫_0^t ω^T(s) ω(s) ds } ≥ −β 𝔼{ ∫_0^t ω^T(s) ω(s) ds }.    (15)

Therefore, SPNS (2) is globally passive.

Remark 2. If the solution of (6) admits Q = diag{Q_11, Q_22} and P = Q^{-1}, then P_12 = 0 and therefore T = 0. According to (9), we then get the stability bound ξ* → ∞.

A polytopic uncertainty description can be used to characterize uncertain parameters, where the system matrices are supposed to contain partially unknown parameters that reside in a given polytope. The following assumption is therefore added to Theorem 1: the matrices A, B, E, G_1, G_2 contain partially unknown parameters. Assume that

  Ω ≜ (A, B, E, G_1, G_2) ∈ ℛ,

where ℛ is a given convex-bounded polyhedral domain described by s vertices,

  ℛ = {Ω(λ) | Ω(λ) = Σ_{i=1}^{s} λ_i Ω_i,  Σ_{i=1}^{s} λ_i = 1,  λ_i ≥ 0},    (16)

where Ω_i = (A_i, B_i, E_i, G_{1i}, G_{2i}) denote the vertices of the polytope. Since LMIs (6) and (7) in Theorem 1 are affine in the system matrices, the theorem can be directly used for the passivity and passification problem on the basis of the quadratic stability notion. Therefore, we present the following corollary without proof.

Fig. 1. An electronic circuit with parasitic capacitor and nonlinear resistor.

Corollary 1. Suppose system (2) contains the polytopic uncertainty described in (16). There exists a neural network adaptive controller (8) such that system (2) is globally passive if there exist positive-definite matrices Q, R, a real matrix M, and positive constants σ, α, ε satisfying (6) and (7) for i = 1, 2, ···, s, where the matrices A, B, E, G_1, G_2 are replaced by A_i, B_i, E_i, G_{1i}, G_{2i}, respectively.

IV. AN ILLUSTRATIVE EXAMPLE

In this section, we present an example to illustrate the effectiveness of the proposed approach. Consider the nonlinear circuit [10] illustrated in Fig. 1, which contains a nonlinear resistor. The V_C–I_R characteristic of the resistor is I_R = (1/5)V_C³ − (1/5)V_C. Applying Kirchhoff's voltage and current laws, we can obtain the state equations

  L İ_L(t) = −I_L R − V_C + a_1 u + d_1(t),
  C V̇_C(t) = I_L − (1/5)(V_C³ − V_C) + a_2 u + d_2(t),              (17)

where ξ = C, a_1 = 0.6, a_2 = 0.8, R = 1 Ω and L = 0.1 H. Let x(t) = L I_L and z(t) = V_C, and let d_1(t) and d_2(t) be external noises with d_1(t) = 0.4 cos(0.1πt) and d_2(t) = 0.6 cos(0.2πt). Then the state equation (17) can be rewritten as the following SPNS:

οΏ½Μ‡οΏ½(𝑑) = βˆ’10π‘₯(𝑑) βˆ’ 𝑧(𝑑) + 0.6𝑒(𝑑) + 𝑑1(𝑑),

πœ‰οΏ½Μ‡οΏ½(𝑑) = 10π‘₯(𝑑) βˆ’ 0.2(𝑧2(𝑑) βˆ’ 1)𝑧(𝑑)

+ 0.8𝑒(𝑑) + 𝑑2(𝑑). (18)

Consider the outputs

  y_1(t) = 0.2 x(t) + 0.2 cos(0.1πt),
  y_2(t) = 0.1 z(t) + 0.1 cos(0.2πt).

Then

  A = [−10  −1;  10  0.2],   Δf(x(t), z(t)) = [0  0;  0  −0.2 z²(t)],
  B = [0.6;  0.8],   g(x(t), z(t)) = 1,
  E = [0.4  0;  0  0.6],   ω(t) = [cos(0.1πt);  cos(0.2πt)],
  G_1 = [0.2  0;  0  0.1],   G_2 = [0.2  0;  0  0.1].


Fig. 2. States evolution x(t), z(t) of SPNS (18) with the neural network adaptive controller (19) and the adaptive law (20), with σ = 0.9.

With σ = 0.9, a desired solution can be determined from (6) and (7) by the LMI Toolbox:

  M = [−1.2923  −17.2209],
  P = [0.7973  0;  0  0.6129],
  R = [8.8561  −0.2280;  −0.2280  8.9891],
  K = [−1.0304  −10.5547],
  α = 9.1601,  β = 9.1128,  ε = 8.9226,  ξ* → ∞.

If πœ™(π‘₯(𝑑), 𝑧(𝑑)) =[

11+π‘’βˆ’0.3π‘₯(𝑑)

11+π‘’βˆ’0.7𝑧(𝑑)

], then the

neural network adaptive controller and the adaptive law areobtained as follows:

𝑒(𝑑) = βˆ’1.0304π‘₯(𝑑) βˆ’ 10.5547𝑧(𝑑) βˆ’ π‘₯(𝑑)οΏ½Μ„οΏ½

1 + π‘’βˆ’0.3π‘₯(𝑑)

βˆ’ 𝑧(𝑑)οΏ½Μ„οΏ½

1 + π‘’βˆ’0.7𝑧(𝑑), (19)

Λ™Μ„π‘Š =( π‘₯(𝑑)

1 + π‘’βˆ’0.3π‘₯(𝑑)+

𝑧(𝑑)

1 + π‘’βˆ’0.7𝑧(𝑑)

)Γ—(0.4784π‘₯(𝑑) + 0.4903𝑧(𝑑)

), (20)

where W̄ ∈ ℝ^{1×1}. The proposed neural network adaptive controller (19) and adaptive law (20) are applied to SPNS (18). Simulation results of the states x(t), z(t), the neural network adaptive controller u(t), and the adaptive law W̄ are shown in Figs. 2-4 with x(0) = 6, z(0) = −5 and ξ = 10.
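A minimal simulation sketch of this σ = 0.9 closed loop is given below; it is our reconstruction, not the authors' code, and the initial weight W̄(0) = 2.2 is an assumption (the paper does not state it; Fig. 4 merely shows W̄ starting near 2.2).

```python
import numpy as np
from scipy.integrate import solve_ivp

xi = 10.0                                     # singular perturbation parameter

def closed_loop(t, s):
    """SPNS (18) under controller (19) and adaptive law (20)."""
    x, z, wbar = s
    d1 = 0.4 * np.cos(0.1 * np.pi * t)        # external noises d1(t), d2(t)
    d2 = 0.6 * np.cos(0.2 * np.pi * t)
    phi1 = 1.0 / (1.0 + np.exp(-0.3 * x))     # basis function components
    phi2 = 1.0 / (1.0 + np.exp(-0.7 * z))
    u = -1.0304 * x - 10.5547 * z - wbar * (phi1 * x + phi2 * z)      # (19)
    dx = -10.0 * x - z + 0.6 * u + d1
    dz = (10.0 * x - 0.2 * (z**2 - 1.0) * z + 0.8 * u + d2) / xi
    dwbar = (phi1 * x + phi2 * z) * (0.4784 * x + 0.4903 * z)         # (20)
    return [dx, dz, dwbar]

sol = solve_ivp(closed_loop, (0.0, 2.0), [6.0, -5.0, 2.2], max_step=1e-3)
print(sol.y[:, -1])    # x(2), z(2), Wbar(2); compare with Figs. 2 and 4
```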

Fig. 3. Evolution of the neural network adaptive controller (19) with σ = 0.9.

Fig. 4. Evolution of the adaptive law (20) with σ = 0.9.

With σ = 0.1, a desired solution can be determined from (6) and (7) by the LMI Toolbox:

  M = [−69.1965  −107.7597],
  P = [0.7973  0;  0  0.6129],
  R = [46.5857  55.1923;  55.1923  78.7812],
  K = [−55.1704  −66.0459],
  α = 106.6095,  β = 117.1773,  ε = 9.9363,  ξ* → ∞.

Then the neural network adaptive controller and the adaptive law are obtained as follows:

  u(t) = −55.1704 x(t) − 66.0459 z(t) − x(t)W̄/(1 + e^{−0.3x(t)}) − z(t)W̄/(1 + e^{−0.7z(t)}),       (21)

  dW̄/dt = (x(t)/(1 + e^{−0.3x(t)}) + z(t)/(1 + e^{−0.7z(t)})) × (0.4784 x(t) + 0.4903 z(t)).        (22)

Simulation results of the states x(t), z(t), the neural network adaptive controller u(t), and the adaptive law W̄ are shown in Figs. 5-7 with the same initial values as above.

Remark 3. From the illustrative example, we can see that the larger the approximation error is, the larger the absolute value of the control gain required to make the system passive.

Fig. 5. States evolution x(t), z(t) of SPNS (18) with the neural network adaptive controller (21) and the adaptive law (22), with σ = 0.1.

Fig. 6. Evolution of the adaptive law (22) with σ = 0.1.

Fig. 7. Evolution of the neural network adaptive controller (21) with σ = 0.1.

V. CONCLUSIONS

In this paper, the passivity and passification problems have been studied for a class of SPNS via neural networks. A new functional has been used to design the neural network adaptive controller such that the closed-loop system is globally passive, and the controller parameters can be obtained by solving certain LMIs. An illustrative example has been used to show the effectiveness of the proposed method.

REFERENCES

[1] P. V. Kokotovic, H. K. Khalil, and J. O'Reilly, Singular Perturbation Methods in Control: Analysis and Design. Orlando, FL: Academic, 1986.

[2] D. S. Naidu, Singular Perturbation Methodology in Control Systems. London: Peter Peregrinus, 1988.

[3] H. K. Khalil, Nonlinear Systems, 3rd ed. Upper Saddle River, NJ: Prentice Hall, 2002.

[4] B. S. Chen and C. L. Lin, "On the stability of singularly perturbed systems," IEEE Trans. Automat. Control, vol. 35, pp. 1265-1270, 1990.


[5] S. J. Chen and J. L. Lin, "Maximal stability bounds of singularly perturbed systems," J. Franklin Inst., pp. 1209-1218, 1999.

[6] H. Kando and T. Iwazumi, "Stabilizing feedback controllers for singularly perturbed systems," IEEE Trans. Syst., Man, Cybern., vol. SMC-14, no. 6, pp. 903-911, 1984.

[7] J. S. Chiou, F. C. Kung, and T. H. Li, "Robust stabilization of a class of singularly perturbed discrete bilinear systems," IEEE Trans. Autom. Control, vol. 45, no. 6, pp. 1187-1191, 2000.

[8] F. Sun, Y. Hu, and H. Liu, "Stability analysis and robust controller design for uncertain discrete-time singularly perturbed systems," Dyn. Contin. Discrete Impuls. Syst. Ser. B Appl. Algorithms, vol. 12, no. 5-6, pp. 849-865, 2005.

[9] W. Assawinchaichote and S. K. Nguang, "H∞ fuzzy control design for nonlinear singularly perturbed systems with pole placement constraints: an LMI approach," IEEE Trans. Syst., Man, Cybern., vol. 34, pp. 579-588, 2004.

[10] T. S. Li and K. J. Lin, "Stabilization of singularly perturbed fuzzy systems," IEEE Trans. Fuzzy Syst., vol. 12, pp. 579-595, 2004.

[11] K. J. Lin, "Neural network based observer and adaptive control design for a class of singularly perturbed nonlinear systems," in Proc. ASCC 2011, pp. 1176-1180, 2011.

[12] K. J. Lin, "Composite observer-based feedback design for singularly perturbed systems via LMI approach," in Proc. SICE 2010, pp. 3065-3061, 2010.

[13] K. J. Lin and T. S. Li, "Stabilization of uncertain singularly perturbed systems with pole-placement constraints," IEEE Trans. Circuits Syst. II, vol. 53, pp. 916-920, 2006.

[14] T. S. Li and K. J. Lin, "Stabilization of singularly perturbed fuzzy systems," IEEE Trans. Fuzzy Syst., vol. 12, pp. 579-595, 2004.

[15] M. S. Mahmoud and A. Ismail, "Passivity and passification of time-delay systems," J. Math. Anal. Appl., vol. 292, pp. 247-258, 2004.

[16] C. Li, H. Zhang, and X. Liao, "Passivity and passification of fuzzy systems with time delays," Comput. Math. Appl., vol. 52, pp. 1067-1078, 2006.

[17] H. Gao, T. Chen, and T. Chai, "Passivity and passification for networked control systems," SIAM J. Control Optim., vol. 46, pp. 1299-1322, 2007.

[18] A. Bemporad, G. Bianchini, and F. Brogi, "Passivity analysis and passification of discrete-time hybrid systems," IEEE Trans. Autom. Control, vol. 53, pp. 1004-1009, 2008.

[19] Z. G. Zeng, T. W. Huang, and W. X. Zheng, "Multistability of recurrent neural networks with time-varying delays and the piecewise linear activation function," IEEE Trans. Neural Netw., vol. 21, no. 8, pp. 1371-1377, 2010.

[20] Z. G. Zeng and J. Wang, "Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates," Neural Netw., vol. 22, pp. 651-657, 2009.

[21] Z. G. Zeng and J. Wang, "Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks," IEEE Trans. Syst., Man, Cybern. B, vol. 38, no. 6, pp. 1525-1536, 2008.

[22] Z. G. Zeng and J. Wang, "Global exponential stability of recurrent neural networks with time-varying delays in the presence of strong external stimuli," Neural Netw., vol. 19, no. 10, pp. 1528-1537, 2006.

[23] Z. G. Zeng and J. Wang, "Multiperiodicity of discrete-time delayed neural networks evoked by periodic external inputs," IEEE Trans. Neural Netw., vol. 17, no. 5, pp. 1141-1151, 2006.

[24] Z. G. Zeng and J. Wang, "Improved conditions for global exponential stability of recurrent neural networks with time-varying delays," IEEE Trans. Neural Netw., vol. 17, no. 3, pp. 623-635, 2006.

[25] Z. G. Zeng and J. Wang, "Complete stability of cellular neural networks with time-varying delays," IEEE Trans. Circuits Syst. I, vol. 53, no. 4, pp. 944-955, 2006.

[26] Z. G. Zeng, J. Wang, and X. X. Liao, "Global asymptotic stability and global exponential stability of neural networks with unbounded time-varying delays," IEEE Trans. Circuits Syst. II, vol. 52, no. 3, pp. 168-173, 2005.

[27] Y. C. Chang and B. S. Chen, "A nonlinear adaptive H∞ tracking control design in robotic systems via neural networks," IEEE Trans. Control Syst. Technol., vol. 5, pp. 13-29, 1997.

[28] F. Abdollahi, H. A. Talebi, and R. V. Patel, "A stable neural network-based observer with application to flexible-joint manipulators," IEEE Trans. Neural Netw., vol. 17, pp. 118-129, 2006.

[29] T. Hayakawa, W. M. Haddad, and N. Hovakimyan, "Neural network adaptive control for a class of nonlinear systems," IEEE Trans. Neural Netw., vol. 19, pp. 80-89, 2008.