Passivity analysis of memristor-based recurrent neural networks with time-varying delays



Available online at www.sciencedirect.com

Journal of the Franklin Institute 350 (2013) 2354–2370

0016-0032/$32.00 © 2013 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.jfranklin.2013.05.026

☆ This work was supported by the Natural Science Foundation of China under Grant 61125303, the National Basic Research Program of China (973 Program) under Grant 2011CB710606, the Research Fund for the Doctoral Program of Higher Education of China under Grant 20100142110021, the Excellent Youth Foundation of Hubei Province of China under Grant 2010CDA081, and National Priority Research Project NPRP 4-451-2-168, funded by the Qatar National Research Fund.

* Corresponding author. Tel.: +86 18971124190; fax: +86 27 87543130.
E-mail addresses: [email protected] (Z. Zeng), [email protected] (Y. Chen).


Short communication

Passivity analysis of memristor-based recurrent neural networks with time-varying delays☆

Shiping Wen a,b, Zhigang Zeng a,b,*, Tingwen Huang c, Yiran Chen d

a School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
b Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China

c Texas A&M University at Qatar, Doha 5825, Qatar
d Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA

Received 28 March 2013; received in revised form 22 May 2013; accepted 28 May 2013. Available online 5 June 2013.

Abstract

This paper investigates the delay-dependent exponential passivity problem of memristor-based recurrent neural networks (RNNs). Based on the knowledge of the memristor and of recurrent neural networks, the model of the memristor-based RNNs is established. Taking into account the information of the neuron activation functions and the involved time-varying delays, several improved results with less computational burden and conservatism are obtained in the sense of Filippov solutions. A numerical example is presented to show the effectiveness of the obtained results.
© 2013 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

1. Introduction

The sequential processing of fetch, decode, and execution of instructions through the classical von Neumann bottleneck of conventional digital computers has resulted in less efficient machines as their eco-systems have grown increasingly complex [1]. Though current digital computers can now possess the computing speed and complexity to emulate the brain



functionality of animals like a spider, mouse, or cat [2,3], the associated energy dissipation in the system grows exponentially along the hierarchy of animal intelligence. For example, to perform certain cortical simulations at the cat scale, even at an 83-times-slower firing rate, the IBM team in [2] had to employ Blue Gene/P (BG/P), a supercomputer equipped with 147,456 CPUs and 144 TB of main memory. On the other hand, the human brain contains more than 100 billion neurons, and each neuron has more than 20,000 synapses. Efficient circuit implementation of synapses is therefore very important for building a brain-like machine. However, since shrinking the current transistor size is very difficult, introducing a more efficient approach is essential for further development of neural network implementations.

In 2008, the Williams group announced the successful fabrication of a very compact and nonvolatile nanoscale memory called the memristor [4], which was postulated by Chua [5] as the fourth basic circuit element in electrical circuits. It is based on the nonlinear characteristics of charge and flux. By supplying a voltage or current to the memristor, its resistance can be altered; in this way, the memristor remembers information. Several examples of successful multichip networks of spiking neurons have recently been proposed [6–8]; however, there are still a number of practical problems that hinder the development of truly large-scale, distributed, massively parallel networks of very large scale integration (VLSI) neurons, such as how to set the weight of individual synapses in the network. It is well known that changes in the synaptic connections between neurons are widely believed to contribute to memory storage and the activity-dependent development of neural networks. These changes are thought to occur through correlation-based, or Hebbian, plasticity.

In addition, we notice that RNNs have been widely studied in recent years for their immense application prospects [9–19]. Many applications have been developed in different areas such as combinatorial optimization, knowledge acquisition and pattern recognition. Recently, the problem of synchronization of coupled neural networks, one of the hot research fields of complex networks, has been a challenging issue due to its potential applications in information science, biological systems and so on [20–24].

On the other hand, the passivity theory, which originated from circuit theory, plays an essentially important role in the analysis and design of linear and nonlinear systems, especially for high-order systems. In fact, the essence of the passivity theory is that the passive properties of a system can keep the system internally stable. In the last decade, great attention has been paid to the passivity analysis of RNNs with time delays, in either a delay-independent [25] or a delay-dependent manner [26–46].

However, to the best of the authors' knowledge, the research on delay-dependent exponential passivity of memristor-based recurrent neural networks is still an open problem that deserves further investigation. To close this gap, we investigate the problem of exponential passivity for a class of memristor-based recurrent neural networks with time-varying delays. The main contributions of this paper can be summarized as follows: (i) the passivity analysis is extended to memristor-based neural networks; (ii) the information of the neuron activation functions and the involved time-varying delays is adequately considered.

2. Memristor-based RNNs

The memristor-based recurrent network can be implemented by VLSI circuits as shown in Fig. 1, with the connection weights implemented by memristors. Here f_j is the activation function and τ_j(t) is the time-varying delay; for the i-th subsystem, x_i(t) is the voltage of the capacitor C_i, and f_j(x_j(t)), f_j(x_j(t−τ_j(t))) are the functions of x_j(t) without and with time-varying delays,

Fig. 1. Circuit of the memristor-based recurrent network, where x_i(·) is the state of the i-th subsystem, f_j(·) is the amplifier, M_{fij} is the connection memristor between the amplifier f_j(·) and the state x_i(·), R_i and C_i are the resistor and capacitor, I_i(t) is the external input, a_i, b_i are the outputs, i, j = 1, 2, …, n.


respectively; M_{fij} is the memristor between the feedback function f_j(x_j(t)) and x_i(t), M_{gij} is the memristor between the feedback function f_j(x_j(t−τ_j(t))) and x_i(t−τ_i(t)), R_i is the resistor parallel to the capacitor C_i, and I_i is an external input or bias, where i, j = 1, 2, …, n. The memductances of the memristors, which are bounded, can be depicted as in Fig. 2 [47].

Thus, by Kirchhoff's current law, the equation of the i-th subsystem is written as follows:

$$C_i \dot{x}_i(t) = -\left[\sum_{j=1}^{n}\bigl(W_{fij}(x_j(t)) + W_{gij}(x_j(t))\bigr) + \frac{1}{R_i}\right] x_i(t) + \sum_{j=1}^{n}\operatorname{sign}_{ij} f_j(x_j(t))\, W_{fij}(x_j(t)) + \sum_{j=1}^{n}\operatorname{sign}_{ij} f_j(x_j(t-\tau_j(t)))\, W_{gij}(x_j(t)) + I_i(t), \qquad (1)$$

Fig. 2. Typical I–V characteristic of a memristor [47] (current on the order of 10⁻⁴ A against voltage in [−1, 1] V). The pinched hysteresis loop is due to the nonlinear relationship between the memristor's current and voltage: a lag occurs between the application and the removal of a field and its subsequent effect, just as in the neurons of the human brain.

where x_i(t) is the voltage across the capacitor C_i in Fig. 1, f_j(x_j(t)) and f_j(x_j(t−τ_j(t))) are the corresponding outputs of x_j(t) and x_j(t−τ_j(t)) through the amplifier units, and

$$\operatorname{sign}_{ij} = \begin{cases} 1, & i \ne j, \\ -1, & i = j, \end{cases}$$

and W_{fij}(x_j(t)), W_{gij}(x_j(t)) are the memductances of the memristors M_{fij}, M_{gij}, with

$$W_{fij}(x_j(t)) = \begin{cases} W'_{fij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t)) - \dot x_i(t) \le 0, \\ W''_{fij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t)) - \dot x_i(t) > 0, \end{cases}$$

$$W_{gij}(x_j(t)) = \begin{cases} W'_{gij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t-\tau_j(t))) - \dot x_i(t-\tau_i(t)) \le 0, \\ W''_{gij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t-\tau_j(t))) - \dot x_i(t-\tau_i(t)) > 0. \end{cases}$$

Therefore

$$\dot x_i(t) = -d_i(x(t))\, x_i(t) + \sum_{j=1}^{n} a_{ij}(x_j(t))\, f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_j(t))\, f_j(x_j(t-\tau_j(t))) + u_i(t), \qquad (2)$$

where

$$a_{ij}(x_j(t)) = \frac{\operatorname{sign}_{ij}}{C_i} W_{fij}(x_j(t)) = \begin{cases} a'_{ij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t)) - \dot x_i(t) \le 0, \\ a''_{ij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t)) - \dot x_i(t) > 0, \end{cases}$$

$$b_{ij}(x_j(t)) = \frac{\operatorname{sign}_{ij}}{C_i} W_{gij}(x_j(t)) = \begin{cases} b'_{ij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t-\tau_j(t))) - \dot x_i(t-\tau_i(t)) \le 0, \\ b''_{ij}, & \operatorname{sign}_{ij}\,\dot f_j(x_j(t-\tau_j(t))) - \dot x_i(t-\tau_i(t)) > 0, \end{cases}$$

$$d_i(x(t)) = \frac{1}{C_i}\left[\sum_{j=1}^{n}\bigl(W_{fij}(x_j(t)) + W_{gij}(x_j(t))\bigr) + \frac{1}{R_i}\right] = \begin{cases} d'_i, & -\dot f_i(x_i(t)) - \dot x_i(t) \le 0, \\ d''_i, & -\dot f_i(x_i(t)) - \dot x_i(t) > 0, \end{cases}$$

$$u_i(t) = \frac{I_i(t)}{C_i}.$$

Then we can get

$$\dot x(t) = -D(x(t))\, x(t) + A(x(t))\, f(x(t)) + B(x(t))\, f(x(t-\tau(t))) + u(t), \qquad (3)$$

where

$$D(x(t)) = \operatorname{diag}\{d_1(x(t)), d_2(x(t)), \ldots, d_n(x(t))\}, \qquad A(x(t)) = [a_{ij}(x_j(t))]_{n \times n},$$

$$B(x(t)) = [b_{ij}(x_j(t))]_{n \times n}, \qquad u(t) = (u_1(t), u_2(t), \ldots, u_n(t))^T.$$

Remark 1. According to the analysis above, D(x(t)), A(x(t)), B(x(t)) in this system change according to the state of the system, so system (3) represents a class of memristor-based RNNs with time-varying delays.

From Eq. (2),

$$\dot x_i(t) \in -\operatorname{co}\{d_i^-, d_i^+\}\, x_i(t) + \sum_{j=1}^{n} \operatorname{co}\{a_{ij}^-, a_{ij}^+\}\, f_j(x_j(t)) + \sum_{j=1}^{n} \operatorname{co}\{b_{ij}^-, b_{ij}^+\}\, f_j(x_j(t-\tau_j(t))) + u_i(t), \qquad (4)$$

where

$$d_i^- = \min\{d'_i, d''_i\}, \quad d_i^+ = \max\{d'_i, d''_i\}, \qquad a_{ij}^- = \min\{a'_{ij}, a''_{ij}\}, \quad a_{ij}^+ = \max\{a'_{ij}, a''_{ij}\}, \qquad b_{ij}^- = \min\{b'_{ij}, b''_{ij}\}, \quad b_{ij}^+ = \max\{b'_{ij}, b''_{ij}\},$$

or, equivalently, for i, j ∈ {1, 2, …, n} there exist d_i ∈ co{d_i^−, d_i^+}, a_{ij} ∈ co{a_{ij}^−, a_{ij}^+}, b_{ij} ∈ co{b_{ij}^−, b_{ij}^+}, dependent on the initial condition of network (2) and on the time t, such that

$$\dot x_i(t) = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j(t))) + u_i(t). \qquad (5)$$

Define the interval matrices [D^−, D^+] = diag[d_i^−, d_i^+] (i = 1, …, n), [A^−, A^+] = [a_{ij}^−, a_{ij}^+]_{n×n}, [B^−, B^+] = [b_{ij}^−, b_{ij}^+]_{n×n}. Then system (3) can be rewritten as follows:

$$\dot x(t) \in -[D^-, D^+]\, x(t) + [A^-, A^+]\, f(x(t)) + [B^-, B^+]\, f(x(t-\tau(t))) + u(t). \qquad (6)$$

There exist ϝ ∈ [D^−, D^+], ℶ ∈ [A^−, A^+] and ℸ ∈ [B^−, B^+] such that

$$\dot x(t) = -ϝ\, x(t) + ℶ\, f(x(t)) + ℸ\, f(x(t-\tau(t))) + u(t). \qquad (7)$$

Consider the output of the memristor-based RNNs (7) as

$$y(t) = f(x(t)) + f(x(t-\tau(t))) + u(t), \qquad (8)$$

which includes all the information that influences the performance of the memristor-based RNNs, as shown in Fig. 3.


Fig. 3. The output y(t) of the memristor-based RNNs (7) includes all the elements that influence the performance of these networks, such as the input signal u(t) and the neuron activation functions with and without time delays.
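The state-dependent switching of the connection weights in Eq. (2) can be illustrated with a short sketch. The helper `switched_weight` and the sample values below are hypothetical, not from the paper; in a simulation the derivative terms would typically be estimated by finite differences along the trajectory.

```python
def switched_weight(w_prime, w_double, sign_ij, df_j, dx_i):
    """Select the memristive weight per the switching rule in Eq. (2):
    w' when sign_ij * d/dt f_j(x_j) - d/dt x_i <= 0, else w''."""
    return w_prime if sign_ij * df_j - dx_i <= 0 else w_double

# Illustrative one-step check with hypothetical values:
a_prime, a_double = 0.99, 1.01   # candidate memductance-derived weights
sign_ij = 1                      # i != j
df_j, dx_i = -0.2, 0.1           # finite-difference estimates of the derivatives
w = switched_weight(a_prime, a_double, sign_ij, df_j, dx_i)
print(w)  # sign_ij*df_j - dx_i = -0.3 <= 0, so the rule selects w' = 0.99
```

The same selector applies, entrywise, to the b_{ij} and d_i switching rules defined above.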


Remark 2. As the size of integrated circuits grows larger and larger, passivity analysis will become more and more necessary in the process of researching and designing such circuits.

In the following, several assumptions will be needed throughout the paper.

A1. For i ∈ {1, 2, …, n}, the activation function f_i is Lipschitz continuous with f_i(0) = 0, and for all r_1, r_2 ∈ R with r_1 ≠ r_2 there exists a real number ι_i such that

$$0 \le \frac{f_i(r_1) - f_i(r_2)}{r_1 - r_2} \le \iota_i.$$

Under this assumption, the following inequalities hold for any diagonal matrices H_1 > 0, H_2 > 0, H_3 > 0:

$$x^T(t) L H_1 L x(t) - f^T(x(t)) H_1 f(x(t)) \ge 0, \qquad (9)$$

$$2 f^T(x(t)) H_2 \bigl(L x(t) - f(x(t))\bigr) \ge 0, \qquad (10)$$

$$2 f^T(x(t-\tau(t))) H_3 \bigl(L x(t-\tau(t)) - f(x(t-\tau(t)))\bigr) \ge 0, \qquad (11)$$

where L = diag{ι_1, ι_2, …, ι_n}.
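For the activation function f_i = tanh used later in Example 1, assumption A1 holds with ι_i = 1, and inequalities (9) and (10) can be spot-checked numerically. A minimal sketch; the dimension, the matrix H1 and the sampling range are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# For f_i = tanh, the sector bound of A1 holds with iota_i = 1, i.e. L = I.
L = np.eye(n)
H1 = np.diag(rng.uniform(0.1, 2.0, n))  # an arbitrary positive diagonal matrix

for _ in range(1000):
    x = rng.uniform(-5, 5, n)
    f = np.tanh(x)
    # Inequality (9): x^T L H1 L x - f^T H1 f >= 0, since |tanh(x_i)| <= |x_i|
    assert x @ L @ H1 @ L @ x - f @ H1 @ f >= -1e-12
    # Inequality (10) with H2 = I: 2 f^T (L x - f) >= 0, since tanh(x_i)
    # and x_i - tanh(x_i) always share the same sign
    assert 2 * f @ (L @ x - f) >= -1e-12
print("sector inequalities (9)-(10) hold on all samples")
```

Inequality (11) is the same check applied to the delayed state and therefore needs no separate test.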

A2. τ(t) satisfies

$$0 \le \tau(t) \le \tau, \qquad \dot\tau(t) \le \mu. \qquad (12)$$

It is obvious that the set-valued map

$$x(t) \mapsto -D_I x(t) + A_I f(x(t)) + B_I f(x(t-\tau(t))) + u(t), \qquad t \ge 0,$$

has nonempty compact convex values and is upper semi-continuous [48]. Then the local existence of a solution x(t) = (x_1(t), x_2(t), …, x_n(t))^T of system (6) with initial condition x(t) = φ(t) is obvious [49]. Under assumption (A1), if f(·) is bounded, this local solution x(t) can be extended to the interval [0, +∞) in the sense of Filippov by Definition 1.

Definition 1 (Forti and Nistri [50]). D(x(t)), A(x(t)), B(x(t)) ∈ G_D if, for i, j = 1, 2, …, n, f_j is Lipschitz continuous and d_i(x_j(t)), a_{ij}(x_j(t)), b_{ij}(x_j(t)) are piecewise continuous, bounded, and nondecreasing. Then for φ(s) ∈ R^n, there is at least a local solution x*(t) of Eq. (2) with initial condition φ(s) = x(s), s ∈ [t_0 − τ, t_0]. Furthermore, any solution is bounded and hence defined on [−τ, +∞).

In order to derive sufficient conditions for the delay-dependent exponential passivity of memristor-based RNNs (6), we will need the following definitions and lemma:

Definition 2 (Bacciotti and Rosier [51]). Define the set-valued map φ: R^n ⊸ R^n as

$$\phi(u) = \bigcap_{\Omega > 0}\ \bigcap_{\mu(N) = 0} \operatorname{co}\{f(B(u, \Omega) \setminus N)\}, \qquad (13)$$

where N is an arbitrary set with measure zero. A solution of Eq. (3) on an interval [t_0, t_1], t_0 ≤ t_1 ≤ +∞, with initial condition φ(s) = x(s), s ∈ [t_0 − τ, t_0], is an absolutely continuous function x(t), defined on [t_0, t_1], such that for almost all t ∈ [t_0, t_1] the differential inclusion ẋ(t) ∈ φ(x(t)) is satisfied. For a set Ω ⊂ R^n, we denote the closure of the convex hull of Ω by K[Ω].

Definition 3 (Zhu et al. [38]). The memristor-based RNNs (6) are exponentially passive if there exist a Lyapunov functional V(x(t)): R^n → R and a positive constant ε such that the following inequality holds:

$$\dot V(x(t)) + \epsilon V(x(t)) \le 2 y^T(t) u(t), \qquad t \ge 0. \qquad (14)$$

Lemma 1 (Liu et al. [52]). For any positive definite matrix G > 0, scalars τ_1, τ_2 > 0, and a function φ: [τ_1, τ_2] → R^n such that the integrations concerned are well defined, the following inequality holds:

$$\left(\int_{\tau_1}^{\tau_2} \phi(t)\, dt\right)^T G \left(\int_{\tau_1}^{\tau_2} \phi(t)\, dt\right) \le (\tau_2 - \tau_1) \int_{\tau_1}^{\tau_2} \phi^T(t) G \phi(t)\, dt. \qquad (15)$$
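Lemma 1 is a Jensen-type integral inequality. Its Riemann-sum analogue, (Σ_k φ_k)^T G (Σ_k φ_k) ≤ m Σ_k φ_k^T G φ_k for m sample vectors, can be checked numerically; the test matrix and samples below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 50  # state dimension and number of sample points on [tau1, tau2]

A = rng.standard_normal((n, n))
G = A @ A.T + n * np.eye(n)           # a positive definite matrix G > 0
phi = rng.standard_normal((m, n))     # samples of phi(t) on a uniform grid
h = 0.01                              # grid spacing, so tau2 - tau1 = m * h

integral = h * phi.sum(axis=0)        # approximates the integral of phi(t)
lhs = integral @ G @ integral         # left-hand side of (15)
# right-hand side of (15): (tau2 - tau1) times the integral of phi^T G phi
rhs = (m * h) * h * np.einsum('ti,ij,tj->', phi, G, phi)
assert lhs <= rhs + 1e-12
print("Jensen-type bound (15) holds on the discretized integral")
```

On the grid the bound reduces to the Cauchy–Schwarz inequality in the inner product induced by G, which is why it holds exactly for every sample.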

Notation: The notation used here is fairly standard. The superscript "T" stands for matrix transposition; R^n denotes the n-dimensional Euclidean space and R^{m×n} the set of all real m×n matrices; I and 0 represent the identity matrix and the zero matrix, respectively. The notation P > 0 (P ≥ 0) means that P is real symmetric and positive definite (positive semi-definite). In symmetric block matrices or complex matrix expressions, an asterisk (*) represents a term that is induced by symmetry, and diag{⋯} stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations. Denote u = (u_1, …, u_n)^T and |u| as the absolute-value vector, i.e., |u| = (|u_1|, |u_2|, …, |u_n|)^T; ‖x‖_p, 1 ≤ p < ∞, denotes the p-norm of the vector x, and ‖x‖_∞ = max_{i∈{1,2,…,n}} |x_i| is the vector infinity norm. Denote ‖D‖_p as the p-norm of the matrix D, and C as the set of continuous functions.

3. Main results

In this section, the main results will be established via the LMI method.


Theorem 1. Memristor-based RNN (7) is exponentially passive if there exist matrices P > 0, Q > 0, R = [R_1, R_2; *, R_3], M > 0, N, and diagonal matrices H_1, H_2, H_3 > 0, Γ = diag{γ_1, γ_2, …, γ_n} > 0 and Δ = diag{δ_1, δ_2, …, δ_n} > 0 such that

$$\Pi = \begin{bmatrix} \Pi_{11} & \Pi_{12} & N & \Pi_{14} & \Pi_{15} & \Pi_{16} & -\tau ϝ M \\ * & \Pi_{22} & \Pi_{23} & 0 & \Pi_{25} & 0 & 0 \\ * & * & \Pi_{33} & 0 & 0 & 0 & 0 \\ * & * & * & \Pi_{44} & \Pi_{45} & \Pi_{46} & \tau ℶ^T M \\ * & * & * & * & \Pi_{55} & -I & \tau ℸ^T M \\ * & * & * & * & * & -2I & \tau M \\ * & * & * & * & * & * & -M \end{bmatrix} < 0, \qquad (16)$$

$$\begin{bmatrix} M & N \\ * & M \end{bmatrix} > 0, \qquad (17)$$

where

$$\Pi_{11} = -(P + L\Delta)ϝ - ϝ(P + L\Delta) + Q + R_1 + L H_1 L - M, \qquad \Pi_{12} = M - N, \qquad \Pi_{14} = (P + L\Delta)ℶ - ϝ(\Gamma - \Delta) + R_2 + L H_2, \qquad \Pi_{15} = (P + L\Delta)ℸ, \qquad \Pi_{16} = P + L\Delta,$$

$$\Pi_{22} = -(1-\mu)(R_1 + L H_1 L) - 2M + N + N^T, \qquad \Pi_{23} = M - N, \qquad \Pi_{25} = -(1-\mu)R_2 + L H_3, \qquad \Pi_{33} = -M - (1-\mu)Q, \qquad \Pi_{44} = (\Gamma - \Delta)ℶ + ℶ^T(\Gamma - \Delta) + R_3 - H_1 - 2H_2, \qquad \Pi_{45} = (\Gamma - \Delta)ℸ, \qquad \Pi_{46} = \Gamma - \Delta - I, \qquad \Pi_{55} = -(1-\mu)(R_3 - H_1) - 2H_3.$$

Proof. By the Schur complement, we can derive from Eq. (16) that

$$\begin{bmatrix} \Pi_{11} & \Pi_{12} & N & \Pi_{14} & \Pi_{15} & \Pi_{16} \\ * & \Pi_{22} & \Pi_{23} & 0 & \Pi_{25} & 0 \\ * & * & \Pi_{33} & 0 & 0 & 0 \\ * & * & * & \Pi_{44} & \Pi_{45} & \Pi_{46} \\ * & * & * & * & \Pi_{55} & -I \\ * & * & * & * & * & -2I \end{bmatrix} + \tau^2 \begin{bmatrix} -ϝ \\ 0 \\ 0 \\ ℶ^T \\ ℸ^T \\ I \end{bmatrix} M \begin{bmatrix} -ϝ \\ 0 \\ 0 \\ ℶ^T \\ ℸ^T \\ I \end{bmatrix}^T < 0. \qquad (18)$$

S. Wen et al. / Journal of the Franklin Institute 350 (2013) 2354–23702362

Therefore, there exist small enough scalars α_1 > 0 and α_2 > 0 such that

$$\Psi = \begin{bmatrix} \Pi_{11} + \tau^2 \alpha_1 I & \Pi_{12} & N & \Pi_{14} & \Pi_{15} & \Pi_{16} \\ * & \Pi_{22} & \Pi_{23} & 0 & \Pi_{25} & 0 \\ * & * & \Pi_{33} & 0 & 0 & 0 \\ * & * & * & \Pi_{44} & \Pi_{45} & \Pi_{46} \\ * & * & * & * & \Pi_{55} & -I \\ * & * & * & * & * & -2I \end{bmatrix} + \tau^2 \begin{bmatrix} -ϝ \\ 0 \\ 0 \\ ℶ^T \\ ℸ^T \\ I \end{bmatrix} (M + \alpha_2 I) \begin{bmatrix} -ϝ \\ 0 \\ 0 \\ ℶ^T \\ ℸ^T \\ I \end{bmatrix}^T < 0. \qquad (19)$$

Denote η(t) = [x^T(t), f^T(x(t))]^T and consider the following Lyapunov functional for the memristor-based recurrent neural networks (7):

$$V(t) = \sum_{i=1}^{4} V_i(t), \qquad (20)$$

where

$$V_1(t) = x^T(t) P x(t) + \sum_{i=1}^{n} \gamma_i \int_0^{x_i(t)} f_i(s)\, ds + 2 \sum_{i=1}^{n} \delta_i \int_0^{x_i(t)} (\iota_i s - f_i(s))\, ds,$$

$$V_2(t) = \int_{t-\tau}^{t} x^T(s) Q x(s)\, ds + \int_{t-\tau(t)}^{t} \eta^T(s) R \eta(s)\, ds,$$

$$V_3(t) = \int_{t-\tau(t)}^{t} \bigl(x^T(s) L H_1 L x(s) - f^T(x(s)) H_1 f(x(s))\bigr)\, ds,$$

$$V_4(t) = \tau \int_{-\tau}^{0} \int_{t+\beta}^{t} \bigl(\alpha_1 x^T(s) x(s) + \dot x^T(s) (M + \alpha_2 I) \dot x(s)\bigr)\, ds\, d\beta.$$

Taking the derivative of Eq. (20) along Eq. (7) yields

$$\dot V_1(t) = 2 x^T(t) P \dot x(t) + 2 f^T(x(t)) \Gamma \dot x(t) + 2 (L x(t) - f(x(t)))^T \Delta \dot x(t) = 2 x^T(t) (P + L\Delta) \dot x(t) + 2 f^T(x(t)) (\Gamma - \Delta) \dot x(t), \qquad (21)$$

$$\dot V_2(t) \le x^T(t) Q x(t) - (1-\mu) x^T(t-\tau) Q x(t-\tau) + \eta^T(t) R \eta(t) - (1-\mu) \eta^T(t-\tau(t)) R \eta(t-\tau(t)), \qquad (22)$$

$$\dot V_3(t) \le x^T(t) L H_1 L x(t) - f^T(x(t)) H_1 f(x(t)) - (1-\mu) x^T(t-\tau(t)) L H_1 L x(t-\tau(t)) + (1-\mu) f^T(x(t-\tau(t))) H_1 f(x(t-\tau(t))), \qquad (23)$$

$$\dot V_4(t) = \tau^2 \alpha_1 x^T(t) x(t) + \tau^2 \dot x^T(t) (M + \alpha_2 I) \dot x(t) - \alpha_1 \tau \int_{t-\tau}^{t} \|x(s)\|^2\, ds - \alpha_2 \tau \int_{t-\tau}^{t} \|\dot x(s)\|^2\, ds - \tau \int_{t-\tau}^{t} \dot x^T(s) M \dot x(s)\, ds. \qquad (24)$$


On the other hand, let

$$\zeta_1(t) = \int_{t-\tau(t)}^{t} \dot x(s)\, ds, \qquad \zeta_2(t) = \int_{t-\tau}^{t-\tau(t)} \dot x(s)\, ds.$$

As 0 ≤ τ(t) ≤ τ, according to Lemma 1 we can get

$$\tau \int_{t-\tau}^{t} \dot x^T(s) M \dot x(s)\, ds = \tau \int_{t-\tau(t)}^{t} \dot x^T(s) M \dot x(s)\, ds + \tau \int_{t-\tau}^{t-\tau(t)} \dot x^T(s) M \dot x(s)\, ds \ge \frac{\tau}{\tau(t)}\, \zeta_1^T(t) M \zeta_1(t) + \frac{\tau}{\tau-\tau(t)}\, \zeta_2^T(t) M \zeta_2(t) = \zeta_1^T(t) M \zeta_1(t) + \frac{\tau-\tau(t)}{\tau(t)}\, \zeta_1^T(t) M \zeta_1(t) + \zeta_2^T(t) M \zeta_2(t) + \frac{\tau(t)}{\tau-\tau(t)}\, \zeta_2^T(t) M \zeta_2(t). \qquad (25)$$

Therefore

$$\begin{bmatrix} \sqrt{\frac{\tau-\tau(t)}{\tau(t)}}\, \zeta_1(t) \\ -\sqrt{\frac{\tau(t)}{\tau-\tau(t)}}\, \zeta_2(t) \end{bmatrix}^T \begin{bmatrix} M & N \\ * & M \end{bmatrix} \begin{bmatrix} \sqrt{\frac{\tau-\tau(t)}{\tau(t)}}\, \zeta_1(t) \\ -\sqrt{\frac{\tau(t)}{\tau-\tau(t)}}\, \zeta_2(t) \end{bmatrix} \ge 0, \qquad (26)$$

which implies

$$\frac{\tau-\tau(t)}{\tau(t)}\, \zeta_1^T(t) M \zeta_1(t) + \frac{\tau(t)}{\tau-\tau(t)}\, \zeta_2^T(t) M \zeta_2(t) \ge \zeta_1^T(t) N \zeta_2(t) + \zeta_2^T(t) N^T \zeta_1(t). \qquad (27)$$

From Eqs. (25) and (27), we have

$$\tau \int_{t-\tau}^{t} \dot x^T(s) M \dot x(s)\, ds \ge \zeta_1^T(t) M \zeta_1(t) + \zeta_2^T(t) M \zeta_2(t) + \zeta_1^T(t) N \zeta_2(t) + \zeta_2^T(t) N^T \zeta_1(t) = \begin{bmatrix} \zeta_1(t) \\ \zeta_2(t) \end{bmatrix}^T \begin{bmatrix} M & N \\ * & M \end{bmatrix} \begin{bmatrix} \zeta_1(t) \\ \zeta_2(t) \end{bmatrix}. \qquad (28)$$

It is obvious that when τ(t) = 0 or τ(t) = τ we get ζ_1(t) = 0 or ζ_2(t) = 0, respectively, and thus Eq. (28) still holds. It is clear that Eq. (28) implies

$$-\tau \int_{t-\tau}^{t} \dot x^T(s) M \dot x(s)\, ds \le \chi^T(t)\, \Upsilon\, \chi(t), \qquad (29)$$

where

$$\chi(t) = [x^T(t)\ \ x^T(t-\tau(t))\ \ x^T(t-\tau)]^T, \qquad \Upsilon = \begin{bmatrix} -M & M-N & N \\ * & N + N^T - 2M & M-N \\ * & * & -M \end{bmatrix}.$$

From Eqs. (10), (11), (21)–(24) and (29), we can get

$$\dot V(t) - 2 y^T(t) u(t) \le \Xi^T(t) \Psi \Xi(t) - \alpha_1 \tau \int_{t-\tau}^{t} \|x(s)\|^2\, ds - \alpha_2 \tau \int_{t-\tau}^{t} \|\dot x(s)\|^2\, ds \le \lambda_{\max}(\Psi) \|x(t)\|^2 - \alpha_1 \tau \int_{t-\tau}^{t} \|x(s)\|^2\, ds - \alpha_2 \tau \int_{t-\tau}^{t} \|\dot x(s)\|^2\, ds, \qquad (30)$$

where

$$\Xi(t) = [x^T(t)\ \ x^T(t-\tau(t))\ \ x^T(t-\tau)\ \ f^T(x(t))\ \ f^T(x(t-\tau(t)))\ \ u^T(t)]^T.$$

Meanwhile, from Eq. (20), there exists a scalar α_3 > 0 such that

$$V(t) \le \alpha_3 \|x(t)\|^2 + \alpha_3 \int_{t-\tau}^{t} \|x(s)\|^2\, ds + \alpha_3 \int_{t-\tau}^{t} \|\dot x(s)\|^2\, ds. \qquad (31)$$

And there exists a small enough ρ > 0 such that

$$\rho \alpha_3 + \lambda_{\max}(\Psi) < 0, \qquad \rho \alpha_3 - \alpha_1 \tau < 0, \qquad \rho \alpha_3 - \alpha_2 \tau < 0. \qquad (32)$$

Then

$$\dot V(t) + \rho V(t) - 2 y^T(t) u(t) < 0. \qquad (33)$$

By Definition 3, memristor-based RNN (7) is exponentially passive. This completes the proof. □

Remark 3. In [38], Zhu et al. did excellent work on the passivity problem of neural networks with time-varying delays through the method of free-weighting matrices. However, twenty-one such matrices are introduced there, and some of them do not reduce the conservatism of the resulting passivity criterion; on the contrary, they make the passivity analysis more complicated and significantly increase the computational demand. In this paper, only one matrix is introduced, and thus the passivity criterion derived here has fewer decision variables than that in [38]. Furthermore, it can be calculated that the condition in [38] involves 24n² + 7n decision variables, while the condition in the theorem of this paper involves only 4.5n² + 7.5n decision variables. Hence, our passivity criterion has computational advantages over [38].
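The decision-variable counts in Remark 3 can be tallied for a given n. The block accounting in the comments below is my reading of Theorem 1's variables (P, Q, M symmetric n×n, R symmetric 2n×2n, N full n×n, plus the five diagonal matrices H1, H2, H3, Γ, Δ); it reproduces the 4.5n² + 7.5n figure, and both formulas match the counts of 33 and 110 quoted for the two-neuron example in Section 4:

```python
def variables_this_paper(n: int) -> float:
    # P, Q, M: three symmetric n x n blocks -> 3 * n(n+1)/2
    # R: symmetric 2n x 2n -> n(2n+1); N: full n x n -> n^2
    # H1, H2, H3, Gamma, Delta: five diagonal matrices -> 5n
    # the total simplifies to 4.5 n^2 + 7.5 n
    return 4.5 * n**2 + 7.5 * n

def variables_ref_38(n: int) -> float:
    # count reported in Remark 3 for the criterion of [38]
    return 24 * n**2 + 7 * n

print(variables_this_paper(2), variables_ref_38(2))  # 33.0 110.0
```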

Remark 4. Compared with [38], the Lyapunov functional constructed in this paper takes 2Σ_{i=1}^{n} δ_i ∫_0^{x_i(t)} (ι_i s − f_i(s)) ds, 2∫_{t−τ(t)}^{t} x^T(s) R_2 f(x(s)) ds and V_3(t) into account, which makes full use of the information of the neuron activation functions and the involved time-varying delays. Thus, the Lyapunov functional presented in this paper is more general and desirable than that in [38]. Furthermore, in [38], the terms τ(t) and τ−τ(t) are both enlarged to τ in the process of estimating the integral term ∫_{t−τ}^{t} ẋ^T(s) M ẋ(s) ds; thus the information of the involved time-varying delays has not been fully used, and such treatment may lead to a conservative result. In this paper, a matrix N is introduced to deal with this integral term, making full use of the information of the involved time-varying delays. Therefore, the result in this paper is less conservative than that in [38].

Remark 5. Realizing intelligence is important for human evolution: as tools become more intelligent, humans gain greater degrees of freedom towards better health and knowledge. Memristor-based neural networks will extend human brain capability in terms of intelligence. Compact, efficient electronics based on the brain's neural system could yield electronic microchips to restore some brain functions, such as vision, hearing, and memory, after impairment or loss. Undoubtedly, memristor-based neural networks will play an important role towards this objective.

With u(t) = 0, the exponential stability of memristor-based RNNs (7) can also be guaranteed. Thus we can get the following exponential stability condition from Theorem 1.

Corollary 1. Memristor-based RNN (7) is exponentially stable if there exist matrices P > 0, Q > 0, R = [R_1, R_2; *, R_3], M > 0, N, and diagonal matrices H_1, H_2, H_3 > 0, Γ = diag{γ_1, γ_2, …, γ_n} > 0 and Δ = diag{δ_1, δ_2, …, δ_n} > 0 such that

$$\begin{bmatrix} \Pi_{11} & \Pi_{12} & N & \Pi_{14} & \Pi_{15} & -\tau ϝ M \\ * & \Pi_{22} & \Pi_{23} & 0 & \Pi_{25} & 0 \\ * & * & \Pi_{33} & 0 & 0 & 0 \\ * & * & * & \Pi_{44} & \Pi_{45} & \tau ℶ^T M \\ * & * & * & * & \Pi_{55} & \tau ℸ^T M \\ * & * & * & * & * & -M \end{bmatrix} < 0, \qquad (34)$$

where Π_{11}, Π_{12}, Π_{14}, Π_{15}, Π_{22}, Π_{23}, Π_{25}, Π_{33}, Π_{44}, Π_{45}, Π_{55} are defined as in Theorem 1.

4. Numerical examples

In this section, a numerical example is given to illustrate the effectiveness of the obtained exponential passivity results.

Example 1. Consider the memristor-based neural circuit (2), which can be implemented by resistors, capacitors and memristors, with the following parameters:

$$a_{11}(x_1(t)) = a_{22}(x_2(t)) = 0,$$

$$a_{12}(x_2(t)) = \begin{cases} 0.99, & \dot f_2(x_2(t)) - \dot x_1(t) \le 0, \\ 1.01, & \dot f_2(x_2(t)) - \dot x_1(t) > 0, \end{cases} \qquad a_{21}(x_1(t)) = \begin{cases} 1.01, & \dot f_1(x_1(t)) - \dot x_2(t) \le 0, \\ 0.99, & \dot f_1(x_1(t)) - \dot x_2(t) > 0, \end{cases}$$

$$b_{11}(x_1(t)) = \begin{cases} -0.98, & -\dot f_1(x_1(t-\tau_1(t))) - \dot x_1(t-\tau_1(t)) \le 0, \\ -1.02, & -\dot f_1(x_1(t-\tau_1(t))) - \dot x_1(t-\tau_1(t)) > 0, \end{cases} \qquad b_{12}(x_2(t)) = b_{21}(x_1(t)) = -1,$$

$$b_{22}(x_2(t)) = \begin{cases} -0.98, & -\dot f_2(x_2(t-\tau_2(t))) - \dot x_2(t-\tau_2(t)) \le 0, \\ -1.02, & -\dot f_2(x_2(t-\tau_2(t))) - \dot x_2(t-\tau_2(t)) > 0, \end{cases}$$

$$d_1(x(t)) = \begin{cases} 3.01, & -\dot f_1(x_1(t)) - \dot x_1(t) \le 0, \\ 2.99, & -\dot f_1(x_1(t)) - \dot x_1(t) > 0, \end{cases} \qquad d_2(x(t)) = \begin{cases} 5.01, & -\dot f_2(x_2(t)) - \dot x_2(t) \le 0, \\ 4.99, & -\dot f_2(x_2(t)) - \dot x_2(t) > 0. \end{cases}$$

Then there exist

$$ϝ = \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}, \qquad ℶ = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad ℸ = \begin{bmatrix} -1 & -1 \\ -1 & -1 \end{bmatrix},$$

such that ϝ ∈ D(x(t)), ℶ ∈ A(x(t)), ℸ ∈ B(x(t)). Consider the activation functions f_j(x_j(·)) = tanh(x_j(·)), j = 1, 2. Obviously, these activation functions satisfy assumption A1 with L = I. The corresponding upper bounds of τ for various μ derived by Theorem 1 and by the method in [38] are listed in Table 1. From Table 1, it is obvious that the passivity criterion proposed in this paper is much less conservative than that in [38].

Fig. 4. The state curves x_1(t), x_2(t) of the memristor-based RNNs (3) with input u(t).

Table 1
Upper bounds of τ derived, for different μ, by [38] and Theorem 1, respectively.

μ          | 0.60   | 0.65   | 0.70   | 0.75   | 0.80   | 0.85   | 0.90   | 0.95
[38]       | 0.9924 | 0.8543 | 0.7512 | 0.6755 | 0.6101 | 0.5528 | 0.4926 | 0.4486
Theorem 1  | 4.2327 | 2.1199 | 1.6244 | 1.3605 | 1.2026 | 1.0781 | 1.0034 | 0.9471


Furthermore, only 33 decision variables are involved in Theorem 1 proposed in this paper, while 110 decision variables are involved in the theorem of [38]. Thus, the proposed passivity criterion clearly reduces the computational burden. For the purpose of simulation, we set τ(t) = 0.15 + 0.1 sin(8t). Thus, τ = 0.95, μ = 0.8, and

by Theorem 1 we can get

$$P = \begin{bmatrix} 2.7541 & -1.4190 \\ -1.4190 & 3.8232 \end{bmatrix}, \qquad Q = \begin{bmatrix} 3.1116 & -1.7486 \\ -1.7486 & 4.2231 \end{bmatrix},$$

$$R_1 = \begin{bmatrix} 2.0036 & -1.9226 \\ -1.9226 & 6.0028 \end{bmatrix}, \qquad R_2 = \begin{bmatrix} -1.7706 & 1.0088 \\ 1.0088 & -4.1828 \end{bmatrix},$$

$$R_3 = \begin{bmatrix} 6.999 & -4.4432 \\ -4.4432 & 16.1343 \end{bmatrix}, \qquad M = \begin{bmatrix} 0.8718 & -0.3857 \\ -0.3857 & 0.8288 \end{bmatrix},$$

$$N = \begin{bmatrix} -0.4682 & -0.0463 \\ -0.0463 & -0.2950 \end{bmatrix}, \qquad H_1 = \begin{bmatrix} 2.0052 & -2.2369 \\ -2.2369 & 6.4741 \end{bmatrix},$$

$$H_2 = \begin{bmatrix} 6.6986 & -5.0080 \\ -5.0080 & 13.0498 \end{bmatrix}, \qquad H_3 = \begin{bmatrix} 2.9630 & -1.4811 \\ -1.4811 & 4.1621 \end{bmatrix}.$$

The state dynamics of Example 1 are depicted in Figs. 4 and 5 with input u(t) = [2 − 2 cos(3t), 3 − 4 sin²(t)]^T, as well as in Figs. 6 and 7 without input u(t).
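The trajectories of this kind can be approximated by a forward-Euler integration of Eq. (7) with the selected matrices ϝ, ℶ, ℸ of Example 1, f = tanh, τ(t) = 0.15 + 0.1 sin(8t) and the input above. A sketch; the step size, horizon, initial condition and constant pre-history are illustrative choices, not from the paper:

```python
import numpy as np

F = np.diag([3.0, 5.0])                       # the matrix digamma of Example 1
B1 = np.array([[0.0, 1.0], [1.0, 0.0]])       # the matrix beth
B2 = np.array([[-1.0, -1.0], [-1.0, -1.0]])   # the matrix daleth

h = 0.001                                     # Euler step size
steps = int(10 / h)                           # horizon t in [0, 10]

x = np.zeros((steps + 1, 2))
x[0] = [0.5, -0.5]                            # illustrative initial condition

for k in range(steps):
    t = k * h
    d = int(round((0.15 + 0.1 * np.sin(8 * t)) / h))  # tau(t) in Euler steps
    x_del = x[max(k - d, 0)]                          # constant pre-history
    u = np.array([2 - 2 * np.cos(3 * t), 3 - 4 * np.sin(t) ** 2])
    dx = -F @ x[k] + B1 @ np.tanh(x[k]) + B2 @ np.tanh(x_del) + u
    x[k + 1] = x[k] + h * dx

print(np.abs(x).max())  # trajectories stay bounded, as the passivity analysis suggests
```

Replacing u by the zero input gives the undriven trajectories corresponding to Figs. 6 and 7.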

Fig. 5. The state curves of the memristor-based RNNs (3) with input u(t) (phase plot of x_2(t) versus x_1(t)).

Fig. 6. The state curves x_1(t), x_2(t) of the memristor-based RNNs (3) without input u(t).

Fig. 7. The state curves of the memristor-based RNNs (3) without input u(t) (phase plot of x_2(t) versus x_1(t)).


5. Conclusions

In this paper, the problem of delay-dependent exponential passivity has been discussed for memristor-based neural networks with time-varying delays. The information of the neuron activation functions and the involved time-varying delays has been fully utilized. Some improved results with less computational demand and conservatism have been obtained. A numerical example has been given to show the effectiveness of the developed approaches.

References

[1] S. Jo, T. Chang, I. Ebong, B. Bhadviya, P. Mazumder, W. Lu, Nanoscale memristor device as synapse inneuromorphic systems, Nanotechnology Letters 10 (2010) 1297–1301.

[2] R. Ananthanarayanan, S. Eser, H. Simon, D. Modha, in: Proceedings of 2009 IEEE/ACM Conference HighPerformance Networking Computing, Portland, OR, November 2009.

[3] L. Smith, Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with EmergingTechnologies, Springer, New York, 2006, pp. 433–475.

[4] D. Strukov, G. Snider, D. Stewart, R. Williams, The missing memristor found, Nature 453 (2008) 80–83.[5] L. Chua, Memristor—The missing circuit element, IEEE Transactions on Circuit Theory 18 (1971) 507–519.[6] T. Choi, B. Shi, K. Boahen, An on-off orientation selective address event representation image transceiver chip,

IEEE Transactions on Circuits and Systems I 51 (2004) 342–353.

[7] G. Indiveri, A neuromorphic VLSI device for implementing 2-D selective attention systems, IEEE Transactions on Neural Networks 12 (2001) 1455–1463.

[8] S. Liu, R. Douglas, Temporal coding in a silicon network of integrate-and-fire neurons, IEEE Transactions on Neural Networks 15 (2004) 1305–1314.

[9] J. Cao, D. Huang, Y. Qu, Global robust stability of delayed recurrent neural networks, Chaos, Solitons & Fractals 23 (2005) 221–229.

[10] J. Cao, K. Yuan, H. Li, Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays, IEEE Transactions on Neural Networks 17 (2006) 1646–1651.

[11] A. Chen, J. Cao, L. Huang, An estimation of upper bound of delays for global asymptotic stability of delayed Hopfield neural networks, IEEE Transactions on Circuits and Systems I 49 (2002) 1028–1032.

[12] S. Hu, J. Wang, Global asymptotic stability and global exponential stability of continuous-time recurrent neural networks, IEEE Transactions on Automatic Control 47 (2002) 802–807.

[13] H. Huang, J. Cao, On global asymptotic stability of recurrent neural networks with time-varying delays, Applied Mathematics and Computation 142 (2003) 143–154.

[14] Y. Shen, J. Wang, An improved algebraic criterion for global exponential stability of recurrent neural networks with time-varying delays, IEEE Transactions on Neural Networks 19 (2008) 528–531.

[15] S. Wen, Z. Zeng, Dynamics analysis of a class of memristor-based recurrent networks with time-varying delays in the presence of strong external stimuli, Neural Processing Letters 35 (2012) 47–59.

[16] S. Wen, Z. Zeng, T. Huang, Dynamics behaviors of memristor-based delayed recurrent networks, Neural Computing & Applications, http://dx.doi.org/10.1007/s00521-012-0998-y.

[17] Z. Zeng, J. Wang, Global exponential stability of recurrent neural networks with time-varying delays in the presence of strong external stimuli, Neural Networks 19 (2006) 1528–1537.

[18] Z. Zeng, J. Wang, X. Liao, Global asymptotic stability and global exponential stability of neural networks with unbounded time-varying delays, IEEE Transactions on Circuits and Systems II 52 (2005) 168–173.

[19] W. Zhao, Q. Zhu, New results of global robust exponential stability of neural networks with delays, Nonlinear Analysis: Real World Applications 11 (2010) 1190–1197.

[20] P. Balasubramaniam, K. Kalpana, R. Rakkiyappan, Linear matrix inequality approach for synchronization control of fuzzy cellular neural networks with mixed time delays, Chinese Physics B 21 (2012) 048402.

[21] J. Cao, G. Chen, P. Li, Global synchronization in an array of delayed neural networks with hybrid coupling, IEEE Transactions on Systems Man and Cybernetics Part B 38 (2008) 488–498.


[22] M. Park, O. Kwon, J. Park, S. Lee, E. Cha, Synchronization criteria for coupled neural networks with interval time-varying delays and leakage delay, Applied Mathematics and Computation 218 (2012) 6762–6775.

[23] Z. Xing, J. Peng, Exponential lag synchronization of fuzzy cellular neural networks with time-varying delays, Journal of the Franklin Institute 349 (2012) 1074–1086.

[24] G. Zhang, T. Wang, T. Li, S. Fei, Exponential synchronization for delayed chaotic neural networks with nonlinear hybrid coupling, Neurocomputing 85 (2012) 53–61.

[25] M. Mahmoud, Y. Xia, Improved exponential stability analysis for delayed recurrent neural networks, Journal of the Franklin Institute 348 (2011) 201–211.

[26] Y. Chen, H. Wang, A. Xue, R. Lu, Passivity analysis of stochastic time-delay neural networks, Nonlinear Dynamics 61 (2010) 71–82.

[27] X. Li, R. Rakkiyappan, P. Balasubramaniam, Existence and global stability analysis of equilibrium of fuzzy cellular neural networks with time delay in the leakage term under impulsive perturbations, Journal of the Franklin Institute 348 (2011) 135–155.

[28] Y. Zhang, Exponential stability analysis for discrete-time impulsive delay neural networks with and without uncertainty, Journal of the Franklin Institute 350 (2013) 737–756.

[29] Q. Song, J. Liang, Z. Wang, Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 72 (2009) 1782–1788.

[30] Z. Wu, J. Park, H. Su, J. Chu, New results on exponential passivity of neural networks with time-varying delays, Nonlinear Analysis: Real World Applications 13 (2012) 1593–1599.

[31] Z. Wu, P. Shi, H. Su, J. Chu, Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays, IEEE Transactions on Neural Networks 22 (2011) 1566–1575.

[32] S. Xu, W. Zheng, Y. Zou, Passivity analysis of neural networks with time-varying delays, IEEE Transactions on Circuits and Systems II 56 (2009) 325–329.

[33] Z. Zhang, S. Mou, J. Lam, H. Gao, New passivity criteria for neural networks with time-varying delay, Neural Networks 22 (2010) 864–868.

[34] H. Shen, S. Xu, J. Lu, J. Zhou, Passivity based control for uncertain stochastic jumping systems with mode-dependent round-trip time delays, Journal of the Franklin Institute 349 (2012) 1665–1680.

[35] Z. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled-data, IEEE Transactions on Cybernetics 99 (2013) 1–11.

[36] Z. Wu, P. Shi, H. Su, J. Chu, Delay-dependent stability analysis for switched neural networks with time-varying delay, IEEE Transactions on Systems Man and Cybernetics Part B: Cybernetics 41 (2011) 1522–1530.

[37] Z. Feng, J. Lam, Stability and dissipativity analysis of distributed delay cellular neural networks, IEEE Transactions on Neural Networks 22 (2011) 976–981.

[38] S. Zhu, Y. Shen, G. Chen, Exponential passivity of neural networks with time-varying delay and uncertainty, Physics Letters A 375 (2010) 136–142.

[39] X. Yin, D. Yue, Event-triggered tracking control for heterogeneous multi-agent systems with Markov communication delays, Journal of the Franklin Institute 350 (2013) 1312–1334.

[40] X. Yin, D. Yue, S. Hu, Distributed event-triggered control of discrete-time heterogeneous multi-agent systems, Journal of the Franklin Institute 350 (2013) 651–669.

[41] S. Wen, Z. Zeng, T. Huang, H∞ filtering for neutral systems with mixed delays and multiplicative noises, IEEE Transactions on Circuits and Systems II 59 (2012) 820–824.

[42] S. Wen, Z. Zeng, T. Huang, Robust probabilistic sampling H∞ output tracking control for a class of nonlinear networked systems with multiplicative noises, Journal of the Franklin Institute 350 (2013) 1093–1111.

[43] H. Li, X. Liao, R. Liao, A unified approach to chaos suppressing and inducing in a periodically forced family of nonlinear oscillators, IEEE Transactions on Circuits and Systems I 59 (2012) 784–795.

[44] H. Li, X. Liao, T. Dong, L. Xiao, Second-order consensus seeking in directed networks of multi-agent dynamical systems via generalized linear local interaction protocols, Nonlinear Dynamics 70 (2012) 2213–2226.

[45] X. He, C. Li, T. Huang, C.J. Li, Bogdanov–Takens singularity in tri-neuron network with time delay, IEEE Transactions on Neural Networks and Learning Systems 24 (2013) 1001–1007.

[46] X. He, C. Li, Y. Shu, Bogdanov–Takens bifurcation in a single inertial neuron model with delay, Neurocomputing 89 (2012) 193–201.

[47] N. Gergel-Hackett, B. Hamadani, J. Suehle, C. Richter, C. Hacker, D. Gundlach, A flexible solution-processed memristor, IEEE Electron Device Letters 30 (2009) 706–708.

[48] F. Clarke, Y. Ledyaev, R. Stern, R. Wolenski, Nonsmooth Analysis and Control Theory, Springer, New York, 1998.

[49] A. Filippov, Differential Equations with Discontinuous Right-Hand Side, Mathematics and its Applications, Kluwer Academic, Boston, MA, 1988.


[50] M. Forti, P. Nistri, Global convergence of neural networks with discontinuous neuron activations, IEEE Transactions on Circuits and Systems I 50 (2003) 1421–1435.

[51] A. Bacciotti, L. Rosier, Lyapunov Functions and Stability in Control Theory, Springer, Berlin, 2005.

[52] Y. Liu, Z. Wang, J. Liang, X. Liu, Synchronization and state estimation for discrete-time complex networks with distributed delays, IEEE Transactions on Systems Man and Cybernetics Part B: Cybernetics 38 (2008) 1314–1325.