
International Journal of Computer Mathematics, Vol. 88, No. 15, October 2011, 3150–3162

Exponential stability of impulsive high-order Hopfield-type neural networks with delays and reaction–diffusion

Chaojie Li^a, Chuandong Li^a* and Tingwen Huang^b

^a College of Computer, Chongqing University, Chongqing 400044, China; ^b Department of Applied Mathematics, Texas A&M University at Qatar, Doha, Qatar

(Received 23 October 2009; revised version received 19 October 2010; accepted 20 May 2011)

The problem of global exponential stability analysis of impulsive high-order Hopfield-type neural networks with time-varying delays and reaction–diffusion terms is investigated in this paper. Using the Lyapunov function method and M-matrix theory, we establish the global exponential stability of the neural networks together with an estimate of the exponential convergence rate. As an illustration, a numerical example is given using the results.

Keywords: high-order Hopfield-type neural networks; exponential stability; impulse; delay; reaction–diffusion

2010 AMS Subject Classification: 37B25

1. Introduction

During the last two decades, the stability of Hopfield neural networks has been an active field of research due to its value in many applications such as pattern recognition and associative memory. As a result, a considerable number of stability criteria for Hopfield neural networks have been proposed (see [4–6,11–13,24]). Since instability in neural networks mainly originates from time delays, much attention has been given in the literature (see, e.g. [9,14,20,22,25] and references cited therein) to Hopfield-type neural networks with time delays.

In the real world, many evolutionary processes are characterized by abrupt changes at certain times. Such processes, arising in biological systems, economic systems, population dynamics and optimal control, undergo sudden and sharp changes occurring instantaneously. These systems are formulated as impulsive differential equations [2], which have been successfully introduced into the modelling of impulsive neural networks (see, e.g. [15,27]). Therefore, there has been growing interest in the stability analysis of such systems with time delays in recent years (see, e.g. [8,23]).

*Corresponding author. Email: [email protected]

ISSN 0020-7160 print/ISSN 1029-0265 online. © 2011 Taylor & Francis. DOI: 10.1080/00207160.2011.594884. http://www.informaworld.com


For Hopfield neural networks characterized by first-order interactions, Abu-Mostafa and Jacques [1], McEliece et al. [17], and Baldi [3] presented their intrinsic limitations. As a result, architectures with high-order interactions (see [7,14,16,21,25,26]) have been successively introduced to design neural networks with stronger approximation performance, faster convergence rate, greater storage capacity, and higher fault tolerance than lower-order neural networks.

On the other hand, strictly speaking, diffusion effects cannot be avoided in neural networks when electrons move in asymmetric electromagnetic fields, so we shall consider the case where the activations vary in space as well as in time. The authors in [10,18,19,20,22,23,28] have considered the stability of neural networks with diffusion terms. However, as far as we know, no result has been reported in the literature on the exponential stability of impulsive high-order Hopfield-type neural networks with both time-varying delays and reaction–diffusion terms, so it is of great value to study the stability of such systems.

Motivated by the above discussions, the objective of this paper is to apply Lyapunov techniques to study a new class of impulsive high-order Hopfield-type neural networks with reaction–diffusion terms. Via Lyapunov stability theory, we obtain a sufficient condition ensuring the global exponential stability of high-order Hopfield-type neural networks with time-varying delays and reaction–diffusion terms. An example is given to illustrate the effectiveness of the results.

2. Model description and preliminaries

We consider a system of impulsive high-order Hopfield-type neural networks, assuming that the neurons are confined to a fixed bounded space domain \Omega \subset R^m with smooth boundary \partial\Omega, non-uniformly distributed in the domain and subjected to short-term external influence at fixed moments of time. The model is described as follows:

C_i \frac{du_i(t,x)}{dt} = \sum_{r=1}^{m} \frac{\partial}{\partial x_r}\Big(D_{ir}\frac{\partial u_i(t,x)}{\partial x_r}\Big) - \frac{u_i(t,x)}{R_i} + \sum_{j=1}^{n} T_{ij}\, g_j(u_j(t-\tau_j(t),x)) + \sum_{j=1}^{n}\sum_{l=1}^{n} T_{ijl}\, g_j(u_j(t-\tau_j(t),x))\, g_l(u_l(t-\tau_l(t),x)) + I_i,  t \neq t_k,  (1a)

\Delta u_i(t,x) = e_i u_i(t^-,x) + \sum_{j=1}^{n} W_{ij}\, h_j(u_j(t^- - \tau_j(t),x)) + \sum_{j=1}^{n}\sum_{l=1}^{n} W_{ijl}\, h_j(u_j(t^- - \tau_j(t),x))\, h_l(u_l(t^- - \tau_l(t),x)),  t = t_k,  (1b)

\frac{\partial u_i(t,x)}{\partial x}\Big|_{\partial\Omega} := \Big(\frac{\partial u_i(t,x)}{\partial x_1}, \ldots, \frac{\partial u_i(t,x)}{\partial x_m}\Big)^{T} = 0,  t \geq t_0,  (1c)

where i = 1, 2, \ldots, n, \Delta u_i(t_k,x) = u_i(t_k,x) - u_i(t_k^-,x), and u_i(t_k^-,x) = \lim_{t \to t_k^-} u_i(t,x), k \in Z. The time sequence \{t_k\} satisfies 0 < t_1 < t_2 < \cdots < t_k < t_{k+1} < \cdots and \lim_{k\to\infty} t_k = \infty. C_i > 0, R_i > 0 and I_i are the capacitance, the resistance, and the external input of the ith neuron, respectively. D_{ir} \geq 0 are the transmission diffusion coefficients along the ith neuron. T_{ij}, W_{ij} and T_{ijl}, W_{ijl} are the first- and second-order synaptic weights of the neural network, respectively. The time delays \tau_i(t) are continuous functions corresponding to the finite speed of axonal signal transmission, with 0 \leq \tau_i(t) \leq \tau.
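For intuition about how the continuous dynamics (1a), the impulses (1b) and the Neumann condition (1c) interact, the sketch below integrates a one-neuron (n = 1), one-dimensional (m = 1) toy instance of the model with an explicit finite-difference scheme. All numerical values (C, R, D, the weights, the impulse times t_k and gains) are illustrative assumptions, not parameters from this paper.

```python
import numpy as np

# Toy instance of system (1): one neuron on Omega = [0, 1] with Neumann boundary,
# a constant delay, and impulsive jumps at t = 2 and t = 4 (all values assumed).
C, R, D, T1, T11, I = 1.0, 1.0, 0.1, 0.2, 0.1, 0.0
tau = 1.0
g = np.tanh                                   # bounded, Lipschitz activation ((H1)-(H2))

nx = 21
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.005                                    # below the explicit stability bound C*dx^2/(2D)
nd = int(round(tau / dt))                     # delay measured in time steps

steps = int(round(6.0 / dt))
hist = np.zeros((steps + nd + 1, nx))
hist[: nd + 1] = np.cos(np.pi * x)            # initial function psi(s, x) on [-tau, 0], cf. (2)

impulse_steps = {int(round(2.0 / dt)), int(round(4.0 / dt))}
e1, W1 = -0.5, 0.05                           # jump parameters of (1b), assumed

for k in range(steps):
    u = hist[nd + k]                          # u(t, x)
    ud = hist[k]                              # u(t - tau, x)
    lap = np.empty(nx)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2      # zero-flux (Neumann) ends, cf. (1c)
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    rhs = (D * lap - u / R + T1 * g(ud) + T11 * g(ud) ** 2 + I) / C   # (1a)
    unew = u + dt * rhs
    if k + 1 in impulse_steps:                # impulsive jump (1b) at t = t_k
        unew = unew + e1 * unew + W1 * np.tanh(ud)
    hist[nd + k + 1] = unew

print(np.abs(hist[nd]).max(), np.abs(hist[-1]).max())
```

With these weak delayed couplings the zero state is an equilibrium, and the final amplitude printed comes out far below the initial one, the behaviour that the results of Section 3 guarantee for parameter ranges satisfying (C1)–(C2).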


Remark 1 When the transmission diffusion coefficients D_{ir} = 0, this system reduces to high-order Hopfield-type neural networks without impulses (with impulses). Its dynamic behaviour and stability properties have been studied by Kosmatopoulos et al. [7], as well as other researchers, such as Liu and Teo [14] and Xu et al. [25,26]. When the transmission diffusion coefficients D_{ir} = 0 and all of the second-order synaptic weights equal zero (T_{ijl} = 0, W_{ijl} = 0), the system combining Equations (1a) and (1b) is actually a Hopfield neural network (with impulses), of the kind extensively studied since Hopfield [6]. Equation (1c) describes the Neumann boundary condition.

The initial condition associated with Equation (1) is of the form

u_i(t_0 + s, x) = \psi_i(s, x),  -\tau \leq s \leq 0,  (2)

where \psi_i(s, x) is continuous on [-\tau, 0] \times \Omega.

Throughout this paper, the activation functions g_i(u) and h_i(u) are assumed to be continuously differentiable and to satisfy the following properties:

(H1) There exist positive numbers M_i and N_i such that

|g_i(u)| \leq M_i,  |h_i(u)| \leq N_i  for any u \in R (i = 1, 2, \ldots, n).  (3a)

(H2) There exist positive numbers K_i and L_i such that, for u, v \in R,

|g_i(u) - g_i(v)| \leq K_i |u - v|,  |h_i(u) - h_i(v)| \leq L_i |u - v|.  (3b)

We assume that there exists at least one solution of system (1) with the initial condition (2). Let u^* be an equilibrium point of system (1) and u(t,x) be any solution of system (1). Set

y_i(t,x) = u_i(t,x) - u_i^*,  (4)

and note that at the equilibrium the impulsive part satisfies

e_i u_i^* + \sum_{j=1}^{n} W_{ij} h_j(u_j^*) + \sum_{j=1}^{n}\sum_{l=1}^{n} W_{ijl} h_j(u_j^*) h_l(u_l^*) = 0.  (5)

Define

f_i(y_i(t-\tau_i(t),x)) = g_i(u_i(t-\tau_i(t),x)) - g_i(u_i^*)  (6)

and

\phi_i(y_i(t-\tau_i(t),x)) = h_i(u_i(t-\tau_i(t),x)) - h_i(u_i^*).  (7)

Then, for each i = 1, 2, \ldots, n,

|f_i(z)| \leq K_i |z|,  \forall z \in R,  (8a)

and

|\phi_i(z)| \leq L_i |z|,  \forall z \in R.  (8b)

Hence, system (1) can be rewritten as

C_i \frac{dy_i(t,x)}{dt} = \sum_{r=1}^{m} \frac{\partial}{\partial x_r}\Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big) - \frac{y_i(t,x)}{R_i} + \sum_{j=1}^{n}\Big(T_{ij} + \sum_{l=1}^{n}(T_{ijl}+T_{ilj})\varsigma_l\Big) f_j(y_j(t-\tau_j(t),x)),  t \neq t_k,  (9a)

\Delta y_i(t,x) = e_i y_i(t^-,x) + \sum_{j=1}^{n}\Big(W_{ij} + \sum_{l=1}^{n}(W_{ijl}+W_{ilj})\zeta_l\Big) \phi_j(y_j(t^- - \tau_j(t),x)),  t = t_k,  (9b)

\frac{\partial y_i(t,x)}{\partial x}\Big|_{\partial\Omega} := \Big(\frac{\partial y_i(t,x)}{\partial x_1}, \ldots, \frac{\partial y_i(t,x)}{\partial x_m}\Big)^{T} = 0,  t \geq t_0,  (9c)


where i = 1, 2, \ldots, n; \varsigma_l lies between g_l(u_l(t-\tau_l(t),x)) and g_l(u_l^*), and \zeta_l lies between h_l(u_l(t^- - \tau_l(t),x)) and h_l(u_l^*).

The norms are defined as follows:

\|y_i(t,x)\|_2 = \Big[\int_{\Omega} |y_i(t,x)|^2\, dx\Big]^{1/2},  i = 1, 2, \ldots, n,  (10a)

\|\psi\| = \sup_{-\tau \leq s \leq 0} \sum_{i=1}^{n} \|\psi_i(s,x)\|_2.  (10b)

Definition 1 An equilibrium point u^* = (u_1^*, u_2^*, \ldots, u_n^*)^T of system (1) is said to be globally exponentially stable if there exist positive constants \lambda > 0 and M \geq 1 such that

\sum_{i=1}^{n} \|u_i(t,x) - u_i^*\|_2 \leq M \|\psi - u^*\| e^{-\lambda t},  t \geq 0,  (11a)

where

\|\psi - u^*\| = \sup_{-\tau \leq s \leq 0} \sum_{i=1}^{n} \|\psi_i(s,x) - u_i^*\|_2.  (11b)

Lemma 1 [10] Let \tau > 0 and a < b \leq +\infty. Suppose that v(t) = (v_1(t), v_2(t), \ldots, v_n(t))^T \in C[[a,b), R^n] satisfies the differential inequality

D^+ v(t) \leq P v(t) + (Q \otimes V(t)) e_n,  t \in [a, b),
v(a+s) \in PC,  s \in [-\tau, 0],  (12)

where P = (p_{ij})_{n\times n} with p_{ij} \geq 0 for i \neq j, Q = (q_{ij})_{n\times n} \geq 0, and V(t) = (v_j(t - \tau_{ij}(t)))_{n\times n}. If the initial condition satisfies

v(t) \leq \kappa \xi e^{-\lambda(t-a)},  \kappa \geq 0,  t \in [a-\tau, a],  (13)

where \xi = (\xi_1, \xi_2, \ldots, \xi_n)^T > 0 and the positive number \lambda is determined by the inequality

[\lambda E + P + Q \otimes E(\lambda)] \xi < 0,  (14)

with E(\lambda) = (e^{\lambda \tau_{ij}})_{n\times n}, then v(t) \leq \kappa \xi e^{-\lambda(t-a)} for t \in [a, b).

Remark 2 There are some notations and definitions used in the above lemma. To make this paper self-contained, we restate them here.

PC[J \times \Omega, R^n] \triangleq \{u(t,x): J \times \Omega \to R^n \mid u(t,x) is continuous at t \neq t_k, u(t_k^+,x) = u(t_k,x) and u(t_k^-,x) exists for t, t_k \in J, k \in N\}, where J \subset R is an interval.

PC[J, R^n] \triangleq \{u(t): J \to R^n \mid u(t) is continuous at t \neq t_k, u(t_k^+) = u(t_k) and u(t_k^-) exists for t, t_k \in J, k \in N\}, where J \subset R is an interval.

PC(\Omega) \triangleq \{\phi: [-\tau,0] \times \Omega \to R^n \mid \phi(s^+,x) = \phi(s,x) for s \in [-\tau,0), \phi(s^-,x) exists for s \in (-\tau,0], and \phi(s^-,x) = \phi(s,x) for all but at most a finite number of points s \in (-\tau,0]\}.

PC \triangleq \{\phi: [-\tau,0] \to R^n \mid \phi(s^+) = \phi(s) for s \in [-\tau,0), \phi(s^-) exists for s \in (-\tau,0], and \phi(s^-) = \phi(s) for all but at most a finite number of points s \in (-\tau,0]\}.

For A, B \in R^{m\times n}, the Hadamard (Schur) product is defined by A \otimes B = [a_{ij} b_{ij}]_{m\times n}. A \geq B (A > B) means that each pair of corresponding elements of A and B satisfies a_{ij} \geq b_{ij} (a_{ij} > b_{ij}). Denote |A| = [|a_{ij}|]_{m\times n} and e_n = (1, 1, \ldots, 1)^T \in R^n.
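Lemma 1 and the notation above can be checked numerically on a small instance. The sketch below uses assumed toy data (n = 2, all delays \tau_{ij} = 1; none of these numbers come from the paper): it verifies inequality (14) for a chosen \lambda, then integrates the equality version of (12) with an Euler scheme and confirms the componentwise bound v(t) \leq \kappa\xi e^{-\lambda(t-a)}. NumPy's elementwise `*` plays the role of the Hadamard product \otimes.

```python
import numpy as np

# Toy check of Lemma 1 with assumed data: n = 2, a = 0, tau_ij = 1 for all i, j.
P = np.array([[-2.0, 0.1], [0.1, -2.0]])      # p_ij >= 0 for i != j
Q = np.array([[0.3, 0.2], [0.1, 0.3]])        # Q >= 0
tau, lam = 1.0, 0.5
E_lam = np.exp(lam * tau) * np.ones((2, 2))   # E(lam) = (e^{lam * tau_ij})
xi = np.ones(2)                               # xi > 0; kappa = 1
assert np.all((lam * np.eye(2) + P + Q * E_lam) @ xi < 0)   # inequality (14); '*' is (x)

dt = 0.001
nd, steps = int(round(tau / dt)), int(round(5.0 / dt))
v = np.ones((steps + nd + 1, 2))              # v = kappa * xi = (1, 1)^T on [a - tau, a]
for k in range(steps):
    V = np.tile(v[k], (2, 1))                 # V(t) = (v_j(t - tau_ij(t))), equal delays
    v[nd + k + 1] = v[nd + k] + dt * (P @ v[nd + k] + (Q * V) @ np.ones(2))

t = dt * np.arange(steps + 1)
bound = np.exp(-lam * t)                      # Lemma 1: v_i(t) <= kappa * xi_i * e^{-lam t}
print(bool(np.all(v[nd:] <= bound[:, None] + 1e-6)))
```

The simulated trajectory actually decays faster than e^{-\lambda t}, because (14) holds with strict margin for this data; Lemma 1 only asserts the one-sided bound.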


3. Main result

It is clear that the stability of the zero solution of system (9) is equivalent to the stability of the equilibrium point u^* of system (1). In this section, it will be shown that, under certain conditions, the zero solution of system (9) is globally exponentially stable.

Theorem 1 Under assumptions (H1)–(H2), the zero solution of system (9) is globally exponentially stable with convergence rate \lambda - \eta if the following conditions are satisfied:

(C1) there exist a vector \xi = (\xi_1, \xi_2, \ldots, \xi_n)^T > 0 and a positive number \lambda > 0 such that [\lambda E - P + CQK \otimes E(\lambda)]\xi < 0, where P = diag(|C_1^{-1} R_1^{-1}|, \ldots, |C_n^{-1} R_n^{-1}|), C = diag(|C_1^{-1}|, \ldots, |C_n^{-1}|), K = diag(K_1, \ldots, K_n), Q = [|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}| + |T_{ilj}|) M_l]_{n\times n}, and E(\lambda) = [e^{\lambda \tau_{ij}(t)}]_{n\times n};

(C2) \eta = \sup_{k \in N} \{ \ln \eta_k / (t_k - t_{k-1}) \} < \lambda, where \eta_0 = 1 and \eta_k = \max_{1 \leq i \leq n}\{1, a_k + b_k e^{\lambda \tau_j(t)}\}, with a_k = \|1 + e_{ik}\|_2 and b_k = \sum_{j=1}^{n} [\|W_{ij}^{(k)}\|_2 + \sum_{l=1}^{n}(\|W_{ijl}^{(k)}\|_2 + \|W_{ilj}^{(k)}\|_2) N_l] \cdot \|L_j\|_2.

Proof Multiplying both sides of Equation (9a) by y_i(t,x) and integrating over \Omega yields

\frac{C_i}{2}\frac{d}{dt}\Big(\int_{\Omega} y_i^2(t,x)\, dx\Big) = \int_{\Omega} y_i(t,x)\Big[\sum_{r=1}^{m}\frac{\partial}{\partial x_r}\Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)\Big] dx - \int_{\Omega}\frac{y_i^2(t,x)}{R_i}\, dx + \sum_{j=1}^{n}\Big(T_{ij} + \sum_{l=1}^{n}(T_{ijl}+T_{ilj})\varsigma_l\Big) \int_{\Omega} y_i(t,x)\, f_j(y_j(t-\tau_j(t),x))\, dx.  (15)

According to the Neumann boundary condition (9c) and Green's formula, we know

\int_{\Omega} y_i(t,x)\Big[\sum_{r=1}^{m}\frac{\partial}{\partial x_r}\Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)\Big] dx
= \int_{\Omega} y_i(t,x)\, \nabla \cdot \Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)_{r=1}^{m} dx
= \int_{\Omega} \nabla \cdot \Big(y_i(t,x)\Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)_{r=1}^{m}\Big) dx - \int_{\Omega}\Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)_{r=1}^{m} \cdot \nabla y_i(t,x)\, dx
= \int_{\partial\Omega}\Big(y_i(t,x)\, D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)_{r=1}^{m} ds - \sum_{r=1}^{m}\int_{\Omega} D_{ir}\Big(\frac{\partial y_i(t,x)}{\partial x_r}\Big)^2 dx
= -\sum_{r=1}^{m}\int_{\Omega} D_{ir}\Big(\frac{\partial y_i(t,x)}{\partial x_r}\Big)^2 dx,  (16)

where \nabla = (\partial/\partial x_1, \ldots, \partial/\partial x_m) is the gradient operator and

\Big(D_{ir}\frac{\partial y_i(t,x)}{\partial x_r}\Big)_{r=1}^{m} := \Big(D_{i1}\frac{\partial y_i(t,x)}{\partial x_1}, \ldots, D_{im}\frac{\partial y_i(t,x)}{\partial x_m}\Big).


By Equation (8a), we obtain

\frac{d}{dt}\Big(\int_{\Omega} y_i^2(t,x)\, dx\Big) = -2C_i^{-1}\sum_{r=1}^{m}\int_{\Omega} D_{ir}\Big(\frac{\partial y_i(t,x)}{\partial x_r}\Big)^2 dx - 2C_i^{-1}\int_{\Omega}\frac{y_i^2(t,x)}{R_i}\, dx + 2C_i^{-1}\sum_{j=1}^{n}\Big(T_{ij} + \sum_{l=1}^{n}(T_{ijl}+T_{ilj})\varsigma_l\Big)\int_{\Omega} y_i(t,x)\, f_j(y_j(t-\tau_j(t),x))\, dx
\leq -2|C_i^{-1}R_i^{-1}|\, \|y_i(t,x)\|_2^2 + 2|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)\varsigma_l\Big)\int_{\Omega} |y_i(t,x)|\cdot K_j |y_j(t-\tau_j(t),x)|\, dx,  (17)

\frac{d}{dt}\|y_i(t,x)\|_2^2 \leq -2|C_i^{-1}R_i^{-1}|\, \|y_i(t,x)\|_2^2 + 2|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)\varsigma_l\Big) K_j \|y_i(t,x)\|_2 \|y_j(t-\tau_j(t),x)\|_2,  (18)

\frac{d}{dt}\|y_i(t,x)\|_2 \leq -|C_i^{-1}R_i^{-1}|\, \|y_i(t,x)\|_2 + |C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)M_l\Big) K_j \|y_j(t-\tau_j(t),x)\|_2  (19)

(see Remark 3). Let v_i(t,x) = \|y_i(t,x)\|_2; then

D^+ v_i(t,x) \leq -|C_i^{-1}R_i^{-1}|\, v_i(t,x) + |C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)M_l\Big) K_j v_j(t-\tau_j(t),x),  (20)

or, in vector form,

D^+ v(t,x) \leq -P v(t,x) + [CQK \otimes V(t,x)] e_n,  (21)

where V(t,x) = [v_j(t - \tau_{ij}(t), x)]_{n\times n}. Based on condition (C1), there must exist a vector \xi = (\xi_1, \ldots, \xi_n)^T > 0 and a positive number \lambda satisfying

(\lambda E - P + CQK \otimes E(\lambda))\xi < 0,  (22)

where E(\lambda) = (e^{\lambda \tau_{ij}(t)})_{n\times n} with \tau_{ij}(t) = \tau_j(t). Let \kappa = \|\psi - u^*\| / \min_{1\leq i\leq n}\{\xi_i\}; then

v(t,x) \leq \kappa \xi e^{-\lambda t},  -\tau \leq t \leq 0.  (23)

From Lemma 1, it follows that

v(t,x) \leq \kappa \xi e^{-\lambda t},  0 \leq t \leq t_1.  (24)


Note that

v_i(t_k, x) = \|y_i(t_k^-, x) + \Delta y_i(t_k, x)\|_2
= \Big\| (1 + e_{ik}) y_i(t_k^-, x) + \sum_{j=1}^{n}\Big(W_{ij}^{(k)} + \sum_{l=1}^{n}(W_{ijl}^{(k)} + W_{ilj}^{(k)})\zeta_l\Big)\phi_j(y_j(t_k^- - \tau_j(t), x)) \Big\|_2
\leq \|1 + e_{ik}\|_2 \|y_i(t_k^-, x)\|_2 + \sum_{j=1}^{n}\Big(\|W_{ij}^{(k)}\|_2 + \sum_{l=1}^{n}(\|W_{ijl}^{(k)}\|_2 + \|W_{ilj}^{(k)}\|_2) N_l\Big) \|L_j\|_2 \|y_j(t_k^- - \tau_j(t), x)\|_2
\leq a_k v_i(t_k^-, x) + b_k v_j(t_k^- - \tau_j(t), x).  (25)

We claim that

v(t,x) \leq \kappa \eta_0 \eta_1 \cdots \eta_{k-1} \xi e^{-\lambda t},  t_{k-1} \leq t < t_k,  k \in N.  (26)

Equation (26) is obviously true for k = 1. Now assume that it is true for k = m, i.e.

v(t,x) \leq \kappa \eta_0 \eta_1 \cdots \eta_{m-1} \xi e^{-\lambda t},  t_{m-1} \leq t < t_m.

Then one observes

v_i(t_m, x) \leq a_m v_i(t_m^-, x) + b_m v_j(t_m^- - \tau_j(t), x)
\leq a_m \kappa \eta_0 \cdots \eta_{m-1} \xi e^{-\lambda t_m} + b_m \kappa \eta_0 \cdots \eta_{m-1} \xi e^{-\lambda t_m} e^{\lambda \tau_j(t)}
\leq (a_m + b_m e^{\lambda \tau_j(t)}) \kappa \eta_0 \cdots \eta_{m-1} \xi e^{-\lambda t_m}
\leq \kappa \eta_0 \cdots \eta_{m-1} \eta_m \xi e^{-\lambda t_m}.  (27)

Hence, we obtain

v(t,x) \leq \kappa \eta_0 \cdots \eta_{m-1} \eta_m \xi e^{-\lambda t},  t_m - \tau \leq t \leq t_m.  (28)

It follows from Lemma 1 that

v(t,x) \leq \kappa \eta_0 \cdots \eta_{m-1} \eta_m \xi e^{-\lambda t},  t_m \leq t < t_{m+1}.  (29)

This shows that Equation (26) also holds for k = m + 1; by mathematical induction, the claim (26) holds for every k \in N.

By condition (C2) and Equation (26), we have

v(t,x) \leq \kappa e^{\eta t_1} e^{\eta(t_2 - t_1)} \cdots e^{\eta(t_{k-1} - t_{k-2})} \xi e^{-\lambda t} \leq \kappa \xi e^{\eta t} e^{-\lambda t} = \kappa \xi e^{-(\lambda - \eta)t},  t_{k-1} \leq t < t_k.  (30)

This implies that

\|y_i(t,x)\|_2 \leq \kappa \xi_i e^{-(\lambda - \eta)t}.

Namely,

\sum_{i=1}^{n} \|u_i(t,x) - u_i^*\|_2 \leq M \|\psi - u^*\| e^{-(\lambda - \eta)t},  t \geq 0,

where

M = \frac{\sum_{i=1}^{n} \xi_i}{\min_{1\leq i\leq n}\{\xi_i\}}.

The proof is thus completed. ∎

Remark 3 As defined for the continuous system (9), \varsigma_l and \zeta_l depend on t. Each is bounded in absolute value by the maximum of g_l(u) and h_l(u), respectively; hence the upper bounds M_l and N_l of these activation functions can be treated as system parameters.

Remark 4 In Theorem 1, \eta_k and \eta are determined by the impulsive disturbance of system (9), while \lambda is the exponential convergence rate of the continuous part of system (9).

Remark 5 The condition (C1) in this theorem means that P - CQK is a nonsingular M-matrix. If P - CQK is a nonsingular M-matrix, there must be a vector \xi = (\xi_1, \ldots, \xi_n)^T > 0 such that (-P + CQK \otimes E(\lambda))\xi < 0. Let the function

L(\lambda_i) = \xi_i(\lambda_i - |C_i^{-1} R_i^{-1}|) + C_i^{-1} \sum_{j=1}^{n} \xi_j \Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}| + |T_{ilj}|) M_l\Big) K_j\, e^{\lambda_i \tau_{ij}}.

Then there exists a \lambda^* < 1 such that

\xi_i(\lambda^* - |C_i^{-1} R_i^{-1}|) + C_i^{-1} \sum_{j=1}^{n} \xi_j \Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}| + |T_{ilj}|) M_l\Big) K_j\, e^{\lambda^* \tau_{ij}} < 0.  (31)

Theorem 2 Under assumptions (H1)–(H2), the zero solution of system (9) is globally exponentially stable with convergence rate \lambda - \eta if the following conditions are satisfied:

(C1) there exists a positive number \lambda > 0 such that W = P - \lambda E - CQK \otimes E(\lambda) is a nonsingular M-matrix, where P, Q, C and E are defined as in Theorem 1;

(C2) \eta = \sup_{k \in N}\{\ln \eta_k / (t_k - t_{k-1})\} < \lambda, where \eta_0 = 1 and \eta_k = \max_{1\leq i\leq n}\{1, a_k + b_k e^{\lambda \tau_j(t)}\}, k \in N, with a_k = \|1 + e_{ik}\|_2 and b_k = \sum_{j=1}^{n}[\|W_{ij}^{(k)}\|_2 + \sum_{l=1}^{n}(\|W_{ijl}^{(k)}\|_2 + \|W_{ilj}^{(k)}\|_2) N_l] \cdot \|L_j\|_2.

Proof Let w_i(t,x) = \|e^{\lambda t} y_i(t,x)\|_2^2, so that w_i(t,x) = \int_{\Omega} e^{2\lambda t} y_i^2(t,x)\, dx. For t \neq t_k, we compute the derivative of w_i(t,x):

\frac{dw_i(t,x)}{dt} = \int_{\Omega}\Big[e^{2\lambda t}\cdot 2 y_i(t,x)\frac{dy_i(t,x)}{dt} + 2\lambda e^{2\lambda t} y_i^2(t,x)\Big] dx = 2 e^{2\lambda t}\Big[\int_{\Omega} y_i(t,x)\frac{dy_i(t,x)}{dt}\, dx + \int_{\Omega} \lambda y_i^2(t,x)\, dx\Big],  (32)

and, by Equation (18),

\int_{\Omega} y_i(t,x)\frac{dy_i(t,x)}{dt}\, dx = \frac{1}{2}\frac{d\|y_i(t,x)\|_2^2}{dt} \leq -|C_i^{-1} R_i^{-1}|\, \|y_i(t,x)\|_2^2 + |C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}| + |T_{ilj}|)\varsigma_l\Big) K_j \|y_i(t,x)\|_2 \|y_j(t-\tau_j(t),x)\|_2.  (33)


Hence,

\frac{dw_i(t,x)}{dt} \leq 2 e^{2\lambda t}(-|C_i^{-1}R_i^{-1}|)\|y_i(t,x)\|_2^2 + 2 e^{2\lambda t}|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)\varsigma_l\Big) K_j \|y_i(t,x)\|_2 \|y_j(t-\tau_j(t),x)\|_2 + 2\lambda \|e^{\lambda t} y_i(t,x)\|_2^2
= 2 e^{2\lambda t}(-|C_i^{-1}R_i^{-1}|)\int_{\Omega} y_i^2(t,x)\, dx + 2 e^{2\lambda t}|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)\varsigma_l\Big) K_j \Big[\int_{\Omega} y_i^2(t,x)\, dx\Big]^{1/2}\Big[\int_{\Omega} y_j^2(t-\tau_j,x)\, dx\Big]^{1/2} + 2\lambda \|e^{\lambda t} y_i(t,x)\|_2^2
= 2(-|C_i^{-1}R_i^{-1}|)\int_{\Omega} e^{2\lambda t} y_i^2(t,x)\, dx + 2 e^{\lambda\tau_j(t)}|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)\varsigma_l\Big) K_j \Big[\int_{\Omega} e^{2\lambda t} y_i^2(t,x)\, dx\Big]^{1/2}\Big[\int_{\Omega} e^{2\lambda(t-\tau_j)} y_j^2(t-\tau_j,x)\, dx\Big]^{1/2} + 2\lambda \|e^{\lambda t} y_i(t,x)\|_2^2
= (-2|C_i^{-1}R_i^{-1}|)\|e^{\lambda t} y_i(t,x)\|_2^2 + 2 e^{\lambda\tau_j(t)}|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)\varsigma_l\Big) K_j \|e^{\lambda t} y_i(t,x)\|_2 \|e^{\lambda(t-\tau_j(t))} y_j(t-\tau_j,x)\|_2 + 2\lambda \|e^{\lambda t} y_i(t,x)\|_2^2,  (34)

so that

\frac{d\|e^{\lambda t} y_i(t,x)\|_2}{dt} \leq (\lambda - |C_i^{-1}R_i^{-1}|)\|e^{\lambda t} y_i(t,x)\|_2 + e^{\lambda\tau_j(t)}|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)M_l\Big) K_j \|e^{\lambda(t-\tau_j(t))} y_j(t-\tau_j,x)\|_2.  (35)

Let v_i(t,x) = \|e^{\lambda t} y_i(t,x)\|_2; then

D^+ v_i(t,x) \leq (\lambda - |C_i^{-1}R_i^{-1}|) v_i(t,x) + e^{\lambda\tau_j(t)}|C_i^{-1}|\sum_{j=1}^{n}\Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}|+|T_{ilj}|)M_l\Big) K_j v_j(t-\tau_j(t),x),  (36)

that is,

D^+ v(t,x) \leq (\lambda E - P) v(t,x) + \{[CQK \otimes E(\lambda)] \otimes V(t,x)\} e_n.  (37)


By condition (C1), W = P - \lambda E - CQK \otimes E(\lambda) is a nonsingular M-matrix, which implies that there exists a vector \xi = (\xi_1, \xi_2, \ldots, \xi_n)^T > 0 such that [\lambda E - P + CQK \otimes E(\lambda)]\xi < 0. Hence, by Lemma 1 and arguments similar to those in the proof of Theorem 1, we have

\sum_{i=1}^{n} \|u_i(t,x) - u_i^*\|_2 \leq M \|\psi - u^*\| e^{-(\lambda - \eta)t},  t \geq 0,  (38)

where

M = \frac{\sum_{i=1}^{n} \xi_i}{\min_{1\leq i\leq n}\{\xi_i\}}. ∎

Remark 6 In fact, the parameter \lambda in Remark 4 satisfies 0 < \lambda < \lambda^* = \min_{1\leq i\leq n}\{\lambda_i\}. This allows us to estimate the upper bound of \lambda in condition (C2) by solving the following optimization problem:

(P1)  max \lambda^*,  s.t. L(\lambda_i) < 0.

This optimization problem with nonlinear constraints can be solved, for example, by the MATLAB function fmincon.
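As an alternative to fmincon, note that for a fixed \lambda condition (C1) of Theorem 2 is a yes/no question (is W(\lambda) = P - \lambda E - CQK \otimes E(\lambda) a nonsingular M-matrix?), and the admissible set of \lambda shrinks monotonically, so the largest admissible \lambda can be bracketed by bisection. A sketch with assumed two-neuron data (not the parameters of Section 4), using strict row diagonal dominance as a sufficient M-matrix test:

```python
import numpy as np

# Bisection for the largest lambda keeping W(lam) = P - lam*E - CQK (x) E(lam)
# a nonsingular M-matrix; all numerical data here are assumed for illustration.
P = np.diag([5.0, 4.0])                       # diag(|C_i^{-1} R_i^{-1}|)
CQK = np.array([[0.6, 0.3], [0.2, 0.5]])      # the product C Q K, elementwise >= 0
tau = np.array([[1.0, 2.0], [1.0, 2.0]])      # tau_ij = tau_j (column-constant)

def is_m_matrix(lam):
    W = P - lam * np.eye(2) - CQK * np.exp(lam * tau)   # '*' = Hadamard product
    off = np.abs(W - np.diag(np.diag(W))).sum(axis=1)
    return bool(np.all(np.diag(W) > off))     # strict dominance => nonsingular M-matrix

lo, hi = 0.0, 5.0                             # is_m_matrix(0) holds, is_m_matrix(5) fails
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if is_m_matrix(mid) else (lo, mid)
print(round(lo, 4))                           # estimated largest admissible lambda
```

The bisection is valid here because the dominance margin is strictly decreasing in \lambda: the diagonal of W shrinks while the off-diagonal magnitudes grow.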

Corollary 1 Under assumptions (H1)–(H2), the zero solution of system (9) is globally exponentially stable if condition (C2) of Theorem 1 holds and any of the following conditions is satisfied:

(i) M = P - CQK is a row (column) diagonally dominant matrix;

(ii) |R_i^{-1}| - \Big[|T_{ii}| + \sum_{l=1}^{n}(|T_{iil}| + |T_{ili}|) M_l\Big] K_i > \frac{1}{\xi_i} \sum_{j=1, j\neq i}^{n} \xi_j \Big(|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}| + |T_{ilj}|) M_l\Big) K_j;

(iii) \max_{1\leq i\leq n} |R_i| \sum_{j=1}^{n} \Big[|T_{ij}| + \sum_{l=1}^{n}(|T_{ijl}| + |T_{ilj}|) M_l\Big] K_j < 1.

Proof Each of these three conditions implies that P - CQK is a nonsingular M-matrix, which ensures that condition (C1) in Theorem 1 holds. This completes the proof. ∎

4. Numerical example

Consider the following impulsive high-order Hopfield neural network:

C_i \frac{du_i(t,x)}{dt} = \sum_{r=1}^{2}\frac{\partial}{\partial x_r}\Big(D_{ir}\frac{\partial u_i(t,x)}{\partial x_r}\Big) - \frac{u_i(t,x)}{R_i} + \sum_{j=1}^{3} T_{ij}\, g_j(u_j(t-\tau_j(t),x)) + \sum_{j=1}^{3}\sum_{l=1}^{3} T_{ijl}\, g_j(u_j(t-\tau_j(t),x))\, g_l(u_l(t-\tau_l(t),x)) + I_i,  t \neq t_k,  (39)

\Delta u_i(t,x) = e_i u_i(t^-,x) + \sum_{j=1}^{3} W_{ij}\, h_j(u_j(t^- - \tau_j(t),x)) + \sum_{j=1}^{3}\sum_{l=1}^{3} W_{ijl}\, h_j(u_j(t^- - \tau_j(t),x))\, h_l(u_l(t^- - \tau_l(t),x)),  t = t_k,  (40)

\frac{\partial u_i(t,x)}{\partial x}\Big|_{\partial\Omega} := \Big(\frac{\partial u_i(t,x)}{\partial x_1}, \frac{\partial u_i(t,x)}{\partial x_2}\Big)^{T} = 0,  t \geq t_0,  (41)

where i = 1, 2, 3 and r = 1, 2 (so m = 2). We take the system parameters as follows:

g_1(u_1) = tanh(0.89 u_1(t,x)),  g_2(u_2) = tanh(0.76 u_2(t,x)),  g_3(u_3) = tanh(0.81 u_3(t,x)),
h_1(u_1) = tanh(0.10 u_1(t,x)),  h_2(u_2) = tanh(0.13 u_2(t,x)),  h_3(u_3) = tanh(0.12 u_3(t,x)),
\tau_1(t) = sin^2(t),  \tau_2(t) = 2 cos^2(t),  \tau_3(t) = 3 sin^2(t),  \tau = 3,
C = diag(1.2, 1.5, 1.6),  R = diag(0.05, 0.03, 0.06),  E = diag(-0.95, -0.84, -0.99),
K = diag(0.89, 0.76, 0.81),  L = diag(0.87, 0.85, 0.76),  M = (1, 1, 1)^T,  N = (1, 1, 1)^T,

D_{ir} = [0.01 0.02; 0.02 0.03; 0.02 0.01],

T_{ij} = [0.30 0.71 -0.2; -0.46 0.24 0.5; -0.12 -0.5 0.6],

T_{1ij} = [-0.11 0.32 0.45; 0.02 0.01 0.09; -0.31 -0.02 -0.03],

T_{2ij} = [0.41 -0.06 -0.4; 0.11 -0.31 0.12; -0.09 0.34 -0.05],

T_{3ij} = [0.12 0.04 0.01; -0.10 -0.11 -0.3; 0.09 0.04 0.09],

W_{ij} = [0.11 0.15 -0.12; 0.16 -0.17 0.09; -0.18 0.04 0.03],

W_{1ij} = [-0.03 0.04 0.03; 0.06 0.07 -0.07; 0.02 -0.04 0.01],

W_{2ij} = [0.05 0.02 -0.06; 0.06 0.07 -0.07; 0.02 -0.04 0.01],

W_{3ij} = [-0.05 -0.04 0.02; 0.04 -0.03 -0.05; -0.03 0.03 -0.04].

Solving (P1) in the form

max \lambda,  s.t. [\lambda E - P + CQK \otimes E(\lambda)]\xi < 0,  \lambda > 0,  \xi > 0,

with

E(\lambda) = [e^{\lambda} e^{2\lambda} e^{3\lambda}; e^{\lambda} e^{2\lambda} e^{3\lambda}; e^{\lambda} e^{2\lambda} e^{3\lambda}]

and \xi = (\xi_1, \xi_2, \xi_3)^T, we obtain \lambda^* = 0.654928 and (\xi_1, \xi_2, \xi_3)^T = (450640000, 334560000, 739170000)^T. It is easy to verify that condition (C2) is satisfied by selecting \eta_k = 2.333 and \inf_{k\in Z}\{t_k - t_{k-1}\} > 1.2936. Hence, by Theorem 1, the equilibrium point of system (39)–(41) is globally exponentially stable with an estimated convergence rate 0.2313.

Theorem 2 can also be applied to this example. Note that with \lambda = 0.5, the matrix

P - \lambda E - CQK \otimes E(\lambda) = [12.4884 -2.3275 -0.7865; -3.2441 20.7725 -1.353; -2.1938 -1.2986 8.2643]

is an M-matrix, so condition (C1) in Theorem 2 holds. Condition (C2) in Theorem 2 can be ensured by \eta_k = 1.7541 and \inf_{k\in Z}\{t_k - t_{k-1}\} > 1.1240. By Theorem 2, the equilibrium point u^* of system (39)–(41) is globally exponentially stable with an estimated convergence rate 0.2190.
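The M-matrix claim above can be re-checked numerically from the printed data. The sketch below rebuilds Q as in Theorem 1 and tests strict row diagonal dominance of W at \lambda = 0.5; it reads the printed block T_{1ij} as the slice (T_{1jl}) of the second-order weight tensor (and similarly for T_{2ij}, T_{3ij}), which is an assumption about the notation, so the computed entries may differ from the matrix displayed above even though the M-matrix property itself still holds. The Corollary 1(iii) quantity is computed as well.

```python
import numpy as np

# Re-check condition (C1) of Theorem 2 at lam = 0.5 for the example of Section 4.
# Assumption: the printed "T_{sij}" blocks are read as slices (T_{sjl}), s = 1, 2, 3.
Cv = np.array([1.2, 1.5, 1.6])
R = np.array([0.05, 0.03, 0.06])
Kv = np.array([0.89, 0.76, 0.81])
M = np.ones(3)                                 # M = (1, 1, 1)^T
T = np.array([[0.30, 0.71, -0.2],
              [-0.46, 0.24, 0.5],
              [-0.12, -0.5, 0.6]])
T2nd = np.array([[[-0.11, 0.32, 0.45], [0.02, 0.01, 0.09], [-0.31, -0.02, -0.03]],
                 [[0.41, -0.06, -0.4], [0.11, -0.31, 0.12], [-0.09, 0.34, -0.05]],
                 [[0.12, 0.04, 0.01], [-0.10, -0.11, -0.3], [0.09, 0.04, 0.09]]])

# Q_ij = |T_ij| + sum_l (|T_ijl| + |T_ilj|) M_l, as in Theorem 1
Q = np.abs(T) + np.einsum('ijl,l->ij', np.abs(T2nd) + np.abs(T2nd.transpose(0, 2, 1)), M)

P = np.diag(1.0 / (Cv * R))                    # diag(|C_i^{-1} R_i^{-1}|)
CQK = np.diag(1.0 / Cv) @ Q @ np.diag(Kv)
lam = 0.5
tau = np.tile([1.0, 2.0, 3.0], (3, 1))         # tau_ij = tau_j at its maximum, as in E(lam)
W = P - lam * np.eye(3) - CQK * np.exp(lam * tau)

off = np.abs(W - np.diag(np.diag(W))).sum(axis=1)
m_matrix = bool(np.all(np.diag(W) > off))      # strict dominance => nonsingular M-matrix
cor_iii = float(np.max(R * (Q * Kv).sum(axis=1)))   # Corollary 1(iii): must be < 1
print(m_matrix, round(cor_iii, 3))
```

With this reading the diagonal of W comes out near (14.2, 19.7, 7.1) rather than exactly the printed (12.49, 20.77, 8.26), but every row remains strictly dominant, so both Theorem 2(C1) and the Corollary 1(iii) bound are confirmed.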

5. Conclusions

The problem of global exponential stability for impulsive high-order Hopfield-type neural networks with time-varying delays and reaction–diffusion terms has been investigated in this paper. Two global exponential stability criteria, together with estimates of the exponential convergence rate, have been derived by establishing a delay differential inequality with impulsive initial conditions and using M-matrix theory. These criteria are easy to verify and are applicable to the design of globally exponentially stable artificial neural networks. An illustrative numerical example is given to show the effectiveness of the results.

Acknowledgements

This work was partially supported by the Fundamental Research Funds for the Central Universities of China (Project Nos. CDJXS10 18 00 16 and CDJZR10 18 55 01) and the National Natural Science Foundation of China (Grant No. 60974020).

References

[1] Y. Abu-Mostafa and J. Jacques, Information capacity of the Hopfield model, IEEE Trans. Inform. Theory 31 (1985), pp. 461–464.
[2] D.D. Bainov and P.S. Simeonov, Systems with Impulse Effect: Stability, Theory and Applications, Ellis Horwood, Chichester, 1989.
[3] P. Baldi, Neural networks, orientations of the hypercube, and algebraic threshold functions, IEEE Trans. Inform. Theory 34 (1988), pp. 523–530.
[4] J. Cao, Global exponential stability of Hopfield neural networks, Int. J. Syst. Sci. 32 (2001), pp. 233–236.
[5] Z.H. Guan, L. James, and G.R. Chen, On impulsive auto-associative neural networks, Neural Networks 13 (2000), pp. 63–69.
[6] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl Acad. Sci. USA 79 (1982), pp. 2554–2558.
[7] B. Kosmatopoulos, M. Polycarpou, A. Christodoulou, and A. Ioannou, High-order neural network structures for identification of dynamical systems, IEEE Trans. Neural Networks 6(2) (1995), pp. 422–432.
[8] C.D. Li, G. Feng, and T.W. Huang, On hybrid impulsive and switching neural networks, IEEE Trans. Syst. Man, Cybern.-B 38(6) (2008), pp. 1549–1560.
[9] C.D. Li and X.F. Liao, Delay-dependent and delay-independent stability criteria for cellular neural networks with delays, Int. J. Bifurc. Chaos 16(11) (2006), pp. 3323–3340.
[10] K.L. Li and Q.K. Song, Exponential stability of impulsive Cohen–Grossberg neural networks with time-varying delays and reaction–diffusion terms, Neurocomputing 72 (2008), pp. 231–240.
[11] X.J. Liang and L.D. Wu, Global exponential stability of Hopfield neural network and its applications, Sci. China (Ser. A) 25 (1995), pp. 523–532.
[12] X.X. Liao, Stability of Hopfield neural networks, Sci. China (Ser. A) 23 (1992), pp. 1025–1035.
[13] X.X. Liao and Y. Liao, Stability of Hopfield-type neural networks (II), Sci. China (Ser. A) 40(8) (1997), pp. 813–816.
[14] X.Z. Liu and K.L. Teo, Exponential stability of impulsive high-order Hopfield-type neural networks with time-varying delays, IEEE Trans. Neural Networks 16(6) (2005), pp. 1329–1339.
[15] X.Z. Liu and Q. Wang, On stability in terms of two measures for impulsive systems of functional differential equations, J. Math. Anal. Appl. 326 (2007), pp. 252–265.
[16] X.Y. Lou and B.T. Cui, Novel global stability criteria for high-order Hopfield-type neural networks with time-varying delays, J. Math. Anal. Appl. 330(1) (2007), pp. 144–158.
[17] R. McEliece, E. Posner, E. Rodemich, and S. Venkatesh, The capacity of the Hopfield associative memory, IEEE Trans. Inform. Theory 33 (1987), pp. 461–482.
[18] J. Pan, X.Z. Liu, and S.M. Zhong, Stability criteria for impulsive reaction–diffusion Cohen–Grossberg neural networks with time-varying delays, Math. Comput. Model. 51(9–10) (2010), pp. 1037–1050.
[19] J.L. Qiu, Exponential stability of impulsive neural networks with time varying delays and reaction–diffusion terms, Neurocomputing 70(4–6) (2007), pp. 1102–1108.
[20] Q.K. Song and J. Cao, Global exponential stability and existence of periodic solutions in BAM networks with delays and reaction–diffusion terms, Chaos, Solitons and Fractals 23 (2005), pp. 421–430.
[21] Z. Wang, Y. Liu, and X. Liu, On complex artificial higher order neural networks: Dealing with stochasticity, jumps and delays, in Artificial Higher Order Neural Networks for Economics and Business, Chapter 21, M. Zhang, ed., IGI Global, Hershey, USA, 2008, pp. 466–483.
[22] L. Wang and D. Xu, Global exponential stability of Hopfield reaction–diffusion neural networks with variable delays, Sci. China Ser. 46 (2003), pp. 466–474.
[23] Y. Xia, J. Cao, and S.S. Cheng, Global exponential stability of delayed cellular neural networks with impulses, Neurocomputing 70 (2007), pp. 2495–2501.
[24] Z.B. Xu, Global convergence and asymptotic stability of asymmetric Hopfield neural networks, J. Math. Anal. Appl. 191 (1995), pp. 405–427.
[25] B.J. Xu, X.Z. Liu, and X.X. Liao, Global asymptotic stability of high-order Hopfield type neural networks with time delays, Comput. Math. Appl. 45 (2003), pp. 1729–1737.
[26] B.J. Xu, X.Z. Liu, and K.L. Teo, Global exponential stability of impulsive high-order Hopfield type neural networks with delays, Comput. Math. Appl. 57(11–12) (2009), pp. 1959–1967.
[27] Z. Yang and D. Xu, Impulsive effects on stability of Cohen–Grossberg neural networks with variable delays, Appl. Math. Comput. 177 (2006), pp. 63–78.
[28] Q. Zhou, L. Wan, and J. Sun, Exponential stability of reaction–diffusion generalized Cohen–Grossberg neural networks with time-varying delays, Chaos, Solitons and Fractals 32 (2007), pp. 1713–1719.
