Bifurcations, Stability and Synchronization in Delayed Oscillatory Networks

Michele Bonnin, Fernando Corinto, and Marco Gilli
Department of Electronics, Politecnico di Torino, Italy

Abstract

Current studies in neurophysiology award a key role to collective behaviors in both neural information and image processing. This fact suggests exploiting phase locking and frequency entrainment in oscillatory neural networks for computational purposes. In practical implementations of artificial neural networks, delays are always present due to the non-null processing time and the finite signal propagation speed. This manuscript deals with networks composed of delayed oscillators. We show that either long delays or constant external inputs can elicit oscillatory behavior in the single neural oscillator. Using center manifold reduction and normal form theory, the equations governing the whole network dynamics are reduced to an amplitude-phase model (i.e. equations describing the evolution of both the amplitudes and the phases of the oscillators). The analysis of a network with a simple architecture reveals that different kinds of phase locked oscillations are admissible, and that in-phase and anti-phase locked solutions may coexist.

1 Introduction

Many studies in neurophysiology have revealed the existence of patterns of local synaptic circuitry in several parts of the brain. Local populations of excitatory and inhibitory neurons have extensive and strong synaptic connections between each other, so that action potentials generated by the former excite the latter, which in turn reciprocally inhibit the former. Such pairs of excitatory and inhibitory populations of neurons, usually called neural oscillators, can be found in the spinal cord, in the olfactory bulb, in the cortico-thalamic system and in the neocortex [Hoppensteadt & Izhikevich, 1997].

The neural oscillators within one brain structure can be connected into a network because the excitatory (and sometimes inhibitory) neurons can have synaptic contacts with other, distant, neurons. The synaptic organization, together with the oscillators' characteristics, determines the network dynamics.

There is an ongoing debate on the role of dynamics in neural computation. Collective behaviors, not intrinsic to any individual neuron, are believed to play a key role in neural information processing. The phenomenon of collective synchronization, in which an enormous system of oscillators spontaneously locks to a common frequency, is ubiquitous in physical and biological systems, and is believed to be responsible for self-organization in nature [Kuramoto, 1984]. For instance, it has been observed in networks of pacemaker cells in the heart [Peskin, 1975] and in the circadian pacemaker cells in the suprachiasmatic nucleus of the brain [Liu et al., 1997]. Partial synchrony in cortical networks is believed to generate various brain oscillations, such as the alpha and gamma EEG rhythms, while increased synchrony may result in pathological types of activity, such as epilepsy. Current theories of visual neuroscience assume that local object features are represented by cells which are distributed across multiple visual areas in the brain. The segregation of an object requires the unique identification and integration of the pertaining cells, which have to be bound into one assembly coding for the object


in question [Roskies, 1999]. Several authors have suggested that such a binding of cells could be achieved by selective synchronization [Gray et al., 1989; Schillen & Konig, 1994].

The celebrated Hopfield model for artificial neural networks has provided fundamental insights into the possibility of realizing bio-inspired architectures for information and image processing. The dynamic evolution of the network is governed by the assumed dynamics of the individual processing units and their reciprocal interactions. Such a network behaves as an associative memory if the configurations representing the stored memories are locally stable attractors of the proposed dynamics.

However, the increasing experimental evidence about the importance of dynamic attractors rather than static attractors in neural information processing has suggested new architectures for neurocomputers, which consist of coupled oscillatory arrays with periodic and/or complex behavior (including the possibility of chaos). As shown in [Hoppensteadt & Izhikevich, 1999; Hoppensteadt & Izhikevich, 2001; Itoh & Chua, 2004], such networks can be exploited as dynamic associative memories and for dynamic pattern recognition.

In electronic implementations of analog neural networks, time delays are present in the communication and response of neurons due to the finite switching speed of amplifiers (neurons). Delayed neural networks have recently received much attention because of their rich variety of spatiotemporal behaviors. Although delay effects leading to oscillatory behavior have been well known for a long time, especially in radio engineering, only recently has it been emphasized that delay-induced instabilities can lead to more complex behavior.

From the mathematical point of view, the presence of delays makes the problem harder to handle. In fact, the state vector characterizing a nonlinear delayed system evolves in an infinite dimensional functional space. It is thus necessary to generalize the finite dimensional state space to an infinite dimensional extended space, embedding delay differential equations in the framework of functional differential equations [Hale, 1977]. Hence, the analysis of excitatory-inhibitory delayed neural networks involves many difficulties, and as a consequence several authors restricted their attention to networks composed of two neurons [Belair & Campbell, 1994; Olien & Belair, 1997; Wei & Ruan, 1999; Faria, 2000] in the absence of external inputs. The presence of a forcing signal has been considered in [Giannakopoulos & Zapp, 2001] for a specific coupling function. Networks composed of limit cycle oscillators where the delays only appear in the couplings were investigated in [Izhikevich, 1998].

In this paper we consider networks of delayed neural oscillators in the presence of a constant external forcing term. The paper is organized as follows: section 2 describes the mathematical model of the network under investigation. In section 3 we analyze the dynamical behavior of a single neural oscillator, i.e. a network composed of an excitatory and an inhibitory neuron. We determine equilibrium points, their stability, and bifurcations. Conditions which guarantee that pitchfork and Hopf bifurcations take place are established. The stability and the direction of the Hopf bifurcation are determined by applying center manifold reduction and the normal form method. In section 4 we reduce the whole network to its canonical model, under the hypothesis that each neural oscillator is close to a Hopf bifurcation, resorting again to center manifold reduction and the normal form method. As a case study, a chain of identical oscillators with nearest neighbors symmetric connections is investigated in section 5. The possible phase locked solutions and their stability are analytically determined, and their possible coexistence is demonstrated. Section 6 is devoted to conclusions.

2 Mathematical Model

Based on the Hopfield neural network model, differential equations including delays in the sigmoidal activation functions have become a prominent tool in the investigation of bio-inspired networks. The


governing equation is usually assumed to be

\dot{x}_i(t) = -\alpha_i x_i(t) + \sum_{j=1}^{N} a_{ij}\, f_i(x_j(t-\tau_j)) + \rho_i(t), \qquad i = 1, \ldots, N, \qquad (1)

where x_i(t) denotes the activity of the ith neuron, \alpha_i > 0 is a damping coefficient, f_i(\cdot) is a monotone sigmoidal activation function, \rho_i(t) is an external input applied to the ith neuron and a_{ij} is the strength of the synaptic connection between the ith and the jth neurons.
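As an illustration of how a model of the form (1) can be explored numerically, the following sketch integrates it with a forward Euler scheme and a simple history buffer for the delayed arguments. The tanh activation, the coupling matrix and all numerical values are assumptions chosen for illustration, not parameters taken from the paper.

```python
import numpy as np

def simulate_delayed_network(A, alpha, tau, rho, T=100.0, dt=0.01, f=np.tanh):
    """Forward-Euler integration of a delayed Hopfield-type model like Eq. (1).
    A: (N, N) synaptic weights, alpha: (N,) damping, tau: (N,) delays,
    rho: (N,) constant external inputs, f: sigmoidal activation (assumed tanh)."""
    N = len(alpha)
    lag = np.round(np.asarray(tau, dtype=float) / dt).astype(int)  # delays in steps
    steps = int(T / dt)
    x = np.zeros((steps + 1, N))
    x[0] = 0.1 * np.random.randn(N)          # small random initial history
    for k in range(steps):
        # delayed activities x_j(t - tau_j); clamp to the initial condition for t < 0
        xd = np.array([x[max(k - lag[j], 0), j] for j in range(N)])
        x[k + 1] = x[k] + dt * (-np.asarray(alpha) * x[k] + A @ f(xd) + np.asarray(rho))
    return x

# Example: two mutually coupled units with a common delay (illustrative values)
A = np.array([[0.0, -2.0], [2.0, 0.0]])
trace = simulate_delayed_network(A, alpha=[1.0, 1.0], tau=[1.5, 1.5], rho=[0.0, 0.0])
```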

According to experimental results about the synaptic organization of the brain, we consider the neural architecture proposed in [Schillen & Konig, 1994]. We shall assume that each neural oscillator is composed of a tightly coupled excitatory and inhibitory unit, and that the network arises due to weak connections among different neural oscillators. For the sake of simplicity, the activities of both units are described by scalar variables. The resulting architecture is described in figure 1, and the governing equations are

\dot{u}_i(t) = -\alpha_i u_i(t) + a_i F_{i,1}(v_i(t-\tau_{i,2})) + \varepsilon G_{i,1}(v(t-\tau_2)) + I_i(t)
\dot{v}_i(t) = -\alpha_i v_i(t) + b_i F_{i,2}(u_i(t-\tau_{i,1})) + \varepsilon G_{i,2}(u(t-\tau_1)) + J_i(t), \qquad i = 1, \ldots, N, \qquad (2)

where u_i and v_i represent the activities of the excitatory and the inhibitory neurons, F_1(v) and F_2(u) are monotone sigmoidal activation functions, G_1(v) and G_2(u) are the coupling functions, and I_i(t) and J_i(t) represent the external forcing terms. The coefficients a_i and b_i represent the strength of the local synaptic connections.

Figure 1: The neural architecture under investigation. (a) The neural oscillator subject to an external input. (b) The network architecture proposed in [Schillen & Konig, 1994].

3 Neural Oscillator Analysis

In this section we investigate the dynamic behavior of a single neural oscillator, determining the possible equilibrium points and investigating their stability and bifurcations. In order to simplify the notation, the subscript i in equation (2) is omitted. Thus we consider the system of delayed differential equations

\dot{u}(t) = -\alpha u(t) + a F_1(v(t-\tau_2)) + I(t)
\dot{v}(t) = -\alpha v(t) + b F_2(u(t-\tau_1)) + J(t), \qquad (3)

and explore its dynamical behavior as some parameters and/or the delays are varied. In the absence of delays, and if constant external inputs are applied, a straightforward application of Bendixson's criterion shows that the system does not admit periodic solutions.


Numerical experiments show that oscillatory behavior can be induced in the network by different mechanisms [Schillen & Konig, 1994]. In the absence of external forcing, as the time delays increase above a certain threshold, the network begins to oscillate. A similar transition is observed in the presence of subthreshold delays, when the strength of the synaptic connections is increased, or in the presence of a strong enough constant external input. It is worth noting that, since the forcing term is constant in time, it is not driving the oscillator by itself. This input dependence, therefore, allows a stimulus-dependent transition between a non-oscillatory and an oscillatory state.

3.1 Equilibrium points

In this subsection we determine the equilibrium points of the single neural oscillator in the presence of a constant external forcing (I and J constant). As a first step, we introduce the new variables

x(t) = u(t) - \frac{I}{\alpha}, \qquad y(t) = v(t) - \frac{J}{\alpha}, \qquad (4)

which transform equation (3) into

\dot{x}(t) = -\alpha x(t) + a f_1(y(t-\tau_2))
\dot{y}(t) = -\alpha y(t) + b f_2(x(t-\tau_1)), \qquad (5)

where

f_1(y(t)) = F_1\!\left( y(t) + \frac{J}{\alpha} \right), \qquad f_2(x(t)) = F_2\!\left( x(t) + \frac{I}{\alpha} \right). \qquad (6)

It is clear that equilibrium points of equation (3) correspond to equilibrium points of (5) shifted by a constant quantity. Equilibria of equation (5) are determined by the intersections of the nullclines N_x = \{(x, y) \in \mathbb{R}^2 \,|\, \alpha x - a f_1(y) = 0\} and N_y = \{(x, y) \in \mathbb{R}^2 \,|\, \alpha y - b f_2(x) = 0\}, that is, intersections of

x = \frac{a}{\alpha} f_1(y)
y = \frac{b}{\alpha} f_2(x). \qquad (7)

Due to the monotonicity of the sigmoidal functions, if the coefficients a and b have opposite signs, there is one and only one intersection, and thus one and only one equilibrium point. Conversely, if a and b have the same sign, depending on the values of the bifurcation parameters there can be more than one intersection. Elementary calculus yields conditions for the existence of more than one intersection; in fact, by differentiating equation (7), we obtain

\frac{dx}{dy} = \frac{a}{\alpha} f_1'(y) \;\Rightarrow\; \frac{dy}{dx} = \frac{\alpha}{a f_1'(y)}, \qquad \frac{dy}{dx} = \frac{b}{\alpha} f_2'(x). \qquad (8)

From (8), and introducing D = a\, b\, f_1'(y)\, f_2'(x), it follows that there is one and only one equilibrium point if

D < \alpha^2. \qquad (9)

3.2 Bifurcation Analysis

We now look at the bifurcations involving a generic equilibrium point (x, y)^\top, following the approach described in [Olien & Belair, 1997; Wei & Ruan, 1999; Faria, 2000]. A linearization of equation (5) in


the neighborhood of the equilibrium point leads to the variational equation

\dot{x}(t) = -\alpha x(t) + a f_1'(y)\, y(t-\tau_2) + O(y^2)
\dot{y}(t) = -\alpha y(t) + b f_2'(x)\, x(t-\tau_1) + O(x^2). \qquad (10)

We search for solutions of the variational equation (10) in the form (x(t), y(t))^\top = (x\, e^{\lambda t}, y\, e^{\lambda t})^\top, obtaining

(\lambda + \alpha)\, x - a f_1'(y)\, e^{-\lambda \tau_2}\, y = 0
(\lambda + \alpha)\, y - b f_2'(x)\, e^{-\lambda \tau_1}\, x = 0, \qquad (11)

or, in matrix form,

\begin{pmatrix} \lambda + \alpha & -a f_1'(y)\, e^{-\lambda \tau_2} \\ -b f_2'(x)\, e^{-\lambda \tau_1} & \lambda + \alpha \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \qquad (12)

In order to have a nontrivial solution the determinant must be zero; this leads to the characteristic equation

(\lambda + \alpha)^2 - a\, b\, f_1'(y)\, f_2'(x)\, e^{-\lambda(\tau_1+\tau_2)} = 0. \qquad (13)

To simplify the notation we introduce

a f_1'(y) = \alpha_{12}, \qquad b f_2'(x) = \alpha_{21}, \qquad \tau_1 + \tau_2 = 2\tau, \qquad \alpha_{12}\alpha_{21} = D, \qquad (14)

thus the characteristic equation (13) becomes

(\lambda + \alpha)^2 e^{2\lambda\tau} - D = 0, \qquad (15)

which implies

(\lambda + \alpha)\, e^{\lambda\tau} = \pm\sqrt{D}. \qquad (16)

The roots of the characteristic equation determine the stability of the equilibrium point. The latter is locally stable if and only if all the roots lie in the left half of the complex plane. The stability properties of the equilibrium points are described in the next theorems.
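Although the theorems below locate the critical parameter values analytically, the roots of (16) can also be checked numerically. One convenient and standard trick, not used in the paper, writes them through the Lambert W function: from (\lambda+\alpha) e^{\lambda\tau} = \pm\sqrt{D} one gets \lambda = -\alpha + W_k(\pm\tau\sqrt{D}\, e^{\alpha\tau})/\tau over the branches k. The sketch below uses scipy for this; the parameter values are illustrative assumptions.

```python
import numpy as np
from cmath import sqrt
from scipy.special import lambertw

def characteristic_roots(alpha, D, tau, branches=range(-5, 6)):
    """Roots of (lambda + alpha) e^{lambda tau} = +/- sqrt(D), Eq. (16),
    via lambda = -alpha + W_k(+/- tau sqrt(D) e^{alpha tau}) / tau."""
    roots = []
    for s in (+1.0, -1.0):
        arg = s * tau * sqrt(D) * np.exp(alpha * tau)
        roots += [-alpha + lambertw(arg, k) / tau for k in branches]
    return np.array(roots)

alpha, D = 1.0, -2.0              # D < -alpha^2: a Hopf bifurcation is possible
for tau in (0.5, 1.2):
    lams = characteristic_roots(alpha, D, tau)
    print(f"tau = {tau}: rightmost Re(lambda) = {lams.real.max():+.4f}")
# The rightmost real part changes sign as tau crosses the critical delay of Eq. (31).
```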

Theorem 1 For |D| < \alpha^2, equation (5) has a unique asymptotically stable equilibrium point for all \tau > 0. For D = \alpha^2 the equilibrium point undergoes a supercritical pitchfork bifurcation, becoming unstable when D > \alpha^2.

Proof The existence of a unique equilibrium point for D < \alpha^2 was proved in the previous section (see equation (9)). In order to prove that new equilibria arise through a pitchfork bifurcation at D = \alpha^2, we substitute \lambda = r \pm i\omega in equation (16), obtaining

(r + \alpha \pm i\omega)\, e^{r\tau} \left[ \cos(\omega\tau) \pm i \sin(\omega\tau) \right] = \pm\sqrt{D}. \qquad (17)

The bifurcation occurs for r = 0; separating the real and imaginary parts in the equation above we obtain

\alpha \cos(\omega\tau) - \omega \sin(\omega\tau) = \pm\sqrt{D}, \qquad (18)
\omega \cos(\omega\tau) + \alpha \sin(\omega\tau) = 0, \qquad (19)

and squaring and summing both equations we get

\alpha^2 + \omega^2 = D \;\Rightarrow\; \omega = \sqrt{D - \alpha^2}, \qquad (20)


where only the positive root is considered, since \lambda = r \pm i\omega. D = \alpha^2 implies \omega = 0, and thus a pitchfork bifurcation occurs. To determine the stability of the solution and the direction of the bifurcation we look at the derivative of equation (15) with respect to D, obtaining

2(\lambda + \alpha)\, e^{2\lambda\tau} \left[ 1 + \tau(\lambda + \alpha) \right] \frac{\partial \lambda}{\partial D} - 1 = 0. \qquad (21)

For \lambda = 0, equation (21) implies

2\alpha\, (1 + \tau\alpha)\, \frac{\partial \lambda}{\partial D} = 1, \qquad (22)

which yields

\frac{\partial r}{\partial D} = \frac{1}{2\alpha\,(1 + \tau\alpha)} > 0. \qquad (23)

Consequently, for D < \alpha^2 we have r < 0 and the equilibrium point is stable, while D > \alpha^2 implies r > 0 and the original equilibrium point is unstable: the pitchfork bifurcation is thus supercritical.

To prove that the delay has no influence on the bifurcation process we consider the derivative of equation (16) with respect to \tau, that is

\left( \frac{\partial \lambda}{\partial \tau} \left[ 1 + \tau(\lambda + \alpha) \right] + \lambda(\lambda + \alpha) \right) e^{\lambda\tau} = 0. \qquad (24)

\lambda = 0 implies

\frac{\partial \lambda}{\partial \tau}\, (\alpha\tau + 1) = 0 \;\Rightarrow\; \frac{\partial \lambda}{\partial \tau} = 0, \qquad (25)

and thus the root of the characteristic equation does not depend on the delay value.

Theorem 2 For |D| < \alpha^2, equation (5) admits one and only one asymptotically stable equilibrium point. For any value D < -\alpha^2, there exists a critical value of the delay \tau = \tau_c such that the unique asymptotically stable equilibrium point undergoes a Hopf bifurcation.

Proof The first part of the theorem has already been proved. The Hopf bifurcation is associated with the appearance of a pair of purely imaginary eigenvalues. Hence, we search for solutions of the type \lambda = \pm i\omega of the characteristic equation. A separation of the real and the imaginary parts in equation (17) yields (r = 0)

\alpha \cos(\omega\tau) - \omega \sin(\omega\tau) = 0, \qquad (26)
\pm\left[ \omega \cos(\omega\tau) + \alpha \sin(\omega\tau) \right] = \pm\sqrt{-D}, \qquad (27)

and squaring and summing we obtain

\omega = \sqrt{-D - \alpha^2}, \qquad (28)

where only the positive root is considered, since \lambda = \pm i\omega. It is therefore evident that equation (28) is well defined if and only if D < -\alpha^2. From equation (26) we have

\omega = \alpha \cot(\omega\tau). \qquad (29)

Hence \omega\tau \in (k\pi, \frac{\pi}{2} + k\pi). Introducing (29) into (27) we obtain

\sin(\omega\tau) = \mp \frac{\alpha}{\sqrt{-D}}. \qquad (30)


If \omega\tau \in (2k\pi, \frac{\pi}{2} + 2k\pi) we take the positive root, and vice versa if \omega\tau \in (\pi + 2k\pi, \frac{3\pi}{2} + 2k\pi). The associated critical value of the delay is

\tau_c = \frac{\arcsin\!\left( \mp \frac{\alpha}{\sqrt{-D}} \right) + 2k\pi}{\sqrt{-D - \alpha^2}}, \qquad k = 0, 1, 2, \ldots \qquad (31)

where the sign of the argument of the arcsin function is determined by the value of \omega\tau. The second hypothesis to verify for the occurrence of a Hopf bifurcation as the parameters D or \tau are varied is either \mathrm{Re}\,\frac{d\lambda}{dD} \neq 0 or \mathrm{Re}\,\frac{d\lambda}{d\tau} \neq 0. Differentiating (16) with respect to D we obtain

e^{\lambda\tau} \left[ \tau(\lambda + \alpha) + 1 \right] \frac{\partial \lambda}{\partial D} = \mp \frac{i}{2\sqrt{-D}}, \qquad (32)

and looking at the imaginary part only, it follows that

\left[ \omega\tau \cos(\omega\tau) + (\alpha\tau + 1)\sin(\omega\tau) \right] \frac{\partial r}{\partial D} = \pm \frac{1}{2\sqrt{-D}}. \qquad (33)

Using equation (29) we have

\frac{\partial r}{\partial D} = \mp \frac{\sin(\omega\tau)}{2\sqrt{-D}\left( \tau\alpha + \sin^2(\omega\tau) \right)}, \qquad (34)

where again, according to (29), we take the positive root if \omega\tau \in (2k\pi, \frac{\pi}{2} + 2k\pi) and the negative one if \omega\tau \in (\pi + 2k\pi, \frac{3\pi}{2} + 2k\pi). In both cases

\frac{\partial r}{\partial D} > 0, \qquad (35)

and there exists a critical value D = D_c such that the equilibrium point is stable if D < D_c, unstable otherwise. The dependence of the stability on the delay can be determined by looking at equation (24). At the Hopf bifurcation, considering the case \lambda = i\omega, after some algebraic manipulations and using equation (29) we find

\frac{\partial r}{\partial \tau} = \alpha^2 \left( \cot^2(\omega\tau) + 1 \right) > 0. \qquad (36)

Hence, the equilibrium point is stable if \tau < \tau_c, unstable otherwise.

Theorem 2 shows that oscillatory behavior can be induced in an excitatory-inhibitory neural oscillator either by increasing the delay or by acting on the parameter D. It is worth noting that D depends on the synaptic strength coefficients a and b, and on the position of the equilibrium point. The mechanism through which a constant external input induces oscillatory behavior in the neural oscillator is thus clear. In fact, a constant external input shifts the positions of the nullclines through equation (7), influencing their intersections. This change of the equilibrium point's position is reflected in a variation of the parameter D. Obviously, to determine the exact value of the constant input at which the Hopf bifurcation occurs, the explicit expressions of the sigmoidal activation functions are needed.
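The following sketch illustrates this mechanism numerically for tanh sigmoids (an assumption; the paper keeps F_1 and F_2 generic). For each value of the constant input I, the unique equilibrium of (5) is computed, D is evaluated there and, when D < -\alpha^2, the smallest critical delay of Eq. (31) is reported. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

alpha, a, b = 1.0, 2.0, -2.0              # a > 0 > b: excitatory-inhibitory pair

def D_at_equilibrium(I, J=0.0):
    """Unique equilibrium of (5) (a*b < 0) and D = a b f1'(y) f2'(x) there."""
    f1 = lambda y: np.tanh(y + J / alpha)
    f2 = lambda x: np.tanh(x + I / alpha)
    g = lambda x: x - (a / alpha) * f1((b / alpha) * f2(x))
    x_eq = brentq(g, -10.0, 10.0)
    y_eq = (b / alpha) * f2(x_eq)
    return a * b * (1.0 - f1(y_eq) ** 2) * (1.0 - f2(x_eq) ** 2)

for I in (0.0, 1.0, 2.0, 3.0):
    D = D_at_equilibrium(I)
    if D < -alpha**2:
        omega = np.sqrt(-D - alpha**2)
        tau_c = np.arcsin(alpha / np.sqrt(-D)) / omega   # smallest branch of Eq. (31)
        print(f"I = {I:.1f}: D = {D:+.3f}, Hopf possible, tau_c = {tau_c:.3f}")
    else:
        print(f"I = {I:.1f}: D = {D:+.3f}, no Hopf bifurcation (|D| <= alpha^2)")
```

For these values the constant input moves D across the threshold -\alpha^2 (here a growing input suppresses the oscillatory regime; other parameter choices move D in the opposite direction), which is exactly the stimulus-dependent transition discussed above.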

3.3 Direction and Stability of the Hopf Bifurcation

This section is devoted to determining the direction of the Hopf bifurcation and the stability of the bifurcating periodic solution for the uncoupled neural oscillator. The method employed is based on the normal form reduction and the center manifold theory introduced in [Hassard et al., 1981; Wei & Ruan, 1999].

Theorems 1 and 2 establish that if |D| < \alpha^2 all the roots of the characteristic equation other than \pm i\omega have negative real parts, and that for D < -\alpha^2 there exists a critical value of the delay \tau_c = \frac{1}{2}(\tau_1 + \tau_2), given in equation (31), at which the Hopf bifurcation occurs. For the sake of simplicity we set \tau = \tau_c + \mu, so that \mu = 0 is the bifurcation value, and without loss of generality we assume \tau_1 > \tau_2. Since we are analyzing the single neural oscillator, the subscript i is still omitted.

We begin with a preliminary observation: if \lambda = \pm i\omega, the characteristic equation (13) becomes

(\alpha \pm i\omega)^2 - \alpha_{12}\alpha_{21}\, e^{\mp i\omega(\tau_1+\tau_2)} = 0. \qquad (37)

The case of the positive eigenvalue leads to the identity

\frac{(\alpha + i\omega)^2}{\alpha_{12}}\, e^{i\omega\tau_2} = \alpha_{21}\, e^{-i\omega\tau_1}, \qquad (38)

while the negative eigenvalue leads to

\frac{(\alpha - i\omega)^2}{\alpha_{12}}\, e^{-i\omega\tau_2} = \alpha_{21}\, e^{i\omega\tau_1}. \qquad (39)

Let C = C([-\tau_1, 0]; \mathbb{R}^2) denote the space of real, 2-dimensional vector valued functions on the interval [-\tau_1, 0]. Equation (5) can be rewritten as

\dot{x}(t) = -\alpha x(t) + a \left( f_1'(y)\, y(t-\tau_2) + \tfrac{1}{2} f_1''(y)\, y^2(t-\tau_2) + \tfrac{1}{6} f_1'''(y)\, y^3(t-\tau_2) \right) + O(y^4)
\dot{y}(t) = -\alpha y(t) + b \left( f_2'(x)\, x(t-\tau_1) + \tfrac{1}{2} f_2''(x)\, x^2(t-\tau_1) + \tfrac{1}{6} f_2'''(x)\, x^3(t-\tau_1) \right) + O(x^4). \qquad (40)

For any \phi \in C, a one-parameter linear bounded operator L_\mu : C \to \mathbb{R}^2 can be defined as

L_\mu \phi = -\alpha I \phi(0) + B_1 \phi(-\tau_2) + B_2 \phi(-\tau_1), \qquad (41)

where I is the 2 \times 2 identity matrix and

B_1 = \begin{pmatrix} 0 & \alpha_{12} \\ 0 & 0 \end{pmatrix}, \qquad B_2 = \begin{pmatrix} 0 & 0 \\ \alpha_{21} & 0 \end{pmatrix}. \qquad (42)

Next, we define a nonlinear operator F(\phi, \mu) : C \to \mathbb{R}^2, obtaining

F(\phi, \mu) = \frac{1}{2} \begin{pmatrix} a f_1''(\phi_2)\, \phi_2^2(t-\tau_2) \\ b f_2''(\phi_1)\, \phi_1^2(t-\tau_1) \end{pmatrix} + \frac{1}{6} \begin{pmatrix} a f_1'''(\phi_2)\, \phi_2^3(t-\tau_2) \\ b f_2'''(\phi_1)\, \phi_1^3(t-\tau_1) \end{pmatrix} + O(|\phi|^4). \qquad (43)

Using (41) and (43), equation (40) can be recast as

\dot{x}(t) = L_\mu x_t + F(x_t, \mu), \qquad (44)

where x_t = (x(t-\tau_1), y(t-\tau_2))^\top.

By the Riesz representation theorem [Hale, 1977], there exists a 2 \times 2 matrix valued function \eta(\theta, \mu) : [-\tau_1, 0] \to \mathbb{R}^{2 \times 2}, whose components are of bounded variation, such that for all \phi \in C

L_\mu \phi = \int_{-\tau_1}^{0} d\eta(\theta, \mu)\, \phi(\theta). \qquad (45)

In particular we choose

d\eta(\theta) = \left( -\alpha I\, \delta(\theta) + B_1\, \delta(\theta + \tau_2) + B_2\, \delta(\theta + \tau_1) \right) d\theta, \qquad (46)


which, introduced in (45), satisfies (41). Next we define, for \phi \in C,

A(\mu)\phi = \begin{cases} \dfrac{d\phi(\theta)}{d\theta}, & \theta \in [-\tau_1, 0) \\ \displaystyle\int_{-\tau_1}^{0} d\eta(s, \mu)\, \phi(s) \equiv L_\mu \phi, & \theta = 0 \end{cases} \qquad (47)

and

R(\phi, \mu) = \begin{cases} 0, & \theta \in [-\tau_1, 0) \\ F(\phi, \mu), & \theta = 0. \end{cases} \qquad (48)

Then, since dx_t/d\theta = dx_t/dt, equation (44) becomes

\dot{x}_t = A(\mu)\, x_t + R(x_t, \mu). \qquad (49)

For \psi \in C, the adjoint operator A^*(\mu) is defined by

A^*(\mu)\psi = \begin{cases} -\dfrac{d\psi(s)}{ds}, & s \in (0, \tau_1] \\ \displaystyle\int_{-\tau_1}^{0} d\eta^\top(t, \mu)\, \psi(-t), & s = 0. \end{cases} \qquad (50)

Since the critical value of the bifurcation parameter is \mu = 0, we will write A instead of A(0). To construct the center manifold we need an inner product, which we define as the bilinear form (for \phi, \psi \in C)

\langle \psi, \phi \rangle = \psi(0)\, \phi(0) - \int_{\theta=-\tau_1}^{0} \int_{\xi=0}^{\theta} \psi(\xi - \theta)\, d\eta(\theta, 0)\, \phi(\xi)\, d\xi. \qquad (51)

By the definitions of A and A^*, it follows that if \lambda = \pm i\omega are eigenvalues of A, they are also eigenvalues of A^*. The eigenvector of A corresponding to i\omega is easily computed, giving

q(\theta) = \begin{pmatrix} 1 \\ \dfrac{\alpha + i\omega}{\alpha_{12}}\, e^{i\omega\tau_2} \end{pmatrix} e^{i\omega\theta}, \qquad (52)

while the eigenvector of A^* corresponding to -i\omega is

q^*(s) = D \begin{pmatrix} \dfrac{\alpha - i\omega}{\alpha_{21}}\, e^{-i\omega\tau_1} & 1 \end{pmatrix} e^{i\omega s}. \qquad (53)

The constant D is determined by the normalization condition \langle q^*, q \rangle = 1; thus

\langle q^*, q \rangle \stackrel{\text{def}}{=} q^*(0)\, q(0) - \int_{\theta=-\tau_1}^{0} \int_{\xi=0}^{\theta} q^*(\xi - \theta)\, d\eta(\theta)\, q(\xi)\, d\xi
= D \left[ \begin{pmatrix} \dfrac{\alpha + i\omega}{\alpha_{21}} e^{i\omega\tau_1} & 1 \end{pmatrix} \begin{pmatrix} 1 \\ \dfrac{\alpha + i\omega}{\alpha_{12}} e^{i\omega\tau_2} \end{pmatrix} - \int_{\theta=-\tau_1}^{0} \int_{\xi=0}^{\theta} \begin{pmatrix} \dfrac{\alpha + i\omega}{\alpha_{21}} e^{i\omega\tau_1} & 1 \end{pmatrix} e^{-i\omega(\xi-\theta)}\, d\eta(\theta) \begin{pmatrix} 1 \\ \dfrac{\alpha + i\omega}{\alpha_{12}} e^{i\omega\tau_2} \end{pmatrix} e^{i\omega\xi}\, d\xi \right]
= D \left[ (\alpha + i\omega)\left( \frac{e^{i\omega\tau_1}}{\alpha_{21}} + \frac{e^{i\omega\tau_2}}{\alpha_{12}} \right) - \int_{-\tau_1}^{0} \begin{pmatrix} \dfrac{\alpha + i\omega}{\alpha_{21}} e^{i\omega\tau_1} & 1 \end{pmatrix} \theta\, e^{i\omega\theta}\, d\eta(\theta) \begin{pmatrix} 1 \\ \dfrac{\alpha + i\omega}{\alpha_{12}} e^{i\omega\tau_2} \end{pmatrix} \right] = 1. \qquad (54)

Using the filtering property of the Dirac delta function, the integral is easily computed and as a result we have

D\, (\alpha + i\omega) \left[ \frac{e^{i\omega\tau_1}}{\alpha_{21}} \left( 1 + \tau_2(\alpha + i\omega) \right) + \frac{e^{i\omega\tau_2}}{\alpha_{12}} \left( 1 + \tau_1(\alpha + i\omega) \right) \right] = 1, \qquad (55)

which implies

D = (\alpha - i\omega) \left[ \frac{e^{-i\omega\tau_1}}{\alpha_{21}} \left( 1 + \tau_2(\alpha - i\omega) \right) + \frac{e^{-i\omega\tau_2}}{\alpha_{12}} \left( 1 + \tau_1(\alpha - i\omega) \right) \right]^{-1}. \qquad (56)

In a similar way it is easy to verify that \langle q^*, \bar{q} \rangle = 0. The behavior of the orbits of equation (49) in C near the singularity at the equilibrium point can be completely described by the restriction of the flow to the associated finite dimensional center manifold C_0.

Using the formal adjoint theory for functional differential equations [Hale, 1977], let the phase space C be decomposed by \Lambda = \{-i\omega, i\omega\} as C = P \oplus Q, where P is the generalized eigenspace associated with \Lambda and Q is the complementary space of P. Let \Phi(\theta) = (q(\theta), \bar{q}(\theta)) and \Psi(\theta) = (q^*(\theta), \bar{q}^*(\theta))^\top be the bases for P and P^*. Consider complex coordinates z = (z, \bar{z})^\top and use the decomposition

x_t = \Phi(\theta)\, z(t) + w(t, \theta), \qquad (57)

with z(t) \in \mathbb{C}^2 and, on the center manifold C_0, w(t, \theta) = w(z(t), \bar{z}(t), \theta), where

w(z, \bar{z}, \theta) = \sum_{2 \le i+j \le N} w_{ij}(\theta)\, \frac{z^i \bar{z}^j}{i!\, j!} = w_{20}(\theta)\frac{z^2}{2} + w_{11}(\theta)\, z\bar{z} + w_{02}(\theta)\frac{\bar{z}^2}{2} + w_{30}(\theta)\frac{z^3}{6} + \ldots \qquad (58)

Here z and \bar{z} are local coordinates for the center manifold in the directions of q^* and \bar{q}^*. Then, to the lowest order, the flow of equation (49) on the center manifold can be written as

\Phi(\theta)\, \dot{z}(t) = A\, \Phi(\theta) z(t) + R\!\left( \Phi(\theta) z(t) + w(z, \bar{z}, \theta) \right). \qquad (59)

Since A\Phi(\theta) = \Phi(\theta)\Omega, where \Omega = \mathrm{diag}(i\omega, -i\omega) is the matrix of the eigenvalues, multiplying by \Psi(\theta) and taking into account that, as a consequence of the eigenvector normalization, \langle \Psi(\theta), \Phi(\theta) \rangle = I, we obtain

\dot{z}(t) = \Omega\, z(t) + \Psi(0)\, F\!\left( \Phi(\theta) z(t) + w(t, \theta) \right). \qquad (60)

Equation (60), rewritten in scalar components, reads

\dot{z}(t) = i\omega z(t) + q^*(0)\, F\!\left( \Phi(\theta) z(t) + w(z, \bar{z}, \theta) \right) = i\omega z(t) + q^*(0)\, f_0(z, \bar{z}) \stackrel{\text{def}}{=} i\omega z(t) + g(z, \bar{z}), \qquad (61)

where only the first equation is written, since the second is just the conjugate of the first. From equation (57), introducing M = \frac{\alpha + i\omega}{\alpha_{12}}, we have

x(t-\tau_1) = z\, q_1(-\tau_1) + \bar{z}\, \bar{q}_1(-\tau_1) + w^{(1)}(t, -\tau_1) = z\, e^{-i\omega\tau_1} + \bar{z}\, e^{i\omega\tau_1} + w^{(1)}_{20}(-\tau_1)\frac{z^2}{2} + w^{(1)}_{11}(-\tau_1)\, z\bar{z} + w^{(1)}_{02}(-\tau_1)\frac{\bar{z}^2}{2} + O(|z|^3), \qquad (62)

y(t-\tau_2) = z\, q_2(-\tau_2) + \bar{z}\, \bar{q}_2(-\tau_2) + w^{(2)}(t, -\tau_2) = M z + \bar{M} \bar{z} + w^{(2)}_{20}(-\tau_2)\frac{z^2}{2} + w^{(2)}_{11}(-\tau_2)\, z\bar{z} + w^{(2)}_{02}(-\tau_2)\frac{\bar{z}^2}{2} + O(|z|^3). \qquad (63)


According to (43), the nonlinear term is

F(x_t) = \frac{1}{2} \begin{pmatrix} a f_1''(y)\, y^2(t-\tau_2) \\ b f_2''(x)\, x^2(t-\tau_1) \end{pmatrix} + \frac{1}{6} \begin{pmatrix} a f_1'''(y)\, y^3(t-\tau_2) \\ b f_2'''(x)\, x^3(t-\tau_1) \end{pmatrix} + O(|x_t|^4). \qquad (64)

Introducing (62) and (63) in (64) and substituting in the definition of g(z, \bar{z}), we obtain

g(z, \bar{z}) = q^*(0)\, f_0(z, \bar{z}) = \frac{D}{2} \Big\{ a N e^{i\omega\tau_1} \Big[ f_1''(y)\big( M^2 z^2 + \bar{M}^2 \bar{z}^2 + 2|M|^2 z\bar{z} \big) + \big[ f_1''(y)\big( 2 w^{(2)}_{11}(-\tau_2) M + w^{(2)}_{20}(-\tau_2) \bar{M} \big) + f_1'''(y) M^2 \bar{M} \big] z^2\bar{z} \Big]
+ b \Big[ f_2''(x)\big( e^{-2i\omega\tau_1} z^2 + e^{2i\omega\tau_1} \bar{z}^2 + 2 z\bar{z} \big) + \big[ f_2''(x)\big( 2 w^{(1)}_{11}(-\tau_1) e^{-i\omega\tau_1} + w^{(1)}_{20}(-\tau_1) e^{i\omega\tau_1} \big) + f_2'''(x) e^{-i\omega\tau_1} \big] z^2\bar{z} \Big] \Big\}. \qquad (65)

Considering the Taylor expansion of g(z, \bar{z}),

g(z, \bar{z}) = \sum_{2 \le i+j \le N} g_{ij}\, \frac{z^i \bar{z}^j}{i!\, j!} = g_{20}\frac{z^2}{2} + g_{11}\, z\bar{z} + g_{02}\frac{\bar{z}^2}{2} + g_{21}\frac{z^2\bar{z}}{2} + \ldots \qquad (66)

and comparing the coefficients, we find

g_{20} = D\left( a N M^2 e^{i\omega\tau_1} f_1''(y) + b\, e^{-2i\omega\tau_1} f_2''(x) \right), \qquad (67)
g_{11} = D\left( a N |M|^2 e^{i\omega\tau_1} f_1''(y) + b\, f_2''(x) \right), \qquad (68)
g_{02} = D\left( a N \bar{M}^2 e^{i\omega\tau_1} f_1''(y) + b\, e^{2i\omega\tau_1} f_2''(x) \right), \qquad (69)
g_{21} = D\Big\{ a N e^{i\omega\tau_1} \Big[ f_1''(y)\big( 2 w^{(2)}_{11}(-\tau_2) M + w^{(2)}_{20}(-\tau_2)\bar{M} \big) + f_1'''(y) M^2\bar{M} \Big] + b \Big[ f_2''(x)\big( 2 w^{(1)}_{11}(-\tau_1) e^{-i\omega\tau_1} + w^{(1)}_{20}(-\tau_1) e^{i\omega\tau_1} \big) + f_2'''(x) e^{-i\omega\tau_1} \Big] \Big\}. \qquad (70)

We still have to compute w_{20}(\theta) and w_{11}(\theta) for \theta \in [-\tau_1, 0). We consider the time derivative of (57),

\dot{w}(t, \theta) = \dot{x}_t - \Phi(\theta)\, \dot{z}(t) = \begin{cases} A w(t, \theta) - \Phi(\theta)\Psi(0)\, f_0(z, \bar{z}), & \theta \in [-\tau_1, 0) \\ A w(t, \theta) - \Phi(\theta)\Psi(0)\, f_0(z, \bar{z}) + f_0(z, \bar{z}), & \theta = 0 \end{cases} \;\stackrel{\text{def}}{=}\; A w(t, \theta) + H(z, \bar{z}, \theta), \qquad (71)

where

H(z, \bar{z}, \theta) = \sum_{2 \le i+j \le N} H_{ij}(\theta)\, \frac{z^i \bar{z}^j}{i!\, j!} = H_{20}(\theta)\frac{z^2}{2} + H_{11}(\theta)\, z\bar{z} + H_{02}(\theta)\frac{\bar{z}^2}{2} + \ldots \qquad (72)

A second expression for the time derivative of w(z, \bar{z}) is obtained from its definition,

\dot{w} = \frac{\partial w}{\partial z}\, \dot{z} + \frac{\partial w}{\partial \bar{z}}\, \dot{\bar{z}}, \qquad (73)


and by using (58) and (61). Equating this to the right hand side of (71) and comparing the coefficients, we obtain

(2 i\omega - A)\, w_{20}(\theta) = H_{20}(\theta), \qquad (74)
-A\, w_{11}(\theta) = H_{11}(\theta), \qquad (75)
\vdots

We search for the coefficients of w(z, \bar{z}, \theta) in the form

w_{20}(\theta) = C_1\, q(\theta) + C_2\, \bar{q}(\theta) + E\, e^{2 i\omega\theta}, \qquad (76)
w_{11}(\theta) = C_3\, q(\theta) + C_4\, \bar{q}(\theta) + F, \qquad (77)

and observe that, by the definition of H(z, \bar{z}, \theta),

H(z, \bar{z}, \theta) = -\Phi(\theta)\Psi(0)\, f_0(z, \bar{z}) = -g(z, \bar{z})\, q(\theta) - \bar{g}(z, \bar{z})\, \bar{q}(\theta)
= -\left( g_{20}\frac{z^2}{2} + g_{11}\, z\bar{z} + g_{02}\frac{\bar{z}^2}{2} + g_{21}\frac{z^2\bar{z}}{2} \right) q(\theta) - \left( \bar{g}_{20}\frac{\bar{z}^2}{2} + \bar{g}_{11}\, z\bar{z} + \bar{g}_{02}\frac{z^2}{2} + \bar{g}_{21}\frac{z\bar{z}^2}{2} \right) \bar{q}(\theta). \qquad (78)

Equations (72) and (78) imply

H_{20}(\theta) = -g_{20}\, q(\theta) - \bar{g}_{02}\, \bar{q}(\theta), \qquad (79)
H_{11}(\theta) = -g_{11}\, q(\theta) - \bar{g}_{11}\, \bar{q}(\theta), \qquad (80)
H_{02}(\theta) = -g_{02}\, q(\theta) - \bar{g}_{20}\, \bar{q}(\theta). \qquad (81)

Introducing (76) and (79) in (74), we obtain

C_1 = -\frac{g_{20}}{i\omega}, \qquad C_2 = -\frac{\bar{g}_{02}}{3 i\omega}. \qquad (82)

In the same way, inserting (77) and (80) in (75), we obtain

C_3 = \frac{g_{11}}{i\omega}, \qquad C_4 = -\frac{\bar{g}_{11}}{i\omega}. \qquad (83)

The two-dimensional vectors E and F can be determined by setting \theta = 0 in H; in fact, equations (74) and (75), evaluated at \theta = 0, yield

(2 i\omega - A)\, w_{20}(0) = H_{20}(0), \qquad (84)
-A\, w_{11}(0) = H_{11}(0), \qquad (85)
\vdots

where, as a consequence of the definition (47) of the operator A, we have

A\, w_{20}(0) = -\alpha I\, w_{20}(0) + B_1 w_{20}(-\tau_2) + B_2 w_{20}(-\tau_1), \qquad (86)
A\, w_{11}(0) = -\alpha I\, w_{11}(0) + B_1 w_{11}(-\tau_2) + B_2 w_{11}(-\tau_1). \qquad (87)

The definition of H(z, \bar{z}, \theta), evaluated at \theta = 0, gives

H(z, \bar{z}, 0) = -g(z, \bar{z})\, q(0) - \bar{g}(z, \bar{z})\, \bar{q}(0) + f_0(z, \bar{z}), \qquad (88)


and thus

H_{20}(0) = -g_{20}\, q(0) - \bar{g}_{02}\, \bar{q}(0) + \begin{pmatrix} a f_1''(y)\, M^2 \\ b f_2''(x)\, e^{-2 i\omega\tau_1} \end{pmatrix}, \qquad (89)
H_{11}(0) = -g_{11}\, q(0) - \bar{g}_{11}\, \bar{q}(0) + \begin{pmatrix} a f_1''(y)\, |M|^2 \\ b f_2''(x) \end{pmatrix}. \qquad (90)

Substituting the results (82) and (83) in equations (76) and (77), we obtain

w_{20}(\theta) = -\frac{g_{20}}{i\omega}\, q(0)\, e^{i\omega\theta} - \frac{\bar{g}_{02}}{3 i\omega}\, \bar{q}(0)\, e^{-i\omega\theta} + E\, e^{2 i\omega\theta}, \qquad (91)
w_{11}(\theta) = \frac{g_{11}}{i\omega}\, q(0)\, e^{i\omega\theta} - \frac{\bar{g}_{11}}{i\omega}\, \bar{q}(0)\, e^{-i\omega\theta} + F. \qquad (92)

Putting everything together, equations (84) and (86) yield, after some algebraic manipulations,

\begin{pmatrix} 2 i\omega + \alpha & -\alpha_{12} e^{-2 i\omega\tau_2} \\ -\alpha_{21} e^{-2 i\omega\tau_1} & 2 i\omega + \alpha \end{pmatrix} E = \frac{g_{20}}{i\omega} \begin{pmatrix} i\omega + \alpha & -\alpha_{12} e^{-i\omega\tau_2} \\ -\alpha_{21} e^{-i\omega\tau_1} & i\omega + \alpha \end{pmatrix} q(0) + \frac{\bar{g}_{02}}{3 i\omega} \begin{pmatrix} -i\omega + \alpha & -\alpha_{12} e^{i\omega\tau_2} \\ -\alpha_{21} e^{i\omega\tau_1} & -i\omega + \alpha \end{pmatrix} \bar{q}(0) + \begin{pmatrix} a f_1''(y)\, M^2 \\ b f_2''(x)\, e^{-2 i\omega\tau_1} \end{pmatrix}, \qquad (93)

\begin{pmatrix} \alpha & -\alpha_{12} \\ -\alpha_{21} & \alpha \end{pmatrix} F = -\frac{g_{11}}{i\omega} \begin{pmatrix} \alpha + i\omega & -\alpha_{12} e^{-i\omega\tau_2} \\ -\alpha_{21} e^{-i\omega\tau_1} & \alpha + i\omega \end{pmatrix} q(0) + \frac{\bar{g}_{11}}{i\omega} \begin{pmatrix} \alpha - i\omega & -\alpha_{12} e^{i\omega\tau_2} \\ -\alpha_{21} e^{i\omega\tau_1} & \alpha - i\omega \end{pmatrix} \bar{q}(0) + \begin{pmatrix} a f_1''(y)\, |M|^2 \\ b f_2''(x) \end{pmatrix}. \qquad (94)

We now observe that, using the definition (52) and taking into account equations (38) and (39), the following conditions hold:

\begin{pmatrix} i\omega + \alpha & -\alpha_{12} e^{-i\omega\tau_2} \\ -\alpha_{21} e^{-i\omega\tau_1} & i\omega + \alpha \end{pmatrix} q(0) = 0, \qquad (95)

\begin{pmatrix} -i\omega + \alpha & -\alpha_{12} e^{i\omega\tau_2} \\ -\alpha_{21} e^{i\omega\tau_1} & -i\omega + \alpha \end{pmatrix} \bar{q}(0) = 0. \qquad (96)

Thus system (93) can be simplified to

\begin{pmatrix} 2 i\omega + \alpha & -\alpha_{12} e^{-2 i\omega\tau_2} \\ -\alpha_{21} e^{-2 i\omega\tau_1} & 2 i\omega + \alpha \end{pmatrix} E = \begin{pmatrix} a f_1''(y)\, M^2 \\ b f_2''(x)\, e^{-2 i\omega\tau_1} \end{pmatrix}, \qquad (97)

which, once solved, yields

E_1 = \frac{a f_1''(y)\, M^2 (\alpha + 2 i\omega) + b f_2''(x)\, \alpha_{12}\, e^{-2 i\omega(\tau_1+\tau_2)}}{(\alpha + 2 i\omega)^2 - \alpha_{12}\alpha_{21}\, e^{-2 i\omega(\tau_1+\tau_2)}}, \qquad (98)

E_2 = \frac{e^{-2 i\omega\tau_1} \left[ b f_2''(x)\, (\alpha + 2 i\omega) + a f_1''(y)\, M^2 \alpha_{21} \right]}{(\alpha + 2 i\omega)^2 - \alpha_{12}\alpha_{21}\, e^{-2 i\omega(\tau_1+\tau_2)}}. \qquad (99)


In the same way, system (94) can be simplified to

\begin{pmatrix} \alpha & -\alpha_{12} \\ -\alpha_{21} & \alpha \end{pmatrix} F = \begin{pmatrix} a f_1''(y)\, |M|^2 \\ b f_2''(x) \end{pmatrix}, \qquad (100)

whose solution is

F_1 = \frac{a f_1''(y)\, |M|^2\, \alpha + \alpha_{12}\, b f_2''(x)}{\alpha^2 - \alpha_{12}\alpha_{21}}, \qquad (101)

F_2 = \frac{\alpha\, b f_2''(x) + \alpha_{21}\, a f_1''(y)\, |M|^2}{\alpha^2 - \alpha_{12}\alpha_{21}}. \qquad (102)

Once the vectors E and F are determined, the corresponding values of w_{20} and w_{11} given by equations (76) and (77) can be computed, and finally the coefficient g_{21} of equation (70) can be evaluated for any value of the parameters and the delays. Now we use the well known fact from normal form theory that there exists a near-to-identity change of variables [Hassard et al., 1981; Guckenheimer & Holmes, 1982]

\hat{z} = z + p(z, \bar{z}) \qquad (103)

that transforms (61) into the normal form

\dot{\hat{z}} = i\omega \hat{z} + d\, \hat{z}\, |\hat{z}|^2 + O(|\hat{z}|^5). \qquad (104)

The direction of the Hopf bifurcation, the stability and the period of the bifurcating periodic solution depend on the following quantities [Hassard et al., 1981; Guckenheimer & Holmes, 1982]:

d = \frac{i}{2\omega}\left( g_{20} g_{11} - 2|g_{11}|^2 - \frac{1}{3}|g_{02}|^2 \right) + \frac{g_{21}}{2}, \qquad (105)

\mu_2 = -\frac{\mathrm{Re}\, d}{\mathrm{Re}\, \lambda'}, \qquad (106)

T_2 = -\frac{\mathrm{Im}\, d + \mu_2\, \mathrm{Im}\, \lambda'}{\mathrm{Im}\, \lambda}, \qquad (107)

\beta_2 = 2\, \mathrm{Re}\, d. \qquad (108)

If \mu_2 > 0 (respectively \mu_2 < 0), then the Hopf bifurcation is supercritical (respectively subcritical) and the bifurcating periodic solution exists for \tau_1 + \tau_2 > \tau_0 (\tau_1 + \tau_2 < \tau_0); \beta_2 determines the stability of the bifurcating periodic solution, which is orbitally stable if \beta_2 < 0 and unstable otherwise. Finally, T_2 determines the period of the bifurcating periodic solution: the period increases if T_2 > 0 and decreases if T_2 < 0.

4 Whole Network Analysis

In this section we shall use center manifold reduction and suitable changes of coordinates, re-scaling and averaging, to simplify the governing equation of the whole delayed neural network and to derive its canonical model. We consider the most general equation describing the local activity of a delayed network, written in the form

\dot{x}_i(t) = f_i(x_i(t-\tau_i), \mu) + \varepsilon\, g_i(x(t-\tau), \mu), \qquad x_i \in \mathbb{R}^2, \quad i = 1, \ldots, N. \qquad (109)


In the limit \varepsilon \ll 1 the network is called a weakly connected delayed neural network. The network's local activity is not interesting from the neuro-computational point of view unless the equilibrium corresponds to a bifurcation point [Hoppensteadt & Izhikevich, 1997]. In biological terms such neurons are said to be near a threshold. A dynamical system \dot{x}_i = f_i(x_i(t), \mu) is near a multiple Hopf bifurcation if each Jacobian matrix Df_i(x_i, \mu) has a simple pair of purely imaginary eigenvalues \pm i\omega_i. For the sake of simplicity, and without loss of generality, we assume (x_i, \mu) = (0, 0). Our purpose is to find a model describing the whole network dynamics; this task is accomplished by the following theorem.

Theorem 3 If the weakly connected delayed neural network (109) is close to a multiple Hopf bifurcation, and \mu(\varepsilon) = \varepsilon\mu_1 + O(\varepsilon^2), then its dynamics is described by the canonical model

z_i' = b_i z_i + d_i z_i |z_i|^2 + \sum_{j \neq i} c_{ij} z_j + O(\sqrt{\varepsilon}), \qquad (110)

where \sigma = \varepsilon t is the slow time, {}' = d/d\sigma, and z_i, d_i, b_i, c_{ij} \in \mathbb{C}.

Proof Consider the uncoupled (\varepsilon = 0) system

\dot{x}_i(t) = f_i(x_i(t-\tau_i), \mu), \qquad (111)

and assume that (0, 0) is an equilibrium point. A linearization in the neighborhood of the equilibrium yields

\dot{x}_i(t) = Df_i(0, 0)\, x_{i,t} + F_i(x_{i,t}, 0), \qquad (112)

where Df_i(0, 0) is the Jacobian matrix and F_i(x_{i,t}, 0) accounts for the nonlinear terms. By hypothesis the network is near a multiple Hopf bifurcation, thus the Jacobian matrix associated with each uncoupled dynamical system exhibits a pair of purely imaginary eigenvalues; we denote by \pm i\omega_i, with \omega_i = \sqrt{\det Df_i(0, 0)}, the eigenvalues of Df_i(0, 0). The reduction to the normal form for each uncoupled neural oscillator can be obtained by applying the procedure described in the previous section. Now we suppose \varepsilon \neq 0. Since the Jacobian matrix of the uncoupled system is nonsingular, the implicit function theorem guarantees the existence of a family of equilibria x(\varepsilon) = (x_1(\varepsilon), \ldots, x_N(\varepsilon))^\top \in \mathbb{R}^{2N} in the coupled system. Introducing local coordinates at (x(\varepsilon), 0), we can rewrite the system in the form

\dot{x}_i(t) = Df_i(x_i, 0)\, x_{i,t} + F_i(x_{i,t}, 0) + \varepsilon \sum_{j=1}^{N} S_{ij}\, x_{j,t} + O(\varepsilon|x|^2, \varepsilon^2|x|), \qquad (113)

where

S_{ij} = D_{x_j} g_i(x, 0) \quad \text{for } i \neq j, \qquad (114)
S_{ii} = D_\mu Df_i(x_i, 0)\, \mu_1 + D_{x_i} g_i(x, 0), \qquad (115)

and D_\mu denotes the derivative with respect to the parameters. Let us denote \tau_m = \max_{1 \le i \le N} \tau_i and introduce, for any \phi_i \in C = C([-\tau_m, 0]; \mathbb{R}^2),

A_i(\mu)\phi_i = \begin{cases} \dfrac{d\phi_i(\theta)}{d\theta}, & \theta \in [-\tau_m, 0) \\ \displaystyle\int_{-\tau_m}^{0} d\eta(s, \mu)\, \phi_i(s), & \theta = 0, \end{cases} \qquad (116)


where d\eta(s, \mu) = Df_i(x_i, \mu)\, \delta(s + \tau_i),

B_{ij}(\mu)\phi_j = \begin{cases} 0, & \theta \in [-\tau_m, 0) \\ \displaystyle\int_{-\tau_m}^{0} d\zeta(s, \mu)\, \phi_j(s) \equiv S_{ij}\, \phi_j, & \theta = 0, \end{cases} \qquad (117)

where d\zeta(s, \mu) = S_{ij}\, \delta(s + \tau_j), and

R_i(\phi_i, \mu) = \begin{cases} 0, & \theta \in [-\tau_m, 0) \\ F_i(\phi_i, \mu), & \theta = 0. \end{cases} \qquad (118)

Hence equation (109) can be recast as

\dot{x}_{i,t} = A_i(\mu)\, x_{i,t} + R_i(x_{i,t}, \mu) + \varepsilon \sum_{j=1}^{N} B_{ij}\, x_{j,t} + O(\varepsilon|x|^2 + \varepsilon^2|x|). \qquad (119)

We denote by q_i(\theta) the eigenvector of A_i corresponding to the eigenvalue \lambda_i = i\omega_i, and by q_i^*(\theta) the eigenvector of A_i^* corresponding to the eigenvalue \lambda_i = -i\omega_i. We now introduce the matrices \Phi_i(\theta) = (q_i(\theta), \bar{q}_i(\theta)), \Psi_i(\theta) = (q_i^*(\theta), \bar{q}_i^*(\theta))^\top and z_i = (z_i, \bar{z}_i)^\top and, similarly to what was done in the previous section, consider

x_{i,t} = \Phi_i(\theta)\, z_i(t) + w_i(t, \theta), \qquad (120)

which implies, at the lowest order and on the center manifold,

\Phi_i(\theta)\, \dot{z}_i(t) = A_i \Phi_i(\theta)\, z_i(t) + R_i\!\left( \Phi_i(\theta) z_i(t) + w_i(z_i, \bar{z}_i, \theta) \right) + \varepsilon \sum_{j=1}^{N} B_{ij} \Phi_j(\theta)\, z_j(t) + O\!\left( |z_i|^5, \varepsilon|z_i|^2, \varepsilon^2|z_i| \right). \qquad (121)

Since A_i\Phi_i(\theta) = \Phi_i(\theta)\Omega_i, where \Omega_i = \mathrm{diag}(i\omega_i, -i\omega_i) is the matrix of the eigenvalues, multiplying by \Psi_i(\theta) and taking into account that \langle \Psi_i(\theta), \Phi_i(\theta) \rangle = I, we obtain

\dot{z}_i(t) = \Omega_i\, z_i(t) + \Psi_i(0)\, F_i\!\left( \Phi_i(\theta) z_i(t) + w_i(z_i, \bar{z}_i, \theta) \right) + \varepsilon \sum_{j=1}^{N} \Psi_i(0) B_{ij} \Phi_j(\theta)\, z_j(t) + O\!\left( |z_i|^5, \varepsilon|z_i|^2, \varepsilon^2|z_i| \right). \qquad (122)

In scalar form,

\dot{z}_i(t) = i\omega_i z_i(t) + g_i(z_i, \bar{z}_i) + \varepsilon \sum_{j=1}^{N} \left( s_{ij}\, z_j(t) + e_{ij}\, \bar{z}_j(t) \right) + O\!\left( |z_i|^5, \varepsilon|z_i|^2, \varepsilon^2|z_i| \right), \qquad (123)

where only the first equation is written, since the second equation is just the complex conjugate of the first, \Psi_i(0) F_i\!\left( \Phi_i(\theta) z_i(t) + w_i(z_i, \bar{z}_i, \theta) \right) = (g_i(z_i, \bar{z}_i), \bar{g}_i(z_i, \bar{z}_i))^\top, and s_{ij} and e_{ij} are the entries of the 2 \times 2 matrix \Psi_i(0) B_{ij} \Phi_j(\theta),

\Psi_i(0) B_{ij} \Phi_j(\theta) = \begin{pmatrix} s_{ij} & e_{ij} \\ \bar{e}_{ij} & \bar{s}_{ij} \end{pmatrix}. \qquad (124)

Applying a near-to-identity transformation \hat{z}_i = z_i + p_i(z_i, \bar{z}_i), we obtain

\dot{\hat{z}}_i = i\omega_i \hat{z}_i + d_i \hat{z}_i |\hat{z}_i|^2 + \varepsilon \sum_{j=1}^{N} \left( s_{ij}\, \hat{z}_j(t) + e_{ij}\, \bar{\hat{z}}_j(t) \right) + O\!\left( |\hat{z}_i|^5, \varepsilon|\hat{z}_i|^2, \varepsilon^2|\hat{z}_i| \right). \qquad (125)


Introducing the slow time \sigma = \varepsilon t and changing variables according to

\hat{z}_i(t) = \sqrt{\varepsilon}\, e^{i\omega_i t}\, z_i(\sigma), \qquad (126)

we transform the system into

z_i' = d_i z_i |z_i|^2 + \sum_{j=1}^{N} \left( s_{ij}\, e^{i(\omega_j - \omega_i)t}\, z_j + e_{ij}\, e^{-i(\omega_j + \omega_i)t}\, \bar{z}_j \right) + O(\sqrt{\varepsilon}). \qquad (127)

Notice that

e^{-i(\omega_j + \omega_i)t} = e^{-i\frac{\omega_j + \omega_i}{\varepsilon}\sigma} \qquad (128)

is a high frequency term in the slow time reference frame as \varepsilon \to 0. The same holds for e^{i(\omega_j - \omega_i)t} unless \omega_j = \omega_i. After averaging, all these exponentials except the ones for which \omega_j = \omega_i vanish, and we obtain

z_i' = b_i z_i + d_i z_i |z_i|^2 + \sum_{j \neq i} c_{ij} z_j + O(\sqrt{\varepsilon}), \qquad (129)

with b_i = s_{ii} and

c_{ij} = \begin{cases} s_{ij} & \text{if } \omega_j = \omega_i \\ 0 & \text{if } \omega_j \neq \omega_i, \end{cases} \qquad (130)

which completes the proof.

It is customary to call \omega_i the natural frequency of the ith neuron. A direct consequence of the discussion above is that only neurons with equal natural frequencies interact, at least on a time scale of order 1/\varepsilon. All neural oscillators can be divided into ensembles, according to their natural frequencies, and interactions between oscillators belonging to different ensembles are negligible. This result is well known and has been proved both for weakly connected neural networks close to a multiple Hopf bifurcation and for weakly connected limit cycle oscillators [Hoppensteadt & Izhikevich, 1997]. Oscillators from different ensembles work independently even if they have nonzero synaptic contacts c_{ij} \neq 0. Although physiologically present and active, those synaptic connections are functionally insignificant and do not play any role in the dynamics. We can speculate about a mechanism capable of regulating the natural frequency \omega_i of the neurons, so that neurons can be entrained into different ensembles at different times simply by adjusting the frequencies. In this way, selective synchronization could be achieved.
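The canonical model (110) is easy to integrate directly. The sketch below does so for a small ensemble of oscillators sharing the same natural frequency; the coefficients b_i, d_i and the all-to-all coupling matrix are illustrative assumptions, chosen with Re d_i < 0 so that each uncoupled oscillator has a stable limit cycle.

```python
import numpy as np

N, dt, steps = 5, 0.01, 20000
b = (0.2 + 1.0j) * np.ones(N)                    # growth rate + common detuning
d = (-1.0 + 0.0j) * np.ones(N)                   # stable cubic term (Re d < 0)
c = 0.2 * (np.ones((N, N)) - np.eye(N))          # weak attractive all-to-all coupling

rng = np.random.default_rng(1)
z = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
for _ in range(steps):                           # forward Euler on Eq. (110)
    z = z + dt * (b * z + d * z * np.abs(z) ** 2 + c @ z)

rel_phase = np.angle(z / z[0])                   # phases relative to oscillator 0
print("amplitudes:", np.round(np.abs(z), 3))
print("relative phases:", np.round(rel_phase, 3))
# For these values the ensemble settles into a phase-locked, equal-amplitude state.
```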

5 Phase Locking in a Chain of Oscillators

It is convenient to rewrite equation (110) in polar coordinates. Let

z_i = r_i\, e^{i\varphi_i}, \qquad b_i = \alpha_i + i\Omega_i, \qquad d_i = \gamma_i + i\sigma_i, \qquad c_{ij} = |c_{ij}|\, e^{i\psi_{ij}}. \qquad (131)

Then equation (110) can be rewritten as

r_i' = \alpha_i r_i + \gamma_i r_i^3 + \sum_{j \neq i} |c_{ij}|\, r_j \cos(\varphi_j - \varphi_i + \psi_{ij}),

\varphi_i' = \Omega_i + \sigma_i r_i^2 + \frac{1}{r_i} \sum_{j \neq i} |c_{ij}|\, r_j \sin(\varphi_j - \varphi_i + \psi_{ij}). \qquad (132)

Equations (132) describe the dynamics of coupled oscillators when the coupling strength is comparable to the attraction of the limit cycle [Aronson et al., 1990], and can be exploited for the realization of neuro-computing mechanical devices [Hoppensteadt & Izhikevich, 2001]. The main advantage with respect to the simpler phase-oscillator model is the richer dynamical behavior, which includes oscillation death (the Bar-Eli effect [Aronson et al., 1990; Bar-Eli, 1985]), self-ignition [Smale, 1974; Kowalski et al., 1992], phase locking and phase drifting.

The analysis of equations (132) is a daunting problem unless certain restrictive hypotheses are imposed. In what follows we shall consider a particular architecture, i.e. a chain of identical oscillators, and we shall determine conditions for the occurrence of in-phase and anti-phase locked oscillations.

Theorem 4 Consider a chain composed of N identical delayed oscillators near a multiple Hopf bifurcation, with nearest neighbors symmetric connections and periodic boundary conditions, and define

\beta(r) = 2\cos\!\left( 2\pi \frac{r-1}{N} \right), \qquad r = 1, \ldots, N. \qquad (133)

Then the network exhibits:

• In-phase, constant amplitude oscillations if, \forall r = 1, \ldots, N,

(\beta(r) - 4)\, \mathrm{Re}\, c - \alpha < 0,
(\beta(r) - 2)\, |c|^2 - 4\, \mathrm{Re}^2 c - 2\alpha\, \mathrm{Re}\, c - 2\alpha \frac{\sigma}{\gamma}\, \mathrm{Im}\, c - 4 \frac{\sigma}{\gamma}\, \mathrm{Re}\, c\; \mathrm{Im}\, c < 0. \qquad (134)

• Anti-phase, constant amplitude oscillations if, \forall r = 1, \ldots, N,

(\beta(r) - 4)\, \mathrm{Re}\, c + \alpha > 0,
(\beta(r) - 2)\, |c|^2 - 4\, \mathrm{Re}^2 c + 2\alpha\, \mathrm{Re}\, c + 2\alpha \frac{\sigma}{\gamma}\, \mathrm{Im}\, c - 4 \frac{\sigma}{\gamma}\, \mathrm{Re}\, c\; \mathrm{Im}\, c < 0. \qquad (135)

Proof Under the hypotheses of the theorem, equations (132) reduce to

r_i' = \alpha r_i + \gamma r_i^3 + |c| \sum_{k=-1,\, k \neq 0}^{1} r_{i+k} \cos(\varphi_{i+k} - \varphi_i + \psi),

\varphi_i' = \Omega + \sigma r_i^2 + \frac{|c|}{r_i} \sum_{k=-1,\, k \neq 0}^{1} r_{i+k} \sin(\varphi_{i+k} - \varphi_i + \psi). \qquad (136)

Since we are interested in phase locked solutions, we consider \varphi_{i\pm1} - \varphi_i = \chi\ \forall i; thus the second of equations (136) implies

\chi' = \sigma\left( r_{i\pm1}^2 - r_i^2 \right) + \frac{|c|}{r_i\, r_{i\pm1}} \sin(\chi + \psi) \sum_{k=-1,\, k \neq 0}^{1} \left( r_i\, r_{i\pm1+k} - r_{i\pm1}\, r_{i+k} \right). \qquad (137)

We now observe that r_i = \bar{r}\ \forall i is an equilibrium point of (137). The associated equilibrium point for the first of equations (136) is

\bar{r}_i = \sqrt{ -\frac{\alpha + 2|c|\cos(\chi + \psi)}{\gamma} }. \qquad (138)

Thus the solutions \varphi_{i\pm1} - \varphi_i = \chi, r_i = \bar{r}\ \forall i correspond to phase locked, constant amplitude oscillations in the network.


The local stability of these solutions is investigated by looking at the real parts of the eigenvalues of the Jacobian matrix evaluated at such a solution. For a chain composed of N oscillators, the Jacobian matrix is a 2N \times 2N matrix. Defining the 2 \times 2 sub-matrices

J_1 = \begin{pmatrix} \dfrac{\partial r_i'}{\partial r_i} & \dfrac{\partial r_i'}{\partial \varphi_i} \\ \dfrac{\partial \varphi_i'}{\partial r_i} & \dfrac{\partial \varphi_i'}{\partial \varphi_i} \end{pmatrix}_{\substack{r_i = \bar{r} \\ \varphi_{i\pm1} - \varphi_i = \chi}} = \begin{pmatrix} \alpha + 3\gamma \bar{r}^2 & 2\bar{r}\,|c| \sin(\chi + \psi) \\ 2\sigma \bar{r} - \dfrac{2|c|}{\bar{r}} \sin(\chi + \psi) & -2|c| \cos(\chi + \psi) \end{pmatrix} \qquad (139)

and

J_2 = \begin{pmatrix} \dfrac{\partial r_i'}{\partial r_{i\pm1}} & \dfrac{\partial r_i'}{\partial \varphi_{i\pm1}} \\ \dfrac{\partial \varphi_i'}{\partial r_{i\pm1}} & \dfrac{\partial \varphi_i'}{\partial \varphi_{i\pm1}} \end{pmatrix}_{\substack{r_{i\pm1} = \bar{r} \\ \varphi_{i\pm1} - \varphi_i = \chi}} = \begin{pmatrix} |c| \cos(\chi + \psi) & -\bar{r}\,|c| \sin(\chi + \psi) \\ \dfrac{|c|}{\bar{r}} \sin(\chi + \psi) & |c| \cos(\chi + \psi) \end{pmatrix}, \qquad (140)

the Jacobian matrix can be written as

J = \begin{pmatrix} J_1 & J_2 & 0 & \cdots & 0 & J_2 \\ J_2 & J_1 & J_2 & 0 & \cdots & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & \cdots & 0 & J_2 & J_1 & J_2 \\ J_2 & 0 & \cdots & 0 & J_2 & J_1 \end{pmatrix}. \qquad (141)

Thus the Jacobian matrix is a block circulant matrix of type (N, 2) [Davis, 1979], and may be recast as

J = \mathrm{bcirc}\{ J_1, J_2, 0, \ldots, 0, J_2 \}. \qquad (142)

Each row is composed of N blocks (which are 2 \times 2 sub-matrices) and is obtained from the preceding row by simply shifting each block one step to the right. The block circulant nature of the Jacobian matrix is crucial for the stability analysis; in fact, a block circulant matrix can be put in block diagonal form by a unitary transformation. We define the Fourier matrix of order n as

F_n^* = \frac{1}{\sqrt{n}} \left( w^{(i-1)(j-1)} \right) = \frac{1}{\sqrt{n}} \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & w & w^2 & \cdots & w^{n-1} \\ 1 & w^2 & w^4 & \cdots & w^{2(n-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & w^{n-1} & w^{n-2} & \cdots & w \end{pmatrix} \qquad (143)

with w = e^{i\frac{2\pi}{N}}.

Any block circulant matrix A = \mathrm{bcirc}\{A_1, A_2, \ldots, A_N\}, where A_k is an n \times n sub-matrix, can be recast in block diagonal form by the unitary transformation

\mathrm{diag}(M_1, \ldots, M_N) = \sum_{k=0}^{N-1} \mathrm{diag}(1, w, \ldots, w^{N-1})\, F_n A_{k+1} F_n^*. \qquad (144)


In the present case, applying equations (143) and (144) to the Jacobian matrix, we obtain

M_r = J_1 + w^{(r-1)} J_2 + w^{(N-1)(r-1)} J_2 = J_1 + 2\cos\!\left( 2\pi\frac{r-1}{N} \right) J_2, \qquad r = 1, \ldots, N. \qquad (145)

It is well known that the eigenvalues of a block diagonal matrix are the eigenvalues of its blocks. Since the matrices M_r are 2 \times 2 matrices, their eigenvalues lie in the left half of the complex plane if and only if the following inequalities are satisfied:

\mathrm{Tr}(M_r) < 0, \qquad \mathrm{Det}(M_r) > 0, \qquad \forall r = 1, \ldots, N. \qquad (146)

Using equations (139), (140) and (145), we obtain

\mathrm{Tr}(M_r) = 2\left[ (\beta(r) - 4)\, |c| \cos(\chi + \psi) - \alpha \right] \qquad (147)

and

\mathrm{Det}(M_r) = (\beta(r) - 2)\Big[ (\beta(r) - 2)\, |c|^2 - 4|c|^2 \cos^2(\chi + \psi) - 2\alpha |c| \cos(\chi + \psi) - 2\alpha \frac{\sigma}{\gamma}\, |c| \sin(\chi + \psi) - 4\frac{\sigma}{\gamma}\, |c|^2 \cos(\chi + \psi) \sin(\chi + \psi) \Big]. \qquad (148)

Introducing \chi = 0 in equations (147) and (148), taking into account that \beta(r) - 2 \le 0, and substituting |c|\cos\psi = \mathrm{Re}\, c and |c|\sin\psi = \mathrm{Im}\, c, we obtain (134). Conversely, for \chi = \pi, considering |c|\cos(\psi + \pi) = -\mathrm{Re}\, c and |c|\sin(\psi + \pi) = -\mathrm{Im}\, c, we obtain (135).

Figure 2 shows the regions of the parameter space satisfying these conditions for a chain composed of 10 oscillators, in the case \sigma = 0 (no shear). Notice the existence of regions in the neighborhoods of \psi = \pi/2 and \psi = 3\pi/2 where in-phase and anti-phase locked oscillations can coexist. These regions become narrower as the number of oscillators increases, and are distorted by a nonzero value of \sigma, but they are not removed.

Figure 2: Parameter regions corresponding to the various phase lockings. Shading indicates regions where phase locked solutions exist. (a) In-phase locking. (b) Anti-phase locking. (c) Darker region: coexistence of in-phase and anti-phase locked oscillations.
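The regions of Fig. 2 can be reproduced, at least qualitatively, by scanning the coupling phase \psi and strength |c| and testing the trace/determinant conditions (146)-(148) for \chi = 0 and \chi = \pi (the r = 1 block has a zero determinant, the neutral direction associated with a common phase shift, and is excluded from the test). The parameter values below are illustrative assumptions.

```python
import numpy as np

N, alpha, gamma, sigma = 10, 1.0, -1.0, 0.0       # sigma = 0: no-shear case
beta = 2.0 * np.cos(2.0 * np.pi * (np.arange(1, N + 1) - 1) / N)   # Eq. (133)

def locked_state_is_stable(chi, psi, cabs):
    C = cabs * np.cos(chi + psi)
    S = cabs * np.sin(chi + psi)
    if alpha + 2.0 * C <= 0.0:      # r^2 from Eq. (138) must be positive (gamma < 0 here)
        return False
    tr = 2.0 * ((beta - 4.0) * C - alpha)                              # Eq. (147)
    det = (beta - 2.0) * ((beta - 2.0) * cabs**2 - 4.0 * C**2
                          - 2.0 * alpha * C - 2.0 * alpha * (sigma / gamma) * S
                          - 4.0 * (sigma / gamma) * C * S)             # Eq. (148)
    return bool(np.all(tr < 0.0) and np.all(det[1:] > 0.0))   # r = 1 excluded

psis = np.linspace(0.0, 2.0 * np.pi, 181)
cs = np.linspace(0.05, 2.0, 80)
coexist = sum(locked_state_is_stable(0.0, p, c) and locked_state_is_stable(np.pi, p, c)
              for p in psis for c in cs)
print("grid points where in-phase and anti-phase locking coexist:", coexist)
```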

Figures 3 and 4 show the evolution of the amplitudes and of the phase differences for a network composed of 10 oscillators, with \psi = \pi/4. The convergence towards constant amplitude, in-phase locked oscillations is evident.


Figure 3: Evolution of the amplitudes r_i(t) for a network composed of 10 oscillators, with random initial amplitudes r_i \in (0, 2].

Figure 4: Evolution of the phase differences \chi_i(t) for a network composed of 10 oscillators, with random initial conditions \chi_i \in (-\pi/4, \pi/4).
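A direct simulation in the spirit of Figs. 3 and 4 is sketched below: a ring of N = 10 identical oscillators governed by (136), integrated with forward Euler from random initial amplitudes and phases. The parameter values (and the slightly restricted range of initial amplitudes, kept away from r = 0 so that the explicit scheme stays well behaved) are assumptions chosen for illustration.

```python
import numpy as np

N, dt, steps = 10, 0.001, 20000                     # integrate up to t = 20
alpha, gamma, Omega, sigma = 1.0, -1.0, 1.0, 0.0
cabs, psi = 0.5, np.pi / 4                          # same psi as in Figs. 3-4

rng = np.random.default_rng(2)
r = rng.uniform(0.5, 2.0, N)                        # random initial amplitudes
phi = rng.uniform(-np.pi / 4, np.pi / 4, N)         # random initial phases

for _ in range(steps):                              # forward Euler on Eq. (136)
    rp, rm = np.roll(r, -1), np.roll(r, 1)          # ring neighbours i+1, i-1
    dphi_p, dphi_m = np.roll(phi, -1) - phi, np.roll(phi, 1) - phi
    dr = (alpha * r + gamma * r**3
          + cabs * (rp * np.cos(dphi_p + psi) + rm * np.cos(dphi_m + psi)))
    dphi = (Omega + sigma * r**2
            + (cabs / r) * (rp * np.sin(dphi_p + psi) + rm * np.sin(dphi_m + psi)))
    r, phi = r + dt * dr, phi + dt * dphi

chi = np.angle(np.exp(1j * (np.roll(phi, -1) - phi)))   # wrapped phase differences
print("final amplitudes:", np.round(r, 3))
print("final phase differences:", np.round(chi, 3))
# For psi = pi/4 the chain converges to constant-amplitude, in-phase locked
# oscillations, consistent with Figs. 3 and 4.
```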

6 Conclusions

Motivated by the idea that weakly connected oscillatory networks are bio-inspired models with associative properties which can be exploited for neural information processing, we have investigated the dynamic behavior of a weakly connected network composed of delayed neural oscillators, where each oscillator is close to a Hopf bifurcation.

For the uncoupled neural oscillator, the equilibrium points, the bifurcation phenomena and the stability of the solutions have been analyzed. It was shown that oscillatory behavior can be induced in the neural oscillator either by increasing the delay above a certain threshold or, in the presence of a subthreshold delay, by an intense enough constant external input.

With regard to the whole network, the weak connection hypothesis implies that the network is close to a multiple Hopf bifurcation and allows us to employ center manifold reduction and normal form theory. Resorting to these techniques, the dynamics of the whole network is reduced to an amplitude-phase model (i.e. a system of ODEs describing the evolution of both the amplitudes and the phases of the oscillators) which exhibits many interesting and complex dynamical behaviors.

As a case study, a chain composed of identical oscillators with nearest neighbors symmetric connections was considered. Analytical conditions which guarantee the existence of stable in-phase and anti-phase locked solutions were determined. The existence of regions in the parameter space where both phase locked behaviors are admissible and can coexist was proved. Since collective behaviors are believed to play a key role in neural information and image processing, we believe that the coexistence of different stable phase locked states supports the idea of using oscillatory networks as associative memories, where different phase locked behaviors are associated with different stored images.

Acknowledgments

This work was partially supported by the Ministero dell'Istruzione, dell'Università e della Ricerca, under the FIRB project no. RBAU01LRKJ, and by the CRT Foundation under the Lagrange Fellow project.

References

Aronson, D. G., Ermentrout, G. B. & Kopell, N. [1990] "Amplitude response of coupled oscillators," Physica D 41, 403-449.

Bar-Eli, K. [1985] "On the stability of coupled chemical oscillators," Physica D 14, 242-252.

Belair, J. & Campbell, S. A. [1994] "Stability and bifurcations of equilibria in a multiple-delayed differential equation," SIAM Journal on Applied Mathematics 54, 1402-1424.

Davis, P. J. [1979] Circulant Matrices (John Wiley and Sons, New York).

Faria, T. [2000] "On a planar system modelling a neuron network with memory," Journal of Differential Equations 168, 129-149.

Giannakopoulos, F. & Zapp, A. [2001] "Bifurcations in a planar system of differential delay equations modelling neural activity," Physica D 159, 215-232.

Gray, C. M., Konig, P., Engel, A. K. & Singer, W. [1989] "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties," Nature 338, 334-337.

Guckenheimer, J. & Holmes, P. [1982] Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields (Springer-Verlag, New York).

Hale, J. [1977] Theory of Functional Differential Equations (Springer-Verlag, New York).

Hassard, B. D., Kazarinoff, N. D. & Wan, Y. H. [1981] Theory and Applications of Hopf Bifurcation (Cambridge University Press, Cambridge).

Hoppensteadt, F. C. & Izhikevich, E. M. [1997] Weakly Connected Neural Networks (Springer-Verlag, New York).

Hoppensteadt, F. C. & Izhikevich, E. M. [1999] "Oscillatory neurocomputers with dynamic connectivity," Physical Review Letters 82, 2983-2986.

Hoppensteadt, F. C. & Izhikevich, E. M. [2001] "Synchronization of MEMS resonators and mechanical neurocomputing," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 48, 133-138.

Itoh, M. & Chua, L. O. [2004] "Star cellular neural networks for associative and dynamic memories," International Journal of Bifurcation and Chaos 14, 1725-1772.

Izhikevich, E. M. [1998] "Phase model with explicit time delays," Physical Review E 58, 905-908.

Kowalski, J. M., Albert, G. L., Rhoades, B. K. & Gross, G. W. [1992] "Neuronal networks with spontaneous, correlated bursting activity: Theory and simulations," Neural Networks 5, 805-833.


Kuramoto, Y. [1984] Chemical Oscillations, Waves and Turbulence (Springer-Verlag, New York).

Liu, C., Weaver, D. R., Strogatz, S. H. & Reppert, S. M. [1997] "Cellular construction of a circadian clock: Period determination in the suprachiasmatic nuclei," Cell 91, 855-860.

Olien, L. & Belair, J. [1997] "Bifurcations, stability, and monotonicity properties of a delayed neural network model," Physica D 102, 349-363.

Peskin, C. S. [1975] Mathematical Aspects of Heart Physiology (Courant Institute of Mathematical Sciences, New York).

Roskies, A. L. [1999] "The binding problem," Neuron 24, 7-9.

Schillen, T. B. & Konig, P. [1994] "Binding by temporal structure in multiple feature domains of an oscillatory neuronal network," Biological Cybernetics 70, 397-405.

Smale, S. [1974] "A mathematical model of two cells via Turing equation," Lectures in Applied Mathematics 6, 15-26.

Wei, J. & Ruan, S. [1999] "Stability and bifurcation in a neural network model with two delays," Physica D 130, 255-272.
