A method for determining neural connectivity and inferring the underlying network dynamics using extracellular spike recordings

Valeri A. Makarov a,*, Fivos Panetsos a, Oscar de Feo b

a Neuroscience Laboratory, Department of Applied Mathematics, School of Optics, Universidad Complutense de Madrid, Avda. Arcos de Jalon s/n, 28037 Madrid, Spain
b Laboratory of Nonlinear Systems, Swiss Federal Institute of Technology Lausanne, EPFL-IC-LANOS, CH-1015 Lausanne, Switzerland

Journal of Neuroscience Methods xxx (2004) xxx–xxx
Received 11 October 2004; received in revised form 10 November 2004; accepted 12 November 2004

Abstract

In the present paper we propose a novel method for the identification and modeling of neural networks using extracellular spike recordings. We create a deterministic model of the effective network whose dynamic behavior fits the experimental data. The network obtained by our method includes explicit mathematical models of each of the spiking neurons and a description of the effective connectivity between them. Such a model allows us to study the properties of the neuron ensemble independently from the original data. It also permits inferring properties of the ensemble that cannot be directly obtained from the observed spike trains. The performance of the method is tested with spike trains artificially generated by a number of different neural networks.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Neural circuits; Spike trains; Connectivity identification; Network modeling

* Corresponding author. Tel.: +34 91 394 6900; fax: +34 91 394 6885. E-mail address: [email protected] (V.A. Makarov).
0165-0270/$ – see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.jneumeth.2004.11.013

1. Introduction

The qualitative and quantitative analysis of the spiking activity of individual neurons is a very valuable tool for the study of the dynamics and architecture of the neural networks in the central nervous system (Kandel et al., 2000; Moore et al., 1966; Perkel et al., 1967a). Nonetheless, such activity is not due solely to the intrinsic properties of the individual neural cells; it is mostly a consequence of the direct influence of other neurons, from a few to hundreds of thousands, which in general leads to dynamical behaviors far beyond a simple combination of those of the isolated neurons. Although any behavior of a neural network depends on the interactions of a large number of neural cells, on their morphology and their entire interconnection pattern, usually we cannot record the activity of each one of these cells; rather, we are restricted to a very limited sample of the neurons of the network whose properties we aim to capture. Moreover, deducing the effective connectivity between neurons whose experimental spike trains are observed is of crucial importance in neuroscience: first, for the correct interpretation of the electrophysiological activity of the involved neurons and neural networks, and, second and probably more important, for correctly relating the electrophysiological activity to the functional tasks accomplished by the network, be they as simple as a response to a sensory stimulus or as complex as interpreting a literary text.

The above-mentioned notion of "effective connectivity" is defined as the simplest neuron-like circuit that would reproduce the same temporal relationships between neurons in a cell assembly as those observed experimentally (Aertsen and Preissl, 1991). In other words, by "effective" we mean any observable direct or indirect interaction between neurons that alters their firing activity. Summarizing, when dealing with multiunit extracellular recordings we have the following objectives: (i) inferring the effective connectivity and neuron properties of sub-networks (limited to the neurons experimentally available) and (ii) extrapolating the functionality of the "whole" from the properties of the collected and classified sub-networks.



To address these problems a common experimental approach is to use extracellular multiunit recordings. In this case spike trains (the time instants of spike occurrences, point events) do not, in general, allow any direct insight into the subthreshold and/or intrinsic membrane dynamics of the neurons. Nevertheless, spike trains can be used to identify the functional characteristics and architecture of the neural network they originated from (e.g. Perkel et al., 1967b; Segundo, 2003). Though possible, identifying the effective neural circuits from the spike trains represents a very complex task even for a network of few neurons. The available mathematical methods fall short of the research needs, and the large majority of neurophysiologists restrict their study to the description of the neural activity by means of cross-correlograms (Perkel et al., 1967b), a tool widely understood but which provides very limited knowledge about the functional properties of the neural networks. For instance, in the case of neural ensembles of three or more neurons the cross-correlation may be easily fooled by the presence of indirect connections (via other neuron(s)), or also due to a common input.

Recently, more sophisticated statistical methods have been introduced for the identification of effective connectivity in relatively large neural networks. Nevertheless, to extract the interactions among neurophysiological data from two or more neural elements, or brain sites, all these methods always recur to the evaluation of some kind of covariance or correlation between the multiple signals. In fact, even the methods that go beyond simple correlation (e.g., regression analysis, principal components analysis and multidimensional scaling) conceptually embody the notion of co-variation in activity (Horwitz, 2003). In this framework a method based on linear partialization (conditional probability) in the frequency domain has been proposed (Brillinger et al., 1976; Dahlhaus et al., 1997; Rosenberg et al., 1989). Although this approach allows distinguishing direct from indirect (through other neurons) connections, it does not differentiate between excitatory and inhibitory synapses, a problem solved by Eichler et al. (2003) by adopting a similar approach based on partialization in the time domain. Another approach, called partial directed coherence (e.g. Sameshima and Baccala, 1999), uses Granger causality (Granger, 1969) to expose the direction of information flow. Two further methods, direct causality (Kaminski et al., 2001) and the direct directed transfer function (Korzeniewska et al., 2003), have been introduced. These methods allow identifying the presence of feedback between two or more neurons, but coupling polarities are not directly accessible. Although these methods have been successfully applied to simulated networks of randomly spiking coupled neurons, their application to real data is basically limited because: (i) they do not allow resolving mutual couplings between neurons and/or do not distinguish the type of such couplings; (ii) as a rule their application assumes the use of relatively large spike trains with constant statistical properties, a condition difficult to satisfy in experiments; (iii) they usually fail when applied to excessively rhythmic neural assemblies, a rather common situation which may just represent an objective of the research.

An additional very important remark is that all these methods assume a stochastic nature of the spike trains generated by neurons. Consequently, no considerations are made about the dynamics of the involved neurons or about the nature of the intrinsic processes that are responsible for such behavior, with the consequent enormous difficulties in subsequent steps of the study: assigning functional and/or neurochemical properties to the neurons, determination of anatomical correlates, etc. Furthermore, all these methods deal only with the connectivity patterns, i.e. only the presence and sometimes the type and direction of the couplings between neurons can be estimated. No knowledge about the absolute values of the couplings or other parameters of the network can be drawn.

In contrast to a purely statistical approach, a deterministic one can be considered. The main advantage of deterministic methods is the use of mathematical models for inferring single-neuron or neural network properties (indirect method), especially useful where direct observation of the neural dynamics (experimental procedures) is very difficult or even impossible. In general, in the literature, the indirect inference of network properties from models is approached in a very abstract manner, considering relatively simple, usually vaguely biophysically meaningful, neuron models (like phase oscillators (e.g. Ermentrout, 1982), FitzHugh-Nagumo (FitzHugh, 1961), Plant (Plant, 1981), Hindmarsh-Rose (Hindmarsh and Rose, 1984), etc.) connected in networks which are either large and extremely regular (chains, rings, lattices, global or random couplings) or composed of only a few (two) neurons. Alternatively, the Hodgkin–Huxley formalism (Hodgkin and Huxley, 1952) for modeling the dynamics of an individual neuron can be adopted. However, the need for a priori investigations of the channel dynamics of each particular neuron and of the synaptic transduction properties, together with the computational complexity of its integration, restricts its use as well to the cases of very regular networks or of only a few neurons. Whilst these approaches allowed studying many phenomena experimentally observed in the central nervous system (like partial synchronization, phase or frequency locking, different types of waves, clustering, etc.), they lack direct biophysical interpretation, and the study of the dynamics of intermediate-size networks, having biophysically supported effective connectivity patterns, is still a challenging problem.

In this direction, a method for extracting a dynamical system out of the interspike intervals (ISIs) has been proposed (Racicot and Longtin, 1997; Sauer, 1994). This method allows concluding, for instance, about the possible chaotic nature of the spike timings of a neuron (e.g. Pavlov et al., 2001). Recently another approach taking into account the deterministic dynamics of a single neuron's activity has been introduced (Paninski, 2004; Paninski et al., 2003; Pillow and Simoncelli, 2003). This approach provides a biophysically more realistic alternative to the models based on Poisson (stochastic) spike generation. It was shown that the leaky integrate-and-fire cell driven by a noisy stimulus demonstrates an adaptive behavior similar to the effects observed in vivo and in vitro; spiking dynamics may account for changes observed in the receptive fields measured at different contrasts. However, these methods assume neurons to be isolated; hence, they do not provide insights about the neural network structure and its relationships with the observed dynamics.

Within the deterministic approach, here we propose a novel mathematical method for the identification of connectivity and the modeling of neural networks using extracellular spike recordings. The aim is to obtain an inferable deterministic model of the neural ensemble, including the interneuron connections. Such a model allows studying the properties of the network independently from the original data and also permits inferring dynamical properties that cannot be directly obtained from the observed spike trains. The mathematical algorithm has been implemented and a PC package is freely available (Makarov, 2004).

2. Materials and methods

2.1. Network identification and modeling

The method we propose is capable of analyzing a set of experimentally recorded spike trains, providing a mathematical model of interconnected spiking neurons that generates spike sequences like the experimental ones. Moreover, our method provides the effective architecture of the neural network, including the type, direction, and strength of the synapses. Once obtained, the deterministic model of the neural network will help us in interpreting both the dynamic behavior of the single neurons and their interactions, characteristics that, in general, cannot be directly obtained from the spike trains alone. For this reason our procedure is divided in two steps: (i) determination of a mathematical model of the neural network and of the neurons that form it, and (ii) the use of the mathematical model to investigate the network dynamics and functionality. Let us now give an outline of the method, whilst the mathematical details are reported in Appendix A.

Having spike series of N neurons, we assume a network composed of N interconnected dynamical systems, each of which describes the spiking behavior of one recorded neuron, i.e. we consider one dynamical system for each neuron. The behavior of each of the N dynamical systems is described by a set of differential equations that depend on a number of parameters unknown a priori. The connections between neurons are represented by an N × N matrix W whose elements wkn are the weights of the synapses between the n-th and the k-th neurons, e.g. w12 represents the synaptic weight of the coupling directed from the presynaptic neuron 2 to the postsynaptic neuron 1. Schematically, this results in a graph whose N vertices represent the neurons whose spike trains are experimentally observed, and the links between vertices correspond to the effective synapses between the corresponding neurons, as illustrated in Fig. 1A. Then, the parameter values of the network model (the parameters of each one of the dynamical systems and, among them, the strengths and types of the synapses in the network) are determined from the spike trains by means of a minimization procedure.

Recently, Jolivet et al. (2004) have shown that detailed conductance-based models (Hodgkin–Huxley type) can be well fitted by single-variable integrate-and-fire models. Accordingly, for each of the N interconnected dynamical systems we use the single-compartment leaky integrate-and-fire model (e.g. Stein, 1967):

C dV/dt = −GL(V − VL) + I0 + Isyn(t),    (1)

where V is the neural membrane potential, C the membrane capacity, GL and VL are the conductance and the reversal potential of the leakage current, Isyn(t) is the synaptic current induced by the spikes from the other neurons of the network, and the constant current I0 allows neurons to fire periodically when uncoupled. Whenever the membrane potential V reaches a threshold, Vth (about −50 mV), a spike is fired and V is instantaneously reset to the initial state, VL. Hence, for I0 > GL(Vth − VL) the single neuron model (1) produces tonic spikes. Whilst, for non-spiking neurons, the resting membrane potential (usually about −60 mV) is V0 = VL + I0/GL.

Fig. 1. Graphical representation of the network model. (A) Graph with N vertices (the case of three vertices is shown). Each vertex corresponds to a single neuron whose spike train is available experimentally. The intrinsic dynamics of a neuron is modeled by the single-compartment leaky integrate-and-fire model. Each neuron can receive synaptic input from the (N − 1) other neurons, and the links between vertices correspond to the (effective) synaptic connections between neurons. The connectivity matrix {wkn} defines the synaptic weights to be determined. A positive (negative) weight corresponds to an excitatory (inhibitory) synapse, while zero corresponds to the absence of the synapse. (B) The input spike trains (the case of one train is shown) modify the neuron membrane potential and increase (decrease), for inhibitory (excitatory) synapses, the ISIs of the neuron with respect to its intrinsic firing.

Fig. 2. Qualitative illustration of the working principle of the identification algorithm. A neuron, "k", receives two synaptic inputs of different polarity, i.e. excitatory and inhibitory (two upper spike trains). Input spikes provoke a postsynaptic current, Isyn; the current associated with a single spike is modeled by an exponential function with amplitude and polarity defined by the corresponding entry of the connectivity matrix, W (in this case we have two components w1 and w2 for the two synapses). The time decays of the synaptic current, or duration of the synaptic transmission, are defined by two parameters λ1 and λ2. The gray bars show time intervals when the membrane potential, V, is altered by the synaptic current and deviates (continuous line) from the intrinsic membrane dynamics (dashed line). When the membrane potential reaches the threshold, Vth, the modeled neuron fires a spike and V is reset to the initial state, VL. Accordingly, the parameter set p = {i0, τ, λ1, λ2, w1, w2} uniquely defines the dynamics of the modeled neuron for given input spike trains. Hence, the parameter values can be adjusted to minimize the sum of the squared differences (ΔTi) between the experimentally observed firing of the neuron ("experimental output") and the firing predicted by the model ("model output"). The resulting parameter set p* gives the best (predictive) estimate of the intrinsic parameters and of the entries in the connectivity matrix corresponding to the modeled neuron.
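As an illustration of this network model, the sketch below integrates Eq. (1) with a simple forward-Euler scheme, using the exponential synaptic kernel of Eq. (A.2) for the coupling currents. It is only a minimal sketch: all parameter values, units and function names are arbitrary choices of the example and are not values identified from data.

import numpy as np

def simulate_lif_network(W, lam, GL, VL, Vth, I0, C, T=2.0, dt=1e-4):
    """Forward-Euler simulation of the leaky integrate-and-fire network of Eq. (1),
    with each synapse carrying an exponential current as in Eq. (A.2).

    W[k, n] is the synaptic weight from presynaptic neuron n to postsynaptic neuron k;
    lam[k, n] is the corresponding synaptic decay time. Returns one spike-time array per neuron.
    """
    N = W.shape[0]
    V = VL.copy().astype(float)            # membrane potentials start at the reset value
    Isyn = np.zeros((N, N))                # one exponentially decaying kernel state per synapse
    spikes = [[] for _ in range(N)]
    for step in range(int(T / dt)):
        t = step * dt
        Isyn *= np.exp(-dt / lam)          # decay of all synaptic kernels
        dV = (-GL * (V - VL) + I0 + (W * Isyn).sum(axis=1)) / C
        V = V + dt * dV
        for k in np.where(V >= Vth)[0]:    # threshold crossing: fire and reset to VL
            spikes[k].append(t)
            V[k] = VL[k]
            Isyn[:, k] += 1.0 / lam[:, k]  # a spike of neuron k drives its postsynaptic targets
    return [np.array(s) for s in spikes]

# Hypothetical two-neuron example: neuron 1 fires intrinsically (I0 > GL*(Vth - VL))
# and excites neuron 2, which by itself would stay below threshold.
W = np.array([[0.0, 0.0],
              [0.005, 0.0]])               # w21 > 0: excitatory synapse from neuron 1 to neuron 2
lam = np.full((2, 2), 5e-3)                # 5 ms synaptic decay
GL, C = np.full(2, 0.05), np.full(2, 1e-3)
VL, Vth = np.full(2, -60.0), np.full(2, -50.0)
I0 = np.array([0.6, 0.3])
trains = simulate_lif_network(W, lam, GL, VL, Vth, I0, C)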

Our procedure to adjust the parameters of the model uses the ISIs of the recorded spike trains. It relies on the general assumption that when a neuron receives an excitatory input it fires in advance with respect to its intrinsic firing interval and, consequently, it increases its firing rate. Inversely, it delays firing for an inhibitory input, with a consequent decrease in its firing rate (Fig. 1B). This is a general property of Type I membranes, which have non-negative phase resetting curves (Ermentrout, 1996), and it is indeed compatible with the leaky integrate-and-fire neuron model (van Vreeswijk et al., 1995). Type II neurons can have regions with negative phase resetting curves, for instance when undergoing a supercritical Hopf bifurcation (many membranes can be brought into this regime at high enough temperatures, but it is normally unusual) (Ermentrout, 1996).

The dynamic behavior of each neuron is determined by the parameters of each incoming synapse, by the incoming spiking activity and by the parameter values that determine its intrinsic behavior (leaky time and intrinsic spiking). Starting from arbitrarily assigned parameter values, we adjust them by means of a mean square method (Motulsky and Christopoulos, 2003), taking into account the differences between the predicted and measured spike trains.

Fig. 2 sketches the working principle of the method. We assume that the k-th neuron has two afferent synapses of different types, i.e. one excitatory and one inhibitory. For each one of the two synapses, an input spike induces an exponential current, and the sum of them gives the postsynaptic current Isyn. The amplitude and the sign of the two components of the postsynaptic current are defined by the corresponding weights of the connectivity matrix, W; in this case, with only two synapses, let us call them shortly w1 and w2. Between each two spikes of the k-th neuron, in the absence of external input, the membrane potential evolves according to its intrinsic dynamics (dashed line in Fig. 2, see also Appendix A). When external spikes come in, the membrane potential deviates from its natural trajectory, as highlighted in Fig. 2, and every time it reaches the threshold Vth the neuron fires a spike and V is reset to the initial state VL. Given the input spike trains, the parameters of the equations of the model uniquely define the dynamics within two successive spikes of the neuron. At this point, keeping in mind that with this method we are interested in capturing the underlying dynamics of the network, we adjust the parameter values in order to minimize the sum of the squared differences between the experimental spike trains and the trains predicted by the model, i.e. the sum of the squared ΔTi highlighted in Fig. 2. The set of the parameters corresponding to the global minimum of the squared sum (cost function) gives the best estimate of the components of the connectivity matrix and of the other parameters of the neurons in the modeled network.
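To make this cost explicit, the sketch below integrates a single modeled neuron over each experimental ISI (resetting at every recorded spike, as in Fig. 2) and accumulates the squared deviations ΔTi. It assumes the dimensionless formulation of Eq. (A.4) in Appendix A; all names are illustrative and the step-by-step integration is only a stand-in for the closed-form prediction used in the actual method.

import numpy as np

def predicted_firing_time(p, inputs, t_max, dt=1e-3):
    """Euler-integrate the dimensionless neuron of Eq. (A.4) from v = 0 and return the
    first time v reaches 1 (or t_max if it does not fire within the horizon).

    p = (i0, tau, lambdas, weights); inputs = one array of input spike times per
    presynaptic neuron, measured from the start of the interval.
    """
    i0, tau, lambdas, weights = p
    v, t = 0.0, 0.0
    while t < t_max:
        syn = sum(w / lam * np.exp(-(t - s) / lam)
                  for w, lam, train in zip(weights, lambdas, inputs)
                  for s in train if s <= t)
        v += dt * (-v / tau + i0 + syn)
        t += dt
        if v >= 1.0:
            return t
    return t_max

def cost(p, measured_isis, inputs_per_isi, horizon=10.0):
    """Sum of squared differences between measured and predicted ISIs, i.e. the quantity
    minimized over the parameter set p (cf. Fig. 2 and Eq. (A.9))."""
    return sum((Ti - predicted_firing_time(p, inputs, horizon)) ** 2
               for Ti, inputs in zip(measured_isis, inputs_per_isi))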


2.2. Use of the identified neural network model

Once a model of the whole neural network has been settled, i.e. all the parameter values of the equations (including the interconnections) have been determined, it can be mathematically investigated, independently from the original data, in order to assess the relationships between the dynamical behavior and the effective network architecture. The model of the network is functionally equivalent to the real one, although the architectural differences between the real and modeled networks may be important. For example, a sequence of two or more interconnected neurons could be modeled by only one, or a modeled neuron may send both excitatory and inhibitory connections to its postsynaptic neurons.

A functionally meaningful neural network model fitting the observed data opens a broad variety of investigative possibilities. For example, one can study statistical properties of the model dynamics, bifurcations of the vector field, and even collective chaotic behavior. As guidance, we report here two examples illustrating the kind of information that may be retrieved from the model, leaving the further exploitation of the modeling approach to the researchers.

It should be noted that, since we built a deterministic model, we can easily reverse the problem. Indeed, as soon as the parameter values of the model (i.e. connectivity matrix, decay scales, etc.) have been identified, the model equations can be numerically integrated (simulated), resorting for instance to one of the ODE integrators of Matlab®, and the model spike trains can be obtained. Though, since the commonly available simultaneous recordings are spike trains from few neurons despite the fact that they are part of larger neural circuits, a natural extension of the model is to add noise sources into Eq. (1) for modeling all unmeasured phenomena, including inputs from unobserved neurons and a noisy environment. In the simplest cases, white independent zero-mean Gaussian processes (noise) can be used.

The simplest use of the model (through simulations) is the assessment of timing-related properties. Simulations may be used to generate spike trains; then statistical properties (e.g. cross-correlograms) of the model-generated data can be computed and compared with those of the originally measured ones. This comparison can highlight, for instance, to which extent the observed statistical properties are related to the effective architecture, especially if the simulations have also been performed slightly modifying the identified parameters.

On the other hand, there are network properties which cannot be retrieved from the spike timings alone but which, on the contrary, may be inferred from the model. In the case when two tonic, approximately synchronous, spiking neurons are observed, usually the spike trains show very high regularity, their ISIs being practically constant and varying only slightly with the synaptic input. Consequently, auto-correlation histograms will demonstrate equidistantly distributed peaks indicating the tonic spiking of both neurons and, at the same time, the cross-correlation will have at least one peak indicating the common behavior. However, the joint regular spiking of two neurons can be due to the dynamical behavior of three very different effective networks: (i) one of the neurons may not intrinsically fire at all and produce tonic spikes due to the input from the other neuron; (ii) alternatively, the firing frequencies of the two neurons are an outcome of the collective dynamics (effective connectivity) and are, in general, different from their intrinsic frequencies; (iii) finally, neither of the two neurons is intrinsically spiking and their regular spiking is the result of the mutual effective interconnection. From the correlograms and other statistical methods it is not possible to distinguish which of these three cases is observed. On the contrary, this information is inferable from a simple mathematical analysis of the model: namely, if the identified parameter values of a neuron are such that I0/(GL(Vth − VL)) > 1, then this neuron is intrinsically firing; otherwise the neuron fires due to the collective network dynamics.

2.3. Testing the method performance: generation of "experimental" spike trains

To verify our technique we used artificially generated spike trains, since the only way to judge the results of our method is to know all the properties of the network, especially the connectivity pattern. To generate the spike trains we consider five neural networks combining two essentially different kinds of neuron models: the probabilistic spike response model (SRM, Gerstner, 1995) and the deterministic model for regularly spiking (RS) neurons recently proposed by Izhikevich (2003). On one hand we chose the SRM because, in the absence of couplings, such a neuron fires an irregular spike train with a Poisson-like distribution of the ISIs; thus no assigned firing frequency exists. Consequently, it allows assessing the robustness of the method with respect to the determinism of the modeled neuron. On the other side, the RS model has been chosen because it is a computationally efficient yet very plausible model of spiking neurons and permits simulating a variety of behaviors of biological neurons, like spiking, bursting, chattering, frequency adaptation, etc. (Izhikevich, 2004). Since the RS model is strictly deterministic, to add variability to the dynamics and emulate the environmental fluctuations and/or the driving activity of other "unobserved" neurons, we introduce stochastic forces into the model equations. Neither of the two chosen neuron models belongs to the class of integrate-and-fire ones, since we want to assess the robustness of the proposed technique with respect to our most restricting hypothesis, i.e. the use of the integrate-and-fire model. The equations and descriptions of the two models are briefly recalled in Appendix B.

Experiments 1–3 consider combinations of two SRM neurons: (1) unidirectional excitatory connection; (2) unidirectional inhibitory connection; (3) mutual inhibitory–excitatory loop. Despite that in theory there are (2³ + 1) cases, these three experiments practically cover all the possible two-neuron connections, since the other six are particular cases of these combinations. In experiment 4 we consider the mutual inhibitory–excitatory loop of two RS neurons; and in experiment 5 we analyze the spikes generated by a classical three-neuron network formed by two interconnected SRM neurons and one RS neuron. Experiment 6 assesses the robustness of the method with respect to the amount of data (length of the available spike train recordings) and the complexity of the underlying network (cases of three- and five-neuron networks are considered).

3. Results

First, the performance of the method is assessed on three neural networks of two probabilistic (SRM) neurons with low firing rates and one network of two tonic (RS) neurons; we generate experimental traces lasting 40 s for the SRM networks and 20 s for the RS network, thus having about 50–150 spikes for each experiment. Second, we use a mixed SRM-RS three-neuron network with experimental spike trains lasting 20 s. In each experiment we apply our method to identify the connectivity pattern and the intrinsic parameter values of the model of the neural network. Afterwards, we simulate the model at the identified parameter values. We compare the connectivity patterns (presence and type of the synapses), and we also crosscheck statistical properties, e.g. ISI and cross-correlation histograms, of the modeled and "experimental" spike trains.

Finally (experiment 6), the robustness of the method is tested on two complex networks of three and five interconnected neurons. For each network we perform 100 Monte Carlo experiments varying the data segment length (duration of recording), and then we calculate the rate of successful structural inferences as the number of correctly detected synapses divided by the number of possible couplings.
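The statistics used for these comparisons are ordinary ISI and cross-correlation histograms; a minimal NumPy version is sketched below, with bin widths and the lag window chosen arbitrarily for the example.

import numpy as np

def isi_histogram(spike_times, bins=50, isi_max=1.0):
    """Histogram of interspike intervals of one spike train (times in seconds)."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    return np.histogram(isis, bins=bins, range=(0.0, isi_max))

def cross_correlogram(reference, target, window=0.5, bin_width=0.01):
    """Counts of target spikes at lags (target - reference) within +/- window
    around every reference spike."""
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    target = np.asarray(target)
    for t in np.asarray(reference):
        lags = target - t
        counts += np.histogram(lags[np.abs(lags) <= window], bins=edges)[0]
    return counts, edges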

3.1. Experiment 1

Fig. 3 shows the results for the case of the unidirectional excitatory coupling. Here and further on, we use "V"- and "T"-like link ends to mark excitatory and inhibitory synapses, respectively. The first neuron is autonomous; hence it fires irregularly with a very broad spectrum of ISIs. The firing of the second neuron, even though irregular, does depend on the spiking of the first neuron. The upper part of Fig. 3, marked as "experiment", illustrates the experimental spike trains and the corresponding ISI and cross-correlation histograms. The cross-correlation histogram exhibits a high peak centered at a positive value of the time shift close to zero, suggesting the presence of coupling. The firing rate of both neurons is within the range of 0.5–50 Hz but, due to the excitatory input, neuron 2 fires more frequently: the mean frequencies are 2 and 4 Hz for neurons 1 and 2, respectively. Neuron 2 may also produce bursts as a response to input spikes that strongly depolarize it. The blind use of the identification method correctly detects the presence (and strength) of the excitatory coupling (w21 = +31.3). It also estimates a negligible value (about 1% of the excitatory link) for the feedback coupling (w12 = −0.3). Indeed, further investigations show that this vanishing coupling is irrelevant for the dynamics of the network.

Fig. 3. Identification and modeling of a two-neuron network with unidirectional excitatory coupling between two SRM neurons. The upper part ("experiment") shows the experimental spike trains and their ISI and cross-correlation histograms. The lower part ("model") illustrates: the identified connectivity matrix, Wmd, which correctly captures the experimental connectivity pattern (only presence, polarity and relative weights are comparable); and the modeled spike trains with their ISI and cross-correlation histograms, which are in good accordance with their experimental counterparts.

It should be noted that the "experimental" and identified coupling matrices can be compared only qualitatively, as the absolute values of their entries are incommensurable. In fact, they refer to two completely different mathematical models. Moreover, even in the case where the same model had been used for both "experiment" and identification, the matrices would not be "linearly" comparable, since their elements are nonlinearly obtained from the spike timings; e.g. for a strong enough coupling strength the spiking rate may saturate or even decrease.

We performed a simulation using the model of the network we obtained, to which we added white noise. The lower part of Fig. 3, marked as "model", reports the spike trains and the corresponding ISI and cross-correlation histograms of the model. The comparison of the upper and lower parts of the figure highlights a good statistical agreement of the model with the experimental data. Indeed, as for the experimental data, neuron 1 fires with a strong variability of ISIs whilst, since the firing of the first neuron evokes spikes in the second one, the ISI histogram of neuron 2 decays rapidly. Furthermore, the cross-correlation histogram exhibits a high peak for small positive values of the time shift, very similar to the one of the experimental data.

Fig. 4. Same as in Fig. 3 for the case of inhibitory coupling.

3.2. Experiment 2

Similarly to the first case, Fig. 4 shows the results for the case of an inhibitory synapse. The inhibitory synapse induces two observable effects on the statistics: first of all, the elongation of the ISI histogram of neuron 2, i.e. its mean firing frequency decreases to 1.6 Hz, and second, the emergence of a valley in the positive part of the cross-correlation histogram. Alike to the case of the excitatory synapse, the identification method provides the correct connectivity pattern, and the statistical properties of the model are very similar to those from the experiment, further supporting the potential of the method.

3.3. Experiment 3

Fig. 5 shows the results for the more complex case of two neurons forming an excitatory–inhibitory loop. In this case the spike trains from the two neurons are not trivially interrelated. The presence of the excitatory synapse is pointed out by the experimental cross-correlation histogram, which shows a peak similar to that in Fig. 3 for the case of unidirectional coupling. However, the presence of an inhibitory synapse is not obvious, as there is no pronounced difference between the histograms of Figs. 3 and 5. On the contrary, alike to the previous cases, the identification method provides the correct connectivity pattern, and the simulation of the network results in a satisfactory statistical accordance between the experimental and model-produced data.

3.4. Experiment 4

The last two-neuron network is composed of two tonic spiking neurons connected into an excitatory–inhibitory loop, as shown in Fig. 6. The experimental spike trains and the results of their statistical analysis are shown in the upper part ("experiment") of Fig. 6. The autocorrelations clearly demonstrate equidistantly distributed peaks representative of the tonic spiking of both neurons, whilst the peaks in the cross-correlation highlight the presence of synaptic coupling. However, as in the previous experiment, the analysis of the histograms neither allows outlining the connectivity pattern nor allows drawing any conclusion about the intrinsic spiking nature of the two neurons. On the contrary, our identification method provides the correct connectivity pattern. Also the simulation of the modeled network provides results in a satisfactory statistical accordance with the experimental ones. As can be seen, there is a good qualitative agreement between the experimental and model cross-correlation histograms, though the former has less pronounced peaks and the depression between peaks is not as strong as in the model histogram. This is due to the nature of the RS model we use for generating the experimental spike trains, which does not belong to the class of renewal models, i.e. its spiking shows an adaptation phenomenon (Izhikevich, 2004). Consequently, the ISI following a short ISI tends to elongate, which in turn results in the fuzzy peaks of the experimental cross-correlogram. Finally, the simulation of the two isolated (modeled) neurons allows spotting exactly the intrinsic nature of both neurons as being regularly spiking, highlighting in this way the modeling ability of our identification technique.

Fig. 5. Same as in Fig. 3 for the case of an excitatory–inhibitory coupling loop.

Fig. 6. Same as Fig. 3 for the case of a two-neuron network of mutually interconnected tonic spiking neurons (RS model).

3.5. Experiment 5

Fig. 7 summarizes the results obtained when applying the identification technique to a mixed RS-SRM three-neuron network. Considering the experimental data, we observe that neuron 1 has a relatively high firing frequency and a strong variability of ISIs, and it fires in bursts when it gets the excitatory input from the low-rate, irregularly firing neuron 2. On the other side, neuron 3 shows regular fast spiking activity. Note that this case has different feedback loops and indirect (via a third neuron) connections, which makes the study of the connectivity pattern by means of the conventional cross-correlation methods problematic. However, in agreement with all the previously considered experiments, also for this harder case the application of our deterministic method leads to the identification of the correct connectivity pattern. Yet again, the simulation of the identified network provides results in a satisfactory statistical accordance with the experimental ones. The simulation of each one of the three neurons taken separately allows spotting the intrinsic regularly spiking nature of one of them, further highlighting the power of the proposed modeling technique.

Fig. 7. Test of the method on a complex three-neuron network. Neurons 1 and 2 are modeled with the SRM, while neuron 3 (RS model) fires tonic spikes with a high firing rate. The network includes different excitatory–inhibitory loops and indirect connections. The connectivity matrix Wex is used for the generation of the experimental spike trains. The outcome of the method application is the matrix Wmd, which correctly captures the interneuron connectivity. Simulation of the identified neural network (model) shows a good statistical accordance with the experimental data.

3.6. Experiment 6

Fig. 8 summarizes the results of the robustness test. The network architecture, connectivity matrix and experimental spike trains for the case of the five-neuron network are shown in Fig. 8A, together with two representative examples of the connectivity matrix found during the identification using spike trains lasting 20 and 60 s, respectively. The matrix obtained for longer spike trains is structurally closer to the experimental one. In the case of the three- (five-)neuron network there exist 6 (20) couplings to be identified by the method. Fig. 8B shows (100 Monte Carlo simulations) the mean percentage of correctly identified synapses as a function of the recording length. In both cases the method shows a good performance, increasing with the amount of data available for identification. In the case of the three-neuron network the method identifies correctly 76.9% of the synapses with spike trains lasting only 10 s. The percentage reaches 96.5% with 30 s of data and practically does not vary after this time interval (95.9% with 60 s). With the five-neuron network the performances are 59.1%, 73.3% and 85.8%, respectively. Note that before identification each synapse in the connectivity pattern has a priori three equiprobable possibilities (it may be excitatory, inhibitory or null). Hence, a 50% success rate in the identification already means a better performance than a random assignation, which in this case would have a 33% rate of success. As expected, the complexity of the network (number of involved neurons) decreases the method performance. For the same recording length a simpler network (three neurons) is identified better than the complex one (five neurons).

Fig. 8. Performance test of the network identification. (A) Results of a five-neuron network simulation and examples of connectivity matrices identified when using recording samples of two different durations: 20 and 60 s. (B) Mean percentage of correctly identified synapses as a function of the recording duration for the three- and five-neuron networks (in the former case the experimental conditions are the same as in Fig. 7). Note that when counting successes we distinguish between inhibitory, null, and excitatory synapses. Thus even 50% of successful inferences corresponds to a prediction much better than the one made by a random generator, which would provide a 33% rate of success.
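The success measure of this experiment can be reproduced by comparing the sign pattern of the identified matrix with that of the generating one; the tolerance deciding when an identified weight counts as a null synapse is an arbitrary choice of this sketch (the two matrices are only qualitatively comparable, as noted in Section 3.1).

import numpy as np

def fraction_correct_synapses(W_true, W_found, tol=1e-2):
    """Fraction of off-diagonal couplings whose class (excitatory, null, inhibitory)
    is recovered, as in the Monte Carlo test of experiment 6."""
    def classes(W):
        signs = np.sign(np.where(np.abs(W) < tol, 0.0, W))
        return signs[~np.eye(W.shape[0], dtype=bool)]    # diagonal entries are meaningless
    return float(np.mean(classes(W_true) == classes(W_found)))

# With three equiprobable classes per synapse, random assignment scores about 1/3,
# the 33% baseline quoted in Fig. 8:
rng = np.random.default_rng(0)
W_a = rng.choice([-1.0, 0.0, 1.0], size=(5, 5))
W_b = rng.choice([-1.0, 0.0, 1.0], size=(5, 5))
baseline = fraction_correct_synapses(W_a, W_b)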

4. Discussion and conclusions

In the present paper we have proposed a novel deterministic method which, for given spike trains of N neurons, allows obtaining a mathematical model describing both the architecture and the dynamical behavior of the underlying biological neural network. We have also shown that the obtained neural network model produces spike trains with statistical properties similar to those of the experimental data.

To model the network we use a set of interconnected integrate-and-fire neurons. All the parameter values necessary to univocally define the neural network, i.e. the matrix of the network connectivity, the synaptic time scales, and the intrinsic parameters of the neurons, are calculated from the recorded spike trains through an optimization procedure minimizing the difference between the predicted and the measured timings of spike episodes. Given the spike events, the identification of all the parameters is guaranteed by the decomposition of the fitting problem according to two nested independencies of the integrate-and-fire model: (i) the dynamical equation of each neuron remains independent from the others, and (ii) the dynamics of each neuron within an interspike interval is independent from the dynamics within the other intervals.

The identification algorithm relies on the spike discharge times alone, and any a priori knowledge of the parameter values, which may be provided by physiology, morphology, etc., can be included simply by constraining the corresponding parameters to the given values (or ranges of values), resulting also in a computational improvement. Furthermore, the method also allows considering bursting neurons, simply by recurring to a preprocessing of the spike trains: all that is needed is to specify the minimal time interval for which two spikes are considered to be separate events and not belonging to a burst.

In order to assess the robustness of the method we generated "experimental" data sets considering neuron models denying the most restricting hypotheses on which the method relies. Namely, we considered networks combining statistically responding (Gerstner, 1995) and non-renewal regularly spiking (Izhikevich, 2003) neuron models. The former model denies the determinism, while the latter does not satisfy the resetting property of the neuron model we use in our method. Furthermore, the first five networks we considered collect the main difficulties reported in the literature about the identification of the neural connectivity, like mutual and indirect couplings, inhibitory synapses, and excessively regular interspike intervals, providing a reliable platform for the assessment of the identification technique.

In the first five experiments, we assessed separately the connectivity pattern identification and the modeling abilities of the method. For all the considered networks, the method provided the correct connectivity pattern. Simultaneously, with reference to the modeling issues, the simulation of the identified networks provided spike trains with first-order statistics in satisfactory accordance with the experimental ones. Also, for the more complex considered networks, the mathematical analysis of the identified model correctly spotted intrinsic features of the isolated neurons which could not be inferred from the spike trains only, highlighting in this way the strength of the modeling technique. Finally (experiment 6), we checked the robustness of the method with respect to the amount of available data (length of the spike trains) and the complexity of the underlying network, applying the identification of the connectivity pattern to data generated by relatively large networks with complex dynamics. The method showed a good performance that increased with the length of the data under study.

Concerning the identified connectivity pattern, it should be mentioned that the network connectivity obtained may differ from the anatomical network from which the data are observed (Aertsen and Preissl, 1991; Perkel et al., 1967a,b). Though, both of them are effectively equivalent for certain experimental conditions, i.e. they span the same dynamical behavior, which justifies the use of the terms "effective synapse" or "effective network" we used throughout the text to indicate the equivalent neural dynamical systems that could generate the observed data. The application of the method on artificial data allows measuring its potential and robustness within a handy environment; however, it is clearly not the aim, and investigations on real data are underway.

Conversely to the descriptive nature of the statistical methods, which provide only a pattern of neural connectivity, the use of our deterministic approach offers a whole model of the neural network from which the experimental spikes are coming. Such a model includes the values of the parameters of the individual neurons (e.g. intrinsic firing frequency) and of the synapses between them. Consequently, we offer a powerful tool to approach the first of the two main steps in the study of neural circuits, i.e. inferring the dynamics of the neurons, and the effective connectivity of the network they belong to, from the recordings of the activity of a limited sample of neurons. This step is fundamental for addressing the second objective, i.e. extrapolating the functionality of the whole neural assembly from the properties of the analyzed sub-networks, the main aim of neuronal studies. Finally, it should be stressed that the obtained models can be subsequently studied independently from the experimental data; hence, information processing in neural circuits can be investigated and neural circuits can be classified depending on their architectures and dynamical properties.

Acknowledgments

This work was supported by the European Projects ROSANA (EU-IST-2001-34892) and APEREST (EU-IST-2001-34893), Grants OFES-01.0456 and FIS 01/0822, and by the Consejería de Educación de la Comunidad de Madrid (V.A.M.).

Appendix A. Network model and parameter values adjusting procedure

Given N spike trains, we have a graph whose N vertices represent the neurons, and the links between vertices stand for the effective synapses between the corresponding neurons. The dynamics of each neuron (vertex in Fig. 1) is described by Eq. (1), driven by the external force (synaptic current) defined by the link structure. A rather accurate mathematical model of the synaptic current induced by a single spike is (Getting, 1989):

Isyn(t − t′) = (Γ/(λd − λr)) (e^(−(t−t′)/λd) − e^(−(t−t′)/λr)),    (A.1)

where t′ is the time instant when a spike from the presynaptic neuron arrives, Γ accounts for the synaptic strength and polarity (weight), and λd and λr are the time constants determining the decay and the rise time scales, respectively. For fast synapses the rise scale can be approximately set to zero, while for a slow synapse (e.g. GABAA or NMDA) it is not vanishing, although being usually less than the decay scale. Besides, from the modeling point of view λd plays a major role in determining the dynamics of the model network. Hence, to simplify the identification method, we can shorten the synaptic equation to:

Isyn(t − t′) = (Γ/λd) e^(−(t−t′)/λd).    (A.2)

The total synaptic current for a neuron can be accounted for by the sum over the spikes generated by all the presynaptic neurons. The synaptic weights may be different among the synapses or be equal to zero, meaning in this case the absence of the synaptic coupling between certain neurons.

Let us consider one of the vertices-neurons, say the k-th, and, to simplify notation, henceforth we drop the index k in all variables and parameters. First, we introduce the dimensionless membrane potential and time:

v = (V − VL)/(Vth − VL),  tnew = told/θ,    (A.3)

where θ is a time scale constant that can be appropriately adjusted to account for neurons with largely different spike rates.
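The synaptic kernels (A.1) and (A.2) introduced above translate directly into code; the sketch below evaluates both, with arbitrary example values for the strength and the time constants.

import numpy as np

def syn_current_full(t, t_spike, gamma, lam_d, lam_r):
    """Synaptic current of Eq. (A.1): difference of exponentials with rise and decay scales."""
    s = t - t_spike
    return np.where(s > 0.0,
                    gamma / (lam_d - lam_r) * (np.exp(-s / lam_d) - np.exp(-s / lam_r)),
                    0.0)

def syn_current_fast(t, t_spike, gamma, lam_d):
    """Simplified kernel of Eq. (A.2) used in the identification (rise time set to zero)."""
    s = t - t_spike
    return np.where(s > 0.0, gamma / lam_d * np.exp(-s / lam_d), 0.0)

t = np.linspace(0.0, 0.05, 501)                              # 50 ms after a spike at t' = 0
i_full = syn_current_full(t, 0.0, gamma=1.0, lam_d=5e-3, lam_r=0.5e-3)
i_fast = syn_current_fast(t, 0.0, gamma=1.0, lam_d=5e-3)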


Then, the evolution of the dimensionless membrane potential, v, is written as

dv/dt = −v/τ + i0 + Σ_{n≠k} wn Σ_j (1/λn) exp[−(t − tnj)/λn];  if v = 1 then v = 0,    (A.4)

where the decay time scale, τ, the constant current, i0, the synaptic time scales {λn} and the weights {wn} are given by

τ = C/(GL θ),  i0 = θ I0/(C(Vth − VL)),  λn = λdn/θ,  wn = Γn/(C(Vth − VL)).    (A.5)

The first sum in Eq. (A.4) is taken over all the presynaptic neurons, whilst the second one is taken over the spikes generated by these neurons between two consecutive spikes of neuron k. Note that all these parameters and the membrane potential are related to neuron k, so to have a complete set of equations we need to repeat the same procedure for all the other neurons. Summarizing, for N membrane potentials we have N equations of the form (A.4), each of which has 2N independent parameters: pk = (i0k, τk, λk1, ..., λkN, wk1, ..., wkN), λkk and wkk being meaningless.

There are two nested independencies which allow the identification of all the parameter values. First, each Eq. (A.4) can be integrated independently from the others, since the incoming spike time occurrences are known. This allows fitting each neuron to the data independently from the other neurons and their corresponding parameters. The second decomposition follows from the main property of integrate-and-fire models, namely that the state variable, alias the membrane potential, is reset to a known value at every spike; this lets the dynamics of the neuron between two of its successive spikes be independent from its dynamics in the other ISIs.

Let us now denote by {Ti = t(i+1) − ti} (i = 1, 2, ..., M) the lengths of all the consecutive measured (experimental) ISIs of neuron k. Then, we classify all the spikes generated by the other neurons according to the ISI in which they fall, and we denote by {tinj} the relative (to ti) times within this interval, where the index n indicates the number of the source neuron of the spike (n ≠ k), the index i stands for the number of the ISI of the k-th neuron, and the index j for the number of the input spike of neuron n falling inside the ISI i (Fig. A.1). Integrating Eq. (A.4) over each ISI, with boundary conditions v(ti) = 0 and v(ti + T̄i) = 1, we obtain M equations of the form

(1 − e^(−T̄i/τ)) i0 τ + τ Σ_{n≠k} (wn/(τ − λn)) Σ_j [e^(−(T̄i − tinj)/τ) − e^(−(T̄i − tinj)/λn)] = 1,    (A.6)

where T̄i is the relative (to ti) time of occurrence of the (i + 1)-th spike of neuron k predicted by the model.

This procedure is repeated for all the other neurons of the network, thus obtaining the equations completely defining the connectivity matrix W and the other intrinsic parameters. We stress once more that, in general, the parameter values defining the dynamics of each neuron in the network (i.e., decay scale, constant current, and the synaptic constants) may differ amongst the neurons, thus accounting for the diversity of neurons and synapses.

Fig. A.1. Illustration of the relative time intervals between presynaptic and postsynaptic spikes; a fragment of two spike trains of the postsynaptic neuron k and the presynaptic neuron n is shown. The postsynaptic neuron in its i-th ISI, Ti, receives two spikes with relative intervals tin1 and tin2.

Assuming that in the experiment we observed a sufficient number of spikes, so that M ≫ 2N, we can consider Eq. (A.6) as a nonlinear regression problem. Indeed, Eq. (A.6) can be presented in the form:

F(T̄i, {tinj}, p) = 0,    (A.7)

where all the input events {tinj} are known precisely (up to the experimental precision). On the contrary, due to contamination by noise and inputs from unobserved neurons, the predicted relative time of spike occurrence, T̄i, may have a strong variability with respect to the measured values, Ti. Thus, we can consider Ti as a response vector in a nonlinear regression problem.

Eqs. (A.7) are transcendental with respect to T̄i, so they can be inverted only numerically; moreover, they may have more than one solution. For a given parameter set p, we solve the ambiguity by selecting the solution that corresponds to the minimal value of T̄i, which physically means that a neuron fires a spike as soon as its membrane potential crosses the threshold. We present this solution as

T̄i = G({tinj}, p).    (A.8)

Then, comparing the predicted and measured spike time occurrences, we may formulate the following least squares fitting problem (Fig. 2):

min_p J = Σ_i (ΔTi)² = Σ_i [Ti − G({tinj}, p)]²,    (A.9)

whose solution provides the set of parameters p* (including the connectivity matrix) that estimates the real parameter values.
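Under the assumptions above, the regression can be sketched numerically as follows: for a trial parameter vector, Eq. (A.6) is solved for each ISI with a scalar root finder (taking the smallest root, i.e. Eq. (A.8)), and the residuals of Eq. (A.9) are passed to a standard least-squares routine. Function names, the bracketing grid and the optimizer choice are ours, not those of the published package.

import numpy as np
from scipy.optimize import brentq, least_squares

def lhs_A6(T, i0, tau, lambdas, weights, spikes_in_isi):
    """Left-hand side of Eq. (A.6) minus 1, evaluated at a candidate interval length T."""
    total = (1.0 - np.exp(-T / tau)) * i0 * tau
    for w, lam, t_nj in zip(weights, lambdas, spikes_in_isi):
        t_nj = np.asarray(t_nj)
        t_nj = t_nj[t_nj <= T]                 # only inputs occurring before the candidate spike
        total += tau * w / (tau - lam) * np.sum(
            np.exp(-(T - t_nj) / tau) - np.exp(-(T - t_nj) / lam))
    return total - 1.0

def predicted_isi(p, spikes_in_isi, T_max=10.0, n_grid=2000):
    """Smallest root of Eq. (A.6), i.e. the model-predicted relative spike time of Eq. (A.8)."""
    i0, tau, lambdas, weights = p
    grid = np.linspace(1e-4, T_max, n_grid)
    vals = np.array([lhs_A6(T, i0, tau, lambdas, weights, spikes_in_isi) for T in grid])
    sign_change = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(sign_change) == 0:
        return T_max                           # membrane never reaches threshold within T_max
    a, b = grid[sign_change[0]], grid[sign_change[0] + 1]
    return brentq(lambda T: lhs_A6(T, i0, tau, lambdas, weights, spikes_in_isi), a, b)

def fit_neuron(measured_isis, inputs_per_isi, p0):
    """Least-squares problem of Eq. (A.9) for one neuron of the network.

    p0 packs (i0, tau, lambda_1..lambda_{N-1}, w_1..w_{N-1}) into a flat vector."""
    n_syn = (len(p0) - 2) // 2
    def residuals(x):
        p = (x[0], x[1], x[2:2 + n_syn], x[2 + n_syn:])
        return [Ti - predicted_isi(p, inp) for Ti, inp in zip(measured_isis, inputs_per_isi)]
    return least_squares(residuals, p0)

In practice a robust variant (e.g. a trimmed least squares estimator, as recommended below for physiological data) and constraints on the parameters would replace the plain least-squares call.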


Two main issues should be addressed when solving the regression problem (A.9) with real (physiological) data. First, the use of robust regression methods is preferable, e.g. the nonlinear trimmed least squares estimator (Cizek, 2001). Indeed, the measured spike trains may be strongly corrupted by noise, because of the influence of "hidden" neurons or in consequence of wrong assignments during the spike-separation procedure (which is a complex problem as well; see for details e.g. Harris et al., 2000). Second, neuron bursting must be handled to avoid the deterioration of the method performance. Indeed, when a neuron produces a burst, a lot of very short "fake" ISIs appear. This would wrongly bias the solution of Eq. (A.6) towards high values of I0, with the consequent underestimation of the connectivity weights (strong intrinsic high-frequency firing). To avoid this problem we consider a burst of neuron k as a unique event when its parameters, and the corresponding row of the connectivity matrix, are estimated. So the ISIs, {Ti}, are measured between the last spike of a burst and the next nearest spike.
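The burst preprocessing described above reduces, for each train, to discarding the short intra-burst gaps; the 20 ms separation used in this sketch is only a placeholder value.

import numpy as np

def burst_isis(spike_times, min_separation=0.02):
    """ISIs measured from the last spike of a burst to the next spike, treating spikes
    closer than min_separation (s) as belonging to one burst (one unique event)."""
    gaps = np.diff(np.sort(np.asarray(spike_times)))
    return gaps[gaps >= min_separation]        # short 'fake' intra-burst ISIs are dropped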

Appendix B. Spike generation models

B.1. Spike response (probabilistic) model

The membrane potential of the k-th neuron is defined by the sum of two terms (Gerstner, 1995):

h_k(t) = h_k^{ref}(t − t̂) + h_k^{syn}(t),   (B.1)

where h^{ref}(t − t̂) is a kernel describing the refractoriness of the neuron, t̂ is the last firing time of the neuron, and h_k^{syn}(t) describes the effect of the synaptic inputs from other neurons. The refractoriness is modeled as

h^{ref}(s) = 0 for s ≤ 0;  −∞ for 0 < s ≤ s^{ref};  η_0/(s^{ref} − s) for s > s^{ref},   (B.2)

where s^{ref} is the absolute refractory period and η_0 accounts for the recovery scale. The change in the membrane potential at the soma due to a single incoming spike (the postsynaptic potential) is described by the kernel (α-function):

ε(s) = (s/τ_ε^2) exp(−s/τ_ε),   (B.3)

where τ_ε is the synaptic decay constant. In order to calculate h_k^{syn}(t), the contribution from a single spike is summed over all spikes of all presynaptic neurons:

h_k^{syn}(t) = ∑_{n=1, n≠k}^{N} w_{kn} ∫_0^∞ ds ε(s) ∑_j δ(t − s − t_{nj}),   (B.4)

where {w_kn} is the connectivity matrix and {t_nj} are the firing times of neuron n.

Eqs. (B.1)–(B.4), together with the threshold θ, define a noise-free model. In the presence of noise, the neuron may emit a spike even if the membrane potential has not reached the threshold or, on the other hand, it may not fire if the membrane potential crosses the threshold only for a short time interval. A simple way to deal with noise is to introduce the probability of firing:

P = (1/2)(1 + tanh[β(h(t) − θ)]),   (B.5)

where the parameter β determines the amount of internal noise. The probability of firing is equal to 1/2 for h(t) = θ, and it goes asymptotically to zero for h(t) → −∞ and to one for h(t) → +∞.
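For illustration, a minimal discrete-time simulation of this spike response model (Python); the time step, the per-step Bernoulli draw, and all parameter values are our own assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def h_ref(s, s_ref=0.003, eta0=0.02):
    """Refractory kernel, Eq. (B.2): absolute refractoriness followed by a negative tail."""
    if s <= 0.0:
        return 0.0
    if s <= s_ref:
        return -np.inf
    return eta0 / (s_ref - s)              # negative (hyperpolarizing) for s > s_ref

def eps(s, tau_eps=0.01):
    """Alpha-function postsynaptic kernel, Eq. (B.3); zero for s <= 0."""
    s = np.atleast_1d(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    pos = s > 0.0
    out[pos] = (s[pos] / tau_eps**2) * np.exp(-s[pos] / tau_eps)
    return out

def simulate(input_spikes, w, T=1.0, dt=0.001, theta=1.0, beta=20.0):
    """Simulate one neuron: input_spikes[n] holds the spike times of presynaptic
    neuron n, w[n] the corresponding weight; returns the emitted spike times."""
    last_spike, out = -np.inf, []
    for t in np.arange(0.0, T, dt):
        h_syn = sum(wn * eps(t - np.asarray(tn)).sum()   # Eq. (B.4) as a sum over past spikes
                    for wn, tn in zip(w, input_spikes))
        h = h_ref(t - last_spike) + h_syn                # Eq. (B.1)
        p = 0.5 * (1.0 + np.tanh(beta * (h - theta)))    # firing probability, Eq. (B.5)
        if rng.random() < p:                             # one Bernoulli draw per time step
            out.append(t)
            last_spike = t
    return np.array(out)

# Example: two inputs with weights 0.05 and 0.08
# spikes = simulate([np.array([0.10, 0.25]), np.array([0.40])], w=[0.05, 0.08])
```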

B.2. Regularly spiking neuron

The RS neuron model proposed by Izhikevich (2003) is

v′ = 0.04v^2 + 5v + 140 − u + I_dc + I_s + ς(t),
u′ = a(bv − u),   (B.6)

where time is scaled to ms, v is the membrane potential in mV, u represents a membrane recovery variable accounting for the activation of K+ and the inactivation of Na+ ionic currents, while I_s and I_dc account for synaptic and dc currents, respectively. Following Izhikevich (2003), when v reaches the spike apex (30 mV), v is reset to c and u is incremented by d. To account for variability in the spike trains we introduced a stochastic process, ς(t), into the deterministic model. In our simulations we set the parameters to their typical values: a = 0.02, b = 0.27, c = −65, and d = 2, ensuring high-frequency (tonic) spiking with spike frequency adaptation.
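A short Euler-integration sketch of this RS neuron (Python); the integration step, the dc current, and the noise amplitude are illustrative choices, while the 30 mV cutoff and the after-spike reset v ← c, u ← u + d follow Izhikevich (2003).

```python
import numpy as np

def simulate_rs(T=1000.0, dt=0.5, a=0.02, b=0.27, c=-65.0, d=2.0,
                i_dc=5.0, noise_std=2.0, seed=0):
    """Euler integration of Eq. (B.6); time in ms, returns spike times in ms."""
    rng = np.random.default_rng(seed)
    v, u = c, b * c                       # start near rest
    spikes = []
    for k in range(int(T / dt)):
        i_s = 0.0                         # synaptic current (absent in this toy run)
        noise = noise_std * rng.standard_normal()   # discrete-time stand-in for ς(t)
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_dc + i_s + noise)
        u += dt * a * (b * v - u)
        if v >= 30.0:                     # spike apex (Izhikevich, 2003)
            spikes.append(k * dt)
            v, u = c, u + d               # after-spike reset
    return np.array(spikes)

# spike_times = simulate_rs()             # expected: tonic spiking with adaptation
```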



References

Aertsen A, Preissl H. Dynamics of activity and connectivity in physiological neuronal networks. In: Schuster HG, editor. Nonlinear dynamics and neuronal networks. New York: VCH; 1991. p. 281–302.
Brillinger DR, Bryant HL, Segundo JP. Identification of synaptic interactions. Biol Cybern 1976;22:213–28.
Cizek P. Robust estimation in nonlinear regression models. Sonderforschungsbereich 2001;373:25. Humboldt Universitaet Berlin.
Dahlhaus R, Eichler M, Sandkuhler J. Identification of synaptic connections in neural ensembles by graphical models. J Neurosci Meth 1997;77:93–107.
Eichler M, Dahlhaus R, Sandkuhler J. Partial correlation analysis for the identification of synaptic connections. Biol Cybern 2003;89:289–302.
Ermentrout GB. Asymptotic behavior of stationary homogeneous neuronal nets. Lecture notes in biomath. Berlin, New York: Springer; 1984.
Ermentrout GB. Type I membranes, phase resetting curves, and synchrony. Neural Comput 1996;8:979–1001.
FitzHugh R. Impulses and physiological states in theoretical models of nerve membrane. Biophys J 1961;1:445–66.
Gerstner W. Time structure of the activity in neural network models. Phys Rev E 1995;51:738–48.
Getting PA. Reconstruction of small neural networks. In: Koch C, Segev I, editors. Methods in neuronal modeling: from synapses to networks. A Bradford Book. Cambridge: MIT Press; 1989. p. 171–94.
Granger CWJ. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969;37:424–38.


Harris KD, Henze DA, Csicsvari J, Hirase H, Buzsaki G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J Neurophysiol 2000;84:401–14.
Hindmarsh JL, Rose RM. A model of neuronal bursting using three coupled first order differential equations. Proc R Soc London B 1984;221:87–102.
Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 1952;117:500–44.
Horwitz B. The elusive concept of brain connectivity. NeuroImage 2003;19:466–70.
Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw 2003;14:1569–72.
Izhikevich EM. Which model to use for cortical spiking neurons? IEEE Trans Neural Netw 2004;15:1063–70.
Jolivet R, Lewis TJ, Gerstner W. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J Neurophysiol 2004;92:959–76.
Kaminski M, Ding M, Truccolo W, Bressler S. Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance. Biol Cybern 2001;85:145–57.
Kandel ER, Schwartz JH, Jessell TM. Principles of neural science. 4th ed. New York: McGraw-Hill; 2000.
Korzeniewska A, Manczak M, Kaminski M, Blinowska K, Kasicki S. Determination of information flow direction among brain structures by a modified directed transfer function (dDTF) method. J Neurosci Meth 2003;125:195–207.
Makarov VA. Identification of Neural Connectivity and Modeling (INCAM) package. Available at: http://aperest.epfl.ch/docs/software.htm; 2004.
Moore GP, Perkel DH, Segundo JP. Statistical analysis and functional interpretation of neural spike data. Annu Rev Physiol 1966;28:493–522.
Motulsky HJ, Christopoulos A. Fitting models to biological data using linear and nonlinear regression. A practical guide to curve fitting. San Diego: GraphPad Software Inc; 2003.
Paninski L, Lau B, Reyes A. Noise-driven adaptation: in vitro and mathematical analysis. Neurocomputing 2003;52–54:877–83.
Paninski L. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Comput Neural Syst 2004;15:243–62.
Pavlov AN, Sosnovtseva OV, Mosekilde E, Anishchenko VS. Chaotic dynamics from interspike intervals. Phys Rev E 2001;63(5):036205.
Perkel DH, Gerstein GL, Moore GP. Neuronal spike trains and stochastic point processes. I. The single spike train. Biophys J 1967a;7:391–418.
Perkel DH, Gerstein GL, Moore GP. Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys J 1967b;7:419–40.
Pillow JW, Simoncelli EP. Biases in white noise analysis due to non-Poisson spike generation. Neurocomputing 2003;52–54:109–15.
Plant RE. Bifurcation and resonance in a model for bursting nerve cells. J Math Biol 1981;11:15–32.
Racicot DM, Longtin A. Interspike interval attractors from chaotically driven neuron models. Physica D 1997;104:184–204.
Rosenberg JR, Amjad AM, Breeze P, Brillinger DR, Halliday DM. The Fourier approach to the identification of functional coupling between neuronal spike trains. Prog Biophys Mol Biol 1989;53:1–31.
Sameshima K, Baccala LA. Using partial directed coherence to describe neuronal ensemble interactions. J Neurosci Meth 1999;94:93–103.
Sauer T. Reconstruction of dynamical systems from interspike intervals. Phys Rev Lett 1994;72:3811–4.
Segundo JP. Nonlinear dynamics of point process systems and data. Int J Bifurcat Chaos 2003;13(8):2035–116.
Stein RB. Some models of neuronal variability. Biophys J 1967;7:37–68.
van Vreeswijk CA, Abbott LF, Ermentrout GB. Inhibition, not excitation, synchronizes coupled neurons. J Comput Neurosci 1995;1:303–13.