
Self awareness and speech processing: An fMRI study

Renaud Jardri,a,b,⁎ Delphine Pins,a Maxime Bubrovszky,a,c Pascal Despretz,a

Jean-Pierre Pruvo,d Marc Steinling,e and Pierre Thomasa,c

a UMR-CNRS 8160, Laboratoire Neurosciences Fonctionnelles et Pathologies, Université Lille II, Centre Hospitalier Universitaire de Lille, France
b Service de Psychiatrie de l'Enfant, Hôpital Michel Fontan, Centre Hospitalier Universitaire de Lille, France
c Service de Psychiatrie Adulte, Hôpital Michel Fontan, Centre Hospitalier Universitaire de Lille, France
d Service de Neuroradiologie, Hôpital Roger Salengro, Centre Hospitalier Universitaire de Lille, France
e Service de Médecine Nucléaire et d'Imagerie Fonctionnelle, Hôpital Roger Salengro, Centre Hospitalier Universitaire de Lille, France

Received 6 September 2006; revised 28 December 2006; accepted 5 February 2007. Available online 13 February 2007.

Language production and perception imply motor system recruitment. Therefore, language should obey the theory of shared motor representation between self and other, by means of mirror-like systems. These mirror-like systems (referring to single-unit recordings in animals) show the property of being recruited both when accomplishing and when perceiving a goal-directed action, whatever the sensory modality may be. This hypothesis supposes that a neural network for self-awareness is involved to distinguish speech production from speech listening. We used fMRI to test this assumption in 12 healthy subjects, who performed two different block-design experiments. The first experiment showed involvement of a lateral mirror-like network in speech listening, including the ventral premotor cortex, the superior temporal sulcus and the inferior parietal lobule (IPL). The activity of this mirror-like network is associated with the perception of intelligible speech. The second experiment looked at a self-awareness network. It showed involvement of a medial resting-state network, including the medial parietal and medial prefrontal cortices, during the 'self-generated voice' condition, as opposed to passive speech listening. Our results support the fact that deactivation of this medial network, in association with modulation of the activity of the IPL (part of the mirror-like network previously described), is linked to self-awareness in speech processing. Overall, these results support the idea that self-awareness is present when distinguishing between speech production and speech listening situations, and may depend on these two different parieto-frontal networks.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Speech; Self; Physiological baseline; Parietal cortex; Medial prefrontal cortex; fMRI

⁎ Corresponding author. Laboratoire de Neurosciences Fonctionnelles et Pathologies, UMR-CNRS 8160, Explorations Fonctionnelles de la Vision, Hôpital Roger Salengro, Centre Hospitalier Universitaire, 59037 Lille cedex, France. Fax: +33 320 446 732.

E-mail address: [email protected] (R. Jardri).
Available online on ScienceDirect (www.sciencedirect.com).

1053-8119/$ - see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.neuroimage.2007.02.002

Introduction

The concept of mirror neurons was introduced following the observation of populations of neurons in the rostral part of the inferior area 6 of the macaque monkey (area F5), whose electrophysiological activity increased both when the monkey manipulated objects and when the experimenter produced the same action in its visual field (Gallese et al., 1996). In fact, these mirror neurons are sensitive to goal-directed actions whatever the sensory modality may be (Kohler et al., 2002). In humans, no such neurons have been recorded directly, but neuroimaging studies indicate that some cortical networks show mirror-like properties. In particular, PET showed haemodynamic changes in the inferior parietal cortex (BA 40) and the superior temporal sulcus (BA 21) when grasping movements were made (Rizzolatti et al., 1996). This network also involves participation of the lower premotor frontal area, especially the pars opercularis (BA 44) (Molnar-Szakacs et al., 2005). This mirror network produces motor representations which are shared between an agent producing the action and an observer (Decety and Sommerville, 2003). These studies have allowed motor cognition to go beyond simply preparing and producing an action. The motor system therefore participates in cognitive functions which were until now considered as higher brain functions (Jackson and Decety, 2004), such as the interpretation and anticipation of actions being observed. This common code between perception and action would have been present very early on during development, as the early phenomena of reciprocal imitation bear witness, which are early signs of social cognition (Meltzoff and Decety, 2003). It is by a phenomenon of mental simulation by someone observing another person that certain authors explain the process of deducing the other person's intentions (Blakemore and Decety, 2001). Such a process was shown to involve the frontal mirror-like areas (Iacoboni et al., 2005). The phylogenetic evolution of this global mirror system for observing and performing an action may be linked to the development of structures involved in the appearance of language in hominids (Rizzolatti and Arbib, 1998). Some authors consider premotor area F5 of the macaque functionally analogous to Broca's area in man (inferior frontal gyrus, BA 44/45) (Arbib, 2005), even if the anatomical similarity between these regions has yet to be demonstrated (Petrides et al., 2005).

The concept that understanding language requires recruitment of the motor areas involved in language production had already been suggested even before mirror neurons were discovered (Liberman and Mattingly, 1985). Although it needs to be revised, this motor theory of speech perception is supported by several findings in the literature (for a review, see Galantucci et al., 2006). Its main claim is that 'the motor system is recruited for perceiving speech'. This theory fits in well with our hypothesis that brain mechanisms are shared by a person speaking and a person listening to him. The involvement of a mirror-like network in language is backed up by the finding of motor facilitation of the oropharyngeal muscles, using a transcranial magnetic stimulation procedure, when phrases were listened to passively (Fadiga et al., 2002), or by recruitment of premotor and motor areas which overlapped with the areas involved in speech production, as observed with fMRI during a speech listening task (Wilson et al., 2004). Without proving the motor theory of speech perception directly, these studies are compatible with the idea that in language, as in other motor functions, representations exist which are shared between the different protagonists in an exchange of speech (Rizzolatti and Arbib, 1998). However, the difficulty with this phenomenological approach to understanding language is the following: when an individual is speaking, how does he know that he himself is in the process of producing language and that he is not just listening to another person speaking?

The notion of self-awareness has been examined in many philosophical, psychoanalytical and cognitive science publications, but often remains somewhat complex. We would like to underline that certain authors approach this difficult question by testing agency, a more basic concept of the self in action, that is, the feeling of being at the origin of an action which is produced (Eilan and Roessler, 2003). Involvement of the premotor and primary motor areas during mental imagery of movement contradicts the idea that self-awareness is based only on the activity of those brain areas (Rodriguez et al., 2004). This self-agency may rely more on a distributed dynamic parieto-frontal network (Gallagher and Frith, 2003). Several studies have found that the right inferior parietal lobule (IPL) participates in different tasks involving agency: mental simulation of action in a first- or third-person perspective (Ruby and Decety, 2001), reciprocal imitation and looking for the source (Chaminade and Decety, 2002), or the subjective feeling of owning the action produced (Farrer et al., 2003). In all these studies, self-attribution of action depends on a decrease in the activity of this neural network. The second region involved in agency awareness is the medial prefrontal cortex, which plays a critical role in the adjustment of cognitive control and sensorial modulation (Ridderinkhof et al., 2004). Incidentally, this medial region may be involved in modulating the level of activity in the IPL when self-agency is taking place. Self-awareness is not, however, confined to agency. Nor does this concept of the self in action contradict the existence of self-related thought, which is independent of external stimuli (Gusnard, 2005). At this point, it is interesting to note that the parietal and frontal regions have a high basal metabolism compared with other cortical regions (Gusnard et al., 2001), and it is also possible for these 'default-mode brain areas' to be involved in the feeling of being a self at rest. Furthermore, interactions between the human mirror system and the resting-state network have recently been discussed in an fMRI study of self-face recognition (Uddin et al., 2005). With respect to what has already been described for visual–motor tasks differentiating between self and other, our hypothesis is that self-awareness may be involved in language, in parallel with the classical functions of listening to and producing speech, by means of the recruitment of distributed parieto-frontal networks. If the understanding processes involve a lateral mirror-like cortical network, finding the internal or external origin of the signal being processed may involve a 'resting-state' network on the medial cortical surface, working in parallel. The interaction between these two networks may use the predictive coding properties of the brain (Friston, 2005). The other, seen as an individual capable of speech, would be perceived as identical to the self (the basis for decoding and understanding language) and at the same time as distinct from the self (self-awareness, in a "first person" perspective). The objective of our study was first to identify mirror-network involvement in language. The aim was then to identify a neural network of self-awareness using fMRI examination during language tasks, and to differentiate this network from brain areas processing the physical properties of the sound signal and familiarity with the person speaking. The motor theory of speech perception allows us to use the same experimental procedure for two types of sensorimotor verbal task: active speech production and passive listening to a voice (Jackson and Decety, 2004).

Materials and methods

Subjects

Twelve healthy individuals, right-handed according to the Edinburgh inventory (Oldfield, 1971), participated in this study (sex ratio = 1; 25–29 years old). They did not have any psychiatric, neurological or ear, nose and throat disease, and their morphological brain MRI scan was normal. They all gave their written consent. This research was approved by the local ethics committee.

Stimuli

Five conditions with different stimuli were used: Self-Generated Voice <SGV>, Familiar Voice <FV> (1 male, 1 female), Unfamiliar Voice <UFV> (2 males, 2 females), Reverse-Taped Voice <RTV>, and Graduated Tones <GT>. During the sound recording, the speakers read Paul Eluard's poem "La ville de Paris renversée" (the city of Paris turned upside-down). These were "connected speech stimuli" including both semantic and prosodic information, and not just words or phonemes (Hesling et al., 2005). A digital recorder (Tascam TEAC DA-P1) allowed the voices to be recorded with a sampling rate of 44.1 kHz and a resolution of 16 bits. The <FV> was the voice of a first-degree relative of each subject. We used these recordings as <UFV> for the other subjects. One of the unfamiliar voices was also altered and played backwards <RTV>, which destructured the language, making it incomprehensible while keeping the same frequency and prosodic properties. This transformation makes it possible to use a stimulus which cannot be reproduced by the subject, either using an object, such as a musical instrument, or using the voice, since it is biomechanically impossible to join together the breathed or blown syllables of this artificially reversed voice. The graduated tone <GT>, used as a control, consisted of 3 fundamental frequencies (500, 700 and 900 Hz), which were modulated by ±5 Hz at a temporal frequency of 10 Hz. The stimuli lasted for a total of 21 s. The amplitude of the sound files was normalized to 96%, in order to obtain a comparable amplitude for the different recordings.
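As an illustration only (not part of the original protocol), the following Python sketch shows the two signal manipulations described above: reversing a recorded voice to obtain an <RTV>-like stimulus, and normalizing the peak amplitude to 96% of full scale. The file names and the use of the numpy and soundfile libraries are assumptions made for the example.

```python
import numpy as np
import soundfile as sf  # assumed library for reading/writing WAV files


def normalize_peak(signal: np.ndarray, target: float = 0.96) -> np.ndarray:
    """Scale the waveform so that its peak amplitude reaches `target` (96% of full scale)."""
    peak = np.max(np.abs(signal))
    return signal if peak == 0 else signal * (target / peak)


# Hypothetical input file: one 21-s connected-speech recording (44.1 kHz, 16 bit).
voice, sr = sf.read("unfamiliar_voice.wav")

# Reverse-taped voice: play the recording backwards; spectral and prosodic content
# is preserved, but the speech becomes biomechanically impossible to reproduce.
rtv = voice[::-1]

# Normalize both stimuli to a comparable 96% peak amplitude.
sf.write("ufv_normalized.wav", normalize_peak(voice), sr)
sf.write("rtv_normalized.wav", normalize_peak(rtv), sr)
```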

Table 1
Results of fMRI multi-subject general linear model contrast analysis during speech listening conditions (n=12)

Cortical areas                          BA    R/L    t-value    p (corrected)    Talairach coordinates
Passive speech listening [UFV–GT]
  Pre-central gyrus                     4     R>L    4.3        0.001            (−)48/−17/41
  Superior frontal gyrus                6     R>L    4.2        0.001            (−)5/−1/48
  Superior temporal sulcus (middle)     21    R>L    5.5        0.001            (−)53/−24/−1
  Superior temporal gyrus (posterior)   22    R>L    4.9        0.001            (−)54/−11/−1
  Inferior parietal lobule              40    R      4.5        0.001            42/−35/45
  Heschl's gyrus                        41    R+L    5.4        0.001            (−)53/−25/5
  Inferior frontal gyrus                44    L      5.3        0.001            −48/17/9
Voice familiarity processing [FV–UFV]
  Superior temporal sulcus (anterior)   21    R      5.0        0.018            48/5/−17

BA: Brodmann's areas; FV: familiar voice; GT: graduated tones; R/L: right–left side of the brain; UFV: unfamiliar voice.


Experimental procedure

The subjects had to perform two successive experiments. These experiments were performed inside the scanner. The subjects lay down with their eyes closed, wearing MR-compatible headphones, which transmitted the sound stimuli and attenuated the ambient noise of the scanner (about 30 dB SPL less) (MRI Devices Corporation, USA). Each experiment included a block-design paradigm consisting of an initial silence lasting 1 min, to allow the subject to get used to the noise of the scanner, followed by an alternation of stimuli lasting 21 s and rest lasting 12 s. For each experiment, the alternating stimulus/rest cycle was repeated 8 times. The different stimuli used in each experiment were presented to the different subjects in a random order.
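For concreteness, here is a minimal Python sketch (an illustration, not the authors' code) that lays out the block onsets implied by this design: 60 s of initial silence, then 8 cycles of a 21-s stimulus followed by 12 s of rest. The condition labels and the per-cycle randomization scheme are placeholders.

```python
import random

STIM_DUR, REST_DUR, INITIAL_SILENCE, N_CYCLES = 21.0, 12.0, 60.0, 8


def block_design(conditions, seed=0):
    """Return a list of (onset_in_seconds, duration, condition) blocks for one run."""
    rng = random.Random(seed)
    order = [rng.choice(conditions) for _ in range(N_CYCLES)]  # assumed randomization scheme
    blocks, t = [], INITIAL_SILENCE
    for cond in order:
        blocks.append((t, STIM_DUR, cond))
        t += STIM_DUR + REST_DUR  # a rest period follows every stimulus block
    return blocks


# Experiment 1 used <GT>, <UFV> and <FV>; experiment 2 used <UFV>, <SGV> and <RTV>.
for onset, dur, cond in block_design(["GT", "UFV", "FV"]):
    print(f"{onset:6.1f} s  {dur:4.1f} s  {cond}")
```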

The aim of experiment 1 was to show which network was activated during passive listening to a voice (Schmithorst, 2005), and which regions were particularly involved in the processing of familiarity with the heard voice. The 3 conditions used in experiment 1 were <GT>, <UFV> and <FV>. The instructions given to the subjects for the first experiment were to listen, with their eyes closed, to the stimuli presented to them through the headphones. They were advised that some of the voices they would hear would be familiar to them.

The aim of experiment 2 was to identify the neuronal substrate of self-awareness in language. The conditions used were <UFV>, <SGV> and <RTV>. The <UFV> used in this second experiment were different from those used in the first one for each subject, to avoid any learning effect. During this second experiment, the subject was asked to repeat the poem by whispering for the <SGV> condition, while listening to his voice through the headphones at the same time. In this condition, the subject was both the primary (production) and secondary (source monitoring) agent. For the other conditions, he only had to listen to the voices passively. To help him distinguish between the conditions of voice production and passive listening, a sound signal (500 ms) of a different frequency preceded the different stimuli. Whispering in the voice production condition allowed artefacts caused by face and head movements to be limited.

Data acquisition

Imaging was performed using a 1.5 Tesla MRI scanner (Intera Achieva, Philips, The Netherlands) with an 8-element SENSE head coil. The T1-weighted anatomical sequence was a 3D multi-shot TFE with the following properties: 140 slices, thickness = 1.6 mm, FOV = 240 mm², matrix = 256×256, TR = 8.2 ms, TE = 4 ms, flip angle = 8°, TFE factor = 192. The T2*-weighted functional sequence was a single-shot sensitivity-encoded echo planar imaging sequence (Preibisch et al., 2003). Two different runs of 280 volumes each were obtained with a 30-slice Fast Field Echo, thickness = 4 mm, FOV = 240 mm², matrix = 64×64, TR = 3000 ms, TE = 70 ms, flip angle = 90°, SENSE factor g = 1.4.
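As a quick consistency check (not stated in the paper), the functional run length implied by these parameters can be computed directly:

```python
TR_S = 3.0        # repetition time in seconds (TR = 3000 ms)
N_VOLUMES = 280   # volumes acquired per functional run

run_duration_s = TR_S * N_VOLUMES
print(f"One functional run covers {run_duration_s:.0f} s, i.e. {run_duration_s / 60:.0f} min.")
# -> One functional run covers 840 s, i.e. 14 min.
```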

Data analysis

The functional data were pre-processed and analysed using the BrainVoyager QX v1.7.9 software (Brain Innovation, The Netherlands, 2005) on a Windows XP computer. Images were pre-processed using slice scan time correction, 3D head motion correction, temporal high-pass filtering with 3 cycles/point, linear trend removal and 3D spatial smoothing with a Gaussian filter of 4.00 mm. These slice-based functional data were aligned on the high-quality 3D anatomical image, reliably normalized in the stereotactic Talairach space (Talairach and Tournoux, 1988). For all of the subjects, the cortex was segmented at the grey/white matter boundary. Finally, the cortical surface was reconstructed and inflated. Cortex-based alignment using curvature information was performed, to improve anatomical inter-subject correspondence mapping beyond the Talairach transformation (Fischl et al., 1999). Group-level random-effects analysis was performed according to the general linear model [GLM] (Goebel et al., 2006). Statistical maps were thresholded using a false discovery rate approach (Genovese et al., 2002). Finally, for some analyses, multi-subject GLM surface maps were superimposed on a flattened representation of the cortical sheet of an average template brain normalized in Talairach space.
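To make the thresholding step concrete, here is a small Python sketch of the Benjamini–Hochberg false discovery rate procedure of the kind referred to by Genovese et al. (2002); it is an illustration only and does not reproduce the BrainVoyager implementation. The q level of 0.05 and the placeholder p-value map are assumptions.

```python
import numpy as np


def fdr_threshold(p_values: np.ndarray, q: float = 0.05) -> float:
    """Return the largest p-value threshold satisfying the Benjamini-Hochberg FDR criterion.

    Voxels whose p-value is at or below the returned threshold are declared active
    while keeping the expected proportion of false positives below q.
    """
    p_sorted = np.sort(p_values.ravel())
    m = p_sorted.size
    # Benjamini-Hochberg step-up: largest k such that p_(k) <= (k/m) * q.
    below = p_sorted <= (np.arange(1, m + 1) / m) * q
    if not below.any():
        return 0.0  # nothing survives the threshold
    return float(p_sorted[np.nonzero(below)[0].max()])


# Usage on a hypothetical map of voxel-wise p-values:
rng = np.random.default_rng(0)
p_map = rng.uniform(size=10000)  # placeholder p-values
thr = fdr_threshold(p_map, q=0.05)
print(f"FDR threshold: p <= {thr:.5f}")
```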

Results

The t-test peaks and corresponding corrected p-values are reported in Tables 1, 2 and 3.

Passive listening experiment

Activation network for passive speech listening

Passive speech listening, tested by the [<UFV>–<GT>] contrast analysis, showed bilateral activations, predominantly on the right side, in Heschl's gyrus, in the middle part of the superior temporal sulcus (STS), in the posterior part of the superior temporal gyrus (Wernicke's area), and in the middle part of the precentral gyrus, corresponding to the primary motor cortex and the ventral premotor cortex (PMv), including the pars triangularis and the pars opercularis of the inferior frontal gyrus (Broca's convolution on the left side). The right inferior parietal lobule (IPL), at the level of the supramarginal gyrus, was also more activated by listening to the voice (cf. Table 1, part 1). No activation in the motor, premotor, parietal or STS areas was found with the reverse [<GT>–<UFV>] contrast analysis.

Table 2
Results of fMRI multi-subject general linear model contrast analysis during a language task to identify the cortical network for intelligible speech [<UFV>–<RTV>] (n=12)

Cortical areas                            BA      R/L    t-value    p (corrected)    Talairach coordinates
  Pre-central gyrus                       4       R+L    4.6        0.001            (−)47/−17/41
  Medial frontal gyrus (SMA)              6       R+L    4.2        0.001            (−)3/−3/55
  Insula                                  13      L      6.6        0.001            −44/14/4
  Middle temporal gyrus                   21      R+L    3.9        0.002            (−)47/−31/−3
  Anterior cingulate gyrus                32      L      4.2        0.002            −1/26/32
  Inferior parietal lobule                40      R      6.2        0.001            42/−35/44
  Inferior frontal gyrus (Broca's area)   44,45   L      6.5        0.001            −46/19/16

BA: Brodmann's areas; R/L: right–left side of the brain; RTV: reverse-taped voice; SMA: supplementary motor area; UFV: unfamiliar voice.

Table 3
Results of the fMRI multi-subject general linear model contrast analysis during self–other distinction in a double speech task [UFV–SGV] (n=12)

Cortical areas                    BA    R/L    t-value    p (corrected)    Talairach coordinates
  Post-central gyrus              3     R      −8.6       0.002            (−)58/−15/28
  Pre-central gyrus               4     L      −13.0      0.001            −57/−6/23
  Medial frontal gyrus            6     R      −12.3      0.001            4/−8/55
  Medial frontal gyrus            8     R+L    −8.3       0.002            (−)2/46/42
  Ventro-medial frontal gyrus     9     R+L    6.6        0.003            (−)4/48/25
  Dorso-medial frontal gyrus      10    L      7.8        0.002            −32/36/8
  Insula                          13    L      −9.2       0.001            −32/23/1
  Posterior cingulate gyrus       23    R+L    7.6        0.003            (−)1/−55/19
  Anterior cingulate gyrus        24    R+L    7.6        0.003            (−)3/33/10
  Para-hippocampal gyrus          28    R+L    7.3        0.003            23/−13/−16
  Posterior cingulate gyrus       31    R+L    8.2        0.002            0/−54/25
  Anterior cingulate gyrus        32    R+L    7.6        0.003            (−)1/28/33
  Temporal gyrus (middle)         39    R+L    5.7        0.004            (−)45/−61/20
  Inferior parietal lobule        40    R>L    7.6        0.003            43/−33/44
  Inferior frontal gyrus          44    L      −7.6       0.003            −57/8/11
  Dentate nucleus (cerebellum)    NA    R+L    −12.3      0.001            (−)11/−55/−29
  Caudate nucleus                 NA    L>R    −8.5       0.002            (−)17/−4/23
  Thalamus                        NA    R+L    −7.1       0.003            (−)10/−18/9

BA: Brodmann's areas; NA: not available; R/L: right–left side of the brain; SGV: self-generated voice condition in which the subject has to sub-vocalize over his own voice; UFV: unfamiliar voice.


Activation network to process familiarity with the person speaking

Familiarity of the speaker's voice, tested by the [<FV>–<UFV>] contrast analysis, showed activation in the anterior part of the right STS (cf. Table 1, part 2).

Experiment for self-awareness in language

Activation network for intelligible language

Intelligible language, tested by the [<UFV>–<RTV>] contrast analysis (cf. Table 2 and Fig. 1), showed bilateral activations in the PMv, particularly in Broca's area and the left insula, in the middle part of the precentral gyrus and in the supplementary motor area (SMA). The medial prefrontal cortex (mPFC), at the level of the anterior cingulate area, and the right IPL, at the level of the supramarginal gyrus, were also activated in this contrast analysis. Finally, no activity was found in the mirror-like areas of the brain with the reverse [<RTV>–<UFV>] contrast analysis.

Activation network to distinguish between other/self in language

Other/self differentiation was tested by the [<UFV>–<SGV>] contrast analysis (cf. Table 3 and Fig. 2). This analysis showed three different haemodynamic patterns. First of all, areas more activated in language production <SGV> than in the listening condition <UFV> were identified in cortical and sub-cortical regions: in the sub-cortex, the basal ganglia (especially the caudate nuclei and thalami) and the dentate nucleus of the cerebellum; in the cortex, the medial frontal, pre- and post-central gyri (BA 3, 4, 6, 44), the supplementary motor area (BA 6), the cingulate cortex (BA 32), and the posterior parietal gyri at the level of the angular gyrus (BA 39–40). Some of these cerebral areas were only activated in the speech production condition (subcortical areas), whereas others were activated in both conditions, compared with the resting periods, but with a larger signal amplitude for speech production (premotor and motor areas). Secondly, we grouped the areas more activated in the listening condition than in speech production into two different clusters (cf. Fig. 2). The first cluster fits with areas activated during <UFV> and simply less activated, or staying at the baseline level, during <SGV>: the middle temporal gyri (BA 39), the parahippocampal gyri (BA 28) and the right IPL at the level of the supramarginal gyrus (BA 40), respectively. The second cluster is for areas that present no reactivity in the listening condition but are deactivated during speech production, with respect to the resting condition: the ventro-medial prefrontal cortex (BA 9, 23, 24, 31). The details of activation and deactivation in the right parietal cortex are shown in Fig. 3, on an unfolded average cortex.

Fig. 1. Two factorial GLM group-level random-effect analyses (n=12, Talairach's space). Contrast analysis was performed between the "speech listening" condition and the "reverse-taped voice listening" condition. T-maps were projected onto the average of normalized individuals' brains (first three columns). Activated clusters are also shown in a glass-brain view (fourth column). The fifth column shows the haemodynamic responses of the fMRI signal in Broca's area. The average time course is expressed as a percentage of the BOLD signal per scan. The green line represents the "speech listening" condition, the red one represents the "reverse-taped voice listening" condition. Each scan lasted 3 s in this block-design paradigm.

Fig. 2. An fMRI-based contrast analysis between the "speech listening" condition (UFV = green) and the "self voice-production" condition (SGV = red). "Random effect" group analysis according to the GLM was performed (n=12). Surface maps were superimposed on an average of normalized individuals' inflated brain representations, normalized in Talairach's space. Cortex-based inter-subject alignment was performed. Activation was measured in the right inferior parietal lobule during the UFV condition, but no significant signal variation was observed in the SGV condition in this region (a). Regions which are more active in speech production (SGV), with respect to the baseline, are located in the premotor, motor, thalamic and cerebellar areas (b, c). A decrease of the BOLD signal under the baseline level was observed bilaterally in the anterior and posterior cingulate cortex and the dorso-medial prefrontal cortex during the SGV condition (d).

Fig. 3. Detail of the activation patterns in the right parietal region for the contrast analysis between the "speech listening" condition and the "self voice-production" condition, visualized on a flattened cortical sheet of the right hemisphere. IPS: intra-parietal sulcus; PCS: post-central sulcus; STS: superior temporal sulcus.

Discussion

This study made it possible to show two social properties of language. Firstly, listening to a voice recruits mirror systems, as long as the language can be reproduced by the subject. Secondly, there is a self-awareness network in language which allows the subject who is speaking to feel that he is the agent producing the speech (self in action), but which also allows a resting subject to focus part of his cerebral activity on his internal world (Kinsbourne, 2005). This resting-state parieto-frontal network, identified by the [<UFV>–<SGV>] contrast analysis, is distinct from the networks usually recruited when a voice is processed by the cortex [<UFV>–<GT>], or when a speaker's voice is familiar [<FV>–<UFV>].


This study first of all confirms that a mirror-like neuron system located on the lateral cortical surface (BA 44, 40 and 21) is recruited when passively listening to a voice, as the [<UFV>–<RTV>] contrast analysis showed. As for Broca's area, it is possible to consider that distinct but closely located neuronal subpopulations within this macroanatomical region may support different cognitive functions, and that its activation in the [<UFV>–<RTV>] contrast analysis may reflect high-level linguistic operations. However, and not contradictorily, the specific coactivation of all of the regions showing 'mirror-like' properties in this contrast validates our hypothesis on the involvement of mirror networks in language. The <RTV> stimulus cannot be reproduced by the subject, and it does not elicit activation in the areas rich in mirror-like neurons, such as the pars opercularis of Broca's area on the left and the right supramarginal gyrus, nor in strongly interconnected areas, such as the insula, whose anterior part is known to have a motor language-planning function (Dronkers, 1996), the precentral gyrus, or the superior temporal sulcus. This is not simply a question of semantics, since a collection of pseudo-words structured correctly according to syntax, but which mean nothing, also causes Broca's area to be activated (Friederici et al., 2006). Furthermore, this mirror-like network seems to be involved in the learning process, since these areas are also recruited when listening to a foreign language. Wilson and Iacoboni (2006) showed that there was increased activity in the temporal and premotor areas when listening to non-native phonemes, compared with native phonemes. Their data also suggest that speech perception is not just sensory or motor, but a sensorimotor process. That is compatible with our data showing that, to decode another person's language, a listener must have a potential motor representation which he is capable of reproducing. A speech recording played backwards is not recognized as language, unlike an unknown foreign language, because it has no motor representation, is not humanly reproducible, and so does not activate the mirror-like neuron network. This result confirms those obtained with visual–motor tasks, which showed the existence of a mirror-like network, but only when a movement was biomechanically reproducible by the subject (Stevens et al., 2000). Nor is Broca's area activated when a subject observes a dog barking; this action is not part of his motor repertoire (Buccino et al., 2004). Access to the representation of cortical processing of language through the mirror-like system fits in with such a model of shared motor representations.

It is important to differentiate implicitly one's own speech from that of others. The right temporo-parietal junction may participate in the cognitive distinction between one's own thoughts and actions and those of another person (Saxe et al., 2004). A recent meta-analysis has in fact shown that there is an overlap between the brain areas recruited for theory-of-mind tasks and those involved in self-agency tasks (Decety and Grezes, 2006). We found in this study that the IPL is activated when the subject must listen to a voice, but this same cortical region stays at the baseline level during self-produced speech [<UFV>–<SGV>]. Notably, the lower part of the parietal cortex receives direct afferents from the cerebellum (Clower et al., 2001). In the left hemisphere, the inferior parietal lobule is also a relay station between Wernicke's area and Broca's area (Catani et al., 2005). Thanks to its anatomical position, the parietal cortex therefore seems to be a convergence and integration zone for sensory and motor information and their temporal dynamics, and to play a key role in self-awareness in action (agency). It has also been shown that the right temporo-parietal junction can participate in the processing of pitch cues and intonation contours (for a review, see Wildgruber et al., 2006). Nevertheless, given that the subjects of our study heard the poem under each of the two experimental conditions, our results cannot be explained by variations of spoken utterances with indexical or intonational cues. In addition, other authors have suggested a distinction between two parallel fronto-posterior functional networks for voice perception in the brain, which is fully compatible with our results (Scott and Johnsrude, 2003). The first network extends anterior to the primary auditory core and includes, among other regions, the anterior STS. This ventral pathway would play a role in mapping acoustic–phonetic cues onto lexical representations and in the processing of familiarity (underlined by the FV–UFV contrast analysis in experiment 1). The second network extends posterior to the auditory belt and parabelt regions and incorporates the IPL. This dorsal pathway would play a part in the articulatory–gestural representation of the heard voices (underlined by the UFV–SGV contrast analysis in experiment 2).

Another, more anterior, area is also activated in the [<UFV>–<SGV>] contrast analysis: the mPFC region, which is involved in the performance monitoring process (Ridderinkhof et al., 2004). The activity of the dorso-medial prefrontal cortex, according to Frith, may be reduced when there are goal-directed movements (Frith and Frith, 1999). Other authors postulate that a forward output model could explain this activation (Wolpert and Miall, 1996). This model is a comparison between the likely result of speech production (of central origin) and the effective sensory feedback (auditory and proprioceptive). Backward connections from the mPFC towards the associative sensory cortical areas are then probably involved in the modulation of their neuronal activity (Ridderinkhof et al., 2004). An error in predicting the effect of speech production would result in recruitment of the parieto-frontal network, mainly on the right side, which has previously been identified as distinguishing between other and self during manual motor tasks (Decety and Sommerville, 2003). The activations found in the right IPL back up this hypothesis. Other authors had already postulated a link between predictive models and mirror systems (Miall, 2003), which our results confirm in the language field. Parallel to IPL activation (the dorsal pathway for speech processing as described by Scott and Johnsrude, 2003) in the self/other differentiation, it has been shown that Broca's area is involved in online predictive mechanisms for the information proceeding from the ventral pathway, e.g. dynamic anticipatory processing of hierarchical sentences (Fiebach and Schubotz, 2006). Thus, an fMRI study demonstrated that the activation of the pars opercularis, a region in Broca's area which shows mirror-like properties, was sensitive to violations of the syntactic structure of sentences (Newman et al., 2003). All these data support the fact that the sense of agency might be based on the control of the activity of a part of the lateral mirror network by the modulating activity of the medial parieto-frontal network. Our results also show modulation of the activity in the auditory cortex, in agreement with the idea that mechanisms in the lateral brain areas may predict the activity in the sensorial brain areas, and so modulate the activity in the sensorial areas, depending on whether the subject is the author of an action or an observer. This auditory brain area is in fact strongly connected to the IPL, but may also receive direct afferents from the medial frontal cortex. It has in fact previously been noted that deformation of the returning sound signal may cause modulation of the activity of the temporal cortex by the medial prefrontal cortex (Allen et al., 2005; Fu et al., 2005). In our study, the middle temporal gyri are also more strongly activated when listening to speech than when producing speech. This type of predictive model assumes that there is a functional asymmetry between the forward, backward and lateral connections, which is backed up by the Bayesian neuronal inference theory of cortical responses (Friston, 2005).
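As a purely illustrative sketch (ours, not from Wolpert and Miall, 1996), the comparator logic of such a forward output model can be written in a few lines: an efference-copy-based prediction of the auditory consequence of speech is compared with the actual feedback, and a small prediction error is taken as evidence that the heard voice is self-generated. The threshold and the signals are placeholders.

```python
import numpy as np


def self_attribution(predicted_feedback: np.ndarray,
                     actual_feedback: np.ndarray,
                     threshold: float = 0.1) -> bool:
    """Toy comparator for a forward output model.

    Returns True (speech attributed to self) when the mismatch between the
    predicted sensory consequence of speech production and the actual
    auditory feedback stays below an arbitrary threshold.
    """
    prediction_error = np.mean((predicted_feedback - actual_feedback) ** 2)
    return prediction_error < threshold


# Placeholder signals: self-produced speech yields feedback close to the prediction,
# while another speaker's voice does not.
prediction = np.sin(np.linspace(0, 10, 500))
own_voice = prediction + 0.05 * np.random.default_rng(0).normal(size=500)
other_voice = np.cos(np.linspace(0, 10, 500))

print(self_attribution(prediction, own_voice))    # True  -> attributed to self
print(self_attribution(prediction, other_voice))  # False -> attributed to other
```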

Another model may take some of these activations into account, in particular the fact that some of them result from deactivation with respect to the baseline. According to Raichle and collaborators (2001), there is a physiological basal cortical activity in the medial parieto-frontal areas, which are not activated but already active in the resting-state condition. The metabolism of these medial regions may be reduced when there are goal-directed movements. These reductions in the signal cannot be the direct result of local GABAergic inhibitory processes, which are characterized by an increase in local metabolism, but would rather be caused by long-distance connections (Gold and Lauritzen, 2002). There is also a strong positive temporal (Greicius et al., 2003) and spatial (van de Ven et al., 2004) correlation between the activity of the areas of this network at rest. In addition, the amplitude of these deactivations within these areas would seem to be linked to the amount of attention and working memory that the subject allocates to the task (Greicius and Menon, 2004). The convergence of these data seems to show that the basal physiological activity of a medial parieto-frontal distributed network, probably representing the resting brain function of an awake organism in the absence of external stimuli, with respect to its environment, may represent the feeling of being a self (Gusnard, 2005). The concept of self-awareness may therefore be seen in two distinct ways. During an action, it is understood as modulation of the activity of the IPL, part of the lateral mirror-like parieto-frontal network. At rest, independent of stimuli, it results from the higher physiological basal activity of the medial parieto-frontal areas. In fact, Uddin and collaborators (2005) suggested a distinction between the self as the subject of experience (as in our production task) and the self as an object (as in meta-cognitive tasks aimed at determining the origin of the action presented). Involvement of a resting-state network in our task for self/other differentiation, together with other brain areas like the IPL, has already been described in the literature as being involved in agency. It may therefore be the result of our experimental procedure, in which the self was not just an observer but could also distinguish whether an action was produced by himself or by someone else.

Before concluding, it is important to remember that the cortical network activated in the [<UFV>–<GT>] contrast analysis confirms the data in the literature, in particular with bilateral activations, mainly on the right side, centred on the upper part of the middle STS for cortical voice processing (Belin et al., 2000). We would note that our choice to use long connected-speech stimuli rather than isolated words in the voice condition activates a network which stretches beyond the temporal cortex (PMv and M1), which certainly reflects the complexity of the phonological, prosodic and semantic processing of speech (Hesling et al., 2005), as well as the involvement of the motor system in phonetic coding (Wilson et al., 2004). Activity in the primary motor cortex M1 during the passive listening task suggests that this brain area is not enough by itself to differentiate between the subject speaking and other speakers. This study also shows the subcortical areas which are involved in the motor planning of speech production, in particular the caudate nucleus, which is strongly connected to the thalamus and the premotor cortex, and whose function may be to select the motor sequences required for speech articulation (Friederici, 2006). Finally, the ventral temporal part (anterior STS) is significantly more activated during the voice familiarity processing test [<FV>–<UFV>] (Kriegstein and Giraud, 2004). The result of this control situation is distinct from the parieto-frontal network shown by the [<UFV>–<SGV>] contrast analysis. Recruitment of the parieto-frontal network therefore reflects not simply recognition of the subject's own voice with respect to less familiar voices, but attribution of the source of the voice.

Short conclusion

This study shows that the neuronal basis of speech listening involves mirror-like systems, including the IFG (BA 44) and the IPL (BA 40), on the lateral cortical surface of the brain, predominantly on the right side. Language has a common coding for the one who produces it and the one who hears it, in accordance with the main claim of the motor theory of speech perception (Liberman and Mattingly, 1985). This lateral mirror-like network may have crucial importance in the learning process and in understanding human language. Speech production is accompanied by deactivation of a medial cortical network, linked to the non-specific physiological basal activity of the brain in the medial prefrontal and medial parietal cortices, which can be suspected to be involved in distinguishing between self and other. Our results support the fact that the sense of agency, defined as identifying the source of the action producing the speech, may be based on the modulating activity of this medial parieto-frontal network on the level of activity in part of the mirror networks (IPL). In an awake subject who is just listening, this medial network would remain active and be involved in self-related thoughts. This concept recalls the "feeling of continuing to exist" developed by Winnicott (1958), since this network remains active as long as the attention of the subject is not redistributed to a goal-directed action. Dysfunction of the medial parieto-frontal network may occur in subjects suffering from schizophrenia, in the phenomena of intrusive thoughts and auditory hallucinations, in which information perceived by the brain (e.g. hearing voices) is wrongly attributed to an outside source (Frith, 2005).

Acknowledgments

We thank all the anonymous referees for their comments and wise advice, which have helped improve the overall quality of this manuscript.

References

Allen, P.P., Amaro, E., Fu, C.H., Williams, S.C., Brammer, M., Johns, L.C., McGuire, P.K., 2005. Neural correlates of the misattribution of self-generated speech. Hum. Brain Mapp. 26 (1), 44–53.

Arbib, M.A., 2005. From monkey-like action recognition to human language: an evolutionary framework for neurolinguistics. Behav. Brain Sci. 28 (2), 105–124.

Belin, P., Zatorre, R.J., Lafaille, P., Ahad, P., Pike, B., 2000. Voice-selective areas in human auditory cortex. Nature 403 (6767), 309–312.

Blakemore, S.J., Decety, J., 2001. From the perception of action to the understanding of intention. Nat. Rev. Neurosci. 2 (8), 561–567.

Buccino, G., Lui, F., Canessa, N., Patteri, I., Lagravinese, G., Benuzzi, F., Porro, C.A., Rizzolatti, G., 2004. Neural circuits involved in the recognition of actions performed by nonconspecifics: an fMRI study. J. Cogn. Neurosci. 16 (1), 114–126.

Catani, M., Jones, D.K., ffytche, D.H., 2005. Perisylvian language networks of the human brain. Ann. Neurol. 57 (1), 8–16.

Chaminade, T., Decety, J., 2002. Leader or follower? Involvement of the inferior parietal lobule in agency. NeuroReport 13 (15), 1975–1978.

Clower, D.M., West, R.A., Lynch, J.C., Strick, P.L., 2001. The inferior parietal lobule is the target of output from the superior colliculus, hippocampus, and cerebellum. J. Neurosci. 21 (16), 6283–6291.

Decety, J., Grezes, J., 2006. The power of simulation: imagining one's own and other's behavior. Brain Res. 1079 (1), 4–14.

Decety, J., Sommerville, J.A., 2003. Shared representations between self and other: a social cognitive neuroscience view. Trends Cogn. Sci. 7 (12), 527–533.

Dronkers, N.F., 1996. A new brain region for coordinating speech articulation. Nature 384 (6605), 159–161.

Eilan, N., Roessler, J., 2003. Agency and self-awareness: mechanisms and epistemology. In: Roessler, J., Eilan, N. (Eds.), Agency and Self-Awareness: Issues in Philosophy and Psychology. Oxford University Press, New York, USA, pp. 1–47.

Fadiga, L., Craighero, L., Buccino, G., Rizzolatti, G., 2002. Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur. J. Neurosci. 15 (2), 399–402.

Farrer, C., Franck, N., Georgieff, N., Frith, C.D., Decety, J., Jeannerod, M., 2003. Modulating the experience of agency: a positron emission tomography study. NeuroImage 18 (2), 324–333.

Fiebach, C.J., Schubotz, R.I., 2006. Dynamic anticipatory processing of hierarchical sequential events: a common role for Broca's area and ventral premotor cortex across domains? Cortex 42 (4), 499–502.

Fischl, B., Sereno, M.I., Tootell, R.B., Dale, A.M., 1999. High-resolution intersubject averaging and a coordinate system for the cortical surface. Hum. Brain Mapp. 8 (4), 272–284.

Friederici, A.D., 2006. What's in control of language? Nat. Neurosci. 9 (8), 991–992.

Friederici, A.D., Bahlmann, J., Heim, S., Schubotz, R.I., Anwander, A., 2006. The brain differentiates human and non-human grammars: functional localization and structural connectivity. Proc. Natl. Acad. Sci. U. S. A. 103 (7), 2458–2463.

Friston, K., 2005. A theory of cortical responses. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360 (1456), 815–836.

Frith, C.D., 2005. The self in action: lessons from delusions of control. Conscious. Cogn. 14 (4), 752–770.

Frith, C.D., Frith, U., 1999. Interacting minds—A biological basis. Science 286 (5445), 1692–1695.

Fu, C.H., Vythelingum, G.N., Brammer, M.J., Williams, S.C., Amaro Jr., E., Andrew, C.M., Yaguez, L., van Haren, N.E., Matsumoto, K., McGuire, P.K., 2005. An fMRI study of verbal self-monitoring: neural correlates of auditory verbal feedback. Cereb. Cortex 16 (7), 969–977.

Galantucci, B., Fowler, C.A., Turvey, M.T., 2006. The motor theory of speech perception reviewed. Psychon. Bull. Rev. 13 (3), 361–377.

Gallagher, H.L., Frith, C.D., 2003. Functional imaging of 'theory of mind'. Trends Cogn. Sci. 7 (2), 77–83.

Gallese, V., Fadiga, L., Fogassi, L., Rizzolatti, G., 1996. Action recognition in the premotor cortex. Brain 119 (Pt 2), 593–609.

Genovese, C.R., Lazar, N.A., Nichols, T., 2002. Thresholding of statistical maps in functional neuroimaging using the false discovery rate. NeuroImage 15 (4), 870–878.

Goebel, R., Esposito, F., Formisano, E., 2006. Analysis of functional image analysis contest (FIAC) data with BrainVoyager QX: from single-subject to cortically aligned group general linear model analysis and self-organizing group independent component analysis. Hum. Brain Mapp. 27 (5), 392–401.

Gold, L., Lauritzen, M., 2002. Neuronal deactivation explains decreased cerebellar blood flow in response to focal cerebral ischemia or suppressed neocortical function. Proc. Natl. Acad. Sci. U. S. A. 99 (11), 7699–7704.

Greicius, M.D., Menon, V., 2004. Default-mode activity during a passive sensory task: uncoupled from deactivation but impacting activation. J. Cogn. Neurosci. 16 (9), 1484–1492.

Greicius, M.D., Krasnow, B., Reiss, A.L., Menon, V., 2003. Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proc. Natl. Acad. Sci. U. S. A. 100 (1), 253–258.

Gusnard, D.A., 2005. Being a self: considerations from functional imaging. Conscious. Cogn. 14 (4), 679–697.

Gusnard, D.A., Raichle, M.E., Raichle, M.E., 2001. Searching for a baseline: functional imaging and the resting human brain. Nat. Rev. Neurosci. 2 (10), 685–694.

Hesling, I., Clement, S., Bordessoules, M., Allard, M., 2005. Cerebral mechanisms of prosodic integration: evidence from connected speech. NeuroImage 24 (4), 937–947.

Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J.C., Rizzolatti, G., 2005. Grasping the intentions of others with one's own mirror neuron system. PLoS Biol. 3 (3), e79, 529–535.

Jackson, P.L., Decety, J., 2004. Motor cognition: a new paradigm to study self–other interactions. Curr. Opin. Neurobiol. 14 (2), 259–263.

Kinsbourne, M., 2005. A continuum of self-consciousness that emerges in phylogeny and ontogeny. In: Terrace, H.S., Metcalfe, J. (Eds.), The Missing Link in Cognition: Origins of Self-Reflective Consciousness. Oxford University Press, New York, USA, pp. 142–156.

Kohler, E., Keysers, C., Umilta, M.A., Fogassi, L., Gallese, V., Rizzolatti, G., 2002. Hearing sounds, understanding actions: action representation in mirror neurons. Science 297 (5582), 846–848.

Kriegstein, K.V., Giraud, A.L., 2004. Distinct functional substrates along the right superior temporal sulcus for the processing of voices. NeuroImage 22 (2), 948–955.

Liberman, A.M., Mattingly, I.G., 1985. The motor theory of speech perception revised. Cognition 21 (1), 1–36.

Meltzoff, A.N., Decety, J., 2003. What imitation tells us about social cognition: a rapprochement between developmental psychology and cognitive neuroscience. Philos. Trans. R. Soc. Lond. B Biol. Sci. 358 (1431), 491–500.

Miall, R.C., 2003. Connecting mirror neurons and forward models. NeuroReport 14 (17), 2135–2137.

Molnar-Szakacs, I., Iacoboni, M., Koski, L., Mazziotta, J.C., 2005. Functional segregation within pars opercularis of the inferior frontal gyrus: evidence from fMRI studies of imitation and action observation. Cereb. Cortex 15 (7), 986–994.

Newman, S.D., Just, M.A., Keller, T.A., Roth, J., Carpenter, P.A., 2003. Differential effects of syntactic and semantic processing on the subregions of Broca's area. Brain Res. Cogn. Brain Res. 16 (2), 297–307.

Oldfield, R.C., 1971. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9 (1), 97–113.

Petrides, M., Cadoret, G., Mackey, S., 2005. Orofacial somatomotor responses in the macaque monkey homologue of Broca's area. Nature 435 (7046), 1235–1238.

Preibisch, C., Pilatus, U., Bunke, J., Hoogenraad, F., Zanella, F., Lanfermann, H., 2003. Functional MRI using sensitivity-encoded echo planar imaging (SENSE-EPI). NeuroImage 19 (2 Pt 1), 412–421.

Raichle, M.E., MacLeod, A.M., Snyder, A.Z., Powers, W.J., Gusnard, D.A., Shulman, G.L., 2001. A default mode of brain function. Proc. Natl. Acad. Sci. U. S. A. 98 (2), 676–682.

Ridderinkhof, K.R., Ullsperger, M., Crone, E.A., Nieuwenhuis, S., 2004. The role of the medial frontal cortex in cognitive control. Science 306 (5695), 443–447.

Rizzolatti, G., Arbib, M.A., 1998. Language within our grasp. Trends Neurosci. 21 (5), 188–194.

Rizzolatti, G., Fadiga, L., Matelli, M., Bettinardi, V., Paulesu, E., Perani, D., Fazio, F., 1996. Localization of grasp representations in humans by PET: 1. Observation versus execution. Exp. Brain Res. 111 (2), 246–252.

Rodriguez, M., Muniz, R., Gonzales, B., Sabate, M., 2004. Hand movement distribution in the motor cortex: the influence of a concurrent task and motor imagery. NeuroImage 22 (4), 1480–1491.

Ruby, P., Decety, J., 2001. Effect of subjective perspective taking during simulation of action: a PET investigation of agency. Nat. Neurosci. 4 (5), 546–550.

Saxe, R., Xiao, D.K., Kovacs, G., Perrett, D.I., Kanwisher, N., 2004. A region of right posterior superior temporal sulcus responds to observed intentional actions. Neuropsychologia 42 (11), 1435–1446.

Schmithorst, V.J., 2005. Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing. NeuroImage 25 (2), 444–451.

Scott, S.K., Johnsrude, I.S., 2003. The neuroanatomical and functional organization of speech perception. Trends Neurosci. 26 (2), 100–107.

Stevens, J.A., Fonlupt, P., Shiffrar, M., Decety, J., 2000. New aspects of motion perception: selective neural encoding of apparent human movements. NeuroReport 11 (1), 109–115.

Talairach, J., Tournoux, P., 1988. Co-planar Stereotaxic Atlas of the Human Brain. Thieme Medical Publishers, New York.

Uddin, L.Q., Kaplan, J.T., Molnar-Szakacs, I., Zaidel, E., Iacoboni, M., 2005. Self-face recognition activates a fronto-parietal "mirror" network in the right hemisphere: an event-related fMRI study. NeuroImage 25 (3), 926–935.

van de Ven, V.G., Formisano, E., Prvulovic, D., Roeder, C.H., Linden, D.E., 2004. Functional connectivity as revealed by spatial independent component analysis of fMRI measurements during rest. Hum. Brain Mapp. 22 (3), 165–178.

Wildgruber, D., Ackermann, H., Kreifelts, B., Ethofer, T., 2006. Cerebral processing of linguistic and emotional prosody: fMRI studies. Prog. Brain Res. 156, 249–268.

Wilson, S.M., Iacoboni, M., 2006. Neural responses to non-native phonemes varying in producibility: evidence for the sensorimotor nature of speech perception. NeuroImage 33 (1), 316–325.

Wilson, S.M., Saygin, A.P., Sereno, M.I., Iacoboni, M., 2004. Listening to speech activates motor areas involved in speech production. Nat. Neurosci. 7 (7), 701–702.

Winnicott, D.W., 1958. The capacity to be alone. Int. J. Psychoanal. 39 (5), 416–420.

Wolpert, D.M., Miall, R.C., 1996. Forward models for physiological motor control. Neural Netw. 9 (8), 1265–1279.