
Neuromagnetic Functional Coupling During Dichotic Listening of Speech Sounds

Alfredo Brancucci,1*# Stefania Della Penna,2# Claudio Babiloni,3,4

Fabrizio Vecchio,4 Paolo Capotosto,2 Davide Rossi,2

Raffaella Franciotti,2 Kathya Torquati,2 Vittorio Pizzella,2

Paolo M. Rossini,4 and Gian Luca Romani2

1ITAB, Istituto di Tecnologie Avanzate Biomediche, Fondazione "Università G. D'Annunzio," and Dipartimento di Scienze Biomediche, Università "G. D'Annunzio," Chieti, Italy

2ITAB, Istituto di Tecnologie Avanzate Biomediche, Fondazione "Università G. D'Annunzio," and Dipartimento di Scienze Cliniche e Bioimmagini, Università "G. D'Annunzio," Chieti, Italy

3Dipartimento di Fisiologia Umana e Farmacologia, Università "La Sapienza," Rome, Italy

4IRCCS "S. Giovanni di Dio-FBF," Brescia, A.Fa.R. Dipartimento di Neuroscienze, Ospedale FBF Isola Tiberina and (PMR) Clinica Neurologica, Università "Campus Biomedico," Rome, Italy

Abstract: The present magnetoencephalography (MEG) study tested the hypothesis of a phase synchronization (functional coupling) of cortical alpha rhythms (about 6–12 Hz) within a "speech" cortical neural network comprising the bilateral primary auditory cortex and Wernicke's area, during dichotic listening (DL) of consonant-vowel (CV) syllables. Dichotic stimulation was done with the CV-syllable pairs /da/-/ba/ (true DL, yielded by stimuli having high spectral overlap) and /da/-/ka/ (sham DL, obtained with stimuli having poor spectral overlap). Whole-head MEG activity (165 sensors) was recorded from 10 healthy right-handed non-musicians showing a right ear advantage in a speech DL task. Functional coupling of alpha rhythms was defined as the spectral coherence in the following bands, defined with respect to the peak of the individual alpha frequency: alpha 1 (about 6–8 Hz), alpha 2 (about 8–10 Hz), and alpha 3 (about 10–12 Hz). Results showed an inverse pattern of functional coupling: during DL of speech sounds, spectral coherence of the high-band alpha rhythms increased between the left auditory and Wernicke's areas with respect to sham DL, whereas it decreased between the left and right auditory areas. The increase of functional coupling within the left hemisphere would underlie the processing of the syllable presented to the right ear, which arrives at the left auditory cortex without interference from the other syllable presented to the left ear. Conversely, the decrease of inter-hemispheric coupling of the high-band alpha might be due to the fact that the two auditory cortices do not receive the same information from the ears during DL. These results suggest that functional coupling of alpha rhythms can constitute a neural substrate for the lateralization of auditory stimuli during DL. Hum Brain Mapp 29:253–264, 2008. © 2007 Wiley-Liss, Inc.

Key words: dichotic listening; consonant-vowel (CV) syllables; magnetoencephalography (MEG); spectral coherence; functional coupling; primary auditory cortex; Wernicke's area

INTRODUCTION

Dichotic listening is broadly used to measure the cerebral lateralization of perceptual and cognitive functions in both clinical and experimental practice. It consists of the simultaneous presentation of two different auditory stimuli, one to each ear [Bryden, 1988; Hugdahl, 2000; Tervaniemi and Hugdahl, 2003].

#These two authors contributed equally to the present study.

*Correspondence to: Alfredo Brancucci, PhD, ITAB, Istituto di Tecnologie Avanzate Biomediche, Università di Chieti "G. D'Annunzio," 33 Via dei Vestini, 66013 Chieti, Italy. E-mail: [email protected]

Received for publication 5 June 2006; Revision 8 January 2007; Accepted 27 January 2007

DOI: 10.1002/hbm.20385
Published online 16 March 2007 in Wiley InterScience (www.interscience.wiley.com).


Subjects with left-hemispheric language lateralization are faster and more accurate in reporting dichotic verbal items presented to the right compared with the left ear [Kimura, 1961; Studdert-Kennedy and Shankweiler, 1970], while they exhibit a left ear advantage for tasks involving the recognition of complex tones, music, or environmental sounds [Boucher and Bryden, 1997; Brancucci and San Martini, 1999, 2003; Brancucci et al., 2005a; Kallman and Corballis, 1975]. Functional neuroimaging studies of regional cerebral blood flow have elucidated fine spatial details of the brain structures involved in DL, such as the bilateral primary auditory areas [Hugdahl et al., 1999, 2000; Jancke and Shah, 2002; Jancke et al., 2003; Lipschutz et al., 2002], the orbitofrontal and hippocampal paralimbic belts [Pollmann et al., 2004], the prefrontal cortex [Hugdahl et al., 2003a; Lipschutz et al., 2002; Thomsen et al., 2004], and the splenium of the corpus callosum [Plessen et al., 2007; Pollmann et al., 2002; Westerhausen et al., 2006].

It has been shown that "true" DL occurs when the stimuli constituting the dichotic pair have high overlap in spectral content. This claim is based on both behavioral and physiological evidence. Springer et al. [1978] showed that, while report of consonant-vowel syllables presented to the left ear during dichotic testing was at chance, report of left-ear digits under the same conditions was greater than 80% in four out of five split-brain patients. As the acoustic overlap is greater between consonant-vowel syllables than between digits, they concluded that the availability of information from the ipsilateral auditory pathway is a function of the spectral acoustic overlap between competing dichotic stimuli. This indicates that noncompeting pairs (digits in that study) do not make up true DL stimuli. Sidtis [1981] demonstrated a nearly threefold difference in the magnitude of the laterality measure when dichotic tones with similar vs. different fundamental frequencies were employed. The musical fifth interval (low spectral overlap) yielded minimal laterality effects, whereas intervals of a second, a minor third, or an octave (higher spectral overlap) yielded maximal laterality effects. More recently, using MEG during DL of non-verbal stimuli, we demonstrated that, in accordance with Kimura [1967], the neurophysiological interactions that allow the lateralization of the dichotic input were especially evident when the dichotic pair was composed of complex tones having similar fundamental frequency and spectral content [Brancucci et al., 2004]. The same is true when the dichotic pair is composed of consonant-vowel syllables (CV-syllables) having high spectral overlap [Della Penna et al., in press].

On the whole, it can be argued that when the dichotic pair is composed of stimuli having scarce spectral overlap, they reach both auditory cortices without significant loss of information. Conversely, when the dichotic pair is composed of stimuli having high spectral overlap, they compete to reach the cortical level, owing to the organization of the cortex, which is based on spectral cues, i.e. tonotopy [Bilecen et al., 1998; Formisano et al., 2003; Romani et al., 1982]. It can be speculated that this competition actually favors stimulus lateralization, beyond the fact that the input to each ear is generally better represented in the contralateral auditory cortex [Ackermann et al., 2001; Eichele et al., 2005; Makela, 1988].

At the present stage of research, an open issue remains the investigation of the cooperation between auditory cortical areas during DL of speech sounds. Indeed, a consequence of the DL paradigm, in which the two auditory cortices do not receive the same information from the ears, is that the cortical areas do not respond to the stimulation with the same features. It is conceivable that this different activity of the left and right auditory areas is associated with a reduced coordination, or functional coupling, between them, possibly due to a major involvement of the left (dominant for verbal sounds) compared with the right auditory cortical areas. Such functional coupling would be allowed by direct inter-hemispheric connections between the auditory cortices, as revealed by several studies in the cat, rat, monkey, and man [Arnault and Roger, 1990; Bozhko et al., 1988; Code and Winer, 1986; Diamond et al., 1968; Pandya et al., 1969], and has been observed in electroencephalography recordings during DL of complex tones (non-speech sounds; Brancucci et al., 2005b).

The spectral band of the human encephalogram most directly involved in the type of processes investigated by the present study is the alpha frequency band (about 6–12 Hz). The alpha frequency contains the dominant component of the human encephalogram and is considered to represent the main element of thalamo-cortical and cortico-cortical connectivity [Lopes da Silva et al., 1980]. Alpha rhythms are strictly related to attentional level and cortical information processing, in that a decrease or an increase of alpha power reflects elaboration or inhibition of sensory stimuli [Nunez, 1995; Pfurtscheller and Lopes da Silva, 1999]. Research in recent years has indicated that a thorough investigation of alpha rhythms requires their division into sub-bands. Notably, low-band alpha rhythms (about 6–8 Hz, alpha 1, and 8–10 Hz, alpha 2) reflect unspecific "alertness" and/or "expectancy" processes [Klimesch et al., 1996, 1998], whereas high-band alpha rhythms (about 10–12 Hz, alpha 3) depend on task-specific sensory processes [Klimesch et al., 1994, 1996]. Regarding specifically the auditory cortex, the existence of a distinct, reactive auditory rhythm around 10 Hz in the human temporal cortex, also called the "tau" rhythm, has been shown [Lehtela et al., 1997]. Many studies have observed a modulation of oscillations in the alpha range involving the auditory cortex during a lexical decision task with words and pseudowords [Krause et al., 2006], an auditory-verbal encoding or retrieval working memory task [Ellfolk et al., 2006], and other auditory memory tasks [Fingelkurts et al., 2003; Pesonen et al., 2006].

Abbreviations

ANOVA: analysis of variance
CV-syllables: consonant-vowel syllables
DL: dichotic listening
ECD: equivalent current dipole
ErCoh: event-related coherence
MEG: magnetoencephalography
PAC: primary auditory cortex


The earliest DL findings recognized that attentional processes play an important role in this topic [Kinsbourne, 1970]. More recently, functional neuroimaging studies have shown that attentional shifts during DL are accompanied by specific modulations of neural activity, possibly with a facilitating effect for auditory processing [Alho et al., 2003; Hugdahl et al., 2000; O'Leary et al., 1996]. Selective attention to the right ear leads to increased activity in the left auditory cortex, whereas selective attention to the left ear increases activity in the right auditory cortex. In particular, the area showing such an asymmetric pattern of activation linked to attention is the planum temporale [Jancke et al., 2001, 2003; Lipschutz et al., 2002].

We aim here at extending previous evidence [Brancucci et al., 2005b] to speech sounds, in order to elucidate whether synchronization of alpha rhythms within a network of speech auditory areas, including the bilateral primary auditory cortex (PAC) and Wernicke's area, can be affected by DL. The hypothesis is that the different activity of the left and right PAC elicited by "true" DL is associated with a reduced functional coupling between the two areas, as an outcome of the major involvement of the left (dominant for speech sounds) compared with the right auditory cortex. On the contrary, we expect an increase of functional coupling within left-hemispheric speech cortical areas (left PAC and Wernicke's area), at the basis of the processing of the CV-syllable presented to the right ear, which arrives at the left PAC with a reduced interference from the CV-syllable presented dichotically to the left ear. To address this issue we used an approach based on magnetoencephalography (MEG) and on the analysis of spectral coherence of the alpha rhythms generated during passive DL of CV-syllables with distributed attention (i.e. no lateral attentional shifts). The computation of spectral coherence on MEG recordings is particularly suited to the study of functional coupling thanks both to the high temporal resolution of MEG, which allows the study of neural synchronization on the millisecond scale, and to the fact that the tissues lying between the neural sources and the sensors are transparent to magnetic signals. This prevents the recorded signal from being blurred, which could introduce artifacts in the analysis of spectral coherence. Moreover, MEG is a silent neuroimaging technique, which has been successfully used to model the activation of auditory cortical areas [Hari, 1990].

MATERIALS AND METHODS

Subjects

Ten right-handed (Edinburgh Inventory: mean ± standard error = 67.94 ± 7.39) healthy volunteers were recruited (seven females, age range 20–31 years, average 25 years). All subjects gave their written informed consent according to the Declaration of Helsinki and could freely request an interruption of the investigation at any time.

The general procedures were approved by the local Institutional Ethics Committee. None of the subjects had auditory impairments, as shown by auditory functional assessment. No differences (±5 dB) in hearing threshold at 500 and 1,000 Hz were found between the left and right ears in any subject.

Behavioral Test

In a separate session, subjects underwent a behavioral verbal DL task. The verbal DL task consisted of 60 items, which were continuously generated by a computer with a 3 s inter-item interval. Subjects were provided with earphones, a pencil, and a grid printed on a sheet of paper, and sat comfortably in front of the computer. After 30 items the positions of the earphones were switched between the left and right ears, to avoid any bias due to the output channels. Each item consisted of a dichotic pair composed of two of the CV-syllables /ba/, /ka/, /da/, /ga/, /pa/, /ta/. The task of the subject was to indicate on the grid which of the CV-syllables listed above he/she perceived best. Subjects were asked to pay attention to both ears simultaneously, i.e. without privileging one ear [Hugdahl et al., 2000, 2003b; Jancke et al., 2003]. Data analysis was based on the number of correctly reported syllables presented to the left vs. the right ear. That is, when the subject reported a syllable that was actually present in the dichotic pair, a point was ascribed to the left (right) ear if the reported syllable had been presented to the left (right) ear [Eichele et al., 2005]. A one-way ANOVA with Ear of input (left, right) as a factor was carried out on the ear scores (dependent variable).

Stimulation for MEG Recordings

Dichotic stimuli consisted of three CV-syllables (/da/, /ba/, /ka/) recorded from a natural female voice. The intensity of the stimuli was adjusted to 60 dBA. The CV-syllables were recorded and handled with a sampling rate of 44,100 Hz and an amplitude resolution of 16 bit. Stimulus recording and handling were performed using the software "Wave 2.0" (Voyetra Turtle Beach Systems, Yonkers, NY) for Microsoft Windows on a Pentium III 550 MHz PC with a Sound Blaster AWE 32 audio card. Waveforms of the CV-syllables are plotted in Figure 1 (left panel). Verification of the actual sound intensity was done by means of a phonometer (Geass, Delta Ohm HD2110, Torino, Italy). The normalized power spectrum densities of the consonants forming the CV-syllables are shown in Figure 1 (right panel). The spectra were computed in the time window of the consonant using 1024 points and a Hamming window.


The three syllables were arranged in the following four dichotic stimuli: two "true dichotic" competing CV-syllable pairs (/da/ at the left ear + /ba/ at the right ear, and vice versa /ba/ at the left ear + /da/ at the right ear) having high spectral overlap, and two "dichotic sham" non-competing CV-syllable pairs (/da/ at the left ear + /ka/ at the right ear, and vice versa /ka/ at the left ear + /da/ at the right ear) having low spectral overlap (control condition).

The amount of spectral overlap between the consonants of the CV-syllables forming the dichotic stimuli was estimated by computing the Euclidean distance between the consonant spectra. We obtained a spectral overlap of about 93% when comparing /d/ versus /b/ (two voiced consonants) and about 23% for /d/ versus /k/ (a voiced versus a voiceless consonant). Each of the four stimuli was presented 80 times in a pseudorandomized sequence to avoid expectancy effects that could affect the neuromagnetic measures. The interstimulus interval varied randomly between 2,500 and 3,500 ms.
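As an illustration of the overlap estimate just described, the following Python sketch (not the authors' code) computes normalized consonant power spectra with a 1024-point Hamming window and compares them through their Euclidean distance; the mapping of that distance onto a 0-1 overlap value is an assumption made here for illustration.

import numpy as np

FS = 44100  # sampling rate of the syllable recordings (Hz)

def normalized_psd(segment, n_fft=1024):
    # normalized power spectrum of a consonant segment (at least n_fft samples long)
    windowed = segment[:n_fft] * np.hamming(n_fft)
    psd = np.abs(np.fft.rfft(windowed)) ** 2
    return psd / psd.sum()                    # unit-sum normalization (assumed)

def spectral_overlap(psd_a, psd_b):
    # 1 minus the Euclidean distance between unit-sum spectra, scaled by its
    # maximal possible value sqrt(2); this scaling is an assumption
    return 1.0 - np.linalg.norm(psd_a - psd_b) / np.sqrt(2.0)

# usage: overlap = spectral_overlap(normalized_psd(d_segment), normalized_psd(b_segment))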

MEG Recordings

Whole-brain activity was recorded using the 165-channel MEG system installed at the University of Chieti inside a high-quality magnetically shielded room [Della Penna et al., 2000]. The system consists of 153 dc SQUID integrated magnetometers arranged on a helmet surface covering the whole head, plus 12 reference channels. The acoustic stimulation was provided by Sensorcom plastic ear tubes connected to an artifact-free transducer.

During the magnetic recordings the subjects passively listened to the stimuli and did not perform any task. They were asked to pay attention to both ears at the same time, i.e. without privileging one side [Hugdahl et al., 2000; Jancke et al., 2003]. Simultaneously with the magnetic recordings, the ECG was acquired as a reference for the rejection of the heart artifact. All signals were band-pass filtered at 0.16–250 Hz and recorded at a 1 kHz sampling rate.

The position of the subjects' head with respect to the sensors was determined by recording and fitting the magnetic field generated by four coils placed on the scalp before and after each recording. The coil positions on the subject's scalp were digitized by means of a 3D digitizer (Polhemus 3Space Fastrak), together with anatomical landmarks defining a coordinate system. A set of high-resolution magnetic resonance images (MRIs) of the subjects' heads was obtained on a Siemens Magnetom Vision 1.5 Tesla scanner using an MPRAGE sequence (matrix 256 × 256, FoV 256, TR = 9.7 ms, TE = 4 ms, flip angle 12°, voxel size 1 mm³).

Dipole Source Analysis

We studied the coherence between sources representing the evoked activity of the bilateral PAC and of Wernicke's area (Fig. 2).

Figure 1. Left: waveforms of the CV-syllables /da/, /ba/, and /ka/, which constituted the dichotic stimuli. Right: power spectrum densities of the consonants /d/, /b/, and /k/ forming the CV-syllables. The spectral overlap between /d/ and /b/ was estimated to be about 93%, so the CV-syllables /da/ and /ba/ were labelled "competing." The overlap was reduced to 23% for /d/ and /k/, forming the noncompeting syllables /da/ and /ka/, which constituted the control stimuli in the present study (dichotic sham condition).


Indeed, the aim of the present study was to investigate the functional coupling among the major areas involved in the perception of simple speech stimuli [Jancke et al., 2002], which are also the first cortical stations reached by auditory speech signals. It should however be considered that DL involves a distributed network of areas comprising, besides the auditory cortex, the prefrontal cortex, the orbitofrontal and hippocampal paralimbic belts, and the splenium of the corpus callosum.

Wernicke's area and the left PAC are a few centimetres apart. For our recordings we used magnetometers, which feature a spread lead field, so that the detected signal comprises the contribution of nearby lying sources even if it is small. Thus, to separate the activity of Wernicke's area from that of the left primary auditory area, we studied coherence between the brain sites that gave rise to the evoked fields, instead of channel coherence. To this aim we reconstructed the specific source waveforms from the raw waveforms of all channels throughout all the trials, i.e. the source raw activities. Thus, the oscillations observed in the present study are directly generated at the level of the corresponding sources.

First, we localized the sources based on the analysis of the auditory evoked magnetic fields (AEFs). For each dichotic pair, about 70 artifact-free AEF trials were band-pass filtered (1–70 Hz) and averaged over a time period of 600 ms, including a 50 ms pre-stimulus epoch. For each channel, the baseline level was set as the mean value of the magnetic field in the time interval from −10 to +10 ms around the stimulus onset. Before averaging, the heart artifact was removed from all channels by an adaptive algorithm using the QRS peak recorded by the ECG channel as a trigger.
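The AEF averaging step can be illustrated with the following minimal Python sketch, which assumes a generic channels-by-samples array of already band-pass-filtered data and a list of accepted stimulus onsets; heart-artifact removal is omitted, and the variable names are illustrative.

import numpy as np

FS = 1000                     # MEG sampling rate (Hz)
PRE, POST = 0.05, 0.55        # 600 ms epoch including a 50 ms pre-stimulus period

def average_aef(raw, onsets):
    # raw: (n_channels, n_samples) filtered MEG data; onsets: stimulus onset samples
    n_pre, n_post = int(PRE * FS), int(POST * FS)
    epochs = np.stack([raw[:, o - n_pre:o + n_post] for o in onsets])  # (trials, ch, time)
    t = np.arange(-n_pre, n_post) / FS
    base = epochs[:, :, (t >= -0.01) & (t <= 0.01)].mean(axis=2, keepdims=True)
    return (epochs - base).mean(axis=0)       # baseline-corrected average AEF

# usage: aef = average_aef(filtered_meg, accepted_onsets)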

Second, we used the multiple source analysis provided by the BESA software (MEGIS Software GmbH, Germany), based on the equivalent current dipole (ECD) as the source model and on a homogeneous sphere to model the subject's head. The fitting interval was 70–360 ms poststimulus. In this time interval, the position, orientation, and amplitude of two free ECDs were fitted to model the sources in the primary auditory cortices using the dichotic pair comprising the noncompeting syllables /da/-/ka/, since the signal-to-noise ratio was higher in this condition. No regional constraints were applied to the fit of the two ECDs. For the other conditions, the locations of the auditory ECDs found in the noncompeting condition /da/-/ka/ were held fixed during the fit. A third source was positioned at the Talairach coordinates (−52, −40, 8) to represent Wernicke's area in the left hemisphere (see Fig. 2). The orientation and amplitude of this dipole source could change to fit the AEFs in the time interval 150–400 ms poststimulus. We also checked whether other sources could explain the evoked data by adding free ECDs. Since we did not obtain sets of sources fitting the residual field (after fitting the ECDs in the primary auditory cortex) that were reproducible across subjects and conditions, we considered only the two ECDs in the bilateral primary auditory cortex and the one in Wernicke's area for further analysis. For each subject, the ECD locations were checked on the MRI with the BrainVoyager software (Brain Innovation B.V., The Netherlands). We selected a goodness of fit of 80% as the lower threshold to accept an ECD configuration. Because of the low amplitude of the evoked signal, possibly caused by the interference between auditory pathways during DL [Brancucci et al., 2004; Della Penna et al., in press], 20% of residual variance was assumed to be related to nonphase-locked signals. The mean explained variance across subjects was 86%.

PAC and for the one in Wernicke’s area that are the ECDamplitudes at each sampling time were obtained by meansof BESA software. The ECD positions and orientationswere held fixed while an inverse operator was applied tothe instantaneous raw field distribution over the helmetthroughout the whole recording session.

Analysis of Spectral Coherence

The source raw activities reconstructed from the continuously recorded MEG data were segmented into single trials, each spanning from −1000 to +1000 ms, zero time being the onset of the auditory stimulus. ECD single trials were discarded when associated with artefacts. About 70 MEG trials were accepted for each stimulus condition and for each subject. To perform spectral coherence analysis of the artifact-free data, we preliminarily removed phase-locked activity (i.e., the average evoked auditory source activity) with a mathematical technique based on weighted inter-trial variance calculation. It is indeed well known that phase-locked activity (evoked activity) can interfere with the study of power density or functional coupling of cerebral rhythms [Kalcher and Pfurtscheller, 1995].

Figure 2. The three ECDs considered for the analysis of functional coupling, localized in the left PAC, right PAC, and Wernicke's area.


Briefly, the procedure of phase-locked activity removal was the following. For each source, a correction factor was calculated for each single trial by cross-correlation between the average source AEF and the ongoing raw source activity of that trial. This factor was used to weight the removal of the source AEF from the ongoing source activity of that trial. A similar technique has been successfully used in previous studies focused on the analysis of cerebral rhythms [Brancucci et al., 2005b; Kalcher and Pfurtscheller, 1995].
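A minimal sketch of this weighted removal of phase-locked activity is given below; the exact scaling of the correction factor is not specified in the text, so the least-squares projection of each trial onto the average evoked waveform is used here as an assumption.

import numpy as np

def remove_phase_locked(trials, evoked):
    # trials: (n_trials, n_samples) source raw activity; evoked: (n_samples,) average AEF
    cleaned = np.empty_like(trials)
    for i, trial in enumerate(trials):
        # per-trial correction factor from the zero-lag cross-correlation between
        # the trial and the average evoked waveform (least-squares scaling, assumed)
        w = np.dot(trial, evoked) / np.dot(evoked, evoked)
        cleaned[i] = trial - w * evoked       # weighted removal of the evoked activity
    return cleaned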

Spectral coherence is a normalized measure of the coupling between two signals at any given frequency [Halliday et al., 1995; Rappelsberger and Petsche, 1988] and can be used to study the functional coupling between cortical areas [Babiloni et al., 2006; Brancucci et al., 2005b]. The coherence values were calculated for each frequency bin by:

Coh_{xy}(\lambda) = \frac{|f_{xy}(\lambda)|^2}{f_{xx}(\lambda)\, f_{yy}(\lambda)}

which is the extension of Pearson's correlation coefficient R to complex number pairs. In this equation, f denotes the spectral estimate of the two source raw signals x and y for a given frequency bin (λ). The numerator contains the cross-spectrum for x and y (f_xy), while the denominator contains the respective auto-spectra for x (f_xx) and y (f_yy). For each frequency bin (λ), the coherence value (Coh_xy) is obtained by squaring the magnitude of the complex correlation coefficient R. This procedure returns a real number between 0 (no coherence) and 1 (maximal coherence). For each subject and each source pair, a statistical threshold level for coherence was computed according to Halliday et al. [1995], taking into account the number of valid single trials used as input for the analysis of spectral coherence. Of note, for each subject and each source pair, the coherence values were all above the statistical threshold set at P < 0.05.

As mentioned above, spectral coherence was computed between the time series of the activity arising from the ECDs located in the left and right PAC and in Wernicke's area. The between-ECD coherence was calculated for the "baseline" period (from −1,000 ms to zero time, defined as the auditory stimulus onset) as well as for the "event" period (from zero time to +1,000 ms). The computation of source coherence from data segments of 1,000 ms yielded a frequency resolution of 1 Hz. The frequency bands of interest were alpha 1, alpha 2, and alpha 3, which were assessed individually on the basis of the individual alpha frequency peak (IAF). These alpha sub-bands were determined according to a standard procedure based on the peak of the individual alpha frequency in the power density spectrum [IAF; Klimesch, 1996, 1999; Klimesch et al., 1996]. With respect to the IAF, the alpha sub-bands were defined as follows: (i) alpha 1 from IAF−4 Hz to IAF−2 Hz, (ii) alpha 2 from IAF−2 Hz to IAF, and (iii) alpha 3 from IAF to IAF+2 Hz. Of note, the mean (± standard error) IAF was 10.1 ± 0.4 Hz.
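The following Python sketch illustrates the coherence estimator defined above, computed across single trials from 1 s segments (1 Hz resolution), together with a coherence threshold in the form commonly attributed to Halliday et al. [1995] and the IAF-anchored alpha sub-bands; the plain FFT-based spectral estimate without tapering is a simplifying assumption.

import numpy as np

def coherence(x_trials, y_trials):
    # x_trials, y_trials: (n_trials, n_samples) source raw activities (1 s segments)
    X = np.fft.rfft(x_trials, axis=1)
    Y = np.fft.rfft(y_trials, axis=1)
    fxy = np.mean(X * np.conj(Y), axis=0)     # cross-spectrum, averaged over trials
    fxx = np.mean(np.abs(X) ** 2, axis=0)     # auto-spectrum of x
    fyy = np.mean(np.abs(Y) ** 2, axis=0)     # auto-spectrum of y
    return np.abs(fxy) ** 2 / (fxx * fyy)     # Coh_xy(lambda), between 0 and 1

def coherence_threshold(n_trials, p=0.05):
    # confidence level for zero coherence estimated from n_trials segments;
    # with about 70-80 trials this gives values close to 0.04
    return 1.0 - p ** (1.0 / (n_trials - 1))

def alpha_sub_bands(iaf):
    # individual alpha sub-bands anchored to the IAF (all values in Hz)
    return {"alpha1": (iaf - 4, iaf - 2),
            "alpha2": (iaf - 2, iaf),
            "alpha3": (iaf, iaf + 2)}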

Statistical Analysis

For the statistical analysis, the maximal event-related MEG source coherence (ErCoh) within each alpha sub-band was used as a dependent variable. ErCoh is the mean difference between coherence in the event and baseline periods. It should be stressed that the magnitude of ErCoh is usually smaller than the absolute coherence values. However, it has the advantage of taking into account the inter-subject variability of baseline coherence.
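A minimal sketch of the ErCoh measure follows: given event-period and baseline-period coherence spectra (for instance from the coherence sketch above), the maximal difference within an alpha sub-band is returned; the variable names are illustrative.

import numpy as np

def max_ercoh(coh_event, coh_baseline, freqs, band):
    # coh_event, coh_baseline: coherence spectra (1 Hz resolution); band: (f_lo, f_hi) in Hz
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.max(coh_event[sel] - coh_baseline[sel]))

# e.g. for alpha 3 with an IAF of 10 Hz: band = (10.0, 12.0)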

Statistical evaluation of the data was done by means of three 2 × 2 repeated-measures analyses of variance (ANOVA), one for each alpha sub-band (alpha 1, alpha 2, alpha 3). Tukey's post-hoc comparisons were applied and Mauchly's test evaluated the sphericity assumption. Correction of the degrees of freedom was made by the Greenhouse-Geisser procedure. The ANOVA factors were (a) ECD pair, with two levels, namely coherence between the ECDs in the left and right PAC and coherence between the left PAC and Wernicke's area, and (b) spectral overlap of the dichotic stimuli, with two levels, namely high spectral overlap (true dichotic condition) and low spectral overlap (sham dichotic condition). Of note, the level "high spectral overlap" contained averaged data from both dichotic stimuli, composed of /da/ at the left ear + /ba/ at the right ear and /ba/ at the left ear + /da/ at the right ear. Similarly, the level "low spectral overlap" contained averaged data from both stimuli, composed of /da/ at the left ear + /ka/ at the right ear and /ka/ at the left ear + /da/ at the right ear.
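For illustration only, the 2 × 2 repeated-measures design could be run with the AnovaRM class of the statsmodels Python package on a long-format table of maximal ErCoh values; the column names are assumptions, and the sphericity check, Greenhouse-Geisser correction, and Tukey post-hoc tests are not shown.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

def anova_alpha_band(df: pd.DataFrame):
    # df columns (assumed): 'subject', 'ecd_pair' ('LR_PAC' or 'LPAC_Wernicke'),
    # 'overlap' ('high' or 'low'), 'ercoh' (maximal ErCoh in the sub-band)
    return AnovaRM(df, depvar="ercoh", subject="subject",
                   within=["ecd_pair", "overlap"]).fit()

# usage: print(anova_alpha_band(alpha3_table))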

RESULTS

Behavioral Test

All subjects showed a right ear advantage in the verbal DL task. The laterality index (LI) was computed as follows: LI = (R − L)/(R + L) × 100, where R is the number of correct reports for the right ear and L the number of correct reports for the left ear. The mean (± standard error) laterality index was 18.6 ± 2.9, with a range spanning from 11.1 to 30.0. The ANOVA indicated a statistically significant effect in favor of the right ear (F = 37.58, P < 0.001). This result indicates that the subjects preferentially perceived the syllable presented to the right ear which, under DL conditions, sends its input mainly to the left hemisphere [Brancucci et al., 2004].
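For illustration, the laterality index defined above can be computed as follows (the example counts are hypothetical):

def laterality_index(right_correct, left_correct):
    # LI = (R - L) / (R + L) * 100
    return (right_correct - left_correct) / (right_correct + left_correct) * 100.0

# e.g. 35 right-ear and 24 left-ear correct reports give an LI of about 18.6
print(laterality_index(35, 24))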

Control Analysis on the Behavioral Test: Dichotic "True" vs. "Sham" Laterality Effects

With the aim of checking whether dichotic pairs composed of stimuli having high spectral overlap yielded stronger laterality effects than dichotic pairs having scarce spectral overlap, we analyzed the behavioral results by dividing the trials into two categories:


"true" dichotic stimuli (those composed of CV-syllables having high spectral overlap, i.e. /ba/-/da/, /ba/-/pa/, /ba/-/ta/, /da/-/pa/, /da/-/ta/, /ka/-/ga/, /pa/-/ta/, and vice versa) and "sham" dichotic stimuli (those composed of CV-syllables having low spectral overlap, i.e. /ba/-/ka/, /ba/-/ga/, /da/-/ga/, /da/-/ka/, /ga/-/pa/, /ga/-/ta/, /ka/-/ta/, /ka/-/pa/, and vice versa). The laterality index for "true" dichotic stimuli was 26.1 ± 3.7, whereas the laterality index for "sham" dichotic stimuli was 12.3 ± 3.8. A one-way ANOVA with the laterality index as a dependent variable showed that the laterality effect (right ear advantage) was stronger for "true" than for "sham" dichotic stimuli (F = 7.54, P = 0.03).

Coherence Spectra

Figure 3 illustrates the baseline (upper part of the figure) and event (bottom) mean coherence spectra around the alpha frequency range for both MEG source pairs (left ↔ right PAC and left PAC ↔ Wernicke's area) and for both the dichotic competing (/da/-/ba/) and dichotic noncompeting (/da/-/ka/) CV-syllables, along with the statistical threshold of coherence [Halliday et al., 1995]. The mean (± standard error) statistical threshold was 0.038 ± 0.002 for the source pair left-right PAC and 0.037 ± 0.002 for the source pair left PAC-Wernicke's area. Absolute coherence values were relatively low in magnitude, especially for left ↔ right PAC, which are two distant sources [Thatcher et al., 1986]. However, all band values were above the corresponding statistical thresholds (P < 0.05) in both the baseline and event periods in each subject. This was true for all alpha frequency bands.

Comparing event with baseline coherence spectra, it can be observed that, especially at the higher alpha bands (alpha 2 and alpha 3), MEG source coherence between the bilateral PAC decreased when competing compared with noncompeting CV-syllables were presented. Conversely, MEG source coherence within the left hemisphere, between the left PAC and Wernicke's area, increased during dichotic listening of competing compared with noncompeting CV-syllables.

Figure 3. Across-subjects mean MEG coherence spectra observed during the delivery of competing (/da/-/ba/) and noncompeting (/da/-/ka/) dichotic CV-syllables. Top: baseline coherence spectra (last prestimulus second). Bottom: event coherence spectra (first poststimulus second) with statistical thresholds. Left: interhemispheric coherence spectra, between sources located in the left and right PAC. Right: intrahemispheric coherence spectra, between sources located in the left PAC and in Wernicke's area. Note that the spectral bands, reported below the horizontal axis, are not fixed but vary according to the individual alpha frequency.

TABLE I. Maximal ErCoh values and standard errors (in brackets) in the three alpha sub-bands (alpha 1, alpha 2, and alpha 3) between the two ECD pairs (left PAC ↔ right PAC and left PAC ↔ Wernicke's area) for competing (true dichotic) and non-competing (sham dichotic) syllables

                                 Left PAC-Right PAC    Left PAC-Wernicke
Alpha 1
  Competing (/da/-/ba/)          0.027 (0.008)         0.061 (0.01)
  Non-competing (/da/-/ka/)      0.024 (0.010)         0.042 (0.017)
Alpha 2
  Competing (/da/-/ba/)          0.029 (0.009)         0.093 (0.025)
  Non-competing (/da/-/ka/)      0.038 (0.007)         0.054 (0.018)
Alpha 3
  Competing (/da/-/ba/)          0.023 (0.006)         0.068 (0.017)
  Non-competing (/da/-/ka/)      0.050 (0.016)         0.028 (0.018)


Table I reports the mean ErCoh values for the three alpha sub-bands (alpha 1, alpha 2, and alpha 3) between the left and right PAC and between the left PAC and Wernicke's area in the two experimental conditions (competing and noncompeting CV-syllables).

Statistical Results

The two-way repeated-measures ANOVA of the ErCoh values, with ECD pair (left ↔ right PAC, left PAC ↔ Wernicke's area) and spectral overlap (high, /da/+/ba/; low, /da/+/ka/) as factors, yielded no statistically significant effects in the two low alpha sub-bands (i.e. alpha 1 and alpha 2) but a statistically significant interaction effect (F = 9.88, P = 0.013) in the high alpha sub-band (i.e. the alpha 3 sub-band). This indicated (Fig. 4) an inverse pattern of functional coupling of the high-band alpha rhythms: whereas MEG source ErCoh between the left and right PAC decreased for dichotic stimuli having high spectral overlap compared with sham DL (based on stimuli with poor spectral overlap), MEG source coherence between the left PAC and Wernicke's area increased. Tukey's post-hoc comparisons indicated that the increase of MEG source ErCoh (left PAC ↔ Wernicke's area) during DL of verbal stimuli having high spectral overlap was statistically significant (P = 0.03).

Control Analysis for Possible Attentional Biases

With the aim of checking whether in the present experiment spectral coherence was affected by uncontrolled variations of the attentional level during the MEG recordings, we analyzed the alpha power (in the whole alpha band, defined on the basis of the IAF) of all parietal MEG channels in the 1 s period immediately preceding the auditory stimulation. Parietal alpha power is considered to be a reliable indicator of the attentional level [Brancucci et al., 2005b; Klimesch et al., 1998]. The normalized mean (± standard error) alpha power preceding "true" dichotic stimuli was 0.35 ± 0.03, whereas the alpha power preceding "sham" dichotic stimuli was 0.34 ± 0.02. A paired t-test showed that the attentional level did not significantly differ (P = 0.30) between the two conditions.
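This control analysis can be illustrated with the following sketch, which computes normalized pre-stimulus alpha power over a set of parietal channels and compares the two conditions with a paired t-test; the channel selection and the normalization by total power are assumptions made for illustration.

import numpy as np
from scipy.stats import ttest_rel

def prestim_alpha_power(epochs, fs=1000, iaf=10.0):
    # epochs: (n_trials, n_parietal_channels, n_samples) for the 1 s pre-stimulus window
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    band = (freqs >= iaf - 4) & (freqs <= iaf + 2)      # whole alpha band around the IAF
    alpha = psd[..., band].sum(axis=-1)
    return float((alpha / psd.sum(axis=-1)).mean())     # normalized alpha power

# usage (one value per subject and condition):
# t, p = ttest_rel(alpha_true_per_subject, alpha_sham_per_subject)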

DISCUSSION

The present study was designed to test whether functional coupling of alpha rhythms decreases during DL of verbal sounds. To this purpose, we performed coherence analysis of MEG data recorded during the presentation of dichotic speech stimuli in subjects showing a right ear advantage in a speech DL task. The three principal areas involved in the perception of speech stimuli were taken into account, that is, the left and right PAC and Wernicke's area. An increase of functional coupling in the high-band alpha range was observed within the left hemisphere, specifically between the left PAC and Wernicke's area, i.e. the two main brain areas of the left hemisphere implicated in the perception of speech [Price, 2000].

Moreover, an inverse pattern (i.e. a decrease) of functional coupling was observed between the left and right PAC. No significant modulation of functional coupling in the low alpha band was observed. Of note, the attentional level showed negligible variations between "true" and "sham" dichotic stimuli, as reflected by the analysis of alpha power during the MEG recordings. Alpha power is a reliable indicator of attentional level [Brancucci et al., 2005a,b; Klimesch et al., 1998]. No significant modulation of low-band alpha rhythms was observed.

The present observation of a functional coupling modulation of high- but not low-band alpha rhythms can be interpreted by considering that the alpha band of the human encephalogram reflects quite different cognitive processes. It has been shown that, depending on the specific frequency, the modulation of brain rhythms within the alpha band is related to different types of cognitive activity. Unspecific alertness processes (i.e. global attention) are reflected by changes in the low alpha band, whereas specific sensory processes are reflected by changes in the high alpha band [Klimesch et al., 1998]. This interpretation is supported by the findings of a recent study, which demonstrated that functional coupling of alpha rhythms can reflect attentional processes. In particular, focused attention to one ear increased high alpha band synchronization likelihood in the ipsilateral frontal region [Gootjes et al., 2006].

Figure 4. Across-subjects mean (± standard error) event-related coherence (ErCoh) values in the alpha 3 frequency band. Statistical results showed a significant interaction (P = 0.013) between the two factors (ECD pair and spectral overlap of CV-syllables), indicating that, compared with sham DL, during true DL (/da/-/ba/) there was an inverse pattern of functional coupling: whereas interhemispheric high-band alpha (alpha 3) ErCoh between the left and right PAC was reduced, intrahemispheric high-band alpha ErCoh within the left hemisphere (left PAC ↔ Wernicke's area) was increased.


In the light of these considerations, the results of the present study, showing phase synchronization effects in the high alpha band, can be interpreted by considering that the present experimental design typically involved sensory-specific rather than global attentional processes.

The present results extend previous DL evidence [Brancucci et al., 2004; Hugdahl et al., 1999; Mathiak et al., 2000; Milner et al., 1968; Pollmann et al., 2002; Sparks and Geschwind, 1968; Springer and Gazzaniga, 1975] by showing the modulation of the functional coupling of auditory cortical areas implicated in speech perception. The investigation of the functional coupling of cortical rhythms complements the evaluation of the amplitude and latency of EEG or MEG activity evoked by dichotic stimuli. Furthermore, it confirms and complements previous behavioral evidence demonstrating that small changes in the degree of competition (i.e. spectral overlap) between the dichotic tones significantly affect the magnitude of the perceptual asymmetry. We observed here that the right ear advantage was stronger for "true" than for "sham" dichotic stimuli, the "true" dichotic stimuli having higher spectral overlap than the "sham" ones. In accordance with previous evidence on complex tones [Sidtis, 1981, 1988] and linguistic sounds [Springer et al., 1978], it can be stated that laterality effects are related to the spectral composition of the stimuli constituting the dichotic pair: the higher the spectral overlap in the dichotic pair, the stronger the ear advantage. Thus, the modulation of neuromagnetic functional coupling disclosed in the present study could have been blurred, as suggested by the behavioral results, which showed that even "sham" dichotic stimuli yield (minor) laterality effects.

In the present study, dichotic stimuli with high spectral overlap were obtained by the simultaneous presentation of two CV-syllables with voiced consonants (/da/ and /ba/), whereas dichotic stimuli with low spectral overlap were obtained by the simultaneous presentation of a CV-syllable with a voiced consonant (/da/) and a CV-syllable with a voiceless consonant (/ka/). In this context, the present results are in accord with the outcome of a recent fMRI report, which showed that the left auditory cortex (in particular the left medial planum temporale) is highly sensitive to the phonological differences between voiceless and voiced consonants [Jancke et al., 2002].

In particular, the present findings confirm and extend evidence from a previous EEG study on the functional coupling of cortical rhythms between bilateral auditory cortices during DL of harmonic complex nonverbal tones [Brancucci et al., 2005b]. In that study, it was demonstrated that the functional coupling of EEG rhythms at left and right scalp sites roughly overlying the auditory cortices was significantly lower during DL of competing (i.e. having similar fundamental frequencies and high spectral overlap) than of noncompeting (i.e. having dissimilar fundamental frequencies and low spectral overlap) tone pairs. Moreover, it was suggested that this decrease of functional coupling could be a possible neural substrate for the lateralization of auditory stimuli during DL.

In the present study, the methodological advancements allowed us to observe the modulation of functional coupling at the level of the cortical areas themselves and to specify the individual sub-band of alpha rhythms sensitive to that modulation. Moreover, the present study extends this evidence to speech stimuli and to a network of cortical areas devoted to speech perception.

It should be stressed that the absolute coherence values were somewhat low, especially as concerns the interhemispheric source pair (left ↔ right PAC). This might be ascribed to the large distance between the two sources of the MEG signals and to the preliminary removal of the auditory evoked magnetic fields (i.e. the neural activity phase-locked to the auditory stimulus) before the computation of coherence, which was done in order to investigate brain rhythms not phase-locked to the stimulus [Pfurtscheller and Lopes da Silva, 1999]. Previous findings have shown that coherence values are inversely proportional to the distance between the recorded sites [Thatcher et al., 1986]. Furthermore, the preliminary removal of evoked activity is in line with recent guidelines on the study of brain rhythmicity [Pfurtscheller and Lopes da Silva, 1999] and yields absolute and event-related coherence values lower than those obtained by computing coherence from event-related potentials [Kaiser et al., 2000; Yamasaki et al., 2005]. Finally, it should also be stressed that the individual coherence values were statistically significant (P < 0.05). Indeed, the coherence values of each subject were higher than the statistical threshold computed with the procedure suggested by Halliday et al. [1995]. Obviously, event-related coherence values are usually even smaller than absolute coherence values, as they represent a difference between coherence levels in two different time periods.

tural theory’’ proposed originally by Kimura [1967]. Onthe basis of neuropsychological results, it was suggestedthat the contralateral neural pathway suppresses the ipsi-lateral one during DL. In line with this theory, commisuro-tomized patients had no difficulty reporting words or con-sonants-vowel syllables presented monaurally [Milneret al., 1968; Sparks and Geschwind, 1968; Springer andGazzaniga, 1975]. In contrast, they failed to report itemspresented to left ear when the same stimuli were pre-sented dichotically. The lesion of the posterior part of thecorpus callosum (splenium) prevented dichotic sounds toleft ear from reaching the left hemisphere via the indirectcontralateral route [Pollmann et al., 2002; Westerhausenet al., 2006]. This route going through the splenium wouldpermit normal subjects to hear dichotic items in both ears,even if the ear contralateral to the dominant hemisphere ispreferred. The present results on MEG source coherenceextend the aforementioned ‘‘structural theory’’, in that thesuggested inhibition of the ipsilateral pathway may beassociated with a drop of the functional coordinationbetween the two auditory cortical areas at the dominanthuman encephalographic rhythm, in particular in the highalpha band.


CONCLUSION

The present study focused on the functional coupling of cortical alpha rhythms within a neural network of cortical areas comprising the bilateral auditory cortices and Wernicke's area during DL of speech sounds (CV-syllables). Results showed that, during DL of speech sounds, the functional coupling of high-band alpha rhythms between the left and right PAC decreased, whereas it increased within the auditory speech areas of the left hemisphere. The decrease of inter-hemispheric functional coupling of high-band alpha rhythms is possibly due to the fact that, during DL of speech sounds, the predominant information processing is performed by the specialized left hemisphere and the level of coordination between the two hemispheres declines. Instead, the increase of intra-hemispheric functional coupling of high-band alpha rhythms within the left hemisphere would underlie the stimulus-specific processing of the syllable presented to the right ear, which arrives at the left PAC with reduced "dichotic interference" from the other syllable, presented to the left ear. These results suggest that functional coupling of alpha rhythms might constitute a neural substrate for the lateralization of auditory stimuli during DL, at least as regards the auditory cortices. Indeed, DL involves a distributed network of areas comprising, besides the auditory cortices, the prefrontal cortex, the orbitofrontal and hippocampal paralimbic belts, and the splenium of the corpus callosum. Specifically, it has recently been shown that frontal areas play an important role in DL [Jancke et al., 2001, 2002, 2003; Karino et al., 2006; Lipschutz et al., 2002; Thomsen et al., 2004]. Future studies should investigate the functional coupling among these brain areas during DL, possibly by using localization methods based on extended source models, as well as the role of other spectral frequencies and the influence of attentional constraints.

REFERENCES

Ackermann H, Hertrich I, Mathiak K, Lutzenberger W (2001): Contralaterality of cortical auditory processing at the level of the M50/M100 complex and the mismatch field: A whole-head magnetoencephalography study. Neuroreport 12:1683–1687.

Alho K, Vorobyev VA, Medvedev SV, Pakhomov SV, Roudas MS, Tervaniemi M, van Zuijen T, Naatanen R (2003): Hemispheric lateralization of cerebral blood-flow changes during selective listening to dichotically presented continuous speech. Brain Res Cogn Brain Res 17:201–211.

Arnault P, Roger M (1990): Ventral temporal cortex in the rat: Connections of secondary auditory areas Te2 and Te3. J Comp Neurol 302:110–123.

Babiloni C, Brancucci A, Vecchio F, Arendt-Nielsen L, Chen ACN, Rossini PM (2006): Anticipation of somatosensory and motor events increases centro-parietal functional coupling: An EEG coherence study. Clin Neurophysiol 117:1000–1008.

Bilecen D, Scheffler K, Schmid N, Tschopp K, Seelig J (1998): Tonotopic organization of the human auditory cortex as detected by BOLD-FMRI. Hear Res 126:19–27.

Boucher R, Bryden MP (1997): Laterality effects in the processing of melody and timbre. Neuropsychologia 35:1467–1473.

Bozhko GT, Slepchenko AF (1988): Functional organization of the callosal connections of the cat auditory cortex. Neurosci Behav Physiol 18:323–330.

Brancucci A, San Martini P (1999): Laterality in the perception of temporal cues of musical timbre. Neuropsychologia 37:1445–1451.

Brancucci A, San Martini P (2003): Hemispheric asymmetries in the perception of rapid (timbral) and slow (nontimbral) amplitude fluctuations of complex tones. Neuropsychology 17:451–457.

Brancucci A, Babiloni C, Babiloni F, Galderisi S, Mucci A, Tecchio F, Zappasodi F, Pizzella V, Romani GL, Rossini PM (2004): Inhibition of auditory cortical responses to ipsilateral stimuli during dichotic listening: Evidence from magnetoencephalography. Eur J Neurosci 19:2329–2336.

Brancucci A, Babiloni C, Rossini PM, Romani GL (2005a): Right hemisphere specialization for intensity discrimination of musical and speech sounds. Neuropsychologia 43:1916–1923.

Brancucci A, Babiloni C, Vecchio F, Galderisi S, Mucci A, Tecchio F, Romani GL, Rossini PM (2005b): Decrease of functional coupling between left and right auditory cortices during dichotic listening: An electroencephalography study. Neuroscience 136:323–332.

Bryden MP (1988): An overview of the dichotic listening procedure and its relation to cerebral organization. In: Hugdahl K, editor. Handbook of Dichotic Listening: Theory, Methods and Research. New York: Wiley. pp 1–44.

Code RA, Winer JA (1986): Columnar organization and reciprocity of commissural connections in cat primary auditory cortex (AI). Hear Res 23:205–222.

Della Penna S, Del Gratta C, Granata C, Pasquarelli A, Pizzella V, Rossi R, Russo M, Torquati K, Erne SN (2000): Biomagnetic systems for clinical use. Philos Mag B 80:937–948.

Della Penna S, Brancucci A, Babiloni C, Franciotti R, Pizzella V, Rossi D, Torquati K, Rossini PM, Romani GL (2006): Lateralization of dichotic speech stimuli is based on specific auditory pathway interactions: Neuromagnetic evidence. Cereb Cortex 2006 Dec 18 [Epub ahead of print] (in press).

Diamond IT, Jones EG, Powell TP (1968): Interhemispheric fiber connections of the auditory cortex of the cat. Brain Res 11:177–193.

Eichele T, Nordby H, Rimol LM, Hugdahl K (2005): Asymmetry of evoked potential latency to speech sounds predicts the ear advantage in dichotic listening. Brain Res Cogn Brain Res 24:405–412.

Ellfolk U, Karrasch M, Laine M, Pesonen M, Krause CM (2006): Event-related desynchronization/synchronization during an auditory-verbal working memory task in mild Parkinson's disease. Clin Neurophysiol 117:1737–1745.

Fingelkurts A, Fingelkurts A, Krause C, Kaplan A, Borisov S, Sams M (2003): Structural (operational) synchrony of EEG alpha activity during an auditory memory task. Neuroimage 20:529–542.

Formisano E, Kim DS, Di Salle F, van de Moortele PF, Ugurbil K, Goebel R (2003): Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron 40:859–869.

Gootjes L, Bouma A, Van Strien JW, Scheltens P, Stam CJ (2006): Attention modulates hemispheric differences in functional connectivity: Evidence from MEG recordings. Neuroimage 30:245–253.

Halliday DM, Rosenberg JR, Amjad AM, Breeze P, Conway BA, Farmer SF (1995): A framework for the analysis of mixed time series/point process data: Theory and application to the study of physiological tremor, single motor unit discharges and electromyograms. Prog Biophys Mol Biol 64:237–278.


Hari R (1990): Magnetic evoked fields of the human brain: Basic principles and applications. Electroencephalogr Clin Neurophysiol Suppl 41:3–12.

Hugdahl K (2000): Lateralization of cognitive processes in the brain. Acta Psychol (Amst) 105:211–235.

Hugdahl K, Bronnick K, Kyllingsbaek S, Law I, Gade A, Paulson OB (1999): Brain activation during dichotic presentations of consonant-vowel and musical instrument stimuli: A 15O-PET study. Neuropsychologia 37:431–440.

Hugdahl K, Law I, Kyllingsbaek S, Bronnick K, Gade A, Paulson OB (2000): Effects of attention on dichotic listening: An 15O-PET study. Hum Brain Mapp 10:87–97.

Hugdahl K, Bodner T, Weiss E, Benke T (2003a): Dichotic listening performance and frontal lobe function. Brain Res Cogn Brain Res 16:58–65.

Hugdahl K, Rund BR, Lund A, Asbjornsen A, Egeland J, Landro NI, Roness A, Stordal KI, Sundet K (2003b): Attentional and executive dysfunctions in schizophrenia and depression: Evidence from dichotic listening performance. Biol Psychiatry 53:609–616.

Jancke L, Shah NJ (2002): Does dichotic listening probe temporal lobe functions? Neurology 58:736–743.

Jancke L, Buchanan TW, Lutz K, Shah NJ (2001): Focused and nonfocused attention in verbal and emotional dichotic listening: An FMRI study. Brain Lang 78:349–363.

Jancke L, Wustenberg T, Scheich H, Heinze HJ (2002): Phonetic perception and the temporal cortex. Neuroimage 15:733–746.

Jancke L, Specht K, Shah JN, Hugdahl K (2003): Focused attention in a simple dichotic listening task: An fMRI experiment. Brain Res Cogn Brain Res 16:257–266.

Kaiser J, Lutzenberger W, Preissl H, Ackermann H, Birbaumer N (2000): Right-hemisphere dominance for the processing of sound-source lateralization. J Neurosci 20:6631–6639.

Kalcher J, Pfurtscheller G (1995): Discrimination between phase-locked and non-phase-locked event-related EEG activity. Electroencephalogr Clin Neurophysiol 94:381–384.

Kallman HJ, Corballis MC (1975): Ear asymmetry in reaction time to musical sounds. Percept Psychophys 17:368–370.

Karino S, Yumoto M, Itoh K, Uno A, Yamakawa K, Sekimoto S, Kaga K (2006): Neuromagnetic responses to binaural beat in human cerebral cortex. J Neurophysiol 96:1927–1938.

Kimura D (1961): Cerebral dominance and the perception of verbal stimuli. Can J Psychol 15:166–171.

Kimura D (1967): Functional asymmetry of the brain in dichotic listening. Cortex 3:163–168.

Kinsbourne M (1970): The cerebral basis of lateral asymmetries in attention. Acta Psychol (Amst) 33:193–201.

Klimesch W (1996): Memory processes, brain oscillations and EEG synchronization. Int J Psychophysiol 24:61–100.

Klimesch W (1999): EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Res Rev 29:169–195.

Klimesch W, Schimke H, Schweiger J (1994): Episodic and semantic memory: An analysis in the EEG theta and alpha band. Electroencephalogr Clin Neurophysiol 91:428–441.

Klimesch W, Doppelmayr M, Schimke H, Pachinger T (1996): Alpha frequency, reaction time, and the speed of processing information. J Clin Neurophysiol 13:511–518.

Klimesch W, Doppelmayr M, Russegger H, Pachinger T, Schwaiger J (1998): Induced alpha band power changes in the human EEG and attention. Neurosci Lett 244:73–76.

Krause CM, Gronholm P, Leinonen A, Laine M, Sakkinen AL, Soderholm C (2006): Modality matters: The effects of stimulus modality on the 4- to 30-Hz brain electric oscillations during a lexical decision task. Brain Res 1110:182–192.

Lehtela L, Salmelin R, Hari R (1997): Evidence for reactive magnetic 10-Hz rhythm in the human auditory cortex. Neurosci Lett 222:111–114.

Lipschutz B, Kolinsky R, Damhaut P, Wikler D, Goldman S (2002): Attention-dependent changes of activation and connectivity in dichotic listening. Neuroimage 17:643–656.

Lopes da Silva FH, Vos JE, Mooibroek J, Van Rotterdam A (1980): Relative contributions of intracortical and thalamo-cortical processes in the generation of alpha rhythms, revealed by partial coherence analysis. Electroencephalogr Clin Neurophysiol 50:449–456.

Makela JP (1988): Contra- and ipsilateral auditory stimuli produce different activation patterns at the human auditory cortex. A neuromagnetic study. Pflugers Arch 412:12–16.

Mathiak K, Hertrich I, Lutzenberger W, Ackermann H (2000): Encoding of temporal speech features (formant transients) during binaural and dichotic stimulus application: A whole-head magnetencephalography study. Brain Res Cogn Brain Res 10:125–131.

Milner B, Taylor L, Sperry RW (1968): Lateralized suppression of dichotically presented digits after commissural section in man. Science 161:184–186.

Nunez P (1995): Neocortical Dynamics and Human EEG Rhythms. New York: Oxford University Press.

O'Leary DS, Andreason NC, Hurtig RR, Hichwa RD, Watkins GL, Ponto LL, Rogers M, Kirchner PT (1996): A positron emission tomography study of binaurally and dichotically presented stimuli: Effects of level of language and directed attention. Brain Lang 53:20–39.

Pandya DN, Hallett M, Mukherjee SK (1969): Intra- and interhemispheric connections of the neocortical auditory system in the rhesus monkey. Brain Res 14:49–65.

Pesonen M, Bjornberg CH, Hamalainen H, Krause CM (2006): Brain oscillatory 1–30 Hz EEG ERD/ERS responses during the different stages of an auditory memory search task. Neurosci Lett 399:45–50.

Pfurtscheller G, Lopes da Silva FH (1999): Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin Neurophysiol 110:1842–1857.

Plessen KJ, Lundervold A, Gruner R, Hammar A, Lundervold A, Peterson BS, Hugdahl K (2007): Functional brain asymmetry, attentional modulation, and interhemispheric transfer in boys with Tourette syndrome. Neuropsychologia 45:767–774.

Pollmann S, Maertens M, von Cramon DY, Lepsien J, Hugdahl K (2002): Dichotic listening in patients with splenial and nonsplenial callosal lesions. Neuropsychology 16:56–64.

Price CJ (2000): The anatomy of language: Contributions from functional neuroimaging. J Anat 197 (Part 3):335–359.

Rappelsberger P, Petsche H (1988): Probability mapping: Power and coherence analyses of cognitive processes. Brain Topogr 1:46–54.

Romani GL, Williamson SJ, Kaufman L (1982): Tonotopic organization of the human auditory cortex. Science 216:1339–1340.

Sidtis JJ (1981): The complex tone test: Implications for the assessment of auditory laterality effects. Neuropsychologia 19:103–111.

Sidtis JJ (1988): Dichotic listening after commissurotomy. In: Hugdahl K, editor. Handbook of Dichotic Listening: Theory, Methods and Research. New York: Wiley. pp 161–184.


Sparks R, Geschwind N (1968): Dichotic listening after section of neocortical commissures. Cortex 4:3–16.

Springer SP, Gazzaniga MS (1975): Dichotic testing of partial and complete split-brain subjects. Neuropsychologia 13:341–346.

Springer SP, Sidtis J, Wilson D, Gazzaniga MS (1978): Left ear performance in dichotic listening following commissurotomy. Neuropsychologia 16:305–312.

Studdert-Kennedy M, Shankweiler D (1970): Hemispheric specialization for speech perception. J Acoust Soc Am 48:579–594.

Tervaniemi M, Hugdahl K (2003): Lateralization of auditory-cortex functions. Brain Res Rev 43:231–246.

Thatcher RW, Krause PJ, Hrybyk M (1986): Cortico-cortical associations and EEG coherence: A two-compartmental model. Electroencephalogr Clin Neurophysiol 64:123–143.

Thomsen T, Rimol LM, Ersland L, Hugdahl K (2004): Dichotic listening reveals functional specificity in prefrontal cortex: An fMRI study. Neuroimage 21:211–218.

Westerhausen R, Woerner W, Kreuder F, Schweiger E, Hugdahl K, Wittling W (2006): The role of the corpus callosum in dichotic listening: A combined morphological and diffusion tensor imaging study. Neuropsychology 20:272–279.

Yamasaki T, Goto Y, Taniwaki T, Kinukawa N, Kira J, Tobimatsu S (2005): Left hemisphere specialization for rapid temporal processing: A study with auditory 40 Hz steady-state responses. Clin Neurophysiol 116:393–400.
