
Brain and Cognition 63 (2007) 31–41

www.elsevier.com/locate/b&c

Valence specific laterality effects in prosody: Expectancy account and the effects of morphed prosody and stimulus lead

Paul Rodway *, Astrid Schepman

Centre for Psychology, Department of Social Sciences and Humanities, University of Bradford, Richmond Road, Bradford, BD7 1DP, UK

Accepted 13 July 2006. Available online 6 September 2006.

Abstract

The majority of studies have demonstrated a right hemisphere (RH) advantage for the perception of emotions. Other studies have found that the involvement of each hemisphere is valence specific, with the RH better at perceiving negative emotions and the LH better at perceiving positive emotions [Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the 2 cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609–613]. To account for valence laterality effects in emotion perception we propose an 'expectancy' hypothesis, which suggests that valence effects are obtained when the top-down expectancy to perceive an emotion outweighs the strength of bottom-up perceptual information enabling the discrimination of an emotion. A dichotic listening task was used to examine alternative explanations of valence effects in emotion perception. Emotional sentences (spoken in a happy or sad tone of voice), and morphed-happy and morphed-sad sentences (which blended a neutral version of the sentence with the pitch of the emotion sentence), were paired with neutral versions of each sentence and presented dichotically. A control condition was also used, consisting of two identical neutral sentences presented dichotically, with one channel arriving before the other by 7 ms. In support of the RH hypothesis there was a left ear advantage for the perception of sad and happy emotional sentences. However, morphed sentences showed no ear advantage, suggesting that the RH is specialised for the perception of genuine emotions and that a laterality effect may be a useful tool for the detection of fake emotion. Finally, for the control condition we obtained an interaction between the expected emotion and the effect of ear lead. Participants tended to select the ear that received the sentence first when they expected a 'sad' sentence, but not when they expected a 'happy' sentence. The results are discussed in relation to the different theoretical explanations of valence laterality effects in emotion perception.

© 2006 Elsevier Inc. All rights reserved.

Keywords: Top-down; Endogenous; Vocal affect; Extreme male brain hypothesis; F0; Synthetic; Resynthesis; Acoustic; Affective prosody

1. Introduction

The emotional state of a person can often be perceived by their tone of voice, or facial expression, and certain regions of the brain seem to be specialised for the perception of such emotions. The aim of the present study was to examine which hemisphere of the brain is preferentially involved in the perception of emotional prosody.

* Corresponding author. E-mail addresses: [email protected] (P. Rodway), [email protected] (A. Schepman).

0278-2626/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.bandc.2006.07.008

Considerable evidence now exists to suggest that the right hemisphere (RH) is more specialised than the left hemisphere (LH) for dealing with the perception of emotions (Christman & Hackworth, 1993). For example, many studies that have used the divided visual field technique, or dichotic listening, have consistently found the RH to have a greater role in the perception and interpretation of emotional expressions (Borod, Zgaljardic, Tabert, & Koff, 2001; Ley & Bryden, 1979; Mandal & Singh, 1990). Moreover, in patient studies, damage to the RH has been found to impair the recognition of facial emotion more than does damage to the LH (Adolphs, Damasio, Tranel, & Damasio, 1996). The view that the RH is more specialised for emotional processing, including the perception of emotions, is known as the RH hypothesis.

1.1. The valence hypothesis

The RH hypothesis, however, has not gone unchallenged, and an alternative view, known as the valence hypothesis, proposes that the involvement of each hemisphere is determined by the valence (e.g., positive or negative) of the emotion being processed (Ahern & Schwartz, 1979, 1985; Davidson, 1992; Reuter-Lorenz & Davidson, 1981; Tucker, 1981). It is suggested that the RH is specialised for negative emotions and the LH for positive emotions. The valence hypothesis takes two forms. One version makes no distinction between the various forms of emotional processing (emotional experience, expression, and perception) and suggests that the RH preferentially deals with negative emotions whereas the LH deals with positive emotions (Silberman & Weingartner, 1986). A second version of the valence hypothesis (Borod, 1993; Davidson, 1984), which we will refer to as the modified valence hypothesis, proposes that frontal regions of both hemispheres are involved in the expression and experience of emotions whereas posterior regions of the RH are involved in the perception of emotion (Borod, 1993; Davidson, 1993a, 1993b). Moreover, frontal regions of the LH are specialised for positive emotions and frontal regions of the RH are specialised for negative emotions (Davidson, 1993a).

The modified valence hypothesis predicts that valence specific effects will only emerge in emotion perception tasks when a task (either directly or indirectly) recruits frontal areas involved in the experience (or expression) of emotions. This may occur in tasks which are perceptual but which, by their nature, become experiential. For example, because emotions can be contagious, with emotionally expressive faces eliciting an experience of that emotion in the perceiver (Wild, Erb, & Bartels, 2001), it is possible for valence specific effects to emerge in tasks which involve the perceptual identification of emotions (van Strien & van Beek, 2000). Moreover, it has been suggested that emotional reactions in response to faces may be one of the ways that facial expressions are decoded (Safer, 1981; Silberman & Weingartner, 1986; Adolphs, 2002; Wild et al., 2001), and it is possible that in difficult emotion perception tasks, participants rely on their own affective responses to a face to aid in making that discrimination, which then promotes valence specific laterality effects (Jansari, Tranel, & Adolphs, 2000; see Section 1.3). One of the aims of the present study was therefore to manipulate the discrimination difficulty of the emotions in order to examine whether it causes the emergence of valence specific laterality effects.

As noted, in contradiction of the valence hypothesis, the majority of evidence supports the RH hypothesis, with most studies obtaining a RH advantage for the perception of facial affect irrespective of the valence of the emotion conveyed (see Borod et al., 2001, for a review). However, the large body of evidence in favour of the RH hypothesis does not contradict or support the modified valence hypothesis. This is because emotional experience may not have been recruited in those perception studies. In addition, the modified valence hypothesis predicts that in most perception studies a RH advantage will emerge, because the RH is specialised for emotional perception, but that occasional valence specific effects will emerge when emotional experience has somehow been recruited by the task. This makes the modified valence hypothesis harder to test and falsify. Without a reliable measure of emotional experience it is often possible to use the recruitment of emotional experience (or lack of recruitment) as an explanation for the presence (or lack) of valence specific effects (e.g., Jansari et al., 2000; van Strien & van Beek, 2000).

1.2. Prosody and valence

While there have been a number of reports of valence effects for facial affect (Burton & Levy, 1989; Jansari et al., 2000; Reuter-Lorenz & Davidson, 1981; Rodway, Wright, & Hardie, 2003; van Strien & van Beek, 2000), a difficulty for the modified valence hypothesis is that valence specific effects seem to be restricted almost exclusively to facial affect rather than vocal emotion (see Borod et al., 2001). Bryden and MacRae's (1989) study, which provides the strongest evidence for valence effects in prosody, obtained a RH superiority for all emotion perception but found a larger RH advantage for sad stimuli than for happy stimuli. Similarly, Erhan, Borod, Tenke, and Bruder (1998) also obtained a RH advantage for prosodic emotion perception but detected a non-significant trend for a larger RH advantage for sad stimuli than for happy stimuli. However, most studies have failed to obtain valence effects with prosody (e.g., Bryden, Free, Gagne, & Groff, 1991; Stirling, Cavill, & Wilkinson, 2000), which could reflect the fact that prosody does not recruit emotional experience as effectively as facial affect, or that emotional prosody differs from facial emotion in some other important (and informative) way. An aim of the present study was to increase the likelihood of detecting valence specific laterality effects with prosody by introducing factors that have been proposed to cause valence specific laterality effects in emotion perception tasks.

Most studies examining lateralised prosodic emotion dichotically have used nonsense syllables or single words (e.g., only three of 11 studies reviewed by Borod et al., 2001, used sentences). Using (nonsense) words is assumed to reduce the linguistic content of the stimuli, which is considered an advantage, because linguistic content could introduce a left hemisphere bias into the lateralisation results. However, we feel that this method has important drawbacks. First, the range of prosodic cues that can be expressed on such short utterances is limited. Second, the perceived naturalness of emotions expressed on nonsense words may be limited, which may be one of the reasons why valence specific laterality effects have failed to emerge in prosody experiments. Therefore, in the present study sentences were used to examine whether the use of more natural prosody and of more suitable utterance domains plays a role in producing valence effects.

1.3. Discrimination difficulty

Jansari et al. (2000) suggested that valence specific effects might emerge when the experimental task requires a difficult and subtle discrimination of emotions. This could explain the findings of Jansari et al. (2000) and Rodway et al. (2003), who used facial stimuli which had been formed by morphing a neutral expression with an emotional expression by varying degrees, so that some faces expressed very faint emotions, which made the emotion discrimination difficult. In contrast, prosody studies have typically used easier discrimination tasks, and this might therefore have prevented the emergence of valence specific effects.

In order to make the present prosody task equivalent to the work of Jansari et al. (2000) and Rodway et al. (2003), which obtained valence specific effects, it was decided to morph the prosodic stimuli with the emotionally neutral sentences to produce morphed sentences which were closer to the neutral stimuli in their affective prosody. This was expected to increase the difficulty of the discrimination task (because the morphed stimuli shared more features with the neutral stimuli), potentially promoting the recruitment of experiential factors to aid in the discrimination, and therefore increasing the likelihood of detecting valence specific effects. An additional reason for introducing morphed prosody was to enable the examination of whether the discrimination of artificial prosody was lateralised in the same way as natural prosody. To our knowledge this is the first time that the lateralised processing of morphed prosodic stimuli has been examined, and the results may have implications for using laterality effects in the assessment of fake, or synthetic, vocal emotions.

1.4. Top-down expectancy effects

A further factor that might produce valence-specific effects in emotion perception, independently from the recruitment of emotional experience, is expectancy effects. Kinsbourne's (1970) hemispheric activation theory proposes that an expectancy for a certain stimulus activates the hemisphere that is most specialised for dealing with that stimulus, which then causes attention to be allocated to the hemifield contralateral to the activated hemisphere. If frontal regions are specialised according to emotional valence (Davidson, 1993a), then an expectancy for an emotion might asymmetrically activate the hemispheres, which then influences the perception of the emotion and leads to the emergence of valence effects.

Top-down expectancies, however, are often balanced by bottom-up information. As Hugdahl (2000) suggests, laterality effects can best be understood as the product of the interaction between bottom-up, stimulus-driven factors and top-down expectancies created by the instructions given to the participants. There is much evidence to support this view, with, for example, hemispheric advantages being increased, or reversed, depending on which ear the participants are instructed to attend to (Asbjørnsen & Hugdahl, 1995).

Recently, Rodway et al. (2003) obtained results that suggest an important role for top-down expectancies in producing valence specific laterality effects. In a control condition participants were presented with two identical faces (with a neutral emotion) along with an emotional label (e.g., happy, sad), and they were required to select the face they thought best represented the labelled emotion. The participants were not told that the faces were both identical and neutral. Interestingly, the participants chose the left side more often when they believed the emotion was negative and the right side more often when they believed the emotion was positive. Therefore there was a response bias in responding to two identical neutral faces.¹ Although the stimuli did not convey any emotion, the pattern of results was entirely consistent with the valence specific laterality effect, with faces on the left viewed negatively and faces on the right viewed positively. As this effect cannot have been due to the perception of actual emotional information, because there was no emotional information to perceive, it suggests that the results were caused by a top-down bias to perceive faces on the left as negative and those on the right as positive (e.g., Davidson et al., 1987; Natale et al., 1983). This bias may have been increased by presenting the emotion label prior to the arrival of the faces, which generated an expectation in the participants for a particular emotion.

Based on the results of Rodway et al., and those of other studies, it is possible to propose an expectancy explanation of valence specific laterality effects in perception tasks. In accord with Kinsbourne's theory, the response bias obtained by Rodway et al. may have been due to the emotional label creating an expectancy for the arrival of the facial emotion specified by the label, which, in accord with the valence specific hypothesis, caused the left hemisphere to be activated when a positive face was expected and the right hemisphere to be activated when a face with negative valence was expected. In addition, and in accord with Hugdahl's (2000) view, it is possible that this emerged when the faces possessed no emotion (and were identical) because the top-down expectancy was not counteracted by any bottom-up facial emotion information. That is, valence-specific effects may emerge when participants expect an emotion of a particular valence and when the actual perceptual (bottom-up) affective information is very weak, or not present.

If valence effects emerge in perception when there is a strong top-down expectancy and weak bottom-up perceptual information, then tasks which create a strong expectancy for a particular emotion, either by presenting an emotional label before the stimulus's arrival (e.g., Burton & Levy, 1989; Jansari et al., 2000; Rodway et al., 2003) or by requiring participants to look out for one specific emotion throughout an experiment (e.g., Bryden & MacRae, 1989; Erhan et al., 1998), and which use stimuli which provide very little, or no, emotional information (e.g., Jansari et al., 2000; Rodway et al., 2003), are likely to show valence-specific laterality effects. In addition, expectancies generated by participants themselves may outweigh the perceptual information provided by facial stimuli, if the stimuli have reduced information because of very brief presentation times (e.g., Burton & Levy, 1989; Natale et al., 1983; Reuter-Lorenz & Davidson, 1981; van Strien & van Beek, 2000). Therefore the expectancy account appears able to specify the conditions under which valence effects are likely to emerge in perception tasks.

¹ See also Davidson, Mednick, Moss, Saron, and Schaffer (1987) and Natale, Gur, and Gur (1983), who found a bias for rating faces in the left visual field as more negative and faces presented in the right visual field as more positive, and Drake (1987) for a similar effect when making aesthetic judgements of pictures.

This expectancy explanation has some similarities to Davidson's (1993a, 2004) influential approach–withdrawal theory of emotional processing. Davidson's theory proposes that the involvement of the frontal regions of the hemispheres in emotion and motivational behaviour should be characterised along an approach–withdrawal continuum, with left anterior regions controlling goal-directed behaviour and right anterior regions controlling withdrawal behaviour. Moreover, approach–withdrawal theory has an expectancy component within it, because anticipation of a reward (which may be an expectation) has an important role in activating left pre-frontal regions (Davidson, 2004). Therefore an expectancy account of valence effects could be viewed as a novel variant and application of approach–withdrawal theory in the area of emotion perception. Future work that manipulates expectancies will be necessary to address the extent to which an expectancy account is distinct from approach–withdrawal theory.

A potential advantage of an expectancy account is that it suggests a possible reason why several studies have found valence specific effects in females but not males (e.g., Rodway et al., 2003; Burton & Levy, 1989; van Strien & van Beek, 2000). The balance between top-down expectancy effects and bottom-up, stimulus-driven effects may differ between the sexes (e.g., Azim, Mobbs, Jo, Menon, & Reiss, 2005), with emotional processing in males being driven more by bottom-up information whereas in females it may be driven more by top-down expectancies. For example, Baron-Cohen's (2002) extreme male brain hypothesis suggests that males have a stronger tendency to focus on detailed stimulus information whereas females have a stronger drive toward empathising. If the drive to empathise is a top-down process, whereas focusing on detailed stimulus information is a bottom-up process, then it might explain why females showed stronger valence specific effects in Rodway et al. (2003), but the males were actually more accurate at discriminating the very faintest emotion. It may also explain why females show greater empathy (Hoffman, 1977; Burton & Levy, 1989) but may not be any better at discriminating facial emotion (see Borod et al., 2001, for a review).

1.5. Neutral control and expectancy account

To examine the expectancy account (and response bias) further, in replication of Rodway et al. (2003) a control condition was introduced which presented identical neutral sentences dichotically, along with an emotional label. Thus, the participants were required to say which of the two identical sentences corresponded to the labelled emotion (sad or happy). In replication of Rodway et al., and in support of the expectancy hypothesis, we predicted that the emotional label would cause a valence specific response bias when responding to the two neutral sentences. When the label was 'sad' we expected participants to select the sentence presented to the left ear more often than the sentence presented to the right ear, and when the label was 'happy' we expected the participants to select the right ear more often than the left ear.

For this control condition, however, in order to prevent the two sentences fusing into a single percept and making the task impossible, one sentence preceded the other by 7 ms (this was counterbalanced across conditions). Thus there was stimulus-driven activation of one hemisphere before the other hemisphere. We expected that the ear lead would orient attention to the leading ear, resulting in a bias toward it being chosen as the 'emotional' sentence (e.g., Mondor & Bryden, 1991).
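In digital terms, a 7 ms lead is simply a fixed sample offset between the two channels. The sketch below is our own illustration of that arithmetic, not the authors' stimulus-preparation code; the function name and list-based samples are assumptions, and the sampling rate is the 22050 Hz reported for the digitised stimuli in Section 2.1.

```python
SAMPLE_RATE = 22050  # Hz, as reported for the digitised stimuli

def dichotic_with_lead(samples, lead_ms=7.0, lead_ear="left"):
    """Build a (left, right) channel pair from one mono signal,
    with the leading ear starting lead_ms earlier than the other."""
    offset = round(lead_ms / 1000.0 * SAMPLE_RATE)  # 7 ms -> 154 samples
    leading = list(samples) + [0] * offset   # starts immediately
    lagging = [0] * offset + list(samples)   # delayed by the offset
    if lead_ear == "left":
        return leading, lagging
    return lagging, leading

left, right = dichotic_with_lead([1, 2, 3], lead_ms=7.0, lead_ear="left")
# the lagging channel begins with 154 zero samples (about 7 ms of silence)
```

Counterbalancing across conditions then amounts to swapping `lead_ear` between "left" and "right".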

In addition to predicting a bias toward reporting the emotion in the leading ear, we expected the valence effect of the label to overlay this bias, resulting in an interaction in which the 'sad' label and left ear lead increased left ear responses, and the 'happy' label and right ear lead increased right ear responses. In addition, when the effects of ear lead and emotion label operate in opposing directions, we expected that the effects would cancel each other, so that the bias to report the stimulus in the leading ear would be attenuated in the left ear for 'happy' stimuli and in the right ear for 'sad' stimuli. If this pattern emerges it will suggest a top-down influence of expectancy which concords with the valence specific laterality hypothesis, in addition to the orienting effects of ear lead.

1.6. Predictions

In accord with all theories (modified valence, discrimination difficulty, expectancy) it was predicted that for the original stimuli there would be a RH (left ear) advantage² for the perception of all emotions regardless of valence. However, based on the expectancy account it was predicted that valence specific effects would emerge for both the morphed stimuli and the control stimuli, because the weak bottom-up affective information would be counteracted by top-down expectancy effects. In addition, it was also predicted that valence effects would be stronger for the control stimuli than for the morphed stimuli, due to the lower levels of bottom-up affective information in the control stimuli. This contrasts with the discrimination difficulty hypothesis, which predicts that valence specific effects will emerge with the morphed stimuli rather than the control stimuli or original stimuli.

² Ear advantage refers to enhanced task performance in the named ear relative to the other ear.

2. Method

2.1. Stimuli

The spoken materials were derived from eight short base sentences, chosen to be as neutral as possible in their semantic content, but which could also be plausibly spoken with emotional prosody. These were: (1) He saw the bike. (2) He went to the cinema. (3) He went yesterday. (4) It was a dog. (5) It was early. (6) She caught the bus. (7) She folded her clothes. (8) She heard his voice.

The sentences were recorded by a prosodically trained speaker with professional voice-over experience, in an IAC audiometric room, using an AKG C 535 EB (condenser) microphone and a Sony DAT recorder. All eight sentences were read in three versions: with neutral, happy and sad prosody. The decision to use happy and sad emotions (and no other emotions) was based on our assessment of previous literature, including confusion matrices (see Juslin & Laukka, 2001; Scherer, Banse, Wallbott, & Goldbeck, 1991). DAT recordings were digitized, mono, 16 bit, at a sampling rate of 22050 Hz. Following this, two further versions of each sentence were created by taking the F0 values from the happy and sad sentences and imposing these on the neutral sentences, to create so-called "morphed" sentences. These morphed stimuli were created with Praat (Boersma & Weenink, 2004), using the PSOLA (Pitch-Synchronous Overlap and Add) technique on stylized F0 contours. The combined procedures yielded 40 mono stimuli (8 base sentences, with five versions: neutral, happy, sad, morphed-happy, morphed-sad).
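The F0 transplant itself was carried out in Praat via PSOLA resynthesis. As a language-neutral illustration of the core idea only (our sketch, not the authors' Praat script; the function name and contour values are hypothetical), one can resample an emotional F0 contour, on a time axis normalised to [0, 1], at the time points of the neutral utterance by linear interpolation:

```python
def resample_contour(times, f0, target_times):
    """Linearly interpolate an F0 contour (times, f0) at target_times.
    Both time axes are assumed normalised to [0, 1]."""
    out = []
    for t in target_times:
        if t <= times[0]:            # before first point: hold first value
            out.append(f0[0])
            continue
        if t >= times[-1]:           # after last point: hold last value
            out.append(f0[-1])
            continue
        for i in range(1, len(times)):
            if times[i] >= t:        # bracketing interval found
                t0, t1 = times[i - 1], times[i]
                y0, y1 = f0[i - 1], f0[i]
                out.append(y0 + (y1 - y0) * (t - t0) / (t1 - t0))
                break
    return out

# Hypothetical stylised "happy" contour transplanted onto three time
# points of a neutral utterance:
happy_t   = [0.0, 0.5, 1.0]
happy_f0  = [220.0, 280.0, 240.0]   # Hz
neutral_t = [0.0, 0.25, 1.0]
print(resample_contour(happy_t, happy_f0, neutral_t))  # [220.0, 250.0, 240.0]
```

The resampled values would then replace the neutral utterance's pitch points before resynthesis, which is what PSOLA does while preserving the neutral duration and spectral detail.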

2.2. Prosodic characteristics of the stimuli

A number of acoustic measurements were made in Praat and analysed using a repeated-measures 1 × 3 ANOVA, with sentence as the random factor and emotion (happy, sad, neutral) as the independent variable. Because in the dichotic experiment (Section 2.4) participants made a comparison between emotional and neutral stimuli, the 1 × 3 ANOVAs were followed up with two t-tests comparing measures for stimuli of each emotion with those for neutral stimuli. Means and analysis results are presented in Table 1. This set of analyses revealed that happy stimuli were spoken at significantly higher pitch, with greater pitch variation, at greater mean intensity and with greater intensity variation than neutral stimuli. The sad stimuli showed significantly lower pitch variation, higher intensity variation and greater harmonicity than the neutral stimuli, while shimmer was marginally lower than in the neutral stimuli. This pattern is fairly typical of happy and sad stimuli (e.g., Banse & Scherer, 1996; Juslin & Laukka, 2001, 2003). Note that, although we report overall duration of the stimuli, which was longer for both emotions than for neutral (but only significantly so for sad), we neutralised overall duration in the main experiment to avoid presenting part of the dichotic stimuli on their own.
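The follow-up comparisons are paired t-tests across the eight sentences (each emotion vs. neutral on a given acoustic measure). The stdlib sketch below shows that computation with invented mean-F0 values purely for illustration; these are not the study's data, and the function is our own, not the authors' analysis code.

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical per-sentence mean F0 (Hz), happy vs. neutral renditions:
happy   = [250, 240, 255, 238, 246, 252, 243, 249]
neutral = [166, 160, 170, 162, 165, 168, 163, 166]
t, df = paired_t(happy, neutral)
# with df = 7, |t| > 2.365 indicates p < .05 (two-tailed)
```

Each row of Table 1 pairs two such t-tests (happy vs. neutral, sad vs. neutral) with the omnibus F test over the three prosody conditions.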

As for qualitative differences, neutral stimuli were spoken with a modal voice quality, mostly with an F0 contour with a peak on the verb and a down-stepped peak on the final noun or adjective (although sentences with "was" lacked such a peak on the verb). Happy stimuli were spoken with a slightly sharper voice quality than neutral stimuli. The F0 contour for happy stimuli mostly featured a higher second peak than first peak (i.e., no down-step). Sad stimuli were spoken with a softer voice quality, with a more level F0 contour throughout the sentence (i.e., lower than neutral stimuli at the start, and higher than neutral stimuli at the end, due to the lack of down-step and a reduced nuclear fall size). For all stimuli, the nuclear pitch movement was a fall. Example stimuli are shown in Fig. 1.

Table 1
Acoustic analysis

Mean acoustic measures for the eight sentences in the three prosody conditions (with standard errors in parentheses). "*" indicates that the relevant emotional stimuli differed significantly from the neutral stimuli on the relevant acoustic parameter on a Student t-test. "SD" = standard deviation. Superscript "a" indicates a contrast that approached significance (.05 < p < .1).

Measure               Happy        Sad           Neutral      F(2,14)   p
Mean F0 (Hz)          246 (9)*     172 (3)       165 (3)      54.39     <0.0001
F0 SD (Hz)            68 (9)*      13 (1)*       23 (1)       30.13     <0.001
Mean Intensity (dB)   69 (.7)*     60 (.6)       60 (.3)      244.06    <0.0001
Intensity SD (dB)     10 (1)*      10 (.5)*      8 (.6)       6.07      <0.05
Jitter (%)            2.2 (.3)     1.5 (.1)      2.5 (.6)     1.54      n.s.
Shimmer (%)           5.2 (.5)     5.2 (.2)a     6.6 (.7)     1.87      n.s.
Harmonicity (dB)      13.8 (.9)    16.9 (.7)*    15.1 (1.3)   4.35      <0.05
Duration (s)          1.220 (.6)   1.248 (.7)*   1.138 (.7)   4.82      <0.05


2.3. Pre-test

As the stimuli were new, and we used a novel morphing procedure, we first subjected the stimuli to a basic pre-test to establish (a) whether the emotion could be detected and (b) whether the morphing procedure led to reduced discriminability, as intended. Ten participants (three male), who were students (nine) or staff (one) at the University of Abertay Dundee, took part in the pre-test, which was presented using E-Prime software (Psychology Software Tools Inc., Pittsburgh, PA), with sounds played through Sennheiser HD25SP stereo headphones. Participants were asked to identify the emotion of the stimulus, choosing 1 or 0 to indicate happy or sad (with response options counterbalanced between participants). All 40 stimuli were played in a random order. Note that we decided not to have a "neutral" response option, as our earlier (unpublished) research had taught us that participants have a tendency to equate "neutral" with "do not know". This could potentially dilute information on the discriminability of the emotional stimuli (see also Juslin & Laukka, 2001, p. 388). Instead, we opted to test any biases in the neutral stimuli indirectly, by inspecting whether neutral stimuli deviated from chance (50%) when participants were forced to choose one of the two emotions.

Fig. 1. Sample stimuli. Three renditions of the sentence "She folded her clothes". Each display consists of a speech wave (top panel) and a spectrogram on a scale of 0–8000 Hz, with a superimposed F0 track on a scale of 75–500 Hz (bottom panel). The horizontal axis represents time (in seconds). The top display represents happy vocal affect (duration 1.5811 s), the middle sad (1.5864 s), and the bottom neutral (1.4551 s).

As intended, participants were able to identify happy and sad stimuli in their original form almost perfectly (means 100% vs. 99% for happy and sad, respectively, with standard errors of 0 and 1.3%, respectively); performance dropped to 73% (SE 6.5%) vs. 92% (SE 2.8%), respectively, when they were "morphed". The drop was greater for the happy stimuli, which was reflected in the results of a 2 × 2 repeated-measures ANOVA, which revealed main effects of emotion, F(1,9) = 5.98, p < .05, and type, F(1,9) = 21.07, p < .001, and a significant interaction, F(1,9) = 8.42, p < .05. Proportions correct for morphed-happy and morphed-sad stimuli differed significantly from chance, t(9) = 3.46, p < .01, and t(9) = 15.37, p < .0001, respectively. So, participants identified the original recordings perfectly or almost perfectly, and the "morphed" stimuli well above chance in the predicted direction. The overall level of identification was 91%. Interestingly, there was a 69% preference to label neutral stimuli "sad", with responses differing significantly from the 50% chance level, t(9) = −2.34, p < .05. However, participants' number of "sad" responses to neutral stimuli still differed from those to sad and morphed-sad stimuli, t(9) = 3.79, p < .01, and t(9) = 3.49, p < .01, respectively.
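The deviation-from-chance checks above are one-sample t-tests against the 50% level. A minimal sketch with SciPy; the per-participant proportions below are invented placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Invented per-participant proportions correct for a morphed condition
# (n = 10, as in the pre-test); these are NOT the study's data.
props = np.array([0.9, 0.8, 1.0, 0.7, 0.9, 1.0, 0.8, 0.9, 1.0, 0.9])

# One-sample t-test against the 50% chance level, the kind of test used
# to check that morphed stimuli were still identified above chance.
t, p = stats.ttest_1samp(props, popmean=0.5)
```

A significantly positive t here indicates above-chance identification; the sign of t (as with the −2.34 for the neutral-stimulus bias) shows the direction of the deviation.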

It is possible that participants have a tendency towards a sad interpretation of neutral stimuli because the acoustic cues that distinguish the happy stimuli from the neutral stimuli are stronger than those for the sad emotion. This may be particularly true of high pitch in happy stimuli. Therefore, even though the acoustic signals were inspected to ensure that a similar number of cues differentiated both emotional signals from neutral, it may be inevitable that sad stimuli resemble neutral stimuli more than do the happy stimuli. Having established that the stimuli could be discriminated and that the morphed stimuli were less discriminable than the original stimuli, we used these stimuli in the dichotic listening experiment.

2.4. Dichotic experiment

2.4.1. Participants
Thirty student participants (seven male) who had not taken part in the pre-test took part in the main experiment. They all had self-reported normal hearing.

2.4.2. Dichotic stimuli
The mono stimuli described above were further edited into dichotic stimuli. First, each non-morphed emotional stimulus was matched in duration to the value of the neutral stimulus³, by entering a length ratio into the PSOLA (Pitch-Synchronous Overlap and Add) routine of Praat (Boersma & Weenink, 2004). Morphed emotional stimuli were already the same length as neutral stimuli, as they had been derived from them. Although matching the duration of the emotional stimuli to the neutral stimuli may have removed some information from the set of prosodic cues to emotion, most of the prosodic information is likely to have been expressed in the prosodic parameters unaffected by this matching process. In any case, if information was removed by neutralisation of emotion, it is likely that happy and sad stimuli were affected similarly, as both were longer than neutral stimuli (see Sections 2.1 and 3.1). Duration-matched emotional utterances and morphed emotional utterances were then each paired with neutral renditions of the same sentence, with the emotional stimulus appearing in either the left or right channel and the neutral stimulus in the other. All such combinations were created. Thus, there were 64 experimental stimuli: eight base sentences, two emotions (happy, sad), two stimulus types (original, morphed), and two locations of the emotional stimulus (left or right ear). The latter three categories also formed the within-participant factors in the design of the main experiment.
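The 8 × 2 × 2 × 2 crossing that yields the 64 experimental stimuli can be enumerated directly. A sketch; sentence labels are placeholders, not the actual sentences:

```python
from itertools import product

# The crossing described above: eight base sentences x two emotions
# x two stimulus types x two ears carrying the emotional channel.
sentences = [f"sentence_{i}" for i in range(1, 9)]  # placeholder labels
emotions = ["happy", "sad"]
types = ["original", "morphed"]
emotion_ear = ["left", "right"]

stimuli = [
    {"sentence": s, "emotion": e, "type": t, "emotion_ear": ear}
    for s, e, t, ear in product(sentences, emotions, types, emotion_ear)
]
assert len(stimuli) == 64  # 8 x 2 x 2 x 2 experimental stimuli
```

Collapsing over the eight sentences leaves the 2 × 2 × 2 within-participant design (emotion, type, ear) analysed in Section 3.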

In addition to the experimental stimuli, a set of control stimuli was created, in which the stimuli in both ears were neutral, following Rodway et al. (2003). An onset delay of 7 ms was introduced in one of the channels of the dichotic stimuli, which had the desired effect of making the overall stimulus sound like two percepts, rather than one. The overall set of double-neutral stimuli consisted of 16: eight base sentences, with two ear lead locations (right ear or left ear). Half the left ear lead stimuli were preceded by a happy emotion label and half by a sad emotion label. This was counterbalanced across ear lead condition using a Latin square, such that each sentence was presented with each emotion label and each ear lead.
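The construction of a double-neutral control stimulus with a 7 ms onset lead in one ear can be sketched as follows. The function name and channel layout are ours; this is an illustration of the idea, not the authors' editing procedure:

```python
import numpy as np

def dichotic_with_lead(mono, sr, lead_ms=7.0, lead_ear="left"):
    """Pair a neutral mono signal with a copy of itself, delaying the
    onset of the non-leading channel by lead_ms.

    Returns an (n_samples, 2) array with columns (left, right). Both
    channels are zero-padded to equal length. Function name and layout
    are ours; a sketch, not the authors' editing script.
    """
    delay = int(round(sr * lead_ms / 1000.0))
    leading = np.concatenate([mono, np.zeros(delay)])
    lagged = np.concatenate([np.zeros(delay), mono])
    if lead_ear == "left":
        left, right = leading, lagged
    else:
        left, right = lagged, leading
    return np.column_stack([left, right])

sr = 1000  # Hz; a low rate keeps the example small (7 ms = 7 samples)
mono = np.arange(1.0, 11.0)  # 10-sample stand-in for a neutral sentence
stim = dichotic_with_lead(mono, sr, lead_ms=7.0, lead_ear="left")
# left channel starts immediately; right channel starts 7 samples later
```

At a realistic rate such as 44.1 kHz the 7 ms lead corresponds to roughly 309 samples, far too short to be heard as an echo but enough to split the percept in two.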

2.4.3. Procedure
The experiment was run on the same equipment as the pre-test. Participants were informed that they would see an emotion label on the screen, and that they would then hear a sentence spoken with a different emotional tone of voice in each ear. Participants were asked to identify in which ear they heard the emotion (specified by the label on the screen), using 1 to indicate the left ear and 0 to indicate the right ear. The emotion labels were presented centrally for 500 ms and were immediately followed by the dichotic stimuli. The response prompt read "Please enter your response now" and was presented centrally until a response was entered. There was a 1 s inter-trial interval. There were 80 trials (64 experimental and 16 control trials), presented in a random order. The experiment lasted approximately 15 min.

³ Although this has not always been done in previous studies, we felt that this was important because stimuli that are longer than their counterpart will be heard on their own, which may profoundly affect the results.

3. Results

3.1. Experimental conditions

The mean percentages correct are presented in Fig. 2. First, overall performance, at 66%, differed significantly from chance, t(29) = 11.25, p < .0001. Discrimination accuracy was lower than the identification accuracy in the pre-test. This was probably due to the additional challenges posed by the dichotic presentation.

A 2 × 2 × 2 (Emotion: Happy, Sad; Ear: Left, Right; Type: Original, Morphed) repeated-measures ANOVA revealed a number of significant main effects and interactions.

Of primary interest from a lateralisation perspective, there was no significant main effect of ear, F < 1, with means of 67% vs. 65% for the left vs. right ear, respectively. However, there was a significant interaction between ear and type, F(1,29) = 8.05, p < .01, with original stimuli showing an average 8% left ear advantage (75% vs. 67% for left vs. right, respectively), while the means differed in the opposite direction for morphed stimuli (60% vs. 63%, respectively). The interaction is depicted in Fig. 3. Two simple main effect analyses showed that the effect of ear was significant for the original stimuli, F(1,29) = 4.53, p < .05, while it was not significant for the morphed stimuli, F < 1.

A number of other effects were probably associated with the acoustic properties and discriminability patterns of the stimuli, as already identified in the pre-test. First, there was a significant main effect of emotion, F(1,29) = 144.47, p < .0001, with participants scoring higher on the happy emotions (83%) than the sad emotions (49%). Also mirroring the pre-test, there was a significant main effect of type, F(1,29) = 34.77, p < .0001, with original stimuli leading to better discrimination (71%) than morphed stimuli (62%). The interaction between emotion and type approached significance, F(1,29) = 3.70, p = .06: there was a larger reduction in performance associated with morphed as opposed to original stimuli for the happy emotions (91% vs. 76% for original vs. morphed) than for the sad stimuli (51% vs. 47%, respectively). Once again, this reflected the pattern obtained in the pre-test. No other interactions reached or approached significance.

Fig. 2. Results of main experiment. Percentage correct responses in the dichotic experiment as a function of Emotion, Ear and Type. Note the 6% and 9% left ear advantage for happy and sad original stimuli, respectively, and the 4% right ear advantage for morphed stimuli.

3.2. Control stimuli

The percentages of "left" responses were calculated as a function of label and ear lead. The means are presented in Fig. 4.

These data were analysed using a 2 × 2 repeated-measures ANOVA with the factors "ear lead" (left vs. right) and "emotion label" (happy vs. sad), which revealed an effect of ear lead, with participants showing a tendency to localise the emotion in the ear to which the stimulus was presented first, with 60% vs. 37% "left" responses for left-first and right-first, respectively. Note that this entails that there were 63% "right" responses for the right ear lead, i.e. a high level of ear-lead-compatible responses.

Fig. 3. Interaction between stimulus type and ear. Percent correct responses for morphed and original stimuli with the emotional signal in the left vs. right ear in the main experiment.

Fig. 4. Control condition observed and predicted results. Percentage "left" responses in the control condition as a function of emotion label (happy, sad) and ear lead (left vs. right ear first). Note that the percentage of "right" responses is always the complement to 100%. The observed results (results obtained) are on the left side of the figure and the results predicted by the expectancy account are on the right side.

Interestingly, there was a very strong interaction between emotion label and ear lead, with the tendency to report the emotion as being localised in the leading ear being much stronger when the emotion label was "sad" (67% vs. 28% "left" responses for the left and right ear, respectively) than when the label was "happy" (52% vs. 45% "left" responses, respectively). Simple main effects analyses showed that the effect of ear lead on the percentage of "left" responses was significant for the sad label, F(1,29) = 32.67, p < .0001, but not for the happy label, F(1,29) = 1.11, p > .05. In other words, when participants were played identical neutral stimuli, and when the emotion label was "sad", participants were strongly inclined to report the emotion as being located in the ear at which the neutral stimulus arrived first. However, when the emotion label was "happy", participants were (statistically) unaffected by the ear lead, and responded at approximately 50% for each ear lead. We compared percentage "left" responses to the neutral trials in the first vs. second half of the experiment, to investigate whether the pattern may be due to strategic behaviour, learnt through the course of the experiment, but there was no evidence of this.

4. Discussion

In the main experiment a left ear (RH) advantage was obtained for the original stimuli, regardless of valence, with happy and sad sentences identified more accurately when they were presented to the left ear compared to the right ear. This result clearly supports the hypothesis that the RH is specialised for the perception of all emotions regardless of their valence (see Borod et al., 2001 for a review).

In contrast to the LEA for the original stimuli, there was no significant ear advantage for the morphed stimuli, with a non-significant trend towards a REA. As the participants were able to identify the morphed emotions above chance, this result suggests that they were using some features of the stimuli to make the discrimination but were not preferentially using the RH to do this. The rationale for introducing the morphed stimuli was to make the discrimination task more difficult, potentially increasing the recruitment of emotional experience and eliciting valence specific laterality effects (as proposed by Jansari et al., 2000). As expected, the participants found the morphed emotions harder to identify than the original emotions, but this increase in discrimination difficulty did not cause the emergence of valence specific effects. These findings therefore do not support predictions derived from the discrimination difficulty hypothesis (Jansari et al., 2000), or the expectancy hypothesis, as both predict stronger valence effects when the perceptual information is more ambiguous. In addition, for the original stimuli, the sad emotion was harder to discriminate than the happy emotion, but the LEA was equivalent for both emotions. Thus, discrimination difficulty was not related to the emergence of valence specific effects with the original or morphed stimuli.

It is possible that morphed prosody eliminated the RH advantage because the RH has evolved to be specialised in the perception of "authentic", natural emotions, in which the prosodic cues do not conflict among themselves. As a result, other regions of the brain may have been used to detect perceptual features that enabled the discrimination of the morphed prosody. If this is the case, ear advantages could be an interesting tool in the detection of "genuine" versus "faked" emotions, especially if in the "faked" emotions the set of perceptual features normally associated with that emotion shows internal conflicts. Moreover, if genuine emotions can be distinguished from synthetic emotions by the ear advantage they elicit, then in future this may form a useful tool in the evaluation of synthetic emotional prosody.

In the control condition it was predicted that both the stimulus lead and the emotional label would have effects. In accord with part of this prediction there was a clear effect of ear lead, with participants much more likely to report the 'emotional' sentence as having been presented to the ear which received the sentence first. As the ear lead was only 7 ms this was almost certainly due to an attentional orienting effect influencing the ear selected (e.g., Mondor & Bryden, 1991), rather than any strategic effects.

In accord with predictions derived from the expectancy account, there was also an interaction between ear lead and emotion label. However, the results did not correspond to our expected results, which are best illustrated by working with hypothetical numbers (see Table 2), showing the percentage of 'left' responses. If we assume that responses are strongly driven by ear lead, then we would expect the data to show a pattern of means approximating that in Table 2, under (a). In this case the ear that receives the sentence first will be chosen more frequently than the ear that receives the sentence 7 ms after the first sentence. If, however, responses are strongly driven by top-down valence specific laterality effects, then one would expect the pattern to resemble that under (b). In this case the label will influence the ear chosen, with the 'sad' label increasing the proportion of left ear choices, and the 'happy' label increasing the proportion of right ear choices. If we assume that the two effects interact, one would average across those columns and obtain the pattern under (c). Although our data (the actual results) resemble the pattern predicted for the left ear, the pattern for the right ear (see Fig. 4) is not compatible with such an interpretation, and therefore does not support it. Thus the prediction that the right ear lead and the happy label would increase the proportion of right ear choices was not supported by the results.

A possible interpretation of the interaction between label and lead is that it was caused by the 'happy' label reducing the effects of ear lead (Fig. 4). When the left ear received the sentence first, the 'happy' label reduced the extent to which the left ear was chosen. Moreover, when the right ear received the sentence first, the 'happy' label also reduced the extent to which the right ear was chosen. In contrast, for the 'sad' label the side selected was much more strongly influenced by ear lead. It therefore appears that this result might be due to the 'happy' label exerting a greater expectancy effect compared to the 'sad' label, so that the ear selected was more driven by attentional orienting when the label was 'sad'.

A possible reason why the 'happy' label may have reduced the effect of ear lead more than the 'sad' label is that the brain regions responsible for dealing with each emotion are differently distributed throughout the hemispheres. There is evidence to suggest that happy emotions may be bilaterally localised whereas negative emotions are unilaterally localised in the right hemisphere. For example, studies by Asthana and Mandal (2001) and Mandal et al. (1991) suggest that both hemispheres are involved in the processing of positive emotions whereas the processing of negative emotions is lateralised in the RH. Adolphs et al. (2001) reached a similar conclusion after examining emotional facial perception in patients with either unilateral RH (31 patients) or LH (28 patients) damage. If positive emotions are more bilaterally represented than are negative emotions, then a 'happy' label might induce an expectancy in both hemispheres whereas an expectancy for a negative emotion might predominantly influence the RH. This could potentially reduce the effect of ear lead for both hemispheres when the label is 'happy' compared to when the label is 'sad'.

A second possibility is that the 'happy' label reduced the effects of ear lead more strongly because of the pre-existing emotional state of the participants. This is important because it is known that the participant's mood can influence their perception of facial affect (David, 1989; Gage & Safer, 1985). If, in general, the participants

Table 2
Predicted results given the effects of expectancy and lead

Pattern of percentage "left" responses predicted on the basis of a first ear effect only (a), on valence-specific laterality only (b), and on the basis of an interaction between the two factors (c), which averages (a) and (b) for each cell. Actual observed values are in parentheses in (c).

                     (a) First ear effect only        (b) Valence-specific laterality only   (c) First ear and valence-specific laterality interact
                     Left ear first   Right ear first  Left ear first   Right ear first       Left ear first     Right ear first
Happy emotion label  70               30               30               30                    50 (actual: 53)    30 (actual: 45)
Sad emotion label    70               30               70               70                    70 (actual: 68)    50 (actual: 28)
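The interaction prediction in column (c) of Table 2 is simply the cell-wise mean of patterns (a) and (b), which can be checked in a few lines. The dictionary layout is ours; the numbers are those of Table 2:

```python
# Percentage "left" responses predicted for (left ear first, right ear first).
first_ear_only = {"happy": (70, 30), "sad": (70, 30)}       # pattern (a)
valence_laterality = {"happy": (30, 30), "sad": (70, 70)}   # pattern (b)

# Pattern (c): cell-wise average of (a) and (b).
interaction = {
    label: tuple(
        (a + b) / 2
        for a, b in zip(first_ear_only[label], valence_laterality[label])
    )
    for label in ("happy", "sad")
}
# -> {'happy': (50.0, 30.0), 'sad': (70.0, 50.0)}, matching Table 2(c)
```

The observed means match the left-ear-first column of (c) closely (53 and 68) but diverge sharply in the right-ear-first column (45 and 28), which is why the interaction account is not supported.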


possessed a positive mood then they may have been more able to generate an expectancy for a happy sentence, congruent with their mood, than a sad sentence, which conflicted with their mood⁴. This could have influenced the relative strength of the top-down effects generated in response to the 'happy' and 'sad' labels.

Although the interaction between lead and label might suggest that the 'happy' label exerted a greater expectancy effect than the 'sad' label, the results of the present experiment do not support the expectancy account because, for the control task, they were not in the predicted direction for the right ear lead (Fig. 4). At present it is unclear why this is the case. A clear possibility is that the expectancy account does not explain valence effects in vocal emotion perception. Moreover, as noted in the introduction, valence effects in prosody have been very difficult to obtain (see Bryden et al., 1991; Stirling et al., 2000). Therefore, while an aim of the present study was to try to promote valence effects in prosody it is possible, as some researchers have concluded (Bryden et al., 1991), that valence effects do not exist for prosody, making any explanation of valence effects in prosody completely redundant. In contrast, valence effects for facial emotion have been detected more frequently, and while the expectancy account may be able to explain these findings, the validity of the expectancy account for facial emotion remains to be tested empirically.

5. Conclusions

The present study adds support to the view that the RH is specialised for the perception of emotion. This fits with the RH hypothesis and the considerable body of work showing a RH advantage in the perception of emotion (Borod et al., 2001). However, the results argue against the hypothesis that the difficulty of the discrimination (and recruited experience) is important in causing valence specific effects. The results also showed that the use of morphed sentences with synthetic prosody could eliminate the RH advantage. This may have implications for the assessment of fake or computer-generated emotions.

To explain the emergence of valence specific effects in emotion perception tasks we proposed an expectancy account. However, the results of the present study do not support the expectancy hypothesis. Therefore, although the validity of the expectancy account for valence effects in facial emotion perception remains to be tested, the present results suggest that the expectancy account does not explain valence effects in prosody.

⁴ In fact, our own (unpublished) data measured a moderate to very happy mood in a large majority of a convenience sample of the general population. We obtained these data using the Brief Mood Introspection Scale (BMIS; Mayer & Gaschke, 1988), with Halberstadt, Niedenthal, and Kushner's (1995) "happiness" score, which was greater than a neutral zero in 75% of the 216 people in the sample (91 female, 95 male).

Acknowledgment

We thank Ami Gage for helping with the data collection.

References

Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.
Adolphs, R., Damasio, H., Tranel, D., & Damasio, A. R. (1996). Cortical systems for the recognition of emotion in facial expressions. Journal of Neuroscience, 16, 7678–7687.
Ahern, G. L., & Schwartz, G. E. (1979). Differential lateralization for positive versus negative emotions. Neuropsychologia, 17, 693–697.
Ahern, G. L., & Schwartz, G. E. (1985). Differential lateralization for positive and negative emotions in the human brain: EEG spectral analysis. Neuropsychologia, 23, 745–755.
Asbjørnsen, A., & Hugdahl, K. (1995). Attentional effects in dichotic listening. Brain and Language, 49, 189–201.
Asthana, H. S., & Mandal, M. K. (2001). Visual-field bias in the judgment of facial expression of emotion. Journal of General Psychology, 128(1), 21–29.
Azim, E., Mobbs, D., Jo, B., Menon, V., & Reiss, A. L. (2005). Sex differences in brain activation elicited by humor. Proceedings of the National Academy of Sciences, 16496–16501.
Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70, 614–636.
Baron-Cohen, S. (2002). The extreme male brain theory of autism. Trends in Cognitive Sciences, 6, 248–254.
Boersma, P., & Weenink, D. (2004). Praat: doing phonetics by computer, version 4.2.08. Software: www.praat.org.
Borod, J. C. (1993). Cerebral mechanisms underlying facial, prosodic and lexical emotional expression: A review of neuropsychological studies and methodological issues. Neuropsychology, 7, 445–463.
Borod, J. C., Zgaljardic, D., Tabert, M. H., & Koff, E. (2001). Asymmetries of emotional perception and expression in normal adults. In G. Gainotti (Ed.), Emotional behavior and its disorders. Oxford, UK: Elsevier Science.
Bryden, M. P., & MacRae, L. (1989). Dichotic laterality effects obtained with emotional words. Neuropsychiatry, Neuropsychology and Behavioral Neurology, 3, 171–176.
Bryden, M. P., Free, T., Gagné, S., & Groff, P. (1991). Handedness effects in the detection of dichotically-presented words and emotions. Cortex, 27, 229–235.
Burton, L. A., & Levy, J. (1989). Sex differences in the lateralized processing of facial emotion. Brain and Cognition, 11, 210–228.
Christman, S. D., & Hackworth, M. D. (1993). Equivalent perceptual asymmetries for free viewing of positive and negative emotional expressions in chimeric faces. Neuropsychologia, 31, 621–624.
David, A. S. (1989). Perceptual asymmetry for happy–sad chimeric faces: Effects of mood. Neuropsychologia, 27, 1289–1300.
Davidson, R. J. (1984). Affect, cognition, and hemispheric specialization. In C. E. Izard, J. Kagan, & R. Zajonc (Eds.), Emotion, cognition, and behaviour. New York: Cambridge University Press.
Davidson, R. (1992). Anterior cerebral asymmetry and the nature of emotion. Brain and Cognition, 20, 125–151.
Davidson, R. J. (1993a). Cerebral asymmetry and emotion: Conceptual and methodological conundrums. Cognition and Emotion, 7, 115–138.
Davidson, R. J. (1993b). Parsing affective space: Perspectives from neuropsychology and psychophysiology. Neuropsychology, 7, 464–475.
Davidson, R. J. (2004). What does the prefrontal cortex "do" in affect: perspectives on frontal EEG asymmetry research. Biological Psychology, 219–233.
Davidson, R. J., Mednick, D., Moss, E., Saron, C., & Schaffer, C. E. (1987). Ratings of emotion in faces are influenced by the visual field to which stimuli are presented. Brain and Cognition, 6, 403–411.
Drake, R. A. (1987). Effects of gaze manipulation on aesthetic judgements: Hemispheric priming of affect. Acta Psychologica, 65, 91–99.
Erhan, H., Borod, J. C., Tenke, C. E., & Bruder, G. E. (1998). Identification of emotion in a dichotic listening task: Event-related brain potential and behavioural findings. Brain and Cognition, 37, 286–307.
Gage, D. F., & Safer, M. A. (1985). Hemispheric differences in the mood state-dependent effect for recognition of emotional faces. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 752–763.
Halberstadt, J. B., Niedenthal, P. M., & Kushner, J. (1995). Resolution of lexical ambiguity by emotional state. Psychological Science, 6, 278–282.
Hoffman, M. L. (1977). Sex differences in empathy and related behaviors. Psychological Bulletin, 84, 712–722.
Hugdahl, K. (2000). Lateralization of cognitive processes in the brain. Acta Psychologica, 105, 211–235.
Jansari, A., Tranel, D., & Adolphs, R. (2000). A valence-specific lateral bias for discriminating emotional facial expressions in free field. Cognition and Emotion, 14, 341–353.
Juslin, P. N., & Laukka, P. (2001). Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion. Emotion, 1, 381–412.
Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129, 770–814.
Kinsbourne, M. (1970). The cerebral basis of lateral asymmetries in attention. Acta Psychologica, 33, 193–201.
Ley, R. G., & Bryden, M. P. (1979). Hemispheric differences in processing emotions in faces. Brain and Language, 7, 127–138.
Mandal, M. K., & Singh, S. K. (1990). Lateral asymmetry in identification and expression of facial emotions. Cognition and Emotion, 4, 61–70.
Mandal, M. K., Tandon, S. C., & Asthana, H. S. (1991). Right brain damage impairs recognition of negative emotions. Cortex, 27, 247–253.
Mayer, J. D., & Gaschke, Y. (1988). The experience and meta-experience of mood. Journal of Personality and Social Psychology, 55, 102–111.
Mondor, T. A., & Bryden, M. P. (1991). The influence of attention on the dichotic REA. Neuropsychologia, 29, 1179–1190.
Natale, M., Gur, R. E., & Gur, R. C. (1983). Hemispheric asymmetries in processing emotional expressions. Neuropsychologia, 21, 555–565.
Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the 2 cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609–613.
Rodway, P., Wright, L., & Hardie, S. (2003). The valence-specific laterality effect in free viewing conditions: The influence of sex, handedness, and response bias. Brain and Cognition, 53, 452–463.
Safer, M. A. (1981). Sex and hemisphere differences in access to codes for processing emotional expressions in faces. Journal of Experimental Psychology: General, 110, 86–100.
Scherer, K. R., Banse, R., Wallbott, H. G., & Goldbeck, T. (1991). Vocal cues in emotion encoding and decoding. Motivation and Emotion, 15, 123–148.
Silberman, E. K., & Weingartner, H. (1986). Hemispheric lateralization of functions related to emotion. Brain and Cognition, 5, 322–353.
Stirling, J., Cavill, J., & Wilkinson, A. (2000). Dichotically presented emotionally intoned words produce laterality differences as a function of localisation task. Laterality, 5, 33–371.
Tucker, D. M. (1981). Lateral brain function, emotion and conceptualization. Psychological Bulletin, 89, 19–46.
van Strien, J. W., & van Beek, S. (2000). Ratings of emotion in laterally presented faces: Sex and handedness effects. Brain and Cognition, 44, 645–652. doi:10.1006/brcg.1999.1137.
Wild, B., Erb, M., & Bartels, M. (2001). Are emotions contagious? Evoked emotions while viewing emotionally expressive faces: Quality, quantity, time course and gender differences. Psychiatry Research, 102, 109–124.