
Neuropsychologia 45 (2007) 75–92

Are you always on my mind? A review of how face perception and attention interact

Romina Palermo a,*, Gillian Rhodes b

a Macquarie Centre for Cognitive Science (MACCS), Macquarie University, NSW 2109, Sydney, Australia
b School of Psychology, University of Western Australia, Perth, WA, 6009, Australia

Available online 23 June 2006

Abstract

In this review we examine how attention is involved in detecting faces, recognizing facial identity and registering and discriminating between facial expressions of emotion. The first section examines whether these aspects of face perception are “automatic”, in that they are especially rapid, non-conscious, mandatory and capacity-free. The second section discusses whether limited-capacity selective attention mechanisms are preferentially recruited by faces and facial expressions. Evidence from behavioral, neuropsychological, neuroimaging and psychophysiological studies from humans and single-unit recordings from primates is examined and the neural systems involved in processing faces, emotion and attention are highlighted. Avenues for further research are identified.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Faces; Identity; Expression; Emotion; Attention; Automatic; Amygdala; Prefrontal cortex; Fear

When scanning our complex visual environment we encounter too many items to fully analyze at one time. The brain must therefore evaluate incoming stimuli and devote more cognitive resources to processing important items and events. But what counts as important? Compton (2003) suggests that a primary way to determine importance is to evaluate the emotional significance of a stimulus or event. She further argues that stimuli deemed emotionally significant receive enhanced processing and that this occurs via the operation of two attentional mechanisms: one that evaluates emotional significance preattentively or “automatically”, and another that gives these significant stimuli priority in the competition for selective attention. Although the emotional value of stimuli may differ between individuals, there are some stimuli – such as snakes, spiders, and human faces – that are emotionally significant to most individuals. Faces are probably the most biologically and socially significant visual stimuli in the human environment, and might therefore be expected to receive enhanced processing as outlined above.

In this review we examine whether faces are indeed processed preattentively and whether they preferentially engage mechanisms of selective attention. We consider the role of attention in detecting and categorizing faces, in recognizing the identity of individuals and in registering different expressions displayed by faces. In addition, we examine the interactions between neural systems involved in processing faces, emotion and attention. We begin by outlining current neuropsychological models of face perception.

* Corresponding author. Tel.: +61 2 9850 6711; fax: +61 2 9850 6059. E-mail address: [email protected] (R. Palermo).

Current cognitive and neural models of face perception propose an initial stage of encoding, after which changeable aspects of a face, which are involved in the analysis of expression and eye gaze, are processed relatively independently of its invariant aspects, which are used to determine identity (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000, 2002). Processing of identity proceeds via the lateral fusiform gyrus (including the fusiform face area, FFA; Kanwisher, McDermott, & Chun, 1997) to anterior temporal regions that are involved in the recollection of biographical information (Haxby et al., 2000, see Fig. 1, yellow shading). Processing of changeable aspects of faces is mediated by the superior temporal sulcus (STS) (Haxby et al., 2000). Perceiving and recognizing emotion from facial expressions involves a complex network of partially independent neural structures (Adolphs, 2002a,b, see Fig. 1, red shading). Cortical pathways, in occipital and temporal neocortex (in particular the FFA and STS), conduct the detailed perceptual analyses necessary to make fine discriminations between facial expressions (Adolphs, 2002a,b, Fig. 1, solid lines). A “dual-route” fear or threat detection system has also been proposed, with a parallel subcortical route to the amygdala¹, via the superior colliculus and pulvinar thalamus (Fig. 1, dashed lines), providing a rapid but coarse analysis, perhaps based on salient individual features (LeDoux, 1996, 1998; Morris, Ohman, & Dolan, 1999; Ohman, 2002).

Fig. 1. Face perception and attention systems. The three rectangles with beveled edges indicate the core system for face perception (Haxby et al., 2000). Areas shaded in yellow represent regions involved in processing identity and associated semantic information, areas in red represent regions involved in emotion analysis (Adolphs, 2002b), and those in blue reflect the fronto-parietal cortical network involved in spatial attention (Hopfinger, Buonocore, & Mangun, 2000). Solid lines indicate cortical pathways and dashed lines represent the subcortical route for rapid and/or coarse emotional expression processing. This model is highly simplified and excludes many neural areas and connections. In addition, processing is not strictly hierarchical (i.e., from left to right) but involves multiple feedback connections (Bullier, 2001). The face displayed is from the database collected by Gur et al. (2002).

Regardless of its expression, a face is a salient emotional stimulus, allowing us to distinguish friend from foe and conveying crucial information for social interactions (e.g., identity, race, sex, attractiveness, direction of eye gaze). Thus, all faces, even so-called “unexpressive” or “neutral” faces, will have emotional significance and so may have special access to visual attention. However, automatic processing and/or attentional biases seem most likely for facial expressions displaying threat or danger, which if rapidly detected may confer a crucial survival advantage (e.g., Ohman, 2002; Vuilleumier, 2002). These include fearful faces, which may warn of an environmental threat to be avoided, angry faces, which signify impending aggression, and disgusted faces, which reflect the possibility of physical contamination (Adams, Gordon, Baird, Ambady, & Kleck, 2003; Adolphs, 2002b; Anderson, Christoff, Panitz, De Rosa, & Gabrieli, 2003). The impact of attention on the processing of fear, anger and disgust is often contrasted with the processing of other “basic” or universal expressions (see Ekman, 1999), such as happiness, sadness and surprise.

¹ Technically, this structure is known as the “amygdaloid complex” because it is composed of a number of nuclei that are organized into a number of divisions, which appear to have different functions and connections (see Davis & Whalen, 2001; Holland & Gallagher, 2004). There are two “amygdalae”—one in each hemisphere. Both appear to be involved in processing facial expressions, with differences between the two not clearly understood at this stage (see Adolphs, 2002b; Zald, 2003, for reviews on laterality). For brevity, we will use the term “amygdala”.

1. Are faces processed “automatically”?

The emotional significance and neural specificity of face processing make faces an ideal candidate for automatic or preattentive processing (Ohman, 2002; Ohman & Mineka, 2001). Automatic processes have some or all of the following characteristics: they are rapid (e.g., Batty & Taylor, 2003; Ohman, 1997), non-conscious (e.g., Bargh, 1997; Ohman, 2002; Robinson, 1998), mandatory (e.g., Wojciulik, Kanwisher, & Driver, 1998) and capacity-free, requiring minimal attentional resources (e.g., Schneider & Chein, 2003; Vuilleumier, Armony, Driver, & Dolan, 2001). We discuss the evidence for each separately.


1.1. How rapidly are faces processed?

Automatic processes are fast, although exactly how fast is far from clear (see Compton, 2003, for further discussion). One way to judge the speed of face processing is to examine whether faces are processed more quickly than other types of stimuli. Electrophysiological studies measuring event-related potentials (ERPs) from the scalp (Bentin, Allison, Puce, Perez, & McCarthy, 1996; Jeffreys, 1989) and via intra-cranial electrodes in patients awaiting surgery (Allison et al., 1994) have suggested that responses to faces can be differentiated from those of other visual stimuli. In this section we examine how quickly faces are detected and identified and compare this to the speed with which other stimuli are processed. We also examine how quickly facial expressions are registered and identified and evaluate the evidence for an ultra-fast subcortical pathway.

1.1.1. Detecting faces

Detecting a facial configuration is fast and efficient. For instance, thresholds to detect the presence of a stimulus are lower when the features compose an upright rather than an inverted or scrambled face (Purcell & Stewart, 1988) and ERPs from frontal regions differentiate between normal and jumbled faces from 135 ms (Yamamoto & Kashikura, 1999). Moreover, single-unit recordings from primate cortex (Oram & Perrett, 1992; Sugase, Yamane, Ueno, & Kawano, 1999) and magnetoencephalography (MEG) recordings from humans (Liu, Harris, & Kanwisher, 2002) indicate that stimuli can be categorized as “faces” in extrastriate areas as early as 100 ms after stimulus presentation. ERP results suggest that faces are categorized around 100 ms, much earlier than the 200 ms required to categorize objects and words (Pegna, Khateb, Michel, & Landis, 2004). Furthermore, when stimuli are embedded in natural scenes, faces (or more precisely, humans) are detected 10 ms earlier than animals (Rousselet, Mace, & Fabre-Thorpe, 2003).

1.1.2. Recognizing individuals

Extracting the finer-grained information needed to identify a specific individual appears to require at least an additional 70 ms. Liu et al. (2002) found that occipito-temporal face-selective MEG responses 170 ms after stimulus onset (known as the M170) were correlated with successful face categorization and face recognition (as measured with a matching task), whereas those 100 ms post-stimulus (labeled the M100) were only correlated with accurate face categorization. ERPs maximally responsive to faces at around 170 ms (labeled as an N170 or N1) are also seen for non-face objects and words, although often with reduced amplitude and increased latency (Bentin & Carmel, 2002; Itier & Taylor, 2004; but see Rossion, Curran, & Gauthier, 2002).

1.1.3. Registering and discriminating emotion

Behavioral evidence indicates that lower thresholds are needed to detect faces when they are unambiguously emotional (Calvo & Esteves, 2005). Psychophysiological studies using ERP and MEG also suggest that emotional information from faces is rapidly registered and discriminated, from as early as 80 ms after stimulus onset. Table 1 provides a summary of studies investigating the temporal course of facial expression processing.

The proposed value of the subcortical route (Fig. 1, dashed lines) is to rapidly convey information about potential threat to the amygdala (e.g., LeDoux, 1996; Morris, de Gelder, Weiskrantz, & Dolan, 2001). Thus, we might expect that emotional information would activate the amygdala prior to other regions². The amygdala responds more to emotional than neutral faces from approximately 100 ms (Streit et al., 2003), and may differentiate between expressions from 110 ms (Liu, Ioannides, & Streit, 1999). However, other areas respond to emotional information from faces at similar latencies (see Table 1). Responses over occipital regions differentiate liked from disliked faces starting from 80 ms (Pizzagalli, Regard, & Lehmann, 1999), fearful from happy faces from 90 ms (Pourtois, Grandjean, Sander, & Vuilleumier, 2004), and happy from sad faces from 110 ms (Halgren, Raij, Marinkovic, Jousmaeki, & Hari, 2000). Frontal regions discriminate fearful from neutral faces beginning at 100 ms (Eimer & Holmes, 2002; Holmes, Vuilleumier, & Eimer, 2003) and fearful from happy faces from 120 ms (Kawasaki et al., 2001). Temporal areas are involved in processing emotional information from faces from 130 ms (Batty & Taylor, 2003; Liu et al., 1999).

We might also expect fearful faces to be registered earlier than other facial expressions. Although responses to fearful expressions may sometimes be larger than those to other expressions (e.g., Batty & Taylor, 2003; Krolak-Salmon, Henaff, Vighetto, Bertrand, & Mauguiere, 2004), results from the few studies that have compared fear to other expressions do not provide overwhelming evidence of a speed advantage for fear. Responses to other expressions, such as happiness, appear to have earlier latencies than those to fear in both temporal regions (Batty & Taylor, 2003; Liu et al., 1999) and the amygdala (Liu et al., 1999), and ERPs over frontal regions are no earlier for fearful than other basic expressions (Eimer & Holmes, 2007; Eimer, Holmes, & McGlone, 2003).

Moreover, even those that do find faster processing for fear argue that the response is unlikely to be the result of just subcortical processing. Krolak-Salmon et al. (2003, 2004) measured ERP responses to facial expressions from electrodes implanted in the amygdala and insula of patients with drug-resistant epilepsy. When participants attended to facial expression, responses to fear in the amygdala (~200 ms post-stimulus onset) occurred earlier than those to disgust in the insula (~300 ms post-stimulus onset), suggesting rapid, preferential processing of fear by the amygdala. However, Krolak-Salmon et al. (2004, see Cowey, 2004, for similar arguments based on single-unit recordings) argue that we cannot be sure that the amygdala response to fear is mediated purely by an ultra-fast retinal–collicular–pulvinar route to the amygdala, because cortical responses can occur within 40–80 ms and could provide some input.

² Note that existing techniques that can be used with healthy participants to measure the time-course of facial expression processing (i.e., ERPs recorded from the scalp, and to some extent MEG) may not effectively measure subcortical processing (see Eimer et al., 2003, for more discussion). Intra-cranial ERPs certainly have more accurate localization, although it is possible that processing is delayed in medicated patient populations (see Krolak-Salmon et al., 2004).

Table 1. Summary of earliest responses found to emotional faces in occipital, temporal and frontal regions, the amygdala and insula.

Study | Measure | Task | Key comparison | Latency (ms)

Occipital
Pizzagalli et al. (1999) | Scalp ERP | Passive viewing of brief, lateralized faces | Liked vs. disliked faces | 80
Eger, Jednyak, Iwaki, and Skrandies (2003) | Scalp ERP | Dichoptic presentation of emotional, neutral and scrambled schematic faces; judge which stimulus appeared more “face-like” | Negative vs. positive and neutral faces | 80
Pourtois et al. (2004) | Scalp ERP | Judge the orientation of a bar that followed a pair of faces (one neutral, one emotional) presented in upper visual field | Fearful vs. happy expressions | 90
Halgren et al. (2000) | MEG | Detect repetitions of foveally presented stimuli | Happy vs. sad expressions | 110
Krolak-Salmon, Fischer, Vighetto, and Mauguiere (2001) | Scalp ERP | Judge sex of faces or count surprised faces (attend expression) | Neutral vs. emotional (fear, happiness, disgust, surprise) when attend expression | 250

Temporal
Liu et al. (1999) | MEG | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) | Significant change in activation to disgusted, happy, sad and surprised faces; fear delayed | 130
Batty and Taylor (2003) | Scalp ERP | Detect objects among faces displaying emotional (angry, disgusted, fearful, happy, sad, surprised) or neutral expressions | Positive expressions (happy and surprise), followed by negative expressions (fear, disgust and sadness); amplitude enhanced for fear | 140
Streit et al. (1999) | MEG | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) and categorise objects and faces | Face recognition vs. emotion recognition | 160
Pizzagalli et al. (2002) | Scalp ERP | Passive viewing of foveally presented faces | Enhanced amplitudes for liked vs. disliked faces | 160
Sugase et al. (1999) | Single-unit recording from macaque | Fixation to coloured dots before and after the faces | Fine-grained, subordinate information that could discriminate expressions | 165
Esslen, Pascual-Marqui, Hell, Kochi, and Lehmann (2004) | Scalp ERP | Generate the emotion presented in facial expressions | Neutral vs. fear | 256
Sato, Kochiyama, Yoshikawa, and Matsumura (2001) | Scalp ERP | Gender discrimination of emotional (fearful and happy) and neutral faces | Emotional faces elicited a larger negative peak | 270

Amygdala
Streit et al. (2003) | MEG | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) and categorise objects and faces | Stronger activity for emotional vs. neutral or blurred faces | 100
Liu et al. (1999) | MEG | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) | Significant change in activation to happy and fearful expressions | 110 (happy); 150 (fearful)
Halgren, Baudena, Heit, Clarke, and Marinkovic (1994) | ERP from depth electrodes | Viewing of unfamiliar faces | Faces vs. words | 130
Krolak-Salmon et al. (2004) | ERP from depth electrodes | Judge sex of faces or count surprised faces (attend expression) | Fear vs. others (neutral, happy, disgusted) when attend expression | 200
Streit et al. (1999) | MEG | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) and categorise objects and faces | Face recognition vs. emotion recognition | 220

Frontal
Streit et al. (2003) | MEG | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) and categorise objects and faces | Stronger activity for emotional vs. neutral or blurred faces | 100
Holmes et al. (2003) | Scalp ERP | Attend (match identity) or ignore pairs of faces | Enhanced positivity for attended fearful vs. neutral faces | 100
Eimer and Holmes (2002) | Scalp ERP | Detect repetitions of foveally presented faces and houses | Enhanced positivity for fearful vs. neutral faces | 120
Kawasaki et al. (2001) | ERP from depth electrodes | Passive viewing of emotional faces and scenes | Changes to neuron firing rate greater in response to fearful vs. happy facial expressions | 120
Esslen et al. (2004) | Scalp ERP | Generate the emotion presented in facial expressions | Neutral vs. happy, sad and disgust (138); neutral vs. fear (256); neutral vs. angry (349) | 138–349
Eimer et al. (2003) | Scalp ERP | Attend (determine whether pairs of faces were emotional) or ignore pairs of faces | Enhanced positivity for all attended emotional faces (angry, disgusted, fearful, happy, sad, surprised) vs. attended neutral faces | 160
Streit, Wolwer, Brinkmeyer, Ihl, and Gaebel (2000) | Scalp ERP | Identify emotional expression (angry, disgusted, fearful, happy, sad, surprised) and categorise objects and faces | Face recognition vs. emotion recognition | 180

Insula
Krolak-Salmon et al. (2003) | ERP from depth electrodes | Judge sex of faces or count surprised faces (attend expression) | Disgust vs. others (neutral, happy, fearful) when attend expression | 300


1.1.4. Summary

Faces are detected and categorized faster than many other stimuli. Detection and crude affective categorization can occur rapidly, from 100 ms post-stimulus onset, with the fine-grained cortical representations necessary to recognize identity and discriminate between basic emotional expressions computed within an additional 70 ms. In sum, the evidence for rapid face processing is good. Whether threatening faces are detected more rapidly than other displays of emotion, and whether this is due to subcortical processing, are difficult to determine because of limitations in our ability to measure latency responses from healthy human subcortical structures. Although advances in technology may eventually prove otherwise, current evidence provides little support for claims that rapid threat detection is mediated by purely subcortical pathways, or that threat is detected more rapidly than other expressions.

1.2. Are faces registered without conscious awareness?

Not all visual processing by the human brain reaches conscious awareness (Rees, Kreiman, & Koch, 2002), and such processing can be thought of as “automatic” (Bargh, 1997; Ohman, 2002; Robinson, 1998). Here we consider whether the identity and affect displayed by faces are registered even when people are unaware of this information. Conscious awareness is disrupted by some neuropsychological disorders of vision, and can be simulated in healthy people by presenting stimuli very briefly and backward masked.

1.2.1. Can faces be identified without conscious awareness?

People with acquired prosopagnosia, who cannot consciously recognize previously familiar faces, sometimes show non-conscious or covert “recognition”. Some show higher levels of autonomic arousal (as measured with skin conductance responses, SCRs) to familiar than unfamiliar faces (e.g., Bauer, 1984; Tranel and Damasio, 1985), and some have access to semantic information associated with identity, learning the names and occupations of faces better when they are paired correctly rather than incorrectly (e.g., de Haan, Young, & Newcombe, 1987). Similar non-conscious processing of facial identity may also occur in healthy participants (Stone & Valentine, 2003, 2004; Stone, Valentine, & Davies, 2001).

1.2.2. Are facial expressions registered without conscious awareness?

Covert or non-conscious recognition of facial expression in healthy people is often examined by presenting expressive faces very briefly and backward masked. Affective priming studies typically present a brief face prime (~15 ms), followed by a neutral stimulus to be evaluated. The expression depicted by the prime influences judgments of the neutral stimuli; for instance, meaningless symbols are rated as more appealing when they follow brief happy, rather than angry, faces (Murphy & Zajonc, 1993; Rotteveel, de Groot, Geutskens, & Phaf, 2001; Wong & Root, 2003). Although participants appear to be unaware of the prime expressions (Wong & Root, 2003), it is possible that the face primes were incompletely masked because they were simply masked by symbols rather than more effective masks composed of a facial configuration (see Loffler, Gordon, Wilkinson, Goren, & Wilson, 2005, for a comparison of mask types).

Physiological and imaging studies typically present a brief prime face, often for around 30 ms, followed by a neutral masking face that is presented for a longer time. The studies that we review below generally find that participants were subjectively unaware of the face primes (i.e., they report no knowledge of the facial expressions), although objective awareness of the face primes (e.g., by using a forced-choice identification task) was often not assessed (see Pessoa, Japee, & Ungerleider, 2005, for more discussion).

Facial electromyography (EMG) studies reveal that people’s facial muscles mimic brief (30 ms) masked facial expressions, with unseen happy faces resulting in smiles via the action of the zygomatic major muscle and unseen angry faces resulting in frowns via the corrugator supercilii (Dimberg, Thunberg, & Elmehed, 2000). Fear-conditioning studies also demonstrate autonomic responses to angry faces presented below the threshold for conscious awareness. Angry faces previously associated with an electric shock evoke heightened SCRs, even when presented for 30 ms, backward masked and not consciously perceived (Esteves, Dimberg, & Ohman, 1994). Moreover, associations can be formed between aversive events and angry faces that are not consciously recognized (Ohman, Esteves, & Soares, 1995). Interestingly, masked angry, but not happy, faces evoke conditioned autonomic responses, perhaps because threatening faces have an evolutionary bias to be associated with aversive outcomes (Mineka & Ohman, 2002).

Neuroimaging studies often show that the amygdala is activated by fearful and fear-conditioned angry faces that are outside awareness (for review see Zald, 2003). Although participants could not consciously report the expressions presented by very brief (~30 ms), backward-masked faces, functional imaging has revealed that amygdala activation was greater to fearful than happy unseen faces (Whalen et al., 1998) and that (right) amygdala activation was enhanced for angry faces that were previously associated with an aversive sound in a fear-conditioning paradigm (Morris, Ohman, & Dolan, 1998; Morris et al., 1999). Similar results have been found with binocular rivalry tasks, where the image presented to one eye is perceived while the image presented to the other eye is suppressed (Blake & Logothetis, 2002). For instance, fearful faces activated the amygdala to the same extent regardless of whether they were perceived or suppressed (Williams, Morris, McGlone, Abbott, & Mattingley, 2004), and suppressed fearful faces activated the (left) amygdala more than suppressed chairs (Pasley, Mayes, & Schultz, 2004). Rivalry suppression appears to block visual information prior to extrastriate visual areas (e.g., Tong, Nakayama, Vaughan, & Kanwisher, 1998), leading Pasley and colleagues to argue that amygdala responses to suppressed faces result from subcortical visual input, via the superior colliculus and thalamus, to the amygdala (see Fig. 1, dashed lines). Finally, a recent study suggests that briefly (17 ms) presented and masked eye-whites from fearful faces also activate the amygdala to a greater extent than do the eye-whites from happy faces (Whalen et al., 2004).

However, two recent studies only find enhanced amygdala activation for fearful faces when participants were aware of them. Phillips et al. (2004) found that amygdala activation present when fearful faces were shown for 170 ms was eliminated when the faces were presented for 30 ms and then backward masked. Pessoa, Japee, Sturman, and Ungerleider (2006) asked participants to report whether or not a fearful face was present on each trial and used signal detection analyses to objectively measure whether each participant was able to reliably detect brief backward-masked fearful faces. They observed increased amygdala (and fusiform gyrus) activation for fearful relative to neutral faces when the faces were shown for 67 ms and reliably detected by participants, but not when the faces were shown for 33 ms and not detected by participants. Moreover, a small group of participants who were able to detect fear from 33 ms backward-masked faces did show differential activation, suggesting that amygdala activation may be associated with awareness.
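Signal detection analyses of this kind separate perceptual sensitivity from response bias. As a minimal illustrative sketch (not the analysis code from Pessoa et al., and with invented trial counts), sensitivity d′ can be computed from hit and false-alarm rates on fear-present versus fear-absent trials; a d′ near zero indicates that the participant cannot objectively detect the masked face:

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A d' near zero indicates the observer cannot reliably detect the
    masked fearful face (objective unawareness).
    """
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: 48 fear-present and 48 fear-absent trials at 33 ms.
print(d_prime(hits=26, misses=22, false_alarms=24, correct_rejections=24))  # ~0.1, near chance
```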

Investigating patients who are “unaware” of visual stimuli has also provided an insight into the unconscious processing of fearful faces. Patients with right parietal damage suffering visual neglect and extinction fail to perceive or respond to a stimulus in the contralesional (left) visual field when a competing stimulus is present in the ipsilesional field. Despite no awareness of extinguished faces, unseen fearful faces activate the amygdala just as much as those that are consciously perceived (Vuilleumier et al., 2002). Although striate cortex damage prevents patient G.Y. from becoming aware of faces presented in his “blind” visual field, when forced he is nonetheless able to choose the expression displayed by the face at above chance levels, known as “affective blindsight” (de Gelder, Vroomen, Pourtois, & Weiskrantz, 1999). The amygdala, superior colliculus and pulvinar were activated when fearful and fear-conditioned faces were presented in G.Y.’s “blind” visual field (Morris et al., 2001), suggesting that his ability to discriminate between expressions in the absence of awareness may be mediated by a subcortical colliculo-pulvinar route to the amygdala (Fig. 1, dashed lines) (Morris et al., 2001). Alternatively, cortical pathways to the amygdala that bypass V1 could be involved (see Pessoa, 2005; Pessoa, Japee et al., 2006; Williams et al., 2004, for discussion of anatomical connections).

Unseen fearful and fear-conditioned angry faces do not seem to be the only facial expressions that activate the amygdala. Williams et al. (2004) also found that suppressed happy faces activated the amygdala to a greater extent than suppressed neutral faces. They suggested that facial expressions that are not consciously perceived are processed primarily, and perhaps exclusively, by subcortical pathways—pathways that are able to distinguish emotional from unemotional faces but are unable to discriminate between affective categories without cortical input. However, other evidence suggests that some discrimination may be possible. A patient with total bilateral cortical blindness, who would be unable to use information from an intact visual field to help determine what was presented in the blind field, can distinguish between unseen emotional expressions at above chance levels when given two categories from which to choose (i.e., angry versus happy; sad versus happy; fearful versus happy) (Pegna, Khateb, Lazeyras, & Seghier, 2004). These expressions may be discriminated by comparing the degree of emotional arousal for each face, based on amygdala activation (Killgore & Yurgelun-Todd, 2004).


Structures other than the amygdala could also be involved in the non-conscious discrimination of expression. Functional imaging studies in healthy people have revealed anterior cingulate gyrus activation in response to masked happy and sad facial expressions (Killgore & Yurgelun-Todd, 2004) and sublenticular substantia innominata activation in response to masked happy and fearful facial expressions (Whalen et al., 1998). Fearful facial expressions that were extinguished by neglect patients activated the orbitofrontal cortex (Vuilleumier et al., 2002). The role played by these areas in processing expressions that people do not consciously recognize is currently unclear.

1.2.3. Summary

Both healthy people and prosopagnosic patients appear to be able to encode some information about facial identity without conscious awareness. Note that this does not mean that the faces were processed without attention, because attention was directed to the location of the stimulus and generally there was no competing task using attentional resources (Pessoa & Ungerleider, 2003). Converging behavioral, physiological, neuroimaging and neuropsychological evidence also shows that facial expressions that people are subjectively unaware of are registered, often by the amygdala. It is less clear whether faces that people are objectively unaware of activate the amygdala; fearful eye-whites presented alone seem to (Whalen et al., 2004) whereas fearful faces (which paradoxically contain fearful eye-whites) appear not to (Pessoa, Japee et al., 2006). One issue that may need to be examined in future work is that of individual differences in emotion processing (Hamann & Canli, 2004). Sensitivity to brief face presentations seems to vary across both expression type (Maxwell & Davidson, 2004) and between individuals (Pessoa et al., 2005), and could be related to anxiety levels (Etkin et al., 2004).

1.3. Is face processing mandatory?

Another characteristic of automatic processes is that they are mandatory or obligatory, in that processing is unavoidable and occurs regardless of one’s intentions (e.g., Wojciulik et al., 1998). Behavioral methods to test for mandatory processing include priming tasks with unfamiliar faces and face–name interference tasks with familiar faces, where participants read the names of famous people while trying to ignore photos of famous people. Functional imaging studies have compared responses to attended faces with those for unattended faces. Responses can be: (i) equivalent for attended and unattended faces (complete mandatory processing), (ii) significantly greater for attended than unattended faces (partial mandatory processing), or (iii) completely absent for unattended faces (no mandatory processing). Electrophysiological methods have been used to assess whether processing is mandatory at various stages. For example, early processing may be mandatory, but later stages may not be.

1.3.1. Can you ignore that face?

Detecting a face may be obligatory. Participants are slower to detect the curvature of a single line when it appears in a face configuration of three curved arcs than in a meaningless configuration, indicating that facial configurations cannot be ignored even when it would be advantageous to do so (Suzuki & Cavanagh, 1995). Participants required to judge the length of lines shown at the centre of the screen often noticed unexpected smiling schematic faces presented in the periphery but did not notice many other types of stimuli, including sad and neutral schematic faces (Mack & Rock, 1998). Patients with visual neglect and extinction extinguish contralesionally presented schematic faces less often than scrambled faces and other shapes (Vuilleumier, 2000). Happy, fearful and angry faces are also extinguished less often than faces with neutral expressions (Fox, 2002; Vuilleumier & Schwartz, 2001). So faces, especially expressive ones, seem to demand awareness.

Some information about the identity of ignored unfamiliar faces also seems to be processed. Khurana, Smith, and Baker (2000) presented arrays of five faces and asked participants to match the identity of two target faces and ignore the other distractor faces. Matching of the target faces was slowed when the target faces were distractors on the preceding trial (known as “negative priming”), suggesting that the unfamiliar distractor faces were represented and then inhibited. Face–name interference tasks also indicate that the identity of ignored familiar famous faces is processed, with people slower to categorize the occupation of a famous name (e.g., Mick Jagger) when the name was presented with a to-be-ignored face from an incongruent occupation (e.g., Margaret Thatcher) than a congruent one (e.g., Paul McCartney) (Young, Ellis, Flude, McWeeny, & Hay, 1986). Load theories of selective attention propose that distractors should be processed when just one relevant stimulus is present, so that spare capacity can “spill over” to irrelevant items (i.e., low attentional load), but not when many relevant stimuli are present and exhaust all available capacity (i.e., high attentional load) (Lavie, 1995, 2000). Mandatory semantic processing of faces seems more resistant to manipulations of attentional load than does the processing of other objects, with face–name interference sustained under conditions of high attentional load that eliminated object–name interference (Lavie, Ro, & Russell, 2003).
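As a minimal sketch of how such congruency effects are quantified (the reaction times below are invented for illustration and are not data from Young et al. or Lavie et al.), the interference effect is simply the slowing produced by an incongruent to-be-ignored face:

```python
def interference_effect(rt_incongruent_ms: float, rt_congruent_ms: float) -> float:
    """Slowing caused by a to-be-ignored face from an incongruent occupation."""
    return rt_incongruent_ms - rt_congruent_ms

# Hypothetical mean RTs (ms) in the style of face-name interference paradigms.
# Load theory predicts the effect should shrink under high attentional load,
# yet face-name interference has been reported to survive loads that
# eliminate object-name interference (Lavie, Ro, & Russell, 2003).
low_load = interference_effect(742, 698)   # 44 ms of interference
high_load = interference_effect(731, 702)  # 29 ms: reduced but not eliminated
print(low_load, high_load)
```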

Although mandatory processing may occur for “ignored” faces, neuroimaging and electrophysiological evidence suggests it may be partial rather than complete. Responses in the FFA are larger for attended than unattended faces (McCarthy, 2000; O’Craven, Downing, & Kanwisher, 1999; Wojciulik et al., 1998) and early components of face processing, such as the N170/M170, are enhanced for attended relative to unattended faces (Downing, Liu, & Kanwisher, 2001; Eimer, 2000; Holmes et al., 2003, but see Carmel & Bentin, 2002; Cauquil, Edmonds, & Taylor, 2000).

1.3.2. Can you ignore that emotion?

Some evidence suggests that emotion processing by the amygdala, especially in regard to fearful or threatening stimuli, may be completely mandatory and is not reduced in the absence of attention. In one study, participants either matched the identity of a pair of peripheral faces (i.e., attend faces) or houses (i.e., ignore faces) (Vuilleumier et al., 2001). Responses in the amygdala to fearful faces were not significantly reduced when the faces were unattended, suggesting that fear-responses mediated by the amygdala may be obligatory and not dependent upon focused attention. Equivalent amygdala responses to unattended and attended fearful faces presented at the fovea have also been reported (Anderson et al., 2003).

However, Pessoa, McKenna, Gutierrez, and Ungerleider (2002) argued that matching houses was not sufficiently attentionally demanding and may have allowed attentional resources to “spill over” and process the facial expressions (as per attentional load theory, Lavie, 1995). They presented expressive faces at the fovea in conjunction with a more demanding secondary task (i.e., judging the orientation of two similarly oriented lines in the periphery) and found that brain regions, including the amygdala, that responded differentially to emotional faces did so only when participants were attending to the facial stimuli. The conclusion was that facial expression coding is not mandatory and requires some degree of attention, although some have suggested the possibility that amygdala activation was present but below the threshold detectable with fMRI techniques (Taylor & Fragopanagos, 2005). Williams, McGlone, Abbott, and Mattingley (2005) also used a more attentionally demanding competing task than that used by Vuilleumier et al. (2001), with participants shown pairs of peripheral semi-transparent face–house composites and asked to match either the faces (attend faces) or houses (attend houses). In contrast to Pessoa et al., they found that amygdala activation was enhanced for unattended as compared to attended fearful faces.

Unfortunately, ERP evidence has not helped to clarify the picture. Whereas neuroimaging has demonstrated equivalent amygdala responses to unattended and attended fearful faces (Vuilleumier et al., 2001), a similar ERP paradigm found that early (between 100 and 120 ms) frontal responses observed for attended fearful faces were absent for unattended fearful faces (Holmes et al., 2003). However, because scalp ERPs may not record important amygdala activation, this is not strong evidence for an elimination of amygdala processing in the absence of attention. Instead, the diminished ERP responses for unattended faces may reflect the reduction in cortical processing measured by fMRI when faces are not attended (e.g., Anderson et al., 2003; Vuilleumier et al., 2001).

One explanation for the different patterns of imaging results may be an interaction between attentional load and the location of the face(s). Peripherally presented ignored faces may activate the amygdala under both low (Vuilleumier et al., 2001) and high (Williams, McGlone et al., 2005) attentional load manipulations, whereas a centrally presented face may only lead to strong amygdala responses under low-load (Anderson et al., 2003) but not high-load (Pessoa et al., 2002) conditions. Indeed, a recent study by Pessoa, Padmala, and Morland (2005) with centrally presented faces demonstrates greater (right) amygdala activation to fearful than neutral faces under low, but not medium or high, attentional load conditions. Enhanced processing of peripherally presented affective stimuli would be consistent with the proposal that one role of the amygdala is to direct attention to important items that are not the current focus of attention. There are more projections from the periphery to the superior colliculus (Berson & Stein, 1995), so subcortical information passed to the amygdala may favor peripheral input. Moreover, recent evidence that the amygdala preferentially responds to low- rather than high-spatial frequency information (Vuilleumier, Armony, Driver, & Dolan, 2003) suggests that the amygdala may be especially sensitive to peripheral faces. Studies varying both load and location of the face(s) are needed to test the interaction hypothesis outlined above.

Another (not mutually exclusive) possibility is that amygdala responses to unattended threatening stimuli may interact with an individual’s level of anxiety. Bishop, Duncan, and Lawrence (2004) used the paradigm devised by Vuilleumier et al. (2001) and found equivalent amygdala activation to both attended and ignored fearful faces when participants were highly anxious. In contrast, amygdala activation to fearful faces was enhanced for attended compared to ignored fearful faces when participants reported low levels of anxiety. Studying how individual differences in anxiety interact with attentional load and impact on amygdala activation will be an interesting area for future research.

1.3.3. Summary

Behavioral evidence that ignored familiar and unfamiliar faces are processed to some level indicates some mandatory processing of facial identity. Furthermore, although neural responses in face-selective cortex are reduced for ignored compared with attended faces, they are certainly not eliminated. Neuroimaging studies examining the mandatory processing of emotional expression have included manipulations of attentional load and also examined individual differences in anxiety. We expect that analysis of these variables will eventually lead to a fuller understanding of the nature of mandatory expression processing. At present, we can conclude that amygdala activation to fearful facial expressions appears completely mandatory under low attentional load conditions, but perhaps not under high attentional load conditions. There is also the intriguing possibility that amygdala activation to fearful facial expressions is completely mandatory for highly anxious individuals but not for those with lower levels of anxiety.

1.4. Attentional resources for face processing

Another characteristic of automatic processing is that it demands relatively few attentional resources, so it should experience little disruption from competing stimuli or tasks that use those resources (e.g., Schneider & Chein, 2003)³. We begin by reviewing evidence from visual search tasks, where participants actively search for a target amongst a number of distractors. As originally conceived, search times that did not vary with the number of distractors (commonly termed “pop-out”) indicated rapid, parallel, and perhaps capacity-free processing (e.g., Treisman & Gelade, 1980). We also consider whether attentional resources are necessary to encode the configural or holistic representations by which face recognition generally proceeds. Finally, we summarize the results of dual-task experiments with expressive faces.

³ Theoretically, the processing of faces may be mandatory without being capacity-free, and vice versa. In practice, however, dual-task paradigms have been used to assess both the mandatory nature and capacity demands of face coding. The relationship exists because mandatory processing is often assumed if information about identity or expression is encoded from faces while participants are completing another, attentionally demanding task.

1.4.1. Searching for faces

Simple, schematic neutral or unexpressive faces do not pop out of crowded displays composed of scrambled or inverted faces (Brown, Huey, & Findlay, 1997; Kuehn & Jolicoeur, 1994; Nothdurft, 1993), suggesting that face detection is neither parallel nor capacity-free. Early visual search studies using simplistic expressive faces reported pop-out for angry faces (Hansen & Hansen, 1988), but this has been attributed to low-level confounds rather than valence (Purcell, Stewart, & Skov, 1996). Hershler and Hochstein (2005) recently suggested that realistic face photographs pop out when presented amongst different types of non-face objects; however, VanRullen (in press) argues that face pop-out under these conditions mostly relies on low-level factors.

It is now considered more appropriate to conceive of search tasks as measuring processing efficiency and/or attentional biases, with shallower search slopes indicating more efficient processing of that stimulus type (see e.g., Huang & Pashler, 2005; Luck & Vecera, 2002; Wolfe, 1998). Reliable search asymmetries are found in tasks using expressive schematic faces with no low-level confounds, with search for angry faces amongst neutral or happy faces quicker than the reverse (Eastwood, Smilek, & Merikle, 2001; Fox et al., 2000; Ohman, Lundqvist, & Esteves, 2001). Studies using schematic stimuli suggest that angry faces may attract attention because of salient features, particularly the eyebrows (Lundqvist, Esteves, & Ohman, 1999), but that the presence of a correctly oriented facial configuration may also be essential (Eastwood et al., 2001; Fox et al., 2000; Tipples, Atkinson, & Young, 2002). Angry faces might be detected more efficiently than faces displaying positive emotions because the rapid detection of threatening stimuli has potential adaptive value (Fox et al., 2000; LeDoux, 1996; Ohman, 1993). Interestingly, although both angry and fearful faces signify threat, their ability to attract attention in visual search tasks using photographic-quality stimuli may vary. Specifically, when targets were embedded amongst large numbers of neutral face photos, search for an angry, but not fearful, face photo was more efficient than search for a happy face photo (Williams, Moss, Bradshaw, & Mattingley, 2005).
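Search efficiency in such tasks is typically indexed by the slope of the function relating response time to display set size. The following is an illustrative sketch with invented reaction times (not data from the studies above); a shallower slope for the angry-face target would reflect the search asymmetry just described:

```python
from statistics import linear_regression

# Hypothetical mean correct RTs (ms) by display set size for two target types.
set_sizes = [4, 8, 12]
rt_angry_target = [620, 648, 679]  # searching for an angry face among neutral faces
rt_happy_target = [655, 740, 822]  # searching for a happy face among neutral faces

for label, rts in [("angry", rt_angry_target), ("happy", rt_happy_target)]:
    slope, intercept = linear_regression(set_sizes, rts)
    # Shallower slopes (ms/item) indicate more efficient search;
    # near-zero slopes are the classic signature of "pop-out".
    print(f"{label}: {slope:.1f} ms/item")
```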

VanRullen, Reddy, and Koch (2004) have recently suggested that the “type” of attention measured by visual search tasks may be different to that measured by other paradigms, pointing out that some targets that trigger pop-out cannot be discriminated in dual-task situations, whereas others can be discriminated in dual-task situations but do not pop out. For instance, although faces do not pop out, determining the sex of a face is not substantially impaired by performing a concurrent letter discrimination task (Reddy, Wilken, & Koch, 2004). If dual-task paradigms measure attentional resources independent of those measured by visual search paradigms, then the conditions under which face detection occurs will determine whether attentional resources are required. Put simply, different attentional manipulations from different paradigms are likely to result in very different effects. A greater understanding of the nature of attentional resources and the conditions under which they operate is needed before we can clarify what is needed to detect faces.

1.4.2. Attentional resources to code identity

Recognizing facial identity appears to rely more upon the encoding of configural or holistic information than on representations of individual facial features (see Maurer, Le Grand, & Mondloch, 2002, for a review). Some evidence suggests that holistic face representations are coded with little or no attention (Boutet, Gentes-Hawn, & Chaudhuri, 2002). Boutet and colleagues measured holistic coding of identity with a variant of the “composite effect” (Young, Hellawell, & Hay, 1987), showing participants a series of face composites that were either “aligned” (when the top and bottom halves of two different faces are joined into a new face composite, which makes recognition of the top half more difficult) or “misaligned” (when the top and bottom halves are slightly offset, which facilitates recognition of the top half). Before seeing the face composites, participants viewed a series of overlapping semi-transparent house and face images and traced the outline of either the face (attend face) or house (ignore face). Subsequent recognition was better for misaligned than aligned faces, regardless of whether the faces were previously attended or ignored, leading Boutet and colleagues to suggest that facial identity is coded with little or no attention.

In contrast, other studies suggest that attention is needed to form holistic face representations of identity. Palermo and Rhodes (2002) measured holistic coding with the part-whole task, in which participants are shown a face and then asked to recognize the previously seen facial parts (i.e., eyes, nose and mouth) in a forced-choice recognition test containing either two “isolated parts” (e.g., nose 1 versus nose 2) or two “whole faces” that were identical except for the feature in question (e.g., nose 1 versus nose 2) (e.g., Tanaka, Kay, Grinnell, Stansfield, & Szechter, 1998). For upright faces, but not those initially shown scrambled or inverted, recognition of the face parts is superior when the parts are shown in a “whole face” context rather than as “isolated parts”, suggesting that upright faces are represented more holistically than other types of stimuli (Tanaka & Farah, 1993). Palermo and Rhodes (2002) presented a central target face and two peripheral flanker faces for a brief time and then tested holistic coding of the target face with the part-whole task (either the parts in a “whole face” or the parts in “isolation”). Participants who ignored the flanker faces demonstrated holistic coding (i.e., performance was better in the “whole face” than the “isolated part” condition), whereas those who were required to match the identity of the peripheral flanker faces were unable to holistically code the target face, suggesting that the encoding of facial identity is attentionally demanding (see Reinitz, Morrissey, & Demb, 1994, for similar conclusions).
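As a minimal sketch of how holistic coding is quantified in the part-whole task (the accuracy values below are invented for illustration, not data from Palermo and Rhodes), the holistic advantage is the difference between whole-face and isolated-part recognition accuracy:

```python
def holistic_advantage(acc_whole: float, acc_isolated: float) -> float:
    """Part-whole index: positive values indicate holistic coding.

    acc_whole: proportion correct recognizing a part in a whole-face context
    acc_isolated: proportion correct recognizing the same part in isolation
    """
    return acc_whole - acc_isolated

# Hypothetical pattern from a flanker manipulation in the style of
# Palermo & Rhodes (2002):
print(holistic_advantage(0.78, 0.66))  # ignore flankers: advantage present (+0.12)
print(holistic_advantage(0.68, 0.67))  # match flanker identities: advantage ~0
```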

In a subsequent experiment, Palermo and Rhodes (2002) found that matching a pair of peripheral upright faces eliminated holistic coding of the target faces, whereas holistic coding of the target faces was not disrupted when participants were required to match inverted flanker faces. Inverted faces are not coded holistically (see reviews by Farah, 2000; Maurer et al., 2002), leading Palermo and Rhodes to suggest that holistic face processing may have its own dedicated attentional or processing resources (see Young, de Haan, Newcombe, & Hay, 1990, for earlier suggestions that there may be face-specific attentional resources). Similar conclusions were drawn by Jenkins, Lavie, and Driver (2003), after finding that face–name interference is reduced by an additional upright face but not by inverted faces or objects. These face-specific holistic resources may be quite limited, with only one face able to be holistically coded in a brief time (Bindemann, Burton, & Jenkins, 2005; also see Boutet & Chaudhuri, 2001).

The previous studies suggest capacity limits for faces presented simultaneously. Other studies have examined temporal interference by presenting stimuli sequentially in rapid succession. When two targets are presented in close temporal proximity, the second target may be missed, presumably because of a temporal attentional bottleneck—this is known as an attentional blink (AB)⁴ (Raymond, Shapiro, & Arnell, 1992). Awh et al. (2004) found that discriminating between digits impaired the subsequent discrimination of letters but not faces, and suggested that this asymmetry may arise because both letters and digits are processed in a featural manner, whereas faces rely upon a separate configural/holistic processing channel. In contrast, Jackson and Raymond (in press) found that a featural discrimination task impaired the subsequent detection of an unfamiliar face and argued against the existence of a separate configural/holistic channel for faces.

Further complicating the issue of whether attentional resources are needed to encode facial identity is the possibility that very familiar or famous faces may not need attention, whereas unfamiliar faces might. Jackson and Raymond (in press) found an AB for unfamiliar faces but not for very familiar faces. Similarly, changes between two successive faces were easier to detect when one was famous (Buttle & Raymond, 2003). Finally, observers are much quicker to detect their own faces than those of unfamiliar people, even when these initially unfamiliar faces were presented hundreds of times, suggesting that the detection of very highly familiar faces can require fewer attentional resources than that of less familiar faces (Tong & Nakayama, 1999). A functional imaging study measuring FFA activation to both unfamiliar and familiar faces, when attended and unattended, would help to determine whether the role of attention differs depending on the familiarity of the face.

1.4.3. Attentional resources to code expression

As discussed in Section 1.3.2, Pessoa et al. (2002) found that responses in all brain regions responsive to expression, including the amygdala and FFA, were eliminated when the faces were not attended, and argued that facial expression coding is neither mandatory nor capacity-free. In contrast, the majority of dual-task studies diverting attention away from fearful faces have found that responses in the FFA are reduced but not eliminated, and moreover that amygdala activation is maintained, suggesting that cortical processing needs some resources whereas amygdala coding is both mandatory and resource-independent (Anderson et al., 2003; Vuilleumier et al., 2001; Williams, McGlone et al., 2005). Another possibility is that emotionally arousing information reduces, rather than eliminates, the need for attention (see Anderson, 2005). In a study using words rather than faces, Anderson and Phelps (2001) found that healthy people, but not those with left or bilateral amygdala lesions, were more likely to report the second target in an AB task when the word was aversive rather than neutral, suggesting that a critical function of the amygdala may be to enhance the initial perceptual encoding of emotionally significant stimuli, "making them less dependent on attentional resources to reach awareness" (p. 308).

4 Note that AB tasks may not be measuring relatively early perceptual processes but later post-perceptual stages of processing or consolidation into working memory (see Marois & Ivanoff, 2005).

At least in situations of low load, the amygdala appears to respond fully to fearful facial expressions, even when attention is diverted elsewhere. However, responses by the amygdala in the absence of attention may have reduced specificity. That is, while the amygdala preferentially responds to fearful expressions with attention, under conditions of reduced attention it may respond to any potentially threatening expression. For example, unattended disgusted faces also activate the amygdala, suggesting that coarse subcortical input is not sufficient to discriminate between fear and disgust without attentionally demanding cortical processing (Anderson et al., 2003). In contrast, amygdala responses to unattended happy faces are reduced compared to when they are attended (Williams, McGlone et al., 2005), suggesting that coarse subcortical input can discriminate between threatening (fearful) and non-threatening (happy) expressions but not between different types of potential threat (disgust and fear).

Attention may be necessary to examine the facial features that are especially useful to distinguish between various negative expressions (such as the mouth for disgust and the eyes for fear), and the amygdala may be involved in directing attention to these salient facial features. Eye movement studies indicate that both primates and humans fixate upon the facial features, especially the eyes and mouth, of emotional faces (see Green & Phillips, 2004, for a review). In contrast, patient S.M., who has early, bilateral amygdala damage and impaired recognition of fearful expressions, appears to scan emotional faces abnormally, with a particularly conspicuous absence of attention to the eyes, and perhaps also the mouth (Adolphs et al., 2005; see Spezio, Adolphs, Hurley, & Piven, 2007, for similar abnormal gaze patterns in autism). Although she does not spontaneously explore the eye region, S.M. is able to look at the eyes when directed to do so. Looking at the eyes enhanced her ability to recognize fear, suggesting that her deficit may not be in recognizing fearful expressions per se but rather in attending to facial features that aid recognition of fear. The results of this case study are intriguing but cannot explain how S.M. is able to recognize other facial expressions (e.g., anger) for which analysis of the eyes would also seem important (see Vuilleumier, 2005, for further discussion). Whereas some patients with bilateral or unilateral amygdala lesions are only impaired at recognizing facial expressions of fear (Adolphs, Tranel, & Damasio, 2001; Adolphs, Tranel, Damasio, & Damasio, 1995), others also have deficits recognizing other negative facial expressions, such as disgust and sadness (Adolphs et al., 1999; Anderson, Spencer, Fulbright, & Phelps, 2000). It would be instructive to examine whether eye movement patterns in patients with amygdala lesions vary depending on which expressions can and cannot be reliably recognized.

1.4.4. Summary

Faces presented among stimuli matched on low-level features do not pop out, suggesting that attentional resources are needed to detect a facial configuration. Evidence from diverse paradigms suggests that the encoding of facial identity requires attentional resources, perhaps face-specific attentional resources. As discussed, one caveat may be that highly familiar faces require fewer attentional resources than unfamiliar faces. Current evidence is consistent with the view that registration of fear by the amygdala requires minimal attentional resources, whereas more attentional resources are needed to discriminate between emotional expressions of potential threat (e.g., fear versus disgust).

1.5. Summary: is face processing rapid, non-conscious, mandatory, and capacity-free?

Detecting a facial configuration is rapid, and perhaps faster than detecting other stimuli. Detecting a facial configuration may also be obligatory. However, visual search studies suggest that faces are not detected in the complete absence of attentional resources.
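The visual search logic referred to here rests on how response time grows with display size. The sketch below, with invented data, shows the standard slope computation: a near-zero slope (ms/item) is taken as evidence of pop-out, whereas a steep slope implies attention-demanding serial search.

```python
# Illustrative sketch of the "pop-out" diagnostic in visual search: regress
# reaction time on display set size. A slope near 0 ms/item suggests parallel
# (preattentive) detection; steeper slopes imply serial, attention-demanding
# search. All data points here are invented for illustration.

def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) against set size (items)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

set_sizes = [4, 8, 12, 16]
popout_rts = [520, 525, 523, 530]   # roughly flat: a classic pop-out profile
serial_rts = [540, 660, 780, 900]   # ~30 ms/item: serial search profile

print(f"pop-out slope: {search_slope(set_sizes, popout_rts):5.1f} ms/item")
print(f"serial  slope: {search_slope(set_sizes, serial_rts):5.1f} ms/item")
```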

Identifying an individual is rapid. Some aspects of facial identity are encoded without conscious awareness, without intention, and even without focused attention. However, at least for unfamiliar faces, focused attention appears to be necessary for complete activation of the FFA and to encode the configural or holistic representations generally used to recognize individuals. There may also be face-specific resources, limiting the number of faces that can be simultaneously encoded and enabling faces to be ignored only when processing other faces.

Basic facial expressions such as fear and happiness appear to be rapidly categorized and identified (e.g., Batty & Taylor, 2003; see Table 1). Physiological evidence from facial EMG and fear-conditioning paradigms, behavioral evidence from affective priming studies and neuropsychological evidence from patients with blindsight suggest that, at the very least, people are able to distinguish between positive and negative facial expressions of which they are subjectively unaware. Much of the research included in the review considers that fearful faces are rapidly, preferentially, non-consciously and mandatorily registered by the amygdala with little or no reliance upon attentional resources. However, emerging evidence suggests that this may not be the case, at least not for some individuals (e.g., those who have low levels of anxiety) and for some tasks (e.g., when a competing task has a high attentional load or when conscious awareness is measured objectively).

Information from faces that are not consciously perceived or attended may be conveyed via subcortical pathways to the amygdala. This information may only be sufficient to discriminate emotional from unemotional faces (Williams et al., 2004) or more arousing from less arousing facial expressions (Killgore & Yurgelun-Todd, 2004), with attention needed for more precise (Anderson et al., 2003), and perhaps conscious (Pessoa, Japee et al., 2006), discrimination. As a trade-off for coarse representations, this route was argued to be a more rapid "threat detector" (e.g., Morris et al., 2001; Ohman, 2002; similar to that proposed for auditory signals by LeDoux, 1986). However, as yet there is no evidence that emotional information is conveyed more rapidly via subcortical than cortical pathways. This leaves us with the possibility that there are two routes for coding emotional significance, which operate at similar speeds. An important function of the subcortical route may be to direct further processing resources to significant stimuli (see Shipp, 2004, for the role of the pulvinar and superior colliculus in attention). This is discussed further in the next section.

The results of the research reviewed in the previous sections often appear conflicting. There does not seem to be a simple answer as to whether face processing is "automatic" or not. Rather than absolute attentional dependence or independence, it seems that resource requirements will vary depending on the type of face attribute encoded (e.g., identity versus fearful expression versus happy expression), the task parameters (e.g., low versus high attentional load), the brain region involved (e.g., the amygdala versus FFA) and individual differences (e.g., low versus high anxiety).

2. Selective attention to faces

The human visual system is capacity-limited, so that not all stimuli can be fully analyzed simultaneously. Visual attention selects some stimuli for further processing and allows others to be ignored (Desimone & Duncan, 1995; Kastner & Ungerleider, 2001). Bottom-up factors, such as stimulus salience, and top-down factors, such as expectations and current goals, interact to form a "salience map" that controls where, how and what is attended (Compton, 2003; Corbetta & Shulman, 2002; Feinstein, Goldin, Stein, Brown, & Paulus, 2002). Corbetta and Shulman (2002) suggest that a dorsal frontoparietal system is involved in both bottom-up and top-down selection, whereas a ventral frontoparietal system, predominantly in the right hemisphere, is a "circuit breaker", directing attention to behaviorally relevant, especially salient or unexpected, stimuli in a bottom-up fashion. The amygdala may tag a stimulus with emotional significance, enhancing the circuit-breaking capacity of the ventral attention circuit (Taylor & Fragopanagos, 2005).
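As a toy illustration of how such a salience map might combine its inputs, the sketch below mixes per-location bottom-up salience with top-down goal relevance using a simple weighted sum. The weighting scheme and values are illustrative assumptions, not a model proposed in the work cited above.

```python
# Toy sketch of the "salience map" idea: bottom-up stimulus salience and
# top-down goal relevance are combined into one priority value per location,
# and attention is directed to the maximum. Weights and values are invented.

def priority_map(bottom_up, top_down, w_goal=0.5):
    """Combine per-location bottom-up salience and top-down relevance
    (both assumed to lie in [0, 1]) into a weighted priority map."""
    return [(1 - w_goal) * s + w_goal * g for s, g in zip(bottom_up, top_down)]

bottom_up = [0.2, 0.9, 0.1, 0.4]   # e.g., an abrupt high-contrast onset at location 1
top_down  = [0.8, 0.1, 0.1, 0.3]   # e.g., the current task goal favors location 0

combined = priority_map(bottom_up, top_down)
print("priority:", [round(p, 2) for p in combined],
      "-> attend location", combined.index(max(combined)))
```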

While not referring specifically to faces, Corbetta and Shulman (2002, p. 208) state that, ". . . it is also possible that some stimuli attract attention because of some form of contingency that is hard-wired in the brain by learning, development or genetics." Faces, due to their biological and social significance, may be just the type of stimuli that can preferentially engage, recruit or "capture" attentional resources. In the following section we examine whether faces are more likely to recruit attentional resources than other objects. In addition, we review the evidence indicating that threatening faces receive attentional priority and outline the neural mechanisms that may be involved.

2.1. Is there a bias to attend to faces rather than other objects?

Two lines of evidence suggest that people are biased to attend to faces. First, newborns will visually track a schematic face farther into the periphery than a scrambled face (Goren, Sarty, & Wu, 1975; Johnson, Dziurawiec, Ellis, & Morton, 1991) and prefer to look at upright rather than inverted schematic faces (Mondloch et al., 1999). The preference for schematic face stimuli declines after the first month of life, but older infants display a preference for faces when tested with more realistic faces (Maurer & Barrera, 1981; Morton & Johnson, 1991). Attention may also favor other configurations with more elements in the upper half than the lower half (Macchi Cassia, Turati, & Simion, 2004), but this bias seems likely to have evolved to ensure that faces are attended (see e.g., Morton & Johnson, 1991).

Second, faces might have an advantage in capturing attention when they are competing with other objects. Ro, Russell, and Lavie (2001) presented flickering displays (making changes difficult to detect; Rensink, O'Regan, & Clark, 1997) consisting of one unfamiliar face and five different common objects, and found that changes to faces (e.g., a female face changing to another female face) were detected both more rapidly and more accurately than changes to objects (e.g., an apple changing to a broccoli). The probability of detecting a change is increased by directing attention to the object or location of the change (see Simons, 2000), so these results suggest that faces may have a special capacity to recruit attention when competing for attentional resources. However, when one object was presented among a number of faces, changes to the objects were detected more rapidly than those to faces, indicating that a change detection advantage can be observed for a distinct or unique category, regardless of its significance (Palermo & Rhodes, 2003; also see Pashler & Harris, 2001, for an example of unique items attracting attention when participants view scenes without any specific goals or expectations).
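The flicker paradigm used in these studies can be summarized schematically. The sketch below generates the alternating original/changed/blank frame sequence that makes the change hard to find; the frame durations and display contents are assumptions for illustration, not the parameters of the studies cited above.

```python
# Illustrative sketch of the flicker change-detection paradigm: the original
# and changed displays alternate with blank gaps, so the change produces no
# unique motion transient and must be found with focused attention.
# Durations and display contents below are placeholder assumptions.

def flicker_sequence(original, changed, n_cycles=2, stim_ms=240, blank_ms=80):
    """Yield (display, duration_ms) frames: A, blank, A', blank, repeated."""
    for _ in range(n_cycles):
        for display in (original, changed):
            yield display, stim_ms
            yield "[blank]", blank_ms

original = "face1 apple chair clock scissors pen"
changed  = "face2 apple chair clock scissors pen"   # only the face differs

for display, ms in flicker_sequence(original, changed):
    print(f"{ms:3d} ms: {display}")
```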

The results from infants suggest that faces may preferentially engage attentional resources. The change detection results also suggest that faces can preferentially engage attention, although not when competing against novel or unique stimuli.

2.2. Is there a bias to attend to expressive faces?

Converging evidence from behavioral, functional imaging and electrophysiological studies suggests that spatial attention is preferentially "captured" by emotional, particularly threatening, facial expressions. In an emotional Stroop task, naming the color of a face takes longer when the expression is angry rather than neutral, suggesting that angry faces recruit more attentional resources (van Honk, Tuiten, De Haan, van den Hout, & Stam, 2001). Similar advantages in orienting attention to negative expressions occur in the dot-probe task, where participants see briefly presented masked pairs of faces, followed by a small dot in one of the two positions previously occupied by the faces. Detection or discrimination of the dot-probe is speeded when the probe follows an angry rather than a neutral or happy face (Mogg & Bradley, 1999), an angry fear-conditioned face rather than one that was not previously aversively conditioned (Armony & Dolan, 2002), and a fearful face rather than a neutral face (Pourtois et al., 2004), indicating that spatial attention is directed toward the location of potential threat. Functional imaging suggests that this attentional modulation is associated with increased activation in frontal (including the ventromedial prefrontal cortex) and parietal spatial attention networks (Armony & Dolan, 2002). A caveat is that attention appears to be oriented away from mildly threatening facial expressions, with what counts as mild depending on an individual's anxiety profile (Wilson & MacLeod, 2003). Attention may not only be preferentially directed toward threatening stimuli, but may also be sustained on potentially threatening faces, making it more difficult or time-consuming to disengage attention from threat, particularly for individuals with heightened anxiety levels (Fox, Russo, & Dutton, 2002; Schutter, de Haan, & van Honk, 2004).
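The dot-probe findings above are usually expressed as a bias score: the difference in response time between probes appearing at the neutral face's location and probes appearing at the threatening face's location. The sketch below, with invented RTs, shows this computation; a positive score indicates that attention was drawn toward threat.

```python
# Illustrative sketch of the dot-probe bias score: the probe replaces either
# the emotional or the neutral face, and faster responses at the emotional
# location indicate attention was oriented there. RTs below are invented.

def attention_bias(rt_probe_at_neutral, rt_probe_at_threat):
    """Bias score in ms: positive = attention drawn toward the threat face."""
    return rt_probe_at_neutral - rt_probe_at_threat

# Hypothetical mean RTs (ms) over congruent/incongruent trials:
rt_threat_location = 485.0   # probe appears where the angry face was
rt_neutral_location = 512.0  # probe appears where the neutral face was

bias = attention_bias(rt_neutral_location, rt_threat_location)
print(f"attentional bias toward threat: {bias:+.1f} ms")
```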

2.3. Bottom-up and top-down control of selective attention to facial expressions

Spatial attention is controlled by both exogenous bottom-up factors, such as stimulus salience, and endogenous top-down factors, such as the intentions and goals of the observer (see e.g., Corbetta & Shulman, 2002; Desimone & Duncan, 1995; Yantis, 1998). Information about facial expressions from visual cortical areas (Fig. 1, solid lines) and the pulvinar thalamus (Fig. 1, dashed lines) is evaluated for emotional significance by the amygdala (LeDoux, 2000)5. As outlined in Section 1.1.1, this initial bottom-up evaluation is rapid, with coarse categorization of affect within the first 100 ms. The amygdala is then able to modulate visual cortical processing via both direct and indirect means (LeDoux, 2000).

The amygdala has extensive re-entrant projections to all regions of visual cortex, so once activated it is able to directly regulate cortical perceptual processing and thus the kind of input it continues to receive (Amaral, Price, Pitkanen, & Carmichael, 1992; Davis & Whalen, 2001; LeDoux, 2000). For example, amygdala activation is associated with enhanced occipital and fusiform activation for fearful compared to neutral faces in healthy individuals (Ioannides, Liu, Kwapien, Drozdz, & Streit, 2000; Morris, Friston et al., 1998), but not in people with amygdala damage (Vuilleumier & Pourtois, 2007; Vuilleumier, Richardson, Armony, Driver, & Dolan, 2004).

5 The amygdala is not the only brain region likely to evaluate stimuli for emotional significance. Phillips, Drevets, Rauch, and Lane (2003a,b) suggest that a ventral system, including the amygdala, insula, ventral striatum and ventral regions of the anterior cingulate and prefrontal cortex, may be involved in identifying the emotional significance of stimuli. The amygdala may be particularly involved in the evaluation of fear, whereas other structures may be tuned towards other expressions, such as the insula for disgust. We have focused on the amygdala because more is known about how the amygdala and selective attention systems interact.

The amygdala also influences visual cortical processing indirectly, via reciprocal connections to regions of orbital and ventromedial prefrontal cortex (VMPFC) (see Fig. 1) (Barbas, 2000; Bush, Luu, & Posner, 2000; Groenewegen & Uylings, 2000; Holland & Gallagher, 2004; Stefanacci & Amaral, 2002). These "affective" VMPFC regions appear to assess the emotional value of stimuli, including faces (Davidson & Irwin, 1999; Keane, Calder, Hodges, & Young, 2002), and may be the source of top-down guidance of selective attention to emotional stimuli (Yamasaki, LaBar, & McCarthy, 2002). PFC responses are linked with amygdala activation: enhanced activity in PFC regions is correlated with the attenuation of amygdala response that occurs during cognitive evaluation of facial expressions (Hariri, Bookheimer, & Mazziotta, 2000). It appears that both cognitive and attentional factors are involved in modulating amygdala responses to fearful faces. Attentional factors are implicated because amygdala responses are greater for fearful than neutral attended faces, but are equivalent for unattended faces presented in conjunction with a demanding primary task (Pessoa, Padmala et al., 2005, see Section 1.3.2). Cognitive modulation is important because performing the demanding primary task without the presence of faces was also associated with a reduction in activation in a number of brain regions, including the amygdala and VMPFC (Pessoa, Padmala et al., 2005).

Top-down stimulus selection may also occur via a dorsal route from posterior parietal cortex to dorsolateral prefrontal cortex (DLPFC) (Holland & Gallagher, 2004; Yamasaki et al., 2002). The "cognitive" DLPFC region is closely connected to cortical sensory areas and forms part of a distributed attentional network involved in selecting and maintaining task-relevant representations in working memory, known as an "attentional set" (Banich et al., 2000; Curtis & D'Esposito, 2003), regardless of their emotionality (Compton, 2003; Yamasaki et al., 2002) (see Fig. 1, blue shading). Information from the "affective" and "cognitive" streams needs to be integrated, perhaps by the anterior cingulate gyrus (Fichtenholtz et al., 2004; Yamasaki et al., 2002) and more lateral PFC regions (Gray, Braver, & Raichle, 2002). Moreover, these "cognitive" and "affective" areas interact, with affective regions deactivated during attentionally demanding cognitive tasks and cognitive regions deactivated during emotional processing (Drevets & Raichle, 1998).

2.4. Summary: selective attention to faces and facial expressions

People are more likely to attend to faces than to other, more common objects under some, but not all, circumstances. Converging evidence also strongly suggests that emotional, particularly threatening, facial expressions receive enhanced processing. This enhanced processing appears to be mediated by direct re-entrant processing from the amygdala to all cortical visual areas and via interacting prefrontal attentional networks that preferentially allocate spatial attention to emotional, especially threatening, facial expressions.

3. Future directions

Compton (2003) has proposed that emotionally significant stimuli receive enhanced processing, both preattentively and by preferentially recruiting attentional resources. Faces are among the most biologically and socially important stimuli in the human environment, and are certainly emotionally significant. One would, therefore, expect them to receive enhanced processing. The research reviewed here confirms that to some extent they do, with evidence for some preattentive processing of faces and preferential engagement of attentional resources when compared with many other kinds of stimuli.

We focused on reviewing research that has examined how attention is involved in detecting faces, recognizing facial identity and registering and discriminating between facial expressions of emotion. However, as outlined in the Introduction, faces also convey other types of information: race, sex, attractiveness, direction of eye gaze and kinship. Examining how attention interacts with some of these face attributes (e.g., eye gaze, see Hoffman & Haxby, 2000) is an active area of current research, whereas other aspects (e.g., kin recognition or other types of facial movements that do not convey expression) have been neglected.

Two important areas for future research stand out. First, many psychiatric and neurological disorders, such as autism, Williams syndrome, Huntington's disease, obsessive–compulsive disorder, social phobia, alcoholism, post-traumatic stress disorder, and schizophrenia, are characterized by impaired processing of facial identity and/or facial expression (Green & Phillips, 2004; Phillips et al., 2003b). Some of these face and emotion processing deficits may not appear to involve attentional impairments or abnormalities but rather breakdowns in higher-level processes. However, research is needed to determine whether some of these difficulties originate from impaired interactions between attention and face perception structures and/or attention and emotion processing systems.

Second, a large proportion of the research contained in this review has been conducted with adult participants. However, the role of attention in processing facial identity and facial expression may vary with development. Both the amygdala and PFC regions develop dramatically between childhood and adulthood, especially during adolescence (Nelson et al., 2002), and may contribute to increasing self-control over emotional behavior (Killgore, Oki, & Yurgelun-Todd, 2001). Indeed, Monk et al. (2003) found that the amygdala and regions of ventromedial PFC are more active in adolescents than adults when attention is directed toward non-emotional aspects of fearful faces, suggesting that adolescents may find emotional information more distracting than adults. Developmental studies, in which neural development is linked with changes in performance, may be particularly useful in understanding how attention and face processing interact.

Acknowledgements

We are grateful to Andy Calder, Max Coltheart and two anonymous reviewers for very helpful suggestions on earlier drafts of this manuscript.

References

Adams, R. B., Jr., Gordon, H. L., Baird, A. A., Ambady, N., & Kleck, R. E. (2003). Effects of gaze on amygdala sensitivity to anger and fear faces. Science, 300, 1536–1537.

Adolphs, R. (2002a). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.

Adolphs, R. (2002b). Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1(1), 21–62.

Adolphs, R., Gosselin, F., Buchanan, T. W., Tranel, D., Schyns, P. G., & Damasio, A. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433, 68–72.

Adolphs, R., Tranel, D., & Damasio, H. (2001). Emotion recognition from faces and prosody following temporal lobectomy. Neuropsychology, 15(3), 396–404.

Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. R. (1995). Fear and the human amygdala. Journal of Neuroscience, 15, 5879–5892.

Adolphs, R., Tranel, D., Hamann, S., Young, A. W., Calder, A. J., Anderson, A., et al. (1999). Recognition of facial emotion in nine subjects with bilateral amygdala damage. Neuropsychologia, 37, 1111–1117.

Allison, T., Ginter, H., McCarthy, G., Nobre, A. C., Puce, A., Luby, M., et al. (1994). Face recognition in human extrastriate cortex. Journal of Neurophysiology, 71(2), 821–825.

Amaral, D. G., Price, J. L., Pitkanen, A., & Carmichael, S. T. (1992). Anatomical organization of the primate amygdaloid complex. In J. P. Aggleton (Ed.), The amygdala: Neurobiological aspects of emotion, memory and mental dysfunction (pp. 1–66). New York: Wiley.

Anderson, A. K. (2005). Affective influences on the attentional dynamics supporting awareness. Journal of Experimental Psychology: General, 134(2), 258–281.

Anderson, A. K., Christoff, K., Panitz, D., De Rosa, E., & Gabrieli, J. D. E. (2003). Neural correlates of the automatic processing of threat facial signals. The Journal of Neuroscience, 23(13), 5627–5633.

Anderson, A. K., & Phelps, E. A. (2001). Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature, 411, 305–309.

Anderson, A. K., Spencer, D. D., Fulbright, R. K., & Phelps, E. A. (2000). Contribution of the anteromedial temporal lobes to the evaluation of facial emotion. Neuropsychology, 14(4), 526–536.

Armony, J. L., & Dolan, R. J. (2002). Modulation of spatial attention by fear-conditioned stimuli: An event-related fMRI study. Neuropsychologia, 40, 817–826.

Awh, E., Serences, J., Laurey, P., Dhaliwal, H., van der Jagt, T., & Dassonville, P. (2004). Evidence against a central bottleneck during the attentional blink: Multiple channels for configural and featural processing. Cognitive Psychology, 48(1), 95–126.

Banich, M. T., Milham, M. P., Atchley, R. A., Cohen, N. J., Webb, A., Wszalek, T., et al. (2000). Prefrontal regions play a predominant role in imposing an attentional 'set': Evidence from fMRI. Cognitive Brain Research, 10(1–2), 1–9.

Barbas, H. (2000). Connections underlying the synthesis of cognition, memory, and emotion in primate prefrontal cortices. Brain Research Bulletin, 52(5), 319–330.

Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer (Ed.), Advances in social cognition: Vol. 10. The automaticity of everyday life (pp. 1–61). Mahwah, NJ: Lawrence Erlbaum.

Batty, M., & Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17(3), 613–620.

Bauer, R. (1984). Automatic recognition of names and faces: A neuropsychological application of the guilty knowledge test. Neuropsychologia, 22, 457–469.

Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.

Bentin, S., & Carmel, D. (2002). Accounts for the N170 face-effect: A reply to Rossion, Curran, & Gauthier. Cognition, 85, 197–202.

Berson, D. M., & Stein, J. J. (1995). Retinotopic organization of the superior colliculus in relation to the retinal distribution of afferent ganglion cells. Visual Neuroscience, 12(4), 671–686.

Bindemann, M., Burton, A. M., & Jenkins, R. (2005). Capacity limits for face processing. Cognition, 98, 177–197.

Bishop, S. J., Duncan, J., & Lawrence, A. D. (2004). State anxiety modulation of the amygdala response to unattended threat-related stimuli. The Journal of Neuroscience, 24(46), 10364–10368.

Blake, R., & Logothetis, N. K. (2002). Visual competition. Nature Reviews Neuroscience, 3(1), 13–23.

Boutet, I., & Chaudhuri, A. (2001). Multistability of overlapped face stimuli is dependent upon orientation. Perception, 30, 743–753.

Boutet, I., Gentes-Hawn, A., & Chaudhuri, A. (2002). The influence of attention on holistic face encoding. Cognition, 84, 321–341.

Brown, V., Huey, D., & Findlay, J. M. (1997). Face detection in peripheral vision: Do faces pop out? Perception, 26, 1555–1570.

Bruce, V., & Young, A. W. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.

Bullier, J. (2001). Integrated model of visual processing. Brain Research Reviews, 36, 96–107.

Bush, G., Luu, P., & Posner, M. I. (2000). Cognitive and emotional influences in anterior cingulate cortex. Trends in Cognitive Sciences, 4(6), 215–222.

Buttle, H., & Raymond, J. E. (2003). High familiarity enhances visual change detection for face stimuli. Perception & Psychophysics, 65(8), 1296–1306.

Calvo, M. G., & Esteves, F. (2005). Detection of emotional faces: Low perceptual threshold and wide attentional span. Visual Cognition, 12(1), 13–27.

Carmel, D., & Bentin, S. (2002). Domain specificity versus expertise: Factors influencing distinct processing of faces. Cognition, 83, 1–29.

Cauquil, A. S., Edmonds, G. E., & Taylor, M. J. (2000). Is the face-sensitive N170 the only ERP not affected by selective attention? NeuroReport, 11, 2167–2171.

Compton, R. (2003). The interface between emotion and attention: A review of evidence from psychology and neuroscience. Behavioral and Cognitive Neuroscience Reviews, 2(2), 115–129.

Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews: Neuroscience, 3, 201–215.

Cowey, A. (2004). The 30th Sir Fredrick Bartlett lecture: Fact, artefact, and myth about blindsight. The Quarterly Journal of Experimental Psychology, 57A(4), 577–609.

Curtis, C. E., & D'Esposito, M. (2003). Persistent activity in the prefrontal cortex during working memory. Trends in Cognitive Sciences, 7(9), 415–423.

Davidson, R. J., & Irwin, W. (1999). The functional neuroanatomy of emotion and affective style. Trends in Cognitive Sciences, 3(1), 11–21.

Davis, M., & Whalen, P. (2001). The amygdala: Vigilance and emotion. Molecular Psychiatry, 6, 13–34.

de Gelder, B., Vroomen, J., Pourtois, G., & Weiskrantz, L. (1999). Non-conscious recognition of affect in the absence of striate cortex. NeuroReport, 10, 3759–3763.

de Haan, E. H. F., Young, A. W., & Newcombe, F. (1987). Face recognition without awareness. Cognitive Neuropsychology, 4, 385–415.

Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.

Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11(1), 86–89.

Downing, P., Liu, J., & Kanwisher, N. (2001). Testing cognitive models of visual attention with fMRI and MEG. Neuropsychologia, 39(12), 1329–1342.

Drevets, W. C., & Raichle, M. E. (1998). Reciprocal suppression of regional cerebral blood flow during emotional versus higher cognitive processes: Implications for interactions between emotion and cognition. Cognition and Emotion, 12(3), 353–385.

Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63, 1004–1013.

Eger, E., Jednyak, A., Iwaki, T., & Skrandies, W. (2003). Rapid extraction of emotional expression: Evidence from evoked potential fields during brief presentation of face stimuli. Neuropsychologia, 41, 808–817.

Eimer, M. (2000). Attentional modulations of event-related brain potentials sensitive to faces. Cognitive Neuropsychology, 17, 103–116.

Eimer, M., & Holmes, A. (2002). An ERP study on the time course of emotional face processing. NeuroReport, 13, 427–431.

Eimer, M., & Holmes, A. (2007). Event-related brain potential correlates of emotional face processing. Neuropsychologia, 45, 15–31.

Eimer, M., Holmes, A., & McGlone, F. P. (2003). The role of spatial attention in the processing of facial expression: An ERP study of rapid brain responses to six basic emotions. Cognitive, Affective & Behavioral Neuroscience, 3(2), 97–110.

Ekman, P. (1999). Basic emotions. In T. Dagleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 310–320). Sussex, UK: John Wiley & Sons, Ltd.

Esslen, M., Pascual-Marqui, R. D., Hell, D., Kochi, D., & Lehmann, D. (2004). Brain areas and the time course of emotional processing. NeuroImage, 21(4), 1189–1203.

Esteves, F., Dimberg, U., & Ohman, A. (1994). Automatically elicited fear: Conditioned skin conductance responses to masked facial expressions. Cognition and Emotion, 8, 393–413.

Etkin, A., Klemenhagen, K. C., Dudman, J. T., Rogan, M. T., Hen, R., Kandel, E. R., et al. (2004). Individual differences in trait anxiety predict the response of the basolateral amygdala to unconsciously processed fearful faces. Neuron, 44, 1043–1055.

Farah, M. J. (2000). The cognitive neuroscience of vision. MA, USA: Blackwell Publishers.

Feinstein, J. S., Goldin, P. R., Stein, M. B., Brown, G. G., & Paulus, M. P. (2002). Habituation of attentional networks during emotion processing. NeuroReport, 13(10), 1255–1258.

Fichtenholtz, H. M., Dean, H. L., Dillon, D. G., Yamasaki, H., McCarthy, G., & LaBar, K. S. (2004). Emotion-attention network interactions during a visual oddball task. Cognitive Brain Research, 20, 67–80.

Fox, E. (2002). Processing emotional facial expressions: The role of anxiety and awareness. Cognitive, Affective & Behavioral Neuroscience, 2(1), 52–63.

Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition and Emotion, 14, 61–92.

Fox, E., Russo, R., & Dutton, K. (2002). Attentional bias for threat: Evidence for delayed disengagement from emotional faces. Cognition and Emotion, 16(3), 355–379.

Goren, C. C., Sarty, M., & Wu, P. Y. K. (1975). Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics, 56, 544–549.

Gray, J. R., Braver, T. S., & Raichle, M. E. (2002). Integration of emotion and cognition in the lateral prefrontal cortex. Proceedings of the National Academy of Science, 99(6), 4115–4120.

Green, M. J., & Phillips, M. L. (2004). Social threat perception and the evolution of paranoia. Neuroscience and Biobehavioral Reviews, 28, 333–342.

Groenewegen, H. J., & Uylings, H. B. M. (2000). The prefrontal cortex and the integration of sensory, limbic and autonomic information. Progress in Brain Research, 126, 3–28.

Gur, R. C., Sara, R., Hagendoorn, M., Marom, O., Hughett, P., Macy, L., et al. (2002). A method for obtaining 3-dimensional facial expressions and its standardization for use in neurocognitive studies. Journal of Neuroscience Methods, 115, 137–143.

Halgren, E., Baudena, P., Heit, G., Clarke, J. M., & Marinkovic, K. (1994). Spatio-temporal stages in face and word processing. 1: Depth-recorded potentials in the human occipital, temporal and parietal lobes. Journal of Physiology, 88, 1–50.

Halgren, E., Raij, T., Marinkovic, K., Jousmaeki, V., & Hari, R. (2000). Cognitive response profile of the human fusiform face area as determined by MEG. Cerebral Cortex, 10(1), 69–81.

Hamann, S., & Canli, T. (2004). Individual differences in emotion processing. Current Opinion in Neurobiology, 14, 233–238.

Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.

Hariri, A. R., Bookheimer, S. Y., & Mazziotta, J. C. (2000). Modulating emotional responses: Effects of a neocortical network on the limbic system. NeuroReport, 11, 43–48.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51, 59–67.

Hershler, O., & Hochstein, S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45, 1707–1724.

Hoffman, E. A., & Haxby, J. V. (2000). Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nature Neuroscience, 3, 80–84.

Holland, P. C., & Gallagher, M. (2004). Amygdala-frontal interactions and reward expectancy. Current Opinion in Neurobiology, 14, 148–155.

Holmes, A., Vuilleumier, P., & Eimer, M. (2003). The processing of emotional facial expression is gated by spatial attention: Evidence from event-related brain potentials. Cognitive Brain Research, 16, 174–184.

Hopfinger, J. B., Buonocore, M. H., & Mangun, G. R. (2000). The neural mechanisms of top-down attentional control. Nature Neuroscience, 3(3), 284–291.

Huang, L., & Pashler, H. (2005). Attention capacity and task difficulty in visual search. Cognition, 94, B101–B111.

Ioannides, A. A., Liu, L. C., Kwapien, J., Drozdz, S., & Streit, M. (2000). Coupling of regional activations in a human brain during an object and face affect recognition task. Human Brain Mapping, 11, 77–92.

Itier, R. J., & Taylor, M. J. (2004). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex, 14(2), 132–142.

Jackson, M. C., & Raymond, J. E. (in press). The role of attention and familiarity in face identification. Perception & Psychophysics.

Jeffreys, D. A. (1989). A face-responsive potential recorded from the human scalp. Experimental Brain Research, 78(1), 193–202.

Jenkins, R., Lavie, N., & Driver, J. (2003). Ignoring famous faces: Category-specific dilution of distractor interference. Perception & Psychophysics, 65(2), 298–309.

Johnson, M. H., Dziurawiec, S., Ellis, H. D., & Morton, J. (1991). Newborns' preferential tracking of faces and its subsequent decline. Cognition, 40, 1–19.

Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17, 4302–4311.

Kastner, S., & Ungerleider, L. G. (2001). The neural basis of biased competition in human visual cortex. Neuropsychologia, 39, 1263–1276.

Kawasaki, H., Adolphs, R., Kaufman, O., Damasio, H., Damasio, A. R., Granner, M., et al. (2001). Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex. Nature Neuroscience, 4(1), 15–16.

Keane, J., Calder, A. J., Hodges, J. R., & Young, A. W. (2002). Face and emotion processing in frontal variant frontotemporal dementia. Neuropsychologia, 40(6), 655–665.

Khurana, B., Smith, W. C., & Baker, M. T. (2000). Not to be and then to be: Visual representation of ignored unfamiliar faces. Journal of Experimental Psychology: Human Perception and Performance, 26, 246–263.

Killgore, W. D. S., Oki, M., & Yurgelun-Todd, D. A. (2001). Sex-specific developmental changes in amygdala responses to affective faces. NeuroReport, 12(2), 427–433.

Killgore, W. D. S., & Yurgelun-Todd, D. A. (2004). Activation of the amygdala and anterior cingulate during nonconscious processing of sad versus happy faces. NeuroImage, 21(4), 1215–1223.

Krolak-Salmon, P., Fischer, H., Vighetto, A., & Mauguiere, F. (2001). Processing of facial emotional expression: Spatio-temporal data as assessed by scalp event-related potentials. European Journal of Neuroscience, 13, 987–994.

Krolak-Salmon, P., Henaff, M. A., Isnard, J., Tallon-Baudry, C., Guenot, M., Vighetto, A., et al. (2003). An attention modulated response to disgust in human ventral anterior insula. Annals of Neurology, 53, 446–453.

Krolak-Salmon, P., Henaff, M.-A., Vighetto, A., Bertrand, O., & Mauguiere, F. (2004). Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: A depth electrode ERP study in humans. Neuron, 42, 665–676.

Kuehn, S. M., & Jolicoeur, P. (1994). Impact of quality of the image, orientation, and similarity of the stimuli on visual search for faces. Perception, 23, 95–122.

Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception & Performance, 21, 451–468.

Lavie, N. (2000). Selective attention and cognitive control: Dissociating attentional functions through different types of load. In S. Monsell & J. Driver (Eds.), Control and cognitive processes: Attention and performance XVIII (pp. 175–194). Cambridge, MA: MIT Press.

Lavie, N., Ro, T., & Russell, C. (2003). The role of perceptual load in processing distractor faces. Psychological Science, 14(5), 510–515.

LeDoux, J. E. (1986). Sensory systems and emotion: A model of affective processing. Integrative Psychiatry, 4, 237–248.

LeDoux, J. E. (1996). The emotional brain. New York: Simon and Schuster.

LeDoux, J. E. (1998). Fear and the brain: Where have we been, and where are we going? Biological Psychiatry, 44, 1229–1238.

LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155–184.

Liu, J., Harris, A., & Kanwisher, N. (2002). Stages of processing in face perception: An MEG study. Nature Neuroscience, 5(9), 910–916.

Liu, L., Ioannides, A. A., & Streit, M. (1999). Single trial analysis of neurophysiological correlates of the recognition of complex objects and facial expressions of emotion. Brain Topography, 11, 291–303.

Loffler, G., Gordon, G. E., Wilkinson, F., Goren, D., & Wilson, H. R. (2005). Configural masking of faces: Evidence for high-level interactions in face perception. Vision Research, 45(17), 2287–2297.

Luck, S. J., & Vecera, S. P. (2002). Attention. In H. Pashler & S. Yantis (Eds.), Steven's handbook of experimental psychology: Vol. 1. Sensation and perception (3rd ed., pp. 235–286). New York: John Wiley and Sons.

Lundqvist, D., Esteves, F., & Ohman, A. (1999). The face of wrath: Critical features for conveying facial threat. Cognition and Emotion, 13, 691–711.

Macchi Cassia, V., Turati, C., & Simion, F. (2004). Can a nonspecific bias toward top-heavy patterns explain newborns' face preference? Psychological Science, 15(6), 379–383.

Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.

Marois, R., & Ivanoff, J. (2005). Capacity limits of information processing in the brain. Trends in Cognitive Sciences, 9(6), 296–305.

Maurer, D., & Barrera, M. (1981). Infants' perception of natural and distorted arrangements of a schematic face. Child Development, 47, 523–527.

Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.

Maxwell, J. S., & Davidson, R. J. (2004). Unequally masked: Indexing differences in the perceptual salience of "unseen" facial expressions. Cognition and Emotion, 18(8), 1009–1026.

McCarthy, G. (2000). Physiological studies of face processing in humans. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences. Cambridge, MA: Bradford Books/MIT Press.

Mineka, S., & Ohman, A. (2002). Phobias and preparedness: The selective, automatic, and encapsulated nature of fear. Biological Psychiatry, 52(19), 927–937.

Mogg, K., & Bradley, B. P. (1999). Orienting of attention to threatening facial expressions presented under conditions of restricted awareness. Cognition and Emotion, 13, 713–740.

Mondloch, C. J., Lewis, T. L., Budreau, D. R., Maurer, D., Dannemiller, J. L., Stephens, B. R., et al. (1999). Face perception during early infancy. Psychological Science, 10, 419–422.

Monk, C. S., McClure, E. B., Nelson, E. E., Zarahn, E., Bilder, R. M., Leibenluft, E., et al. (2003). Adolescent immaturity in attention-related brain engagement to emotional facial expressions. NeuroImage, 20(1), 420–428.

Morris, J. S., de Gelder, B., Weiskrantz, L., & Dolan, R. J. (2001). Differential extrageniculostriate and amygdala responses to presentation of emotional faces in a cortically blind field. Brain, 124, 1241–1252.

Morris, J. S., Friston, K., Buechel, C., Frith, C., Young, A., Calder, A., et al. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121(1), 47–57.

Morris, J. S., Ohman, A., & Dolan, R. J. (1998). Conscious and unconscious emotional learning in the human amygdala. Nature, 393, 467–470.

Morris, J. S., Ohman, A., & Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating 'unseen' fear. Proceedings of the National Academy of Science, 96, 1680–1685.

Morton, J., & Johnson, M. H. (1991). CONSPEC and CONLERN: A two-process theory of infant face recognition. Psychological Review, 98, 164–181.

Murphy, S. T., & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality & Social Psychology, 64(5), 723–739.

Nelson, C. A., Bloom, F. E., Cameron, J. L., Amaral, D. G., Dahl, R. E., & Pine, D. S. (2002). An integrative, multidisciplinary approach to the study of brain-behavior relations in the context of typical and atypical development. Development and Psychopathology, 14, 499–520.

Nothdurft, H. C. (1993). Faces and facial expressions do not pop out. Perception, 22, 1287–1298.

O'Craven, K. M., Downing, P. E., & Kanwisher, N. (1999). fMRI evidence for objects as the units of attentional selection. Nature, 401, 584–587.

Ohman, A. (1993). Fear and anxiety as emotional phenomenon: Clinical phenomenology, evolutionary perspectives and information processing mechanisms. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 511–536). New York: Guildford Press.

Ohman, A. (1997). As fast as the blink of an eye: Evolutionary preparedness for preattentive processing of threat. In P. J. Lang, R. F. Simons, & M. T. Balaban (Eds.), Attention and orienting: Sensory and motivational processes (pp. 165–184). Mahwah, NJ: Lawrence Erlbaum.

Ohman, A. (2002). Automaticity and the amygdala: Nonconscious responses to emotional faces. Current Directions in Psychological Science, 11, 62–66.

Ohman, A., Esteves, F., & Soares, J. F. (1995). Preparedness and preattentive associative learning: Electrodermal conditioning to masked stimuli. Journal of Psychophysiology, 9(2), 99–108.

Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality & Social Psychology, 80, 381–396.

Ohman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108(3), 483–522.

Oram, M. W., & Perrett, D. I. (1992). Time course of neural responses discriminating different views of the face and head. Journal of Neurophysiology, 68, 70–84.

Palermo, R., & Rhodes, G. (2002). The influence of divided attention on holistic face perception. Cognition, 82(3), 225–257.

Palermo, R., & Rhodes, G. (2003). Change detection in the flicker paradigm: Do faces have an advantage? Visual Cognition, 10(6), 683–713.

Pashler, H., & Harris, C. R. (2001). Spontaneous allocation of visual attention: Dominant role of uniqueness. Psychonomic Bulletin and Review, 8, 747–752.

Pasley, B. N., Mayes, L. C., & Schultz, R. T. (2004). Subcortical discrimination of unperceived objects during binocular rivalry. Neuron, 42, 163–172.

Pegna, A. J., Khateb, A., Lazeyras, F., & Seghier, M. L. (2004). Discriminating emotional faces without primary visual cortices involves the right amygdala. Nature Neuroscience.

Pegna, A. J., Khateb, A., Michel, C. M., & Landis, T. (2004). Visual recognition of faces, objects, and words using degraded stimuli: Where and when it occurs. Human Brain Mapping, 22, 300–311.

Pessoa, L. (2005). To what extent are emotional visual stimuli processed without attention and awareness? Current Opinion in Neurobiology, 15, 1–9.

Pessoa, L., Japee, S., Sturman, D., & Ungerleider, L. G. (2006). Target visibility and visual awareness modulate amygdala responses to fearful faces. Cerebral Cortex, 16, 366–375.

Pessoa, L., Japee, S., & Ungerleider, L. G. (2005). Visual awareness and the detection of fearful faces. Emotion, 5, 243–247.

Pessoa, L., McKenna, M., Guiterrez, E., & Ungerleider, L. G. (2002). Neural processing of facial expressions requires attention. Proceedings of the National Academy of Science, 99, 11458–11463.

Pessoa, L., Padmala, S., & Morland, T. (2005). Fate of unattended fearful faces in the amygdala is determined by both attentional resources and cognitive modulation. NeuroImage, 15, 249–255.

Pessoa, L., & Ungerleider, L. G. (2003). Neuroimaging studies of attention and the processing of emotion-laden stimuli. Progress in Brain Research, 144, 171–182.

Phillips, M. L., Drevets, W. C., Rauch, S. L., & Lane, R. (2003a). Neurobiology of emotion perception I: The neural basis of normal emotion perception. Biological Psychiatry, 54, 504–514.

Phillips, M. L., Drevets, W. C., Rauch, S. L., & Lane, R. (2003b). Neurobiology of emotion perception II: Implications for major psychiatric disorders. Biological Psychiatry, 54(5), 515–528.

Phillips, M. L., Williams, L. M., Heining, M., Herba, C. M., Russell, T., Andrew, C., et al. (2004). Differential neural responses to overt and covert presentations of facial expressions of fear and disgust. NeuroImage, 21, 1484–1496.

Pizzagalli, D. A., Lehmann, D., Hendrick, A. M., Regard, M., Pascual-Marqui, R. D., & Davidson, R. J. (2002). Affective judgments of faces modulate early activity (∼160 ms) within the fusiform gyri. NeuroImage, 16, 663–677.

Pizzagalli, D. A., Regard, M., & Lehmann, D. (1999). Rapid emotional face processing in the human right and left brain hemispheres: An ERP study. NeuroReport, 10, 2691–2698.

Pourtois, G., Grandjean, D., Sander, D., & Vuilleumier, P. (2004). Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex, 14(6), 619–633.

Purcell, D. G., & Stewart, A. L. (1988). The face-detection effect: Configuration enhances detection. Perception & Psychophysics, 43, 355–366.

Purcell, D. G., Stewart, A. L., & Skov, R. B. (1996). It takes a confounded face to pop out of a crowd. Perception, 25, 1091–1108.

Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception & Performance, 18, 849–860.

Reddy, L., Wilken, P., & Koch, C. (2004). Face-gender discrimination is possible in the near-absence of attention. Journal of Vision, 4, 106–117.

Rees, G., Kreiman, G., & Koch, C. (2002). Neural correlates of consciousness in humans. Nature Reviews Neuroscience, 3, 261–270.

Reinitz, M. T., Morrissey, J., & Demb, J. (1994). Role of attention in face encoding. Journal of Experimental Psychology: Learning, Memory & Cognition, 20, 161–168.

Rensink, R. A., O'Regan, J. K., & Clark, J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8, 368–373.

Ro, T., Russell, C., & Lavie, N. (2001). Changing faces: A detection advantage in the flicker paradigm. Psychological Science, 12, 94–99.

Robinson, M. D. (1998). Running from William James' bear: A review of preattentive mechanisms and their contributions to emotional experience. Cognition and Emotion, 12(5), 667–696.

Rossion, B., Curran, T., & Gauthier, I. (2002). A defense of the subordinate-level expertise account for the N170 component. Cognition, 85(2), 189–196.

Rotteveel, M., de Groot, P., Geutskens, A., & Phaf, R. H. (2001). Stronger suboptimal than optimal affective priming? Emotion, 1(4), 348–364.

Rousselet, G. A., Mace, M. J.-M., & Fabre-Thorpe, M. (2003). Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes. Journal of Vision, 3, 440–455.

Sato, W., Kochiyama, T., Yoshikawa, S., & Matsumura, M. (2001). Emotional expression boosts early visual processing of the face: ERP recording and its decomposition by independent component analysis. NeuroReport, 12(4), 709–714.

Schneider, W., & Chein, J. M. (2003). Controlled and automatic processing: Behavior, theory, and biological mechanisms. Cognitive Science, 27, 525–559.

Schupp, H. T., Ohman, A., Junghofer, M., Weike, A. I., Stickburger, J., & Hamm, A. O. (2004). The facilitated processing of threatening faces. Emotion, 4(2), 189–200.

Schutter, D. J. L. G., de Haan, E. H. F., & van Honk, J. (2004). Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat. International Journal of Psychophysiology, 53, 29–36.

Shipp, S. (2004). The brain circuitry of attention. Trends in Cognitive Sciences, 8(5), 223–230.

Simons, D. J. (2000). Current approaches to change blindness. Visual Cognition, 7, 1–15.

Spezio, M. L., Adolphs, R., Hurley, R. S. E., & Piven, J. (2007). Analysis of face gaze in autism using "Bubbles". Neuropsychologia, 45, 144–151.

Stefanacci, L., & Amaral, D. G. (2002). Some observations on cortical inputs to the macaque monkey amygdala: An anterograde tracing study. Journal of Comparative Neurology, 451, 301–323.

Stone, A., & Valentine, T. (2003). Viewpoint: Perspectives on prosopagnosia and models of face recognition. Cortex, 39, 31–40.

Stone, A., & Valentine, T. (2004). Better the devil you know? Non-conscious processing of identity and affect of famous faces. Psychonomic Bulletin & Review, 11(3), 469–474.

Stone, A., Valentine, T., & Davies, R. (2001). Face recognition and emotional valence: Processing without awareness by neurologically intact participants does not simulate covert recognition in prosopagnosia. Cognitive, Affective & Behavioral Neuroscience, 1, 183–191.

Streit, M., Dammers, J., Simsek-Kraues, S., Brinkmeyer, J., Wolwer, W., & Ioannides, A. (2003). Time course of regional brain activations during facial emotion recognition in humans. Neuroscience Letters, 342(1–2), 101–104.

Streit, M., Ioannides, A., Liu, L., Wolwer, W., Dammers, J., Gross, J., et al. (1999). Neurophysiological correlates of the recognition of facial expressions of emotion as revealed by magnetoencephalography. Cognitive Brain Research, 7, 481–491.

Streit, M., Wolwer, W., Brinkmeyer, J., Ihl, R., & Gaebel, W. (2000). Electrophysiological correlates of emotional and structural face processing in humans. Neuroscience Letters, 278, 13–16.

Sugase, Y., Yamane, S., Ueno, S., & Kawano, K. (1999). Global and fine information coded by single neurons in the temporal visual cortex. Nature, 400, 869–872.

Suzuki, S., & Cavanagh, P. (1995). Facial organization blocks access to low-level features: An object inferiority effect. Journal of Experimental Psychology: Human Perception and Performance, 21, 901–913.

Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46A, 225–245.

Tanaka, J. W., Kay, J. B., Grinnell, E., Stansfield, B., & Szechter, L. (1998). Face recognition in young children: When the whole is greater than the sum of its parts. Visual Cognition, 5, 479–496.

Taylor, J. G., & Fragopanagos, N. F. (2005). The interaction of attention and emotion. Neural Networks, 18, 353–369.

Tipples, J., Atkinson, A. P., & Young, A. W. (2002). The eyebrow frown: A salient social signal. Emotion, 2(3), 288–296.

Tong, F., & Nakayama, K. (1999). Robust representations for faces: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance, 25, 1–20.

Tong, F., Nakayama, K., Vaughan, J. T., & Kanwisher, N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron, 21, 753–759.

Tranel, D., & Damasio, A. R. (1985). Knowledge without awareness: An autonomic index of facial recognition by prosopagnosics. Science, 228, 1453–1454.

Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.

van Honk, J., Tuiten, A., De Haan, E., van den Hout, M., & Stam, H. (2001). Attentional biases for angry faces: Relationships to trait anger and anxiety. Cognition and Emotion, 15(3), 279–297.

VanRullen, R. (in press). On second glance: Still no high-level pop-out effect for faces. Vision Research.

VanRullen, R., Reddy, L., & Koch, C. (2004). Visual search and dual-tasks reveal two distinct attentional resources. Journal of Cognitive Neuroscience, 16, 4–14.

Vuilleumier, P. (2000). Faces call for attention: Evidence from patients with visual extinction. Neuropsychologia, 38, 693–700.

Vuilleumier, P. (2002). Facial expression and selective attention. Current Opinion in Psychiatry, 15, 291–300.

Vuilleumier, P. (2005). Staring fear in the face. Nature, 433, 22–23.

Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194.

Vuilleumier, P., Armony, J. L., Clarke, K., Husain, M., Driver, J., & Dolan, R. J. (2002). Neural responses to emotional faces with and without awareness: Event-related fMRI in a parietal patient with visual extinction and spatial neglect. Neuropsychologia, 40, 2156–2166.

Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2001). Effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron, 30, 829–841.

Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6(6), 624–631.

Vuilleumier, P., Richardson, M. P., Armony, J. L., Driver, J., & Dolan, R. J. (2004). Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nature Neuroscience, 7(11), 1271–1278.

Vuilleumier, P., & Schwartz, S. (2001). Emotional facial expressions capture attention. Neurology, 56(2), 153–158.

Whalen, P. J., Kagan, J., Cook, R. G., Davis, C., Kim, H., Polis, S., et al. (2004). Human amygdala responsivity to masked fearful eye whites. Science, 306, 2061.

Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience, 18, 411–418.

Williams, M. A., McGlone, F., Abbott, D. F., & Mattingley, J. B. (2005). Differential amygdala responses to happy and fearful facial expressions depend on selective attention. NeuroImage, 24(2), 417–425.

Williams, M. A., Morris, A. P., McGlone, F., Abbott, D. F., & Mattingley, J. B. (2004). Amygdala responses to fearful and happy facial expressions under conditions of binocular suppression. The Journal of Neuroscience, 24(12), 2898–2904.

Williams, M. A., Moss, S. A., Bradshaw, J. L., & Mattingley, J. B. (2005). Look at me, I'm smiling: Visual search for threatening and nonthreatening facial expressions. Visual Cognition, 12(1), 29–50.

Wilson, E., & MacLeod, C. (2003). Contrasting two accounts of anxiety-linked attentional bias: Selective attention to varying levels of stimulus threat intensity. Journal of Abnormal Psychology, 112(2), 212–218.

Wojciulik, E., Kanwisher, N., & Driver, J. (1998). Covert visual attention modulates face-specific activity in the human fusiform gyrus: fMRI study. Journal of Neurophysiology, 79, 1574–1578.

Wolfe, J. M. (1998). What can 1 million trials tell us about visual search? Psychological Science, 9, 33–39.

Wong, P. S., & Root, J. C. (2003). Dynamic variations in affective priming. Consciousness and Cognition, 12, 147–168.

Yamamoto, S., & Kashikura, K. (1999). Speed of face recognition in humans: An event-related potentials study. NeuroReport, 10(17), 3531–3534.

Yamasaki, H., LaBar, K. S., & McCarthy, G. (2002). Dissociable prefrontal brain systems for attention and emotion. Proceedings of the National Academy of Science, 99(17), 11447–11451.

Yantis, S. (1998). Control of visual attention. In H. E. Pashler (Ed.), Attention (pp. 223–256). Hove, England: Psychology Press/Erlbaum.

Young, A. W., de Haan, E. H. F., Newcombe, F., & Hay, D. C. (1990). Facial neglect. Neuropsychologia, 28, 391–415.

Young, A. W., Ellis, A. W., Flude, B. M., McWeeny, K. H., & Hay, D. C. (1986). Face–name interference. Journal of Experimental Psychology: Human Perception and Performance, 12, 466–475.

Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configural information in face perception. Perception, 16, 747–759.

Zald, D. H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain Research Reviews, 41, 88–123.