No Need for Alarm: A Critical Analysis of Greene's Dual-Process Theory of Moral Decision-Making






Robyn Bluhm

Received: 15 June 2013 / Accepted: 1 May 2014 / Published online: 8 May 2014
© Springer Science+Business Media Dordrecht 2014

Abstract: Joshua Greene and his colleagues have proposed a dual-process theory of moral decision-making to account for the effects of emotional responses on our judgments about moral dilemmas that ask us to contemplate causing direct personal harm. Early formulations of the theory contrast emotional and cognitive decision-making, saying that each is the product of a separable neural system. Later formulations emphasize that emotions are also involved in cognitive processing. I argue that, given the acknowledgement that emotions inform cognitive decision-making, a single-process theory can explain all of the data that have been cited as evidence for Greene's theory. The emotional response to the thought of causing harm may differ in degree, but not in kind, from other emotions influencing moral decision-making.

Keywords: Dual-process theory · Joshua Greene · Moral decision-making · Morality · Neuroscience

The dual-process theory of moral decision-making developed by Joshua Greene and his colleagues has been very influential, attracting much interest from neuroscientists, psychologists, and philosophers. Briefly, they offer the theory as an alternative to so-called rationalist theories of moral judgment, which view moral decision-making as primarily a cognitive, rational process, and "sentimentalist" theories, which view emotional responses as the ultimate basis for moral decision-making. Greene and colleagues propose instead that both cognitive and emotional processes are involved in making moral judgments. The theory was first introduced in a report of a neuroimaging study that claimed to distinguish different patterns of neural activity associated with emotional and with cognitive judgments about moral dilemmas [1]. Greene and his colleagues suggested that these responses are subserved by distinct psychological and neural processes that, while often integrated, can in some circumstances conflict with each other. Since then, Greene has continued to refine the dual-process theory, as well as to develop its implications for philosophical moral theory [2, 3], legal decision-making [4], and even our understanding of what it means to be human [5].

Neuroethics (2014) 7:299–316. DOI 10.1007/s12152-014-9209-0

R. Bluhm (*), Department of Philosophy and Religious Studies, Old Dominion University, Norfolk, VA 23529, USA. E-mail: rbluhm@odu.edu

Yet despite the far-reaching implications that Greene draws from his theory, the development of the theory itself over the past decade suggests that the dual-process account is actually quite narrow in scope. Recent developments of the theory by Greene and his colleagues include (1) an expanded account of the role of affect in cognitive processes and (2) clarification of the circumstances that evoke the "purely" emotional processes identified in the initial neuroimaging study. Although Greene and his colleagues have always acknowledged a role for emotion in cognitive processing, later versions of the theory can be characterized as distinguishing between two types of emotional process. In one of these, emotion contributes to cognitive processing, while in the other, emotions are "prepotent" noncognitive responses that, when elicited, compete with cognition. The latter responses are elicited, specifically, by scenarios that ask us to contemplate causing serious physical harm to another person.

In this paper, I argue (1) that, once a place is made for emotion in cognitive processing, a move that is surely correct, the rationale for positing distinct, and sometimes conflicting, processes is not clear and (2) that, in fact, the evidence that Greene provides in support of his dual-process theory actually better supports the conclusion that moral decision-making is the result of a single system that integrates emotional and cognitive aspects of moral decision-making. In other words, there is no convincing evidence that the purported "alarm bell" emotional response to causing personal harm is distinct from other emotional responses in anything but intensity.

In making this case, I begin by tracing the development of Greene's theory. I then examine the empirical evidence that has been put forward in support of the dual-process theory and show that it can also be interpreted as supporting a single-process alternative that incorporates both cognitive and emotional aspects. In fact, on balance, the evidence is slightly in favor of a theory that posits only a single, integrative process. I then consider a potential response to this conclusion, which says that there is an additional reason to think that the strong emotional response to the thought of causing personal harm is, indeed, a separate cognitive process: it has characteristics that indicate that it is a cognitive module. I show that, in fact, the emotional process posited by the dual-process theory is not likely to be modular and that there is no need for an "alarm bell" to explain our moral decision-making.

My purpose in this paper is to critique the scientific support offered for the dual-process theory. The theory has been very influential; some of the experimental work I will discuss in this paper is explicitly based on the dual-process theory, and many other scientists have also cited Greene's work. I contrast Greene's interpretation of the data with an alternative, single-process model suggested by Greene's own discussion of the role of emotion in the cognitive processing of moral dilemmas, and I suggest that this alternative better accounts for the data used to support the dual-process theory. I do not, however, make the stronger claim that this single-process alternative is the best available account of moral decision-making. There are a number of other theories in the empirical literature on moral decision-making that are quite distinct from Greene's (see [6] for a thorough overview of these theories), and consideration of these theories is beyond the scope of this paper. I am also not concerned here with the normative conclusions that Greene draws on the basis of his dual-process theory. It is worth noting, however, that if my analysis of the scientific evidence is correct, then the normative implications of the dual-process theory are also undermined.

The Dual-Process Theory of Moral Decision-Making

As noted above, the dual-process theory was first introduced in a paper that reported the results of a neuroimaging experiment. Participants in the study were asked to make decisions about the appropriate action in scenarios depicting moral dilemmas. These dilemmas were based on the classic philosophical thought experiment, the trolley problem. In this dilemma, an individual has the opportunity to save five people from certain death caused by a runaway trolley, simply by pulling a switch to divert the trolley onto another track. The catch is that doing so will cause the death of a single individual on the second track. The majority of people (both philosophers and nonphilosophers) respond to the dilemma by saying that the person should pull the switch, thus saving five lives at the expense of one. But a small change to this scenario makes a big difference in people's responses. Instead of pulling a switch to divert the trolley, they must now push another individual off of a footbridge, so that he lands in front of, and stops, the runaway trolley. In this version, most people respond that it would not be morally appropriate to push the person.

Greene and colleagues hypothesized that moral judgments regarding these similar (hereafter "switch" and "footbridge") scenarios differ because the footbridge problem has a stronger tendency to engage people's emotions than does the switch problem. This difference suggests a general hypothesis to the effect that some moral dilemmas "engage emotional processing to a greater extent than others…and these differences in emotional engagement affect people's judgments" [1, p. 2106]. They further suggest that those dilemmas that engage emotional processing will have an "up close and personal" aspect, since the primary difference between the two scenarios is how you must cause harm to someone to save the five people in danger. Their neuroimaging study then compared brain activity during "personal" moral dilemmas like the footbridge scenario and more "impersonal" dilemmas like the switch scenario.¹

When neural activity patterns in the different conditions were compared, Greene et al. found that "personal"/emotional dilemmas were associated with greater activity in brain areas that had previously been shown to be active during the processing of emotional stimuli. By contrast, activity was greater during "impersonal" dilemmas, compared with personal dilemmas, in areas that had previously been shown to be active during cognitive working memory tasks.² In addition, a second experiment (which replicated the neuroimaging results of the first) also showed that average reaction times for participants were slower when they made judgments that went against their emotional reaction to a dilemma (e.g. when they judged it morally appropriate to push a person onto the track to stop the runaway trolley) than when their judgments were congruent with their emotional responses (e.g. when they decided that it was not appropriate to push the person) [1].³

A second neuroimaging study [7] clarifies the relationship between cognition and emotion in this discussion of the dual-process theory. Greene et al. use quotation marks around the term "cognitive" to emphasize that they are referring to "a class of processes that contrast with affective or emotional process [emphasis mine]" [7, p. 389] as opposed to a broader use of the term to mean "information processing in general" [7, p. 389]. Later in the paper, the authors acknowledge that the distinction between emotion and "cognition", while useful, "may be somewhat artificial". They also suggest that one possible way of understanding the distinction is that emotional representations have a direct motivational force [7, p. 397], while "cognitive" representations have no motivational force of their own, but "can be contingently connected to affective/emotional states that do have such force" [7, p. 398]. They endorse this view, saying that the distinction between emotion and cognition "is real, but it is a matter of degree…[and] far from clear cut" [7, p. 398]. Their discussion suggests that, while "cognitive" representations are distinct from emotional ones, they can, and possibly must, be linked with emotional ones in order to influence decision-making.⁴

Later discussions of the dual-process theory further examine the relationship between emotional and cognitive processes [2, 11]. I will focus here primarily on a paper by Cushman, Young, and Greene [11], as it is the most detailed description of these developments. Cushman et al. begin by characterizing the dual-process theory as involving "two psychological systems that shape moral judgments concerning physically harmful behavior" [11, p. 49]. The emotional system involves an emotional alarm bell that is evoked by the thought of harming somebody in a "personal" way, as in the footbridge dilemma.⁵ Yet in some cases, we do decide to cause harm in order to achieve a greater good. In these circumstances, the decision was the product of the "cognitive" system, which functions according to a "welfare principle" that, in utilitarian fashion, calculates which possible outcome will minimize harm and maximize welfare.

Despite its name, the "cognitive" system also requires emotional input. Picking up on the suggestion made in [7] (and expanded on in [2]), Cushman et al. describe the role of emotions in practical reasoning as adding motivational "weight" to the various alternatives under contemplation. Unlike the alarm bell emotional response that comes from the emotional system, the emotions involved in "cognition" function like a currency. When the "cognitive" system is in charge of moral decision-making (for example in the switch dilemma) "the relative costs and benefits of lives, among other outcomes, are weighted against each other to determine the best course of action" [11, p. 63]. Currency emotions add positive or negative weight to various possible outcomes, which provides the assessment of benefits and costs that is required in order to engage in cost-benefit calculations. By contrast, the alarm bell emotional response of the emotion system does not lend itself to a cost-benefit analysis; it can be overridden by the "cognitive" system, "but is not designed to be negotiable" [11, p. 63].

¹ They also use a set of nonmoral dilemmas (e.g. scenarios that asked participants to choose whether to take the bus or the train, given prevailing time constraints). Patterns of neural activity were similar for nonmoral and for impersonal moral dilemmas.

² The cognition-related areas found to be more active during contemplation of "impersonal" moral dilemmas were the right middle frontal gyrus (BA 46) and bilateral parietal areas (BA 7/40). The emotion-related areas active during "personal" moral dilemmas were the medial prefrontal cortex, Brodmann areas (BA) 9 and 10; the posterior cingulate cortex (BA 31); and the bilateral angular gyrus (BA 39). In their second paper, some areas of the posterior cingulate (BA 31) were more active during contemplation of personal moral dilemmas, while others (BA 23, 31) were more active when impersonal moral dilemmas were presented.

³ Greene et al. originally interpret these reaction time results as further evidence for the operation of two distinct processes, but later retract this claim in response to a reanalysis of their data by McGuire et al. [8]. (See [9] for Greene's response regarding the implications of the reanalysis. I discuss this exchange further later on in this paper.) Berker [10] also provides important criticism of the reaction time data.

⁴ Greene et al. refer to "cognitive" versus emotional representations and "cognitive" versus emotional processes. They don't elaborate on this terminology but, roughly, it seems that "cognitive" processing is carried out over "cognitive" representations (with their attached affective states), while emotional processing occurs over intrinsically emotional representations.

⁵ The idea that the emotion system functions like an alarm bell is first introduced in [2]. Greene also discusses here the idea that "cognitive" representations are inherently neutral and are, or perhaps must be, contingently attached to emotional representations in order to exert any influence on behavior.

Another key difference between the two kinds of emotion involved in moral decision-making is that alarm bell responses are elicited only in response to specific circumstances, i.e., the prospect of causing personal harm. Currency emotions are involved in moral decision-making, but also in other kinds of decision-making; for example, the preference for ice cream on a hot summer day "supplies a reason to pursue the Good Humor truck, but this reason can be traded off against others, such as maintaining a slim poolside profile" [11, p. 63]. "Cognitive" moral reasoning thus has a domain-general character that makes it much like other kinds of practical reasoning.

The most recent description of the dual-process theory occurs in Greene's book [3]. Here, he uses much the same language to describe the emotional response to causing personal harm, describing it as sounding an "alarm bell" [3, pp. 224 and 235], which issues "nonnegotiable" commands [3, p. 303] that can, in some instances, be ignored or overridden [3, p. 248]. Greene further uses the metaphor of a dual-mode camera, with both automatic settings and a manual mode of operation, to describe our brains. The alarm bell response to the prospect of causing personal harm is the result of one of the brain's "automatic settings", which Greene also describes as being moral emotions [3, p. 172; see also p. 213, where the prospect of causing personal harm is described as "an automatic emotional response"]. As with a camera, the brain's automatic settings are optimized to respond quickly and efficiently to specific, standard circumstances, but are not flexible enough to respond to complex or unfamiliar settings [3, p. 133].

By contrast with Cushman et al. [11], the idea of currency emotions does not appear in Greene's book. This is, perhaps, in part because it would invite confusion with one of the book's central metaphors, that of utilitarian thinking providing a "common currency," or meta-morality, which allows us to adjudicate among competing moral claims. What does remain, however, are the central ideas behind the concept of currency emotions: first, that the "cognitive" process (which is now referred to as being the brain's "manual mode") integrates competing values, weighing costs against benefits [3, p. 257], and second, that these values exist only because we have "motivating emotions" [3, p. 131]. As in earlier work, Greene follows Hume in saying that "reason has no ends of its own", but rather requires emotions in order to function [3, p. 137]. Greene also emphasizes that manual mode is "not an abstract thing. It is…a set of neural networks, based primarily in the prefrontal cortex (PFC), that enables humans to engage in conscious and controlled reasoning and planning" [3, p. 196]. As in Greene's earlier discussions of the cognitive system, then, manual mode relies on emotional input in order to function and, in contrast with the automatic settings that respond to specific features of the environment, it is domain-general.

Yet, once Greene and his colleagues have developed the idea that currency-type emotions are integral to "cognitive" moral decision-making, the character of the dual-process theory changes. It is not that there are separate (noncognitive) emotion and (nonemotional) cognitive systems that contribute to moral decision-making. Rather, there are two kinds of emotional response possible to moral dilemmas involving personal harm, and only one of those is integrated with cognitive processing. This change, however, raises the question of whether there really are two distinct emotion systems. Couldn't there just be a single "currency" emotion system without an additional alarm bell response? After all, if we are ever to make decisions (moral or otherwise), it must be possible for one of the contemplated outcomes to carry more weight than the others. Since "cognitive" representations are inherently neutral, it must be the currency emotions that carry different weights. Moreover, at least some of these emotions must, on the dual-process theory, carry a great deal of weight if they are to override the nonnegotiable alarm bell response. Yet this raises the possibility that all emotional responses are currency-type responses that add motivational weight to different possible courses of action. The purported "alarm bell" response would therefore be no different from any other emotional response, except in intensity; it would not be the product of a distinct system.


Cushman et al. do not consider this possibility, as they begin their chapter with a review of the empirical evidence that has been offered in support of the dual-process theory and conclude that "[t]he division between affective and cognitive systems of moral judgment…is by now well supported" [11, p. 54]. In the next section, however, I will look at the evidence given in support of the dual-process theory and argue that it (at least) equally well supports a single-process system on which the "prepotent" emotional response to causing personal harm is not an alarm bell, but a currency response with a heavy emotional weighting. In other words, I do not deny the claim that some moral dilemmas evoke stronger emotional reactions than others, but argue that this is best seen as a difference in degree, rather than in the kind of emotion evoked. Emotional responses that differ only in degree can also explain why moral decision-making can be so difficult. Recall that the difference, for Greene and his colleagues, between currency emotions and alarm bell emotions is that the latter can be "overruled" but do not negotiate. But not all negotiations occur between those with equal power, and a cognitive representation carrying a strong negative currency emotion may well have a greater influence on negotiations than weaker currency emotions. Thus, a cognitive representation with a strongly negative emotional weighting, rather than an alarm from a separate emotion system, may well be responsible for our inconsistent responses to the switch and the footbridge dilemmas.

Clarifying the Scope of the Dual-Process Theory

Greene's claims about the dual-process theory vary somewhat in different papers. In the earliest discussions of the theory, the different responses to the footbridge and the switch versions of the trolley problem were hypothesized to occur because of differences in the participants' emotional responses to the two dilemmas, leaving open the possibility that "cognitive" and emotional processing differed across a diverse range of stimuli. Cushman et al. are more specific in their characterization of the dual-process theory; as quoted above, they take it to explain only our conflicting responses to the prospect of physically harmful behavior. In Greene's book, the claim is again broader, as we are described as having "dual-process moral brains" that lead us to have "separate automatic and controlled responses to moral questions" [3, p. 131]. Here, though, the emotional response to causing harm to save others is portrayed as only one of a number of "automatic settings", which means that the case for each of the purported automatic settings must be made separately. That is, this characterization leaves open the possibility that there are a number of automatic settings, but that there is no such setting for "response to causing personal harm".

The potential ambiguity in the scope of the theory has led Kahane [12] to distinguish between the Modest Dual-Process Model, which only makes claims about the nature of our responses to trolley-type problems, and the Grand Dual-Process Model, which is broader in scope.⁶ Like Kahane, I emphasize that the evidence given in support of Greene's dual-process model comes from studies that use trolley-type problems, i.e., it supports only the modest version of the theory. (In fact, the quote from Greene's book in the preceding paragraph comes from the end of a chapter called "Trolleyology" and is intended as a conclusion about what trolley cases can show about moral decision-making.)

Two further points made by Kahane should be noted here. First, Kahane interprets the dual-process model as making claims about automatic versus controlled processing, rather than about emotional versus cognitive processing. He points out that, while Greene also believes that the dual-process model makes claims about emotional versus cognitive processing, this latter distinction does not map neatly onto the former. This is clearly true, since we can certainly make intuitive "cognitive" judgments. With regard to the more defensible, modest, version of Greene's theory, however, claims about automatic versus controlled processes do map onto claims about emotional versus cognitive processes, since the relevant automatic process is clearly emotional in nature and is contrasted with controlled "cognitive" processing. Since I am focusing only on this modest version, I will not consider broader claims about potential differences between emotional and automatic processing in this paper.

Second, Kahane notes that the dual-process theory "is a claim about the psychological processes underlying certain types of moral judgment. It is not a claim about the factors to which these judgments are sensitive" [12, p. 522]. This is important because the emotional response to the thought of causing personal harm has been characterized differently in different papers. Originally, Greene et al. hypothesized that the different responses to the switch and the footbridge dilemmas occur because of the "up close and personal" nature of the harm that is contemplated in the footbridge dilemma, which causes the strong emotional response [1]. By contrast, the switch dilemma involves an "impersonal" method of causing harm. In response to challenges to the personal/impersonal distinction (particularly by McGuire et al. [8]), this distinction was downplayed in later research. Instead, the focus was on "difficult" or "high conflict" personal moral dilemmas, which activated both the emotional and "cognitive" systems.

⁶ Reflecting his main interests in his paper, Kahane characterizes the two versions of the theory as making claims about deontological versus utilitarian judgment, but I will use the distinction without reference to Greene's conclusions about the relationship between his dual-process theory and normative ethical theories.

Later work also attempted to clarify the specific aspects of the "personal" harm that are most emotionally salient and showed that the strength of the emotional response may depend on intentionality, physical contact, the exertion of physical force on the victim, and the "directness" of the harm; these differing characterizations are discussed by Greene in [9]. This work is, however, an attempt to develop an explanation of the personal nature of the harm, not a repudiation of the idea that there is a psychological process that responds to personal harm. In Kahane's terms, differences among the accounts of the factors relevant to the nature of personal harm are immaterial to the consistent claim that there is a distinct process that responds to (some aspect of) personal harm.

Based on these considerations, I conclude that Greene's dual-process theory is best understood as the more modest of the two versions distinguished by Kahane. That is, its central claim is that there is a distinct psychological process that is elicited by trolley-type dilemmas and that responds specifically to the prospect of causing direct personal harm to someone. Evaluating the theory therefore requires assessing the evidence that there is such a distinct psychological process, that is, that the prospect of causing someone direct personal harm is processed differently than other considerations relevant to trolley dilemmas. In the next section, I provide this assessment of the evidence and conclude that it does not support the claim that there is a distinct process. Instead, the best explanation of the data is that our strong emotional response to causing personal harm is processed the same way as other considerations, and that the mere strength of the emotion is what allows it to influence our response to these dilemmas.

Evidence for – and Against – the Dual-Process Theory

Greene and his colleagues provide a variety of evidence, both from their own studies and from those of other research groups, in support of the dual-process theory. In this section, I survey this evidence and show that it could also be interpreted as supporting a single-process theory that has both cognitive and emotional processes functioning in a single decision-making system. The relevant evidence comes from functional neuroimaging studies, from studies that examine moral decision-making in people with lesions to particular areas of the brain, and from studies that experimentally manipulate the cognitive and the emotional factors that influence people's moral decision-making or that examine individual differences in the propensity to give "cognitive" or emotional responses to trolley problems.

Neuroimaging Studies of Moral Decision-Making

The earliest studies investigating the hypothesis that separate "cognitive" and emotional processes are involved in moral reasoning are two functional neuroimaging studies conducted by Greene and his colleagues. As described earlier, these studies examined patterns of neural activity associated with making judgments about impersonal (e.g. switch) and personal (e.g. footbridge) moral dilemmas. Personal moral dilemmas were associated with greater activity, compared with impersonal moral dilemmas, in areas of the brain that had previously been associated with the processing of emotional stimuli. By contrast, impersonal moral dilemmas were associated with greater neural activity, compared with personal moral dilemmas, in areas associated with working memory and abstract reasoning. These differences in neural activity were interpreted as reflecting the operation of distinct emotional and "cognitive" processes in making judgments about personal and impersonal dilemmas, respectively.

There are, however, at least two problems with the idea that differences in brain activity patterns indicate that two distinct systems are operating. First, Colin Klein [13] has argued that the dual-process theory has been called into question by more recent neuroimaging research showing that the brain areas that Greene et al. associate with emotional processing have also been shown to be involved in cognitive reasoning, while the "cognitive" areas have also been shown to be activated in response to emotional stimuli.⁷ Second, because the analyses in these studies compare personal and impersonal dilemmas directly, they tell us which areas are more active in one kind of dilemma than in the other, not that the emotion areas are inactive when participants think about impersonal dilemmas, or that "cognitive" areas do not respond to personal moral dilemmas. In fact, as we shall see, Greene's later work focuses on "difficult" personal moral dilemmas that are said to activate both "cognitive" and emotional responses. Moreover, once Greene amends his theory to include a role for currency emotions in "cognitive" processes, it follows that activation of the cognitive system also inherently involves emotion processing. All of these considerations show that the initial conclusions that distinct neural systems were operating in these studies were overly hasty.

In their second neuroimaging paper [7], Greene et al. also began to look more closely at distinctions within the category of personal moral dilemmas. Drawing on the dual-process theory, they suggested that some personal dilemmas engage both the emotional and the "cognitive" processes and that this makes it difficult to reach a decision about what should be done. An example of such a dilemma is a scenario in which you are asked to imagine that you are part of a group that is hiding from enemy soldiers, who will kill you all if they find you. Your baby starts to cry, and you must decide whether to smother the child so that the soldiers do not hear it and find your group. (The dilemma specifies that if they do find you, the soldiers will kill the baby along with the rest of the group.) People take a relatively long time to reply to this "crying baby" dilemma and there is less agreement about the answer than, for example, about the footbridge dilemma.

Other personal moral dilemmas are much easier, such as the "infanticide" dilemma, "in which a teenage mother must decide whether or not to kill her unwanted newborn infant" [7, p. 391]. Unsurprisingly, people respond quickly and unanimously that she should not kill her baby. According to Greene et al., this response occurs because, unlike in the crying baby dilemma, there is no "cognitive" case in favor of killing the baby; only the emotional response is engaged. Thus there is no conflict between the two systems.⁸

Greene et al. hypothesized that, compared with easy personal dilemmas, difficult personal dilemmas would be associated with greater activity in the dorsal anterior cingulate cortex, which detects "response conflict," and in the dorsolateral prefrontal cortex (DLPFC), which is involved with cognitive control. Within the difficult decisions, trials were also compared for dilemmas for which participants gave a "characteristically utilitarian" response (saying that it was morally appropriate to harm one individual to save many) and dilemmas for which they gave a nonutilitarian answer (i.e. one that, according to Greene et al., accorded with their emotional reaction that causing harm was not morally acceptable). Here, they hypothesized that the same two areas would be more active when participants gave utilitarian responses, because those areas would be involved in overcoming the emotional response to the dilemma. The neuroimaging results confirmed their predictions.

These results, however, are subject to the same kind of criticism Klein raised against the comparison of personal and impersonal moral dilemmas; the brain areas that are differentially active in the two conditions are associated with other kinds of activity than those emphasized by Greene et al.9 For my purpose in this paper, however, there is a more important question to be addressed à propos of these findings. This is the question of whether response conflict necessarily requires the operation of two separate processes, specifically of the “cognitive” and the “alarm bell” emotional processes

7 Beyond the dual-process theory, the opposition between cognition and emotion has also been criticized. For a recent overview of this issue, see [14].

8 Whether a dilemma counted as easy or difficult for a particular participant depended on the time they took to respond whether the action was morally appropriate or inappropriate. Greene et al. assumed that a longer reaction time indicated a more difficult decision. The infanticide dilemma was easy for the majority of participants, while the crying baby dilemma was difficult for most people. It should also be noted that there was less agreement among the responses to difficult than to easy dilemmas.
9 See also Sauer [15] on this point. Cordelia Fine also notes that one neuroscientist of her acquaintance refers to the anterior cingulate as the “on button” because it is active in so many kinds of situations [16, p. 152].

No Need for Alarm 305

postulated by the dual-process system. And it seems clear that, in Greene’s theory, response conflict can occur within a single (currency-type) domain-general system. Recall the example given by Cushman et al. about the conflicting desires to eat ice cream and to look good in a bathing suit. Both of these desires, they suggest, are mediated by “currency” emotions that are attached to differing “cognitive” representations and weighed in a single cost-benefit analysis. Thus nothing about the finding that difficult personal moral dilemmas are associated with conflicting responses seems to require the existence of two separate systems. If a single system considers one representation that involves causing physical harm, and this representation is associated with a strong negative (but still “currency”) emotion, at the same time as it considers another representation (associated with a positive “currency” emotion) that focuses on the distinct harm to be prevented by causing that physical harm, then conflict would certainly seem likely to occur. Thus, in the absence of the neural evidence for two distinct systems, which Klein’s criticism has called into question, the occurrence of response conflict does not establish on its own that the conflict occurs between two systems instead of within a single system.
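The within-one-system conflict described here can be made concrete with a toy cost-benefit sketch. This is purely a hypothetical illustration of the single-process alternative: the representations, weights, and conflict threshold below are invented for the example and are not drawn from Greene, Cushman et al., or any empirical data.

```python
# Toy sketch of a single-process, currency-emotion account of moral judgment.
# All representations and numerical weights are illustrative assumptions only.

def evaluate(option):
    """Sum the currency emotions attached to one option's representations."""
    return sum(weight for _, weight in option)

def response_conflict(options, threshold=3.0):
    """Conflict arises *within* the single system when strongly positive and
    strongly negative currency emotions are considered at the same time."""
    weights = [w for option in options for _, w in option]
    return max(weights) >= threshold and min(weights) <= -threshold

# Crying-baby dilemma: a strong negative emotion attached to smothering the
# child is weighed against a strong positive one attached to saving the
# group -- inside one cost-benefit analysis, with no second system involved.
smother = [("cause lethal harm to the baby", -5.0), ("the group survives", +4.5)]
refrain = [("soldiers kill the whole group, baby included", -8.0)]
print(response_conflict([smother, refrain]))  # True: conflict within one system

# Infanticide dilemma: no strong positive consideration favors killing, so
# no conflict is generated and the decision is fast and (nearly) unanimous.
kill = [("cause lethal harm to the newborn", -5.0)]
keep = [("care for an unwanted infant", -0.5)]
print(response_conflict([kill, keep]))  # False: an "easy" dilemma
```

On this sketch, the difference between the crying-baby and infanticide dilemmas is just the spread of currency weights considered together, not the recruitment of a second system.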

Studies in Patients With VMPFC Damage

Greene et al. note in several places that their neuroimaging studies provide only “correlational” evidence for an association between neural activity and behavior. More direct evidence for a causal relationship must come from studies that involve intervening in the process under study and examining the impact of the intervention on behavior. One source of this kind of evidence is research involving patients with brain lesions. Although the “interventions” here are not made by the researchers, observing differences in task performance between patients with brain damage and healthy controls provides evidence that the brain lesions affect the neural processes that underlie that task.

Drawing inferences about the function of areas of the brain based on the results of damage to those areas is challenging (see, e.g., [17]), but neuropsychological studies have long been an important source of information about brain function. In the case of Greene’s dual-process theory, neuropsychological evidence has come from studies of patients with damage to areas of the brain involved in emotion processing, specifically the ventromedial prefrontal cortex (VMPFC). The dual-process theory suggests that these patients will be more likely to endorse “cognitive,” utilitarian answers to personal moral dilemmas.

Several studies have tested this hypothesis. For example, Mendez et al. [18] found that patients with VMPFC damage due to frontotemporal dementia were more likely to endorse pushing the person in front of the trolley in the footbridge dilemma. Greene et al. point out that this study provides only limited support for their hypothesis, since frontotemporal dementia actually results in damage to widespread areas of the brain. They note, however, that investigations have also been conducted in patients with damage localized to the ventromedial prefrontal cortex [19, 20, 28]. Each of these studies used the dilemmas developed by Greene and his colleagues to examine moral decision-making in patients with focal VMPFC lesions (subsequent, for example, to stroke or to surgery to remove meningiomas).

Koenigs et al. [19] found that patients with VMPFC lesions were more likely than healthy controls to endorse committing personal harm in “high conflict” dilemmas like the footbridge dilemma, but did not give significantly different answers than did controls to impersonal moral dilemmas (e.g., the switch dilemma) or to “low conflict” personal moral dilemmas (e.g., abandoning one’s baby so as not to have to take care of it). Greene interprets their results as supporting the dual-process theory [3, 9].

The study by Ciaramelli et al. presents some complications. Cushman et al. describe it as producing results that are “consistent” with those reported by Koenigs et al. [11, p. 54], but this is not entirely the case. Ciaramelli et al. did find that, on average, the healthy control group, but not the VMPFC patient group, took longer to respond to personal than to impersonal moral dilemmas. In addition, within the set of personal dilemmas, the control group took longer to respond that causing personal harm was appropriate than that it was inappropriate. By contrast, VMPFC patients had similar reaction times for both kinds of response to personal dilemmas. Unlike Koenigs et al., however, Ciaramelli et al. did not find statistically significant differences in the tendency of the two groups to say that causing harm


was appropriate.10 Thus only one study found a qualitative difference in the responses to personal moral dilemmas between patients with VMPFC lesions and healthy controls.11

Thus the Koenigs et al. study, at least, provides evidence that patients with VMPFC lesions are less likely than healthy controls to disapprove of personal moral violations. In fact, Greene [21] has said that this study provides the strongest evidence in favor of the dual-process theory. It also, however, supports a single-process moral system that combines currency-like emotions and cost-benefit analyses. On both accounts, patients would be predicted to lack a strong negative emotional response to the thought of (for example) pushing someone in front of a speeding trolley in order to save the lives of several others. On the dual-process account, this lack is the result of damage to a distinct emotion-processing system that would normally compete with the “cognitive” system, so that the “cognitive” system, attaching its “currency-like” emotions to the various representations it considers, proceeds unimpeded to reach the conclusion that welfare will be maximized by pushing the individual to his death. On a single-process account, the representation of pushing someone to his death is not tagged with a strong negative currency emotion, so it does not carry the same weight in a cost-benefit analysis that it would normally have. Either way, the predicted outcome is the same.
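On the single-process reading, the effect of a VMPFC lesion can be sketched as nothing more than attenuating the currency weights of harm-related representations, with the cost-benefit machinery itself left intact. Again, the weights and the scaling factor below are hypothetical illustrations chosen for the example, not estimates from the lesion studies.

```python
# Hypothetical single-process sketch of VMPFC damage: the lesion scales down
# negative (harm-related) currency emotions, while deliberation is untouched.
# All numbers are invented for illustration.

def evaluate(option, harm_scaling=1.0):
    """Cost-benefit sum; negative weights are attenuated by harm_scaling
    to simulate a blunted emotional response to harm."""
    return sum(w * harm_scaling if w < 0 else w for _, w in option)

def choose(push, refrain, harm_scaling=1.0):
    """Pick whichever option has the higher weighted evaluation."""
    if evaluate(push, harm_scaling) > evaluate(refrain, harm_scaling):
        return "push"
    return "refrain"

# Footbridge dilemma with illustrative weights.
push = [("push one man to his death", -12.0), ("five lives saved", +5.0)]
refrain = [("five people die", -5.0)]

print(choose(push, refrain, harm_scaling=1.0))  # intact weighting: "refrain"
print(choose(push, refrain, harm_scaling=0.2))  # blunted emotions: "push"
```

A single damaged parameter thus yields the “utilitarian” shift on high-conflict dilemmas without positing a separate alarm-bell system.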

Of course, this alternative account raises the question of how “normal” cost-benefit analyses can occur in VMPFC patients, despite impairments in processing currency emotions. After all, they reach the same conclusions as healthy individuals about impersonal moral dilemmas and about low-conflict personal moral dilemmas. Note, though, that this finding also needs to be explained by the dual-process theory. Koenigs et al. provide an explanation that is consistent with either a dual-process or a single-process theory. They suggest that there is evidence that VMPFC patients retain knowledge of explicit social and moral norms, so that “[i]n the absence of an emotional reaction to the harms of others, VMPC [sic] patients may rely on explicit norms endorsing the maximization of aggregate welfare and prohibiting the harming of others. This strategy would lead VMPC patients to a normal pattern of judgments on low-conflict personal dilemmas but an abnormal pattern of judgments on high-conflict personal dilemmas, precisely as was observed” [19, p. 910].

There is also independent reason to think that VMPFC lesions affect emotion processing in general, rather than interfering specifically with an alarm bell response to the thought of causing personal harm. Patients with VMPFC lesions do not show impaired reasoning with regard to “unemotional” decisions. Yet they do show impairment when it comes to other kinds of reasoning that involve currency emotions. (In fact, the patients tested by Koenigs and colleagues did show impairments with regard to social emotions, but not in cognitive tasks, a point that Greene [21] also mentions.) The neurologist Antonio Damasio has worked with a number of patients who have suffered damage to the VMPFC and shown that they exhibit a marked inability to engage in cost-benefit reasoning across a wide range of personal and social domains. These include following social conventions12 and getting along with others, time management, and the motivation to begin or to complete tasks (see [22], Chapters 1 and 2). Their deficits clearly go beyond what would be expected from the loss of a dedicated system responding to personal harm and might better be explained by a general effect on currency emotions that would normally be assigned to various options in a cost-benefit analysis.

In fact, Damasio’s explanation for the deficits he has observed in VMPFC patients looks remarkably like a

10 In fact, there is another important difference between this study and that of Koenigs et al.: Ciaramelli et al. used a random selection of dilemmas from Greene’s set, but did not distinguish between high conflict and low conflict dilemmas in their analysis. (Thank you to an anonymous reviewer of this paper for pointing out this difference.) Recall that Greene’s original set of personal moral dilemmas included “low conflict” ones in which participants were asked whether they would cause personal harm in scenarios in which that harm did not prevent greater harm to others (e.g., abandoning a baby you did not want to care for). Since Koenigs et al. found statistically significant differences between the groups only with regard to high conflict dilemmas, it may be that Ciaramelli et al. did not find a significant difference between the groups because they analyzed all of the (high and low conflict) personal moral dilemmas together.
11 It is easy to see why Cushman et al. were misled, given the way that the study results were reported. Despite failing to find a statistically significant difference in the responses each group gave to personal moral dilemmas, Ciaramelli et al. still conducted post hoc analyses “for completeness.” On the basis of these post hoc results, they concluded that “patients were faster and more inclined than normal controls to authorize moral violations in personal moral dilemmas [emphasis mine]” [20, p. 88]. Actually, their data only demonstrate a statistically significant difference in the speed of responses.

12 This observation may be difficult to reconcile with the suggestion, above, that these patients retain knowledge of social norms.


single-process theory that uses currency emotions. He hypothesizes that before we engage in any kind of cost-benefit analysis, we automatically connect the various response options with a positive or negative “gut feeling” that Damasio calls a somatic marker.13 Damasio further describes negative somatic markers as providing “an automated alarm signal which says: Beware of danger ahead if you choose the option which leads to this outcome” [22, p. 173]. A positive somatic marker, by contrast, is “a beacon of incentive” [22, p. 174].14

Damasio suggests that somatic markers can be involved when immediate negative consequences are outweighed by less immediate, positive ones, leading us to suffer current unpleasantness for future benefit; his examples include jogging and graduate school. The decision process he describes is remarkably like the one involved in Greene’s examples of difficult personal moral dilemmas. The key difference is that somatic markers can attach to a wide variety of possible outcomes, much like currency emotions. It is the somatic marker system that, Damasio says, is damaged in VMPFC patients, whereas the dual-process theory posits damage to a specific emotional response that leaves patients more “utilitarian” in their moral decision-making. This means, though, that in order to explain the broader range of deficits observed in VMPFC patients, the dual-process theory will eventually need to explain why both the currency system and the separate alarm-bell system would be damaged by lesions to a single area of the brain. In summary, while the results of the study by Koenigs et al. are consistent with the dual-process theory, they are also consistent with Damasio’s somatic marker hypothesis, which is one “single-process” alternative that has the further advantage of explaining a wider range of deficits observed in VMPFC patients.

The claim that the study by Koenigs et al. is evidence for the dual-process theory has also been criticized by Jorge Moll and Ricardo de Oliveira-Souza [23]. These authors have developed an alternative account of moral decision-making on which emotion and cognition are integrated [24, 25]. With regard to the decision-making patterns of VMPFC patients, they argue that, in order for the lesion studies to provide evidence for Greene’s dual-process theory, a double dissociation must be shown, such that VMPFC damage is associated with an increased tendency to endorse causing personal harm in order to prevent greater harm, while damage to the dorsolateral and/or lateral prefrontal cortex is associated with an increased tendency to decide against causing such harm. The study by Koenigs et al. thus provides only one half of the evidence. Moll and de Oliveira-Souza further note that the patients in this study also had some damage to prefrontal areas (including the frontopolar cortex) that Greene had found to be active during (and thus concluded to be necessary for) “cognitive” processing. They conclude that a better interpretation of the data is one that is consistent with their own model, on which both the VMPFC and frontopolar cortex are necessary for prosocial moral sentiments, which “emerge from integration, instead of conflict, between emotional and cognitive mechanisms” [23, p. 321]. Greene responds to this critique by Moll and de Oliveira-Souza [26], who in return respond to Greene [27]. A detailed discussion of this exchange is beyond the scope of this paper, but it ends with Moll and de Oliveira-Souza acknowledging that the Koenigs et al. study is consistent with Greene’s theory, but reiterating that it is also consistent with their alternative explanation.

Similarly, Greene cites a study by Moretto et al. [28] in support of the dual-process theory [3, 12], though these authors also prefer a single-process interpretation of their data. Moretto et al. compared patients with VMPFC lesions to patients with lesions to other areas of the brain and to healthy controls, and found that patients with VMPFC lesions were both more likely to, and faster to, endorse causing personal harm to one individual to save others from harm [28]. They also measured skin conductance, which reflects emotional arousal in response to the moral dilemmas, and found that both the healthy control group and the control group with lesions to brain areas other than the VMPFC “generate larger SCRs during contemplation of personal moral dilemmas that were associated with affirmative

13 Damasio explains his choice of terminology: “Because the feeling is about the body, I gave the phenomena the technical term somatic state…and because it ‘marks’ an image, I called it a marker…I use the term somatic in the most general sense (that which pertains to the body) and I include both visceral and nonvisceral sensations when I refer to somatic markers” [22, p. 173]. Damasio’s proposed mechanism for attaching emotional significance to possible outcomes is therefore based on a general ability to read bodily reactions, whereas Greene’s emotional process is explained in terms of a dedicated neural system.
14 Damasio’s use of the phrase “alarm signal” should not be confused with Greene’s “alarm bell” terminology. Somatic markers are not, as the next paragraph will show, responses to very specific problems like the thought of causing personal harm; they are attached to all sorts of cognitive representations.


responses [to the question of whether it was morally appropriate to cause harm]” [28, p. 1,894].

Interestingly, despite their use of Greene’s moral dilemmas, they do not interpret their results as supporting the dual-process theory, describing them instead in terms more reminiscent of Damasio’s somatic marker theory: “One possibility…is that emotional responses mark utilitarian choices in personal moral dilemmas with a negative tag, discouraging the selection of these options on future decisions” [28, p. 1,895]. They also claim that, while their results “strongly suggest a causally necessary role of emotions in morally relevant decision-making, a mechanistic account of how, and at which point, emotional states subserved by vmPFC influence moral judgments is still lacking” [28, p. 1,895].

In summary, lesion studies may indicate that individuals with damage to the VMPFC show altered emotional responses to the prospect of causing personal harm but, particularly since they also show other emotional deficits, this does not indicate that there is a separate cognitive process dedicated to processing “personal” harm. Moreover, the interpretations of these studies by Moll and de Oliveira-Souza [23] and by Moretto et al. [28] provide viable alternatives to the dual-process theory, both of which are based on the idea that there is a single system that integrates cognition and emotion.

Variability in Moral Decision-Making

The third type of empirical evidence offered in support of the dual-process theory of moral judgment comes from research that manipulates either the emotional or the “cognitive” aspect of moral decision-making or looks at variations in participants’ responses to trolley-type dilemmas. Valdesolo and DeSteno [29] have shown that people’s moral judgments are affected by the emotional state they are in prior to being presented with a moral dilemma. They induced a positive mood in one group of their research participants by showing them a comedy clip (showing their “neutral” control group a segment from a documentary) and confirmed that the comedy clip did indeed result in higher self-reported positive mood in the group that had watched it. Participants then responded to both the trolley and the footbridge dilemmas, which were presented in random order. The group that had seen the positive clip was significantly more likely to respond to the footbridge dilemma by saying it was appropriate to push the individual in front

of the trolley (though ¾ of the people in this group did still rate the action as inappropriate).

Valdesolo and DeSteno explicitly conducted their study using the framework of the dual-process theory, accepting that “cognitive” and emotional responses to the dilemmas are processed separately. They note that Greene’s work suggests that whether the “utilitarian” answer is given to the footbridge dilemma depends on “individuals’ abilities and motivations to engage in controlled analysis” [29, p. 476] rather than on replying in accordance with their immediate emotional reaction. They further suggest that choice might also be affected by “environment-induced feelings of positivity at the time of judgment,” which “might reduce the perceived negativity, or aversion ‘signal’ of any potential moral violation” [29, p. 476]. Yet this explanation seems at odds with the characterization of the emotional system given by Cushman et al., which emphasizes that the “alarm bell” emotion evoked at the prospect of causing physical harm to another person is “nonnegotiable.” It does not seem likely that such a signal would be diminished by a prior positive mood.15 Moreover, Valdesolo and DeSteno give no mechanism by which induction of positive emotion might reduce perceived negativity, and therefore the experience of negative emotion, in response to the footbridge dilemma. By contrast, the operations of a domain-general “cognitive” system that operates over a number of different representations may well be influenced by irrelevant, but positive, emotions. Therefore, Valdesolo and DeSteno’s findings seem to support a single-process system better than they do the dual-process theory.

Another study that separately manipulated the emotional and cognitive aspects of moral decision-making was conducted by Crockett and colleagues [30], who treated healthy study participants with citalopram, atomoxetine, or placebo prior to presenting them with personal and impersonal moral dilemmas. Citalopram is a selective serotonin reuptake inhibitor (SSRI), while atomoxetine enhances noradrenaline transmission; the drugs were expected to decrease and to increase, respectively, the likelihood that participants would endorse causing personal harm to prevent greater harm. In fact, the group of participants who received citalopram did decide more often than either of the other two groups that causing personal harm was not morally acceptable. There was, however, no difference between the group

15 I will return to this point later in the paper.


that received atomoxetine and the group that received placebo, indicating that (despite the fact that the atomoxetine group performed better on measures of cognitive control) the drug did not affect moral decision-making.

Greene cites this study as evidence for the dual-process theory [3, 21], but, to the contrary, Crockett et al. conclude that it does not, saying that their findings “do not support the hypothesis that executive control processes compete with emotional reactions to harm in moral judgment” [30, p. 17,436]. They do, however, support the claim that enhancing emotional responses makes strong emotional responses more likely to influence moral decision-making, which is consistent with a single-process theory. Crockett et al. themselves suggest that these results are due to the enhancement of a prosocial moral response of the kind hypothesized by Moll and colleagues [23–25].

Perhaps the strongest evidence for the dual-process theory comes from the “cognitive load” study conducted by Greene and his colleagues [31]. This study used difficult personal moral dilemmas (i.e., those that are thought to activate both the emotional and the “cognitive” systems) and compared both the types of judgments made and the reaction times in a cognitive load versus a control condition. For the cognitive load condition, participants performed a second task, looking for a target digit in a stream of numbers scrolling across the bottom of the computer screen, at the same time as they responded to the moral dilemmas. The researchers predicted that increased load on the “cognitive” system would make participants both slower and less likely to endorse causing personal harm than when they were in the “no load” condition.

Greene et al. did find that participants took significantly longer in the cognitive load condition to give the kind of welfare-maximizing responses thought to be the output of the “cognitive” system than they did in the control condition. By contrast, there was no difference in reaction times across the two conditions for the “emotional” responses. These results were consistent with the predictions made on the basis of the dual-process theory. Greene et al. suggest that the absence of an effect of cognitive load on the time taken to make emotional judgments is an indication that they are performed by a second, dedicated system, so that their results support the dual-process theory. Contrary to their second hypothesis, however, there was no difference in the proportion of “cognitive” versus emotional responses in the

two conditions; participants were equally likely to give welfare-maximizing responses in both conditions, even though they took longer to do so when concurrently performing a second task. To describe these results in the terms used by Cushman et al. [11], cognitive load does not affect whether the “cognitive” system will override the alarm bell response, even though it does take longer for the system to produce its characteristic, welfare-maximizing output.

Yet a single-process theory can also account for these results. It would also predict that increasing cognitive load would result in slower processing, and that this would be especially true for deliberations that require overcoming a strong emotional response and weighing the outcomes of different possible actions. The only thing that remains to be explained is why participants were not slower to respond in the cognitive load condition when they answered in accordance with their strong negative response to causing personal harm. And a single-process theory can do this as well. If individuals are already inclined to respond in accordance with this emotional response (and recall that Greene et al. found that increasing cognitive load did not affect people’s tendency to do so), then no further processing is needed, and a negatively weighted currency emotion (elicited in response to the prospect of causing someone personal harm) drives a quick response to the dilemma. In fact, a single process might even be more likely to show an effect of cognitive load on reaction times than two separate processes, because on a single-process account the heavily weighted currency emotion must be taken into account in deliberations; on the dual-process theory, since emotional responses do not negotiate, the emotional response is not part of one’s deliberations.
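The single-process account of the load findings can be summarized in a toy reaction-time sketch: if the prepotent currency emotion already dictates the answer, no further deliberation is needed and load adds nothing, whereas overriding it requires deliberation, which is slowed by a concurrent task. The timing values below are invented purely for illustration and are not fitted to the study’s data.

```python
# Toy single-process reaction-time sketch for the cognitive-load findings.
# Durations are arbitrary illustrative units, not empirical estimates.

def reaction_time(overrides_emotion, under_load):
    """If the answer follows the prepotent currency emotion, respond at once;
    overriding it requires deliberation, which a second task slows down."""
    base = 1.0
    if not overrides_emotion:
        return base  # emotion-consistent answer: no deliberation, load-insensitive
    deliberation = 2.0
    load_penalty = 1.5 if under_load else 1.0
    return base + deliberation * load_penalty

# Emotion-consistent ("nonutilitarian") responses: same speed either way.
print(reaction_time(False, under_load=False), reaction_time(False, under_load=True))
# Welfare-maximizing responses: slower overall, and slower still under load.
print(reaction_time(True, under_load=False), reaction_time(True, under_load=True))
```

On this sketch, a single process reproduces both observed patterns: load lengthens only the responses that require overriding the heavily weighted emotion.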

It is not clear why increased cognitive load does not affect whether participants tend to give emotional or “cognitive” responses to difficult personal moral dilemmas. This phenomenon, however, needs to be explained whether one endorses a single- or a dual-process theory. A number of studies have attempted to better understand the factors that promote emotional versus “cognitive” responses, either in terms of differences in individual traits or as a result of features of the experimental design. Greene also cites these studies as supporting the dual-process theory [3, 21].

Bartels [32] and Moore et al. [33] have both looked at whether individual differences relevant to cognitive processing influence people’s responses to trolley-type dilemmas. Bartels measured participants’ tendency to


enjoy and to rely on deliberation, as well as their faith in intuition, and found that participants who reported a reliance on intuition tended to be less likely to endorse causing personal harm to prevent harm to others, while participants who reported a reliance on deliberation were more likely to do so.16 Moore et al. used scores on a working memory test to divide participants in their study into groups with high, medium, and low cognitive control (making the explicit assumption that working memory is a reliable indicator of cognitive control). They hypothesized that high working memory would be associated with a tendency to endorse causing personal harm to prevent greater harm, because greater cognitive control would mean that participants were better able to overcome conflicting responses prompted by the strongly negative emotional reaction to causing personal harm and the prospect of preventing greater harm. Moore et al. found that participants with better working memory scores were more likely to endorse causing personal harm only in a subset of dilemmas in which the person they would have to kill in order to prevent greater harm would inevitably die. They also found, “in direct contradiction to the dual-process model” [33, p. 556], that greater working memory capacity was associated with longer reaction times to dilemmas where causing personal harm was judged as appropriate. While the results of both of these studies suggest that (self-reported) cognitive style and working memory capacity may predict people’s responses to trolley-type problems, they do not actually provide evidence that two distinct cognitive/neural processes underlie these differences. Instead, the studies could simply show that some people are more inclined to engage in deliberation, while others tend to go with their first impression. This interpretation of the results is consistent with the dual-process theory, but does not require the theory to be true.17

Other studies design the experimental protocol to prompt or to discourage deliberation. Paxton et al. tested participants’ responses to difficult personal dilemmas either before or after they completed a cognitive test on which people’s intuitive answers are incorrect [35]. They found that those participants who completed the cognitive test before, rather than after, responding to the moral dilemmas were more likely to override their initial negative response to causing personal harm and thus to endorse killing one person to save others. Suter and Hertwig manipulated the time available to respond to moral dilemmas, either by giving study participants a short (8 s) or long (3 min) time to consider their responses [36]. In a second experimental group, they asked participants either to respond quickly based on their intuitive response to a dilemma or to think carefully about the dilemma before responding, and found that participants who were under a time constraint or who were instructed to answer based on their first impression were more likely to say that it was wrong to cause harm to others. By contrast, participants who were given extra time to think about the dilemma or asked to deliberate before answering were more likely to endorse causing harm to one person in order to prevent harm to others.

The authors of both papers interpret their results as being consistent with Greene’s dual-process theory (and in fact Greene is one of the authors of [35]). The idea is that encouraging deliberation, either by priming participants to doubt their first impressions or by giving them ample time to deliberate on their own, prompts them to engage the “cognitive” system. But these studies simply show that circumstances can be manipulated to promote deliberation, not that these circumstances engage a distinct cognitive system. They are therefore also compatible with a single-process theory, on which the first impression is merely a strongly weighted (currency) emotional response, rather than the result of the operation of a distinct “alarm bell” system.

In summary, this section has considered the empirical evidence offered in support of the dual-process theory, which comes from neuroimaging research, from studies of patients with lesions to the VMPFC, and from experiments that manipulate decision-making conditions and/or look at variability in individual responses to trolley-type dilemmas. Overall, this research does show that some versions of trolley-type problems evoke a strong emotional response, while others do not, and also (unsurprisingly) that difficult personal moral dilemmas

16 Bartels follows Greene in identifying emotional/intuitive judgments with deontology and deliberate “cognitive” judgments with utilitarianism. Because I am not concerned here with the extension of Greene’s dual-process theory to normative moral theories, I will not consider this aspect of Bartels’s paper.
17 Oddly, in a later paper these authors seem much more sympathetic to Greene’s theory, though they do not address the reason for this change [34]. The primary conclusion of this second paper is that the distinction between personal and impersonal moral dilemmas is, contra McGuire et al. [9], reliable. Since Greene’s own discussions of the dual-process theory have moved away from this distinction, I will not discuss the second paper by Moore et al. any further.

No Need for Alarm 311

require more processing than easy ones. But it does not establish that these differences reflect the operation of two different systems, one of which is an “alarm bell” dedicated to responding to the prospect of causing personal harm. It is at least as plausible to interpret the evidence as supporting a theory that posits a single-process system using “currency” emotions that are integrated with cognitive representations.

Given that the evidence is largely consistent with either alternative, there may be other reasons to choose one over the other. I believe that the evidence actually favors an integrated, single-process account, largely because the study by Valdesolo and DeSteno shows that a generally elevated mood affects the purported emotional process [29]. Moreover, a number of the scientists who conducted these studies have interpreted their results as supporting a single-process account. A third reason to conclude that there is only one process is that, in general, simplicity is a virtue of theories, and it is more parsimonious to hypothesize one emotion system than two. There is, however, also an additional possible argument for the dual-process theory. If the strong emotional response has characteristics associated with modularity, this would be a strong reason to think that there is a dedicated psychological system that responds to the thought of causing personal harm. In the next section, I consider this possibility, but I conclude that the emotional response to the thought of causing personal harm is not likely to be modular.

Is the Emotional Response Modular?

There are a number of characterizations of modules in the philosophical and cognitive science literatures. Peter Carruthers [37] shows that claims about modularity lie along a spectrum. At the conservative end lie Jerry Fodor’s rather stringent criteria for modularity [38]. Fodor lists nine characteristics that a cognitive process must possess in order to be a module, though he does acknowledge that at least some of these characteristics can be “more or less” true of a particular process. At the other end of the spectrum, a module is understood as being any dissociable cognitive process. Between these extremes, and also most closely matching Greene’s description of the emotional process posited by his theory, lies the characterization of modules given in evolutionary psychology, as articulated by Leda Cosmides and John Tooby. Cosmides and Tooby say that, according to

evolutionary psychology, the mind is “a set of information processing machines that were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors” [39]. Moreover, our minds have remained largely unchanged since this time; as Cosmides and Tooby put it, “our modern skulls house a stone age mind.”18

The “machines” posited by evolutionary psychology have a number of characteristics that are important for putting Greene’s discussion of the dual-process theory into context. As noted above, evolutionary psychology claims that a number of the processes in our minds are adaptations to our prehistoric environment and that these processes still respond in those “stone age” ways despite the many changes in our environment. Moreover, because these processes use dedicated circuits (i.e. modules) that specialize in solving a particular problem, they respond only to those specific inputs that are relevant to that adaptive problem, but not to others. This property is often referred to as domain specificity. As Cosmides and Tooby say, there are “specialized neural circuits because the same mechanism is rarely capable of solving different adaptive problems.” We therefore have modules dedicated to solving different kinds of problems, which respond specifically to “stably recurring properties of [our] ancestral worlds”. For example, they say, we have specialized circuits that allow us to choose nutritious food, but that are useless for choosing a mate. Inputs relevant to one of these modules do not cause activity in the other.

In addition, the operation of these modules is not open to conscious introspection; Cosmides and Tooby emphasize that this means that much of what comes automatically to us is actually the result of complex processing within a module. This property of modules is often referred to as cognitive impenetrability. A related property is that of informational encapsulation. Robbins [42] clarifies the difference between them, pointing out that “cognitive impenetrability” means specifically that processing within a module is not affected by our beliefs and other cognitive states. If a module is cognitively impenetrable and also not influenced by “lower level” perceptual processing, it is said to be informationally encapsulated. Finally, because the operation of a module is not affected by other kinds of processing

18 Evolutionary psychology is itself controversial. (For criticisms, see [40] and [41], among others.) I will not address this controversy here, but will simply examine whether Greene’s hypothesized module meets the criteria for modularity that have been set out by evolutionary psychologists themselves.

going on at the same time, modules operate not just automatically, but also quickly.

In summary, evolutionary psychology says that modules are (1) unchanged since our “stone age” days, (2) domain-specific, (3) cognitively impenetrable, and (4) fast. To what extent does Greene’s discussion of the emotional process reflect this characterization of a module? Greene does, in several places, speculate about the evolutionary origin of our strong emotional response to causing personal harm. For one thing, he notes, “[t]he rationale for distinguishing between personal and impersonal forms of harm is in large part evolutionary” ([43, p. 59]; see also [44, p. 345]). Briefly, ethologists have observed great apes who live “intensely social lives” that appear to be guided by emotions, but who do not have the human ability to engage in abstract reasoning. These apes also have the ability to cause harm to each other, but depend on each other for survival. They may therefore have evolved responses that regulate their social behavior in ways that minimize violence. Since we share a common ancestor with these ape species, “it should come as no surprise if we [also] have innate responses to personal violence that are powerful, but rather primitive” [43, p. 59]. Yet since human beings are also clearly capable of “cognitive” reasoning, “it would be surprising as well if this capacity played no role in human moral judgment” [7, p. 390].

Unlike Cosmides and Tooby, however, Greene is not committed to the idea that the emotional response is unchanged since the “Stone Age.” In his most recent work, he describes the emotional response as an automatic setting; these settings are indeed partly genetic, but they are also further shaped by culturally-transmitted knowledge and by personal experience [3, p. 143]. Thus while our emotional response may well have been an adaptation that emerged in our ancestors, it need not be exactly the same as it was in our evolutionary past.

With regard to domain specificity, Greene and his colleagues often talk about the emotional response as if it is domain-specific, though they do so inconsistently. Greene sometimes suggests that the emotional alarm bell responds to very specific features of the environment that were also present in our evolutionary past. In discussing our very different responses to the switch and the footbridge dilemmas, he suggests that we do not have a strong emotional response to the switch dilemma, in which harm is caused in an impersonal fashion, because our ancestral environment “did not include opportunities to harm other individuals using

complicated, remote-activated machinery” [44, p. 345]. The footbridge dilemma, by contrast, involves the opportunity for a kind of physical harm that would have been more familiar to our ancestors. We have therefore “very likely” inherited an emotional instinct to refrain from pushing others into harm’s way. The specificity of the emotional response is also most consistent with the most recent version of Greene’s theory [3], on which it is characterized as one of a number of different automatic settings.

However, Greene and his colleagues also sometimes characterize the emotional response in broader terms, as responding to a range of stimuli that our ancestors would have encountered. For example, Greene has suggested that the emotional process is “selectively triggered by personal moral violations, perceived unfairness, and, more generally, socially significant behaviors that existed in our ancestral environment” [44, p. 350].19

It is not clear, then, what Greene would say about the domain specificity of the emotional process, though, again, his most recent work suggests that he thinks it is indeed domain specific. Note, though, that the inconsistency across different papers raises similar issues to the ones Kahane discusses when he distinguishes between modest and grand versions of Greene’s theory [12]. To the extent that the emotional process is hypothesized to occur only in the context of decisions involving personal harm, it is domain specific. I am concerned here only with the modest dual-process theory, where, by definition, the emotional response is domain-specific. This means, though, that evidence of strong emotional reactions to perceived unfairness or other social violations cannot be taken as evidence for the dual-process theory.

Although the features often go together, domain specificity is distinct from informational encapsulation and cognitive impenetrability; the former characteristic refers to the inputs that activate a module, while the latter two refer to processing within the module. The study by Valdesolo and DeSteno, discussed above, shows that the emotional response is not informationally encapsulated, since elevating study participants’ moods (by showing them a funny film clip) increased their

19 Greene’s suggestion that the system responds to perceived unfairness comes from work by Sanfey et al. [45] that shows that perceived unfairness is associated with activation in brain areas similar to those activated by the footbridge problem. See Crockett et al. [30] for an alternative explanation of the emotional response to unfairness.

tendency to endorse causing personal harm in the footbridge dilemma, indicating that processes related to affect in general have an effect on the specific emotional response posited by the dual-process theory.

It is the cognitive penetrability of the emotional response, however, that is most relevant to Greene’s recent work. In his book, he explicitly describes the emotional response as “a cognitive subsystem, a ‘module’, that monitors our behavioral plans and sounds an emotional alarm bell when we contemplate harming other people” [3, p. 224].20 He further says that one of the critical features of this module is that “the system’s internal operations are not accessible to introspection” [3, p. 376, note to p. 224]. Thus, in Robbins’s terms, the alarm bell emotional response is cognitively impenetrable.

Greene does not discuss the evidence for the cognitive impenetrability of the emotional response at that point in his argument, but it can be gleaned from other places in his work. First, there is the claim that the alarm bell response “does not negotiate” and so must be overridden, rather than being open to incorporation in a larger cost-benefit analysis. The operation of the emotional system is therefore not affected by other things that we believe (including, of course, the belief that pushing someone in front of a runaway trolley will save the lives of several other people). Second, Greene et al. cite the phenomenon of moral dumbfounding [46] as evidence that supports their theory. Moral dumbfounding occurs when research participants find themselves unable to explain why they have a particular response to a moral dilemma, even though they are confident that this response is correct.21 Crucially, moral dumbfounding occurs when participants give responses to a dilemma that do not accord with the “welfare principle” that is supposed to guide the operation of the cognitive system

[47]. By contrast, participants generally do justify “utilitarian” responses, e.g. to the switch problem, by appealing to the explicit principle of minimizing harm. The fact that we can explain our “cognitive” responses but not our emotional ones is supposed to show that the two types of response are produced by distinct systems, only one of which, it would appear, is open to introspection.

The fact that moral dumbfounding occurs is consistent with the dual-process theory, but does not yet provide evidence for the existence of a distinct emotional response to causing personal harm. Recall that the two processes posited by the theory are a cognitively impenetrable alarm bell response and a “cognitive” system that involves explicit reasoning about representations that are tagged with currency emotions. But the process by which “cognitive” representations get tagged with currency emotions seems to be just as closed to introspection as the emotional response associated with the prospect of causing personal harm. Certainly this is how Damasio describes somatic markers (as quoted above). Whatever is introspectively inaccessible here, it is not unique to the purported emotional system. Not only that, but the distinction between negotiating and overriding, which is also central to distinguishing the two processes, does not hold up to closer scrutiny. To follow through on the example given by Cushman et al., sometimes it is easy to resist the Good Humor truck. Other times we really want the damned ice cream. In the latter case, our ultimate decision to forgo the treat looks more like a decision to push the man off of the footbridge than a cool weighing of options. Both cases require overcoming a strong desire. If this sketch is correct, then there is less difference between the operations of alarm bell and currency emotions than Greene and his colleagues think.

There is, then, no good reason to think that the emotional response to the thought of causing personal harm is domain specific or informationally encapsulated (with regard to other emotions), and to the extent that its operation is cognitively impenetrable, it is so in the same way as the emotional responses associated with the “cognitive” system seem to be. This leaves one final characteristic of modules to be considered, and the emotional response is indeed fast. On its own, though, the speed of the emotional response is not enough to suggest that it is the result of the operation of a module. Moreover, for many of the dilemmas Greene’s group has used, participants who give purportedly “cognitive” responses are equally quick to answer. Recall that

20 Greene describes the emotional response as modular in the context of his “modular myopia” hypothesis. This hypothesis is one of the major new theoretical developments presented in the book. Briefly, the “myopia” of the emotional response module occurs because it is not able to see harmful side effects of contemplated actions (thus explaining why the switch dilemma does not trigger a strong emotional response). Because this new development does not change the dual-process theory itself, I do not consider it further in this paper.
21 Fodor draws a distinction between informational encapsulation (other cognitive processes do not influence the operation of the module) and inaccessibility to central processes (the operation of the module is not open to introspection). I have suggested here that Greene thinks that both of these are true of the emotional process: the first because the alarm bell is nonnegotiable and the second because of moral dumbfounding.

McGuire et al. showed that the difference in reaction times between impersonal and personal dilemmas was driven by a small number of the personal dilemmas in which there was no good “cognitive” case for causing harm to oppose the emotional response [8]. And in the “difficult” personal moral dilemmas that were used in later research, Greene et al. observed that participants who generally tended to give the kind of welfare-maximizing responses that reflect “cognitive” processing did so more quickly than they made “emotional” judgments [31, p. 1,152]. Speed of response is therefore not necessarily an indication that a module is responsible for output. Hanno Sauer notes in a discussion of Greene’s results that “[p]eople’s intuitions have already been shaped by prior moral reasoning and by habitualized intuitive responses” [15, p. 792]; we are, unsurprisingly, quick to give answers that we are in the habit of giving.

In this section, I have considered the possibility that evolutionary psychology provides theoretical support for the dual-process theory because the hypothesized emotional process has the characteristics of an evolutionarily-conserved module. Yet while Greene et al. seem to want to say that the emotional response to the prospect of causing personal harm is the result of the operation of a module, it does not have the key characteristics associated with modularity.

Conclusion

Once we have acknowledged that there is an important emotional component to cognitive reasoning, there is less reason to view moral reasoning about situations involving harm as having two distinct component processes. The most recent discussions of the dual-process theory draw a distinction between alarm bell emotions and currency emotions. I have argued that, on the assumption that some of our moral reasoning is conducted by a domain-general system that integrates emotions with “cognitive” representations, there is no good reason to posit a distinct system that underlies our strong emotional response to the prospect of causing personal harm. Rather, the evidence given in favor of the dual-process theory better supports the conclusion that this emotional response differs from other emotions only in intensity. To borrow Greene’s terminology, an alarm-bell emotion is nothing more than a very strongly weighted currency response.

Acknowledgments Work on this paper was supported by a National Endowment for the Humanities Summer Stipend #FT-59871-12. I am grateful for feedback on earlier versions of this paper provided by Ginger Hoffman, Dale Miller, and the participants at the second annual New Scholars in Bioethics (NSiB) Symposium: Kirstin Borgerson, Danielle Bromwich, Michael Garnett, Douglas MacKay, and Joseph Millum. Three anonymous reviewers for this journal also provided extremely valuable feedback.

References

1. Greene, J.D., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293: 2105–2108.

2. Greene, J.D. 2008. The Secret Joke of Kant’s Soul. In Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development, ed. W. Sinnott-Armstrong, 35–79. Cambridge: MIT Press.

3. Greene, J.D. 2013. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press.

4. Greene, J.D., and J.D. Cohen. 2006. For the Law, Neuroscience Changes Nothing and Everything. In Law and the Brain, ed. S. Zeki and O. Goodenough, 207–226. New York: Oxford University Press.

5. Greene, J.D. 2011. Social Neuroscience and the Soul’s Last Stand. In Social Neuroscience: Toward Understanding the Underpinnings of the Social Mind, ed. A. Todorov, S. Fiske, and D. Prentice, 263–273. New York: Oxford University Press.

6. Waldmann, M.R., J. Nagel, and A. Wiegmann. 2012. Moral Judgment. In The Oxford Handbook of Thinking and Reasoning, ed. K.J. Holyoak and R.G. Morrison, 274–299. New York: Oxford University Press.

7. Greene, J.D., L.E. Nystrom, A.D. Engell, J.M. Darley, and J.D. Cohen. 2004. The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron 44: 389–400.

8. McGuire, J., R. Langdon, M. Coltheart, and C. Mackenzie. 2009. A Reanalysis of the Personal/Impersonal Distinction in Moral Psychology Research. J Exp Soc Psychol 45(3): 577–580.

9. Greene, J.D. 2009. Dual-Process Theory and the Personal/Impersonal Distinction: A Reply to McGuire, Langdon, Coltheart, and Mackenzie. J Exp Soc Psychol

10. Berker, S. 2009. The Normative Insignificance of Neuroscience. Philos Public Aff 37(4): 293–329.

11. Cushman, F., L. Young, and J.D. Greene. 2010. Multi-System Moral Psychology. In The Oxford Handbook of Moral Psychology, ed. J. Doris, G. Harman, S. Nichols, J. Prinz, W. Sinnott-Armstrong, and S. Stich, 47–71. New York: Oxford University Press.

12. Kahane, G. 2012. On the Wrong Track: Process and Content in Moral Psychology. Mind Lang 27(5): 519–545.

13. Klein, C. 2011. The Dual Track Theory of Moral Decision-Making: A Critique of the Neuroimaging Evidence. Neuroethics 4: 143–162.

14. Pessoa, L. 2013. The Cognitive-Emotional Brain: From Interactions to Integration. Cambridge: MIT Press.

15. Sauer, H. 2012. Morally Irrelevant Factors: What’s Left of the Dual-Process Model of Moral Cognition? Philos Psychol 26(5): 783–811.

16. Fine, C. 2010. Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference. New York: W.W. Norton & Company.

17. Shallice, T. 1988. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press.

18. Mendez, M.F., E. Anderson, and J.S. Shapira. 2005. An Investigation of Moral Judgment in Frontotemporal Dementia. Cogn Behav Neurol 18(4): 193–197.

19. Koenigs, M., L. Young, R. Adolphs, D. Tranel, F. Cushman, M. Hauser, and A. Damasio. 2007. Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments. Nature 446(7138): 908–911.

20. Ciaramelli, E., M. Muccioli, E. Làdavas, and G. di Pellegrino. 2007. Selective Deficit in Personal Moral Judgment Following Damage to Ventromedial Prefrontal Cortex. Soc Cogn Affect Neurosci 2: 84–92.

21. Greene, J.D. 2010. Notes on “The Normative Insignificance of Neuroscience” by Selim Berker. (Draft). Retrieved from http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Notes-on-Berker-Nov10.pdf on February 1, 2014.

22. Damasio, A. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam.

23. Moll, J., and R. de Oliveira-Souza. 2007. Moral Judgments, Emotions, and the Utilitarian Brain. Trends Cogn Sci 11(8): 319–321.

24. Moll, J., R. Zahn, R. de Oliveira-Souza, F. Krueger, and J. Grafman. 2005. The Neural Basis of Human Moral Cognition. Nat Rev Neurosci 6: 799–809.

25. Moll, J., R. de Oliveira-Souza, R. Zahn, and J. Grafman. 2008. The Cognitive Neuroscience of Moral Emotions. In Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development, ed. W. Sinnott-Armstrong, 1–18. Cambridge, MA: MIT Press.

26. Greene, J.D. 2007. Why are VMPFC Patients More Utilitarian? A Dual-Process Theory of Moral Judgment Explains. Trends Cogn Sci 11(8): 322–323.

27. Moll, J., and R. de Oliveira-Souza. 2007. Response to Greene: Moral Sentiments and Reason: Friends or Foes? Trends Cogn Sci 11(8): 323–324.

28. Moretto, G., E. Làdavas, F. Mattioli, and G. di Pellegrino. 2009. A Psychophysiological Investigation of Moral Judgment After Ventromedial Prefrontal Damage. J Cogn Neurosci 22(8): 1888–1899.

29. Valdesolo, P., and D. DeSteno. 2006. Manipulations of Emotional Context Shape Moral Judgment. Psychol Sci 17(6): 476–477.

30. Crockett, M.J., L. Clark, M. Hauser, and T.W. Robbins. 2010. Serotonin Selectively Influences Moral Judgment and Behavior Through Effects on Harm Aversion. Proc Natl Acad Sci 107(40): 17433–17438.

31. Greene, J.D., S.A. Morelli, K. Lowenberg, L.E. Nystrom, and J.D. Cohen. 2008. Cognitive Load Selectively Interferes With Utilitarian Moral Judgment. Cognition 107(3): 1144–1154.

32. Bartels, D.M. 2008. Principled Moral Sentiment and the Flexibility of Moral Judgment and Decision Making. Cognition 108: 381–417.

33. Moore, A.B., B.A. Clark, and M.J. Kane. 2008. Who Shalt Not Kill? Individual Differences in Working Memory Capacity, Executive Control, and Moral Judgment. Psychol Sci 19(6): 549–557.

34. Moore, A.B., N.Y.L. Lee, B.A.M. Clark, and A.R.A. Conway. 2011. In Defense of the Personal/Impersonal Distinction in Moral Psychology Research: Cross-Cultural Validation of the Dual Process Model of Moral Judgment. Judgment Decis Making 6(3): 186–195.

35. Paxton, J.M., L. Ungar, and J.D. Greene. 2012. Reflection and Reasoning in Moral Judgment. Cogn Sci 36: 163–177.

36. Suter, R.S., and R. Hertwig. 2011. Time and Moral Judgment. Cognition 119: 454–458.

37. Carruthers, P. 2006. The Architecture of the Mind. Oxford: Oxford University Press.

38. Fodor, J. 1983. The Modularity of Mind. Cambridge: MIT Press.

39. Cosmides, L., and J. Tooby. 1997. Evolutionary Psychology: A Primer. http://www.psych.ucsb.edu/research/cep/primer.html

40. Buller, D.J. 2005. Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature. Cambridge: MIT Press.

41. Richardson, R.C. 2007. Evolutionary Psychology as Maladapted Psychology. Cambridge: MIT Press.

42. Robbins, P. 2009. Modularity of Mind. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/modularity-mind/

43. Greene, J. 2005. Emotion and Cognition in Moral Judgment: Evidence from Neuroimaging. In Neurobiology of Human Values, ed. J.P. Changeux, A.R. Damasio, W. Singer, and Y. Christen, 57–66. Berlin: Springer.

44. Greene, J.D. 2005. Cognitive Neuroscience and the Structure of the Moral Mind. In The Innate Mind: Structure and Contents, ed. S. Laurence, P. Carruthers, and S. Stich, 338–352. New York: Oxford University Press.

45. Sanfey, A., J. Rilling, J. Aronson, L. Nystrom, and J. Cohen. 2003. The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science 300: 1755–1758.

46. Björklund, F., J. Haidt, and S. Murphy. 2000. Moral Dumbfounding: When Intuition Finds No Reason. Lund Psychol Rep 2: 1–23.

47. Cushman, F.A., L. Young, and M.D. Hauser. 2006. The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm. Psychol Sci 17(12): 1082–1089.
