
9

Agency as a Marker of Consciousness

TIM BAYNE

The primary aim, object and purpose of consciousness is control.

Lloyd Morgan

1. INTRODUCTION

One of the central problems in the study of consciousness concerns the ascription of consciousness. We want to know whether certain kinds of creatures—such as nonhuman animals, artificially created organisms, and even members of our own species who have suffered severe brain damage—are conscious, and we want to know what kinds of conscious states these creatures might be in if indeed they are conscious. The identification of accurate markers of consciousness is essential if the science of consciousness is to have any chance of success.

An attractive place to look for such markers is in the realm of agency. Consider the infant who reaches for a toy, the lioness who tracks a gazelle running across the savanna, or the climber who searches for a handhold in the cliff. In each case, it is tempting to assume that the creature in question is conscious of the perceptual features of their environment (the toy, the gazelle, the handhold) that guide their behavior. More generally, we might say that the exercise of intentional, goal-directed agency is a reliable guide to the presence of consciousness. To put it in a slogan, we might be tempted to treat agency as a marker of consciousness (AMC).

Although it has its advocates, AMC has come in for sustained criticism, and a significant number of theorists have argued that the inference from agency to consciousness is illegitimate on the grounds that much of what we do is under the control of unconscious behavioral guidance systems. These theorists typically hold that the only sound basis for the ascription of consciousness—at least when it comes to human beings—is introspective report. Frith and colleagues (1999) claim that “to discover what someone is conscious of we need them to give us some form of

09_AndyClark_09.indd 160 10/11/2012 7:36:36 PM

report about their subjective experience” (107); Weiskrantz (1997) suggests that “we need an off-line commentary to know whether or not a behavioural capacity is accompanied by awareness” (84); and Naccache (2006) claims that “consciousness is univocally probed in humans through the subject’s reports of his or her own mental states” (1396).

The central goal of this chapter is to assess the plausibility of the case against AMC. My aim is to provide a framework for thinking about the ways in which agency might function as a marker of consciousness, and to argue that the case against AMC is not nearly as powerful as it is often thought to be. The chapter divides into two rough halves. The first half examines various ways in which agency might function as a marker of consciousness: section 2 focuses on the notion of a marker of consciousness itself, while section 3 examines the relationship between agency and consciousness. Section 4 forms a bridge between the two halves of the chapter. Here, I contrast AMC with the claim that the only legitimate marker of consciousness is introspective report. The second half of the chapter examines two broad challenges to AMC: section 5 examines the challenge from cognitive neuroscience, while section 6 addresses the challenge from social psychology. I argue that although the data provided by these disciplines provide useful constraints on the application of AMC, they do not undermine the general approach itself.¹

2. CONSCIOUSNESS AND ITS MARKERS

A number of quite distinct things might be meant by the claim that agency is a marker of consciousness. Let us begin by considering the notion of consciousness.

The kind of consciousness in which I am interested is phenomenal consciousness—the kind of consciousness that there is “something that it is like” to enjoy (Nagel 1974). We can distinguish two aspects to phenomenal consciousness, what we might call creature consciousness and state consciousness. Creature consciousness is the property that a creature has when there is something it is like to be that creature—something for the creature itself. Conscious states, by contrast, are ways in which the creature’s consciousness is modulated. Such states come in a wide variety of forms and are distinguished from each other by reference to their phenomenal characters. What it’s like to smell lavender is distinct from what it’s like to taste coffee, and each of these conscious states is in turn distinct from that associated with the experience of backache. We can think of conscious states as determinates of creature consciousness, in roughly the way in which being green is a determinate of being colored.

The distinction between creature consciousness and state consciousness brings with it a distinction between two forms of “the” problem of other minds: we need criteria for the ascription of consciousness as such to a creature (“creature consciousness”), and we need criteria for the ascription of particular types of conscious states to a creature (“state consciousness”). Although these two demands are distinct, they are likely to be intimately related. For one thing, the ascription of state consciousness will automatically warrant the ascription of creature consciousness, for—trivially—any creature that is in a conscious state

THE SENSE OF AGENCY

will itself be conscious. Further, although the converse entailment does not hold (for markers of creature consciousness need not be markers of state consciousness), in general it is likely that the ascription of creature consciousness will be grounded in the ascription of state consciousness. For example, one’s evidence that an infant is conscious will typically be grounded in evidence that the infant is enjoying certain types of conscious states, such as pain or visual experiences. With these points in mind, my focus in this chapter will be on the ascription of conscious states.

Thus far I have referred to markers of state consciousness in the abstract, but it is far from clear that we should be looking for a single marker—or even a single family of markers—of conscious states of all kinds. In fact, it is likely that such markers will need to be relativized in a number of ways. First, there is reason to think that the markers of consciousness will need to be relativized to distinct conscious state types, for it seems likely that the difference that consciousness makes to the functional profile of a mental state depends on the kind(s) to which that mental state belongs. To put the point somewhat crudely, we may need one family of markers for (say) conscious perceptions, another for conscious thoughts, and a third for conscious emotions. Indeed, our taxonomy might need to be even more fine-grained than this, for we might need to distinguish markers for (say) conscious vision from markers for conscious olfaction.

Second, markers of consciousness may need to be relativized to particular types of creatures. If consciousness is multiply realized, then the markers of consciousness that apply to the members of one species may not apply to the members of another. And even if there are species-invariant markers of consciousness, it may be difficult to determine whether any putative marker of consciousness is in fact species invariant or whether it applies only to the members of a certain species or range of species.

Third, markers of consciousness may need to be relativized to what are variously referred to as “background states,” “levels,” or “modes” of consciousness, such as normal wakefulness, delirium, REM sleep, hypnosis, the minimally conscious state, and so on. Unlike the fine-grained conscious states that can be individuated in terms of their content (or phenomenal character), these states modulate consciousness in broad, domain-general ways. Given the complex interactions between a creature’s “mode” of consciousness and its various (fine-grained) states of consciousness, it is highly likely that many of our markers for the latter will need to be relativized to the former in various ways. In other words, what counts as a marker of (say) visual experience may depend on whether we are considering creatures in a state of normal wakefulness or (say) creatures in a state of REM sleep.

My focus will be on the question of whether agency might count as a marker of consciousness in the context of human beings in the normal waking state. The further we depart from such contexts, the less grip we have on what might be a reasonable marker of consciousness (Block 2002). So, although I will consider the question of whether agency might be regarded as a marker of consciousness in other species or in human beings that have suffered severe neurological damage, my primary interest will be with cognitively unimpaired human beings in the normal waking state.

3. INTENTIONAL AGENCY

Agency is no less multifaceted than consciousness. The folk-psychological terms that we have for describing agency—“intentional agency,” “goal-directed agency,” “voluntary agency,” “deliberative agency,” and so on—are imprecise in various ways, and it is unclear which, if any of them, will find gainful employment within the scientific study of agency (Prochazka et al. 2000). We might hope that advances in the scientific understanding of agency will bring with them a more refined taxonomy for understanding agency, but at present, we are largely reliant on these folk-psychological categories. From within this framework, the most natural point of contact with consciousness is provided by intentional agency. The driving intuition behind AMC is that we can use a creature’s intentional responses to its environment as a guide to the contents of its consciousness. But what is an intentional action?

Most fundamentally, intentional actions are actions that are carried out by agents themselves and not some subpersonal or homuncular component of the agent. This point can be illustrated by considering the control of eye movements. Some eye movements are controlled by low-level stimulus-driven mechanisms that are located within the superior colliculus; others are directed by high-level goal-based representations (Kirchner & Thorpe 2006; de’Sperati & Baud-Bovy 2008). Arguably, the former are more properly ascribed to subpersonal mechanisms within the agent, while the latter qualify as things that the agent does—as instances of “looking.” In drawing a distinction between personal agency and subpersonal motor control, I am not suggesting that we should think of “the agent” as some kind of homunculus, directing the creature’s behavior from its director’s box within the Cartesian theater. That would be a grave mistake. As agents we are not “prime movers” but creatures that behave in both reflective (or “willed”) and reactive (or “stimulus-driven”) modes depending on the dictates of the environment. The agent is not to be identified with the self of rational reflection or pure spontaneity but is to be found by looking at how the organism copes with its environment. The exercise of willed agency draws on distinct neural circuits from those implicated in stimulus-driven agency (Jahanshahi & Frith 1998; Lengfelder & Gollwitzer 2001; Mushiake et al. 1991; Obhi & Haggard 2004), but few of our actions are purely “self-generated” or purely “stimulus-driven.” Instead, the vast majority of what we do involves a complex interplay between our goals and the demands of our environment (Haggard 2008). Think of a typical conversation, where what one says is guided by both the behavior of one’s interlocutor and one’s own communicative intentions.

A useful way in which to unpack the contrast between personal and subpersonal control is in terms of cognitive integration. What it is for an action to be assigned to the agent herself rather than one of her components is for it to be suitably integrated into her cognitive economy. There is some temptation to think that behavioral responses that are functionally isolated from the agent’s cognitive economy should not be assigned to the agent, at least not without reservation. In contrast to “hardwired” stimulus-response behaviors, intentional responses are marked by the degree to which they are integrated with each other. This integration might be most obvious in the context of deliberative, reflective, and willed agency, but it can also

be seen in so-called stimulus-driven behavior. Consider the behavioral responses that are exercised in the context of absentmindedly making oneself a cup of coffee while chatting to a colleague. Although the fine-grained motor control involved in such a task might draw on unconscious perceptual representations (see later discussion), there is good reason to think that the identification of objects requires consciousness.

The foregoing raises the question of what we should say about “primitive agents”—creatures that lack the cognitive architecture required for (high degrees of) cognitive integration and rational control. Primitive agents possess little in the way of deliberative or reasons-responsive capacities, but they are nonetheless able to draw on those cognitive and behavioral capacities that they do possess in a relatively flexible manner. Are such creatures capable of performing intentional actions? Our intuitions are likely to be pulled in two directions at once. On the one hand, the fact that primitive agents can deploy those cognitive and behavioral responses that they do possess in an integrated way might suggest that they are capable of acting intentionally. At the same time, the fact that such creatures are unable to engage in high-level behavioral control—control that involves deliberation and rational reflection—might incline us to deny that they can perform intentional actions. What should we do here?

I think we can steer a path between these two intuitions by allowing that primitive agents can engage in intentional agency to the extent that they are able to draw on their various behavioral capacities in a flexible and appropriate manner. Consider again the lioness who tracks a gazelle as it moves across her perceptual field. Although the lioness cannot deploy the contents of her visual representation in the service of high-level agency (for she lacks the capacities required for high-level agency), she does act intentionally insofar as she can deploy those behavioral capacities that she does possess in a flexible, goal-directed manner.

Similar points apply to creatures who possess the mechanisms required for high-level behavioral control but who cannot currently deploy those mechanisms due to temporary impairment or environmental demands. Consider, for example, the action of accelerating in response to a green traffic light in an automatic and reflexive fashion. Although the person performing such an action might have the capacity for high-level agentive control, processing bottlenecks may prevent them from bringing that capacity to bear on this particular behavior. As such, we might say that there is a sense in which this “prepotent” action is not fully intentional. However, there is nonetheless a genuine sense in which it can (and indeed should) be assigned to the agent rather than one of their subpersonal components, and hence falls within the scope of AMC.

The foregoing means that the employment of AMC will be somewhat problematic in the context of creatures that lack the capacity for flexible, reasons-responsive agency. I have already mentioned nonhuman animals in this respect, but no less important will be members of our own species in which such capacities have not yet appeared or in which they are impaired. Particularly problematic will be the evaluation of actions that occur in the context of global disorders of consciousness, such as the minimally conscious state, in which the individual’s ability to engage in high-level, reasons-responsive agency has been lost or at least severely

compromised. The loss of high-level agentive control could be taken as evidence that consciousness is lost in such disorders, but it could equally be accounted for by supposing that the creature’s conscious states cannot govern its behavior in ways that they would normally be able to because those capacities for high-level behavioral control are themselves disrupted. What we can say is this: we have some evidence of consciousness in such individuals to the extent that they are able to engage in stimulus-driven—that is, routinized and automatic—agency, but this evidence is significantly weaker than it would be were those individuals to manifest flexible and reasons-responsive control.² More generally, we might say that although the presence of fully intentional agency in such cases constitutes good evidence of consciousness, the absence of such agency may not constitute good evidence of the absence of consciousness, for the mechanisms of cognitive integration required for intentional agency may not be in place.

Let me bring this section to a close by addressing one oft-encountered objection to AMC. The objection is that intentional agency cannot qualify as a marker of consciousness because intentional agency presupposes consciousness. More fully, the worry is that since an agent acts intentionally only if they are aware of what they are doing, it follows that we cannot ascribe intentional agency to an agent without first determining whether or not they are aware of what they are doing. But if we need to do that—the objection continues—then we cannot employ intentional agency as a marker of consciousness; rather, we must instead employ consciousness as a marker of intentional agency.

I don’t find the objection compelling. For one thing, I am not convinced that intentional agency does require that one be aware of what one is doing (or trying to do) (see section 6). But the objection doesn’t go through even if we grant that intentional agency requires that one be aware of one’s intentions, for it is entirely possible that the ascription of intentional agency is, in certain contexts at least, prior to the ascription of consciousness. As the studies discussed in the following section illustrate, one can have good behavioral evidence for thinking that an individual is acting intentionally without already having evidence that they are conscious. Of course, that evidence will be defeasible—especially if intentional agency requires intentional awareness on the part of the agent—but defeasible evidence is evidence all the same. AMC asserts that intentional agency in a creature is a marker of consciousness; it does not assert that it is a surefire guarantee of consciousness.

4. AN INTROSPECTIVE ALTERNATIVE?

As already noted, many consciousness scientists reject AMC in favor of the claim that introspection is the (only) marker of consciousness. We might call this position “IMC.” Comparing and contrasting AMC with IMC enables us to further illuminate what it is that AMC commits us to, and also indicates that AMC is in line with many of our intuitive responses.³

Let us begin by distinguishing two versions of IMC. According to the strong version of IMC, introspective report is the only legitimate marker of consciousness in any context. Where a creature is unable to produce introspective reports that it is in a certain type of conscious state, then we have no reason to ascribe conscious states

of that kind to it; and where a creature is unable to produce introspective reports of any kind, then we have no reason to think that it is conscious at all. (Indeed, we might even have reason to think that it is not conscious.) A weaker version of IMC takes introspective report to be the “gold standard” for the ascription of consciousness when dealing with subjects who possess introspective capacities, but it allows that nonintrospective measures might play a role in ascribing consciousness when dealing with creatures in which such capacities are absent.

I will focus here on the strong version of IMC for the following two reasons. First, I suspect that in general advocates of IMC incline toward the strong rather than the weak version of the view. Second, it is not clear that the weak version of IMC is stable. If there are viable nonintrospective markers of consciousness at all, then it is difficult to see why they couldn’t be usefully applied to creatures with introspective capacities. Of course, the advocate of the weak version of IMC could allow that nonintrospective measures can be “applied” to creatures with introspective capacities but insist that whenever introspective and nonintrospective measures point in different directions, the former trumps the latter. But it is difficult to see why we should assume that introspective measures should always trump nonintrospective measures if, as this response concedes, both introspective and nonintrospective measures can be “applied” to the same creature. At any rate, I will contrast AMC with the strong version of IMC and will leave the weak version of the view to one side.

In what contexts will AMC and IMC generate different verdicts with respect to the ascription of consciousness? Consider a series of experiments by Logothetis and colleagues concerning binocular rivalry in rhesus monkeys (Logothetis & Schall 1989; Logothetis et al. 2003; Sheinberg & Logothetis 1997). In this work, Logothetis and colleagues trained rhesus monkeys to press bars in response to particular images, such as horizontal and vertical gratings. Following training, the monkeys were placed in a binocular rivalry paradigm in which a horizontal grating was presented to one eye and a vertical grating was presented to the other eye, and the monkeys were required to respond to the two stimuli by means of bar presses. At the same time, Logothetis and colleagues recorded from the visual system of the monkeys in order to determine where in the visual hierarchy the closest correlations between their conscious percepts (as measured by their responses) and neural activity might be found.

How should we interpret the monkeys’ responses? In a view that has been widely endorsed in the literature, Logothetis and colleagues describe the monkeys as producing introspective reports. This view seems to me to be rather problematic. In fact, it seems highly implausible that the monkeys were producing reports of any kind, let alone introspective reports. Arguably, a motor response counts as a report only if it is made in the light of the belief that one’s audience will take it to manifest the intention to bring about a certain belief in the mind of one’s audience, and it seems doubtful that the monkeys’ button presses were guided by mental states of this kind. In other words, commitment to IMC would be at odds with the assumption that the monkeys were indeed conscious of the stimuli that were presented to them, and would thus undermine the relevance of this work to questions concerning the neural correlates of consciousness.

But we can interpret this research as bearing on the neural correlates of consciousness—as it seems very natural to do—by endorsing AMC. Instead of conceiving of the monkeys’ bar presses as reports, we should regard them as intentional actions made in light of particular types of conscious percepts, for the monkeys have learned that pressing a certain bar in response to (say) a horizontal grating will produce a reward.

A second domain in which our intuitive ascription of consciousness appears to favor AMC over IMC involves severely brain-damaged patients. In drawing the boundary between the vegetative state and the minimally conscious state, physicians lean heavily on appeals to the patient’s agentive capacities (Bernat 2006; Jennett 2002; Giacino et al. 2002). A vegetative state diagnosis requires that the patient show “no response to external stimuli of a kind that would suggest volition or purpose (as opposed to reflexes)” (Royal College of Physicians 2003, §2.2). Patients who do produce signs of volition are taken to have left the vegetative state and entered the minimally conscious state. Volition has even been used to make the case for consciousness in certain apparently vegetative state patients. One study involved a 23-year-old woman who had been in a vegetative state for 5 months (Boly et al. 2007; Owen et al. 2006). On some trials the patient was played a prerecorded instruction to imagine playing tennis; on other trials she was instructed to imagine visiting the rooms of her home. Astonishingly, the patient exhibited sustained, domain-specific neural activity in the two conditions that was indistinguishable from that seen in healthy volunteers.

It is widely—although not universally—supposed that this activity provided evidence of consciousness in this patient. AMC is consistent with that response, for it is reasonable to regard the neural activity as evidence of sustained, goal-directed mental imagery. In the same way that limb movement in response to command is generally taken as a manifestation of consciousness in minimally conscious state patients, so too we might regard evidence of sustained mental imagery as evidence of consciousness in vegetative state patients (Shea & Bayne 2010). By contrast, advocates of IMC will deny that we have any evidence of consciousness in this patient, as Naccache (2006) does. Indeed, the advocate of IMC may even be committed to denying that even so-called minimally conscious state patients are conscious, a position that is at odds with current clinical opinion.

A third domain in which AMC appears to deliver a more intuitively plausible verdict than IMC concerns the interpretation of the commissurotomy (or “split-brain”) syndrome. The commissurotomy procedure involves severing some portion of the corpus callosum in order to prevent epileptic seizures spreading from one hemisphere to the other. Although split-brain patients are largely unimpaired in everyday life, under carefully controlled laboratory conditions they can be led to exhibit striking behavioral dissociations (Gazzaniga 2005; Zaidel et al. 2003). In a typical split-brain experiment, distinct visual stimuli are presented to the patient in separate halves of the visual field. For example, the word “key-ring” might be projected such that “key” is restricted to the patient’s left visual field and “ring” is restricted to the patient’s right visual field. The contralateral structure of the visual system ensures that stimuli presented in the left visual field are processed only in the right hemisphere and vice versa. The typical finding is that patients say that they see only

the word “ring,” yet with their left hand they will select a picture of a key and ignore pictures of both a ring and a key-ring.

As a number of theorists have pointed out (e.g., Milner 1992), strict adherence to IMC would require us to conclude that such patients are not conscious of the word “key.”⁴ Although some theorists have found this view attractive (e.g., MacKay 1966), most regard the “minor” nonspeaking hemisphere as capable of supporting consciousness (e.g., LeDoux et al. 1977; Marks 1981). In doing so, theorists seem to have implicitly embraced some version of AMC. They take the right hemisphere of split-brain patients to support consciousness on the grounds that it is able to support various forms of intentional, goal-directed agency.

We have examined three domains in which the contrast between IMC and AMC has an important bearing on the ascription of consciousness. Arguably, in each case AMC does a better job of capturing our pretheoretical intuitions than IMC does. Of course, the advocate of IMC need not be impressed by this result. Such a theorist might grant that even if certain experimental paradigms employed in the science of consciousness cannot be reconstructed according to the dictates of IMC, so much the worse for those paradigms. Rather than bring our views concerning the markers of consciousness into line with current experimental practice or pretheoretical intuition, they might say, we should revise both practice and intuition in light of theory (see, e.g., Papineau 2002).

Although it is certainly true that neither pretheoretical intuition nor experimental practice is sacrosanct, they do confer a certain de facto legitimacy on AMC. The fact that IMC is at odds with them suggests that it is a somewhat revisionary proposal, one that stands in need of independent motivation. How might IMC be motivated?

Although a thorough evaluation of the case for IMC goes well beyond the scope of this chapter, I do want to engage with one argument that has been given for it—what we might call the argument from epistemic conservatism. This argument may not be the strongest argument for IMC, but I suspect that it is one of the most influential. The argument begins with the observation that there are two kinds of error that one can make in studying consciousness: false negatives and false positives. False negatives occur when a marker misrepresents a conscious state or creature as unconscious, while false positives occur when a marker misrepresents an unconscious state (or creature) as conscious. Now, let us say that an approach to the ascription of consciousness is conservative if it places more importance on avoiding false positives than on avoiding false negatives, and that it is liberal if it places more importance on avoiding false negatives than false positives. The argument from epistemic conservatism holds that because our approach toward the ascription of consciousness ought to be maximally conservative, we should adopt IMC, for only IMC is guaranteed to prevent false positives.

I don’t find the argument compelling. For one thing, it is by no means clear that a conservative approach to the ascription of consciousness would lead inexorably to IMC. The literature on inattentional blindness and change blindness—to take just two of many examples that could be cited here—suggests that our introspective beliefs concerning our current conscious states are often false: we ascribe to ourselves conscious states that we are not in, and we overlook conscious states that we are in (Dennett 1991; Haybron 2007; Schwitzgebel 2008). More fundamentally, there is no reason to assume that our choice between rival markers of consciousness ought to be conservative (let alone ultraconservative). We should seek to avoid false positives in studying consciousness, but we should also seek to avoid false negatives. As far as I can see, there is no reason to regard one of these errors as more serious than the other, at least so far as the scientific study of consciousness is concerned.5

5. THE CHALLENGE FROM COGNITIVE NEUROSCIENCE

I suspect that most of those who reject AMC do so not because they regard it as pretheoretically implausible but because they take it to have been undermined by findings in cognitive science. In particular, many theorists regard AMC to be at odds with what we have learned from the study of blindsight and visual agnosia. Blindsight is a condition caused by damage to primary visual cortex.6 Although patients deny that they are aware of stimuli that are presented within their scotoma (“blindfield”), they are nonetheless able to discriminate certain kinds of blindfield stimuli under forced-choice conditions. In some cases, the acuity of the patient’s blindfield can even exceed that of the sighted portions of the visual field for certain kinds of stimuli (Weiskrantz et al. 1974; Trevarthen et al. 2007).7 Visual form agnosia has been described as a “blindsight for orientation.” As with blindsight proper, patients retain striking behavioral capacities despite the apparent loss of certain aspects of conscious awareness. Much of our knowledge of such capacities comes from the study of D.F., a woman who developed visual agnosia at the age of 35 due to carbon monoxide poisoning (Goodale & Milner 2004; Milner & Goodale 2006). Owing to ventral stream damage, D.F. suffered severe impairments in her ability to see the shape and location of objects. However, the dorsal stream of her visual system was left largely unaffected, and she retained the ability to execute fine-grained online motor control in response to visual cues. In one particularly striking study, D.F. was able to “post” a letter through a slot despite being unable to report the slot’s orientation or say whether it matched that of another slot (Carey et al. 1996; McIntosh et al. 2004).

These findings are widely taken to put pressure on AMC. Frith and colleagues (1999) preface their rejection of AMC with the claim that “blindsight shows that goal-directed behaviour is not a reliable indicator of consciousness” (107), and Weiskrantz (1997) appeals to blindsight in defense of his claim that “we need an off-line commentary to know whether or not a behavioural capacity is accompanied by awareness” (84). In a somewhat similar vein, Milner and Goodale’s practice of describing the contrast between ventral stream processing and dorsal stream processing as a contrast between “vision-for-perception” and “vision-for-action” has encouraged the view that visual experience is not in the business of guiding agency. In the words of Koch and Crick (2001), it is commonly thought that visually guided action is subserved only by “zombie systems.”

Although this body of research certainly ought to inform our conception of just how agency might support ascriptions of consciousness, it would be premature to conclude that these conditions undermine AMC. We can think of the objection from cognitive neuroscience as consisting of four steps. First, the introspective reports of the relevant patients are correct: the representations that patients have of objects in their blindfield are unconscious. Second, these unconscious representations support various forms of intentional agency, such as pointing and guessing. The third step of the argument puts these two claims together to argue that consciousness of a stimulus (or its relevant properties) is not required for intentional agency that is directed toward that stimulus (i.e., intentional agency that crucially involves the relevant properties). But—and this is the fourth step—if consciousness is not required for intentional agency, then intentional agency cannot ground ascriptions of consciousness. What should we make of this argument?

The first step appears to be warranted. Although some theorists have cast doubt on the introspective reports of blindsight patients, suggesting that the damage they have sustained might have impaired their introspective capacities rather than their visual experience as such (see, e.g., Gertler 2001), this position is undermined by evidence of “blindsight-like” phenomena (Kolb & Braun 1995; Lau & Passingham 2006) and unconscious dorsal stream visual control (Milner & Goodale 2008) in normal individuals.8 Although we could suppose that normal subjects also lack introspective access to these visual representations, it seems more parsimonious to assume that the introspective capacities of normal subjects are intact, and that their dorsal representations are—as their introspective reports indicate—unconscious.9

More problematic is the fourth step of the argument. There is no incoherence in holding that although the intentional actions of these patients are guided by unconscious representations, these contexts are exceptions to a general rule that links agency to consciousness. AMC, so the thought goes, might lead us astray in dealing with patients with blindsight and visual agnosia, but needn’t be unreliable in general. In order for X to qualify as a marker of Y, it need not be the case that every instance of X is accompanied by an instance of Y. Just as smoke is a marker of fire even though not all instances of smoke are accompanied by fire, intentional agency might be a marker of consciousness even if it is possible for intentional agency to be guided by unconscious representations.

Although this line of response is perfectly acceptable as far as it goes, it is not clear that we need be as concessive to the objection as this response is. The reason for this is that it is not at all clear that blindsight-supported actions are fully intentional. In other words, it is not clear that the second step of the argument is warranted. In fact, there are three respects in which the actions seen in these conditions fall short of full-blooded intentionality.

First, blindsight-supported agency is often curiously “response-specific.” An early study by Weiskrantz and colleagues (1974) found that D.B. was able to localize blindfield targets much better when he was required to point to them than when he was required to make an eye movement toward them. In another study, Zihl and von Cramon (1980) required three blindsight patients to report when they saw a light that had been flashed into their blindfield. On some trials the patients were instructed to produce eye-blink reports, on other trials they were required to produce key-press reports, and on still other trials they were asked to produce verbal reports (saying “yes”). Although the patients’ blinking and key-pressing responses were significantly above chance (after practice), their verbal responses were not. Another series of studies investigating the ability of two blindsight patients to perceive size and orientation found that their performance was above chance on goal-directed actions (grasping and posting), but below chance when they were required to either perceptually match or verbally report the target’s size (Perenin & Rossetti 1996; Rossetti 1998). The fact that their responses to the stimuli were restricted to particular behavioral modalities suggests that they were not fully intentional. As we noted in section 3, the more flexible a behavioral response is, the more reason there is to regard it as fully intentional.

A second respect in which blindfield-supported actions are less than fully intentional is that they must typically be prompted. Blindfield content is not generally available for spontaneous agentive control. Patients must initially be told that their guesses are reliable before they will employ the contents of their blindfield representations in agentive control. The fact that blindfield content is not spontaneously employed by the subject suggests that it is not accessible to her as such—that is, at the personal level.

In response, it might be pointed out that some blindsight subjects do use the contents of their blindfield in the service of spontaneous agency. Consider Nicholas Humphrey’s blindsight monkey—Helen:

Helen, several years after the removal of visual cortex, developed a virtually normal capacity for ambient spatial vision, such that she could move around under visual guidance just like any other monkey. This was certainly unprompted, and in that respect “super” blindsight. (Humphrey 1995: 257; see also Humphrey 1974)

What should we make of Helen’s “superblindsight”? One possibility is that it involves the spontaneous use of purely unconscious visual representations. Another possibility is that Helen’s blindfield representations became conscious as she acquired the ability to spontaneously deploy their contents in the service of behavioral control. I am not sure that we are yet in a position to adjudicate between these two possibilities. What we can say is that it is far from obvious that Helen’s behavior provides us with an example of spontaneous behavioral guidance in the absence of visual consciousness.

Third, and perhaps most important, whereas the contents of perceptual consciousness can be used for both the selection of goals and the execution of intentions, blindfield content appears capable of sustaining only the latter of these two operations. The dorsal stream is not a homunculus—a “mini-me” that can both select and initiate actions under its own steam. Milner and Goodale (2006: 232) liken it to the robotic component of a tele-assistance system: the ventral stream selects the goal object from the visual array, and the dorsal stream carries out the computations required for the assigned action (see also Clark 2009).

In response, it might be argued that dorsal stream representations can drive goal selection. D.F., for example, appears to engage with the world in a fluid and dynamic way, using her visual experience in order to select target objects on which to act. However, such behavior cannot necessarily be attributed to dorsal stream representations acting on their own, for D.F.’s agnosia is quite selective. D.F. is missing only quite specific properties from her visual experience, and those elements that she retains enable her to engage in intentional agency across a wide range of everyday contexts (Milner 2008: 181). Restricting conscious access to these properties—as is done in experimental contexts—reveals how impoverished D.F.’s blindsight-based behavioral capacities really are. As Dretske (2006) puts it, unconscious sensory information may be able to “control and tweak” behaviors that have already been selected, but it appears as though conscious information is required for behavioral planning and the selection of goals.10

Let me briefly summarize the claims of this section. The objection from cognitive neuroscience takes the form of a putative counterexample to the claim that consciousness is required for intentional agency. In response, I have made two central points. First, AMC requires only a robust correlation between consciousness and intentional agency, and hence it could be justified even if there are conditions in which certain types of intentional agency are possible in the absence of consciousness. But—and this is the second point—it is far from clear that cognitive neuroscience does provide us with counterexamples to the claim that consciousness is required for intentional agency, for truly blindsight-based actions may not qualify as fully intentional.

6. THE CHALLENGE FROM SOCIAL PSYCHOLOGY

A second body of research that is widely taken to undermine AMC derives from social psychology. An impressive body of work in this field suggests that goals can be nonconsciously acquired, and that once acquired they can function without the subject being aware of their influence (e.g., Bargh 2005; Bargh & Chartrand 1999; Bargh & Morsella 2009; Dijksterhuis & Bargh 2001). Let us begin by considering some of the data that are emerging from this field, before turning to questions of interpretation.

The following two “scrambled sentence” studies by Bargh and collaborators are representative of this body of work. In the first study, half of the participants were given sentences containing words that primed for stereotypes of old age (“wrinkle,” “grey,” “wise”), while the other half were given sentences containing only age-neutral words (Bargh et al. 1996). The participants left the unscrambling task believing that they had completed the experiment. However, as they left, the time that it took for them to walk from the experimental room to the end of the corridor was measured. As Bargh and colleagues predicted, those who had been given sentences containing old-age primes took significantly longer to reach the end of the corridor than did control subjects, who had not been primed in this way.

In the second study, Bargh et al. (2001) assigned subjects to one of two groups: a high-performance group and a neutral group. The members of the former group were instructed to search for words that primed for performance, such as “win,” “compete,” “strive,” “attain,” and “succeed,” while the members of the neutral group were given words that carried no such connotations, such as “ranch,” “carpet,” “river,” “shampoo,” and “robin.” Subjects who had been primed with high-performance words did significantly better in a subsequent word-finding task than did controls. Questioning within a debriefing session indicated that participants had not been aware of the relationship between the priming task and the subsequent experimental situation. In a follow-up experiment, subjects were told to cease working on the word-finding task after two minutes and were then surreptitiously observed to determine whether or not they did as instructed. Surprisingly, 57 percent of subjects in the high-performance group continued to work on the task following the instruction to stop, as opposed to only 22 percent of those in the neutral group. In yet another follow-up study, subjects were interrupted after one minute and were then made to wait for five minutes before being given the option of continuing with the word-finding task or instead participating in a (more enjoyable) cartoon-rating task. Subjects who had been primed with high-performance words were much more likely (66 percent) than controls (32 percent) to persevere with the word-finding task.11

Bargh (2005) draws the following moral from these (and other) studies:

Conscious acts of will are not necessary determinants of social judgment and behavior; neither are conscious processes necessary for the selection of complex goals to pursue, or for the guidance of those goals to completion. Goals and motivations can be triggered by the environment, without conscious choice or intention, then operate with and run to completion entirely nonconsciously, guiding complex behaviour in interaction with a changing and unpredictable environment, and producing outcomes identical to those that occur when the person is aware of having that goal. (52)

Do these findings—as Bargh’s comments might be taken to imply—undermine AMC? I don’t think so. For one thing, we need to be careful about just how we interpret these studies. Although I see no a priori reason to rule out the possibility of unconscious goal selection, I am not convinced that these studies provide evidence of such a phenomenon. The primes given in these studies influenced the subjects’ behavior by modulating how they acted (e.g., walking more slowly, persevering with a task), but they did not provide the subjects with a goal. Subjects in the first study were not attempting to walk slowly, and high-performance and low-performance subjects in the second study differed only in how long they stuck with the task. Subjects in both sets of studies were trying to complete the assigned word-finding task—and presumably they were fully aware that that was what they were trying to do. We can grant that subjects were unaware of what factors influenced their degree of motivation in carrying out the task, and perhaps even that they were unaware of their degree of motivation itself (although we have no direct evidence on this score, since subjects weren’t asked about how motivated they were), but being unaware of these aspects of one’s agency is quite a different thing from being unaware of what it is that one is trying to do.

Having said that, we should grant that agents are often unaware of their intentions, even when those intentions are “proximal”—that is, are currently guiding behavior (Mele 2009). Suppose that one is crossing the road while talking with a friend. One might be so engrossed by the conversation that one is unaware of one’s intention to cross the road. Moreover, it is likely that many creatures are capable of goal-directed agency without having the capacity for agentive self-awareness. But AMC does not require that intentional agency be grounded in an agent’s awareness of its intentions and goals. All it requires is that conscious states of some kind or another are implicated in the exercise of intentional agency. Even when agents act on the basis of goals of which they are unaware, they will generally be aware of the perceptual features of their environment that govern the selection and implementation of those goals. It is one thing to act on the basis of an unconscious intention, but quite another to act on the basis of an unconscious representation of one’s perceptual environment or body. In many cases, the most likely form of consciousness to be implicated in intentional agency will be perceptual rather than agentive.

This point can be further illustrated by considering pathologies of agency, such as the anarchic hand syndrome (Della Sala & Marchetti 2005; Marchetti & Della Sala 1998). Patients with this condition have a hand that engages in apparently intentional behavior of its own accord. The patient will complain that she has no control over the hand’s behavior and will describe it as having a “mind of its own.” Although patients appear not to have any sense of agency with respect to “their” actions, these actions are presumably triggered and guided by the patient’s conscious perceptual experiences. So, although anarchic hand actions might provide an unreliable guide to the presence of conscious intention, there is no reason to think that they provide an unreliable guide to the presence of conscious perception.

There is a final point to note, one that is relevant to the assessment of both the case against AMC based on social psychology and that based on cognitive neuroscience. Although there are question marks about the kinds of conscious states that the subjects studied in these experiments might be in, there is no doubt whatsoever that the subjects themselves are conscious. As such, it is unclear what bearing such studies might have on the question of whether completely unconscious creatures are capable of intentional agency. It is not implausible to suppose that the kinds of behavioral capacities that unconscious mental states are able to drive in conscious creatures differ in fundamental respects from those that they are able to drive in unconscious creatures. Perhaps mentality is something like an iceberg, not only in the sense that only a small portion of it is conscious but also in the sense that the existence of its submerged (or unconscious) parts demands the existence of its unsubmerged (or conscious) parts.

Are there any pathologies of consciousness in which intentional agency occurs in the complete absence of creature consciousness? A number of authors have argued that sleepwalking, automatisms, and epileptic absence seizures—to name just three of the many conditions that might be mentioned here—provide examples of states in which completely unconscious individuals engage in intentional actions, albeit ones that are highly automatized (see, e.g., Koch 2004; Lau & Passingham 2007). The problem with such claims is that it is far from clear that such individuals are completely unconscious. They might not be conscious of what they are doing or of why they are doing what they are doing, but it is—it seems to me—very much an open question whether they might nonetheless be conscious of objects in their immediate perceptual environment (Bayne 2011). Such individuals certainly act in ways that are under environmental control, and as Lloyd Morgan once remarked, control is the “primary aim, object and purpose of consciousness” (1894: 182).


7. CONCLUSION

Current orthodoxy within the science of consciousness holds that the only legitimate basis for ascribing consciousness is introspective report, and the practice of employing agency as a marker of consciousness is looked upon with some suspicion by many theorists. In this chapter I have argued that such suspicion is unjustified. In the first half of the chapter I clarified a number of ways in which agency might be adopted as a marker of consciousness, and in the second half I examined and responded to the claims that findings in cognitive neuroscience and social psychology undermine the appeal of AMC. I argued that although these findings provide the advocate of AMC with plenty of food for thought, neither domain demonstrates that intentional agency is an unreliable guide to the presence of consciousness.

What I have not done in this chapter is provide a direct argument for AMC. My primary concern has been to defend AMC against a variety of objections, and I have left the motivation for AMC at a relatively intuitive and pretheoretical level. The task of developing a positive argument for AMC is a significant one and not something that I can take on here. All I have attempted to do in this chapter is remove some of the undergrowth that has come to obscure the claim that agency might function as a marker of consciousness: a full-scale defense of that claim must wait for another occasion.12

NOTES

1. My analysis builds on a number of recent discussions of the relationship between agency and consciousness. I am particularly indebted to Clark (2001, 2009), Dretske (2006), Flanagan (1991), and van Gulick (1994).

2. See chapter 6 of Bayne (2010) for discussion of the ascription of consciousness in these conditions.

3. This section draws on chapter 5 of Bayne (2010).

4. Whether or not this interpretation of the split-brain data is at odds with the weak version of IMC or merely the strong version of IMC depends on the delicate question of whether we think of the split-brain patient as two cognitive subjects (only one of whom possesses introspective capacities) or as a single subject (with introspective capacities).

5. It is somewhat ironic that many of those who are most strident in their defense of IMC are also among the most critical of the claim that introspection is reliable. Although not strictly inconsistent with each other, these two views are not natural bedfellows.

6. See Pöppel et al. (1973), Weiskrantz et al. (1974), Perenin & Jeannerod (1975), and Weiskrantz (2009).

7. Note that there are different forms of blindsight, and not all blindsight patients demonstrate the same range of blindsight-related potential for action. My discussion here concerns what Danckert and Rossetti (2005) call “action-blindsight.”

8. For a sample of the vast array of studies in this vein, see Aglioti et al. (1995); Brenner & Smeets (1997); Bridgeman et al. (1981); Castiello et al. (1991); Fourneret & Jeannerod (1998); McIntosh et al. (2004); Milner & Goodale (2008); Slachevsky et al. (2001); and Schenk & McIntosh (2010).


9. Although certain blindsight patients claim to have some form of conscious awareness of stimuli in their blindfield, they tend to describe such experiences as qualitatively distinct from visual experiences of stimuli (Magnussen & Mathiesen 1989; Morland et al. 1999).

10. In part this may be because the dorsal stream lacks access to information about the categories to which visually perceived objects belong. For example, D.F. will fail to pick up a screwdriver from the appropriate end because she will not recognize it as a screwdriver (Dijkerman et al. 2009). However, even when the dorsal stream is able to represent the appropriate properties of objects, it seems unable to draw on that content in order to initiate action.

11. For similar studies see Bargh et al. (1996); Bargh & Ferguson (2000); Bargh & Gollwitzer (1994); and Carver et al. (1983).

12. For helpful comments on earlier versions of this chapter, I am indebted to Bart Kamphorst, Julian Kiverstein, Hakwan Lau, Eddy Nahmias, and Tillmann Vierkant.

REFERENCES

Aglioti , S., De Souza, J. F. X. , & Goodale , M. A . 1995 . Size-contrast illusions deceive the eye but not the hand . Current Biology , 5 : 679–685 .

Bargh , J. A . 2005 . Bypassing the will: Toward demystifying the nonconscious control of social behavior. In R. Hassin , J. Uleman, & J. Bargh (eds.), Th e New Unconscious , 37–58. New York : Oxford University Press .

Bargh , J. A. , & Chartrand , T. 1999 . Th e unbearable automaticity of being . American Psychologist , 54 : 462–479 .

Bargh , J. A., Chen, A. , & Burrows , L . 1996 . Automaticity of social behavior: Direct eff ects of trait construct and stereotype activation on action . Journal of Personality and Social Psychology , 71 : 230–244 .

Bargh , J. , and Ferguson , M . 2000 . Beyond behaviorism: On the automaticity of higher mental processes . Psychological Bulletin , 126 : 925–945 .

Bargh , J., & Gollwitzer, P. M. 1994.. Environmental control of goal-directed action: Automatic and strategic contingencies between situations and behavior . Nebraska Symposium on Motivation , 41 : 71–124.

Bargh , J., Gollwitzer, P. M., Lee-Chai, A. Y., Barndollar, K. , & Tr ö tschel , R . 2001 . Th e automated will: Nonconscious activation and pursuit of behavioural goals . Journal of Personality and Social Psychology, 81 : 1014–1027 .

Bargh , J. , & Morsella , E. 2009 . Unconscious behavioural guidance systems. In C. Agnew , D. Carlston, W. Graziano, & J. Kelly (eds.), Th en a Miracle Occurs: Focusing on Behaviour in Social Psychological Th eory and Research , 89–118. New York : Oxford University Press .

Bayne , T . 2010 . Th e Unity of Consciousness . Oxford : Oxford University Press . Bayne , T . 2011 . Th e presence of consciousness in “absence” seizures . Behavioural Neurology ,

24 (1): 47–53 . Bernat , J. L . 2006 . Chronic disorders of consciousness . Lancet , 367 : 1181–1192 . Block , N . 2002 . Th e harder problem of consciousness . Journal of Philosophy , 99 :

391–425 . Boly , M., Coleman, M. R., Davis, M. H., Hampshire, A., Bor, D., Moonen, G., Maquet, P.

A., Pickard, J. D., Laureys, S. , & Owen , A. M . 2007 . When thoughts become action: An fMRI paradigm to study volitional brain activity in noncommunicative brain injured patients . NeuroImage, 36 : 979–992 .

09_AndyClark_09.indd 17609_AndyClark_09.indd 176 10/11/2012 7:36:36 PM10/11/2012 7:36:36 PM

Agency as a Marker of Consciousness 177

Brenner , E. , & Smeets , J. B. J . 1997 . Fast responses of the human hand to changes in target position . Journal of Motion Behaviour , 29 : 297–310 .

Bridgeman , B., Kirsch, M. , & Sperling , G . 1981 . Segregation of cognitive and motor aspects of visual function using induced motion . Perception and Psychophysics , 29 : 336–342 .

Carey , D. P., Harvey, M., & Milner, A. D . 1996 . Visuomotor sensitivity for shape and orien-tation in a patient with visual form agnosia . Neuropsychologia , 34 : 329–338 .

Carver , C. S., Ganellen, R. J., Froming, W. J. , & Chambers , W . 1983 . Modelling: An ana-lysis in terms of category accessibility . Journal of Experimental Social Psychology , 19 : 403–421 .

Castiello , U., Paulignan, Y. , & Jeannerod , M . 1991 . Temporal dissociation of motor responses and subjective awareness . Brain , 114 : 2639–2655 .

Clark , A . 2001 . Visual experience and motor action: Are the bonds too tight? Philosophical Review , 110 : 495–519 .

Clark , A . 2009 . Perception, action and experience: Unraveling the golden braid. Neuropsychologia , 47 : 1460–1468 .

Danckert , J. , & Rossett i , Y. 2005 . Blindsight in action: What does blindsight tell us about the control of visually guided actions? Neuroscience and Biobehavioural Reviews , 29 : 1035–1046 .

Della Sala , S. , & Marchett i , C . 2005 . Th e anarchic hand syndrome. In H.-J. Freund , M. Jeannerod, M. Hallett , & R. Leiguarda (eds.), Higher-Order Motor Disorders: From Neuroanatomy and Neurobiology to Clinical Neurology , 293–301. New York : Oxford University Press .

Dennett , D . 1991 . Consciousness Explained . Boston : Litt le, Brown . de’Sperati , C. , & Baud-Bovy , G . 2008 . Blind saccades: An asynchrony between seeing and

looking. Journal of Neuroscience , 28 : 4317–4321 . Dijkerman , H. C., McIntosh, R. D., Schindler, I., Nijboer, T. C. W. , & Milner , A. D . 2009 .

Choosing between alternative wrist postures: Action planning needs perception. Neuropsychologia, 47 : 1476–1482 .

Dijksterhuis , A. , & Bargh , J. A . 2001 . Th e perception-behaviour expressway: Automatic eff ects of social perception on social behaviour. In M. P. Zanna (ed.), Advances in Experimental Social Psychology , 33:1–40. San Diego , Academic Press .

Dretske , F . 2006 . Perception without awareness. In T. S. Gendler & J. Hawthorne (eds.), Perceptual Experience, 147–180. Oxford : Oxford University Press .

Fourneret , P. , & Jeannerod , M . 1998 . Limited conscious monitoring of motor perfor-mance in normal subjects. Neuropsychologia , 36 : 1133–1140 .

Frith , C., Perry, R. , & Lumer , E. 1999 . Th e neural correlates of conscious experience: An experimental framework. Trends in the Cognitive Sciences 3 : 105–114 .

Gazzaniga , M. S. 2005 . Forty-fi ve years of split-brain research and still going strong . Nature Reviews Neuroscience, 6 : 653–659 .

Gertler , B . 2001 . Introspecting phenomenal states , Philosophy and Phenomenological Research , 63 : 305–328 .

Giacino, J. T., Ashwal, S., Childs, N., Cranford, R., Jennett, B., Katz, D. I., Kelly, J. P., Rosenberg, J. H., Whyte, J., Zafonte, R. D., & Zasler, N. D. 2002. The minimally conscious state: Definition and diagnostic criteria. Neurology, 58: 349–353.

Goodale, M., & Milner, A. D. 2004. Sight Unseen: An Exploration of Conscious and Unconscious Vision. Oxford: Oxford University Press.

Haggard, P. 2008. Human volition: Towards a neuroscience of will. Nature Reviews Neuroscience, 9: 934–946.

Haybron, D. 2007. Do we know how happy we are? On some limits of affective introspection and recall. Noûs, 41: 394–428.

Humphrey, N. K. 1974. Vision in a monkey without striate cortex: A case study. Perception, 3: 241–255.

Humphrey, N. K. 1995. Blocking out the distinction between sensation and perception: Superblindsight and the case of Helen. Behavioral and Brain Sciences, 18: 257–258.

Jahanshahi, M., & Frith, C. 1998. Willed action and its impairments. Cognitive Neuropsychology, 15: 483–533.

Jennett, B. 2002. The Vegetative State. Cambridge: Cambridge University Press.

Kirchner, H., & Thorpe, S. J. 2006. Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46: 1762–1776.

Koch, C. 2004. The Quest for Consciousness. Englewood, CO: Roberts.

Koch, C., & Crick, F. 2001. On the zombie within. Nature, 411: 893.

Kolb, F. C., & Braun, J. 1995. Blindsight in normal observers. Nature, 377: 336–338.

Lau, H. C., & Passingham, R. E. 2006. Relative blindsight in normal observers and the neural correlate of visual consciousness. Proceedings of the National Academy of Sciences, 103: 18763–18768.

Lau, H. C., & Passingham, R. E. 2007. Unconscious activation of the cognitive control system in the human prefrontal cortex. Journal of Neuroscience, 27: 5805–5811.

LeDoux, J. E., Wilson, D. H., & Gazzaniga, M. S. 1977. A divided mind: Observations on the conscious properties of the separated hemispheres. Annals of Neurology, 2: 417–421.

Lengfelder, A., & Gollwitzer, P. M. 2001. Reflective and reflexive action control in patients with frontal brain lesions. Neuropsychology, 15: 80–100.

Logothetis, N. K., Leopold, D. A., & Sheinberg, D. L. 2003. Neural mechanisms of perceptual organization. In N. Osaka (ed.), Neural Basis of Consciousness: Advances in Consciousness Research, 49:87–103. Amsterdam: John Benjamins.

Logothetis, N., & Schall, J. 1989. Neuronal correlates of subjective visual perception. Science, 245: 761–763.

MacKay, D. M. 1966. Cerebral organization and the conscious control of action. In J. C. Eccles (ed.), Brain and Conscious Experience, 422–445. Heidelberg: Springer-Verlag.

Magnussen, S., & Mathiesen, T. 1989. Detection of moving and stationary gratings in the absence of striate cortex. Neuropsychologia, 27: 725–728.

Marchetti, C., & Della Sala, S. 1998. Disentangling the alien and anarchic hand. Cognitive Neuropsychiatry, 3: 191–207.

Marks, C. 1981. Commissurotomy, Consciousness and Unity of Mind. Cambridge, MA: MIT Press.

McIntosh, R. D., McClements, K. I., Schindler, I., Cassidy, T. P., Birchall, D., & Milner, A. D. 2004. Avoidance of obstacles in the absence of visual awareness. Proceedings of the Royal Society of London Series B: Biological Sciences, 271: 15–20.

Mele, A. 2009. Effective Intentions: The Power of Conscious Will. Oxford: Oxford University Press.

Milner, A. D. 1992. Disorders of perceptual awareness: A commentary. In A. D. Milner and M. D. Rugg (eds.), The Neuropsychology of Consciousness, 139–158. London: Academic Press.

Milner, A. D. 2008. Conscious and unconscious visual processing in the human brain. In L. Weiskrantz and M. Davies (eds.), Frontiers of Consciousness, 169–214. Oxford: Oxford University Press.

Milner, A. D., & Goodale, M. A. 2006. The Visual Brain in Action. 2nd ed. Oxford: Oxford University Press.

Milner, A. D., & Goodale, M. A. 2008. Two visual systems reviewed. Neuropsychologia, 46: 774–785.

Morgan, C. L. 1894. An Introduction to Comparative Psychology. London: W. Scott.

Morland, A. B., Jones, S. R., Finlay, A. L., Deyzac, E., Le, S., & Kemp, S. 1999. Visual perception of motion, luminance and colour in a human hemianope. Brain, 122: 1183–1196.

Mushiake, H., Inase, M., & Tanji, J. 1991. Neuronal activity in the primate premotor, supplementary, and precentral motor cortex during visually guided and internally determined sequential movements. Journal of Neurophysiology, 66: 705–718.

Naccache, L. 2006. Is she conscious? Science, 313: 1395–1396.

Nagel, T. 1974. What is it like to be a bat? Philosophical Review, 83: 435–450.

Obhi, S., & Haggard, P. 2004. Internally generated and externally triggered actions are physically distinct and independently controlled. Experimental Brain Research, 156: 518–523.

Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. 2006. Detecting awareness in the vegetative state. Science, 313: 1402.

Papineau, D. 2002. Thinking about Consciousness. Oxford: Oxford University Press.

Perenin, M.-T., & Jeannerod, M. 1975. Residual vision in cortically blind hemifields. Neuropsychologia, 13: 1–7.

Perenin, M.-T., & Rossetti, Y. 1996. Grasping without form discrimination in a hemianopic field. Neuroreport, 7: 793–797.

Pöppel, E., Held, R., & Frost, D. 1973. Residual visual function after brain wounds involving the central visual pathways in man. Nature, 243: 295–296.

Prochazka, A., Clarac, F., Loeb, G. E., Rothwell, J. C., & Wolpaw, J. R. 2000. What do reflex and voluntary mean? Modern views on an ancient debate. Experimental Brain Research, 130: 417–432.

Rossetti, Y. 1998. Implicit short-lived motor representations of space in brain-damaged and healthy subjects. Consciousness and Cognition, 7: 520–558.

Royal College of Physicians. 2003. The Vegetative State: Guidance on Diagnosis and Management. London: Royal College of Physicians.

Schenk, T., & McIntosh, R. D. 2010. Do we have independent visual streams for perception and action? Cognitive Neuroscience, 1: 52–78.

Schwitzgebel, E. 2008. The unreliability of naïve introspection. Philosophical Review, 117: 245–273.

Shea, N., & Bayne, T. 2010. The vegetative state and the science of consciousness. British Journal for the Philosophy of Science, 61: 459–484.

Sheinberg, D. L., & Logothetis, N. K. 1997. The role of temporal cortical areas in perceptual organization. Proceedings of the National Academy of Sciences, 94: 3408–3413.

Slachevsky, A., Pillon, B., Fourneret, P., Pradat-Diehl, P., Jeannerod, M., & Dubois, B. 2001. Preserved adjustment but impaired awareness in a sensory-motor conflict following prefrontal lesions. Journal of Cognitive Neuroscience, 13: 332–340.

Trevethan, C. T., Sahraie, A., & Weiskrantz, L. 2007. Can blindsight be superior to “sighted-sight”? Cognition, 103: 491–501.

Weiskrantz, L. 1997. Consciousness Lost and Found. Oxford: Oxford University Press.

Weiskrantz, L. 2009. Blindsight. 2nd ed. Oxford: Oxford University Press.

Weiskrantz, L., Warrington, E. K., Saunders, M. D., & Marshall, J. 1974. Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97: 709–728.

Zaidel, E., Iacoboni, M., Zaidel, D. W., & Bogen, J. E. 2003. The callosal syndromes. In K. M. Heilman and E. Valenstein (eds.), Clinical Neuropsychology, 347–403. Oxford: Oxford University Press.

Zihl, J., & von Cramon, D. 1980. Registration of light stimuli in the cortically blind hemifield and its effect on localization. Behavioural Brain Research, 1: 287–298.
