Partial Unity of Consciousness: A Preliminary Defense

Elizabeth Schechter

1 Introduction

Under the experimental conditions characteristic of the “split-brain” experiment, a split-brain subject’s conscious experience appears oddly dissociated, as if each hemisphere is associated with its own stream of consciousness. On the whole, however, split-brain subjects appear no different from “normal” subjects, who, we assume, have only a single stream of consciousness. The tension between these impressions gives rise to a debate about the structure of consciousness: the split-brain consciousness debate.1

That debate has for the most part been pitched between two possibilities: that a split-brain subject has a single stream of consciousness, associated with the brain (or with the subject) as a whole, or that she has two streams of consciousness, one associated with each hemisphere. Considerably less attention has been paid to the possibility that a split-brain subject has a single but only partially unified stream of consciousness, a possibility that has been articulated most clearly by Lockwood (1989) (see also Trevarthen, 1974; Moor, 1982).

The partial unity model of split-brain consciousness is interesting for reasons that extend beyond the split-brain consciousness debate itself. Most saliently, the model raises questions about subjects of experience and phenomenal perspectives, about the relationship between phenomenal structure and the neural basis of consciousness, and about the place for the type/token distinction in folk and scientific psychology.

This chapter examines two objections that have been raised to the partial unity model, objections that presumably account for how relatively little attention the model has received. Because I argue that neither of these objections impugns the partial unity model in particular, the chapter constitutes a preliminary defense of the partial unity model, working to show that it is on a par with its clearest contender, a version of the conscious duality model.

2 The Split-Brain Consciousness Debate

The split-brain experimental paradigm typically involves carefully directing perceptual information to a single hemisphere at a time, to the extent possible. (See Lassonde & Ouimet, 2010, for a recent review.) This is relatively simple to understand in the case of tactile perception. Suppose you blindfold a split-brain subject (or in some other way obscure his hands from his sight) and put an object in his left hand, say, a pipe. Since patterned touch information transmits from each hand only to the contralateral (opposite side) hemisphere (Gazzaniga, 2000, 1299), tactile information about the pipe will be sent from the subject’s left hand to his right hemisphere (RH). In a “non-split” subject, the corpus callosum would somehow transfer this information to, or enable access by, the left hemisphere (LH) as well. In the split-brain subject, however, this tactile information more or less stays put in the hemisphere that initially received it. Meanwhile, in a large majority of the population, the right hemisphere is mute. A split-brain subject is therefore likely to say, via his LH, that he cannot feel and doesn’t know what he is holding in his left hand. A few minutes later, however, using the same left hand, and while still blindfolded, the subject can select the object he was holding from a box of objects—showing that the object was not only felt but recognized and remembered. The subject may even draw a picture of a pipe, again using the left hand, which is under the dominant control of the right hemisphere (Levy, 1969). Visual, auditory, olfactory, pain, posture, and temperature information may all be lateralized, to varying degrees, under some conditions.

What makes such findings interesting for thinking about conscious unity is this: On the one hand, a split-brain subject can respond to stimuli presented to either hemisphere in ways that we think generally require consciousness. On the other hand, a subject can’t respond to stimuli in the integrated way that we think consciousness affords when the different stimuli are lateralized to different hemispheres (or when a response is elicited not from the hemisphere to which the stimulus was presented, but from the other). For example, a very basic test for the “split-brain syndrome” is a simple “matching” task in which the subject is first required to demonstrate the ability to recognize both RH-presented stimuli and LH-presented stimuli by pointing to a picture of the referents of the presented words, by drawing a picture, and so on. After demonstrating this capacity, the subject is then finally asked to say whether the two lateralized stimuli are the same or different. In the paradigmatic case, the subject can perform the first, apparently much more complex sort of task, but not the second, apparently simpler task. This is what first suggests (obviously not conclusively) that the hemispheres somehow have different streams of consciousness: after all, I could demonstrate what I was conscious of and you could demonstrate what you were conscious of, without either of us having any idea whether we were conscious of the same thing.

Such results notwithstanding, a number of philosophers have defended some kind of unity model (UM) of split-brain consciousness, according to which a split-brain subject (at least typically) has a single stream of consciousness. In the only version of the unity model invariably mentioned in the split-brain consciousness literature, a split-brain subject has a single stream of consciousness whose contents derive exclusively from the left hemisphere. It’s actually not clear that anyone ever defended this version of the model; a couple of theorists (Eccles, 1965, 1973; Popper & Eccles, 1977) are widely cited as having denied RH “consciousness,” but they may have been using the term to refer to what philosophers would call “self-consciousness” (see especially Eccles, 1981). The simple difficulty with that version of the UM is that a lot of RH-controlled behavior so strongly appears to be the result of conscious perception and control. As Shallice once said of RH-controlled performance on the Raven’s Progressive Matrices task (Zaidel, Zaidel, & Sperry, 1981):

If this level of performance could be obtained unconsciously, then it would be really difficult to argue that consciousness is not an epiphenomenon. Given that it is not, it is therefore very likely, if not unequivocally established, that the split-brain right hemisphere is aware. (Shallice, 1997, 264)

Contemporary versions of the unity model (Marks, 1981; Hurley, 1998; Tye, 2003; Bayne, 2008) in fact all assume that conscious contents derive from both hemispheres. I will make this same assumption in this paper.2

The major alternative to the unity model is the conscious duality model (CDM). According to the CDM, a split-brain subject has two streams of consciousness, each of whose contents derive from a different hemisphere. This model appealed particularly to neuropsychologists (e.g., Gazzaniga, 1970; Sperry, 1977; LeDoux, Wilson, & Gazzaniga, 1977; Milner, Taylor, & Jones-Gotman, 1990; Mark, 1996; Zaidel et al., 2003; Tononi, 2004), but several philosophers have defended or assumed it as well (e.g., Dewitt, 1975; Davis, 1997).

Since both the CDM and contemporary versions of the UM allow that conscious contents derive from both hemispheres, what is at issue between them is whether RH and LH experiences are unified or co-conscious with each other—that is, whether they belong to one and the same or to two distinct streams of consciousness.

Unsurprisingly, there is disagreement about what co-consciousness (or conscious unity) is, and about whether there is even any single relation between conscious phenomena that we mean to refer to when speaking of someone’s consciousness as being “unified” (Hill, 1991; Bayne & Chalmers, 2003; Tye, 2003; Schechter, forthcoming b). It is nonetheless possible to articulate certain assumptions we make about a subject’s consciousness—assumptions concerning conscious unity—that appear to somehow be violated in the split-brain case. As Nagel says, we assume that, “for elements of experience … occurring simultaneously or in close temporal proximity, the mind which is their subject can also experience the simpler relations between them if it attends to the matter” (Nagel, 1971, 407). We might express this assumption by saying that all of the (simultaneously) conscious experiences of a subject are co-accessible. Marks, meanwhile, notes that we assume that two experiences “belong to the same unified consciousness only if they are known, by introspection, to be simultaneous” (1981, 13). That is, we assume that any two simultaneously conscious experiences of a subject are ones of which the subject is (or can be) co-aware. Finally, we assume that there is some single thing that it is like to be a conscious subject at any given moment, something that comprises whatever multitude and variety of experiences she’s undergoing (Bayne, 2010). We assume, that is, that at any given moment, any two experiences of a subject are co-phenomenal.
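To keep the three assumptions distinct, it may help to restate them in a compact shorthand (the notation is mine, not the author’s). For simultaneously conscious experiences x and y of a subject:

co-accessible(x, y): the subject can, by attending, also experience the simpler relations between x and y;
co-aware(x, y): the subject is, or can be, introspectively aware of x and y together, as simultaneous;
co-phenomenal(x, y): there is some one thing it is like for the subject to undergo x and y together.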

Although the split-brain consciousness debate and this paper are most centrally concerned with co-phenomenality, I will basically assume here that whenever two (simultaneously) phenomenally conscious experiences are either co-aware or co-accessible, they are also co-phenomenal. (This assumption may be controversial, but its truth or falsity does not affect the central issues under consideration in this chapter, so long as we view these relations as holding of experiences rather than contents; see Schechter, forthcoming a.) For simplicity’s sake, I will focus only on synchronic conscious unity—the structure of split-brain consciousness at any given moment in time—to the extent possible. Accordingly, I will speak simply of the co-consciousness relation (or conscious unity relation) in what follows.3

Let us say that streams of consciousness are constituted by experiences and structured by the co-consciousness relation.4 According to the unity model of split-brain consciousness, a split-brain subject has a single stream of consciousness: right and left hemisphere experiences are co-conscious, in other words. According to the conscious duality model, co-consciousness holds intrahemispherically but fails interhemispherically in the split-brain subject, so that the subject has two streams of consciousness, one “associated with” each hemisphere.

Despite their disagreements, the CDM and the UM share a very fundamental assumption: that co-consciousness is a transitive relation. In this one respect, these two models have more in common with each other than either of them does with the partial unity model (PUM). The PUM drops the transitivity assumption, allowing that a single experience may be co-conscious with others that are not co-conscious with each other. Streams of consciousness may still be structured by co-consciousness, but it is not necessary that every experience within a stream be co-conscious with every other. In this model, then, conscious unity admits of degrees: only in a strongly unified stream of consciousness is co-consciousness transitive. According to both the UM and the CDM, then, a split-brain subject has some whole number of strongly unified streams of consciousness, while according to the PUM, a split-brain subject has a single but only partly (or weakly) unified consciousness.
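The contrast can be put in a minimal formal sketch; the notation here is mine, not the chapter’s. Write $C(x, y)$ for “experiences x and y are co-conscious.” Then:

UM and CDM (transitivity): $\forall x \forall y \forall z \, [(C(x,y) \land C(y,z)) \rightarrow C(x,z)]$

PUM (partial unity): possibly $\exists x \exists y \exists z \, [C(x,y) \land C(y,z) \land \neg C(x,z)]$

On this sketch, a strongly unified stream is one within which C behaves as an equivalence relation; the PUM allows streams that are merely connected by C, without C holding between every pair of their members.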

Note that because there are several possible notions of conscious unity, there are other possible partial unity models. The truth is that conscious unity is (to borrow Block’s [1995] term) a “mongrel concept” (Schechter, forthcoming b); when we think of what it is to have a “unified” consciousness, we think of a whole host of relations that subjects bear to their conscious experiences and that these experiences bear to each other and to action. Talk of a “dual” consciousness may connote a breakdown of all these relations simultaneously. In reality, though, these relations may not stand or fall all together; in fact, upon reflection, it’s unlikely that they would. One intuitive sense of what it means to have a partially unified consciousness, then, is a consciousness in which some of these unity relations still hold, and others do not (Hill, 1991).

This is not what I mean by a “partially unified consciousness,” however. In one possible kind of partial unity model, some conscious unity relations, but not others, hold between experiences. In the kind of partial unity model under consideration here, conscious unity relations hold between some experiences but not between others. This point will be crucial to understanding the choice between the PUM and the CDM.5

The PUM of split-brain consciousness has several prima facie strengths. Most obviously, it appears to offer an appealingly intermediate position between two more extreme models of split-brain consciousness. The UM must, apparently implausibly, deny failures of interhemispheric co-consciousness; the CDM is apparently inconsistent with the considerable number of cases in which it is difficult or impossible to find evidence of interhemispheric dissociation of conscious contents.

The PUM that I will consider also makes some kind of neurophysiological unity the basis for conscious unity. Against those who would claim that splitting the brain splits the mind, including the conscious mind, some philosophers argued that a putatively single stream of consciousness can be “disjunctively realized” (Marks, 1981; Tye, 2003). Lockwood’s defense of the PUM, in contrast, appeals explicitly to the fact that the “split” brain is not totally split, but remains physically intact beneath the cortical level: the cortically disconnected right and left hemispheres are therefore associated with distinct conscious experiences that are not (interhemispherically) co-conscious; nonetheless, these are all co-conscious with a third set of subcortically exchanged or communicated conscious contents. Many will be attracted to a model that makes the structure of consciousness isomorphic to the neurophysiological basis of consciousness in this way (Revonsuo, 2000).6

Another significant source of the PUM’s appeal is its empirical sensitivity or flexibility, in a particular sense. Lockwood sought to motivate the PUM in part by considering the possibility of sectioning a subject’s corpus callosum one fiber at a time, resulting in increasing degrees of (experimentally testable) dissociation. Would there be some single fiber that, once cut, marked the transition from the subject’s having a unified to a dual consciousness? Or would the structure of consciousness change as gradually as did the neural basis of her conscious experience? Lockwood implies that nothing but a pre-theoretic commitment to the transitivity of co-consciousness would support the first answer, and simply notes that “there remains something deeply unsatisfactory about a philosophical position that obliges one to impose this rigid dichotomy upon the experimental and clinical facts: either we have just one center, or stream, of consciousness, or else we have two (or more), entirely distinct from each other” (Lockwood, 1989, 86).

Lockwood’s thought experiment is in fact not wholly fictitious: callosotomy came to be routinely performed in stages, with predictable degrees and sorts of dissociation evident following sections at particular callosal locations (e.g., Sidtis et al., 1981). “Partially split” subjects really do seem somehow intermediate between “nonsplit” and (fully) “split-brain” subjects. Surely one appealing characterization of such subjects is that the structure of their consciousness is intermediate between (strongly) unified and (wholly) divided or dual.

In light of the apparent strengths of the PUM, it should be puzzling how little philosophical attention it has received. Those who have discussed the model, however, have not been enthusiastic. Hurley (1994) suggested that there could be no determinate case of partial unity of consciousness; Nagel suggested that even if empirical data suggested partial unity, the possibility would remain inconceivable (and thus unacceptable) from the first-person and folk perspective (1971, 409–410); Bayne (2008, 2010) has questioned whether the model is even coherent. Indeed, Lockwood himself at one point admitted that “in spite of having defended it in print, I am still by no means wholly persuaded that the concept of a merely weakly unified consciousness really does make sense” (1994, 95).7

Of the philosophers just mentioned, Nagel, Bayne, and Lockwood (as well as Dainton, 2000) have been concerned, first and foremost, with what I call the inconceivability challenge. Their charge is, at minimum, that a partially unified consciousness is not even possibly imaginable. Hurley’s indeterminacy charge, meanwhile, is that “no … factors can be identified that would make for partial unity” (1998, 175) as opposed to conscious duality.

At a glance, these two objections to the PUM look to be in some tension with each other: the indeterminacy challenge suggests that the PUM is in some sense equivalent to (or not distinguishable from) the CDM, while, according to the inconceivability objection, the PUM is somehow uniquely inconceivable. Deeper consideration, however, reveals that the two objections are importantly related. The inconceivability objection is rooted in the fact that there is nothing subjectively available to a subject that makes her consciousness partially unified as opposed to dual; the indeterminacy challenge adds that there is nothing objective that would make it partially unified either. Taken together, these concerns may even imply that there is no such thing as a partial unity model of consciousness.

Sections 4 and 5 address these twin objections, ultimately arguing that they do not and cannot work against the PUM in the way its critics have thought. The conclusion of the chapter is that the PUM is a distinct model, and one that deserves the same consideration as any other model of split-brain consciousness. In the next section, I will lay out what is most centrally at issue between these models.

3 Experience Types and Token Experiences

The central challenge for the CDM has always been to account for the variety of respects in which split-brain subjects appear to be “unified.” First of all, split-brain subjects don’t seem that different from anyone else: while their behavior outside of experimental conditions isn’t quite normal (Ferguson, Rayport, & Corrie, 1985), it isn’t incoherent or wildly conflicted. Second of all, even under experimental conditions, bihemispheric conscious contents don’t seem wholly dissociated. Via either hemisphere, for instance, a split-brain subject can indicate certain “crude” visual information about a stimulus presented in a given visual field (Trevarthen & Sperry, 1973; though see also Tramo et al., 1995). Similarly, although finely patterned tactile information from the hand transmits only contralaterally, “deep touch” information (sufficient to convey something about an object’s texture, and whether it is, say, rounded or pointed) transmits ipsilaterally as well. As a result, in such cases, one apparently speaks of what the subject (tout court) sees and feels, rather than of what one hemisphere or the other sees or feels, or of what the subject sees and feels via one hemisphere or the other.

Proponents of the CDM, however, have always viewed it as compatible with the variety of respects in which split-brain subjects appear “unified.” Of course a split-brain subject seems to be a single thinker: RH and LH have the same memories and personality by virtue of having the same personal and social history, and so on. And of course split-brain subjects typically behave in an integrated manner: especially outside of experimental situations, the two streams of consciousness are likely to have highly similar contents. In other words, proponents of the CDM long appealed to interhemispheric overlap in psychological types, while maintaining that the hemispheres are subject to distinct token mental phenomena.

A primary reason for the persistence of the debate between the CDM and the UM is that proponents of the CDM have readily availed themselves of the type-token distinction in this way. Accordingly, the version of the CDM that has been defended by neuropsychologists in particular is one in which a split-brain subject has two entirely distinct streams of conscious experiences, but with many type- (including content-) identical experiences across the two streams. Call this the conscious duality (with some duplication of contents) model, or CDM-duplication. Proponents of the UM have meanwhile sometimes responded by arguing that there is no room for the type-token distinction in this context. (See Schechter, 2010, responding to Marks, 1981, and Tye, 2003, on this point; for a different version of this objection to the CDM, see Bayne, 2010.) At around this point in the dialectic, very deep questions arise about, among other things, the nature of subjects of experience (Schechter, forthcoming a), and it is not clear how to resolve them.

Let’s look at an example. In one experiment, a split-brain subject, V.P., had an apparently terrifying fire safety film presented exclusively in her LVF (left visual field, and thus to her RH). After viewing, V.P. said (via her LH) that she didn’t know what she saw—“I think just a white flash,” she said, and, when prompted further, “Maybe just some trees, red trees like in the fall.” When asked by her examiner (Michael Gazzaniga) whether she felt anything watching the film, she replied (LH), “I don’t really know why but I’m kind of scared. I feel jumpy. I think maybe I don’t like this room, or maybe it’s you. You’re getting me nervous.” Turning to the person assisting in the experiment, she said, “I know I like Dr. Gazzaniga, but right now I’m scared of him for some reason” (Gazzaniga, 1985, 75–76).

In this case, there appeared to be a kind of interhemispherically common or shared emotional or affective experience. (And, perhaps, visual experience.) But here the defender of the CDM will employ the type/token distinction: what was common to or shared by V.P.’s two hemispheres was, at most, a certain type of conscious emotional or affective (and perhaps visual) experience—but each hemisphere was subject to its own token experience of that type. Perhaps, for instance, interhemispheric transfer of affect, or simply bihemispheric access to somatic representations of arousal, meant that each hemisphere generated and was subject to an experience of anxiety while V.P. (or her RH) watched the film—but if so, then there were two experiences of anxiety.

Of course, if there really was an RH conscious visual experience of the fire safety film that was not co-conscious with, say, an LH auditory experience of a stream of inner speech that the LH was simultaneously engaging in (“What’s going on over there? I can’t see anything?”), then someone who accepts the transitivity principle has to resort to some kind of strategy like this. If the RH experience and the LH experience are not co-conscious, then they cannot belong to the same stream of consciousness—even if both are co-conscious with an emotional or affective experience of anxiety.

Because the PUM drops the transitivity principle, however, it can take unified behavior and the absence of conscious dissociation at face value. According to the PUM, the reason V.P. was able to describe, via her left hemisphere, the feeling of anxiety that her RH was (presumably) also experiencing is that V.P. really had a single token experience of anxiety, co-conscious with all of her other token experiences at that time.

More generally, wherever the CDM posits two token experiences with a common content (figure 15.1), the PUM posits a single token experience with that content (figure 15.2). To put it differently, where there is no qualitative difference between contents, the PUM posits no numerically distinct experiences.

[Insert Figures 15.1 and 15.2 near here]
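The difference between the figures can be put schematically (the experience labels follow the figures; the bracket notation is mine). Suppose contents A, B, and C are all conscious, with A and C dissociated from each other but each unified with B:

CDM-duplication (figure 15.1): two streams, E1[A]–E2[B] and E2′[B]–E3[C], with no interhemispheric co-consciousness; B is entokened twice.
PUM (figure 15.2): one stream, E1[A]–E2[B]–E3[C], where E2 is co-conscious with both E1 and E3 but E1 and E3 are not co-conscious; B is entokened once.

The two descriptions agree about which contents are unified with which; they disagree only about how many token experiences carry B.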

4 The Inconceivability Objection

As Bayne points out (2008, 2010), the inconceivability objection says more than that we cannot imagine what it’s like to have a partially unified consciousness. After all, there may be all kinds of creatures whose conscious experience we cannot imagine (Nagel, 1974) simply because of contingent facts about our own perceptual systems and capacities. According to the inconceivability objection, there is nothing that would even count as successfully imagining what it is like to have a partially unified consciousness.

Why should this objection face the PUM uniquely? After all, we cannot imagine what it would be like to have (simultaneously) two streams of consciousness, either. This follows from the very concept of co-consciousness: two experiences are co-conscious when there is something it is like to undergo them together. Failures of co-consciousness, in general, then, are not the kinds of things for which there is anything that it’s like to be subject to them (see Tye, 2003, 120). (More on subjects of experience below.)

As Tye (2003, 120) notes, there is of course a qualified sense in which one can imagine having two streams of consciousness: via two successive acts of imagination. That is, one can first imagine what it’s like to have the one stream of consciousness and then imagine what it’s like to have the other. There is just no single “experiential whole” encompassing both imaginative acts, for the experiences in the two streams aren’t “together” in experience, in the relevant, phenomenological sense. We could say, if we wanted, that having multiple streams of consciousness is sequentially but not simultaneously imaginable.

These same remarks apply to the PUM as well, however. Consider figure 15.2 again. We can first imagine what it’s like to undergo experiences E1 and E2 together, and can then imagine what it’s like to undergo E2 and E3 together. There is just no single “experiential whole” encompassing E1, E2, and E3, because neither E1 and E3 nor their contents, A and C, are together in experience in the relevant, phenomenological sense. Thus having a partially unified consciousness is also sequentially but not simultaneously imaginable.

On the face of it, then, the inconceivability objection should face the PUM and the CDM equally. The objection concerns what it’s like to be conscious—a subjective matter—and there is nothing in the phenomenology of conscious duality or partial unity to distinguish them. The PUM and the CDM-duplication differ with respect to whether the experience carrying the content B that is co-conscious with the experience carrying the content A is the very same experience as the experience carrying the content B that is co-conscious with the experience carrying the content C. They differ, that is, with respect to whether the experience that is co-conscious with E1 is the very same (token) experience as the experience that is co-conscious with E3. This is a question about the token identities of experiences, and as Hurley (1998) notes, the identities of experiences are not subjectively available to us.8

The inconceivability objection concerns the phenomenality or subjective properties of experience, but there is no phenomenal, subjective difference between having two streams of consciousness and having a single but only weakly unified stream of consciousness. Why, then, have critics of the PUM—and even its major philosophical proponent (Lockwood, 1994)—found the PUM somehow uniquely threatened by the objection?

I think that the reason has to do with personal identity. In ordinary psychological thought, the individuation of mental tokens, including conscious experiences, is parasitic upon identifying the subject whose experiences they are, so that if there is a single subject, for example, feeling a twinge of pain at a given time, there is one experience of pain at that time; if there are two subjects feeling (qualitatively identical) pains at that time, then there are two experiences of pain; and so on. The problem is that our thinking about experience is so closely tied to our thinking about subjects of experience that whether or not the “divided” hemispheres are associated with distinct subjects of experience seems just as uncertain as whether or not they share any (token) conscious experiences.

Precisely because we ordinarily individuate conscious experiences by assigning them to subjects, one natural interpretation of the CDM has always been that the two hemispheres of a split-brain subject are associated not only with different streams of consciousness but with different subjects of experience (or “conscious selves,” e.g., Sperry, 1985). If that interpretation is correct, then no wonder split-brain consciousness is only sequentially imaginable: when we imagine a split-brain human being’s consciousness, we must in fact imagine the perspectives of two different subjects of experience in turn. The PUM has instead been interpreted as positing a single subject of experience with a single stream of consciousness—but one whose consciousness is not (simultaneously) imaginable.

There must at least be two subjective perspectives in the conscious duality case (figure 15.3), because the co-consciousness relation itself appeals to falling within such a perspective. (Think about the origins of this “what it’s like” talk! See Nagel, 1974.) An experience is conscious if and only if it falls within some phenomenal perspective or other; two experiences are co-conscious if and only if they fall within the same phenomenal perspective, if there is some perspective that “includes” them both. Now, either subjects of experience necessarily stand in a one-to-one correspondence with phenomenal perspectives, or they do not. We might understand subjects of experience in such a way that a subject of experience necessarily has a (single) phenomenal perspective at a time. If this is the case, then the CDM posits two subjects of experience, each of whose perspectives is (it would seem) perfectly imaginable. Alternatively, we might let go of the connection between subjects of experience and phenomenal perspectives. If so, then the CDM may posit a single subject of experience with two phenomenal perspectives. If we pursue this second course, then we cannot imagine what it is like to be such a subject of experience—but this is unsurprising, since we have already forgone the connection between being a subject of experience and having a phenomenal perspective.

As before, however, these remarks apply equally to the PUM. The PUM also posits two phenomenal perspectives, for again failures of co-consciousness—even between two experiences that are mutually co-conscious with a third—mark the boundaries of such perspectives. Only and all those experiences that are transitively co-conscious with each other fall within a single phenomenal perspective (figure 15.4).

[Insert Figures 15.3 and 15.4 near here]

(As before, the solid lines signify co-consciousness; each dashed oval circumscribes those experiences that fall within a single subjective perspective.)
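This structure can be made concrete in a toy computational model. The sketch below is mine, not the author’s, and the names (perspectives, co_conscious) are purely illustrative; it simply treats experiences as nodes, co-consciousness as a symmetric relation, and phenomenal perspectives as maximal sets of pairwise co-conscious experiences.

from itertools import combinations

def perspectives(experiences, co_conscious):
    """Return the maximal sets of pairwise co-conscious experiences.

    Brute-force enumeration: fine for toy cases, exponential in general.
    """
    def unified(group):
        # every pair in the group must be co-conscious
        return all(co_conscious(x, y) for x, y in combinations(group, 2))

    candidates = [set(g)
                  for r in range(1, len(experiences) + 1)
                  for g in combinations(experiences, r)
                  if unified(g)]
    # keep only the sets not strictly contained in another unified set
    return [g for g in candidates if not any(g < h for h in candidates)]

# The structure of figure 15.4: E2 is co-conscious with E1 and with E3,
# but E1 and E3 are not co-conscious with each other.
pairs = {frozenset(("E1", "E2")), frozenset(("E2", "E3"))}
def co_conscious(x, y):
    return frozenset((x, y)) in pairs

print(perspectives(["E1", "E2", "E3"], co_conscious))
# -> [{'E1', 'E2'}, {'E2', 'E3'}]: two perspectives sharing the one experience E2

On the CDM, by contrast, every experience would belong to exactly one such maximal set; the PUM is distinctive precisely in letting a single experience (here E2) fall within more than one.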

Once again, we can relinquish the connection between being a subject of experience and having a single phenomenal perspective, in which case we can’t imagine what it’s like to be the subject with the partially unified consciousness, but in which case, again, we’ve already forgone the commitment to there being something it’s like to be her. Alternatively, we can insist upon a necessary connection between being a subject of experience and having a phenomenal perspective—but then the PUM must also posit two subjects of experience within any animal that has a partially unified consciousness. And we can imagine the perspective of either of these subjects of experience.9

Whichever model we accept—that shown in figure 15.3 or in figure 15.4—and whether or not we identify, for example, the split-brain subject as a whole with a subject of experience, the entity to which we would ascribe E1 and E3 in the figures above—the subject in the organismic sense—is not something that has a phenomenal perspective, not in the ordinary sense in which we speak of subjects “having” such perspectives.

These remarks suggest an attenuated sense in which the two models can be distinguished on subjective grounds. On the one hand, there is no difference between what it’s like to have a partially unified consciousness and what it’s like to have two streams of consciousness, because there is nothing—no one thing—that it is like to have either of those things. On the other hand, there is a difference between the models with respect to the role they make for phenomenal perspectives in individuating experiences. Because streams of consciousness are strongly unified, according to the CDM, an experience’s token identity may depend upon the phenomenal perspective that it falls within (or contributes to). The PUM forgoes this dependence: there can be multiple phenomenal perspectives associated with the same stream of consciousness, and a single experience can fall within multiple phenomenal perspectives.

The strength of the conceptual connection between experiences and phenomenal perspectives is certainly a consideration that speaks against the PUM. What remains open, however, is whether other considerations could outweigh this one. For the reasons I go on to explain in the next section, I agree with Lockwood that this is at least possible.

For now, the important point is that the distinction between subjects of experience and subjective perspectives undercuts the force of the inconceivability objection. Consider figure 15.2 again. According to the PUM, the experience that is co-conscious with the experience of A (with E1, in other words) and the experience that is co-conscious with the experience of C (with E3, in other words) are one and the same experience. Since the experience nonetheless contributes to two distinct phenomenal perspectives, there is nothing subjective that makes it true that there is just one experience with that content. It must therefore be an objective fact or feature that makes it the case that the experience that is co-conscious with E1 is one and the same as the experience that is co-conscious with E3.

So long as there are properties of experiences that are not subjectively available to us, there is, on the face of it, no reason to think that there could not be any such feature or fact. According to the indeterminacy objection, however, this is just the situation that the PUM is in. That is, there is no fact or feature—subjective or objective—that could make it true that the experience that is co-conscious with E1 is the experience that is co-conscious with E3. I turn to this objection next.

5 The Indeterminacy Objection

Where the CDM posits two token experiences with a common content, the PUM posits a single token experience with that content. This is where the threat of indeterminacy gets its grip: what would make it the case that a subject had a single token experience that was co-conscious with others that were not co-conscious with each other (figure 15.6), rather than a case in which the subject had two (or more) streams of consciousness, but with some overlap in contents (figure 15.5)?

[Insert Figures 15.5 and 15.6 near here]

The conscious duality model and the partial unity model agree that wherever there is a dissociation between contents, there is a failure of co-consciousness between the vehicles or experiences carrying those contents. The models differ with respect to what they say about nondissociated contents: according to the PUM, interhemispherically shared contents are carried by interhemispherically shared experiences; according to the CDM-duplication, they are not.

Neuropsychologists apparently recognized these as distinct possibilities. Sperry, for instance, once commented, “Whether the neural cross integration involved in … for example, that mediating emotional tone, constitutes an extension of a single conscious process [across the two hemispheres] or is better interpreted as just a transmission of neural activity that triggers a second and separate bisymmetric conscious effect in the opposite hemisphere remains open at this stage” (Sperry, 1977, 114). Sperry implies, here, that whether a subject like V.P. (sec. 3) has one or two experiences of anxiety is something we simply have yet to discover.

Hurley (1998), however, suggested that the difficulty of distinguishing between partial unity of consciousness and conscious duality with some duplication of contents is a principled one. According to Hurley, the problem is not at base epistemic, but metaphysical: there is nothing that would make a subject’s consciousness partially unified, as opposed to dual but with some common contents. The PUM thus stands accused, once again, of unintelligibility:

What does the difference between these two interpretations [partial unity of consciousness versus conscious duality with some duplication of contents] amount to? There is no subjective viewpoint by which the issue can be determined. If it is determined, objective factors of some kind must determine it. But what kind? … Note the lurking threat of indeterminacy. If no objective factors can be identified that would make for partial unity as opposed to separateness with duplication, then there is a fundamental indeterminacy in the conception of what partial unity would be, were it to exist. We can’t just shrug this off if we want to defend the view that partial unity is intelligible. (1998, 175)

The difficulty of conceptualizing the difference between partial unity and conscious duality with some duplication of contents is rooted in the purposes to which the type/token distinction is ordinarily put. Generalizations in psychology—whether folk or scientific—are generalizations over psychological types, including contents (Burge, 2009, 248). Mental tokens are just the instantiations of those properties or types within subjects. We assume that two subjects can’t share the same mental token, so if they both behave in ways that are apparently guided by some mental content, we must attribute to each of them a distinct mental token with that content. That is: what entokenings of contents explain is the access that certain “systems”—in ordinary thought, subjects—have to those contents.

The problem is that both the PUM and the CDM-duplication allow that the right and left hemispheres of a split-brain subject have access to some of the same contents. Indeed, while disagreeing about how to individuate tokens, the PUM and the CDM-duplication could in principle be in perfect agreement about which systems have access to which information, and about what role this shared access to information plays in behavioral control. In that case, there would be no predictive or explanatory work, vis-à-vis behavior, for the type/token distinction to do.

Suppose, for the sake of argument, that this is right, and that the two models are predictively equivalent vis-à-vis behavior. I have already argued that they are subjectively indistinguishable as well. Are there any other grounds for distinguishing partial unity from conscious duality with some duplication of contents?

The most obvious possibility is that some or other neural facts will “provide the needed objective basis for the distinction” (Hurley, 1998, 175). In the early days of the split-brain consciousness debate, consciousness was usually assumed to be a basically cortical phenomenon, so that the neuroanatomy of the callosotomized brain was taken to support the conscious duality model. Tides have changed, however, and by now the “split” brain, which of course remains physically intact beneath the cortical level, might be taken to provide prima facie support for the claim that split-brain consciousness is partially unified as well.

Although my reasons for thinking so differ from hers, I agree with Hurley that the structure of consciousness cannot be read off neuroanatomical structure so straightforwardly. To start with, although subcortical structures are (usually) left intact by split-brain surgery, subcortico-cortical pathways may still be largely unilateral. Indeed, so far as I know, this is largely the case for individual pathways of, for example, individual thalamic nuclei, though subcortico-cortical pathways taken collectively may still ultimately terminate and originate bilaterally.10

Furthermore, although structural connectivity is a good guide to functional connectivity, the latter is what we are really interested in. Now, given how intimately subcortical activities are integrated with cortical activities in the human brain, it is of course natural to hypothesize that the physical intactness of subcortical structures in the “split” brain provides the basis for whatever kind or degree of interhemispheric functional connectivity is needed for conscious unity. On the other hand, one could apparently reason just as well in the opposite direction: given how intimately subcortical activities are integrated with cortical activities, it is reasonable to suspect that a physical (surgical) disruption of cortical activities creates a functional disruption or reorganization of activity even at the subcortical level. Johnston et al. (2008), for instance, found a significant reduction in the coherence of firing activity not just across the two hemispheres of a recently callosotomized subject, but across the right and left thalamus, despite the fact that the subject’s thalamus was structurally intact.

What we will ultimately need, in order to determine the side on which the neural facts lie in this debate, is a developed theory of the phenomena of interest—consciousness and conscious unity—including a theory of their physical basis. It is only against the background of such a theory that the relevance of any particular neural facts can be judged, and, of course, only against the background of such a theory that those facts could make it intelligible that the experience that is co-conscious with E1 is the experience that is co-conscious with E3. Suppose, for instance, that we found the neural basis of the co-consciousness relation: suppose we found the neurophysiological relation that holds between neural regions supporting co-conscious experiences, and found that that relation holds between the region supporting consciousness of B on the one hand and the regions supporting consciousness of A and of C on the other. That discovery would weigh in favor of the PUM. But we would first have needed a theory of the co-consciousness relation, and we would have needed some prior if imperfect grip on when experiences are and aren’t co-conscious. Thus, for example, Tononi (2004), who views thalamocortical interactions as a crucial part of the substrate of consciousness, also believes that the split-brain phenomenon involves some conscious dissociation, and this is because Tononi makes the integration of information the basis (and purpose) of consciousness. Behavioral evidence meanwhile strongly suggests that there is more intrahemispheric than interhemispheric integration of information in the split-brain subject. Depending on whether conscious unity requires some absolute degree of informational integration or instead just some relatively greatest degree, split-brain consciousness could be revealed to have been dual or partially unified.
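The point can be made vivid with a toy formalization; the notation is mine, and it is not Tononi’s actual measure. Let $I(S)$ be the degree of informational integration across a set S of regions or contents. An absolute criterion, $\mathrm{unified}(S) \leftrightarrow I(S) \geq \theta$ for some fixed threshold $\theta$, might count the split-brain subject’s residual interhemispheric integration as sufficient for a partially unified consciousness; a relative criterion, on which only the local maxima of $I$ count as unified, would instead yield two streams, since intrahemispheric integration exceeds interhemispheric integration. The same neural facts deliver different verdicts depending on the background theory.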

In her discussion of whether the PUM can appeal to neural facts to defeat the indeterminacy objection, Hurley considers neuroanatomical facts alone. I think there is a dialectical explanation for this: Lockwood himself motivates the PUM by appealing to neuroanatomical facts specifically, and of course the (very gross) neuroanatomy of the “split” brain is relatively simple to appreciate.

In the long run, though, we will have various facts about neural activity to adjudicate between the PUM and the CDM as well. Consider recent fMRI research investigating the effects of callosotomy on the bilateral coherence of resting-state activity. As it happens, these studies have thus far yielded conflicting results: Johnston et al. (2008) (cited above) found a significant reduction in the coherence of firing activity across the two hemispheres following callosotomy, while Uddin et al. (2008) found a very high degree of bihemispheric coherence in a different subject.11 Suppose, however, that one or the other finding were replicated across a number of subjects. This is just the kind of finding that could weigh in favor of one model or the other—assuming some neurofunctional theory of consciousness according to which internally generated, coordinated firing activity across wide brain regions serves as the neural mechanism of consciousness.

Hurley herself has fundamental objections to the notion that neural structure might make it the case that a subject’s consciousness was partially unified. On the basis of considerations familiar from the embodied/extended mind view, she argues that the very same neuroanatomy may be equally compatible with a dual and a unified consciousness. (Though I don’t know if she would say that all neural properties—not just those concerning anatomy—are so compatible!) A discussion of the embodied/extended mind debate would take us too far afield here. Suffice it to say that the position Hurley espouses is controversial from the perspective of the ongoing science of consciousness, and, as for a science of conscious unity, “it seems to me that the physical basis of the unity of consciousness should be sought in whatever we have reason to identify as the physical substratum of consciousness itself” (Lockwood, 1994, 94). Still, whether our best-developed theory of consciousness will necessarily be a theory of the brain is admittedly itself an empirical question.

6 Principles of Conscious Unity

According to the indeterminacy objection, there is nothing that would make it the case that a subject’s consciousness was partially unified. Unfortunately, it is not possible, at present, to respond to the objection by stating what would. I have argued that if we had an adequate theory of the phenomenon of interest, we could use it to adjudicate the structure of consciousness in hard cases. Because we don’t yet have such a theory, this response, however persuasive in principle, is not fully satisfying at present.

I will therefore conclude by offering a very different kind of response to the indeterminacy objection. The basic thought will be that the indeterminacy objection is neutral or symmetric between the PUM and the CDM-duplication: that is, the PUM is no more vulnerable to the objection than is the CDM-duplication. If that is right, then the objection cannot work to rule out the PUM, since it can’t plausibly rule out both models simultaneously.

Even on the face of things, it is puzzling that the PUM should be uniquely vulnerable to the indeterminacy objection, since what is purportedly indeterminate is whether a given subject’s consciousness is partially unified or dual with some duplication of contents. In that case, shouldn’t the CDM be just as vulnerable to the objection? Why does Hurley (apparently) think otherwise?

Hurley might respond that there are at least hypothetical cases involving conscious dissociation for which the PUM isn’t even a candidate model, cases that are thus determinately cases of conscious duality. These are cases in which there are no contents common to the two streams. Perhaps this suffices to make the CDM invulnerable (or somehow less vulnerable) to the indeterminacy objection.

The version of the CDM under consideration here, however—and the version that has been popular among neuropsychologists—is one that does posit some duplicate contents. Moreover, although there may not be any candidate cases of partial unity for which the CDM-duplication is not a possible model as well, there are at least hypothetical cases that look to be pretty strong ones for the PUM. Imagine sectioning just a tiny segment of the corpus callosum, resulting in, say, dissociation of tactile information from the little fingers of both hands, and no more. Now consider a proposed account of the individuation of experiences: for a given content B, there are as many experiences carrying the content B as there are “functional sets” of conscious control systems to which that content is made available. What makes a collection of control systems constitute a single functional set, meanwhile, is that they have access to most or all of the same contents. (The prima facie appeal of this account is that it is, I think, consistent with some accounts of the architecture of the mind, according to which all that “unifies” conscious control systems is their shared access to a limited number of contents [Baars, 1988].) In the imagined case, in which we section only one tiny segment of the corpus callosum, there is (arguably) a single functional set of conscious control systems, and thus just one vehicle carrying the content B.12, 13
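In a rough formal rendering of this proposal (the notation is mine), the token count is fixed by the functional-set count:

$\#\mathrm{tokens}(B) = \big|\{\, S : S \text{ is a functional set of conscious control systems to which } B \text{ is available} \,\}\big|$

In the tiny-section case, the control systems still share access to most or all of the same contents, so there is one functional set, and hence a single experience carrying B, even though the little-finger contents dissociate.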

Is there any other reason to think that the indeterminacy challenge faces the PUM uniquely? Hurley’s thought seems to be that the CDM-duplication skirts the indeterminacy challenge by offering a constraint according to which a partially unified consciousness is impossible. The constraint in question is just that co-consciousness is a transitive relation:

What does the difference between these two interpretations [partial unity of consciousness versus conscious duality with some duplication of contents] amount to? … In the absence of a constraint of transitivity, norms of consistency do not here give us the needed independent leverage on the identity of experiences … note the lurking threat of indeterminacy. (Hurley, 1998, 175; emphasis added)

This is a threat, Hurley means, to the intelligibility of the PUM in particular.

The transitivity constraint in effect acts as a principle of individuation for the CDM-duplication and rules out the very possibility of a partially unified consciousness. If the PUM comes with no analogous constraint or principle of individuation, then the most a proponent of the PUM can do is simply stipulate that a subject has a partially unified consciousness. Such stipulation would of course leave worries about metaphysical indeterminacy intact; the PUM would thus be uniquely vulnerable to the indeterminacy challenge.

There is a constraint that plays an individuating role for the PUM, however, one analogous to that played by the transitivity constraint for the CDM-duplication. For the PUM, the individuating role is played by the nonduplication constraint. This constraint might say simply that, at any moment in time, an animal cannot have multiple experiences with the same content. Such a nonduplication principle falls out of the tripartite account of experiences offered by Bayne (2010), for instance, at least one version of which identifies an experience only by appeal to its content, time of occurrence, and the biological subject or animal to which it belongs. Or the constraint might be formulated in terms of a (prominent though still developing) functional theory of consciousness (Baars, 1988; Dehaene & Naccache, 2001): perhaps there is but a single experience for each content that is available to the full suite of conscious control systems within an organism. Whatever the ultimate merits of such a nonduplication constraint, it can at least be given a principled defense (see Schechter, forthcoming a).
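The tripartite version of the constraint admits of a simple formal statement (the formalization is mine): if experiences are individuated by content, time, and animal, then

$\forall e_1 \forall e_2\, [(\mathrm{content}(e_1)=\mathrm{content}(e_2) \land \mathrm{time}(e_1)=\mathrm{time}(e_2) \land \mathrm{animal}(e_1)=\mathrm{animal}(e_2)) \rightarrow e_1 = e_2]$

Applied to figure 15.2, this forces the B-carrying experience that is co-conscious with E1 and the B-carrying experience that is co-conscious with E3 to be numerically identical, which is exactly what the PUM claims.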

I cannot see a reason, then, to conclude that the indeterminacy objection faces the PUM uniquely. If that is so, then the objection cannot work in quite the way Hurley suggests. My reasoning here takes the form of a simple reductio: if the indeterminacy objection makes the PUM an unacceptable model of consciousness, then it should make the CDM-duplication model equally unacceptable, and on the same a priori grounds. Yet a priori grounds are surely the wrong grounds upon which to rule out both the PUM and the CDM-duplication for a given subject: whether there are any animals in whom some but not all conscious contents are integrated in the manner characteristic of conscious unity is surely at least in part an empirical question.

For all the reasons I have discussed, it seems possible that there should be determinate cases of partially unified consciousness. Of course, I have not addressed how we (that is, neuropsychologists) should determine whether a subject has a partially unified stream of consciousness or two streams of consciousness with some duplication of contents. The question is difficult in part because it is, as I have suggested throughout, heavily theoretical rather than straightforwardly empirical. But that is true for many of the most interesting unanswered questions in psychology.

Figure 15.1 Conscious duality with partial duplication of contents.

Figure 15.2 Partial unity of consciousness.

Figure 15.3 Conscious duality with partial duplication.

Figure 15.4 Partial unity of consciousness.

Figure 15.5 Conscious duality with partial duplication of contents.

Figure 15.6 Partial unity of consciousness.

Notes

1. Throughout the chapter I use the term “split-brain subject” (in place of “split-brain patient”) to be synonymous with “split-brain human animal.” I mean the term to be as neutral as possible with respect to personal identity concerns. How many subjects of experience there are within or associated with a split-brain subject will be addressed separately.

2. Marks (1981) and Tye (2003) believe that a split-brain subject usually has one stream of consciousness but occasionally—under experimental conditions involving perceptual lateralization—two. It does not matter here whether we view this as a unity or a duality model. Because Marks and Tye make common contents the basis of conscious unity, their models are interestingly related to the partial unity model, but the version of the partial unity model that I consider also makes some kind of neurophysiological unity the basis of conscious unity, which their models do not.

3. Restricting our attention to synchronic co-consciousness

in this way of course yields, at best, a limited view of

split-brain consciousness. Moreover, co-accessibility, co-

awareness, and co-phenomenality relations are probably more

likely to diverge diachronically than synchronically

(Schechter, 2012). I still hope that the restricted focus is

justified by the fact that the objections to the partial

unity model that I treat here don’t particularly concern

what’s true across time in the split-brain subject.

4. This way of talking suggests what Searle calls a

“building-block” model of consciousness (Searle, 2000; see

also Bayne, 2007). If one assumes a unified field model of

consciousness, then the distinction between the partial

unity model (PUM) and the CDM is, at a glance, less clear,

for reasons that will emerge in sec. 4. It nonetheless seems

possible to me that the kinds of considerations I discuss in

sec. 5 could be used to distinguish partial unity from

conscious duality (with some duplication of contents).

5. The two kinds of partial unity models are of course

interestingly related, and Hurley (1998), for one, considers

a kind of mixed model. Although the objections to the PUM

that I discuss here could be raised against either version

of the model, I think they emerge most starkly in the

context of the second.

6. There is a possible version of the PUM that is (at least

on its face) neutral with respect to implementation. I don’t

think that’s the version that Lockwood intended (see, e.g.,

Lockwood, 1994, 93), but nothing hinges on this exegetical

claim. A version that is neutral with respect to

implementation would be especially vulnerable to the

indeterminacy objection (and, thereby, the inconceivability

objection), though I suggest in sec. 5 that theoretical

constraints and not just neural facts could be brought to

bear in support of the PUM.

7. Within the neuropsychological literature on the split-

brain phenomenon, the model is occasionally hinted at (e.g.,

Trevarthen, 1974; Sperry, 1977; Trevarthen & Sperry, 1973),

but, interestingly, these writings are on the whole

ambiguous—interpretable as endorsing either a model of

split-brain consciousness as partially unified or a model in

terms of two streams of consciousness with common inputs.

Several explanations for this ambiguity will be suggested in

this chapter.

8. Bayne (2010) disputes this, at least up to a point. See

response in Schechter (forthcoming a).

9. The language used in this section implies that we can

choose whether and how to revise our concepts, but I don’t

mean to commit myself to this (Grice & Strawson, 1956).

Perhaps our concept of a subject of experience is basic,

even innately specified, and perhaps there just is an

essential conceptual connection between it and the concept

of a subjective perspective.

10. Certainly this is the case if we read “subcortical” to

mean “noncortical,” which most discussions of the role of

“subcortical” connections in the split-brain subject appear

to do.

11. The subject Johnston et al. (2008) looked at had been

very recently callosotomized, while the subject Uddin et al.

studied—“N.G.”—had undergone callosotomy nearly fifty years earlier.

One possibility then is that in N.G., other, noncortical

structures have come to play the coordinating role that her

corpus callosum once played. (Actually N.G. has always been

a slightly unusual split-brain subject, but then arguably

each split-brain subject is.) A distinct possibility is that

the marked reduction in interhemispheric coherence observed

by Johnston et al. was simply an acute consequence of

undergoing major neurosurgery itself.

12. This particular approach to individuating conscious

experiences makes it possible for there to be subjects for

whom it is genuinely indeterminate (not just indeterminable)

whether they have a dual or a partially unified

consciousness. This is because it views the identity of

experiences and streams of consciousness as in part a matter

of integration, something that comes in degrees. It isn’t

clear, for instance, whether a split-brain subject has one or

two “functional sets” of conscious control systems. So the

structure of split-brain consciousness could be genuinely

indeterminate without showing that there are no possible

determinate cases of partial unity.

13. It is worth noting that there is in fact some debate

about the structure of consciousness in the “normal,” i.e.,

“nonsplit” case. How certain are we that there won’t turn

out to be any failures of co-consciousness in nonsplit

subjects? Several psychologists believe that there are

(e.g., Marcel, 1993). If we discovered that there were any

such failures, my guess is that we would be inclined to

conclude that our consciousness was mostly unified, rather

than dual—but to admit that our consciousness is mostly

unified would be to acknowledge that it is partially not. Thus

it is possible that even the normal case will end up being

one to which we confidently apply the PUM rather than the

CDM-duplication.

References

Baars, B. (1988). A cognitive theory of consciousness. Cambridge: Cambridge University Press.

Bayne, T. (2007). Conscious states and conscious creatures: Explanation in the scientific study of consciousness. Philosophical Perspectives, 21, 1–22.

Bayne, T. (2008). The unity of consciousness and the split-brain syndrome. Journal of Philosophy, 105, 277–300.

Bayne, T. (2010). The unity of consciousness. Oxford: Oxford University Press.

Bayne, T., & Chalmers, D. (2003). What is the unity of consciousness? In A. Cleeremans (Ed.), The unity of consciousness: Binding, integration, and dissociation (pp. 23–58). Oxford: Oxford University Press.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–287.

Burge, T. (2009). Five theses on de re states and attitudes. In J. Almog & P. Leonardi (Eds.), The philosophy of David Kaplan (pp. 246–316). Oxford: Oxford University Press.

Dainton, B. (2000). Stream of consciousness. London: Routledge.

Davis, L. (1997). Cerebral hemispheres. Philosophical Studies, 87, 207–222.

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1–37.

DeWitt, L. (1975). Consciousness, mind, and self: The implications of the split-brain studies. British Journal for the Philosophy of Science, 26, 41–47.

Eccles, J. (1965). The brain and the unity of conscious experience. Nineteenth Arthur Stanley Eddington Memorial Lecture. Cambridge: Cambridge University Press.

Eccles, J. (1973). The understanding of the brain. New York: McGraw-Hill.

Eccles, J. (1981). Mental dualism and commissurotomy. Behavioral and Brain Sciences, 4, 105.

Ferguson, S., Rayport, M., & Corrie, W. (1985). Neuropsychiatric observations on behavioural consequences of corpus callosum section for seizure control. In A. Reeves (Ed.), Epilepsy and the corpus callosum (pp. 501–514). New York: Plenum Press.

Gazzaniga, M. (1970). The bisected brain. New York: Appleton-Century-Crofts.

Gazzaniga, M. (1985). The social brain. New York: Basic Books.

Gazzaniga, M. (2000). Cerebral specialization and interhemispheric communication: Does the corpus callosum enable the human condition? Brain, 123, 1293–1326.

Grice, H., & Strawson, P. (1956). In defense of a dogma. Philosophical Review, 65, 141–158.

Hill, C. (1991). Sensations: A defense of type materialism. Cambridge: Cambridge University Press.

Hurley, S. (1994). Unity and objectivity. In C. Peacocke (Ed.), Objectivity, simulation, and the unity of consciousness (pp. 49–77). Oxford: Oxford University Press.

Hurley, S. (1998). Consciousness in action. Cambridge, MA: Harvard University Press.

Johnston, J., Vaishnavi, S., Smyth, M., Zhang, D., He, B., Zempel, J., et al. (2008). Loss of resting interhemispheric functional connectivity after complete section of the corpus callosum. Journal of Neuroscience, 28, 6452–6458.

Lassonde, M., & Ouimet, C. (2010). The split-brain. Wiley Interdisciplinary Reviews: Cognitive Science, 1, 191–202.

LeDoux, J., Wilson, D., & Gazzaniga, M. (1977). A divided mind: Observations on the conscious properties of the separated hemispheres. Annals of Neurology, 2, 417–421.

Levy, J. (1969). Information processing and higher psychological functions in the disconnected hemispheres of human commissurotomy patients. Unpublished doctoral dissertation, California Institute of Technology.

Lockwood, M. (1989). Mind, brain, and the quantum. Oxford: Blackwell.

Lockwood, M. (1994). Issues of unity and objectivity. In C. Peacocke (Ed.), Objectivity, simulation, and the unity of consciousness (pp. 89–95). Oxford: Oxford University Press.

Marcel, A. (1993). Slippage in the unity of consciousness. In G. Bock & J. Marsh (Eds.), Experimental and theoretical studies of consciousness (pp. 168–179). Chichester: John Wiley & Sons.

Mark, V. (1996). Conflicting communicative behavior in a split-brain patient: Support for dual consciousness. In S. Hameroff, A. Kaszniak, & A. Scott (Eds.), Toward a science of consciousness: The first Tucson discussions and debates (pp. 189–196). Cambridge, MA: MIT Press.

Marks, C. (1981). Commissurotomy, consciousness, and unity of mind. Cambridge, MA: MIT Press.

Milner, B., Taylor, L., & Jones-Gotman, M. (1990). Lessons from cerebral commissurotomy: Auditory attention, haptic memory, and visual images in verbal-associative learning. In C. Trevarthen (Ed.), Brain circuits and functions of the mind (pp. 293–303). Cambridge: Cambridge University Press.

Moor, J. (1982). Split-brains and atomic persons. Philosophy of Science, 49, 91–106.

Nagel, T. (1971). Brain bisection and the unity of consciousness. Synthese, 22, 396–413.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.

Popper, K., & Eccles, J. (1977). The self and its brain. New York: Springer International.

Revonsuo, A. (2000). Prospects for a scientific research program on consciousness. In T. Metzinger (Ed.), Neural correlates of consciousness: Empirical and conceptual questions. Cambridge, MA: MIT Press.

Schechter, E. (2010). Individuating mental tokens: The split-brain case. Philosophia, 38, 195–216.

Schechter, E. (2012). The switch model of split-brain consciousness. Philosophical Psychology, 25, 203–226.

Schechter, E. (forthcoming a). The unity of consciousness: Subjects and objectivity. Philosophical Studies.

Schechter, E. (forthcoming b). Two unities of consciousness. European Journal of Philosophy.

Searle, J. (2000). Consciousness. Annual Review of Neuroscience, 23, 557–578.

Shallice, T. (1997). Modularity and consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The nature of consciousness (pp. 255–276). Cambridge, MA: MIT Press.

Sidtis, J., Volpe, B., Holtzman, J., Wilson, D., & Gazzaniga, M. (1981). Cognitive interaction after staged callosal section: Evidence for transfer of semantic activation. Science, 212, 344–346.

Sperry, R. (1977). Forebrain commissurotomy and conscious awareness. Journal of Medicine and Philosophy, 2, 101–126.

Sperry, R. (1985). Consciousness, personal identity, and the divided brain. Neuropsychologia, 22, 661–673.

Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42.

Tramo, M., Baynes, K., Fendrich, R., Mangun, G., Phelps, E., Reuter-Lorenz, P., et al. (1995). Hemispheric specialization and interhemispheric integration: Insights from experiments with commissurotomy patients. In A. Reeves & D. Roberts (Eds.), Epilepsy and the corpus callosum (Vol. 2, pp. 263–295). New York: Plenum Press.

Trevarthen, C. (1974). Analysis of cerebral activities that generate and regulate consciousness in commissurotomy patients. In S. Dimond & J. Beaumont (Eds.), Hemisphere function in the human brain (pp. 235–263). New York: Halsted Press.

Trevarthen, C., & Sperry, R. (1973). Perceptual unity of the ambient visual field in human commissurotomy patients. Brain, 96, 547–570.

Tye, M. (2003). Consciousness and persons: Unity and identity. Cambridge, MA: MIT Press.

Uddin, L., Mooshagian, E., Zaidel, E., Scheres, A., Margulies, D., Clare Kelly, A., et al. (2008). Residual functional connectivity in the split-brain revealed with resting-state functional MRI. Neuroreport, 19, 703–709.

Zaidel, E., Iacoboni, M., Zaidel, D., & Bogen, J. (2003). The callosal syndromes. In K. M. Heilman & E. Valenstein (Eds.), Clinical neuropsychology (4th ed., pp. 347–403). New York: Oxford University Press.

Zaidel, E., Zaidel, D., & Sperry, R. (1981). Left and right intelligence: Case studies of Raven's Progressive Matrices following brain bisection and hemidecortication. Cortex, 17, 167–186.