8 January 1994 No 1907 Weekly £1.60 ISSN 0262-4079
New Scientist
The man who chopped up light · Diary of a disaster · Jets for deadlier dogfights
HOW RED IS THIS TOMATO? Entering the private world of consciousness
NEW SCIENTIST · ALL IN THE MIND
What does it mean to be conscious? Over the next four weeks New Scientist will be delving deep into the
meaning of "mind" to find out. Our articles will ask if animals are conscious, if emotions could hold the key
to the puzzle, and how future researchers might dissect the human mind. But we begin with a radical theory
based on a seemingly bizarre concept: "privatised senses".
The private world of consciousness

Computers make it easy to view the mind as a thinking machine. But the key to consciousness lies with feeling, not thinking
Nicholas Humphrey
EXPLAINING CONSCIOUSNESS. Nicholas Humphrey
How can the water of the brain be turned into the wine of
consciousness? The real puzzle is not thinking but
feeling.
_________________________________________________________
THERE would seem to be a certain logic to the following
argument. "X has the feel of a problem that no one knows
how to solve; Y is a problem that is on its way to being
solved; therefore Y is not X." Applied to the problem of
consciousness it might run like this: "We have no idea
how a physical process in the brain could bring about
consciousness; we now have a pretty good idea of how a
physical process in the brain could be responsible for a
whole variety of cognitive abilities - perceiving,
remembering, reasoning and so on; therefore these
particular cognitive abilities cannot be the basis of
consciousness."
The logic may not be water-tight, but it does go
some way to explaining many people's stick-in-the-mud
reaction to the progress of artificial intelligence (AI)
and cognitive science in solving the problem of how a
brain or a machine could "think". These achievements of
AI are all very well, so the argument would go, but when
it comes to "conscious thinking" there is obviously still
something - isn't it the main thing? - missing.
Until quite recently the issue could be put aside.
While people still supposed that thinking was something
that only conscious human minds could do, consciousness
as it were came with the territory. There was no reason
to take seriously the idea that thinking could ever be
explained merely in terms of non-conscious mechanical
processes either in a machine or in a brain. It was not
until Alan Turing's research in the 1950s that the
problem of making a "thinking machine" began to look
soluble in theory. Turing realised that thoughts can be
mapped onto symbols, that these symbols can be physical
objects, and that playing around with these physical
objects can be equivalent to playing around with
thoughts. Hence a mechanical device could in principle be
built that would go through the motions - literally - of
thinking.
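Turing's mapping can be caricatured in a few lines of code — a hypothetical sketch, not anything from the article: the "thought" that the cat is on the mat becomes a tuple of tokens, and further "thoughts" are derived by purely mechanical matching of those tokens.

```python
# Toy illustration of Turing's insight: thoughts mapped onto symbols,
# symbols held as physical objects, and symbol-shuffling standing in
# for thinking. The facts and relations below are invented examples.

facts = [("on", "cat", "mat"), ("on", "mat", "floor")]

def holds(relation, subject, obj):
    """Mechanically check a 'thought' by matching symbol tuples."""
    return (relation, subject, obj) in facts

def things_on(obj):
    """Derive new 'thoughts' with no understanding of cats or mats."""
    return [s for (r, s, o) in facts if r == "on" and o == obj]
```

The machine never "knows" what a cat is; it only matches tokens — which is exactly why, on Humphrey's view, such manipulation can go through the motions of thinking while leaving feeling untouched.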
Turing's insight revolutionised brain science and
the philosophy of mind. As little as fifty years ago, the
possibility that a physiologist might one day record from
someone's brain and claim that the purely physical
activity that he was picking up underlay the thought, for
example, that "the cat is on the mat", would have seemed
philosophically outrageous. But now it would seem to many
scientists to make perfect sense.
Nor is the kind of thinking that today's researchers
reckon to be able to explain mechanistically limited to
low-level robotic thinking about cats and mats. In the
hands of the philosopher Daniel Dennett, for example,
every aspect of human thought, from complex decision
making right through to the sense of self, is given a
computational interpretation. So should we not now accept
- gladly - that the mystery has finally gone out of
consciousness? Should we not just announce success, as
Dennett does in his brilliant new book Consciousness
Explained?
These days the gurus of Artificial Intelligence,
such as Marvin Minsky at the Massachusetts Institute of
Technology, do indeed simply take it for granted that
there is nothing more to consciousness than sophisticated
information processing - and talk blithely about
conscious robots that would be conscious merely by virtue
of their ability to manipulate symbolic representations
(see "I process, therefore I am", New Scientist, 27 March
1993). And in the world of living animals comparative
psychologists are happy to go the same way and to
attribute consciousness to any animal that shows evidence
of high enough intelligence. Can the stick-in-the-muds
still have any principled complaint about what has been
achieved - other, that is, than some kind of latent
hostility to explanation?
I think they can. And, though I myself have in the
past been one of those who sought to explain
consciousness in terms of thought processes, I have more
recently become an ally of those who remain unconvinced
by this approach. Although I do not side with the more
obscurantist critics, I agree that on present showing
Artificial Intelligence is still far from solving - or
even addressing - the real problem.
The question is: what do people - let's call them
ordinary people - want to have explained? What do
ordinary people mean by consciousness? Or rather - since
they may mean different things at different times - what
is it they really care about?
If we listen to the kinds of questions people ask
about consciousness - "Are babies conscious?", "Will I be
conscious during the operation?", "How does my
consciousness compare with yours?" etc. - we find that,
again and again, the central issue is not thinking but
feeling. What concerns people is not so much the stream
of thoughts that may or may not be running through their
heads as the sense they have of being alive at all:
alive, that is, as embodied beings, interacting with an
external world at their own body surfaces, and subject to
a spectrum of sensations - pains in their feet, tastes on
their tongue, colours at their eyes.
What matters in particular is the subjective quality
of these sensations: the peculiar painfulness of a thorn,
the saltiness of an anchovy, the redness of an apple -
the "what it is like for us" when the stimuli from these
external objects meet our bodies and we respond.
Thoughts may come and thoughts may go. A person can be
conscious without thinking anything. But a person simply
cannot be conscious without feeling. I feel therefore I
am. And, as Milan Kundera has put it in his novel,
Immortality, "'I think therefore I am,' is the statement
of an intellectual who underrates toothaches."
But here is the paradox. For what figures so
strongly in ordinary people's conception of what matters
about consciousness figures hardly at all in most
contemporary theories of it. What does AI or cognitive
science have to say about the subjective quality of
sensations? Nothing. Where do sensations feature in
Dennett's explanation of consciousness? Not centrally,
but as a side issue. If a physiologist were to record
from someone's brain and announce that the activity under
his electrodes underlay the subjective experience of
tasting a ripe peach, could we see in principle how that
might be so? No, if we stick to computer analogies, we
wouldn't have a clue. For those who want to believe in a
continuing mystery the game, it seems, is certainly not
over.
In my own recent work I have been trying, if not to
solve the problem of sensory consciousness in all its
glory, at least to clarify more precisely what the issues
are. I take it there is nothing actually magical about
sensations, that some kind of brain activity does
underlie the subjective experience, and that when we get
to understand it we will indeed see why it has to be so.
But I am also pretty sure that sensory consciousness has
to involve something quite unlike the kind of symbol
manipulation that AI people typically deal with.
So what other kind of brain activity could possibly
do the trick? And how can it have been shaped up in the
course of evolution? Let me comment here on the
evolutionary question, since it raises what has
traditionally been regarded as one of the most puzzling
features of sensations: their apparent privacy.
Is it true that sensations have private properties?
It may seem odd to suppose that something so important to
us as the redness of red, the sweetness of sweet, the
painfulness of pain, and so on, exists for us alone and
has no public consequences for our behaviour. And yet it
certainly seems to be that way.
Imagine yourself and a friend both looking at the
picture of a tomato on this page. Concentrate on the
quality of your sensory experience, and forget about any
more abstract thought you might be having (e.g. that
"this is a tomato" or "this is the colour we call red").
Is there any way you could tell whether your friend's
conscious experience is the same as yours? Do you have
any way of knowing whether when she looks at the picture
she might be having what for you would be a green
experience - or even a salty or a tickly experience or
perhaps some strange experience that you've never had?
The answer, it would appear, is "no". The conscious
quality of sensations has no discernible effects. As
Ludwig Wittgenstein sardonically concluded: "The
assumption would thus be possible - though unverifiable -
that one section of mankind has one sensation of red and
another section another." This is one of the reasons why
Wittgenstein considered that we simply cannot talk
sensibly about privately sensed qualities - and why
others have been tempted to go even further and to argue
that sensory quality has no objective reality whatever.
And yet, look at it: what could be more real for you
than the redness you are experiencing? But if it is real,
why is it like this and not like something else?
Evolutionary considerations should surely have a
bearing on this question. Most if not all of our
biologically-based features have been shaped by natural
selection, and the way we experience sensations would
seem as basic as any. Yet there is an obvious difficulty
looming. Natural selection can only act on those features
of an organism that make a difference to the organism's
chances of survival. And it follows that there can only
have been selection for the quality of sensory
experiences - one way of sensing red as against another
way - if this quality does have a public effect. Which,
apparently, it does not.
We cannot have it both ways. Either sensations
cannot be as private as they seem, or they cannot have
evolved by natural selection. Well, which is it? Which
are we going to give up? My answer is that we need not
give up either. For, despite what I have just said, I
think it is possible to make the case that sensations are
indeed private and that they have indeed been shaped by
selection - but not at the same time.
To get to this answer I have had to go back to the
drawing board, both philosophically and biologically
(goaded and encouraged by some of Dennett's own thinking
on these matters). Sensations, I argue in my book, "A
History of the Mind", are not at all the kind of thing
that most people think they are. As Wittgenstein
realised, our language misleads us. We talk about
"having" a sensation - as if somehow sensations were
independent entities, out there in the world, waiting for
us to grasp them or look at them or observe them with our
mind's eye. But the truth is - as detailed analysis of
the psychology and phenomenology shows - that sensations
are not so much things we observe as things we do: they
are our own active response to stimulation occurring at
the body surface.
Sensations began their evolutionary life as bodily
behaviours. For a primitive organism, the activity of
sensing red, for example, would have involved responding
to stimulation by red light with a particular sort of
wriggle in the area where the stimulation was occurring -
let's call it a "red wriggle". The activity of sensing
green would have involved a different sort of wriggle, a
"green wriggle".
The subjective experience - or perhaps we should
call it the proto-experience in this primitive organism -
would simply have consisted in the making of the
wriggles, and its quality would have been correlated with
their form. To sense red, for example, would have been to
issue the commands appropriate to bringing about the red
wriggle.
In these early days, the sensory activities involved
actual bodily behaviour. They occurred in the public
domain. And their form was indeed shaped by natural
selection. The red wriggle was selected as the
biologically adaptive response to red light, the green
wriggle as the adaptive response to green light - and the
same went for salty wriggles, ticklish wriggles, and so
on.
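The public stage of this story can be sketched as a toy program — the stimulus and wriggle names are invented placeholders, not the author's:

```python
# A minimal sketch of the early, public stage: each kind of
# stimulation triggers a characteristic, selected-for bodily
# response ("wriggle") at the stimulated site.

WRIGGLE_FOR = {"red": "red-wriggle",
               "green": "green-wriggle",
               "salt": "salty-wriggle"}

def respond(stimulus, body_site):
    """Public stage: the adaptive response is acted out on the body,
    in the open, where natural selection can shape its form."""
    return (WRIGGLE_FOR[stimulus], body_site)   # overt behaviour
```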
But then, in the course of evolution, there was a
slow but remarkable change. To start with, the wriggles
in response to stimulation became less important to
survival. But by then the organism had come to rely, for
other reasons, on having a mental representation of the
stimulation. It needed to be able to tell that "such and
such a kind of stimulation is happening to this part of
me". And the only way it could do this was to monitor its
own response (or at least the response it might have
made). That is to say, it could only tell what was
happening at its own body surface by issuing commands for
an appropriate wriggle and then letting these commands
represent what was happening.
Since, however, there was no longer any point in
actually carrying these commands through into bodily
behaviour, the organism had now only to issue commands as
if to the body surface that would have produced a specific kind
of wriggle. Thus the commands could be, as it were,
short-circuited. And so the whole sensory activity became
closed off from the outside world in an internal loop
within the brain. In other words the sensory activities
got "privatised".
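The short-circuiting described above can likewise be caricatured in code — again a hypothetical sketch, with invented names: the organism still issues the old wriggle commands, but they terminate in an internal loop that is merely monitored, and nothing reaches the body surface.

```python
# Hedged sketch of the "privatised" stage: the wriggle commands are
# short-circuited into an internal brain loop, where the command
# itself now serves as the representation (the sensation).

class PrivatisedSenser:
    WRIGGLE_FOR = {"red": "red-wriggle", "green": "green-wriggle"}

    def __init__(self):
        self.inner_loop = []              # stands in for the brain circuit

    def sense(self, stimulus, body_site):
        command = (self.WRIGGLE_FOR[stimulus], body_site)
        self.inner_loop.append(command)   # command loops back internally
        return command                    # the monitored command IS the sensation

    def acted_out(self):
        return []                         # nothing reaches the body surface
```

Because the loop is sealed off from overt behaviour, the form of the response no longer has any public effect — which is the sense in which the sensation has become private.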
Thereafter the forms of the responses to different
kinds of stimulation ceased to be subject to selection.
But by that time the historical forms had already been
established. The response as if for a red wriggle had one
form, as if for a green wriggle another. And although,
once selection was relaxed, these forms may have drifted
somewhat, they never lost touch with their pedigree. It
is this pedigree that colours our private sensory
experience right down to the present day. We might think
of the analogy of the British aristocracy, where the
forms of behaviour to be found in the now largely
functionless inner world of the House of Lords follow
the patterns laid down for public reasons long ago.
But there is much more to this story. For after the
privatisation had taken place and the sensory activity
had begun to loop back on itself within the brain, there
were dramatic consequences for sensory phenomenology. In
particular, the activity became self-sustaining and
partly self-creating. One consequence was that sensory
experience got lifted into a time dimension of its own -
what I have called the "thick time" of the subjective
present (one might think again of the British House of
Lords).
This theory of what sensations are and where they
came from can, I believe, explain much of what has seemed
most puzzling about the nature of conscious experience.
Yet even if the model works as well as I believe, is it
likely to be accepted as a solution to the mystery?
Probably not. Chasing the rainbow of consciousness is a
sport that someone somewhere will always find new reasons
for continuing.
Nicholas Humphrey is a Senior Research Fellow at Darwin
College, Cambridge. His new book, A History of the Mind
(Vintage, 1993), has been awarded the British
Psychological Society's first annual award.
Figure
Responses to sensory stimulation became "privatised" in
the course of evolution. A response that originally
involved activation of the body in the region of the
stimulus (a), later became targeted not on the body
surface but on the incoming sensory nerve (b), and
eventually became completely closed off within a re-
activating internal circuit in the brain (c).