EVOLVABLE ARCHITECTURES FOR HUMAN-LIKE MINDS

Aaron Sloman and Brian Logan*
School of Computer Science, The University of Birmingham, UK
http://www.cs.bham.ac.uk/~axs/ ~bsl/

Presented at the 13th Toyota Conference on "Affective Minds", Nagoya, Japan, Nov-Dec 1999. To appear in a volume published by Elsevier, Ed. Giyoo Hatano, 2000. Copyright Elsevier.

* Now at the School of Computer Science & IT, University of Nottingham: [email protected]

1 Abstract

There are many approaches to the study of mind, and much ambiguity in the use of words like 'emotion' and 'consciousness'. This paper adopts the design stance and attempts to understand human minds as information processing virtual machines with a complex multi-level architecture whose components evolved at different times and perform different sorts of functions. A multi-disciplinary perspective combining ideas from engineering as well as several sciences helps to constrain the proposed architecture. Variations in the architecture should accommodate infants and adults, normal and pathological cases, and also animals. An analysis of states and processes that each architecture supports provides a new framework for systematically generating concepts of various kinds of mental phenomena. This framework can be used to refine and extend familiar concepts of mind, providing a new, richer, more precise theory-based collection of concepts. Within this unifying framework we hope to explain the diversity of definitions and theories and move towards deeper explanatory theories and more powerful and realistic artificial models, for use in many applications, including education and entertainment.

2 Approaches to the study of mind

There are many approaches to the study of mind. Experimental approaches involve searching for patterns in data from laboratories, questionnaires, etc. Philosophers try to analyse the concepts we use in thinking about minds, or try to work out general requirements for a mind. Minds can be regarded as biological phenomena and attempts made to trace their evolution. Some physicists claim that mentality is implicated in the basic mechanisms of quantum mechanics and try to derive therefrom explanations of more familiar mental phenomena. Social scientists view minds as products of culture, constituted largely by their social context. AI researchers and some cognitive scientists and brain scientists attempt to build computational models.

There are several types of computational approaches to mind, which produce theories and models of varying depth. At one extreme, researchers are concerned entirely with practical goals, such as trying to produce robots or software agents which are 'believable' and produce



appropriate reactions (such as sympathy) in humans, or which are adequate for some practical purpose, such as controlling a factory. Often, shallow simulations achieve such practical goals without explaining very much. When the main research objective is to produce accurate explanations and models of naturally occurring intelligent systems, more complex theories are required. These may involve different levels of abstraction. For example, some theorists strive to produce realistic explanations by modelling known neural structures and processes. Others aim for a more abstract level of modelling.

Our approach is to explore virtual machine information processing architectures which might explain human mental phenomena. We operate at an intermediate level, expecting that later work will move 'downwards' from the architecture to connect it with more realistic implementations closer to biological details, and 'upwards' by showing how the architecture can explain a wide range of phenomena arising out of mechanisms and processes therein, including social phenomena.

There is a huge space of potentially relevant architectures, although researchers have so far studied only a tiny subset, so any theories produced in the near future must be provisional, and subject to revision as we learn more.

3 Constraining the architecture

There are indefinitely many virtual machine architectures that could in principle produce any observed behaviours of an organism, even if the organism is observed over its lifetime. How can the search for explanatory architectures be controlled? There is no guaranteed method of success, here or in any other area of science. At best we can use general guidelines. For instance, the following kinds of information can help to constrain or guide the search for an explanatory theory:

(1) Information about the physical design of the system. Studying brain mechanisms provides such information. Brain structures and mechanisms constrain the types of virtual machine (VM) they can support, though only in subtle and indirect ways. For instance, the number of possible states of the brain limits the number of distinct states in the VM, though the mapping between physical and VM levels is complicated by such things as use of sparse arrays, or information implicit in axioms or rules from which consequences can be derived as needed, so that information 'in' the system need not be explicitly mapped onto particular physical components.

(2) Information about the design history. If some sub-system within an organism evolved, then the mechanisms of biological evolution will constrain the design, e.g. because precursors of the organism must be complete viable organisms, unlike partially built artificial systems. Thinking about trajectories in evolutionary design space may help us find explanatory mechanisms, for instance based on the notion that evolution frequently modifies a design by producing an extra copy of some component then modifying that component to perform new functions. By contrast, for certain artificial systems, designs are possible that could not have evolved in nature because the process of creation requires production of complex sub-mechanisms which would not be viable as independent agents. Similarly, if the VM is partly bootstrapped via a learning or development process, then things we know about capabilities of various forms of learning or development can constrain our theories.

(3) Observed behaviour. Information about what the system does, whether in a 'natural' environment or in various laboratory or field-test contexts, helps to provide information about the machine's capabilities (subject to many qualifications about how misleading such information can be). For example, humans can deceive us about their abilities, or they can be tired, temporarily forgetful, distracted or misled by ambiguous instructions or questions, and so on. Moreover, some behaviours may be constrained by the current culture, not by the intrinsic capabilities of the VM.

(4) Introspection. When trying to build theories about your own virtual machine, you also have access to introspective information other people do not have: e.g. I know that a few minutes ago I was wondering whether I should reply to a message or mark a student's essay, and nobody else could have told by observing me that that was going on inside me. So I know things about my VM which do not come from observation of behaviour. (Whether measurements of brain activity could ever provide such information is an open question: it will be even harder than 'decompiling' machine code on computers.)

(5) Broken parts. A powerful source of information is to tamper with bits of the physical machinery and see how that affects the capabilities of the system (which is often not at all obvious, and poses its own problems). This can provide evidence that the virtual machine has an unexpected modular structure, since different kinds of damage leave different previously unnoticed capabilities intact. Damage caused by strokes or injuries, and genetic brain defects, can all contribute such information.

(6) Common knowledge. Explicit or implicit commonsense knowledge includes such things as knowledge that people can sometimes be jealous or angry without being aware that they are, that certain emotional states disrupt thinking and attention, that deep emotions are sometimes externally visible and sometimes hidden from others, and that many emotions and motives have rich semantic content (e.g. being angry that someone has betrayed your friendship). Even if many widely held beliefs about minds are erroneous, it does not follow that all are false.

(7) Information from other disciplines. Combining knowledge from many disciplines also helps. For instance: philosophy provides techniques for revealing surprising aspects of familiar concepts; psychology and brain science provide many facts that need to be explained; ethology reveals the diversity of animal minds; evolutionary biology helps us understand possible routes from simple to complex brains and the functions of various aspects of mind; computer science and software engineering teach us about important general characteristics of information processing mechanisms and architectures; mathematics provides precise analysis of some of the properties of those mechanisms; and AI has provided us with much experience about specific varieties of virtual machines, what they are and are not good for, and the tradeoffs between design options. Computer engineering and AI also provide tools for testing ideas in working models. We learn both from the process of implementation and from the limitations of the systems we build. (Like the disappointing performance of most current robots!)

(8) Unifying explanations. Aiming for a unified explanation of many phenomena helps to constrain theories. Too often theorists study only normal humans. Besides normal adult human minds, we should consider infants, people with brain damage or disease, insects, chimpanzees and other animals.

The following additional criteria are useful in selecting between rival theories.

(9) Analogies. When trying to understand a particularly complex system, observable analogies with systems about which we already have more information, including those we have designed ourselves, may give clues, though analogies should always be used with great caution.

(10) What we have learnt in AI and software engineering. We have learnt a lot about the sorts of designs which are and are not capable of producing various kinds of functionality, and about the trade-offs involved in choosing between options. This can help us rule out unworkable alternatives and avoid premature commitment to particular designs. However, we still have much to learn under this heading.

(11) Select among competing theories. A standard method in science is to compare all the available explanatory theories and try to decide which one is best (in terms of explanatory depth, predictive precision, predictive coverage, consistency with known evidence, coherence with other good theories, etc.). As Popper always stressed (e.g. in his 1976 book), such selections are always provisional conjectures, and exploration of alternatives to 'accepted' theories should always be allowed. Nothing is ever final in science.

4 Architectural decomposition

Using these constraints and sources of information as inspiration, we have been exploring virtual machine information processing architectures which might explain human mental phenomena.

One approach which we have found fruitful can be explained (approximately) by superimposing two commonly used architectural decompositions, a 'vertical' and a 'horizontal' decomposition, illustrated in Figure 1 (a) and (b) respectively. The first corresponds to a view of the flow of information through a system and the second corresponds to a view of an organism as having levels of control, or, alternatively, as having layers that evolved at different stages.

4.1 Three towers

The 'three tower' model (Nilsson, 1998), shown with vertical divisions in Figure 1(a), is often implicit. There are different versions of this model, depending on the sophistication of the perceptual, central and motor subsystems. For example, simple versions treat the perceptual mechanisms simply as physical transducers. More sophisticated models include complex perceptual processes such as segmentation, recognition, interpretation, and direction of attention, with various intermediate information stores used to record partial results. Another kind of variation concerns the kinds and degrees of control of perception by the central tower. Some architectures support explanatory concepts referring to these intermediate information stores or to central control of perception, whereas others do not (Sloman, 1989).

Likewise, the action component might, at one extreme, be regarded as a collection of transducers sending signals to motors, or in more sophisticated cases as a complex hierarchical control mechanism which translates high level instructions into detailed patterns of action by motors or muscles. Examples can be found in (Johnson-Laird 1993) and (Albus 1981, Ch 7), and many studies of action. Action systems may also vary in the amount of feedback they include within themselves (e.g. proprioceptive feedback), or the extent to which they work in close coordination with perceptual mechanisms, for instance in hand-eye coordination. Again, the more complex and sophisticated action mechanisms will support a wider range of descriptive concepts referring to different types of control of actions.

[Figure 1: (a) three 'towers': Perception, Central Processing, Action. (b) three layers: Meta-management (reflective processes) (newest); Deliberative reasoning ('what if' mechanisms) (older); Reactive mechanisms (oldest).]

Figure 1: (a) (b) The diagrams show two ways of partitioning the architecture. In (a) information flow through the system via sensory, central and motor mechanisms is emphasised, where each 'tower' may comprise simple transducers or may have extremely sophisticated multi-functional layered mechanisms. Diagram (b) emphasises differences in degree and type of sophistication of processing layers in the architecture which evolved at different times. Processing in different sub-systems may be concurrent and asynchronous.

Thus we can have 'thin' or 'fat' perception or action towers, with various kinds of internal layering and various kinds of information flow in different directions.

Additional types of variation depend on whether the system is physically embodied or consists entirely of software, where sensors and motors are virtual machines observing or acting on a software environment.

Yet another complication concerns the ontological level of the model. A three tower model could refer to just the physical architecture. Alternatively, it could refer to the more abstract virtual machine functionality involved in processing information of various kinds in various ways, even if the underlying physical (or physiological) implementation does not have clear boundaries. It seems to be the case that in humans much thinking (e.g. some mathematical reasoning, and visualising a rearrangement of furniture) makes use of parts of the brain related to vision (representing and processing spatial structure). It may be that the same physical component implements both part of the perceptual tower and part of the central tower.


4.2 Three layers

The 'three layer' model, depicted crudely in Figure 1(b), attempts to account for the existence of a variety of more or less sophisticated forms of information processing and control which can operate concurrently. The version discussed here¹ postulates three concurrently active layers which evolved at different times and are found in different biological species. The three layers account for different sorts of processes, found in different kinds of animals, and will be shown below to provide a framework for distinguishing three different concepts of 'emotion'.

The first layer contains reactive mechanisms which automatically take action as soon as appropriate conditions are satisfied. The second, deliberative, layer provides 'what if' reasoning capabilities, required for planning, predicting and explaining. Relatively few organisms have this, and again the forms can vary widely. The meta-management layer provides the ability to monitor, evaluate, and partly control, internal processes and strategies.

Roughly, within the reactive layer, when conditions are satisfied actions are performed immediately: they may be external or internal actions. A reactive system may include both analog components, in which states vary continuously, and digital components, e.g. implementing condition-action rules, or various kinds of neural nets, often with a high degree of parallelism.
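The condition-action idea on the digital side of a reactive layer can be sketched in a few lines of code. This is only a toy illustration of the mechanism described above, not the authors' implementation; all rule and state names are hypothetical.

```python
# A minimal sketch of a reactive layer as condition-action rules.
# Each rule pairs a condition (a predicate on the current state) with
# an action performed immediately when the condition is satisfied.

def reactive_step(state, rules):
    """Fire every rule whose condition holds; return the actions taken."""
    actions = []
    for condition, action in rules:
        if condition(state):
            actions.append(action(state))
    return actions

# Hypothetical rules for a simple creature:
rules = [
    (lambda s: s["temperature"] > 40, lambda s: "move-to-shade"),
    (lambda s: s["energy"] < 10 and s["food-visible"], lambda s: "approach-food"),
]

actions = reactive_step({"temperature": 45, "energy": 5, "food-visible": True}, rules)
```

Note that all matching rules fire in one step, reflecting the high degree of parallelism mentioned above; there is no deliberation between alternatives.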

By contrast, the deliberative layer, instead of always acting immediately in response to conditions, can contemplate possible actions, compare them, evaluate them and select among them. At least in humans, chains of possible actions can be considered in advance, though there are individual differences in such capabilities. The human deliberative system can also consider hypothetical past or future situations not reachable by chains of actions from the current situation, and can reason about their implications. As explained elsewhere (e.g. (Sloman 2000)), physically implementable mechanisms required for such sophistication, including a long term associative memory and especially a re-usable short term memory, will cause the deliberative system to be discrete and serial, and to proceed in much slower steps than a reactive system can.
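The contrast with the reactive layer can be made concrete: deliberation simulates chains of possible actions against an internal model before committing to any of them. The following is a toy sketch under assumed names (a one-dimensional world, a `simulate` model and an `evaluate` function are invented for illustration), not a description of any actual planner.

```python
# A minimal sketch of 'what if' deliberation: enumerate chains of
# possible actions, simulate their outcomes internally, and select
# the best chain -- discrete, serial, and far slower than a reactive step.
from itertools import product

def deliberate(state, actions, simulate, evaluate, depth=2):
    """Consider all action chains up to length `depth`; return the best."""
    best_chain, best_score = None, float("-inf")
    for n in range(1, depth + 1):
        for chain in product(actions, repeat=n):
            s = state
            for a in chain:
                s = simulate(s, a)      # hypothetical, not real, execution
            score = evaluate(s)
            if score > best_score:
                best_chain, best_score = chain, score
    return best_chain

# Hypothetical one-dimensional world: reach position 3 from position 0.
simulate = lambda pos, a: pos + (1 if a == "step-right" else -1)
evaluate = lambda pos: -abs(3 - pos)
plan = deliberate(0, ["step-right", "step-left"], simulate, evaluate, depth=3)
```

The exhaustive enumeration makes vivid why resource limits and memory mechanisms force such a system to be serial and slow compared with a reactive one.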

A meta-management system can act, in a reactive or deliberative fashion, on some of the internal processes involved in the reactive or deliberative (or meta-management) system. This includes monitoring, evaluating and redirecting such internal processes, and possibly reflecting on them after the event in order to analyse what went wrong or how success was achieved. Like the deliberative layer, it will be resource-limited.
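Monitoring and redirecting internal processes can likewise be sketched in code. This toy example (all names hypothetical) shows a monitor acting not on the world but on records of the system's own processing, e.g. abandoning a goal whose planning has consumed too much effort:

```python
# A minimal sketch of meta-management: a monitor inspects records of
# internal (e.g. deliberative) processing and issues control decisions,
# such as abandoning goals whose planning is proving too costly.

def meta_manage(process_log, effort_budget):
    """Evaluate internal processing records; return a decision per goal."""
    decisions = {}
    for goal, effort in process_log.items():
        decisions[goal] = "abandon" if effort > effort_budget else "continue"
    return decisions

# Hypothetical log: effort units spent planning for each current goal.
log = {"answer-email": 3, "prove-theorem": 42}
decisions = meta_manage(log, effort_budget=10)
```

The point of the sketch is that the inputs are internal process records, not sensory data: without such a layer, a deliberative system cannot notice that its own planning is wasteful.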

We suspect that researchers and therapists who refer to 'executive function' in humans are often unaware that they are discussing mechanisms which combine deliberative and meta-management capabilities. That such capabilities are functionally different is shown by the fact that there are many AI systems that have deliberative capabilities, insofar as they can make plans, execute them, revise them when execution goes wrong, etc., but lack meta-management capabilities. So they may not have the ability to notice that their planning processes are wasteful, or that it might be better to abandon the current goal in the light of some new information.

¹ Partly inspired by Simon (1967), and elaborated in our previous papers (Sloman & Croucher 1981, Beaudoin 1994, Sloman 1994, Sloman & Poli 1996, Sloman 1997, Sloman 1998, Sloman 2000, Sloman & Logan 1999).

Various forms of reactive mechanisms are found in all organisms and some of them must have developed very early in biological evolution. Deliberative mechanisms evolved later and are found in fewer organisms, though we do not know exactly which ones have them. Can a bumble bee or even a rat wonder what would have happened if...? It seems from Kohler's work that at least chimpanzees can think ahead. Meta-management mechanisms evolved last of all and it is not clear how many organisms have this, apart from humans (though even they may not have it at birth). Perhaps chimpanzees and other animals have less sophisticated versions of meta-management.

How the layers evolved must be largely a matter of speculation. We conjecture that one of the important features of biological evolution making this possible is the process of producing two copies of an old structure, after which one of them develops a new function. In (Maynard Smith & Szathmáry 1999) this is referred to as 'duplication and divergence'. E.g. mechanisms which at first stored useful reactive condition-action patterns might later be copied and modified to form a long term associative memory that can be used in 'what-if' reasoning.

Of course, all the different layers must ultimately be implemented in purely reactive mechanisms, otherwise nothing would ever happen. This common implementation feature is consistent with great functional diversity within the layers, just as a common computer architecture can support very different operating systems and software packages.

5 Related theories

The idea of a layered architecture is quite old in neuroscience, including versions similar to the architecture we propose. E.g. Albus (1981, page 184) presents the notion of a layered 'triune' brain with a reptilian lowest level and two more recently evolved (old and new mammalian) levels above that, including hierarchical perceptual and action systems (chapter 7). Freud's distinction between id, ego and super-ego seems to be a related idea. AI researchers have been exploring a number of variants, of varying sophistication and plausibility, and varying kinds of control relations between layers. The 'subsumption hierarchy' in Brooks (1991) is one of many examples. Compare Minsky (1987) and Nilsson (1998). Johnson-Laird's discussion (1993) of consciousness as depending on a high level 'operating system' is related to our third layer. A multi-level architecture is proposed for story understanding in (Okada & Endo 1992).

In some theories presenting layered architectures, it is assumed that as sensory information comes in, increasingly abstract interpretations or summaries are passed up through various layers, until at the highest level it may trigger processes which cause instructions to act to trickle down through the layers. By contrast, in our hypothesised architecture all the layers get information (of different degrees of abstraction) in parallel, process it in parallel, and may produce action signals (of different degrees of abstraction) in parallel. An example might be walking with a friend and simultaneously discussing philosophy, while digesting food, controlling posture, admiring the view, etc. Most of the processes are unconscious, of course.


6 Combining models, to form a grid

[Figure 2: (a) the triple-tower and triple-layer views superimposed: Perception, Central Processing and Action towers crossed with Meta-management (reflective processes) (newest), Deliberative reasoning ('what if' mechanisms) (older) and Reactive mechanisms (oldest). (b) the same grid with a global ALARMS component added.]

Figure 2: (a) (b) The first figure serves as a mnemonic indicating simultaneously the triple tower and triple layer views, where the various components in the boxes will have functions defined by their relationships with other parts of the system. In (b) a global alarm system is indicated, receiving inputs from all the main components of the system and capable of sending control signals to all the components. Since such alarm systems need to operate quickly when there are impending dangers or short-lived opportunities, they cannot make use of elaborate inferencing mechanisms, and must be pattern based. Global alarm mechanisms are likely therefore to make mistakes at times, though they may be trainable.

When the horizontal and vertical subdivisions are superimposed we obtain the schema outlined in Figure 2(a). In Figure 2(b) we make explicit the role of global alarm mechanisms, which receive information from all components of the system and are able to send interrupts and redirection signals to all parts of the system. The idea of this sort of global alarm mechanism was partly inspired by consideration of engineering requirements, partly by the discussion of interrupts in Simon (1967), and partly by studies of the brain, especially the role of the limbic system, e.g. see Albus (1981) and LeDoux (1996).

If processing of information in any of the layers is likely to take too long in relation to the urgency of some need provoked by the environment, for instance if there is a large object coming rapidly towards you, or a fast-moving edible object flying past you when you are very hungry, then it may be necessary for 'normal' processes to be interrupted and redirected. This could be achieved by the addition within the reactive mechanisms of one or more modules receiving information from sensors or other parts of the system and using fast and general pattern-recognition techniques to decide to interrupt everything and redirect the whole system towards an appropriate response, e.g. running away, freezing, pouncing, attending to a particular object in the environment, and so on.

Whether this requires a special mechanism or can simply be part of the normal functioning of a reactive system depends on the relative speeds of various kinds of processing. There is no need to interrupt and redirect a system towards a particular action if it was about to do that anyway, and just as quickly.
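The fast, pattern-based (rather than inferential) character of such an alarm module can be sketched as a simple lookup. This is a toy illustration only; the pattern and response names are hypothetical.

```python
# A minimal sketch of a global alarm mechanism: cheap pattern matching
# over incoming percepts, broadcasting an interrupt that overrides
# normal processing in all layers when a pattern matches.

ALARM_PATTERNS = {                       # hypothetical pattern -> response
    "large-object-looming": "freeze",
    "fast-moving-prey":     "pounce",
}

def alarm_check(percepts):
    """Return (response, interrupted?) using pattern matching only --
    no elaborate inference, so it is fast but can make mistakes."""
    for p in percepts:
        if p in ALARM_PATTERNS:
            return ALARM_PATTERNS[p], True   # interrupt and redirect everything
    return None, False                       # normal processing continues

response, interrupted = alarm_check(["grass", "large-object-looming"])
```

Because the mechanism is a shallow table lookup rather than reasoning, it is quick, trainable in principle (by editing the table), and prone to occasional false alarms, exactly as the caption to Figure 2 suggests.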


A number of additional mechanisms, listed briefly in Figure 3(a), enable the various layers to function, and their shortcomings (e.g. limited processing capacity of the deliberative layer) to be compensated for. The very cluttered Figure 3(b) impressionistically portrays the result of putting various pieces together, including the alarm mechanism.

7 Motive generators

[Figure 3: (a) extra mechanisms needed: long term associative memories; personae (variable personalities); moods (global processing states); motive generators (Frijda's 'concerns'); skill-compiler; attention filter; motives; motive comparators; categories; formalisms; descriptions; standards & values; attitudes. (b) the environment; reactive processes; deliberative processes (planning, deciding, scheduling, etc.); meta-management (reflective) processes; motive activation; long term memory; ALARMS; a variable threshold attention filter; perception and action.]

Figure 3: (a) (b) In (a) we list some additional components required to support processing of motives, 'what if' reasoning capabilities in the deliberative layer, and aspects of self-control. It is conjectured that there is a store of different, culturally influenced, 'personae' which take control of the top layer at different times, e.g. when a person is at home with family, when driving a car, when interacting with subordinates in the office, in the pub with friends, etc. In (b) the relations between some of the components are shown along with a global alarm system, receiving inputs from everywhere and sending interrupt and redirection signals everywhere. It also shows a variable-threshold interrupt filter, which partly protects resource-limited deliberative and reflective processes from excessive diversion and redirection.

Both in a sophisticated reactive system and in a deliberative system with planning capabilities there is often a need for motives which represent a state or goal to be achieved or avoided. In simple organisms there may be a fixed set of drives which merely change their level of activation depending on the current state of the system. In more sophisticated systems not all motives are permanently present, so there is a need for motive generators to create new goals, possibly by instantiating some general goal category (eat something) with a particular case (eat that foal). These generators are similar to the dispositional 'concerns' in Frijda's theory. Beaudoin's and Wright's PhD theses discuss various types of generators or 'generactivators' and related implementation issues.
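The instantiation step described above (a general category combined with a particular case to yield a new motive) can be sketched as follows. This is a toy illustration with hypothetical state and goal names, not a reconstruction of the implementations in the theses cited.

```python
# A minimal sketch of a motive generator: a dispositional 'concern'
# that, when its triggering condition holds, instantiates a general
# goal category with a particular case, creating a new motive.

def make_generator(condition, category):
    def generate(state):
        if condition(state):
            # instantiate the general category ("eat something")
            # with a particular case ("eat that foal")
            return {"goal": (category, state["target"]), "active": True}
        return None                      # concern stays dispositional
    return generate

hunger_concern = make_generator(lambda s: s["energy"] < 10, "eat")
motive = hunger_concern({"energy": 4, "target": "that-foal"})
```

The generator itself is cheap and always present; the motives it produces are not, which is the distinction drawn above between fixed drives and generated goals.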

The ability to manipulate goals, and to plan actions, or tailor actions to current contexts, requires various kinds of representational mechanisms. In particular, reasoning about which actions are possible now or in hypothetical future situations, or about the consequences of those actions, requires one or more powerful long term associative memories, whose form will be related to the form of representation used for goals, situations and actions. These will not necessarily map directly onto sensory input (e.g. the perceivable affordances discussed by Gibson (1979), such as 'something edible' or 'a support', may have enormously varied sensory manifestations) and that may induce a need both for more abstract central representational capabilities and also for more sophisticated processing of perceptual input, in order to provide information at the appropriate level of abstraction for the deliberative mechanism. While this happens the visual system might simultaneously be sending more 'primitive' sensory information to other parts of the system, e.g. for posture control.

It is important that none of the three layers has total control: they are all concurrently active and can influence one another. The degree and type of influence will vary from time to time. In particular, all three layers can be disrupted by the global alarm mechanisms.

8 Dynamic filters and moods

Since the different layers, the sensory systems and the alarms operate concurrently, it is possible for new information that requires attention to reach a deliberative or meta-management sub-system while it is busy on some task. Because of the resource limits mentioned previously, it may be unable to evaluate the new information while continuing with the current task. But it would be unsafe to ignore all new information until the current task is complete. So new information needs to be able to interrupt deliberative processing.

Under stressful conditions, deliberative mechanisms with limited processing or temporary storage capacity can become overloaded by frequent interrupts. We have argued elsewhere (e.g. (Beaudoin & Sloman 1993)) that variable-threshold attention filters can reduce this problem. Setting the threshold at a high level when the current task is urgent, important and intricate can produce a global state of 'concentration' on that task. (Malfunctioning of this mechanism may produce a type of attention disorder (Beaudoin 1994).)
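The filtering idea can be made concrete with a toy sketch, assuming (hypothetically) that each candidate interrupt carries a scalar 'insistence' value, following the cited work's general approach; the numbers are illustrative only.

```python
# A minimal sketch of a variable-threshold attention filter: only
# motives whose insistence exceeds the current threshold may interrupt
# deliberation. Raising the threshold models 'concentration'.

class AttentionFilter:
    def __init__(self, threshold=0.3):
        self.threshold = threshold

    def admits(self, insistence):
        """True if this interrupt may reach the deliberative layer."""
        return insistence > self.threshold

f = AttentionFilter()
relaxed = f.admits(0.4)        # passes at the default threshold
f.threshold = 0.8              # urgent, important, intricate current task
concentrating = f.admits(0.4)  # the same motive is now filtered out
```

A threshold stuck too low would let everything through (constant distraction), and one stuck too high would block urgent information, which is one way a malfunction of this mechanism could resemble an attention disorder.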

Variations in the external context and the individual's needs and resources will require more coarse-grained global control mechanisms. This may account for some moods. For example, when the environment can be classified as 'friendly' because most goals are relatively easily achieved, a confident, optimistic mode of behaviour may be fruitful. When things often go wrong, and predators abound, a more cautious, even pessimistic demeanour may be much safer. More subtle and complex changes of mood may be triggered by recognition of socially significant contexts. There are some global state changes produced by pathologies, e.g. depression, manic states. Much more research is needed to help us understand the architectural basis of a wide variety of types of global state changes, including, for instance, the nature of sleep.


9 Architecture-based concepts

The model outlined above (like many similar models) allows us to generate systems of cognitive and affective concepts which are grounded in the virtual machine information processing architecture of agents. Such architecture-based concepts may prove superior to those we currently use to formulate research questions and our theories.

We often think we know exactly what consciousness, experience, emotions, etc. are, because we experience them directly. This is mistaken. We may experience simultaneity 'directly' sometimes, but that does not guarantee a clear grasp of the concept, as Einstein showed. One way to deepen our understanding of these concepts, and, where necessary, repair their deficiencies, is to seek an explanatory architecture and then use it as a framework for systematically generating concepts, just as the theory of the sub-atomic architecture of matter generated concepts of kinds of elements, kinds of chemical compounds and processes, etc. The relation between the periodic table of elements and modern ideas about the architecture of matter illustrates how an underlying architecture can give new clarity and coherence to a family of concepts. The new concepts do not replace our old ones, but extend and refine them, for instance adding concepts of isotopes to old ideas of chemical elements, and adding new ideas about valency to old ideas about chemical processes.

10 Architecture-based emotion concepts

In our previous work we have attempted to show how the three layers might support states and processes which correspond to some of our pre-scientific concepts of 'emotion'. Such processes also explain a common distinction between 'primary' and 'secondary' emotions (e.g. found in Damasio, 1994; Picard, 1997) and suggest a need for an additional category of 'tertiary' emotions.

The reactive layer accounts for primary emotions (e.g. being startled). The deliberative layer explains secondary emotions, which have greater semantic content and less dependence on events in the perceived environment (e.g. apprehension concerning what might happen when a risky plan is executed, and relief concerning what did not happen). Various processes impinging on the meta-management layer, producing partial loss of control of attention and thought processes, account for tertiary emotions: states in which people may find it hard to redirect their attention or hard to maintain a focus of attention, or where various kinds of thoughts ('Did she really like me?' 'How can I have my revenge?' 'Why won't he change his mind?', etc.) constantly intrude despite decisions to ignore them and concentrate on important and urgent tasks. These tertiary emotions, such as humiliation, jealousy, and thrilled anticipation, are probably unique to humans, though perhaps simplified versions can be found in some other animals.
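
The three-way classification can be made concrete with a minimal sketch. Everything below is a hypothetical illustration, not an implementation of the authors' architecture: the class, its method signatures, and the trigger conditions are our assumptions. The point it captures is the difference in what each layer responds to: percepts (reactive), plan evaluations (deliberative), and loss of control of attention (meta-management).

```python
# Illustrative toy only: all names and conditions are assumptions chosen
# to mirror the three-layer account of primary/secondary/tertiary emotions.

class Agent:
    def __init__(self):
        self.attention_interrupts = 0  # intrusive thoughts this cycle

    def reactive(self, percept):
        """Primary emotions: fast, pattern-driven responses to percepts."""
        if percept == "sudden loud noise":
            return "startled"
        return None

    def deliberative(self, plan_risky, feared_outcome_occurred):
        """Secondary emotions: depend on evaluating plans and outcomes,
        not just on current percepts."""
        if plan_risky and feared_outcome_occurred is None:
            return "apprehension"   # risky plan still executing
        if plan_risky and feared_outcome_occurred is False:
            return "relief"         # the feared outcome did not happen
        return None

    def meta_management(self):
        """Tertiary emotions: partial loss of control of attention, despite
        a decision to concentrate on other tasks."""
        if self.attention_interrupts > 3:
            return "perturbance"
        return None
```

Note that the tertiary case is defined not by what is felt but by a relationship between layers: thoughts generated elsewhere repeatedly seize control of attention that meta-management is trying to direct.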

It is well known that definitions of 'emotion' vary widely (Oatley & Jenkins, 1996). We expect that further work on varieties of architecture-based concepts will reveal a still wider range of architecture-based concepts of emotion, along with new, more precise, architecture-based concepts corresponding to old ideas about mood, motivation, attitude, personality, perception, learning, and so on. This sort of model provides a unifying framework which helps us explain the diversity of definitions, caused by different researchers (unwittingly) focusing on different parts of the same architecture.

11 Multiple personalities

In humans it seems that the meta-management layer does not have a rigidly fixed mode of operation. Rather it is as if different personalities, using different evaluations, preferences and control strategies, can inhabit/control the meta-management system at different times. E.g. the same person may have different personalities when at home, when driving on a motorway and when dealing with subordinates at the office. Switching control to a different personality involves turning on a large collection of skills, styles of thought and action, types of evaluations, decision-making strategies, reactive dispositions, associations, and possibly many other things.

For such a thing to be possible, it seems that the architecture will require something like a store of 'personalities', mechanisms for acquiring new ones (e.g. via various social processes), mechanisms for storing new personalities and modifying or extending old ones, and mechanisms which can be triggered by external context to 'switch control' between personalities.
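
The required components, a store, an acquisition mechanism, and context-triggered switching, can be sketched as follows. All the names here are hypothetical illustrations of the requirement, not a proposed implementation. The essential feature is that a single context change switches in a whole bundle of evaluations, styles and dispositions at once.

```python
# Illustrative sketch of a personality store with context-triggered
# switching; class and field names are assumptions for the example.

class MetaManagement:
    def __init__(self):
        self.store = {}     # the store of acquired 'personalities'
        self.active = None  # the personality currently in control

    def acquire(self, context, personality):
        """Acquire a new personality for a context, or modify/extend an
        existing one, e.g. through social processes."""
        self.store.setdefault(context, {}).update(personality)

    def switch(self, context):
        """External context triggers a switch of control: one event turns
        on a whole collection of styles, evaluations and dispositions.
        Unrecognised contexts leave control unchanged."""
        if context in self.store:
            self.active = self.store[context]
        return self.active

mm = MetaManagement()
mm.acquire("office", {"style": "formal", "risk": "averse"})
mm.acquire("home", {"style": "relaxed", "risk": "tolerant"})
mm.switch("office")
```

On this sketch, a malfunction in the switching mechanism (e.g. switching without the usual contextual trigger, or failing to switch back) would correspond to the pathologies discussed below.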

If such a system can go wrong, that could be part of the explanation of some kinds of multiple personality disorders.

It is probably also related to mechanisms of social control. E.g. if a social system or culture can influence the meta-management processes that determine how an individual represents, categorises, evaluates and controls his own deliberative processes, this might provide a mechanism whereby the individual learns things as a result of the experience of others, or mechanisms whereby individuals are controlled and made to conform to socially approved patterns of thought and behaviour. An example would be a form of religious indoctrination which makes people disapprove of certain motives, thoughts or attitudes, leading to redirection of deliberation in more 'socially acceptable' directions.

12 An ecology of mind

We have indicated how, during evolution, the changing needs of the central processing mechanisms might lead to developments of higher level layers in the perceptual and motor mechanisms. For instance, development of a deliberative layer leads to a requirement for more sophisticated and abstract input from the sensory systems ('chunking' at higher levels of abstraction, to provide knowledge relevant to general planning capabilities). It can also produce pressure for evolution of higher level control mechanisms within the action subsystems, including the ability to perform social actions, such as greeting, performing rituals, or cooperating on complex skilled tasks requiring good coordination. If hierarchical control of action can be devolved to a sophisticated action system, this releases central resources for other tasks.

All this suggests that we can think about the boxes on the grid, and the additional components, as forming an ecology in which sub-organisms co-evolve, so that developments in some of them generate needs and opportunities for the others. Such co-evolution involves a family of parallel trajectories through both 'design space' and 'niche space' (Sloman, 1998). Although the components clearly are not separate organisms, they do co-exist, performing different tasks, making use of different information, sometimes co-operating and sometimes competing with one another. Just as different organisms in the same part of the forest may analyse their sensory inputs in different ways and encode information about the environment using different ontologies and possibly different forms of representation, so also may different sub-components of a single complex organism. For instance, incoming visual information, as mentioned previously, may be processed to produce a variety of different descriptions about affordances, used in parallel by reactive mechanisms, deliberative mechanisms and meta-management levels, almost as if they had their own eyes. Similarly, different internal state monitoring processes may use different ontologies in recording events, generating goals, etc.

In some ways all this is reminiscent of Minsky's ideas on a 'society of mind' (Minsky, 1987), though perhaps the phrase 'ecology of mind' is more apt if we think of the various components as having co-evolved to meet different pressures and opportunities provided by the other components.

This is of course only a metaphor, and some of the differences from more common forms of co-evolution may help us to understand the strengths and weaknesses of the metaphor. Very often co-evolution of whole organisms involves competition. But often it involves cooperation, such as evolution of the shape of a flower and evolution of the shape and behaviour of insects or birds that obtain nectar from the flower. Co-evolution within an organism is more likely to be of the cooperative form, though there could also be competition, e.g. competition for resources, such as information, blood supplies, etc. (See also Ch. 8 of Maynard Smith and Szathmáry (1999).)

The main difference is that normal biological co-evolution involves organisms that can replicate independently, whereas parts of a single organism cannot. Nevertheless, just as a mutation that changes a type of flower may produce opportunities for change in a bee, so a mutation that alters the capabilities of a perceptual sub-mechanism might produce new opportunities for useful changes in a more central component.

It is not possible for reproductive fitness of one component to increase or decrease independently of fitness of another, since they reproduce together when the organism reproduces. However, different parts or features of an individual may be thought of as having different degrees of 'fitness' according to how fast they spread through a population. This is clearly related to the notion that different genes within a genome may have different reproductive fitness.

Although the notion of an 'ecology' must not be taken too literally, nevertheless trying to understand processes of incremental change in different parts of the architecture may help us understand how the whole system evolved, and how that system works. Our discussion is closely related to Popper's proposal (1976, p. 173) to distinguish external and internal selection pressures. For instance, he suggests that sometimes in biological evolution new preferences evolve, e.g. if having those preferences aids survival and reproduction. This in turn can produce a new 'niche' in which there is pressure for certain skills to evolve. Thus organisms will be favoured by natural selection if they develop skills which serve those preferences. This in turn can produce pressures which favour certain anatomical changes if they support those skills. Those changes may then support the evolution of new preferences, e.g. if they serve the needs of the new anatomical mechanisms.

13 Some conjectures

It is conjectured that the three layers can be used to explain different sorts of consciousness, ranging from simple sentience to full reflective self-awareness and possession of 'qualia' (Sloman, 2000), though there is no space to elaborate on this here.

Likewise, it is conjectured that this sort of architecture could give a robot many human-like mental processes, including falling into philosophical confusions about consciousness.

Of course, some robots, like many animals, young children, or even brain-damaged adult humans, will have only parts of the system present, and their cognitive and other states will be correspondingly limited.

Similar comments can be made about software agents.

14 Conclusion

As science, much of this is conjectural: many details still have to be filled in and consequences developed (both of which can come partly from building working models, partly from multi-disciplinary empirical investigations).

An architecture-based ontology can bring some order into the morass of studies of affect. We have begun to illustrate this by showing how different concepts of emotion relate to processes arising in different parts of a complex architecture, though there is still much work to be done. This partly helps to explain why there are diverse definitions of emotion in the literature: different researchers unwittingly focus on different subsets of the phenomena we have referred to as primary, secondary and tertiary emotions. It should already be clear from our discussion of the proposed architecture that this is a crude and inadequate classification: additional important subdivisions between types of emotions and other affective states can be based on the differences in mechanisms involved in generating them, and the different ways they develop, subside, are suppressed, trigger further emotions, etc. (Wright, Sloman & Beaudoin, 1996).

Another feature of the architecture, pointed out in Sloman (1989), is that it predicts that perceptual information follows many different routes through the brain, supporting and triggering diverse processes within the mental ecology. This may account for phenomena found by Goodale & Milner (1992) and others involving different visual pathways. However, our architectural proposals suggest that far more functionally distinct sensory pathways exist than have been discovered so far. We should also not be surprised to find that sometimes connections go wrong, producing phenomena such as synaesthesia, in which different sensory modalities become entangled.

It is very unlikely that newborn humans have such an architecture fully formed, though simpler organisms may have their architectures largely determined innately. We need more research on how architectures are bootstrapped, both in altricial species (where individuals are born or hatched in a relatively helpless and undeveloped state, like humans, hunting mammals, and birds of prey) and in precocial species (where individuals are born or hatched far more able to look after themselves, e.g. sheep, deer, grazing mammals, chickens, and many aquatic animals). Such research might lead to deep insights in comparative psychology and developmental psychology (e.g. if much of the architecture develops after birth in humans). This should also provide an improved conceptual framework for studies of the effects of brain damage and disease, by enabling us to classify far more precisely than before the many ways in which things can go wrong. It will also point to a much richer classification of types of development and learning within individuals: the more complex the architecture, the more varieties of possible change and development there are, at least in principle.

By comparing and contrasting architectures required for embodied animals and those that suffice for software agents, we can produce an improved conceptual framework for classifying types of emotions that can arise in software agents, for instance those that lack the reactive mechanisms required for controlling a physical body.

There are implications for engineering as well as science. Designers of complex systems need to understand the issues discussed here: (a) if they want to model human affective processes; (b) if they wish to design systems which engage fruitfully with human affective processes, e.g. really convincing synthetic characters in computer entertainments; (c) if they wish to produce teaching/training packages for would-be counsellors, psychotherapists, and psychologists.

There is already recognition of the importance of modelling affective processes in synthetic agents among organisations and researchers involved in the entertainment and games industry. Our introduction pointed out that some of this work can use very shallow models. However, as the requirements for realism become more demanding, it could turn out that simulations in computer games and entertainments will have the side effect of leading to very deep advances in psychology and philosophy.

15 Acknowledgements & Notes

Some of this work was done as part of a project funded by the Leverhulme Trust. The ideas presented here were developed in collaboration with colleagues and students in the Cognition and Affect Project at the University of Birmingham. Papers and theses by students and colleagues can be found at:

http://www.cs.bham.ac.uk/research/cogaff/

Our tools (including the SIM_AGENT toolkit) can be found here:

http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
http://www.cs.bham.ac.uk/research/poplog/newkit/


References

Albus, J. S. (1981), Brains, Behaviour and Robotics, Byte Books, McGraw Hill, Peterborough, N.H.

Beaudoin, L. (1994), Goal processing in autonomous agents, PhD thesis, School of Computer Science, The University of Birmingham. (Available at http://www.cs.bham.ac.uk/research/cogaff/).

Beaudoin, L. & Sloman, A. (1993), A study of motive processing and attention, in A. Sloman, D. Hogg, G. Humphreys, D. Partridge & A. Ramsay, eds, 'Prospects for Artificial Intelligence', IOS Press, Amsterdam, pp. 229–238.

Brooks, R. A. (1991), 'Intelligence without representation', Artificial Intelligence 47, 139–159.

Damasio, A. R. (1994), Descartes' Error: Emotion, Reason and the Human Brain, Grosset/Putnam Books, New York.

Frijda, N. H. (1986), The Emotions, Cambridge University Press, Cambridge.

Gibson, J. (1986), The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates. (Originally published in 1979).

Goodale, M. & Milner, A. (1992), 'Separate visual pathways for perception and action', Trends in Neurosciences 15(1), 20–25.

Johnson-Laird, P. (1993), The Computer and the Mind: An Introduction to Cognitive Science, Fontana Press, London. (Second edn.).

Köhler, W. (1927), The Mentality of Apes, Routledge & Kegan Paul, London. (2nd edition).

LeDoux, J. E. (1996), The Emotional Brain, Simon & Schuster, New York.

Maynard Smith, J. & Szathmáry, E. (1999), The Origins of Life: From the Birth of Life to the Origin of Language, Oxford University Press, Oxford.

Minsky, M. L. (1987), The Society of Mind, William Heinemann Ltd., London.

Nilsson, N. J. (1998), Artificial Intelligence: A New Synthesis, Morgan Kaufmann, San Francisco.

Oatley, K. & Jenkins, J. (1996), Understanding Emotions, Blackwell, Oxford.

Okada, N. & Endo, T. (1992), 'Story generation based on dynamics of the mind', Computational Intelligence 8, 123–160.

Picard, R. (1997), Affective Computing, MIT Press, Cambridge, Mass. and London, England.

Popper, K. (1976), Unended Quest, Fontana/Collins, Glasgow.


Simon, H. A. (1967), 'Motivational and emotional controls of cognition'. Reprinted in Models of Thought, Yale University Press, pp. 29–38, 1979.

Sloman, A. (1989), 'On designing a visual system (Towards a Gibsonian computational model of vision)', Journal of Experimental and Theoretical AI 1(4), 289–337.

Sloman, A. (1994), Explorations in design space, in A. Cohn, ed., 'Proceedings 11th European Conference on AI, Amsterdam, August 1994', John Wiley, Chichester, pp. 578–582.

Sloman, A. (1997), What sort of control system is able to have a personality, in R. Trappl & P. Petta, eds, 'Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents', Springer (Lecture Notes in AI), Berlin, pp. 166–208.

Sloman, A. (1998), Damasio, Descartes, alarms and meta-management, in 'Proceedings International Conference on Systems, Man, and Cybernetics (SMC98), San Diego', IEEE, pp. 2652–7.

Sloman, A. (2000), Architectural requirements for human-like agents both natural and artificial (What sorts of machines can love?), in K. Dautenhahn, ed., 'Human Cognition and Social Agent Technology', Advances in Consciousness Research, John Benjamins, Amsterdam, pp. 163–195.

Sloman, A. & Croucher, M. (1981), Why robots will have emotions, in 'Proc 7th Int. Joint Conference on AI', Vancouver, pp. 197–202.

Sloman, A. & Logan, B. (1999), 'Building cognitively rich agents using the SIM_AGENT toolkit', Communications of the Association of Computing Machinery 42(3), 71–77.

Sloman, A. & Poli, R. (1996), SIM_AGENT: A toolkit for exploring agent designs, in M. Wooldridge, J. Mueller & M. Tambe, eds, 'Intelligent Agents Vol II (ATAL-95)', Springer-Verlag, pp. 392–407.

Wright, I. P. (1997), Emotional agents, PhD thesis, School of Computer Science, The University of Birmingham. (Available online at http://www.cs.bham.ac.uk/research/cogaff/).

Wright, I., Sloman, A. & Beaudoin, L. (1996), 'Towards a design-based analysis of emotional episodes', Philosophy, Psychiatry and Psychology 3(2), 101–126.