
Context-Aware Virtual Agents in Open Environments

T. Steel, D. Kuiper, R. Z. Wenkstern
Department of Computer Science

University of Texas at Dallas
Richardson, TX, USA

{steel, kuiper, rmili}@utdallas.edu

Abstract—This paper presents a model for the interaction between context-aware virtual agents and the environment in which they are situated. This model applies to multi-agent based simulation systems dealing with human-like virtual agents in decentralized, continuous, and dynamic environments. The model supports an extensible agent perception module, allowing agents to perceive their environment through multiple senses (sight, hearing, smell, etc.). The environment reacts to agent influences as well as user-invoked stimuli by combining these influences to determine the next state of the environment. This paper introduces a formalization and an implementation of the model and discusses multiple scenarios involving context-aware virtual agents situated in dynamic environments.

Keywords-simulation; multi-agent systems; virtual agents

I. INTRODUCTION

In this paper we discuss the interaction between context-aware virtual agents and the simulated, open environments in which they are situated. Virtual agents are autonomous software entities which possess resources and use skills to achieve individual and societal objectives. Virtual agents can interact with other agents to accomplish objectives. This paper deals with human-like agents that are situated in open (i.e., inaccessible, non-deterministic, decentralized, dynamic, and continuous [1]) environments. These context-aware virtual agents perceive their environment using multiple sensory mechanisms (sight, hearing, and smell).

Multi-agent based simulation (MABS) systems attempt to model the behavior of virtual agents under various situations and conditions. MABS systems can be used to simulate real-world phenomena such as traffic flow [2], [3] and evacuation scenarios [4], [5], [6], [7], [8], [9]. In this paper, we describe the design of agent-environment interactions in DIVAs (Dynamic Information Visualization of Agent systems), a large scale distributed multi-agent system framework¹. This design is based on the influence-reaction model proposed by Ferber and Müller in [10], but has been extended to deal with open environments and dynamic user interaction at run-time. Existing approaches do not formally incorporate agent perception or influence combination. We believe that the DIVAs influence-reaction model improves the fidelity of simulations by addressing these issues. The DIVAs influence-reaction model allows virtual agents to accurately perceive the environment using a multi-sense perception system that is extensible and modifiable. It also allows the environment to realistically combine internal (e.g., agent-induced) and external (e.g., user-induced) influences. Providing users with a means to dynamically interact with the simulation at run-time allows for greater understanding of agent behavior and more thorough situational analysis than traditional simulation techniques.

¹In the remainder of this paper the term framework is used to refer to a system of systems.

The rest of this paper is organized as follows: in Section II we provide background information regarding the DIVAs framework. In Section III we explain the DIVAs influence-reaction model for open environments. In Section IV we discuss virtual agent perception. In Section V we give examples demonstrating agent perception and environmental influence combination. In Section VI we discuss related work.

II. THE DIVAS FRAMEWORK

DIVAs is a simulation framework under development at the University of Texas at Dallas which allows for specification and execution of large scale distributed simulations where agents are situated in an open environment [11], [12], [13]. The DIVAs framework includes three main components (see Fig. 1). The Agent Environment System (AES) creates large scale simulation instances. The Data Management System (DMS) stores and processes information collected from the AES. The interactive Visualization Framework receives information from the DMS to create 2D or 3D images of the simulation and allows users to invoke events and modify simulation parameters at run-time. The DIVAs project is on-going, and the components are at various stages of development.

DIVAs' main constituent, i.e., the Agent-Environment System, consists of four components: the virtual agent platform creates and manages agents; the environment platform creates and manages a distinct, distributed, dynamic environment in which the virtual agents are situated; the agent-environment message transport service offers a mechanism for agent-environment interactions; and the AES message transport service allows the AES to communicate with the DMS and Visualization Framework.


Figure 1. DIVAs Architecture

Figure 2. Agent architecture

A. DIVAs Virtual Agents

DIVAs' agent architecture consists of an interaction module, knowledge module, task module, and planning and control module (see Fig. 2).

Interaction Module. The interaction module handles an agent's interaction with external entities, separating environment interaction from agent interaction. The Environment Perception Module contains various perception modules emulating human-like senses and is responsible for perceiving information about an agent's environment. The Agent Communication Module provides an interface for agent-to-agent communication.

Knowledge Module. This is partitioned into an External Knowledge Module (EKM) and an Internal Knowledge Module (IKM). The EKM serves as the portion of the agent's memory that is dedicated to maintaining knowledge about entities external to the agent, such as acquaintances and objects situated in the environment. The IKM serves as the portion of the agent's memory that is dedicated to keeping information that the agent knows about itself, including its current state, physical constraints, and social limitations.

Figure 3. Cell controller architecture

Task Module. This module manages the specification of the atomic tasks that the agent can perform in the domain in which it is deployed (e.g., walking, carrying, etc.).

Planning and Control Module. This serves as the brain of the agent. It uses information provided by the other modules to plan, initiate tasks, make decisions, and achieve goals.
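To make this decomposition concrete, the following is a minimal Python sketch of how the four modules might be assembled into an agent. It is purely illustrative: the class and method names are assumptions of this sketch, not the DIVAs implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ExternalKnowledgeModule:          # EKM: knowledge about external entities
    known_entities: dict = field(default_factory=dict)

@dataclass
class InternalKnowledgeModule:          # IKM: what the agent knows about itself
    state: dict = field(default_factory=dict)
    constraints: dict = field(default_factory=dict)

class InteractionModule:
    """Separates environment perception from agent-to-agent communication."""
    def perceive_environment(self, env_state):
        return env_state                 # perception sensors would filter this
    def send_message(self, recipient_id, message):
        pass                             # agent communication interface

class TaskModule:
    """Atomic, domain-specific tasks (e.g., walking, carrying)."""
    available_tasks = ("walk", "carry")

class PlanningControlModule:
    """The 'brain': uses the other modules to decide and initiate tasks."""
    def decide(self, percepts, ekm, ikm, task_module):
        return task_module.available_tasks[0]   # placeholder policy

class VirtualAgentSketch:
    def __init__(self):
        self.interaction = InteractionModule()
        self.ekm = ExternalKnowledgeModule()
        self.ikm = InternalKnowledgeModule()
        self.tasks = TaskModule()
        self.planner = PlanningControlModule()
```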

B. DIVAs Environment

DIVAs' environment architecture is based upon two underlying concepts [11], [12], [13]:

• In order to manage a large distributed environment efficiently, it is necessary to partition the space into smaller defined areas called cells.

• Each cell is assigned a special agent called a cell controller. A cell controller is required to: 1) autonomously manage environmental information about its cell; 2) be aware of the virtual agents located in its defined area; 3) be able to interact with local virtual agents to inform them about changes in their surroundings; 4) be able to communicate with other cell controllers to inform them of external events.

Figure 4. DIVAs Influence-Reaction Model

These characteristics reveal a strong correlation between the cell controller and the agent architecture, as shown in Fig. 3. The Interaction Module handles asynchronous communication among cell controllers as well as synchronous communication between cell controllers and virtual agents. The EKM serves as the portion of the controller's memory that is dedicated to maintaining knowledge about the virtual agents within the cell's boundaries as well as the list of neighboring cells. The IKM serves as the portion of the controller's memory that is dedicated to keeping information that the controller knows about itself, i.e., knowledge about its properties and its cell.
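A corresponding sketch of a cell controller's knowledge modules might look as follows (again hypothetical Python with illustrative names; the actual DIVAs data structures are not given in the paper).

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class CellControllerEKM:
    """External knowledge: virtual agents inside the cell and the neighboring cells."""
    agents_in_cell: Dict[str, dict] = field(default_factory=dict)
    neighboring_cells: List[str] = field(default_factory=list)

@dataclass
class CellControllerIKM:
    """Internal knowledge: the controller's own properties and its cell region."""
    cell_id: str = "cell-0"
    bounds: Tuple[Tuple[float, float], Tuple[float, float]] = ((0.0, 0.0), (100.0, 100.0))
```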

III. THE INFLUENCE-REACTION MODEL

Context-aware agents must be constantly informed about changes in their own state and the state of their surroundings. Similarly, the environment must react to the influences imposed upon it by virtual agents as well as by users outside of the simulation. DIVAs' agent-environment interactions follow the influence-reaction model [14], [11] and extend it to handle open environments and include external stimuli. Specifically, the DIVAs influence-reaction model adds agent perception and influence combination to the existing model. In this section, we explain the DIVAs influence-reaction model and formally define the functions involved.

When agents execute actions, they produce influences that are synchronously communicated to their cell controller (i.e., the controller agent managing the cell in which the agent is situated). The cell controllers interpret and combine these agent influences as well as external events triggered by the user of the simulation. The cell controller reacts to these influences and updates the state of the environment. The new state is then synchronously communicated to the agents located within the cell boundary, and cell-to-cell influences are passed to adjacent cell controllers (see Fig. 4).

Both virtual agents and cell controllers can be modeled as functions [11]. At a high level, a virtual agent is a function that processes environment states and produces influences. A cell controller is a function that processes virtual agent influences, external influences, and cell-to-cell influences to produce a new state and subsequent cell-to-cell influences.

At a more detailed level (see Fig. 5), we can see that a virtual agent perceives the environment through a combination of senses (e.g., vision, hearing, smell). Using its current percepts and knowledge, it decides the next course of action, which it then initiates. A cell controller first combines the agent and external influences to obtain the combined influence. This combined influence must be legal (containing no conflicts) with respect to the rules and constraints of the environment. Using the combined influence and the cell's previous state, the cell controller determines the cell's new state and whether it is necessary to communicate information to adjacent cell controllers.

Figure 5. Detailed Model

Σ: environment states
Σ′: limited environment state
Σj: cellj states; Σ = ⋃(j=1..J) Σj
Γ: combined influences
Γi: influences produced by virtual agent i
Γext: influences produced externally
Π: virtual agent perceptions
K: virtual agent knowledge
Ω: virtual agent actions
Λj: cell-to-cell influences produced by cell controller j

Figure 6. Set Definitions

Figs. 6 and 7 show a detailed specification of the virtual agent and cell controller functions.
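As a reading aid, the following is a small executable Python sketch of the functional model specified in Figs. 6 and 7. It is a simplified illustration under assumed types and names, not the DIVAs code; in particular, the combine step below omits the conflict resolution that the model requires.

```python
from typing import Any, Dict, List

State = Dict[str, Any]        # sigma in Σ
Percepts = Dict[str, Any]     # pi in Π
Influence = Dict[str, Any]    # gamma in Γ

# --- virtual agent functions (perceive, decide, memorize, initiate-action) ---
def perceive(limited_state: State) -> Percepts:                     # Σ′ → Π
    return {"visible_objects": limited_state.get("objects", [])}

def decide(percepts: Percepts, knowledge: Dict[str, Any]) -> str:   # Π × K → Ω
    return "walk" if percepts["visible_objects"] else "wait"

def initiate_action(action: str, agent_id: str) -> Influence:       # Ω → Γ
    return {"agent": agent_id, "action": action}

def v_agent(limited_state: State, knowledge: Dict[str, Any], agent_id: str) -> Influence:
    percepts = perceive(limited_state)
    knowledge.update(percepts)                                       # memorize: Π → K
    return initiate_action(decide(percepts, knowledge), agent_id)

# --- cell controller functions (combine, update) ---
def combine(agent_infs: List[Influence], external_infs: List[Influence],
            cell_infs: List[Influence]) -> List[Influence]:
    # environment rules would resolve conflicts here; this sketch only concatenates
    return agent_infs + external_infs + cell_infs

def update(combined: List[Influence], cell_state: State) -> State:  # Γ × Σj → Σj
    new_state = dict(cell_state)
    new_state["applied_influences"] = combined
    return new_state

# one simulation step for a single cell with one agent and no external events
state = {"objects": ["hat"]}
influence = v_agent(state, knowledge={}, agent_id="V-Agent1")
print(update(combine([influence], [], []), state))
```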

IV. AGENT PERCEPTION

The agents' perception plays an important role in the influence-reaction model since it informs the agents about their environment and provides a basis for planning and decision making. Virtual agents are not directly or globally aware of their environment but perceive their environment using various perception sensors. DIVAs agents use the environment perception module (see Fig. 2) as the agents' primary interface to environment information. As agents receive information about the current state of their environment, perception sensors extract perceivable knowledge about the environment (see Fig. 8). For example, the visual perception sensor will determine which information the agent can see in the environment, whereas the auditory perception sensor will determine the information the agent can hear. Since the perception module contains several perception sensors, each deriving percepts separately, percepts must be combined to form concepts prior to being memorized. Sensors may also derive conflicting percepts about their environment, especially in the case of sensor impairment. These conflicts must be resolved prior to being memorized. For example, an airplane flying overhead may visually appear farther ahead of where it sounds like it is, resulting in conflicting percepts. The concept extracted from the conflict may be that of confusion, or some senses may be given priority over others to model confidence or fidelity in the perceived medium. Once all percepts are combined and conflicts resolved, the resulting percepts are memorized and used when deciding the agent's next action.

Agent Functions

perceivei: Σ′ → Πi
memorizei: Πi → Ki
decidei: Πi × Ki → Ωi
initiate-actioni: Ωi → Γi
V-Agenti: Σ′ → Γi

γi(t) = initiate-actioni(decidei(perceivei(σ′(t)), κi(t−1))) ∧ memorizei(perceivei(σ′(t)))

Cell Controller Functions

combinej: Γ1 × ... × ΓN × Γext1 × ... × ΓextM × Λ1 × ... × ΛJ → Γ
memorizej: Σj → Σj
updatej: Γ × Σj → Σj ∪ Λj
Controllerj: Γ1 × ... × ΓN × Γext1 × ... × ΓextM × Λ1 × ... × ΛJ → Σj ∪ Λj

σj(t+1) = updatej(combinej(γ1(t) × ... × γN(t) × γext1(t) × ... × γextM(t) × λ1(t) × ... × λJ(t)), σj(t)) ∧ memorizej(updatej(combinej(γ1(t) × ... × γN(t) × γext1(t) × ... × γextM(t) × λ1(t) × ... × λJ(t)), σj(t)))

Figure 7. Function Definitions

Figure 8. Perception Module
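The percept-combination step described above could be sketched as follows. This is hypothetical Python; the priority-based conflict resolution is one possible policy suggested by the text ("some senses may be given priority over others"), and the sensor names and values are illustrative.

```python
# Each sensor returns percepts as {property_name: value}; lower priority number wins conflicts.
SENSOR_PRIORITY = {"vision": 0, "hearing": 1, "smell": 2}   # illustrative ordering

def combine_percepts(sensor_percepts: dict) -> dict:
    """Merge percepts from several sensors, resolving conflicts by sensor priority."""
    combined, chosen_by = {}, {}
    for sensor, percepts in sensor_percepts.items():
        for prop, value in percepts.items():
            if prop not in combined:
                combined[prop], chosen_by[prop] = value, sensor
            elif SENSOR_PRIORITY[sensor] < SENSOR_PRIORITY[chosen_by[prop]]:
                combined[prop], chosen_by[prop] = value, sensor   # higher-priority sense wins
    return combined

# Example: the airplane case, where vision and hearing disagree about position.
percepts = {
    "vision":  {"airplane_position": (120.0, 300.0)},
    "hearing": {"airplane_position": (100.0, 300.0), "engine_noise": True},
}
print(combine_percepts(percepts))   # vision's position is kept; hearing's extra percept survives
```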

V. ILLUSTRATIVE EXAMPLES

In this section, we present two scenarios involving context-aware virtual agents situated in open environments. We discuss how DIVAs' virtual agents and environment cell controller agents behave for each.

A. Scenario 1

In the first scenario, several virtual agents are positioned equidistant from a hat on a table situated in a single-cell environment. As soon as an agent can see the hat, it moves towards the table, attempts to pick up the hat, and, if successful, wears the hat on its head. Once the agents perceive that the hat is no longer available, they continue with other goals. As agents are moving away from the table, the user triggers an explosion at the table. Any agent close enough to see or hear the explosion flees the area; those that are too close perish. This scenario is intended to illustrate agent perception (vision and hearing), environment control of shared resources, and the combination of agent and external (user-invoked) influences.

At t0, virtual agents V-Agent1 through V-AgentN are situated at equal distances around the table, facing it. The cell controller sends the state of the environment to the agents, including the state of the agents themselves (e.g., their current position and heading). Upon receipt of this information, each agent's perception module applies its visual sensor to the information. The visual sensors extract the visually-perceivable data from the information provided, in this case, the location of the hat. This data is stored as agent knowledge and the agents begin their planning phase. During this phase, each agent recognizes the existence of a hat and decides to approach the hat's location (see Fig. 9). This decision is transformed into an action (i.e., walk(location)) which is sent to the cell controller as an agent influence, γi.

Figure 9. Agents perceive the hat.

The cell controller receives the agent influences, collects the actions, and waits for the next time interval. Since agents communicate asynchronously, the wait ensures that all actions are received for that time interval before combining the actions and identifying conflicts. When t1 arrives, the controller begins to combine each of the agent actions it received in t0. In this case, the actions will be the motions of agents walking toward the table. The cell controller checks for conflicts in the combined influences (e.g., agents crossing paths or colliding), granting those which have no conflicts and modifying those which conflict according to the rules of the environment (agents collide and may not necessarily arrive at their expected location). Next, the cell controller combines external influences. Since there are none at this time, the combined external influence is empty. The combined influence on the cell from t0, γtotal, is then obtained by combining the influence of agents (γ1 × ... × γN) with the influence of external events (γext1 × ... × γextM). γtotal conforms to the rules and constraints of the environment. This combined influence is then applied to the state of the cell at t0 to produce the state of the cell at t1. The state of the cell, which contains the new state of each of its occupying agents, is stored and sent to the agents.

Upon receipt of the new environment state, the agents perceive the result of their actions on the environment, store perceived knowledge of the environment, and begin to plan according to the new state. Note that an agent may not necessarily be in the state it expected due to negotiations made by the cell controller to eliminate conflicting influences. This cycle continues until the agents begin arriving at the table at tx, shown in Fig. 10. At this point, the agents that have succeeded in arriving at the table act by reaching for the hat to grab it.

When these actions are combined by the environment at tx+1, the cell controller notices that the agents are in contention over the hat. Since the hat is a single resource, only one agent may obtain the hat. The cell controller resolves the shared resource conflict by granting the action to a single agent and denying it to all others (this selection is made according to the rules and constraints of the environment). When the cell controller sends the state of its cell at tx+1 to the agents, one agent will perceive that it has obtained the hat while the others will perceive that they have not (see Fig. 11).

Figure 10. Agents approach the hat.

Figure 11. Single agent wearing the hat.
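A minimal sketch of this shared-resource arbitration (hypothetical Python; the selection rule shown is simply "first influence wins", whereas DIVAs applies the rules and constraints of the environment):

```python
def resolve_resource_conflict(grab_influences, rule=lambda infs: infs[0]):
    """Grant a single-resource action to exactly one agent; deny all others."""
    if not grab_influences:
        return {}
    winner = rule(grab_influences)               # environment-specific selection rule
    return {inf["agent"]: (inf is winner) for inf in grab_influences}

influences = [{"agent": "V-Agent1", "action": "grab_hat"},
              {"agent": "V-Agent2", "action": "grab_hat"}]
print(resolve_resource_conflict(influences))     # {'V-Agent1': True, 'V-Agent2': False}
```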

Lastly, the user detonates a simulated bomb by injecting an event into the simulation. The cell controller receives this event at ty as an external influence (γext) and combines it with the agent influences at ty. This combination takes into account the intensity of the explosion and the proximity of agents to determine which agents are killed by the blast (see Fig. 12). When the new state is calculated and sent to the agents, several agents will perceive the blast (by hearing or seeing it) and decide to flee immediately. Others will be fatally wounded and unable to make future decisions. Some will not perceive the explosion at all if they are looking away and out of audible range.
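The combination of the user's explosion event with agent positions could be sketched like this (hypothetical Python; the radii, intensity scaling, and outcome labels are assumptions made for illustration, not values from DIVAs):

```python
import math

def apply_explosion(agent_positions, blast_center, intensity,
                    kill_radius=5.0, percept_radius=30.0):
    """Classify agents as killed, fleeing (they perceive the blast), or unaware."""
    outcome = {}
    for agent_id, position in agent_positions.items():
        distance = math.dist(position, blast_center)
        if distance <= kill_radius * intensity:
            outcome[agent_id] = "killed"
        elif distance <= percept_radius * intensity:
            outcome[agent_id] = "flees"          # close enough to see or hear the blast
        else:
            outcome[agent_id] = "unaware"
    return outcome

print(apply_explosion({"V-Agent1": (2.0, 1.0), "V-Agent2": (20.0, 5.0),
                       "V-Agent3": (80.0, 90.0)},
                      blast_center=(0.0, 0.0), intensity=1.0))
```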

This scenario shows that virtual agents only gain knowledge of their environment through perception, that influences (due to agent and external events) are combined to determine the new state of the system, and that virtual agents are not capable of directly modifying their state. This scenario has been implemented and is fully functional in the current implementation of DIVAs.

Figure 12. Explosion triggered by user, agents flee.

B. Scenario 2

The next scenario is similar to the first, except that it deals with a multi-cell environment. Rather than detonating an explosion, the user starts a fire and adds a steady wind to the environment. The virtual agents avoid the fire's path as it is blown from one cell (managed by cell controller C1), crossing the boundary to an adjacent cell (managed by C2). This scenario illustrates how environmental factors propagate across cell boundaries.

At t0, virtual agents V-Agent1 through V-AgentN are situated randomly throughout the cells managed by C1 and C2. At tx, the user triggers an event that starts a fire in C1's cell. C1 receives the event as an external influence, γext, and combines it with the virtual agent actions for that time interval. C1 updates the state of its cell to include the fire and sends the new state to the virtual agents situated in its cell.

Several virtual agents will be able to see the fire and, if they consider themselves to be in danger, plan to move away from the threat. These virtual agent influences are sent to C1.

At a later time, ty, the user invokes a constant wind within the environment. C1 and C2 receive this influence, and C1 applies it to the fire, determining that the fire spreads in the direction of the wind. At tz, C1 decides that updating its state will cause the fire to cross the boundary it shares with C2. C1 sends a cell-to-cell influence, λ1, to C2. C2 receives λ1 at tz+1 and combines it with the other influences that arrive at tz+1, resolving conflicts if necessary. Once C2 obtains the total combined influence on its cell, it updates its cell's state and sends the updated state to the virtual agents occupying C2's cell.
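A sketch of this boundary crossing (hypothetical Python; the cell geometry, wind representation, and fire-spread rule are illustrative assumptions):

```python
def spread_fire(fire_position, wind_vector, cell_bounds):
    """Advance a fire front by the wind; emit a cell-to-cell influence if it leaves the cell."""
    new_position = (fire_position[0] + wind_vector[0], fire_position[1] + wind_vector[1])
    (x_min, y_min), (x_max, y_max) = cell_bounds
    inside = x_min <= new_position[0] <= x_max and y_min <= new_position[1] <= y_max
    cell_to_cell_influence = None if inside else {"event": "fire_enters", "position": new_position}
    return new_position, cell_to_cell_influence

# C1 manages the region (0,0)-(50,50); a steady wind pushes the fire toward C2's cell.
position, lam = spread_fire(fire_position=(48.0, 10.0), wind_vector=(5.0, 0.0),
                            cell_bounds=((0.0, 0.0), (50.0, 50.0)))
print(position, lam)   # a cell-to-cell influence (lambda1) is produced when the fire crosses the boundary
```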

This scenario shows that cell controllers interact with each other when combining influences to determine the proper state of the environment.

VI. RELATED WORK

The influence-reaction model was initially proposed in 1996 by Ferber and Müller in [10]. In [14], Ferber proposes a formal approach to specify the influence-reaction model for tropistic and hysteretic agents evolving in a centralized environment. In [15], Michel incorporates a temporal variable to clarify and simplify the two-phase formal model.

From an implementation perspective, various approaches have been proposed to tackle agent-environment interactions in multi-agent based simulation systems. These fall into three categories.

In the first category, there is no explicit interaction between agents and the environment. Hence, either the environment has complete control over the agents' states ([4], [16]) or agents have complete control over their own state ([5], [6], [17], [18], [2], [7], [8], [9]).

In the second category, agents ask the environment if they can perform an action, and the environment either grants or denies the agents' requests based on predefined rules [19]. This approach is similar to the influence-reaction model, except that external influences cannot be used to update the environment state, and agents are responsible for defining their own states based on the environment's response (grant or deny).

The third category uses a variant of the influence-reaction model. [20] describes a model where the agent is tightly coupled with a graph-based, static environment. The environment handles agent perception, sending percepts directly to the agent's "brain." After an agent makes a decision, it sends the decision to the environment, which translates the decision into actions and executes them. In [21], the environment exists as a low-level decision-making process, not as a separate, distinct entity. The environment is responsible for agent perception and low-level locomotion. The agents are simply planning entities; the environment has the task of achieving the agents' goals.

The DIVAs influence-reaction model discussed in this paper improves on models that belong to the third category and extends Ferber's model by formalizing large scale agent-environment interactions when the environment is inaccessible, non-deterministic, decentralized, and dynamic. In addition, the DIVAs model includes extensible agent perception techniques and facilitates the combination of agent, environment, and user-generated influences.

VII. CONCLUSION

In this paper we discussed DIVAs' influence-reaction model. This model allows virtual agents to accurately perceive the environment through various sensors, and the environment to combine internal and external influences to update its state. We illustrated our model through two scenarios involving human-like agents in an open environment.


Currently, we have developed an implementation of DIVAs that satisfies the proposed model. Agents are capable of perceiving their environment through sight, sound, and smell. Users can create events that influence the environment at run-time, and the environment is able to enforce rules and constraints on the agents using the influence combination function.

Future work will include enhancement of the currently implemented perception sensors and the addition of environmental constraints for new scenarios. At present, the cell space is statically partitioned and is not dynamically modifiable. We plan to implement algorithms to dynamically allocate cell regions. This enhancement would provide additional fault tolerance in the case of cell controller failure. We are also interested in experimenting with the number of agents and cell controllers that can be simulated in real-time by making use of distribution and other parallel processing techniques.

VIII. ACKNOWLEDGMENTS

The DIVAs project is supported by Rockwell Collins under grant number 5-25143.

REFERENCES

[1] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.

[2] M. Kutz and R. Herpers, "Urban traffic simulation for games: a general approach for simulation of urban actors," in Proceedings of the Conference on Future Play: Research, Play, Share, Toronto, Canada, November 3-5, 2008, pp. 181–184.

[3] R. Zalila-Wenkstern, T. Steel, and G. Leask, "A self-organizing architecture for traffic management," in Proceedings of the WICSA/ECSA Workshop on Self Organizing Architectures, Cambridge, UK, September 2009.

[4] J. Was, "Cellular automata model of pedestrian dynamics for normal and evacuation conditions," in Proceedings of Intelligent Systems Design and Applications (ISDA05), Wroclaw, Poland, September 8-10, 2005, pp. 151–159.

[5] K. Uno and K. Kashiyama, "Development of simulation system for the disaster evacuation based on multi-agent model using GIS," Tsinghua Science and Technology, vol. 13, no. 1, pp. 348–353, October 2008.

[6] J. Shi, A. Ren, and C. Chen, "Agent-based evacuation model of large public buildings under fire conditions," Automation in Construction, vol. 18, no. 3, pp. 338–347, May 2009.

[7] M. Xiaofeng, W. Chaozhong, and Y. Xinping, "A multi-agent model for evacuation system under large-scale events," in Proceedings of the International Symposium on Computational Intelligence and Design (ISCID08), Wuhan, China, October 17-18, 2008, pp. 557–560.

[8] S. Sharma, H. Singh, and A. Prakash, "Multi-agent modeling and simulation of human behavior in aircraft evacuations," IEEE Transactions on Intelligent Transportation Systems, vol. 44, no. 4, pp. 1477–1488, October 2008.

[9] S. Sharma, "Simulation and modeling of group behavior during emergency evacuation," in Proceedings of the IEEE Symposium on Intelligent Agents, Nashville, Tennessee, March 30 - April 2, 2009, pp. 122–127.

[10] J. Ferber and J. Müller, "Influences and reaction: a model of situated multi-agent systems," in Proceedings of the 2nd International Conference on Multi-agent Systems (ICMAS'96). The AAAI Press, December 10-13, 1996, pp. 72–79.

[11] R. Z. Mili and R. Steiner, "Modeling agent-environment interactions in adaptive MAS," in Proceedings of Engineering Environments Mediated Multiagent Systems (EEMAS'07), European Conference on Complex Systems, October 2007; also in Lecture Notes in AI 5049, Springer Verlag, 2008, pp. 135–147.

[12] R. Z. Mili, R. Steiner, and E. Oladimeji, "DIVAs: Illustrating an abstract architecture for agent-environment simulation systems," Multiagent and Grid Systems, Special Issue on Agent-oriented Software Development Methodologies, vol. 2, no. 4, pp. 505–525, 2006.

[13] R. Z. Mili, E. Oladimeji, and R. Steiner, "Architecture of the DIVAs simulation system," in Proceedings of the Agent-Directed Simulation Symposium (ADS06), Huntsville, Alabama, April 2006.

[14] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison Wesley, 1999.

[15] F. Michel, "The IRM4S model: the influence/reaction principle for multi-agent based simulation," in Proceedings of the Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'07), Honolulu, Hawaii, May 2007.

[16] B. Banerjee, A. Abukmail, and L. Kraemer, "Advancing the layered approach to agent-based crowd simulation," in Proceedings of the IEEE Workshop on Parallel and Distributed Simulation, Rome, Italy, June 3-6, 2008, pp. 185–192.

[17] S. J. Rymill and N. A. Dodgson, "Psychologically-based vision and attention for the simulation of human behaviour," in Proceedings of Computer Graphics and Interactive Techniques, Dunedin, New Zealand, November 29 - December 2, 2005, pp. 229–236.

[18] W. L. Koh and S. Zhou, "An extensible collision avoidance model for realistic self-driven autonomous agents," in Proceedings of the 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, Chania, Greece, October 22-24, 2007, pp. 7–14.

[19] P. Herrero and A. de Antonio, "Introducing human-like hearing perception in intelligent virtual agents," in Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS03), Melbourne, Australia, July 14-18, 2003, pp. 7–14.

[20] A. Shendarkar, K. Vasudevan, S. Lee, and Y. J. Son, "Crowd simulation for emergency response using BDI agent based on virtual reality," in Proceedings of the Winter Simulation Conference, Monterey, California, December 3-6, 2006, pp. 545–553.

[21] N. Pelechano, J. Allbeck, and N. Badler, "Controlling individual agents in high-density crowd simulation," in Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, California, August 2-4, 2007, pp. 99–108.
