Computers & Graphics 25 (2001) 999–1011
Goal-oriented dead reckoning for autonomous characters
D. Szwarcman, B. Feijó*, M. Costa
ICAD-Intelligent CAD Laboratory, Department of Informatics, PUC-Rio, Rua Marquês de São Vicente, 225, Gávea,
22453-900 Rio de Janeiro, RJ, Brazil
Abstract
This paper proposes innovative concepts for shared state management of distributed virtual characters on a computer
network. Firstly, visual soundness is presented as a property that is essential to high quality networked virtual
environments and that is related to the autonomy of virtual components. Secondly, state consistency is defined in terms
of physical and mental states and it is shown to be in opposition to visual soundness. Thirdly, the usual concept of dead
reckoning is extended by exploring the idea of giving autonomy to remote replicas. This new concept, named goal-oriented dead reckoning, is particularly advantageous for the state management of autonomous characters. © 2001 Elsevier Science Ltd. All rights reserved.
Keywords: Distributed/network graphics; Collaborative virtual environment; Artificial life; Dead reckoning; Autonomous characters
1. Introduction
Virtual environments distributed over computer networks began in the early 80s [1]. However, despite all these years of activity, this research area still faces enormous challenges and lacks uniformity in its concepts and definitions, especially concerning networked virtual environments populated by "intelligent" characters. Notably, the autonomy of virtual humans or A-Life entities has not been fully investigated in the context of shared state management. Unfortunately, on the other side of the spectrum, most of the people working on "intelligent" characters [2–8] are not focused on networked virtual environments.
The present work reduces the gap between the above-mentioned subjects and contributes to the discussion of artificial life over a large global computer network rather than inside a single computer. This work also creates new possibilities for collective and massive human interaction in a networked A-Life environment, a crucial issue for A-Life Games and A-Life Arts.
This paper relies on the autonomy of virtual
components’ remote replicas (clones) as an adequate
approach to networked virtual environments. It presents
a flexible and clear view of the concept of shared state
management and proposes an extended form of dead
reckoning as an appropriate technique to produce high
quality virtual environments. In the context of the
present work, autonomy is the capacity of the virtual component to react to events (reactivity), to act on its own and to take initiatives (pro-activity). Moreover,
autonomy is a concept with no relation to representa-
tion; that is, an autonomous character may be an avatar,
which represents a human participant, or it may be an
A-Life entity that does not represent a user and is not
directly controlled by him/her. In most of the current
applications, autonomous characters living on a net-
work do not have autonomous clones. However,
depending on the degree of complexity of the autono-
mous characters and the number of hosts on the
network, the lack of clone autonomy may render the
virtual world unfeasible. Therefore, this paper claims
*Corresponding author. ICAD-Intelligent CAD Laboratory, Department of Computing, PUC-Rio, Rua Marquês de São Vicente, 225, Gávea, 22453-900 Rio de Janeiro, Brazil. Tel.: +55-21-2529-9544 x 4308; fax: +55-21-2259-2232.
E-mail addresses: [email protected] (D. Szwarcman), [email protected] (B. Feijó), [email protected] (M. Costa).
0097-8493/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved.
PII: S0097-8493(01)00154-6
that massive networked artificial life should be heavily based on clone autonomy, despite the consistency problems that must be solved.
Before presenting the main ideas of this paper, the
most closely related works are described in Section 2.
Sections 3 and 4 are dedicated to some notes on
terminology and general concepts that are essential to
understand the type of problem tackled in this paper. In
Section 5, this paper proposes the idea of visual soundness, indicating it as a measure of the quality of networked virtual environments and showing its dependence on the autonomy of virtual components. The same section presents the notion of physical and mental state consistency and its relation to visual soundness. Sections 6 and 7 explain in detail the proposed goal-oriented shared state management technique. In Section 8, goal-oriented dead reckoning is then presented as a generalization of the current dead reckoning techniques. Finally, experimental results are described in Section 9 and conclusions are drawn in Section 10.
2. Related work
JackMOO [9], VLNET [10] and Improv [11] are
remarkable works on distributed virtual actors. These
works admit autonomy of actors’ remote replicas, but
only at the level of fundamental physical actions such as
step-forward or turn-around. In the above-mentioned works, remote replicas are able to choose appropriate positions for their articulated parts to avoid
immediate collision and guarantee equilibrium, but they
cannot make higher level decisions such as deviating
right or left. Thus, for an actor to cross a room, where
several other characters move around, it must guide its
remote replicas almost every step of the way. In this
context, virtual actors are said to have ‘‘one mind and
multiple bodies’’ [11]. Shared mental state is not
considered in the literature.
JackMOO [9] and VLNET [10] adopt centralized
models that produce highly consistent virtual environ-
ments, where all participating hosts display nearly the
same animation sequences. However, the central
server can easily become the speed bottleneck. In
JackMOO [9], the central server maintains a semantic state database, with propositional information such as "actor 1 is standing", while each client maintains a graphical state database replica. The server then orders the execution of low-level physical actions at each client.
VLNET authors do not clearly mention how autonomy
is provided to virtual actors’ replicas.
In Improv [11], an actor consists of the following
parts: its geometry, an Animation Engine, which utilizes
descriptions of atomic actions to manipulate the
geometry, and a Behavior Engine that constitutes the
actor’s mind. Improv’s distribution model replicates
the geometry and Animation Engine on every participating Local Area Network (LAN). The Behavior Engine runs on a single LAN only. All body replicas move according to the one-mind directions. Each actor's mind resides on a different LAN and thus the environment's mental database is not concentrated in a single
central server. A flaw in Improv is that the authors do
not explicitly mention at what level consistency is
maintained. If the environment is to be highly consis-
tent, all Behavioral Engine hosts must follow a
centralized model. That is, these hosts must exchange
acknowledgements in order to guarantee the same
sequence of actions at all computers. Again, a centra-
lized model can easily turn into a speed bottleneck. If,
on the other hand, consistency is not maintained at a
high level, then each host may display environment
changes in a different order. In this case, precision
problems may occur, such as the body of an actor
running through the body of another actor. Improv
authors do not reveal how these problems are handled.
The literature on shared state management for artificial life is practically nonexistent, although some research on networked artificial life has been reported [12].
3. Distributed or networked VE?
The terms ‘‘distributed virtual environment’’ or
‘‘distributed A-Life environment’’ do not emphasize
the nature of the problems tackled in this paper.
Distributed virtual environment is a broader concept
to characterize systems where an application is decom-
posed into different tasks that are run on different
machines or as separate processes.
The designations ‘‘networked virtual environment’’
and ‘‘networked A-Life environment’’ are more specific
and adequate terms to define a software system in which
users or A-Life entities interact with each other in real-
time over a computer network. This is precisely the
definition given by Sandeep Singhal and Michael Zyda
[1]. Sometimes the cumbersome term ‘‘distributed
interactive simulation environments’’ is used in the
literature. This is motivated by the DIS network software architecture and its successor HLA (High-Level Architecture) [13], which has recently been adopted as an IEEE standard and an OMG standard.
4. Shared state management
Shared state management is the area of networked
virtual environments dedicated to the techniques for
maintaining shared state information [1]. Since the
primary concern of networked virtual environments is
to give users the illusion of experiencing the same world,
consistency of the shared information is a fundamental issue. However, it is hard to achieve without degrading the virtual experience. In fact, one of the
most important principles in networked virtual environ-
ments says that no technique can support both high
update rate and absolute state consistency [1]. Total
consistency means that all participants perceive exactly
the same sequence of changes in the virtual environment.
To guarantee such a condition, the state of the virtual environment can only be modified after the last update has been reliably disseminated to all hosts. For that matter, shared state information must be stored in a centralized repository, or some corresponding model [14], with hosts exchanging acknowledgements and retransmitting lost state updates before proceeding with new ones. Hence, update rates become limited, which in
turn produces non-smooth movements and non-realistic
scenes. Highly dynamic virtual environments, such as
the ones populated by autonomous characters, must
tolerate inconsistencies in the shared state.
There are two shared state management techniques that allow inconsistencies [1]: (1) frequent state update notification; and (2) dead reckoning. In the first technique,
hosts frequently broadcast (or multicast) update changes
without exchanging acknowledgments. This technique
has been mostly used by PC-based multi-player games
[15]. In dead reckoning techniques (see Section 6), updates are transmitted less frequently than state changes occur, and the remote hosts predict the objects' intermediate states based on the last transmitted update. Both techniques allow the use of
decentralized replicated databases where every virtual
component has a pilot, the main representation that
resides in its controlling host, and several clones, the
replicas that reside in other hosts.
The quality of a networked virtual environment is also determined by its scalability, which is its capacity to increase the number of objects without degrading the system. Therefore, techniques for managing shared state
updates should be used in conjunction with resource
management techniques for scalability and performance.
Singhal and Zyda [1] present an excellent description of
some of these techniques, such as packet compression,
packet aggregation, area-of-interest filtering, multicast-
ing protocols, exploration of the human perceptual
limitations (e.g. level-of-detail perception and temporal
perception) and system architecture optimizations
(e.g. server clusters and peer-server systems).
5. Visual soundness and state consistency
In a networked virtual environment all participating
hosts should pass the test of visual soundness, an
important subjective test proposed in this paper. A host
passes the test of visual soundness if the user is unable to
distinguish between local and remote objects on the
display. An environment that passes this test may be
considered a networked virtual environment that is
visually sound. The theoretical definition of the visual
soundness concept presents the same sort of difficulties
as the ones encountered, for example, by Artificial
Intelligence researchers in defining an intelligent system.
In these cases, subjective tests offer valuable contribu-
tions, providing operational definitions until a formal
one becomes available. The test of visual soundness is, in
this sense, analogous to the Turing Test [16], which
continues to be a reference for intelligent systems in spite
of being highly subjective.
For a user to be unable to note the difference between
local and remote virtual components, all of them must
move smoothly and interact in a precise way (local
precision). One example of local precision failure is the
overlapping of objects. Another example is when a
character fails to put his hand in the right place while
trying to shake hands with another one. As already
mentioned, in totally consistent environments hosts will
most probably fail the visual soundness test because of
limited update rates. In the case of environments that allow inconsistencies, large jumps and overlapping usually occur when the clone tries to follow the pilot's movement.
Although, for the sake of visual soundness, incon-
sistency must be tolerated, it should be kept within
certain limits. Moreover, being physically discrepant
from its pilot, a clone can also turn out to express a
different mood or emotional state. State consistency
must be considered in a broader sense. A clone then
exhibits state consistency when it mirrors the following
states of its pilot:
* physical state (e.g. position, orientation, velocity, acceleration, etc.);
* mental state, including facts, beliefs, desires, intentions, commitments, goals, emotions, etc.
It is evident that a host may exhibit a high level of
state consistency but fail the test of visual soundness
(e.g. clones and their pilots have quite similar positions
and velocities, but clones’ movements are jerky). The
reverse is also possible; that is, a host may pass the test
but exhibit low level of state consistency. In fact, state
consistency and visual soundness can be understood as
being inversely proportional to each other. State consistency is increased by making clones act like their pilots, that is, act like remote objects, which in turn decreases visual soundness. Conversely, visual soundness is related to local precision, which is achieved by giving autonomy to clones, that is, the capacity to act on their own according to the local environment presented by each host. This perspective
based on the problem of autonomy is the starting point
for the main ideas of this paper. A totally autonomous
clone would act exactly like a local object but would, of
course, have no state consistency. The challenge is to
search for techniques that maintain physical and mental
state consistency while passing the test of visual
soundness and favoring scalability.
The authors believe that dead reckoning is the basic
technique to overcome the above-mentioned challenge.
By considering the technique from the viewpoint of
clone autonomy, the usual concept of dead reckoning
can be expanded to take into account the states of mind
of entities. In this manner, autonomous characters can
be supported in a more adequate way.
6. Dead reckoning and clone autonomy
Dead reckoning has its origin in networked military
simulations and multi-player games, where users manip-
ulate aircraft or ground vehicles. In these systems,
prediction is a matter of trying to guess the pilot’s
trajectory. Second-order polynomial extrapolation is the technique most commonly used to estimate these objects' future positions based on current velocity, acceleration and position. It provides fairly good prediction, since aircraft
and vehicles do not usually present large changes in
velocity and acceleration in short periods of time.
Moreover, since the prediction algorithm is determinis-
tic, the source host can calculate the error between pilot
and clones’ positions, and send updates only when the
inconsistency level is considered unacceptable. Upon
receiving update messages, clones restore consistency by
converging (or simply ‘‘jumping’’) to the pilot’s trans-
mitted physical state. Convergence is usually done with
curve fitting [1]. In this case, clones do not have any
autonomy to deviate from obstacles. So, when their
trajectories differ from the pilot’s real path, either during
prediction or convergence, clones may overlap other
objects.
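The traditional scheme just described can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the paper: the 2D state representation and the error threshold are our own choices.

```python
import math

def predict(pos, vel, acc, dt):
    """Second-order polynomial extrapolation: p + v*dt + 0.5*a*dt^2."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(pos, vel, acc))

class Pilot:
    """Runs the same deterministic predictor as its clones and sends an
    update only when the prediction error exceeds a threshold."""
    def __init__(self, pos, vel, acc, threshold):
        self.pos, self.vel, self.acc = pos, vel, acc
        self.threshold = threshold
        # State last sent to clones; clones extrapolate from this.
        self.sent = (pos, vel, acc, 0.0)

    def step(self, pos, vel, acc, t):
        self.pos, self.vel, self.acc = pos, vel, acc
        p0, v0, a0, t0 = self.sent
        predicted = predict(p0, v0, a0, t - t0)
        error = math.dist(pos, predicted)
        if error > self.threshold:
            self.sent = (pos, vel, acc, t)
            return (pos, vel, acc, t)   # update message for clones
        return None                     # clones keep extrapolating
```

As long as the pilot's true motion matches the extrapolation, no message is sent; a sudden maneuver exceeds the threshold and triggers an update, after which clones converge (or jump) to the transmitted state.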
When one comes to articulated characters, a couple of
different approaches can be taken for dead reckoning.
For applications requiring high state consistency, joint-
level dead reckoning algorithms can be used to predict
in-between postures [17]. This approach does not take
into account the action the character is executing and
dead reckoning computations are performed on joint
angle information received at each update message.
However, in many situations consistency at the level of
articulated parts is not essential. Especially in colla-
borative work systems, where avatars interact with each
other and together manipulate objects, visual soundness
at each host becomes more important. Furthermore,
articulated characters have gained autonomy over the
past years and efforts are being made to give them
capacity to understand and accept orders like ‘‘say hello
to Mary’’ [9]. In this context, dead reckoning at the level
of physical actions has been considered [9–11, 18]. That
is, instead of sending position updates, the pilot sends to
clones messages that indicate the low-level physical
action it is executing, like ‘‘smile’’, ‘‘dance’’ or ‘‘wave’’.
Clones predict the pilot's states by performing the same actions, regardless of the fact that each host may be presenting different motion sequences. However, as already mentioned, in these script-based systems clones have very limited autonomy. Even though they are able to choose the appropriate place to put their feet when stepping forward, they cannot decide to deviate right or left from another avatar. Their pilots have to instruct them which way to take. Clones, in this case, make decisions based only on their current perception of the environment, because they do not keep previous perceptions. In fact, clones do not store facts or other mental attributes, and thus they lack a mental state. For
overlapping not to occur, these systems must guarantee
that certain actions will not be executed in parallel: one
will only be started after the other has been finished at
all hosts. For instance, two very close characters may
run into each other if they are ordered to step forward at
the same time. In other words, total physical state
consistency must be achieved between certain actions.
Visual soundness may reach undesirable levels.
This paper proposes an extension to character dead
reckoning that gives even more autonomy to clones,
providing them with a mental state and, on top of that,
greater decision power and planning capacity. With
limited autonomy, clones are restricted to executing on their own only certain localized actions. For example, if the pilot tells them to "walk to the door", the environment will probably not remain still during all the time they take to get there. So, while they walk, the pilot must tell them how to respond to environment changes: "deviate right" or "get your head down". However, remembering that pilot and clones always experience different views, some hosts may present poor
visual soundness. On the other hand, if clones can make
decisions based on what they ‘‘see’’, ‘‘hear’’ and ‘‘feel’’,
then they can get to the door naturally, deviating from
an angry dog or smiling to a friend that passes by.
With greater autonomy, clones can receive not only
high-level actions to execute but, in a broader sense, they
can receive a goal to be achieved. The goal reflects the
pilot’s final desired physical and/or mental state. Clones
decide which sequence of actions, or prediction plan, to
execute in order to accomplish the given goal. For
instance, if the pilot sends the goal ‘‘outside the room’’,
clones can decide which exit to take. Fig. 1 shows this
particular example for a character that does not like Mr.
Green, another avatar in the room. The pilot and its
clone experience different emotional states depending on
the position and posture of Mr. Green at each host;
consequently, they choose different doors to go through.
In spite of the fact that the local and remote hosts
exhibit different animated scenes, this distributed virtual
experience will be perfectly acceptable if the character is
given only the following properties: ‘‘leave the room’’
(goal) and ‘‘I am afraid of Mr. Green’’ (emotional state).
The specified goal and the prediction plan executed by
the clone constitute the prediction mechanism.
Although positional goals are easier to conceive, it
should be emphasized that a goal is not restricted to a
final physical state. Clones can be ordered to accomplish
a "not hungry" or a "very calm" state, which are strictly mental but might require the execution of high-level actions such as buying food or practicing yoga.
Due to the fact that pilot and clones have mental
attributes, the autonomous character acquires a shared
mental state.
Since clones are given more autonomy, they can take
different amounts of time to achieve a given goal. A
clone may then receive a second goal before it is able to
accomplish the first one. However, all clones must start and end goal pursuits in the same order as the pilot. For this reason, the prediction mechanism requires that pilots also inform clones of the achieved goals.
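The ordering constraint above can be sketched as a small clone-side goal queue. This is a hypothetical illustration of ours; the message names are assumptions, not part of the paper's protocol.

```python
from collections import deque

class CloneGoalQueue:
    """Sketch: a clone queues incoming goals and pursues them strictly in
    the order the pilot issued them.  A goal-achieved message from the
    pilot lets the clone finish the current goal and start the next."""
    def __init__(self):
        self.pending = deque()
        self.current = None

    def on_new_goal(self, goal):
        # Goals may arrive faster than they are accomplished.
        self.pending.append(goal)
        if self.current is None:
            self.current = self.pending.popleft()

    def on_pilot_achieved(self, goal):
        # The pilot finished `goal`; the clone wraps it up and moves on.
        if self.current == goal:
            self.current = self.pending.popleft() if self.pending else None
```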
The proposed management system for autonomous
characters maintains goal consistency between pilot and
clones; that is, state consistency is kept at the mental
level. Consistency of the physical state and of other
constituents of the mental state (emotions, beliefs, etc.)
is a result of the type of goal given. The more restrictive
the goal is, the stronger the final state consistency will be.

Fig. 1. Pilot and clones experience different emotional states.

It is evident that, during the execution of the
prediction plan, clones might deviate from their pilots’
states. The pilot can control, to a certain extent, clones’
state consistency during prediction by specifying re-
sources to be used. In the example of Fig. 1, the pilot
could have told clones to go ‘‘outside the room through
the right door’’. The more restrictive the resources are,
the stronger the intermediate state consistency will be.
In some situations, even with the specification of
resources, different decisions can take clones’ physical
and mental state consistency to unacceptable levels. In
this matter, a recovery mechanism that restores state
consistency is presented in the next section. It is worth
noting that, for the existing dead reckoning protocols,
inconsistencies introduce overlapping and interaction
problems. Clones that have greater autonomy maintain
local precision even when they are inconsistent.
7. Recovery mechanism keeps clones under control
Increasing clones' autonomy makes them capable of responding to the environment (avoiding collisions, expressing emotions and interacting with other characters) according to what they experience locally. Each clone can act differently as long as it achieves the pilot's goal. The prediction mechanism is such that the estimated states are not the same at all hosts and are unknown to the pilot. Take the simple example where an avatar must get to a given position: while the pilot decides to deviate right from a frightening entity, one or more clones may decide to deviate left or even not to deviate at all, depending on the entity's state at each
host. If all clones get to the desired position in
approximately the same period of time, then the goal
is achieved. However, clones that deviate left may
encounter other obstacles that can make them get too
far from the pilot and from the goal. In this case, these
clones should try to recover to the pilot’s state.
To restore state consistency, this paper proposes an original recovery mechanism. Both pilot and clone have specific roles in this mechanism. The pilot, after telling clones the goal to be achieved, should send them recovery messages until the desired final state is reached.
Recovery messages specify recovery goals, which reflect
the pilot’s intermediate physical and/or mental states.
The interval of time between recovery messages may be determined by environment properties that influence clones' state consistency, such as the type of application, the number of entities and the level of activity. In this
way, the recovery rate will be dynamically adapted to
environment demands. Since the pilot has no way of
knowing how discrepant clones are, the recovery rate
cannot be based on state error. Considering that
recovery messages have the purpose of helping clones
out in extraordinary situations where they cannot find
their way to fulfill a goal, the average working recovery
rate will tend to be low.
The clone's first task, upon receiving the pilot's messages, is to verify whether it needs to recover. That will be the case if inconsistency exceeds the acceptable limit.
One possible evaluation criterion is to consider the inconsistency level unacceptable when the clone is approaching neither the recovery goal nor the main goal. The clone will recover from inconsistencies by
suspending the prediction plan and devising a recovery
sequence of actions, or a recovery plan, that it will
execute to achieve the given intermediate goal in the
most natural way possible. When recovery is terminated, the clone resumes the pursuit of the main goal, conceiving and executing a new prediction plan.
Fig. 2 illustrates the following case:
* Avatar 1 is given the properties "leave the room" and "keep away from people".
* Pilot 1 sends the main goal "outside the room" to its clone (clone 1).
* Avatar 2 is swinging randomly around a small area, which causes different reactions from other objects (i.e. the hosts will never display the same position of avatar 2).
* Clone 1 initially goes left (because the left door is a valid exit), but it soon notices avatar 3 and moves away from the main goal to avoid the character.
* At point B, pilot 1 sends a recovery message informing of its present position (recovery goal).
* Clone 1 receives the recovery message when it is at point A. Recovery is triggered, since the clone is approaching neither "outside the room" nor point B.
* Clone 1 recovers from point A to point B.
At point B, clone 1 reconsiders the best door to go
through and decides to follow the pilot’s path towards
the right door. It could have decided for the left door
again. In any case, the final desired goal would have
been achieved.
According to the suggested inconsistency evaluation criterion, recovery is performed only if the clone is approaching neither the main nor the recovery goal.
Hence, it is necessary not only that state constituents, physical and mental, be quantified, but also that a clone be capable of computing how distant its own global state is from the final and intermediate desired states. In the implemented examples, states are considered to be vectors in a space whose dimensions
correspond to physical and mental attributes. A clone
then calculates how far it is from a desired state by
performing multi-dimensional vector subtraction. Natu-
rally, this approach requires a procedure for normalizing
the various types of attribute measures to the same range
of values, so that no vector coordinate outweighs any
other one. Such a procedure has not yet been fully
investigated and will be addressed in future work.
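As an illustration of this vector view of states, the following Python sketch normalizes each attribute to [0, 1] with a simple min-max rule (one possible choice; the paper leaves the normalization procedure open) and applies the suggested recovery criterion. The attribute names and ranges are hypothetical.

```python
def normalize(value, lo, hi):
    """Map an attribute onto [0, 1] so no coordinate outweighs another."""
    return (value - lo) / (hi - lo)

def state_distance(state_a, state_b, ranges):
    """Euclidean distance between two states seen as vectors whose
    dimensions are physical and mental attributes.  `ranges` gives the
    (min, max) of each attribute, used for normalization."""
    total = 0.0
    for attr, (lo, hi) in ranges.items():
        d = normalize(state_a[attr], lo, hi) - normalize(state_b[attr], lo, hi)
        total += d * d
    return total ** 0.5

def needs_recovery(dist_to_main, dist_to_recovery):
    """Trigger recovery when the clone is approaching neither goal, i.e.
    both distance histories are non-decreasing over the last samples."""
    receding = lambda h: len(h) >= 2 and h[-1] >= h[-2]
    return receding(dist_to_main) and receding(dist_to_recovery)
```

For example, with ranges {"x": (0, 10), "fear": (0, 1)}, a clone at x = 5 and a desired state at x = 10 (same fear level) are at normalized distance 0.5.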
Recovery actions are restrictive in the sense that they
limit clones’ autonomy in order to compel them to
achieve the recovery goal as fast as possible. However, a
clone can receive a new recovery goal before it reaches
the last one. In this case, the clone shifts its aim to the
new goal, if recovery is still needed. Recovery can become as restrictive as a state update if a clone has too much difficulty accomplishing the recovery goal.
8. Dead reckoning generalized
Based on the prediction and recovery mechanisms
presented in the last two sections, this paper proposes a
generalization of the dead reckoning technique called
goal-oriented dead reckoning. This generalization is
grounded on the assumption that pilot and clones are
completely specified by the following triple:
O_i^t = (E_i^t, S_i^t, G_i^t),   t = time,

where E_i^t is the set of entities that compose the environment inhabited by entity i at time t; S_i^t is the set of attributes that represent entity i at time t; and G_i^t is the expression that defines the main goal of entity i at time t.
The set of attributes S^t includes physical and mental constituents, that is:

S^t = S_f^t ∪ S_m^t.
Another assumption is that entities are capable of devising plans:

P^t = p(E^t, S^t, G^t), or (E^t, S^t, G^t) → p^t,

where a plan is a set of action descriptors:

p^t = {a_1^t, a_2^t, …}.
Naturally, entities should also be able to execute plans.
In the case of entities that have no decision power, such
as a ball that is thrown, the plan is reduced to the
application of a procedure, usually the physical law that
describes the entity’s behavior.
Action descriptors may be simple operators or
conditional actions. Conditional actions may be relative
to states or to other actions. When an event occurs that was not predicted as a condition, re-planning takes place. Thus, the virtual component's autonomy is established at the level of conditional actions and at the re-planning level. In this paper, conditional actions are restricted to simple conditions including only states, and plans are restricted to sequences of actions. The case of actions executed in parallel is not investigated.
Fig. 2. Recovering state consistency.
The above assumptions, as well as the goal fulfillment situation, can all be described in some formal language as powerful as first-order logic. This formal description is not one of the main concerns of this paper, since theorems are not to be derived. However, for the purpose of clarity, the simplified assumptions and the goal achievement situation adopted in this work for goal-oriented dead reckoning are expressed in first-order logic. In this
way, the plan is a sequential list of actions,
p ¼ ½a1; a2;y�
and the main goal Gt is a sentence in the normal
disjunctive form,
8x1;8x2;yGt ¼ b13b23y;
where xn is a variable and bk is a conjunction of literals,
bk � Lk14Lk24y :
Literals, on the other hand, are predicates, negated or
not, that may include constants, function symbols and
variables. In this simplified case, the goal is achieved by an entity i, at time t0, when the following relation holds:

∃ k such that {L_k1, L_k2, …} ⊆ S_i^{t0}.
Examples of a simple goal (without variables) and a simple plan are:

G = (positionX(MrBlue, 2.0) ∧ positionZ(MrBlue, 3.0)) ∨ happy(MrBlue)

p = [move(MrBlue, 2.0, 0, 0), move(MrBlue, 2.0, 0, 3.0)]

An example of a conditional action is:

if (facing(MrBlue, danger), deviateRight(MrBlue), continue)
That is, if Mr. Blue faces any danger, he deviates
right, otherwise he continues. The formalization of a
planner is not in the scope of this paper and its
implications are discussed elsewhere [18].
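Under these simplified assumptions, the goal-achievement test reduces to a subset check over sets of literals. The following Python sketch illustrates this with the Mr. Blue goal from the text; the tuple encoding of literals is an assumption of ours, not the paper's representation.

```python
def goal_achieved(goal_dnf, state):
    """A goal in disjunctive normal form is a list of conjunctions, each
    a set of literals.  The goal holds when some conjunction's literals
    are all contained in the entity's current state (the subset
    relation from the text)."""
    return any(conj <= state for conj in goal_dnf)

# The example goal: Mr. Blue is at (2.0, 3.0) OR Mr. Blue is happy.
G = [
    {("positionX", "MrBlue", 2.0), ("positionZ", "MrBlue", 3.0)},
    {("happy", "MrBlue")},
]
```

A state containing both position literals satisfies the first conjunction; a state containing only the happiness literal satisfies the second; either way the goal is achieved.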
The goal-oriented dead reckoning is presented in this
paper in the form of a pseudo-language algorithm,
which is listed in the Appendix. For a quicker under-
standing, Fig. 3 shows a graphical representation that
summarizes the proposed generalization as follows:
* At time t0, the pilot sends its initial attributes and the
main goal (S_i^0, G_i^0) to clones. The pilot devises and
executes its plan. Upon receiving the pilot's messages,
clones devise and execute prediction plans
(E_j^0, S_j^0, G_i^0 → p_j^0).
* At some time t1, the pilot sends a recovery goal (g_i^1)
to clones. Upon receiving the recovery message,
clones verify whether the inconsistency is greater than the
acceptable limit (d > d_lim). If a given clone finds this
condition true, it suspends the prediction plan,
devises a recovery plan (E_j^1, S_j^1, g_i^1 → p_j^1) and starts
executing it. Otherwise, the clone continues without
re-planning.
* At time t2, the recovering clone reaches the inter-
mediate goal and takes on the main goal again,
devising and executing another prediction plan
(E_j^2, S_j^2, G_j^2 → p_j^2).
* At time t3, the pilot achieves the main goal and
communicates the fact to clones. Upon receiving this
information, clones start time-out counters. If a clone
does not reach the main goal before time-out, then it
will probably have its attributes updated to satisfy
the given goal.

Fig. 3. Graphical representation of goal-oriented dead reckoning.
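The clone-side handling of the four phases summarized above can be sketched as a small message dispatcher. The class, the method names and the devise_plan placeholder are illustrative assumptions, not the framework's API; devise_plan stands in for the clone's planner (E_j^t, S_j^t, goal → plan).

```python
# Sketch of the clone side of goal-oriented dead reckoning: how the
# three pilot messages (main goal, recovery goal, goal achieved) are
# handled. devise_plan is a placeholder for the clone's planner.

class Clone:
    def __init__(self, d_lim, devise_plan):
        self.d_lim = d_lim            # acceptable inconsistency limit
        self.devise_plan = devise_plan
        self.plans = []               # active plans (prediction or recovery)
        self.main_goal = None
        self.timeout_running = False

    def on_main_goal(self, goal):            # t0: devise a prediction plan
        self.main_goal = goal
        self.plans.append(self.devise_plan(goal))

    def on_recovery_goal(self, goal, d):     # t1: recover only if d > d_lim
        if d > self.d_lim:
            self.plans = [self.devise_plan(goal)]   # suspend prediction

    def on_recovery_done(self):              # t2: take on the main goal again
        self.plans = [self.devise_plan(self.main_goal)]

    def on_goal_achieved(self):              # t3: start the time-out counter
        self.timeout_running = True
```

A clone whose inconsistency stays below d_lim simply ignores recovery messages and keeps executing its prediction plan, which is what makes visually sound divergence between hosts possible.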
Traditional dead reckoning as well as joint angle and
physical action dead reckoning are special cases of goal-
oriented dead reckoning. In other words, previous
versions can be described in terms of prediction and
recovery mechanisms, considering purely physical goals,
of course. The traditional form is normally applied to
vehicles that are guided by joysticks or similar devices.
In this case, the final state cannot be predetermined and
the main goal is actually unknown. Therefore, the main
goal and achieved goal messages are never sent to
clones. The prediction plan is reduced to the second
order polynomial approximation, or some equivalent
algorithm, that depends only on the entity's present
attributes (S_j^t → p_j^t). Update messages are special cases of
recovery messages where the transmitted recovery goals
are purely physical states. The convergence algorithm,
on the other hand, is a particular type of recovery plan
that does not depend on the environment’s conditions
(S_j^t, g_j^t → p_j^t). In traditional dead reckoning, clones con-
verge to the pilot's state every time an update message is
received. This is equivalent to setting the inconsistency
limit to zero (d_lim = 0) in the goal-oriented dead
reckoning algorithm. The same reasoning holds true
for joint angle dead reckoning, which is traditional dead
reckoning applied to the several parts of an articulated
entity. In physical action dead reckoning, the main goal
is given in the form of the action to execute in order to
achieve it. Clones never recover. This is, of course, a
more restrictive form of goal-oriented dead reckoning
where clones cannot choose the sequence of actions to
perform and where recovery is not considered. The
properties of the various types of dead reckoning are
summarized in Table 1. The qualitative values (high,
medium, low) presented in the table are not a result of
theoretical calculations or empirical measurements; they
are only given as indications of the potentialities of each
type of dead reckoning.
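As an illustration of the degenerate prediction plan mentioned above, the second-order polynomial approximation extrapolates the pilot's state from its last reported position, velocity and acceleration. The one-dimensional sketch below shows the standard formula; it is not this paper's implementation.

```python
# Second-order polynomial dead reckoning: a clone extrapolates the
# pilot's position from the last reported position, velocity and
# acceleration. One-dimensional sketch of the standard formula.

def predict(p0, v0, a0, dt):
    """Predicted position after dt seconds: p0 + v0*dt + 0.5*a0*dt^2."""
    return p0 + v0 * dt + 0.5 * a0 * dt * dt

# Last update: position 2.0, velocity 1.0 m/s, acceleration 0.5 m/s^2.
print(predict(2.0, 1.0, 0.5, dt=2.0))   # prints 5.0
```

With d_lim = 0, every recovery (update) message triggers the convergence step, which recovers exactly the traditional behavior as a special case.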
As a generalization, the goal-oriented dead reckoning
allows the degree of autonomy to be adapted to virtual
component type and to environment needs. In fact, even
clones of non-autonomous characters can be given some
autonomy to avoid overlapping.
9. Experimental results
In order to implement prototypes and obtain experi-
mental results, a framework for networked autonomous
characters is being developed. The framework is built on
top of Bamboo [19,20], an open-architecture language
independent toolkit for developing dynamically exten-
sible, real-time, networked applications. Bamboo’s de-
sign focuses on the ability of the system to configure
itself dynamically, which allows applications to take on
new functionality after execution. Bamboo is particu-
larly useful in the development of scalable virtual
environments on the Internet because it facilitates the
discovery of virtual components on the network at
runtime.
The framework for distributed autonomous charac-
ters offers a set of classes that provides the basic
functionality needed for shared state management using
Table 1
Comparison of the various dead reckoning types

                          Traditional        Joint-angle        Physical action    Goal-oriented
                          dead reckoning     dead reckoning     dead reckoning     dead reckoning

Main goal                 Physical           Physical           Physical           Physical/mental
                          (unknown)
Recovery goal             Physical           Physical           n/a                Physical/mental
Prediction plan           Polynomial         Kalman filtering   Action scripts     Devised by clone
Recovery plan             Convergence        State update       n/a                Devised by clone
                          algorithm
Minimum recovery rate     Medium (based      High (based on     n/a                Low (based on
                          on state error)    state error)                          environment)
Consistency               Physical (high)    Physical (high)    Physical (medium   Goal (low to
                                                                to high)           high)
Maximum visual soundness  Medium             Medium             Medium             High
Maximum clone autonomy    None               None               Medium             High
goal-oriented dead reckoning. It also supports frequent
state regeneration and previous versions of dead
reckoning as special cases.
Several examples were run in a Windows NT
environment producing good results, although perfor-
mance was not formally investigated. Two of these
examples are shown in Figs. 1 and 2. Fig. 4 illustrates
the case where two characters, Mr. and Mrs. Blue, are
ordered to achieve the ‘‘very happy’’ state, a
purely mental goal. Mr. and Mrs. Blue become happier
as they approach each other. When they get close,
they both show that they are very happy by raising their
arms. Since the goal does not specify physical attributes,
the final positions of Mr. and Mrs. Blue are different at
each host.
10. Conclusions and future work
Autonomy of virtual humans or A-Life entities is
not a well-explored concept in the area of networked
virtual environments. On the other hand, most of the
people working on autonomous characters are not
focused on shared state management over a computer
network. This paper explores a common view amongst
these areas of research and proposes innovative concepts
for shared state management. Firstly, state consistency
is defined in terms of physical and mental states.
Secondly, visual soundness, a concept ‘‘inversely
proportional’’ to state consistency, is associated with the
autonomy of clones (regardless of the pilot's nature, a
user or an autonomous creature). Finally, a new
extension to the concept of dead reckoning is proposed,
which is particularly advantageous for autonomous
characters. The advantages and worries of this new
approach, called goal-oriented dead reckoning, are
presented below.
Although it has already been mentioned that state
prediction may be object-specific and that dead reckoning
should handle any type of shared state [1], these two
ideas are not clearly explained in the literature. For
instance, the authors of Improv [11] do not present this
system in terms of shared state management, although
its scripted events have been recognized as a form of
object-specific state prediction elsewhere [1]. JackMOO
[9] and VLNET [10] are also impressive works on
distributed virtual actors but in these systems, like in
Improv, clones have very limited autonomy. As far as
collision is concerned, real-time distributed collision
agreement for dead reckoning algorithms is not clearly
addressed in the literature [1], and the existing systems
probably fail the test of visual soundness. Most of the
interesting dead reckoning extensions, such as the
position history-based protocol [21], produce smooth
animation but cannot cope with undesirable collisions
that happen when the predicted trajectory differs from
the real one. One possible solution for many of the
above-mentioned drawbacks is to relax the concept of
dead reckoning by exploring the idea of autonomy.
Therefore, this paper proposes the goal-oriented dead
reckoning, which allows clones to be empowered by
autonomy in order to produce visually sound networked
virtual environments.
The proposed shared state management mechanism
also works in favor of scalability in the sense that it
tends to substantially reduce the number of transmitted
network messages. Moreover, it is expected that the
contributions of the goal-oriented dead reckoning, with
respect to scalability, become more significant as mental
models are improved, since clones will tend to need fewer
instructions from the pilot.
It should be mentioned that, like physical action dead
reckoning, the goal-oriented dead reckoning requires
that special care be given to the interaction between
autonomous characters and moving objects. If, for
example, while ‘‘Mr. Green’’ is dancing, a ball is thrown,
it might hit ‘‘him’’ at one host and not hit ‘‘him’’ at
another host. Either the ball’s trajectory is allowed to be
different at each host, which will most likely make pilot
and clones end up at different positions, or the ball is
forced to always follow its pilot’s path, which will
probably produce unrealistic scenes.
As compared with previous versions, which are just
special cases, the generalized prediction and recovery
mechanisms are, of course, harder to program. A greater
share of each host’s computational capacity is also
needed. Clones that possess autonomy, as in physical
action or goal-oriented dead reckoning, require a more
elaborate initialization. When a new participant joins
the environment, existing actors may be executing
an action or fulfilling a goal. Therefore, initialization
of clones at the new user’s host cannot be done
with a simple state update. If, to increase scalability,
the environment is divided into areas of interest associated
with multicast groups, then clones must be created
and initialized as soon as the pilot crosses region
boundaries.
There are several topics for future work. For instance,
performance of the goal-oriented dead reckoning and its
impact on scalability should be formally investigated.
Furthermore, a complexity analysis of the proposed
algorithm is required. Recovering criteria should also be
studied in more detailsFon the pilot’s side, this
concerns the set of rules used to dynamically adapt the
recovery message rate; on the clones’ side, the focus is on
the set of rules used to trigger recovery. Complex
conditional actions and complex plans including parallel
actions should also be considered. Moreover, resource
management techniques for scalability need to be
carefully examined. Real-time distributed collision
agreement is another important issue for further
research.
Fig. 4. Mr. and Mrs. Blue go after the ‘‘very happy’’ state.
Acknowledgements
The authors would like to thank the CNPq for the
financial support.
Appendix
goal_oriented_dead_reckoning
{
  vector<plan> plans_of_pilot[MAX_PILOTS];
  vector<plan> plans_of_clone[MAX_CLONES];
  vector<goal> main_goals_of_clone[MAX_CLONES];
  inconsistency d[MAX_CLONES];
  const inconsistency dlim[MAX_CLONES];

  /* prediction and recovery mechanisms */
  while (true)
    for (each pilot Pi)
      /* prediction: send main goals and achieved goals */
      if (new goal G_i^t)
        send reliable message with goal G_i^t to clones;
        E_i^t, S_i^t, G_i^t → p_i^t; insert p_i^t in plans_of_pilot[i];
      endif
      for (each pk in plans_of_pilot[i])
        execute a plan step;
        if (S_i^t satisfies pk's goal)
          send reliably to clones "Gk achieved";
          erase pk;
        endif
      endfor
      /* recovery: send recovery messages */
      if ((plans_of_pilot[i] is not empty) and (recovery message is needed))
        send recovery goal g_i^t to clones;
      endif
    endfor
    for (each clone Cj)
      if (new message)
        /* prediction: treat main goal */
        if (message carries main goal G)
          if (no main goal is on timeout count)
            insert G in main_goals_of_clone[j];
            if (recovery plan is not running)
              E_j^t, S_j^t, G → p_j^t; put p_j^t in plans_of_clone[j];
            endif
            indicate that message has been treated;
          endif
        /* prediction: treat achieved goal */
        else if (message is of type "G achieved")
          if (G is in main_goals_of_clone[j])
            start timeout counter for G;
            indicate that message has been treated;
          endif
        /* recovery: treat recovery messages */
        else if (message carries recovery goal g)
          if (d[j] > dlim[j])
            empty plans_of_clone[j];
            E_j^t, S_j^t, g → p_j^t; mark p_j^t as recovery plan;
            insert p_j^t in plans_of_clone[j];
          else if (recovery plan is running)
            erase recovery plan;
            for (each Gk in main_goals_of_clone[j])
              E_j^t, S_j^t, Gk → p_j^t; insert p_j^t in plans_of_clone[j];
            endfor
          endif
          indicate that message has been treated;
        endif
      endif
      /* prediction and recovery: plan execution */
      for (each pk in plans_of_clone[j])
        execute a plan step;
        if (S_j^t satisfies pk's goal)
          if (pk is the recovery plan)
            for (each Gn in main_goals_of_clone[j])
              E_j^t, S_j^t, Gn → p_j^t; insert p_j^t in plans_of_clone[j];
            endfor
          else
            erase pk's goal from main_goals_of_clone[j];
          endif
          erase pk;
        endif
      endfor
      /* prediction: goal timeout */
      for (each Gk in main_goals_of_clone[j])
        if (timeout)
          erase pk from plans_of_clone[j];
          force S_j^t to satisfy Gk;
          erase Gk;
        endif
      endfor
    endfor
  endwhile
}
References
[1] Singhal S, Zyda M. Networked virtual environments: design
and implementation. New York: ACM Press, 1999.
[2] Reynolds CW. Flocks, herds, and schools: a distributed
behavioral model. Computer Graphics (Proceedings of
SIGGRAPH) 1987;21(4):25–34.
[3] Bates J, Loyall AB, Reilly WS. Integrating reactivity,
goals, and emotion in a broad agent. Technical Report
CMU-CS-92-144, School of Computer Science, Carnegie-
Mellon University, Pittsburgh, PA, 1992.
[4] Tu X, Terzopoulos D. Artificial fishes: physics, locomo-
tion, perception, behavior. Computer Graphics (Proceed-
ings of SIGGRAPH) 1994;28(3):43–50.
[5] Costa M, Feij !o B. Agents with emotions in behavioral
animation. Computer & Graphics 1996;20(3):377–84.
[6] Maldonado H, Picard A, Doyle P, Hayes-Roth B. Tigrito:
a multi-mode interactive improvisational agent. Stanford
Knowledge Systems Laboratory Report KSL-97-08, Stan-
ford University, 1997.
[7] Badler N, Bindiganavale R, Bourne J, Palmer M, Shi J,
Schuler W. A parameterized action representation for
virtual human agents. In: Proceedings of First Workshop
on Embodied Conversational Characters (WECC 1998),
Lake Tahoe, CA, 1998.
[8] Bécheiraz P, Thalmann D. A behavioral animation system
for autonomous actors personified by emotions. In:
Proceedings of First Workshop on Embodied Conversa-
tional Characters (WECC 1998), Lake Tahoe, CA, 1998.
[9] Shi J, Smith TJ, Granieri J, Badler NI. Smart avatars in
JackMOO. In: Proceedings of the 1999 Virtual Reality
Conference (VR’99), IEEE, Texas, USA, 1999. p. 156–63.
[10] Capin TK, Pandzic IS, Thalmann NM, Thalmann D.
Realistic avatars and autonomous virtual humans in
VLNET networked virtual environments. In: Earnshaw R,
Vince J, editors. Virtual worlds in the internet. CA: IEEE
Computer Society Press, 1998. p. 157–74 ([Chapter 8]).
[11] Perlin K, Goldberg T. Improv: a system for scripting
interactive actors in virtual worlds. In: Proceedings of
SIGGRAPH’96, New Orleans, LA, 1996. p. 205–16.
[12] Cliff D, Grand S. The creatures global digital ecosystem.
Artificial Life, Winter 1999;5(1):77–93.
[13] Dahmann J, Weatherly R, Kuhl F. Creating computer
simulation systems: an introduction to the high level
architecture. NJ: Prentice-Hall, 1999.
[14] Anupam V, Bajaj C. Distributed and collaborative
visualization. IEEE Multimedia 1994;1(2):39–49.
[15] Doom. id Software Web site, http://www.idsoftware.com/
(as in August 28, 2000).
[16] Turing AM. Computing machinery and intelligence. Mind
1950;59:433–460.
[17] Capin TK, Pandzic IS, Thalmann D, Thalmann NM. A
dead-reckoning algorithm for virtual human figures. In:
Proceedings of the 1997 Virtual Reality Annual Interna-
tional Symposium (VRAIS’97), IEEE, Albuquerque, USA,
1997. p. 161–9.
[18] Living Worlds Web site, http://www.vrml.org/WorkingGroups/living-worlds/draft 2/ (as in April 5, 2000).
[19] Watsen K, Zyda M. Bamboo: a portable system for
dynamically extensible, networked, real-time virtual
environments. In: Proceedings of the 1998 Virtual Reality
Annual International Symposium (VRAIS’98), IEEE,
Atlanta, GA, 1998. p. 252–9.
[20] Bamboo Web site, http://npsnet.org/~watsen/Bamboo/
index.html (as in August 28, 2000).
[21] Singhal SK, Cheriton DR. Using a position history-based
protocol for distributed object visualization. In: Designing
real-time graphics for entertainment. Course Notes for
SIGGRAPH’94, Course #14, July 1994 [Chapter 10]. Also
available as Technical Report STAN-CS-TR-1505,
Department of Computer Science, Stanford University,
1994.