Spatial Awareness in Full-Body Immersive Interactions: Where Do We Stand?

Ronan Boulic1, Damien Maupu1, Manuel Peinado2, Daniel Raunhardt3

1 VRLAB, Ecole Polytechnique Fédérale de Lausanne, station 14, 1015 Lausanne, Switzerland

2 Universidad de Alcalá, Departamento de Automática, Spain

3 BBV Software Service AG, Zug, Switzerland

{Ronan.Boulic,Damien.Maupu}@epfl.ch, {Manuel.Peinado, Daniel.Raunhardt}@gmail.com

Abstract. We are interested in developing real-time applications such as games or virtual prototyping that take advantage of the user's full-body input to control a wide range of entities, from a self-similar avatar to any type of animated character, including virtual humanoids with differing size and proportions. The key issue is, as always in real-time interactions, to identify the key factors that should receive computational resources to ensure the best user interaction efficiency. For this reason we first recall the definition and scope of such essential terms as immersion and presence, while clarifying the confusion existing in the fields of Virtual Reality and Games. This is done in conjunction with a short literature survey relating our interaction efficiency goal to key inspirations and findings from the field of Action Neuroscience. We then briefly describe our full-body real-time postural control with proactive local collision avoidance. The concept of obstacle spherification is introduced both to reduce local minima and to decrease the user's cognitive load while interacting in complex environments. Finally we stress the interest of egocentric environment scaling, which matches the user's egocentric space to that of a height-differing controlled avatar.

Keywords: spatial awareness, immersion, presence, collision avoidance.

1 Introduction

Full-body interaction has a long history in Virtual Reality and is starting to achieve significant commercial success with dedicated products. However, exploiting full-body movements is still far from its full potential. Current spatial interactions are limited to the control of severely restricted gesture spaces with limited interactions with the environment (e.g. the ball of a racquet game). In most games the postural correspondence of the player with the corresponding avatar posture generally doesn't matter as long as the involvement is ensured and preserved over the whole game duration. We are interested in achieving two goals: 1) increasing the user's spatial awareness in complex environments through a closer correspondence of the user and avatar postures, and 2) impersonating potentially widely different entities, ranging from a self-similar avatar to any type of animated character, including virtual

humanoids with differing size and proportions. These two goals may sound contradictory when the animated character differs markedly from the user, but we are convinced that identifying the boundary conditions of distorted self-avatar acceptance is a useful long-term goal. For the time being, the core issue remains, as always in real-time interactions, to identify the key factors that should receive computational resources to ensure the best interaction efficiency, whether as a gamer or as an engineer evaluating a virtual prototype for a large population of future users. Within this frame of mind it is important to recall the definition and scope of such essential terms as immersion and presence, while clarifying the confusion existing in the fields of Virtual Reality and Games. This is addressed in the first part of Section 2, in conjunction with a literature survey relating our interaction efficiency goal to key inspirations and findings from the field of Action Neuroscience. The second part of this background section deals with the handling of collision avoidance in real-time interactions. Section 3 then briefly describes our full-body real-time postural control with proactive local collision avoidance. The concept of obstacle spherification is introduced both to reduce local minima and to decrease the user's cognitive load while interacting in complex environments. Finally we stress the interest of egocentric environment scaling, which matches the user's egocentric space to that of a height-differing controlled avatar.

2 Background

Spatial awareness, in our context, is the ability to infer one's interaction potential in a complex environment from one's continuous sensorimotor assessment of the surrounding virtual environment. It is only one aspect of the feedback and awareness considered necessary to maintain fluent collaboration in virtual environments [GMMG08]. We first address terminology issues in conjunction with a brief historical perspective. We then recall the key references on handling collision avoidance for human or humanoid robot interactions.

2.1 What Action Neuroscience tells us about Immersive Interactions

The dualism of mind and body from Descartes is long gone, but no single alternative theory is yet able to replace it as an explanatory framework integrating both human perception and action into a coherent whole. Among the numerous contributors of alternate views to this problem, Heidegger is often acknowledged as the first to formalize the field of embodied interactions [D01] [MS99] [ZJ98] through the neologisms ready-to-hand and present-at-hand [H27]. Both were illustrated through the use of a hammer: the hammer is ready-to-hand when used in a standard task; in such a case it recedes from awareness as if it became part of the user's body. The hammer becomes present-at-hand when it is unusable and has to be examined to be fixed. Most human activity is spent in the readiness-to-hand mode, similar to a subconscious autopilot mode. On the other hand, human creativity emerges through the periods in the present-at-hand mode, where problems have to be faced and solved.

We are inclined to see a nice generalization of these two modes in the work of the psychologist Csikszentmihalyi, who studied autotelic behaviors, i.e. self-motivating activities, of people who were deeply involved in a complex activity without direct rewards [NvDR08][S04]. Csikszentmihalyi asserted that autotelicity arises from a subtle balance between the exertion of available skills and addressing new challenges. He called flow the strong form of enjoyment resulting from the performance of an autotelic activity, in which one easily loses the sense of time and of oneself [S04]. This term is now in common use in the field of game design, and even teaching, together with the term involvement. Both terms are most likely associated with the content of the interactive experience (e.g. the game design) rather than the form of the interaction (the sensory output, which is generic across a wide range of game designs).

The different logical levels of content and form have not been used consistently in the literature about interactive virtual environments, hence generating some confusion about the use of the word presence (e.g. in [R03][R06]). As clearly stated by Slater, presence has nothing to do with the content of an interactive experience but only with its form [S02]. It qualifies the extent to which the simulated sensory data convey the feeling of being there, even if cognitively one knows not to be in a real life situation [S03]. As Slater puts it, a virtual environment system can induce a high level of presence, yet one may find the designed interaction plain boring. Recently Slater has opted for the expression Place Illusion (PI) in lieu of presence, due to the confusion described above [RSSS09]. While PI refers to the static aspects of the virtual environment, the additional term Plausibility (Psi) refers to its dynamic aspects [RSSS09]. Both constitute the core of a new evaluation methodology presented in [SSC10].

A complementary view explaining presence/PI is also grounded in the phenomenological approach of Heidegger and the more recent body of work of Gibson [FH98] [ZJ98] [ON01], both characterized by the ability to 'do' there [SS05]. However, it is useful to recall the findings of Jeannerod [J09] that question the validity of the Gibsonian approach [ON01]. They are based on the display of point-light movement sequences inspired by the original study of Gunnar Johansson on the perception of human movement [J73]. The point-light sequences belong to three families: movement without or with human character, and for this latter class, without meaning (sign language) or with known meaning (pantomime) for the subjects. Movements without human character appear to be processed only in the visual cortex, whereas those with human character are treated in two distinct regions depending on whether they are known (ventral "semantic" stream) or unknown (dorsal "pragmatic" [J97] goal-directed stream). One particularly interesting case is that an accelerated human movement loses its human character and is processed only in the visual cortex. These findings have the following consequences for our field of immersive interactions when interacting with virtual humans. First, it is crucial to respect the natural dynamics of human entities when animating virtual humans, otherwise the movement may not be perceived as human at all. Second, human beings have internal models of human actions that are activated not only when they perform the actions themselves but also when they see somebody else perform them. It is hence reasonable to believe that a virtual human performing good-quality movements activates the corresponding internal models of immersed subjects. Viewed known and unknown human movements are treated

along different neural streams, one of which might be more difficult to verbalize a posteriori in a questionnaire, as it has no semantic information associated with it.

As suggested above, it can be more difficult to assess presence through the usual means of questionnaires, as actions performed in this mode are not performed at a conscious level. In general, alternatives to questionnaires have to be devised, such as the comparison with the outcome that would occur if the action were performed in a real world setting. Physiological measurements are particularly pertinent, as Paccalin and Jeannerod report that simply viewing someone performing an action with effort induces heart and breathing rate variations [PJ00].

Introduced at the same time as presence, the concept of immersion refers to the objective level of sensory fidelity a Virtual Reality system provides [SS05]. For example, the level of visual immersion depends only on the system's rendering software and display technology [BM07]. Bowman et al. have chosen to study the effect of the level of visual immersion on application effectiveness by combining various immersion components such as field of view, field of regard (total size of the visual field surrounding the user), display size, display resolution, stereoscopy, frame rate, etc. [BM07].

2.2 Handling Collisions for Full-Body Avatar Control

Numerous approaches have been proposed for the on-line full-body motion capture of the user; we refer the interested reader to [A] [TGB00] [PMMRTB09] [UPBS08]. A method based on normalized movement allows retargeting the movement onto a large variety of human-like avatars [PMLGKD08][PMLGKD09].
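The idea behind height-normalized retargeting can be sketched as follows. This is a deliberately naive illustration (a uniform scaling about the root joint), not the actual morphology-independent method of [PMLGKD08][PMLGKD09], and the function name is our own:

```python
import numpy as np

def retarget_position(pos, root, user_height, avatar_height):
    """Naive height-normalized retargeting sketch: express a tracked
    position relative to the root joint, normalize it by the user's
    body height, then rescale it to the avatar's body height."""
    normalized = (np.asarray(pos, dtype=float) - root) / user_height  # dimensionless pose
    return root + normalized * avatar_height
```

For instance, the hand of a 1.8 m user held 1.5 m above the root maps to 1.0 m above the root of a 1.2 m avatar.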

Fig. 1. (a) In the rubber band method, the avatar’s body part (white dot) remains tangent to the obstacle surface while the real body part collides (black dot). (b) In the damping method [PMMRTB09], whenever a real body part enters the influence area of an obstacle (frame 2), its displacement is progressively damped (2-3) to ensure that no interpenetration happens (4-6). No damping is exerted when moving away from the obstacle (7).

Spatial awareness includes the proper handling of the avatar's collisions, including self-collisions [ZB94]. In fact, the control of a believable avatar should avoid collisions proactively rather than reactively; this reflects the behavior observed in monkeys [B00]. In case an effective collision happens between the user and the virtual environment, the standard approach is to repel the avatar, hence inducing a collocation error; however, such an error is less disturbing than the visual sink-in that would occur

otherwise [BH97] [BRWMPB06] (hand only) [PMMRTB09] (full body). Both of these approaches are based on the rubber band method (Fig. 1a), except that the second one adds a damping region surrounding selected body parts to enforce the proactive damping of segment movements towards obstacles (Fig. 1b, Fig. 2). The original posture variation damping was proposed in [FT87] and extended to a multiple-priority IK architecture in [PMMRTB09].

Fig. 2. Selective damping of the arm movement component towards the obstacles; a line is displayed between a segment and an obstacle [PMMRTB09].

3 Smoothing Collision Avoidance

The damping scheme recalled in the previous section is detailed in Fig. 3 in the simplified context of a point (called an observer) moving towards a planar obstacle. Only the relative displacement component along the normal is damped. Such an approach may produce a local minimum in on-line full-body interaction whenever an obstacle lies between a tracked optical marker and the following avatar segment (equivalent to the point from Fig. 3). In such a case the segment attraction is counter-balanced by the damping, as visible in Fig. 4a.

Fig. 3. Damping in the influence area of a planar obstacle for a point-shaped observer with a relative movement towards the obstacle. The relative normal displacement δn is damped by a factor (d/D)² [PMMRTB09].
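The damping rule of Fig. 3 can be sketched as follows for a point-shaped observer near a planar obstacle; the function name and argument layout are ours, and the obstacle is assumed to be given by a point on its surface, a unit normal pointing into free space, and an influence distance D as in [PMMRTB09]:

```python
import numpy as np

def damp_displacement(pos, delta, obs_point, obs_normal, D):
    """Damp the component of a requested displacement `delta` that moves a
    point-shaped observer at `pos` towards a planar obstacle (Fig. 3 sketch).
    Only the normal approach component is damped, by the factor (d/D)**2."""
    n = obs_normal / np.linalg.norm(obs_normal)
    d = float(np.dot(pos - obs_point, n))   # distance to the obstacle plane
    if d >= D:                              # outside the influence area
        return delta
    delta_n = float(np.dot(delta, n))       # normal displacement component
    if delta_n >= 0.0:                      # moving away: no damping
        return delta
    delta_t = delta - delta_n * n           # tangential part is untouched
    # damp the approach component so it vanishes at the obstacle surface
    return delta_t + delta_n * (d / D) ** 2 * n
```

For example, halfway inside the influence area (d = 0.5, D = 1) the approach component is multiplied by 0.25 while lateral motion passes through unchanged.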

The present section proposes a simple and continuous alteration of an obstacle's normals so that, from far away, the obstacle appears as if it were a sphere, whereas it progressively reveals its true normals as the controlled segment gets closer to its surface. Obstacle shapes offering large flat surfaces are ideal candidates for the spherification, as we call it (see Fig. 4, bottom row). The concept is simple to put into practice. Instead of using the normal to the obstacle n, we replace it by nf, a combination of n and a "spherical" normal ns, which is obtained as a vector taken from the obstacle centroid (see Fig. 4b). We have:

nf = NORMALIZE((1-k)n + k ns) (1)

with

k = (d/D)^(1/m). (2)

In Eq. 1, k is a spherification factor which ranges from 0 at the obstacle’s surface, to 1 at the boundary of its influence area (d=D). The rationale for this factor is that we can "spherify" a lot when we are far away from the obstacle (k=1), but not when close to it (k<<1) because using an altered normal could lead to a collision.
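Eqs. 1 and 2 can be sketched as follows, reading Eq. 2 as k = (d/D)^(1/m); the function name is ours, and ns is assumed to point from the obstacle centroid towards the controlled point:

```python
import numpy as np

def spherified_normal(n, point, centroid, d, D, m=0.5):
    """Blend the true obstacle normal n with a 'spherical' normal ns
    taken from the obstacle centroid towards the controlled point.

    d : distance from the point to the obstacle surface (0 <= d <= D)
    D : radius of the obstacle's influence area
    m : degree of spherification (m = 0.5 in the paper)
    """
    ns = point - centroid
    ns = ns / np.linalg.norm(ns)
    k = (d / D) ** (1.0 / m)       # Eq. 2: 0 at the surface, 1 at d = D
    nf = (1.0 - k) * n + k * ns    # Eq. 1, before normalization
    return nf / np.linalg.norm(nf)
```

At the surface (d = 0) the true normal n is returned unchanged, while at the boundary of the influence area (d = D) the purely spherical normal ns is returned.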

Fig. 4. (Top row) Local minimum due to a balance of the damping with the attraction towards a goal (left), the two vectors contributing to the spherification (middle), and the resulting behavior (right); (bottom row) various degrees of spherification, from null (left) to high (right).

Finally, the degree of spherification m is a user-defined parameter which can be increased to produce more aggressive spherification that helps solve difficult scenarios. Figure 4 (bottom row) shows the normals generated by this method for a box-shaped obstacle and different values of m. The value m = 0.5 is used throughout the rest of this paper. Figure 4 (top row) shows how a scenario with blockage is solved in a smooth manner thanks to the spherification of the obstacle. It is still possible to devise situations where spherification alone cannot prevent blockage, but we deem the technique worthwhile nevertheless because it offers a significant improvement in many common situations at virtually no computational cost.

This technique has been integrated into the combined IK postural control and collision avoidance presented in [PMMRTB09]. Additional illustrations of on-line use can be seen in Fig. 6, but it is even more interesting for off-line simulation of complex reach tasks, as described in [PBRM07] and briefly recalled now. The virtual human being totally autonomous in such a context, a deliberative software layer is required to avoid the local minima that are not handled by the spherification. For that purpose we introduced the task-surface heuristic (Fig. 5), which checks whether an obstacle initially lies within the triangle set built from the effector location, its reach goal and all parent joint locations (Fig. 5a,b). If so, a set of candidate paths is built as described in [ABD98] and the best path is selected according to a fitness criterion dedicated to reach tasks and optimizing the shoulder joint mobility [PBRM07]. Fig. 5c highlights the selected path and its performance; note that the torso and arm segments benefit from the proactive collision avoidance with respect to the box.

Fig. 5. Triangle task heuristic for a simple chain (a) or a virtual human (b); selected optimal path with respect to the shoulder mobility (c) [PBRM07].

4 Spatial Awareness with Height-Differing Avatars

Spatial awareness is also strongly related to the relative height of the target avatar. The issue we examine in this section can be formulated as follows: how should we render the virtual environment to best convey the experience of an avatar whose body height differs from the user's? We consider this problem with a third-person viewpoint setup as in Fig. 6a. Our motivation is to benefit from the large field of view provided by the 3 m x 2.3 m screen. This figure illustrates an extreme case of an adult wishing to interact in a virtual kitchen as if he were as tall as a child (Fig. 6c).

Fig. 6. Illustration of the embodiment and awareness issues: how should we control a height-differing avatar while automatically managing collision avoidance? (a) Immersive display with active marker motion capture system, (b) control of a self-height avatar, (c) control of the child avatar with the visuocentric strategy, (d) egocentric scaling strategy: the virtual environment is scaled as experienced by the child avatar [BMT09].

We have conducted an experiment in a simpler setup than that of Fig. 6 (details in [BMT09]) for a range of reach tasks with three scaling strategies:

Reference strategy: control of a same-height avatar (display analogous to Fig. 6a,b). Both the visual display and the postural sensor data (active optical markers visible in Fig. 6a) are presented and exploited at scale 1/1. Its purpose is to calibrate the reaching duration as a function of target heights.

Visuocentric scaling strategy: we could call it the puppeteer strategy, as the user sensor data are scaled by the body height ratio to match the body height of the avatar

(display analogous to Fig. 6c). The "puppeteer" subject has to rely heavily on the visual feedback to guide the avatar in the reach tasks.

Egocentric scaling strategy: this strategy does not change the sensor data but instead scales the displayed environment by the inverse ratio of body heights, ensuring an exact correspondence between the user's egocentric space and that of the controlled avatar. Hence the user becomes the puppet, as opposed to the puppeteer strategy.
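Both scaling strategies amount to a uniform scale applied to a different data stream. The sketch below assumes scaling about a common origin (in practice the scaling pivot, e.g. the point on the floor below the user, matters) and the function names are ours:

```python
import numpy as np

def visuocentric_markers(markers, user_height, avatar_height):
    """Puppeteer strategy: scale the user's sensor data to the avatar's
    body height; the displayed environment is left unchanged."""
    s = avatar_height / user_height
    return [np.asarray(m, dtype=float) * s for m in markers]

def egocentric_environment(vertices, user_height, avatar_height):
    """Egocentric strategy: leave the sensor data unchanged and scale the
    displayed environment by the inverse height ratio, so that the user's
    egocentric space matches the avatar's."""
    s = user_height / avatar_height
    return [np.asarray(v, dtype=float) * s for v in vertices]
```

For a 1.8 m user controlling a 1.2 m child avatar, a head marker at 1.8 m is mapped to 1.2 m under the visuocentric strategy, while a kitchen shelf at 1.2 m is displayed at 1.8 m under the egocentric strategy.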

The [BMT09] study quantified the reaching response duration as a function of a set of target heights, from very low to above the head but always reachable. The targets were either in free space or on a shelf, and the subject was controlling either a simple solid collocated with the hand (baseline) or a full body avatar.

Despite great performance variations between subjects, performances remained coherent per subject. We therefore performed a per-subject duration normalization, and a normalization of the target heights by the subject's body height, to obtain comparable results for an analysis of variance. The egocentric strategy provided better-fitted reach duration characteristics in the baseline case (a single solid was controlled). The baseline and the avatar control cases provided similar performance only when reaching in free space. The subjects were significantly slower to reach all the targets when they were displayed on a shelf, although the targets were located at the same relative distance. The control of height-differing avatars with both scaling strategies led to performances consistent with the height differences. Only for the lowest target did the visuocentric strategy lead to a significantly slower reach compared to the egocentric scaling. Overall the egocentric scaling showed less variance, which suggests it is more appropriate for enforcing spatial awareness.
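[BMT09] does not detail the exact normalization here; one plausible reading, dividing each subject's durations by that subject's mean and expressing target heights as fractions of body height, can be sketched as:

```python
import numpy as np

def normalize_durations(durations_by_subject):
    """Per-subject normalization sketch: divide each subject's reach
    durations by that subject's mean duration, so that subjects with
    different overall speeds become comparable."""
    return {s: np.asarray(d, dtype=float) / np.mean(d)
            for s, d in durations_by_subject.items()}

def normalize_target_height(target_height, body_height):
    """Express a target height as a fraction of the subject's body height."""
    return target_height / body_height
```

After this normalization, a value of 1.0 means an average-duration reach for that subject, and a target at half the subject's body height is 0.5 regardless of the subject.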

One key finding of the study was that a nearby obstacle (the shelf) can significantly alter the performance of subjects when they control an avatar instead of a simple solid. We suspect that subjects took great care to cognitively avoid the shelf (no collision avoidance algorithm was implemented). This supports the introduction of such a collision avoidance algorithm to relieve the user from a management task that induces delay and cognitive fatigue.

5 Conclusion

Immersive interaction lies at the intersection of various complementary research fields. On the perception-action level, the intense debates about the nature of human embodiment from the past century have highlighted that most human sensorimotor activities occur at a "not explicitly conscious" cognitive level, e.g. through internal motor models [J97, J09]. This dimension is still not fully addressed in immersive interaction. We believe it is highly desirable to reproduce the same cognitive pattern while interacting with the full body, by freeing the user from any conscious cognitive load of explicitly managing the spatial relationships between the controlled avatar and the virtual environment. We have presented some of our efforts towards relieving the user of handling collision avoidance in complex environments, by proactively damping body segment movements along the obstacle normal and by altering these normals to ease bypassing the obstacles. Further research

will evaluate sensorimotor tasks performed by increasingly differing avatars to assess the boundaries of avatar embodiment in task-driven contexts and thus to enlarge the scope of immersive interaction applications [N].

Acknowledgments. This work has been partly supported by the Swiss National Foundation under the grant N° 200020-117706 and by the University of Alcalá under grant PI2005/083. Thanks to Mireille Clavien for the characters design and to Achille Peternier for the MVisio viewer and the CAVE software environment.

References

[A] Autodesk MotionBuilder, http://usa.autodesk.com

[ABD98] Amato, N.M., Bayazit, O.B., Dale, L.K.: OBPRM: An Obstacle-Based PRM for 3D Workspaces. In: WAFR '98 (1998)

[B00] Berthoz, A.: The Brain's Sense of Movement. Chapter "Building coherence", section "Seeing with the skin", pp. 83--86. Perspectives in Cognitive Neuroscience, Harvard University Press (2000)

[BMT09] Boulic, R., Maupu, D., Thalmann, D.: On Scaling Strategies for the Full Body Interaction with Virtual Mannequins. Interacting with Computers, Special Issue on Enactive Interfaces, 21(1-2), 11--25 (2009)

[BH97] Bowman, D., Hodges, L.F.: An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments. In: Symp. I3D, 35--38 (1997)

[BM07] Bowman, D., McMahan, P.: Virtual Reality: How Much Immersion Is Enough? Computer, 40(7), 36--43 (2007)

[BRWMPB06] Burns, E., Razzaque, S., Whitton, M.C., McCallus, M.R., Panter, A.T., Brooks, F.P.: The Hand is More Easily Fooled than the Eye: Users Are More Sensitive to Visual Interpenetration than to Visual-proprioceptive Discrepancy. Presence: Teleoperators and Virtual Environments, 15, 1--15 (2006)

[D01] Dourish, P.: Where the Action Is. MIT Press (2001)

[FH98] Flach, J.M., Holden, J.G.: The reality of experience: Gibson's way. Presence: Teleoperators and Virtual Environments 7(1), 90--95 (1998)

[FT87] Faverjon, B., Tournassoud, P.: A Local Based Approach for Path Planning of Manipulators with a High Number of Degrees of Freedom. In: IEEE Int'l Conf. Robotics and Automation, pp. 1152--1159, IEEE Press, New York (1987)

[GMMG08] García, A.S., Molina, J.P., Martínez, D., González, P.: Enhancing collaborative manipulation through the use of feedback and awareness in CVEs. In: 7th ACM SIGGRAPH Int. Conf. VRCAI '08, ACM, New York (2008)

[H27] Heidegger, M.: Being and Time. Translated into English by John Macquarrie and Edward Robinson. New York: Harper and Row (1962)

[J97] Jeannerod, M.: The Cognitive Neuroscience of Action. Blackwell (1997)

[J09] Jeannerod, M.: Le cerveau volontaire. Odile Jacob Sciences (2009)

[J73] Johansson, G.: Visual perception of biological motion and a model for its analysis. Perception and Psychophysics 14, 201--211 (1973)

[MBRT99] Molet, T., Boulic, R., Rezzonico, S., Thalmann, D.: An architecture for immersive evaluation of complex human tasks. IEEE TRA 15(3) (1999)

[MS99] Murray, C.D., Sixsmith, J.: The Corporeal Body in Virtual Reality. Ethos 27(3), 315--343. The American Anthropological Association (1999)

[N] National Academy of Engineering, http://engineeringchallenges.org/

[NvDR08] Nijholt, A., van Dijk, B., Reidsma, D.: Design of Experience and Flow in Movement-Based Interaction. In: Egges, A., Kamphuis, A., Overmars, M. (eds.) MIG 2008. LNCS, vol. 5277, pp. 166--175. Springer (2008)

[PMMRTB09] Peinado, M., Meziat, D., Maupu, D., Raunhardt, D., Thalmann, D., Boulic, R.: Full-body Avatar Control with Environment Awareness. IEEE CGA, 29(3) (2009)

[PBRM07] Peinado, M., Boulic, R., Raunhardt, D., Meziat, D.: Collision-free Reaching in Dynamic Cluttered Environments. VRLAB Technical Report (2007)

[PMLGKD08] Pronost, N., Multon, F., Li, Q., Geng, W., Kulpa, R., Dumont, G.: Interactive animation of virtual characters: application to virtual kung-fu fighting. In: International Conference on Cyberworlds 2008, Hangzhou, China (2008)

[PMLGKD09] Pronost, N., Multon, F., Li, Q., Geng, W., Kulpa, R., Dumont, G.: Morphology independent motion retrieval and control. The International Journal of Virtual Reality 8(4), 57--65 (2009)

[ON01] O'Regan, J.K., Noë, A.: A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939--1011 (2001)

[PJ00] Paccalin, C., Jeannerod, M.: Changes in breathing during observation of effortful actions. Brain Research, 862, 194--200 (2000)

[R03] Reteaux, X.: Presence in the environment: theories, methodologies and applications to video games. PsychNology Journal, 1(3), 283--309 (2003)

[R06] Reteaux, X.: Immersion, Presence et Jeux Video. In: Geno, S. (ed.) Le Game Design de Jeux Video, Approches de l'Expression Videoludique. L'Harmattan, Paris (2006)

[RSSS09] Rovira, A., Swapp, D., Spanlang, B., Slater, M.: The Use of Virtual Reality in the Study of People's Responses to Violent Incidents. Front. Behav. Neurosci. 3:12 (2009)

[S03] Slater, M.: A note on presence terminology. Presence Connect 3:3 (2003)

[S04] Steels, L.: The autotelic principle. In: Embodied Artificial Intelligence. LNAI, vol. 3139, pp. 231--242 (2004)

[SS05] Sanchez-Vives, M.V., Slater, M.: From presence to consciousness through virtual reality. Nat. Rev. Neurosci. 6(4), 332--339 (2005)

[SSC10] Slater, M., Spanlang, B., Corominas, D.: Simulating virtual environments within virtual environments as the basis for a psychophysics of presence. In: ACM SIGGRAPH 2010 Papers (Los Angeles, California, July 26--30, 2010), Hoppe, H. (ed.), SIGGRAPH '10. ACM, New York (2010)

[TGB00] Tolani, D., Goswami, A., Badler, N.I.: Real-Time Inverse Kinematics Techniques for Anthropomorphic Limbs. Graphical Models 62(5), 353--388 (2000)

[UPBS08] Unzueta, L., Peinado, M., Boulic, R., Suescun, A.: Full-Body Performance Animation with Sequential Inverse Kinematics. Graphical Models, 70(5), 87--104 (2008)

[ZJ98] Zahorik, P., Jenison, R.L.: Presence as being-in-the-world. Presence: Teleoperators and Virtual Environments 7, 78--89 (1998)

[ZB94] Zhao, X., Badler, N.: Interactive Body Awareness. Computer Aided Design, 26(12), 861--866 (1994)