Heterogeneous Data Fusion for an Adaptive Training in Informed Virtual Environment

Loïc Fricoteaux, Indira Mouttapa Thouvenin and Jérôme Olive

Heudiasyc Laboratory

University of Technology of Compiègne
Compiègne, France

Email: [email protected]

Abstract—This paper presents an informed virtual environment (an environment including knowledge-based models and providing an action/perception coupling) for fluvial navigation training. We add an automatic guide to a ship-driving simulator by displaying multimodal aids adapted to the trainees' human perception. To this end, a decision-making module determines the most appropriate aids according to heterogeneous data coming from observations of the learner (his/her mistakes, the risks taken, his/her state determined by using physiological sensors, etc.). The Dempster-Shafer theory is used to merge these uncertain data. The purpose of the whole system is to manage the training almost autonomously, in order to relieve trainers from controlling the whole training simulation. We intend to demonstrate the relevance of taking the learner's state into account and of fusing heterogeneous data with the Dempster-Shafer theory when deciding on the best learner guiding. First results, obtained with a predefined set of data, show that our decision-making module is able to propose guiding well adapted to the trainees, even in complex situations with uncertain data.

I. INTRODUCTION

Training for steering, driving, or piloting has been done with simulators for many years. Their advantage is to train learners without risk, while reducing cost and training time [1]. This is possible thanks to the different parameters of the simulation, which can be easily modified.

As a first platform, we use a fluvial-navigation training simulator called SimNav [2], developed by the CETMEF (Institute for inland and maritime waterways) and the University of Technology of Compiègne (Fig. 1). It allows learners to practice barge steering in a virtual environment using real controls. This simulator includes a trajectory-computation module, Navmer, which provides the position, direction, and speed of the boat in real time, according to the controls operated and the environmental conditions (currents, winds, banks). Such a simulator lets learners practice navigation, familiarize themselves with the controls, and visualize their effects on the behavior of the ship thanks to the physics engine. However, during a training session with such a simulator, trainees do not know whether they have just made a wrong maneuver, whether they have forgotten to perform an action, whether their steering is risky, etc. A trainer has to be present with each learner during the use of the simulator to help them and to control the simulation (to take a break, to redo a difficult passage, etc.).

This work has been funded by the European Union and the Picardie region under the OSE project (fOrmation for fluvial tranSport with an informed virtual Environment). Europe is committed in Picardie with the European Regional Development Fund.

Fig. 1. The fluvial-navigation simulator SimNav

Our proposal is an adaptive training system, added to SimNav, which can automatically display aids according to the learner's performance. We do not want to remove the trainer, but to support him or her. This is made possible by monitoring the learner to extract heterogeneous data about his/her performance. These data are then merged to make a decision about the best guiding to display to the learner. This system is intended to relieve the trainer from controlling the whole training simulation.

In this paper, Section II presents an overview of existing adaptive training systems for learner guiding and of the feedback they use. Section III then describes our adaptive training system, GULLIVER, with an emphasis on learner monitoring and heterogeneous data fusion for decision-making. Section IV gives an application of our approach to learner guiding. Section V presents first evaluations of this approach. We conclude with its advantages.

II. RELATED WORK

A. Adaptive Training Systems for Learner Guiding

Several training systems propose automatic guiding for learners. For example, TRUST [3], a truck-driving training simulator, allows the trainer to add explanations and indications that automatically appear during the simulation to help the learners. In this case, the guiding is automatic but the system is non-adaptive: the explanations are predefined and appear no matter how the learners are driving. Therefore, they are not appropriate for each learner; some learners get too much help and others not enough.

What we use is an adaptive training system, like PEGASE [4], a generic intelligent system for virtual reality learning. It can guide learners according to their errors, their profile, and domain knowledge. However, this system can only be used for procedural tasks, because it needs to know the actions that learners must perform. In our case, fluvial navigation, it is not possible to define procedures of good steering actions, because the situations are too complex. Hence, we use a non-deterministic approach, as in TELEOS [5], a learning environment for orthopedic surgery. With this environment, the learners build their knowledge by interacting with the system. The environment presents a learning situation to the learner in order to allow him/her to make decisions and acquire more experience. Therefore, the system must respond pertinently to the learner's actions so as to encourage him/her to think about those actions.

More precisely, TELEOS provides feedback to the learners based on their actions, their knowledge level, and a standard knowledge model. Its decision-making module is based on an influence diagram that takes the uncertainty of the learner model into account. In our case, we must also make decisions based on uncertain data, coming from the recognition of the learner's activity and of the learner's state. However, we do not use an influence diagram, for two main reasons. First, the conditional probabilities necessary to build an influence diagram are not easy for an expert in fluvial navigation to draw up [6]. Secondly, probabilities cannot model ignorance properly [6], and this is necessary in our case since we have incomplete data (the situation is not perfectly known). For these reasons, we use the Dempster-Shafer theory for decision-making.

In comparison with PEGASE, TELEOS, and training simulators with a decision-making module like [7] and [8], we propose to take into account not only learners' actions and profiles but also their state (stress, cognitive load, attention level, etc.), measured with physiological sensors. We use the Dempster-Shafer theory for decision-making on these heterogeneous data, in order to choose which of the available feedbacks will be triggered.

B. Multimodal Feedback Based on Human Perception

To guide the learners throughout the training session, helping feedback (visual metaphors, audio messages, etc.) must be triggered by the decision-making module. For example, the car-driving training simulator VDI (Virtual Driving Instructor) [7] uses a rule-based system to trigger feedback. However, this feedback consists only of audio messages, thus triggered one at a time. In [8], a truck-driving training simulator, a decision-making module can trigger multimodal feedback: one feedback per channel (visual or audio). Nevertheless, this feedback is limited to warning messages, explanations, complementary information, or motivating messages; it is not linked to the virtual environment, for example by highlighting a traffic sign. Linking feedback to the environment in this way allows more feedback to be triggered at a time without disturbing the learners, because they do not have to read an explanation or pay attention to an audio warning message. In PEGASE, the feedback is integrated into the environment: for example, it is possible to make some important objects flash. In our work, we have followed this approach.

As multimodal feedback, we use visual and sound metaphors to guide the learners. Visual metaphors are used to highlight important objects to take into account while steering (e.g., other boats, beacons, obstacles) [9], [10], [11], [12], [13]. We also add virtual objects to the virtual environment: for example, we display the optimal path to follow on the river, called the highway metaphor [9], [10], [11], [12], [13] (Fig. 2). We also enrich the environment by adding virtual instruments that are unavailable in reality. They bring additional information useful for steering (e.g., information about the position, the tilt, etc.) [10], [12].

Fig. 2. An example of visual aids: the highway metaphor

However, multiplying these aids can have negative side effects on the simulation:

• learners' cognitive overload: with too much information displayed simultaneously, learners cannot analyze all of it and do not know which pieces are the most important for the current situation;

• learners' attention focusing on the aids rather than on the simulation: learners spend too much time analyzing aids (which are not available in a real situation), to the detriment of analyzing the simulated situation.

That is why our system automatically moderates the amount of aids according to human perception. We use the following solutions to avoid these negative side effects.

To avoid drawing the user's attention to the HUD (Head-Up Display) rather than to the environment, information is collocated in the environment (displaying it in the foreground forces the user to change focus between the HUD and the environment) [11]. Information is also gathered in the user's field of view, to avoid forcing him/her to turn the head and to save time for information reading [10].

The visual channel is the one mainly used to add aids to a training simulation. To keep this channel from being overloaded and to preserve the visibility of the simulation, several techniques exist, such as dynamic levels of detail of information. Just like the levels of detail of a geometric mesh, levels of detail of information allow the quantity of information displayed to be adjusted (semantic zoom in or out) [14]. This is done according to several factors, such as the spatial distance between the user and the collocated information, the importance of this information, etc.

To reduce the amount of information on the visual channel, the virtual environment can be represented in a simplified way. In this case, users learn progressively by starting with an environment that requires a low cognitive load to analyze. For example, the environment can be represented with a cognitive map, which is helpful for orientation [9].

To display several different data simultaneously without overloading the visual channel, other modalities such as sound [15], [16] or haptics [17], [18] can be used. In our case, we use visual and audio feedback.

In our system, we will use all the previous solutions, which moderate the amount of aids, in order to avoid cognitive overload and to prevent the learner from focusing on the aids rather than on the simulation.

III. TOWARD HETEROGENEOUS DATA FUSION FOR AN ADAPTIVE TRAINING IN INFORMED VIRTUAL ENVIRONMENT

A. User Monitoring

We monitor the learner through a user-activity detection module, a user-state recognition module, and a personalized profile that is continuously updated (Fig. 3). This monitoring provides the heterogeneous data used to make a decision about the best guiding for the learner.

The user-activity detection module (not yet implemented) has to determine the learner's mistakes and the risks taken. As navigation is a difficult process to model, it cannot be expressed as a set of procedures to follow. Therefore, mistakes cannot be detected by comparison with a set of good actions to perform, which would be a deterministic approach. Instead, this module will follow a non-deterministic approach: the system has to answer credibly according to the learner's actions. The response of the system is not expected to be realistic but only credible, that is to say, the learner must feel present in the virtual world. He or she can interact freely with the system, perceive the response of the environment, interact again, and so on. This allows training by action/perception [19], thanks to the strong coupling between the user and the system.

Steering errors and risks taken will be detected according to the future position of the boat, extrapolated from its current position and the current state of the ship controls. For example, the system may detect that the boat will collide with a bridge. This information is uncertain, since it is based on an extrapolation of the future position of the boat assuming the learner does not change the trajectory. However, this uncertainty decreases gradually as the boat approaches the bridge. A risk indicator will also be computed from the room for maneuver left to avoid the next obstacles.
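As an illustration of this predictive detection, the following minimal sketch extrapolates the boat position under constant heading and speed, and derives a certainty that grows as the predicted impact gets nearer (where the dead reckoning is more reliable). All names (`BoatState`, `collision_certainty`, the 30 s horizon) are our assumptions; the paper does not specify Navmer's interface.

```python
import math
from dataclasses import dataclass

@dataclass
class BoatState:
    x: float        # position (m)
    y: float
    heading: float  # radians
    speed: float    # m/s

def extrapolate(state: BoatState, dt: float) -> tuple[float, float]:
    """Dead-reckoned position after dt seconds, assuming the learner
    keeps the current controls (constant heading and speed)."""
    return (state.x + state.speed * math.cos(state.heading) * dt,
            state.y + state.speed * math.sin(state.heading) * dt)

def collision_certainty(state: BoatState, obstacle: tuple[float, float],
                        radius: float, horizon: float = 30.0,
                        step: float = 1.0) -> float:
    """Scan the predicted trajectory; if it enters the obstacle's radius,
    return a certainty that increases as the predicted impact gets
    closer in time (the extrapolation is more reliable over short
    horizons, matching the decreasing uncertainty described above)."""
    t = step
    while t <= horizon:
        px, py = extrapolate(state, t)
        if math.hypot(px - obstacle[0], py - obstacle[1]) < radius:
            return 1.0 - t / horizon  # sooner impact -> higher certainty
        t += step
    return 0.0  # no predicted collision within the horizon
```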

Errors and risks will be detected with a predictive model, as explained before, but also with a set of rules. For example, navigation rules about traffic signs will be used, as well as some rules about the use of the ship controls. The learner's gestures on the ship controls will be tracked and wrong maneuvers will be detected. We also intend to detect errors with an eye-tracking system that tells us where the learner is looking. Some elements (e.g., traffic signs) have to be seen by the learner to make good decisions; thus, not having seen such an element is an error, and the decision-making module can, for example, decide to highlight that element so that it will be seen.
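As an illustration of what one such rule might look like (a hypothetical example; the actual rule base is not specified in the paper), a speed check against the limit posted on the last traffic sign could be written as:

```python
from typing import Optional

def check_speed_rule(boat_speed_kmh: float, sign_limit_kmh: float,
                     tolerance_kmh: float = 0.5) -> Optional[str]:
    """One hypothetical navigation rule: report an error when the
    learner exceeds the speed limit posted on the last traffic sign."""
    if boat_speed_kmh > sign_limit_kmh + tolerance_kmh:
        return (f"speed {boat_speed_kmh:.1f} km/h exceeds the posted "
                f"limit of {sign_limit_kmh:.0f} km/h")
    return None  # rule satisfied, no error to report
```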

The user-state recognition module is based on physiological sensors. Currently, a heart-rate variability sensor is used, and we plan to use an eye-tracking system (and possibly other sensors) to determine the learner's state. This gives indications about stress, cognitive load, attention level, and situational awareness. The eye-tracking system, which will be used to know where the learner is looking, will also be used to determine the learner's visual attention and his/her cognitive load thanks to pupillometric measurement [20]. Heart-rate variability is measured to detect cognitive overload. Of course these data are uncertain, and this is taken into account by the decision-making module.

Throughout the learner's training, a personalized profile is recorded. It contains the current level of the learner (novice, intermediate, experienced) and his/her usage history (previous errors, efficient aids, inefficient aids, etc.).

All the data coming from the learner's profile, the user-state recognition module, and the user-activity detection module are sent to the decision-making module (Fig. 3).

B. Heterogeneous Data Fusion

These incoming heterogeneous data (the learner's mistakes, the risks taken, his/her state determined by using physiological sensors) have to be expressed in a common formal framework so that they can be merged to support a decision.

As a common formal framework, we use the Dempster-Shafer theory. Belief is distributed over triggerable elements: aids (e.g., Fig. 2) and events (e.g., adding some danger, such as dead trunks to avoid). Each piece of information (about the learner's state, his/her errors, etc.) assigns, for each triggerable element, belief masses to the utility of triggering it, the inutility of triggering it, or ignorance. Thus the frame of discernment is $\Omega = \{utility, inutility\}$ and the power set is $2^{\Omega} = \{\emptyset, \{utility\}, \{inutility\}, ignorance\}$, with $ignorance = \Omega$. A basic belief assignment (bba) $m$ is a function that assigns belief masses to the subsets of $\Omega$, with $\sum_{A \subseteq \Omega} m(A) = 1$. Each piece of information applies a bba to each triggerable element.

As soon as every belief mass has been assigned, the bbas are combined for each triggerable element using Dempster's rule of combination in its conjunctive, unnormalized form (1). Then the pignistic probability of utility (2) is computed for each triggerable element, in order to obtain an indicator of the priority of triggering the aids and the events.

$$\forall A \subseteq \Omega, \quad (m_1 \ocap m_2)(A) = \sum_{B \cap C = A} m_1(B)\, m_2(C) \qquad (1)$$

$$BetP(utility) = \sum_{\substack{A \subseteq \Omega \\ utility \in A}} \frac{m(A)}{(1 - m(\emptyset))\, |A|} \qquad (2)$$
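A minimal sketch of these two formulas, assuming the two-element frame defined above (the encoding and function names are ours, not GULLIVER's):

```python
from itertools import product

# Subsets of the frame Ω = {utility, inutility}, encoded as frozensets.
EMPTY = frozenset()
UTILITY = frozenset({"utility"})
INUTILITY = frozenset({"inutility"})
IGNORANCE = UTILITY | INUTILITY  # ignorance = Ω

def combine(m1: dict, m2: dict) -> dict:
    """Conjunctive combination (1): the mass of A sums m1(B)*m2(C)
    over all pairs with B ∩ C = A; conflict accumulates on ∅."""
    out = {EMPTY: 0.0, UTILITY: 0.0, INUTILITY: 0.0, IGNORANCE: 0.0}
    for (b, mass_b), (c, mass_c) in product(m1.items(), m2.items()):
        out[b & c] += mass_b * mass_c
    return out

def betp_utility(m: dict) -> float:
    """Pignistic probability (2): each focal set A containing 'utility'
    contributes m(A)/|A|, renormalized by 1 - m(∅)."""
    contrib = sum(m[a] / len(a) for a in m if "utility" in a)
    return contrib / (1.0 - m[EMPTY])
```

Note that a vacuous bba ($m(ignorance) = 1$) leaves any other bba unchanged under this combination, which is how a piece of information that says nothing about a triggerable element is handled.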

Among the available aids and events, the decision-making module must choose the best set to trigger, that is to say, the set which satisfies as many constraints as possible:

• elements to trigger must have a high $BetP(utility)$;

• the set of elements to trigger must not overload the learner's sensory channels or his/her cognitive load;

• elements to trigger must be mutually compatible;

• elements to trigger must be adapted to the learner's level and must respect his/her preferences;

• . . .

The trainer can intervene in the system by adding constraints on the choice of aids or events to trigger (for example, it is possible to forbid an aid). The system has to find a combination of triggerable elements that satisfies as many of these constraints as possible. To solve this problem, constraints are divided into two categories: strong constraints (which must necessarily be respected) and weak constraints (which can be violated, but the solution must respect as many of them as possible). A score is allocated for each rule respected, and the best solution is the one with the highest score (a similar method is used in [8]). This is a constraint satisfaction problem whose complexity can be high, depending on the number of aids/events available. If the complexity is too high to obtain an answer in real time, a metaheuristic is used: a genetic algorithm. This provides a good solution within a chosen time limit, but nothing ensures that it is the best solution. Later, we will try to use a constraint programming solver in order to obtain the best solution as fast as possible.
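A minimal sketch of this selection step for small catalogs, where exhaustive enumeration is still feasible (the example constraints and names are ours, and adding each element's $BetP(utility)$ to the score as a tiebreaker is our choice; the genetic-algorithm fallback for large instances is not shown):

```python
from itertools import combinations

def best_set(elements, betp, strong, weak, max_size=3):
    """Exhaustively score candidate subsets of triggerable elements.
    All 'strong' constraints must hold; each satisfied 'weak'
    constraint adds one point to the score."""
    best, best_score = frozenset(), -1.0
    for k in range(max_size + 1):
        for cand in combinations(elements, k):
            cand = frozenset(cand)
            if not all(c(cand) for c in strong):
                continue  # a strong constraint is violated
            score = sum(1 for c in weak if c(cand))
            score += sum(betp[e] for e in cand)  # prefer useful elements
            if score > best_score:
                best, best_score = cand, score
    return best

# Hypothetical usage: A1 and E1 are incompatible, so only one survives.
betp = {"A1": 0.875, "E1": 0.9125}
strong = [lambda s: not {"A1", "E1"} <= s,           # mutual compatibility
          lambda s: all(betp[e] >= 0.6 for e in s)]  # minimal BetP(utility)
weak = [lambda s: len(s) <= 1]                       # avoid overload
print(best_set(betp.keys(), betp, strong, weak))     # frozenset({'E1'})
```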

The computation of the aids/events to display is made continuously, so that the guiding is adapted to the current situation. For example, a visual aid that has become useless, because the associated error is no longer being made, will be removed, which frees space on the visual channel to possibly add another aid. Conversely, if the learner makes a serious mistake, the associated aid(s) will be displayed, and several less important aids may be removed so as not to overload the learner.

C. GULLIVER: a Model for an Adaptive Training in Informed Virtual Environment

We built a model, GULLIVER (GUiding by visuaLization metaphors for fluviaL navigation training in an Informed Virtual EnviRonment), to propose adaptive training in an informed virtual environment (IVE). An IVE is a "virtual environment including a knowledge-based model and in which it is possible to both interact and allow behaviors by interpreting dynamic and static representations" [21]. GULLIVER provides automatic guiding thanks to a decision-making module based on the heterogeneous data obtained by monitoring the learner.

GULLIVER takes as input the SimNav output data, namely the position, direction, and speed of the barge controlled by the learner through the associated controls (Fig. 3). From these data, the position of the barge is updated in the virtual environment. Actions (boat movements) and events (collisions, etc.) are transmitted by the virtual environment to the user-activity detection module. Information about the learner's gestures is also transmitted to this module, which is in charge of detecting the mistakes made by the learner and the risks taken. The learner's state is also recognized, from data coming from physiological sensors. From the learner's state, his/her mistakes, and the risks taken, a decision-making module activates the right metaphors to guide the learner. This module can also decide to trigger events: for example, if the learner makes no mistakes and feels at ease, the environment is made more complex by adding some dangers, for instance floating objects to avoid or thick fog. In addition to the learner's state, his/her mistakes, and the risks taken, the system also takes the learner's profile into account: his/her usage history (previous errors, inefficient aids, etc.) and his/her level (beginner, experienced, etc.). If the learner is a novice, the guiding system must adapt its pace to the learner's perception and comprehension speed to avoid cognitive overload [19].

Fig. 3. Model of an adaptive training system: GULLIVER (blocks: ship controls, SimNav, informed virtual environment, user's gestures tracking, detection of user's activity, user's state tracking, user's state recognition, user's profile, decision-making module; flows: actions/events, mistakes/risks, user's state, information display metaphors and event triggering)

IV. APPLICATION OF OUR HETEROGENEOUS DATA FUSION FOR LEARNER GUIDING

A first set of triggerable elements and of possible incoming information has been defined to test the decision-making module. For example, the information I1, "the learner is taking a bad way under the next bridge", comes from the user-activity detection module with a certainty of 75%. By reading an XML tree linking the errors to the corresponding triggerable elements, the decision-making module knows that the aid A1, "highlighting of the next bridge traffic sign indicating the right way", and the event E1, "hiding of the next bridge", can help the learner to correct his/her trajectory. Therefore $m(utility) = 0.75$ and $m(ignorance) = 0.25$ for these two triggerable elements, since the certainty of the information I1 is 75%. The other triggerable elements have $m(ignorance) = 1$, since the information brings no belief on the utility or inutility of triggering them. Along with I1, another piece of information, I2, is added: "the learner is under stress", with a certainty of 60%. This information comes from the user-state recognition module. For this kind of information, generic rules are used to assign belief masses. In this case, the following rule is used: "if the learner is under stress, the events which add difficulty must not be triggered and those which simplify the training scenario are useful at 50%". Therefore, events which add difficulty get $m(inutility) = 0.6$ and $m(ignorance) = 0.4$, since the certainty of the information I2 is 60%, and events which add simplicity get $m(utility) = 0.3$ (50% of 60%) and $m(ignorance) = 0.7$. The other triggerable elements have $m(ignorance) = 1$. After merging the bbas of I1 and I2, A1 keeps the same belief masses, while E1 has $m(utility) = 0.825$ and $m(ignorance) = 0.175$, because I2 has influenced E1 (since it is an event which adds simplicity). The pignistic probability of A1 is 87.5% and that of E1 is 91.25%. In the end, the constraint satisfaction problem is solved and the decision-making module decides to trigger E1 and not A1, because they are incompatible (it is useless to highlight the traffic sign of a bridge which is no longer visible) and E1 has been estimated better than A1 (its pignistic probability is higher).
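These numbers can be checked directly with the formulas of Section III-B; the short calculation below (our own encoding, with no conflict so $m(\emptyset) = 0$) reproduces them:

```python
# E1 ("hiding of the next bridge"), a simplifying event, over the
# focal sets u = {utility} and ig = Ω:
#   from I1: m(u) = 0.75, m(ig) = 0.25
#   from I2: m(u) = 0.5 * 0.6 = 0.30, m(ig) = 0.70
m_u = 0.75 * 0.30 + 0.75 * 0.70 + 0.25 * 0.30  # = 0.825
m_ig = 0.25 * 0.70                              # = 0.175
betp_e1 = m_u + m_ig / 2                        # pignistic transform (2)
print(betp_e1)  # 0.9125, i.e. 91.25 %

# A1 (highlighting the traffic sign) is untouched by I2 (vacuous bba),
# so it keeps m(u) = 0.75, m(ig) = 0.25:
betp_a1 = 0.75 + 0.25 / 2
print(betp_a1)  # 0.875, i.e. 87.5 %
```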

This simple example shows the different steps of the decision-making module. First, uncertain information assigns belief masses according to a belief-assignment strategy (which depends on the nature of the information). The belief assignments have been designed to be intuitive, so that they can be defined by an expert in fluvial navigation. Next, the belief masses are merged and the pignistic probability is computed for each triggerable element. Finally, the set of triggerable elements which respects the most constraints (constraints about human perception, etc.) is chosen to guide the learner. Such an approach is flexible and allows us to easily update the system by adding or removing constraints, input information, and triggerable elements.

V. FIRST EVALUATIONS OF HETEROGENEOUS DATA FUSION FOR LEARNER GUIDING

The decision-making module has been evaluated using generated data which simulate the possible inputs corresponding to an observation of a learner (example of inputs: I1 with a certainty of 65% and I2 with a certainty of 72%). The first certainty values are randomly chosen, and they then change over time (increase or decrease) with a higher probability of continuing their previous trend (Fig. 4). Several sets of 500 values for 13 input data and 24 triggerable elements have been tested with different parameters: for a beginner or an experienced learner, and with different values of the minimal threshold of $BetP(utility)$. This minimal threshold is used to determine whether a triggerable element has enough $BetP(utility)$ to be considered useful to trigger. It is a strong constraint used in the decision-making module.
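As a sketch of how such input certainty streams can be generated (the momentum parameter, the step size, and the names are our assumptions; the paper only states that changes tend to follow the previous trend):

```python
import random

def certainty_series(n: int = 500, step: float = 0.03,
                     momentum: float = 0.75) -> list[float]:
    """Random walk in [0, 1] whose direction keeps its previous trend
    with probability `momentum`, simulating the certainty of one piece
    of monitoring information over time."""
    value = random.random()            # first value randomly chosen
    direction = random.choice((-1, 1))
    series = [value]
    for _ in range(n - 1):
        if random.random() > momentum:
            direction = -direction     # occasionally reverse the trend
        value = min(1.0, max(0.0, value + direction * step * random.random()))
        series.append(value)
    return series

# 13 input data streams of 500 values, as in the reported evaluation runs.
inputs = [certainty_series() for _ in range(13)]
```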

Fig. 4. Example of input data resulting from the learner's monitoring

The results (Fig. 5) enabled us to adjust the minimal threshold of $BetP(utility)$. They are encouraging and show that the decision-making module gives satisfactory answers.

Fig. 5. Example of results

Further tests on the whole system will be carried out with fluvial navigation learners as soon as the system implementation is completed. This will allow the different parameters to be adjusted more precisely. We intend to demonstrate the relevance of taking the learner's state into account and of fusing the data with the Dempster-Shafer theory for decision-making about the best learner guiding.

VI. CONCLUSION

We propose GULLIVER, an informed virtual environment for fluvial navigation training. This environment automatically guides learners with multimodal feedback adapted to human perception. The system takes heterogeneous data about the learner's performance into account to decide on the best multimodal feedback to trigger. The use of the Dempster-Shafer theory for the fusion of these data brings a new approach to informed virtual environments and allows a strong coupling between the user and the virtual environment. Our first results show that heterogeneous data fusion is a powerful method for adaptive guiding based on user monitoring. Our approach presents the following advantages: 1) better adaptation to the trainee, 2) the possibility to selectively display multimodal aids according to a decision-making module, and 3) the possibility of adaptive guiding in complex situations with uncertain data.

Our system, GULLIVER, is not limited to fluvial navigation training and could be used, for example, to assist car drivers in augmented reality.

ACKNOWLEDGMENT

We deeply thank A. Pourplanche and F. Hissel, of the CETMEF, for their enthusiastic support on the applicative part of our work.

REFERENCES

[1] J. Olive, "Réalité virtuelle pour la production de pneumatique," Ph.D. dissertation, Université de Technologie de Compiègne, 2010.
[2] M. Vayssade and A. Pourplanche, "A piloting SIMulator for maritime and fluvial NAVigation: SimNav," in Proc. Virtual Concept, Biarritz, France, 2003.
[3] D. Mellet d'Huart, "De l'intention à l'attention. Contributions à une démarche de conception d'environnements virtuels pour apprendre à partir d'un modèle de l'(en)action," Ph.D. dissertation, Université du Mans, 2004.
[4] C. Buche, C. Bossard, R. Querrec, and P. Chevaillier, "PEGASE: A Generic and Adaptable Intelligent System for Virtual Reality Learning Environments," The International Journal of Virtual Reality, vol. 9, no. 2, pp. 73–85, 2010.
[5] D. Mufti-Alchawafa, "Modélisation et représentation de la connaissance pour la conception d'un système décisionnel dans un environnement informatique d'apprentissage en chirurgie," Ph.D. dissertation, Université Joseph Fourier - Grenoble I, 2008.
[6] N. El-Kechaï, "Suivi et assistance des apprenants dans les environnements virtuels de formation," Ph.D. dissertation, Université du Maine, 2007.
[7] I. Weevers, J. Kuipers, J. Zwiers, B. van Dijk, and A. Nijholt, "The virtual driving instructor: a multi-based system for driving instruction," CTIT Technical Reports Series, 2003.
[8] M. Lopez-Garate, A. Lozano-Rodero, and L. Matey, "An adaptive and customizable feedback system for VR-based training simulators," in Proc. Autonomous Agents and Multiagent Systems, vol. 3, Estoril, Portugal, 2008, pp. 1635–1638.
[9] C. Benton and R. Walker, "Augmented Reality for Maritime Navigation: The Next Generation of Electronic Navigational Aids," in Proc. Marine Transportation System Research and Technology Coordination Conference, Washington, USA, 2004.
[10] O. Bjorneseth, "HOTS (Highway On The Sea), a new approach to low-visibility navigation," in Proc. Marine Simulation and Ship Maneuvrability, Japan, 2003.
[11] D. Foyle, R. McCann, and S. Shelden, "Attentional issues with superimposed symbology: formats for scene-linked displays," in Proc. International Symposium on Aviation Psychology, Columbus: Ohio State University, 1995, pp. 98–103.
[12] O. Hugues, J.-M. Cieutat, and P. Guitton, "An Experimental Augmented Reality Platform Application for Assisted Maritime Navigation: Following Targets," in Proc. Virtual Reality International Conference, Laval, France, 2010, pp. 149–154.
[13] L. J. Prinzel III, L. J. Kramer, R. E. Bailey, J. J. Arthur, S. P. Williams, and J. McNabb, "Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology," in Proc. International Conference on Augmented Cognition, Las Vegas, NV, USA, 2005.
[14] D. A. Bowman, C. North, J. Chen, N. F. Polys, P. S. Pyla, and U. Yilmaz, "Information-rich virtual environments: theory, tools, and research agenda," in Proc. Virtual Reality Software and Technology, Osaka, Japan, 2003, pp. 81–90.
[15] B. Fröhlich, S. Barrass, B. Zehner, J. Plate, and M. Göbel, "Exploring geo-scientific data in virtual environments," in Proc. Visualization, San Francisco, California, USA, 1999.
[16] S. Onimaru, T. Uraoka, N. Matsuzaki, and M. Kitazaki, "Cross-modal information display to improve driving performance," in Proc. Virtual Reality Software and Technology, Bordeaux, France, 2008, pp. 281–282.
[17] G. Bouyer, "Rendu multimodal en Réalité Virtuelle : Supervision des interactions au service de la tâche," Ph.D. dissertation, Université Paris XI, 2007.
[18] S. Onimaru and M. Kitazaki, "Visual and Tactile Information to Improve Drivers' Performance," in Proc. IEEE Virtual Reality, Waltham, Massachusetts, USA, 2010, pp. 295–296.
[19] S. Bottecchia, "Système T.A.C. : Télé-Assistance Collaborative. Réalité augmentée et NTIC au service des experts et des opérateurs dans le cadre d'une tâche de maintenance industrielle supervisée," Ph.D. dissertation, Université de Toulouse III, 2010.
[20] O. Palinko, A. L. Kun, A. Shyrokov, and P. Heeman, "Estimating cognitive load using remote eye tracking in a driving simulator," in Proc. Symposium on Eye-Tracking Research & Applications, Austin, Texas, 2010, pp. 141–144.
[21] I. Mouttapa Thouvenin, "Interaction et connaissance : construction d'une expérience dans le monde virtuel," Habilitation thesis, Université de Technologie de Compiègne, 2009.