
Computers & Graphics 28 (2004) 625–633

*Corresponding author. Tel.: +351-266-745-360; fax: +351-266-746-373. E-mail address: [email protected] (T. Romão).

0097-8493/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.cag.2004.06.001

ANTS—Augmented Environments

Teresa Romão a,*, Nuno Correia b, Eduardo Dias a, José Danado a, Adelaide Trabuco b, Carlos Santos b, Rossana Santos b, António Câmara c, Edmundo Nobre c

a Computer Science Department, University of Évora, Rua Romão Ramalho 59, Évora 7000, Portugal
b Computer Science Department and CITI, New University of Lisbon, Quinta da Torre, 2825 Monte da Caparica, Portugal
c Environmental Systems Analysis Group, New University of Lisbon, Quinta da Torre, 2825 Monte da Caparica, Portugal

Abstract

Interaction with natural or urban environments creates numerous situations where it is relevant to have access to further information that we cannot perceive by direct observation through our senses. Augmented Reality (AR) technologies allow the real-time superimposition of synthetic objects on real images, providing augmented knowledge of the surrounding world. Users of an AR system can visualize the real surrounding world together with additional data generated in real time in a contextual way.

This paper presents the ANTS project, which develops an AR technological infrastructure that can be used to explore physical and natural structures, namely for environmental management purposes. The system's architecture has a flexible design based on a client–server model, in which several independent but functionally interdependent modules are articulated. Modules can therefore be moved from the server side to the client side, or vice versa, according to the client's processing capacity. Several applications have been developed and are also discussed in this paper.

© 2004 Elsevier Ltd. All rights reserved.

Keywords: Multimedia information systems; Augmented and virtual realities; Simulation and modelling; Applications; Spatial databases and GIS; Spatial information systems

1. Introduction

There are many situations where information about physical locations could be useful but is not easy to access. AR technologies combine, in real time, synthetic images with real images, giving more information about the real environment around us. The users of an AR system see the real world and the additional data simultaneously, for instance through a Head Mounted Display (HMD).

AR provides users with a better perception of the real world and promotes a more efficient interaction with it.


Virtual object composition on real images makes available additional insights that users do not detect daily through their senses. This information can help the users perform better on the tasks at hand, minimizing the time spent completing them.

The major advantages of AR systems, when compared with Virtual Reality (VR) systems, are [1,2]:

* The computational resources necessary for composing synthetic images on real images are much smaller than those necessary to generate entire synthetic scenes.
* Using real images gives a high level of performance and provides a more realistic view of the environment under study or observation.
* The users stay connected with the real world.


Another advantage and application of AR is related to the recent introduction and dissemination of mobile devices, such as high-end mobile phones and PDAs. These devices provide a good vehicle for non-immersive AR, and our architecture also considers them as targets for the applications. Although the initial ANTS prototype uses an HMD, subsequent developments allow the use of PDAs as client devices. This avoids burdening the user with heavy and cumbersome devices that require time-consuming setups and restrict mobility. Furthermore, the PDA can be seen as a virtual window onto the augmented world, displaying what the user is seeing superimposed with the additional virtual objects.

Several complex technical and scientific problems must be considered when implementing AR systems:

* Registration of synthetic images on real images;
* Position identification;
* Information retrieval;
* Presentation.

The development of ANTS, reported in this paper, is a contribution towards the solution of these problems, bearing in mind the increasing need for mobility, which in turn brings additional technological limitations: small displays with low resolution, devices with limited processing power, low data bit rates, coverage gaps in wireless networks and network latency.

The main objectives of ANTS are:

* Establish a more appropriate configuration for AR systems for the exploration of natural and urban environments. This includes the definition of the overall architecture, considering different devices for access and presentation;
* Study new methods for image registration and navigation in augmented environments;
* Study an AR infrastructure that makes it easier to:
  * Visualize the evolution process of natural, urban and artificial environments;
  * Provide information about geo-referenced elements in the environment.

The ANTS system has been tested through three main environmental management applications comprising:

* Monitoring water quality using pollutant transport models;
* Visualizing the temporal evolution of physical structures and natural elements;
* Superimposing synthetic images on the ground to reveal the soil's composition at the user's current spatial location (for example, the location of underground water supply networks and subsoil structure).

The remainder of this paper is structured as follows: the next section summarizes research related to this project. Section 3 presents the ANTS system's infrastructure and architecture. Section 4 describes the applications developed for the proposed system. The paper ends with conclusions and directions for future work.

2. Related work

Augmented reality is a technology in which the user's view of the real world is enhanced with virtual objects that appear to coexist in the same space as real objects. From the beginning, registration of synthetic images on real images has been a topic of continuing research, and much work has emphasized the correct implementation of 3D image registration mechanisms over a real scenario [3].

Accurate tracking of the user's position and viewing orientation is crucial for AR registration. An overview of tracking systems can be found in [4]. Many approaches to position tracking require the user's environment to be equipped with sensors [5], beacons [6] or visual fiducials [7]. While some recent AR systems have proven robust for registration in prepared, indoor environments [8], tracking in unprepared environments is still an enormous challenge [9], particularly for outdoor and mobile AR applications. Many systems employ hybrid tracking techniques to exploit the strengths and compensate for the weaknesses of individual tracking technologies [10,11]. In [12] the early stages of an experimental mobile AR system that automatically adapts its user interface to accommodate changes in tracking accuracy are described. Ultimately, tracking in unprepared environments may rely heavily on tracking visible natural features [13,14]. Research on how to track users with Bluetooth technology is in its infancy; there are several research efforts using Bluetooth for positioning purposes, with differing conclusions [15,16].

Most existing AR applications encompass the traditional areas of manufacturing, medicine, architecture, education and professional training [17–19]. More recently, research has also turned towards mobile AR systems, which may enable a host of new applications in navigation, situational awareness and geo-referenced information retrieval.

Rekimoto and Nagao [20] proposed a method for augmented environment construction using a portable device named NaviCam, which is able to identify the user's position by detecting colour codes in the real world.

The first outdoor AR system was the Touring Machine [2]. This system, developed at Columbia University, includes tracking devices (a compass, an inclinometer and a differential GPS), a mobile computer with a 3D graphics board and a see-through HMD. It provides the user with world-stabilized information about an urban environment (the names of buildings and departments on the Columbia campus). More recent versions of this system render models of buildings that previously existed on campus, display paths that users need to take to reach objectives, and play documentaries of historical events that occurred at the observed locations [21].

Human senses are unable to detect several dangerous conditions, namely harmful radiation or toxic gases. At the University of Michigan, an AR application is being developed to allow humans to detect potentially hazardous conditions, combining the collected data in a three-dimensional geometric database and using augmented reality to present this information to the users (http://www.vrl.umich.edu/sel prj/ar/hazard). The BITS (Browsing in Time and Space) interface was developed for the exploration of virtual ecosystems. It allows users to navigate and explore a complex virtual ecosystem, interact with the objects that comprise it and make annotations indexed in time and space [22].

At the Vienna University of Technology, the Handheld AR project is developing a PDA-based AR system to be used in the SignPost project. This system allows a user to stroll through an unknown building while showing him or her several navigational hints [23].

Apart from a few commercial AR systems, which include the augmentation of broadcast video to enhance sports events and to insert or replace advertisements in a scene [24], the state of the art in AR today is comparable to the early stages of VR: many systems have been demonstrated, but few have evolved beyond lab-based prototypes [9,25].

3. ANTS—augmented environments

ANTS is an AR system that assists users in the exploration and study of their surrounding environment, providing additional contextual data about the physical structures and natural elements that compose the real world around them. The initial hardware infrastructure includes a laptop/wearable computer, an HMD (Daeyang E&C Cy-Visor Mobile Personal Display DH-4400VP), a tracker (InterSense InterTrax 2), a video camera and a GPS receiver (PRETEC CompactGPS), and it uses mobile phones to provide communications. In a later prototype, PDAs are used as client devices, with a stronger focus on mobility instead of immersion.

ANTS follows a video see-through approach [9]. Users see the results of composing the video image captured by the camera with the synthetic content retrieved from the server database (text, graphics, pictures or even other videos).

The computational system provides functionalities ranging from image capture and user positioning to data retrieval from remote databases. These different functionalities are supported by an architecture that integrates several modules split between the remote server and the client that interacts with the users (Fig. 1). For the PDA version, a proxy database is necessary to speed up the process of querying the geo-referenced database. Some modules can be moved from or to the client according to its available processing capacity and the applications' requirements.

3.1. System architecture

The main modules composing the ANTS architecture can be seen in Fig. 1 and are described in the following subsections. The server components are accessible through an HTTP server, thus allowing requests from different clients and platforms.

The process of enhancing or augmenting the real-world video is driven by the user's actions. The system must know the user's position and orientation to be able to provide this contextual information. When the system starts, it identifies the user's position and orientation through explicit interaction and manual calibration. After this initial calibration process, the tracking module grants the User AR Module the capability to track, as precisely as possible, the user's position and orientation.

This positioning process combines several methods (a sketch of how these readings can be fused follows the list):

* GPS data: the absolute position of the user is indicated by a GPS receiver. GPS is used in combination with the other techniques below, in order to overcome its limitations and lack of precision.
* User tracking using appropriate devices: a tracker is used to obtain the current orientation of the user's head.
* Environment mapping: knowledge of the physical form and position of the entities in the environment that is being augmented.
* Explicit indication: sometimes, mainly in the bootstrap process, the users can indicate their position explicitly.
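For illustration only (the paper does not include the fusion code), the following Python sketch shows a minimal way of combining these sources into a single pose; the type names (GpsFix, TrackerReading) are hypothetical, and a real deployment would add filtering and error handling.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GpsFix:                 # hypothetical container for a GPS reading
    lat: float
    lon: float
    alt: float

@dataclass
class TrackerReading:         # hypothetical container for head-tracker angles
    heading: float            # horizontal orientation, in degrees
    pitch: float              # vertical orientation, in degrees

def fuse_pose(gps: GpsFix, tracker: TrackerReading,
              manual_fix: Optional[GpsFix] = None):
    """Combine the positioning sources described above.

    An explicit position supplied by the user (the bootstrap
    calibration) overrides the GPS fix; the tracker always supplies
    the head orientation. Returns (lat, lon, alt, heading, pitch).
    """
    pos = manual_fix if manual_fix is not None else gps
    return (pos.lat, pos.lon, pos.alt, tracker.heading, tracker.pitch)

# Example: a GPS fix plus a tracker reading, with no manual override.
pose = fuse_pose(GpsFix(38.76, -9.09, 5.0), TrackerReading(120.0, -10.0))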

Fig. 1. ANTS architecture.

Fig. 2. Information flow.

Every time the users ask for information, the system locates their position and orientation as accurately as possible and, with that information, a list of objects in the user's field of view can be requested from a remote application that uses a 3D representation of the real world. The query to the 3D model server returns basic object properties, such as name, type and a unique object identifier (UOI). This request and the corresponding result are identified in steps 1 and 2 of Fig. 2. The information returned by the 3D model server is the first layer of information that the User AR Module displays. For more detailed information about a particular object, a query can be made to another remote server (the geo-referenced database) using the object identifiers obtained from the 3D model (steps A and B). The images presented on the client device's display result from the composition of the images captured in real time with a video camera and the additional data retrieved from both servers. The server components are accessible through HTTP, a common standard, allowing different devices to perform requests.

3.1.1. 3D model

The 3D model server is an application that synchronizes the position of multiple users in the virtual and real worlds. With this server it is possible to locate and depict the users and their surroundings, in order to correctly map the contextual information.

ARTICLE IN PRESST. Romao et al. / Computers & Graphics 28 (2004) 625–633 629

This module encapsulates a 3D representation of the physical real space, establishing a relationship between the user's experiences in the physical space and the corresponding computational representation. As with most models, the models used here are simplified representations of reality, which must nevertheless comply with several rules to assure the required accuracy. The models in the 3D model server must:

* Include all the relevant objects, using a specific level of detail;
* Respect a scale, and all the relationships between relevant objects;
* Specify the relation between the model's position and orientation and the real world.

The models used in the 3D model server can be defined with commonly available tools, such as Discreet 3D Studio. However, in order to quickly obtain a representation of an urban or natural environment, regarding its volumetric objects and their relative positions, without the burden of using a complex, all-purpose tool, we have developed a quick editor for 3D environments (Fig. 3). This tool allows the definition of a set of objects over a background image, which can be a map or a blueprint of the real environment used as the basis for editing. While editing the model, the area occupied by each real-world element can be defined, together with its height, and an identifier can be assigned to it. Each object is identified by a unique object identifier (UOI), which will later be used in search operations to identify the objects in the geo-referenced database.

Fig. 3. 3D model editor.
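The editor's file format is not reproduced in the paper; as an illustrative sketch only, the record kept for each edited object could look like the following Python structure (the name ModelObject and its fields are hypothetical, but they carry exactly the data described above: footprint area, height and UOI):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ModelObject:
    """One object outlined in the quick 3D editor (hypothetical format)."""
    uoi: str                              # unique object identifier
    name: str
    obj_type: str
    footprint: List[Tuple[float, float]]  # polygon drawn over the base map
    height: float                         # extrusion height, in metres

# Example: a rectangular building outlined on the blueprint.
building = ModelObject(
    uoi="B-042", name="Pavilion", obj_type="building",
    footprint=[(0.0, 0.0), (20.0, 0.0), (20.0, 12.0), (0.0, 12.0)],
    height=9.5)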

The 3D model server is an HTTP server, receiving queries from the client applications. Each request must specify a set of parameters, including the user's position and orientation, the action radius and, possibly, an object identifier. If this identifier is specified, the server returns the direction of that object (relative to the current user position). The following exemplifies the query format:

http://3DMS/request.xml?pos(lat,lon,alt);ori(hor,vert);id(UOI);radius(rd)

The result of such a query is a list of UOIs comprising all the objects (with the corresponding descriptions) in the user's surrounding area. These objects are classified into three main categories, according to their spatial relation with the current user position (Fig. 4); a geometric sketch of this classification is given after the following list:

* Inside objects: all objects the user is inside of. There can be more than one, as there is no requirement that the model be restricted to physically non-overlapping entities.
* Visible objects: all objects in front of the user and inside a view volume, defined by an angle in much the same way as the field of view of a camera.
* Surrounding objects: all other objects that are neither visible objects nor inside objects, and that are inside the action volume. These objects are further classified into ''Left'' and ''Right'' to enable user orientation when displaying information.
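As a minimal geometric sketch of this classification (assuming a 2D footprint model, a ray-casting point-in-polygon test for inside objects and a bearing test against the field-of-view angle for the remaining ones; this is illustrative code, not the ANTS implementation):

import math
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(p: Point, poly: List[Point]) -> bool:
    """Ray-casting test, used here to detect 'inside' objects."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def classify(user: Point, heading_deg: float, fov_deg: float,
             centre: Point, poly: List[Point]) -> str:
    """Return 'inside', 'visible', 'left' or 'right' for one object."""
    if point_in_polygon(user, poly):
        return "inside"
    dx, dy = centre[0] - user[0], centre[1] - user[1]
    bearing = math.degrees(math.atan2(dx, dy))               # 0 degrees = north
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0   # signed offset
    if abs(diff) <= fov_deg / 2.0:
        return "visible"
    return "left" if diff < 0 else "right"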

All the relevant objects (with the corresponding descriptions) are returned as an XML file (Fig. 5) containing the classification of the different objects inside the action volume. XML is used to help the parsing process and provide standard access interfaces.
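On the client side, the request-and-parse pattern just described could be sketched as follows in Python; the endpoint and the XML tag and attribute names are assumptions for illustration, not the actual ANTS schema:

import urllib.request
import xml.etree.ElementTree as ET

def query_3d_model_server(lat, lon, alt, hor, vert, radius):
    """Fetch and group the objects around a user pose (assumed schema)."""
    url = (f"http://3DMS/request.xml?pos({lat},{lon},{alt});"
           f"ori({hor},{vert});radius({radius})")
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    objects = {"inside": [], "visible": [], "surrounding": []}
    for obj in root.iter("object"):              # assumed tag name
        objects[obj.get("class")].append({       # assumed attribute names
            "uoi": obj.get("uoi"),
            "name": obj.get("name"),
            "type": obj.get("type")})
    return objects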

3.1.2. Geo-referenced database

The geo-referenced database keeps the information concerning all the objects of the physical space under analysis that are used to enhance or augment the real-world view. It works in combination with the 3D model, in order to locate an element and retrieve the corresponding contextual data.

Fig. 4. Object classification.

Fig. 5. 3D model response example.

Fig. 6. User AR module architecture.

Fig. 7. Outdoor AR example.

The stored information can be found through the UOIs returned by the 3D model. A list of multimedia elements to be shown to the users can be obtained by using the set of queries supported by the database. These elements can be of several types, including text, graphics, images, audio and video. The result of a query is an XML file that describes the multimedia elements to be delivered to the client application for visualization purposes (Fig. 2). Currently a MySQL database is being used, but other SQL-based database engines could be used as well.
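As an illustration of the lookup step (a UOI in, a list of multimedia elements out), the following self-contained sketch uses Python's built-in sqlite3 module in place of the MySQL engine actually used, with an invented table layout:

import sqlite3

# Invented schema: one row per multimedia element attached to an object.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE media (
                    uoi  TEXT,   -- unique object identifier
                    kind TEXT,   -- text / graphic / image / audio / video
                    uri  TEXT)""")
conn.execute("INSERT INTO media VALUES ('B-042', 'text', 'history.txt')")
conn.execute("INSERT INTO media VALUES ('B-042', 'image', 'facade.jpg')")

def media_for_object(uoi):
    """Return the multimedia elements registered for one UOI."""
    rows = conn.execute(
        "SELECT kind, uri FROM media WHERE uoi = ?", (uoi,)).fetchall()
    return [{"kind": k, "uri": u} for k, u in rows]

print(media_for_object("B-042"))
# [{'kind': 'text', 'uri': 'history.txt'}, {'kind': 'image', 'uri': 'facade.jpg'}]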

3.1.3. User AR module

This module acts as the interface to the whole system. It gathers all tracking and position information, sends it to the 3D model server and retrieves information either from the module managing the 3D model or from the multimedia geo-referenced database. It combines this information with the images of the real world captured in real time by the camera (which is under the control of this same module) and displays the resulting image on the HMD or PDA screen.

The User AR Module, represented in Fig. 1 by its functional components, can also be described in a way that is closer to its implementation. Fig. 6 shows the subsystems responsible for the image composition as a set of filters (implemented using Microsoft DirectShow) that interact with the other components of the system. The two main filters are the InfoComposer and the ObjectComposer.

The InfoComposer accepts images captured by the video camera and composes them with the information related to the objects in the real world around the user, embedded in the XML file received from the 3D model (Fig. 5). For visible objects, the UOI, name, type and screen position are returned.

Fig. 7 shows images composed by the InfoComposer. At the top, two arrows are visible, each containing a list of the objects on the corresponding side of the user's field of view (the surrounding objects). The information about the surrounding objects allows the users to change their orientation in order to bring these objects into their field of view. The PDA version of the AR composition module only shows the objects seen by the user at any instant (Fig. 8); the arrows are shown without labelling, and the users are able to select them, sending a request for that information.

There is an icon and a label attached to each visible object. If the user selects one of these icons, the ObjectComposer, using the UOI of the corresponding object, queries the geo-referenced database to obtain more detailed information. The returned data elements are then composed with the real image captured by the camera. The resulting composed image is delivered to the user's display fully finished, avoiding flickering.
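The composition itself is implemented as DirectShow filters; purely as a language-neutral illustration of what the InfoComposer step does (draw a marker and a name label at each visible object's screen position on an off-screen copy of the frame, so the display only ever receives finished images), here is a Pillow-based Python sketch with placeholder data:

from PIL import Image, ImageDraw

def compose_info(frame, visible_objects):
    """Overlay a marker and label for each visible object on a frame copy."""
    out = frame.copy()              # compose off-screen; swapping in the
    draw = ImageDraw.Draw(out)      # finished image avoids flickering
    for obj in visible_objects:
        x, y = obj["screen"]
        draw.ellipse((x - 4, y - 4, x + 4, y + 4), fill="yellow")
        draw.text((x + 8, y - 6), obj["name"], fill="yellow")
    return out

# Placeholder usage with a blank frame standing in for a camera image.
frame = Image.new("RGB", (320, 240), "gray")
augmented = compose_info(frame, [{"name": "Pavilion", "screen": (120, 90)}])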


Fig. 8. PDA version interface.

Fig. 9. Monitoring water quality application.


4. Applications

Although the image registration methods and the AR infrastructure developed in the scope of the ANTS project can be applied to the development of a variety of AR applications, covering different scientific and working areas, we are concerned with the environmental management field. At present, three main pilot applications have been developed:

* Monitoring water quality levels in natural water bodies and artificial lakes, using pollutant transport models;
* Visualization of the characteristics and temporal evolution of physical structures and natural elements, by superimposing synthetic images of past or predicted scenes on real images;
* Superimposition of synthetic images on the ground to reveal the soil's composition at the user's current spatial location (for example, the location of underground water supply networks and subsoil structure).

The AR system described herein allows the user to explore and analyse a spatial location, with real-time access to contextual geo-referenced information not available through conventional observation methods. Thus, users become able to see through the elements that compose their surrounding environment: water, soil and physical elements. Reality is augmented with geo-referenced information.

4.1. Monitoring water quality

This application allows users to explore a water body, such as the sea or a river, and visualize, in real time, the corresponding water quality data provided by a pollutant transport model.

Using the ANTS system, or more precisely this application, users are able to interact in real time with a pollutant transport model, configuring specific parameters of the model in order to control the data they need to observe. The supplementary data dynamically generated by the simulation model is then superimposed on the real-world images and can be seen and controlled by the users. This means that the additional data is calculated in real time, as opposed to being stored in a static database.

The model that supports the development of this application is called DisPar (Discrete Particle distribution model). DisPar is a mathematical formulation to solve advection–diffusion problems in aquatic systems [26].
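DisPar's exact discrete-particle formulation is given in [26]; the sketch below shows only the generic particle-tracking idea behind such models, in which each pollutant particle is advected by the flow velocity and receives a random diffusive step at every time step:

import random

def step_particles(xs, u, D, dt):
    """Advance 1-D particle positions by one time step.

    xs: particle positions (m); u: flow velocity (m/s);
    D: diffusion coefficient (m^2/s); dt: time step (s).
    The Gaussian step with variance 2*D*dt is the standard random-walk
    analogue of the diffusion term.
    """
    sigma = (2.0 * D * dt) ** 0.5
    return [x + u * dt + random.gauss(0.0, sigma) for x in xs]

# A pollutant release: 1000 particles at the origin, tracked for one hour.
particles = [0.0] * 1000
for _ in range(3600):
    particles = step_particles(particles, u=0.2, D=0.5, dt=1.0)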

This application has been deployed on a Compaq iPAQ with GPS, video camera, orientation tracker and wireless network capabilities. The users are tracked in real time, which allows the system to supply them with information about the water quality levels in their vicinity (Fig. 9). In order to see the evolution of the model, users have two distinct views.

In the first view, the user's position is marked with an icon on the map of the region where the model is evolving. This view is similar to the approach used in the PC version of the model and allows a more general view of the model. In this view, the user is also able to adjust model parameters to simulate the desired situation.

Since a Compaq iPAQ has a small screen, and the interaction with the model requires a large number of parameters to be defined, templates are used to enhance interaction with the DisPar transport model. Template icons, representing pollutant agents with predefined values, are embedded in the interface. This way, users just have to choose between them and select the initial point on the map where the pollution began.

In the second view, the users are able to see the evolution of the DisPar transport model in their surrounding area, superimposed on the view of the real environment. In this case, the water body is replaced by the model evolution in the corresponding position.

4.2. Visualization of characteristics and temporal evolution of physical structures and natural elements

The main goal of this application is to allow users to navigate through a certain spatial area and have access, in real time, to additional information related to the physical structures and natural elements that can be found in that area. Two possible examples of this application are the superimposition of the image of a digital terrain model of a landfill before it was turned into a park, or the superimposition on a building of a synthetic image of its restoration. The users are therefore able to simultaneously see the same spatial area, or the same object, at different stages of its life cycle. This spatial parallelism takes advantage of the notable human capacity to compare and reason about multiple images that appear simultaneously within our eye span [27]. This capacity facilitates the detection and analysis of changes in the natural and artificial components of a landscape.

This application was developed for the Parque das Nações in Lisbon (the former area of Expo '98). It allows users to walk through the park and have access to contextual information about the different buildings and natural elements surrounding them. The application was developed to operate on two different devices: a laptop computer and a PDA.

4.3. Subsoil structure

A possible scenario for the use of this application consists in locating infrastructures for public supply networks (water, sewage, telephone) in order to avoid damage when intervention in the subsoil is necessary. Through the use of this application the users are able to look at the soil and see, projected on it, synthetic images revealing its internal constitution (subsoil) at the current point in space and time.

This application was also deployed for the Parque das Nações in Lisbon. Similarly to the application described above, it targets two different devices, a laptop and a PDA. While on a stroll, the users may not wish to be in an immersive environment, so the PDA may be a better interface, allowing them to see the environment directly and look for additional information on the PDA.

5. Conclusions and future developments

The ANTS project proposes an augmented reality technological infrastructure that can be used in different applications, ranging from the visualization of physical structures to complex models of natural and artificial phenomena. This infrastructure deals with information presentation and storage, location and identification of objects, and location of the users. These characteristics allow the ANTS infrastructure to contribute to the resolution of the fundamental problems traditionally found in the construction of AR systems. Thus, the system described in this paper allows for the creation of new AR tools, which aid environmental management by facilitating the perception of, and interaction with, the surrounding spatial area and its natural and artificial components. The system's architecture follows the client–server model and is based on several independent but functionally interdependent modules. It has a flexible design, which allows the transfer of some modules to and from the client side, according to the available processing capacities.

Future work includes using the ANTS infrastructure in new applications and enhancing some of its characteristics. Our research is currently following a new, complementary course towards the exploration of indoor environments, which requires distinct positioning mechanisms due to physical restrictions. For instance, GPS receivers can no longer be used, since they do not work accurately indoors. To overcome these limitations we are presently developing a server-side Bluetooth positioning system, which is able to track the users in three dimensions and is independent of the Bluetooth access points being used. This positioning system will expand the capabilities of the ANTS AR system, which will then easily support the development of indoor applications.
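The positioning algorithm itself is still under development; as a generic illustration of beacon-based indoor positioning (not necessarily the method ANTS will adopt), a signal-weighted centroid over the known access point positions already yields a coarse three-dimensional estimate:

def estimate_position(readings):
    """Weighted-centroid estimate from Bluetooth sightings.

    readings: list of ((x, y, z), strength) pairs, where each position
    is a known access point location and strength is any positive
    signal-quality measure (stronger is assumed to mean closer).
    """
    total = sum(s for _, s in readings)
    return tuple(sum(p[i] * s for p, s in readings) / total
                 for i in range(3))

# Three access points on two floors of a building (positions in metres).
aps = [((0.0, 0.0, 0.0), 0.8), ((10.0, 0.0, 0.0), 0.3), ((0.0, 8.0, 3.0), 0.5)]
print(estimate_position(aps))   # estimate biased towards the strongest AP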

In a recently funded project, InStory, we will explore interactive narratives that take place in the real world. This project will reuse and extend some of the components developed in the scope of the ANTS project, namely the 3D model server.

Acknowledgements

The ANTS project was funded by the Fundação para a Ciência e Tecnologia (FCT, Portugal) (project no. POCTI/MGS/34376/99-00). We would like to thank YDreams (www.ydreams.com) for support in the work described in this paper.

References

[1] Azuma R. A survey of augmented reality. Presence 1997;6(4):355–85.

[2] Feiner S, MacIntyre B, Höllerer T, Webster A. A touring machine: prototyping 3D mobile augmented reality for exploring the urban environment. In: Proceedings of the International Symposium on Wearable Computers '97, Cambridge, MA, USA, 13–14 October 1997. p. 74–81.

[3] Azuma R, Bishop G. Improving static and dynamic registration in an optical see-through HMD. In: Proceedings of SIGGRAPH '94, Orlando, FL, USA, 24–29 July 1994. p. 197–204.

[4] Rolland JP, Davis LD, Baillot Y. A survey of tracking technologies for virtual environments. In: Barfield W, Caudell T, editors. Fundamentals of wearable computers and augmented reality. Mahwah, NJ: Lawrence Erlbaum; 2001. p. 67–112.

[5] Harter A, Hopper A, Steggles P, Ward A, Webster P. The anatomy of a context-aware application. In: Proceedings of the Fifth ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '99), Seattle, WA, USA, August 1999. p. 59–68.

[6] Butz A, Baus J, Krüger A. Augmenting buildings with infrared information. In: Proceedings of the International Symposium on Augmented Reality (ISAR 2000), Munich, Germany, 5–6 October 2000. Silver Spring, MD: IEEE CS Press; 2000. p. 93–6.

[7] Cho Y, Lee J, Neumann U. A multi-ring fiducial system and an intensity-invariant detection method for scalable augmented reality. In: Proceedings of the First IEEE International Workshop on Augmented Reality '98, San Francisco, CA, USA, 1 November 1998. Natick, MA: A.K. Peters; 1998. p. 147–65.

[8] Yokokohji Y, Sugawara Y, Yoshikawa T. Accurate image overlay on video see-through HMDs using vision and accelerometers. In: Proceedings of IEEE Virtual Reality 2000, New Brunswick, NJ, USA, 18–22 March 2000. Silver Spring, MD: IEEE CS Press; 2000. p. 247–54.

[9] Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B. Recent advances in augmented reality. IEEE Computer Graphics and Applications 2001;21(6):34–47.

[10] Azuma R, Lee JW, Jiang B, Park J, You S, Neumann U. Tracking in unprepared environments for augmented reality systems. Computers & Graphics 1999;23(6):787–93.

[11] You S, Neumann U, Azuma R. Orientation tracking for outdoor augmented reality registration. IEEE Computer Graphics and Applications 1999;19(6):36–44.

[12] Höllerer T, Hallaway D, Tinna N, Feiner S. Steps toward accommodating variable position tracking accuracy in a mobile augmented reality system. In: Proceedings of the Second International Workshop on Artificial Intelligence in Mobile Systems (AIMS '01), Seattle, WA, USA, 4 August 2001. p. 31–7.

[13] Behringer R. Registration for outdoor augmented reality applications using computer vision techniques and hybrid sensors. In: Proceedings of IEEE Virtual Reality '99, Houston, TX, USA, 13–17 March 1999. Silver Spring, MD: IEEE CS Press; 1999. p. 244–51.

[14] Neumann U, You S. Natural feature tracking for augmented reality. IEEE Transactions on Multimedia 1999;1(1):53–64.

[15] Feldmann S, Kyamakya K, Zapater A, Lue Z. An indoor Bluetooth-based positioning system: concept, implementation and experimental evaluation. In: Proceedings of the International Conference on Wireless Networks (ICWN '03), Las Vegas, NV, USA, 2003. p. 109–13.

[16] Yoneyama Y, Shinoda S, Makino M. An indoor location system with Bluetooth. In: Proceedings of the International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC '02), Phuket, Thailand, 2002. CD-ROM.

[17] Curtis D, Mizell D, Gruenbaum P, Janin A. Several devils in the details: making an AR application work in the airplane factory. In: Proceedings of the First IEEE Workshop on Augmented Reality (IWAR '98), San Francisco, CA, USA, 1 November 1998. p. 47–60.

[18] Fuchs H, Livingston M, Raskar R, Colucci D, Keller K, State A, Crawford J, Rademacher P, Drake S, Meyer A. Augmented reality visualization for laparoscopic surgery. In: Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '98), Cambridge, MA, USA, 11–13 October 1998. p. 934–43.

[19] Webster A, Feiner S, MacIntyre B, Massie W, Krueger T. Augmented reality in architectural construction, inspection and renovation. In: Proceedings of the ASCE Third Congress on Computing in Civil Engineering, Anaheim, CA, USA, 1996. p. 913–9.

[20] Rekimoto J, Nagao K. The world through the computer: computer augmented interaction with real world environments. In: Proceedings of ACM UIST '95, Pittsburgh, PA, USA, 15–17 November 1995. New York: ACM Press; 1995. p. 29–36.

[21] Höllerer T, Feiner S, Terauchi T, Rashid G, Hallaway D. Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system. Computers & Graphics 1999;23(6):779–85.

[22] Dias AE, Silva JP, Câmara AS. BITS: browsing in time and space. In: CHI '95 Conference Companion, Human Factors in Computing Systems, Denver, CO, USA, 7–11 May 1995. p. 248–9.

[23] Wagner D, Schmalstieg D. First steps towards handheld augmented reality. In: Proceedings of the Seventh International Symposium on Wearable Computers '03, White Plains, NY, USA, 21–23 October 2003. Silver Spring, MD: IEEE CS Press; 2003. p. 127–37.

[24] Cavallaro R. The FoxTrax hockey puck tracking system. IEEE Computer Graphics and Applications 1997;17(2):6–12.

[25] Brooks Jr FP. What's real about virtual reality? IEEE Computer Graphics and Applications 1999;19(6):16–27.

[26] Ferreira JS, Costa M. A deterministic advection–diffusion model based on Markov processes. Journal of Hydraulic Engineering, ASCE 2002;128(4):399–411.

[27] Tufte ER. Visual explanations: images and quantities, evidence and narrative. Cheshire, CT: Graphics Press; 1997.