
Evolving TUIs with Smart Objects for Multi-context Interaction

Abstract

We present our ongoing work: an application framework created to extend the concept of natural and tangible interfaces to environments composed of many interactive systems distributed across an indoor space. In such environments, users can perform solo or collaborative activities using different systems (such as interactive tabletops or walls), interacting with them through tangible smart objects equipped with sensing, storage, processing, and wireless communication capabilities. The smart objects become the representatives of the user navigating the environment and, while retaining the basic affordance suggested by their shape, can assume different roles in relation to the system they approach. We investigated some application scenarios and present early observations on the design and implementation, as well as future directions.

Keywords

Tangible Interaction, Natural Interaction, Smart Objects, Wireless Sensors, Vision Systems.

ACM Classification Keywords

H.5.2 [User Interfaces]: Evaluation/methodology, Graphical user interfaces (GUI), Input devices and strategies, Interaction styles, Prototyping, User-centered design.

Copyright is held by the author/owner(s).

CHI 2008, April 5 – April 10, 2008, Florence, Italy

ACM 978-1-60558-012-8/08/04.

Stefano Baraldi, MICC, DSI, University of Florence, [email protected]
Luca Benini, Micrel Lab, DEIS, University of Bologna, [email protected]
Omar Cafini, Micrel Lab, DEIS, University of Bologna, [email protected]
Alberto Del Bimbo, MICC, DSI, University of Florence, [email protected]
Elisabetta Farella, Micrel Lab, DEIS, University of Bologna, [email protected]
Giulia Gelmini, School of Psychology, University of Nottingham, [email protected]
Lea Landucci, MICC, DSI, University of Florence, [email protected]
Augusto Pieracci, Micrel Lab, DEIS, University of Bologna, [email protected]
Nicola Torpei, MICC, DSI, University of Florence, [email protected]


Introduction

Applications that deal with browsing and exploring multimedia contents through direct and spontaneous actions are usually based on large interactive surfaces (tabletops or walls) [6]. Such systems rely on recognizing and analyzing the gestures of users' bare hands [2]. Some examples of multi-context systems found in the literature [10] highlight the need to go beyond simple hand gestures using the paradigm of Tangible User Interfaces (TUIs [9]), that is, introducing physical, tangible objects that the system interprets as embodiments of the elements of the interaction language. In this way, more complex and expressive interfaces can be implemented without compromising the natural approach to handling digital objects. Tangible objects can be anything the system's sensors can recognize. Usually they are passive objects whose unique identification code is either visually tagged on them or transmitted, bound to the particular areas where sensing can occur. Moreover, multi-context environments composed of many different interactive systems [10] do not fully support a seamless transition of the user's data and profile among different spaces and contexts, requiring the user, for example, to authenticate explicitly.

Our investigation in this research domain has led to the development of a framework called TANGerINE: TANGible Interactive Natural Environment [3]. In this work we present further developments of the framework to support cross-contextual interaction. Instead of embedding sensors in the environment to follow the user everywhere, our approach exploits the capability of every interactive system to establish a connection with nearby smart objects carried by the users. Such tangible objects are equipped with sensors and can transmit data over a wireless connection. A proximity-based awareness mechanism lets both the systems and the objects know when the user is moving across a system's boundaries, switching to the appropriate modality and role. Systems can rely both on computer vision and on the information processed and provided by the smart objects, exploiting this redundancy to improve recognition accuracy.

Interactive Environments

Our use case is an indoor environment containing several interactive systems (tabletops, walls, whiteboards, or other structures), each with a stable position. These systems present data in different ways, but with a consistent visual language, so that users can perform editing or fruition (content-browsing) activities on a common set of digital contents. We believe the user experience with these systems should be extended by considering other contextual information [5] regarding the environment as a whole, including the history of how the different systems are used by different users over time. The interactive environment is divided into different contexts (Figure 1), each of which is conceived and defined in relation to an interactive system. Contexts are in turn physically divided into areas:

1) Active Area: the space where users directly interact with the system's digital contents.

2) Nearby Area: the area immediately around the system, where users can still see the digital contents but cannot reach them directly.

Interactive systems like tabletops or walls engage users in editing and arranging digital multimedia contents. It should be possible for users to move across different interactive systems, transporting with them some of these contents as well as other metadata regarding their user profile and history of operations.

figure 1: The interactive environment: every system creates a different interactive context, divided into an Active and a Nearby Area. Users access the contexts by approaching them with their tangible smart objects.
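To make this model concrete, here is a minimal sketch of how contexts, areas, and the data carried by a smart object could be represented. All class names, fields, and thresholds are our own illustrative assumptions, not part of the actual framework:

```python
from dataclasses import dataclass, field

# Illustrative data model for the environment above: the smart object
# carries the user's profile, contents, and history between contexts,
# and each context classifies the object into one of its areas.
@dataclass
class SmartObjectState:
    cube_id: int
    user_profile: dict = field(default_factory=dict)  # identity, preferences
    contents: list = field(default_factory=list)      # transported media items
    history: list = field(default_factory=list)       # past operations and choices

@dataclass
class Context:
    system_name: str  # e.g. "tabletop-1" or "wall-A" (invented names)

    def area_of(self, rssi_dbm: float) -> str:
        # Hypothetical thresholds: a strong signal means the cube is at
        # the surface (Active Area); a weaker one means it is within
        # sight of the system (Nearby Area).
        if rssi_dbm > -55:
            return "active"
        if rssi_dbm > -75:
            return "nearby"
        return "outside"
```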

Instead of asking the user to explicitly authenticate every time he or she accesses another interactive context (which goes against the guidelines for natural interaction), or actively tracking the user across the space (which raises hard problems of robust and efficient recognition, requires fully instrumenting the environment with sensors, or forces users to wear some identification device), we modeled our scenario around another entity that becomes the real subject of the interaction. As users interact naturally with digital elements on interactive surfaces, they are now able to transport data across different contexts simply by carrying a tangible object with them. This possibility can be implemented using the paradigm of Tangible User Interfaces (TUIs). In this way, we designed a framework where all the interactive functions and sensing capabilities are embedded directly in the systems, which can also be moved and rearranged in space.

In the scenario described so far, we are interested in tangibles as the embodiment of some aspects of the interaction between the user and the domain of multimedia contents handled by the application. Such tangibles can assume different roles depending on the type of workflow an interactive system provides. If the system principally provides a "fruition" interface to digital contents, it presents the user with choices that can be selected using a personal smart object; consequently, as users move through the environment, a history of their choices accumulates and is associated with them, providing contextual information useful in subsequent experiences. If instead the system provides more production-oriented functions, in which digital contents can be arranged and manipulated, the production can be stored in the object and moved across systems.
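As a rough sketch of this role switch, reusing the SmartObjectState class from the earlier example; the workflow labels and the present_choices/export_production calls are hypothetical, not the framework's actual API:

```python
# Hypothetical dispatch of a smart object's role according to the
# workflow of the system it approaches.
def assume_role(workflow: str, cube: SmartObjectState, system) -> None:
    if workflow == "fruition":
        # The cube acts as a personal selector: the user's choice is
        # appended to a history that travels with the object.
        choice = system.present_choices(cube)  # hypothetical system API
        cube.history.append(choice)
    elif workflow == "production":
        # The cube acts as a container: the arrangement produced on this
        # system is stored on the object and carried to the next one.
        cube.contents.append(system.export_production())  # hypothetical API
```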

Sensing Architecture

The sensing architecture is made up of two components working together: the Smart Micrel Cube (SMCube) [3] and a computer-vision-based tracker.

Sensing of the Tangible Object

The SMCube (figure 2) is a wooden cubic case, 6.5 cm per side, that embeds a wireless sensor node with Bluetooth capabilities and, by default, inertial sensing. Its modular architecture provides the flexibility needed to extend the cube's sensing and actuation.


figure 2: The SMCube (left) and its infrared LED pattern (right).

Each SMCube is identified by a programmable ID number and can receive queries and commands, exchanging bidirectional information with the context in which it is placed. Its basic functionality consists of extracting tilt to determine which of the six faces is on top (or on the bottom) at a given instant; the result is stored, transmitted, and translated into visual feedback by activating the LEDs, as explained in our previous work [3]. A sketch of this face detection is given below.
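A minimal sketch of such face detection, assuming a 3-axis accelerometer reading in g and an invented axis-to-face mapping (this is not the SMCube firmware):

```python
# Derive the face currently pointing up from a 3-axis accelerometer.
# At rest, the axis aligned with gravity reads roughly ±1 g.
def top_face(ax: float, ay: float, az: float, threshold: float = 0.7):
    """Return a face index in 0-5, or None if no axis clearly dominates
    (cube in motion or resting on an edge)."""
    axis, value = max(enumerate((ax, ay, az)), key=lambda kv: abs(kv[1]))
    if abs(value) < threshold:
        return None
    # Assumed mapping: faces 0-2 are +X/+Y/+Z, faces 3-5 the opposites.
    return axis if value > 0 else axis + 3
```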

We also implemented a vibration-motor layer to provide tactile feedback to the user. Both the duty cycle and the duration of the motor feedback can be controlled, so that a range of tactile feedbacks can be selected through appropriate commands sent wirelessly. The SMCube can also be reprogrammed wirelessly thanks to the boot loader on its microcontroller. Moreover, the SMCube's functionality has been upgraded in two respects:

1) Manipulation state awareness. The accelerometer embedded in the SMCube makes it possible to tell whether the cube is being held by a person or is lying motionless on the tabletop. The detection is based on tilt and has been tested against noise due to structural vibrations of the furniture and accidental disturbances (keyboard typing, etc.). We also considered the disambiguation between the 'still on the table' and 'still in the hands' cases, taking into account hand tremor [8] and other cues; a classifier sketch is given after this list.

2) Bluetooth-based proximity awareness. The Bluetooth protocol's device-discovery capability can be exploited to learn the identity of neighboring devices and exchange proximity information: during the inquiry procedure, the RSSI (Received Signal Strength Indicator) of a given device can be read. The Bluetooth transceiver inside the SMCube therefore lets the system use the inquiry procedure to estimate the cube's proximity to the work area and decide to associate it automatically, after which the cube can be used to interact with the application. The inquiry can be repeated periodically to check the proximity of other SMCubes; similarly, an SMCube is disconnected when it is moved away from the system and its RSSI drops below a given threshold (see the second sketch after this list).
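For point 1, a sketch of a still/held classifier based on the variance of recent acceleration magnitudes; the window size and thresholds are illustrative, not the values used in the SMCube:

```python
from collections import deque

# Classify the cube as still on the table, still in the hands, or
# actively manipulated, from the variance of recent acceleration
# magnitudes: hand tremor keeps the variance slightly above the noise
# floor of a cube resting on furniture [8].
class ManipulationStateDetector:
    def __init__(self, window: int = 32, still_var: float = 5e-4):
        self.samples = deque(maxlen=window)  # recent |a| values, in g
        self.still_var = still_var

    def update(self, ax: float, ay: float, az: float) -> str:
        self.samples.append((ax * ax + ay * ay + az * az) ** 0.5)
        if len(self.samples) < self.samples.maxlen:
            return "unknown"
        mean = sum(self.samples) / len(self.samples)
        var = sum((m - mean) ** 2 for m in self.samples) / len(self.samples)
        if var < self.still_var:
            return "still_on_table"
        if var < 50 * self.still_var:
            return "still_in_hands"
        return "manipulated"
```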
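For point 2, a sketch of the association logic driven by periodic inquiries. The two thresholds form a hysteresis band (our own addition) so that a cube hovering near the boundary does not oscillate between states; the dBm values are purely illustrative:

```python
ASSOCIATE_RSSI = -60   # stronger than this: associate the cube
DISCONNECT_RSSI = -75  # weaker than this: the cube has left the context

def update_association(associated: set, inquiry_results: dict) -> set:
    """inquiry_results maps cube id -> RSSI (dBm) from the latest
    Bluetooth inquiry; returns the updated set of associated cubes."""
    for cube_id, rssi in inquiry_results.items():
        if cube_id not in associated and rssi > ASSOCIATE_RSSI:
            associated.add(cube_id)       # automatic, no explicit login
        elif cube_id in associated and rssi < DISCONNECT_RSSI:
            associated.discard(cube_id)   # the user has walked away
    return associated
```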

Sensing of the Active Area

Computer vision techniques are applied to detect and track the LEDs in the Active Area of the tabletop, in order to determine the cube's position and orientation on the surface. Since we are working with a multi-user system, the analysis of the LED pattern also yields each cube's unique visual ID [3]. Only simple image processing operations are required: noise removal, background subtraction, thresholding, and connected-components analysis.
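As an illustration of these four operations (not the authors' implementation, and with every parameter value assumed), the pipeline could look like this in OpenCV:

```python
import cv2

# LED blob detection on a grayscale tabletop frame: noise removal,
# background subtraction, thresholding, connected-components analysis.
def detect_led_blobs(frame_gray, background_gray, max_area: int = 100):
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)          # noise removal
    diff = cv2.absdiff(blurred, background_gray)               # background subtraction
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # thresholding
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep only small bright blobs, the LED
    # candidates whose spatial pattern encodes the cube's visual ID.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] <= max_area]
```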


Application Scenarios

We have started designing some application scenarios using the TANGerINE framework and have implemented working systems. The first project focuses on the interaction of a single SMCube with a whole audience, while the second is a prototype collaborative application integrating the tangible smart object with interactive tabletops.

TANGerINE Theatre

Interactive theatre is a field not yet fully addressed by the research community, and it is often treated as a performance in which the interaction is played out between human actors and virtual, multimedia contents, with no role for the audience.

An interesting work is presented in [7], where the Improvisational Theater Space is described. We agree with the motivation for bringing interactivity into improvisation theater: a pre-scripted performance would not give the audience the chance to actively "join" the performance. But we do not think such interaction must mean engaging the audience in the improvisation itself, on stage. This project concerns a new kind of long-form improvisation performance in which the audience is able to change the scenography.

Our idea starts from Jam Theater, a long-form improvisation performance (www.longform.it) conceived by the cultural association Contaminazioni Teatrali. In this performance the actors do not follow a pre-conceived script; they improvise stories inspired by a theme or a word suggested by the audience. The question is: how can we deepen this degree of interaction? We introduced a new tangible dialogue device to enable communication between audience and actors: by manipulating the SMCube, the audience can switch among six different multimedia scenographies projected on a large screen used as a frame for the stage. Each scenography is associated with a particular story improvised by the actors, so that the audience becomes, de facto, a "director-audience".
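In code, the audience-side mechanism reduces to mapping the cube's top face onto one of six scenographies, reusing top_face from the face-detection sketch; the scenography names and the projector call are invented:

```python
# Each of the six cube faces selects one projected scenography.
SCENOGRAPHIES = ["forest", "city", "sea", "desert", "castle", "space"]

def on_cube_sample(ax: float, ay: float, az: float, projector) -> None:
    face = top_face(ax, ay, az)  # from the face-detection sketch above
    if face is not None:
        projector.show(SCENOGRAPHIES[face])  # hypothetical projector API
```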

A first version of the performance was presented at the Creativity Festival 2007 in Florence, where a large audience enjoyed and joined the show.

TANGerINE Tales

Another promising application scenario for our system focuses on supporting children's face-to-face collaborative story-making.

Although extensive work has been carried out on the development and evaluation of children's collaborative story-making systems [1], we feel some aspects have not been fully addressed yet. First, only a few of the educational applications aimed at supporting children's story-making [1,4] focus on encouraging children to reflect on the story structure. Second, many educational applications have been designed to support children's collaboration when making a story, yet a few key aspects of collaboration still need attention. One of the main concerns when designing for collaboration is supporting distributed participation: in a synchronous setting such as face-to-face collaborative story-making, a system should enable multiple users to interact simultaneously. SMCubes support this by allowing each child to use his or her own cube to interact with the system at the same time as the other children. By switching the cube's functionalities, roles can also be fluidly reassigned to different children according to emerging or pre-defined activity scripts. The spatial configuration of the sensitive areas can also encourage different levels of engagement: a child in the Nearby Area can be involved in the activity peripherally, while a child in the Active Area takes the leading role.

Finally, lessons from the design of collaborative systems have stressed the importance of accounting for authoring identity, especially when the supported activity is an open-ended, creative one. Our system supports this: by delivering a clear history of the actions performed by each individual, it provides not only a strong motivational aspect for children's participation, but also a useful tool for educators to assess each child's level of participation in both quantitative and qualitative terms.
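Such a history could be as simple as the per-cube log sketched below; the field names and action labels are invented for illustration:

```python
import time

# Every story-making operation is stamped with the id of the cube that
# performed it, so one cube per child yields one author per entry.
def log_action(action_log: list, cube_id: int, action: str, target: str) -> None:
    action_log.append({"time": time.time(), "cube": cube_id,
                       "action": action,   # e.g. "add_scene", "reorder"
                       "target": target})

def participation_counts(action_log: list) -> dict:
    """Quantitative view: number of actions per cube (i.e., per child);
    the entries themselves support the qualitative assessment."""
    counts: dict = {}
    for entry in action_log:
        counts[entry["cube"]] = counts.get(entry["cube"], 0) + 1
    return counts
```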

Further Investigations

A full user evaluation has not been conducted yet. However, our initial experience with users suggests that the approach is very lightweight: especially on collaborative systems, we found that users promptly understood the paradigm and used the cube in a natural way after a few trial attempts. Our future work will address both technological advances in the sensing platform and application design. We are going to enrich the smart objects with new sensors to detect more events (e.g., capacitive sensors for grasp detection) and to reinforce estimates (e.g., a magnetometer to provide a measure of orientation in the Nearby Area as well), also exploiting sensor-fusion techniques. The sensing of the Active Area of the tabletop will be extended with a context camera and an algorithm able to track users in the Nearby Area.

Our very next step is to advance both the Theatre and Tales projects, and to create real-world scenarios of interactive environments featuring multiple tabletops and walls.

Citations

[1] Ananny, M. and Cassell, J., "TellTale: A toy to encourage written literacy skills through oral storytelling", in Text, Discourse and Cognition, Jackson Hole, WY, 2001.

[2] Baraldi, S., Del Bimbo, A., Landucci, L., and Valli, A., “wikiTable: finger driven interaction for collaborative knowledge-building workspaces”, In Proc. of IEEE CVPRW, 2006.

[3] Baraldi, S., Del Bimbo, A., Landucci, L., Torpei, N., Cafini, O., Farella, E., Pieracci, A., and Benini, L., “Introducing TANGerINE: a TANGible Interactive Natural Environment”, In Proc. of ACM MULTIMEDIA, 2007.

[4] Cappelletti, A., Gelmini, G., Pianesi, F., Rossi, F., and Zancanaro, M., "Enforcing Cooperative Storytelling: First Studies", in Proc. of ICALT, IEEE Computer Society Press, Joensuu, Finland, 2004.

[5] Dey, A., Kortuem, G., Morse, D., and Schmidt, A. “Situated Interaction and Context-Aware Computing”, Personal Ubiquitous Comput. 5, 2001.

[6] Mazalek, A., Reynolds, M., and Davenport, G., "TViews: An Extensible Architecture for Multiuser Digital Media Tables", IEEE Comput. Graph. Appl. 26, 2006.

[7] Sparacino, F., Hall, K., Wren, C., Davenport, G., and Pentland, A., "Improvisational Theater Space", Symposium on Arts and Technology, 1997.

[8] Strachan, S., Murray-Smith, R., “Tremor as an Input Mechanism”, UIST 2004.

[9] Ullmer, B. and Ishii, H., "Emerging Frameworks for Tangible User Interfaces", IBM Systems Journal 39 (3&4), 2000.

[10] Wakkary, R., Hatala, M., “ec(h)o: Situated Play in a Tangible and Audio Museum Guide”, Designing Interactive Systems, 2006.
