Simulation Driven Experiment Control in Driver Assistance Assessment


Andreas Riener and Alois Ferscha
Johannes Kepler University Linz, Institute for Pervasive Computing

Altenberger Str. 69, A-4040 Linz, Austria
Tel. +43/7326/3343-920, Fax. +43/732/2468-8524
{riener,ferscha}@pervasive.jku.at

Abstract

Embedded systems technologies and advances in microelectronics have accelerated the evolution of driver assistance systems towards more driving safety, comfort, entertainment, and wayfinding. This technological and quality progress, however, is accompanied by growing interaction complexity, information overload, and intricate interface designs. Assessing interaction designs for in-car assistance services is an emerging and vibrant field of research.

To avoid situations of possibly fatal danger when assessing driver assistance services in real driving situations, we propose trace-driven simulation to steer the experiments with users in an automotive driving simulator. Based on our own developments of driver assistance systems involving the sense of touch, i.e. exploiting haptics as a communication channel in vehicle-to-driver interactions, we demonstrate how pre-recorded traces of driving situations can control user studies.

Our experiments show that simulated driving journeys are a viable alternative to the more hazardous "on-the-road" user studies. With respect to haptics as an additional channel of communication, we find that vibro-tactile stimuli are a promising means to raise driver attention when the visual and auditive channels fail due to overload.

Keywords. Trace-Driven Simulation, Vibro-tactile Feedback, Multimodal Interfaces, Haptic Perception, Driving Performance, User-centered Design, Realtime.

1 Motivation

Vehicle handling has become a more and more challenging task because of (i) an increasing number as well as complexity of IT services in cars; (ii) excessive or even overstrained use of the visual and auditive information channels (capacity should be freed to keep the focus on the main activity of driving); (iii) traditional interaction paradigms, which are often not qualified for in-car appliance control (input devices such as full-functional keyboards or mice are not available, and output capabilities are limited due to fixed indicator elements in the dashboard or small, low-resolution screens; thus, in-vehicle I/O has to be reworked or replaced by adequate alternatives [1]); and (iv) the person-dependency and environmental sensitivity of the auditive and visual information channels (voice is affected by a multitude of user-specific parameters like age, gender, cognitive load, or emotional stress, and ambient noise is furthermore responsible for distortion of spoken instructions [2, p.368]; face or image detection often suffers from illumination and background variation problems; slight changes in pose and/or illumination produce large changes in an object's visual representation [3], which results in performance loss in visual identification tasks [2, p.21f], [4]).

In general, less than 20% of all information is perceived with the sense of touch; this is little compared to the 70% to 80% of information gathered via the visual and auditive sensory modalities (approximately 80% of all sensory input is received via the eyes [5], [6], and another 15% via the ears [7, p.41]). It nevertheless opens new perspectives for future vehicle interaction by assisting today's permanently loaded eyes and ears with an additional information channel.

With this paper we propose a haptic display for vehicle-driver notifications, integrated in the car seat and back. Experiments investigating the potential and performance of haptics, and its dependence on human parameters such as reaction time and age, were carried out in a stationary car using a trace-driven simulation approach, in order to avoid road accidents and protect test persons from hazardous situations. The aims of this paper are:

(i) A comparison of the adequacy and accuracy of haptic stimulation in contrast to vision and sound, considering the following assumptions:

(a) A test driver reacts to haptic sensations as fast as to heard or seen stimuli. If this expectation turns out to be correct, we can recommend the transmission of notifications from (at least secondary) tasks in vehicles by haptics.

(b) Mean reaction times for the three interaction modalities are strongly person-dependent, and the best-suited modality therefore varies with the individual person. Furthermore, we believe that specific driving situations could probably be resolved faster with a dedicated modality.

(c) Reaction time to haptic stimulation decreases with the progress of the experiment, caused by the fact that haptic feedback is unusual and the user therefore needs to get trained on it.

(ii) A qualitative assessment of driver-vehicle interface performance and closeness to reality when replacing on-the-road driving experiments with trace-driven simulation.

Outline

The paper is organized as follows: The next section gives a detailed overview of related work in human-vehicle interaction influencing the presented work. Section 3 describes the experimental setting as well as the utilized hardware and software components; furthermore, experiment execution and regulations are stated there. Section 4 presents and discusses the findings of the conducted experiments, and the concluding Section 5 summarizes the paper.

2 Related Work

A constantly rising cognitive load is caused by factors like (i) the increasing number of cars and road signs on the streets; (ii) advanced driver assistance systems, advising the driver about dangerous situations but also notifying him/her of relatively unimportant system messages; (iii) the increasing number and complexity of in-car infotainment systems, demanding the driver's attention for vehicle- and traffic-independent messages. Rising information needs on the street and in the vehicle increase the risk of driver distraction by taking the driver's eyes off the road and hands off the steering wheel [8], [9], [10]. The development of safer driver-vehicle interfaces (DVI), without compromising the primary function of driving, and considering the full range of operator behaviour (age, reaction time, etc.), has become an ever more important challenge in vehicle design.

Vilimek et al. stated that a single modality does not allow for efficient interaction across all tasks, while multimodal interfaces enable the user to (implicitly) select the best-suited one [9]. According to Erdogan et al., every modality has its own importance and improves particular parts of the recognition system [11]. McCallum et al. report an increase in driving performance and a diminishing cognitive workload of the driver when using speech interfaces instead of "normal" interaction paradigms for in-vehicle device control (but without considering the well-known constraints of speech interfaces, e.g. environmental noise, the driver's constitution, interference with conversations, etc.) [10, p.32]. These publications strengthen our resolve to experiment with different (combinations of) sensory modalities.

Haptic Sensations

In [12], Bengtsson et al. reported on using haptic interfaces for improving human-machine interfaces (HMI) without increasing the visual load on the driver (stressed by the necessity of dealing with rising interaction complexity in vehicles, caused by an increasing number of in-vehicle comfort functions).

Amditis et al. have presented design aspects for future automotive environments, focusing on the optimization and adaptation of the HMI to driver, vehicle, and environment [13]. One objective of the project was to find the best way to provide information to the driver, concentrating on the information sources available in cars (the vision, sound, and touch sensory modalities) and the driver's workload at a specific time. An adaptive integrated driver-vehicle interface concept has been proposed as an early result.

Information about an event takes different amounts of time to be processed, depending on the sensory channel through which the event was perceived. Harrar and Harris performed reaction time experiments with visual, auditive, and haptic stimuli (presented in random order). They found that reaction times were stable following repeated exposures [14]. These results have been used in the definition of our experiment.

Jones et al. [15] investigated tactile cueing of visual spatial attention. Test persons had to perform a visual change detection task following the presentation of a tactile spatial cue on their back (each location corresponding to one of the four quadrants of a computer monitor). They showed that the cross-modal attentional link between touch and vision is natural and easily established, and that this link is largely a learned strategic shift in attention (it can be broken, e.g. when participants are verbally informed that they should ignore the cues). This seems to be an indicator of the implicit, non-distracting perception of haptic stimuli.

In [16], Ho et al. report on experiments with vibro-tactile warning signals for presenting spatial information to car drivers. Their results show that the presentation of such stimuli on the torso can lead to a shift of visual attention (e.g. a time-critical response to visual events seen in distal space), which confirms our assumption of the potential of vibro-tactile cues for attracting added attention (and their use for at least secondary tasks in vehicle handling).

Human Reaction Times

The investigation of human reaction times has been of scientific interest for some 70 years, but only in the last decades has increasing effort been directed at driver reaction times in the automotive field. Much of the research in this field stems from the early work of Robert Miller (and his 1968 paper on performance analysis). Exemplarily, in [17] he proposed that an ideal response time, which is essential to know when designing convenient user interfaces, should not be longer than 2 seconds (later, in [18, p.63], this result was confirmed by Testa et al.). Furthermore, it has been found that mean simple reaction times for light and sound stimuli are below 200 ms, and that reaction times to light stimuli are approximately 20% higher than those to sound stimuli1.

We agree with the definition of human reaction time given by Dix [19]: "human reaction time is defined as the time it takes a human to react to an event such as a light or a sound". Shneiderman et al. [20] and Teal et al. [21] define reaction time from the opposite side, as system response time: "The computer system's response time is the number of seconds it takes from the moment users initiate an activity until the computer begins to present results", see Figure 1. In [22, p.4] and [20] it was assumed that system delay has an effect on user performance, and that this effect is evidenced by increased user productivity at decreased system response times. Triggs and Harris

[Figure 1: timeline model with the phases "user initiates system activity", SYSTEM RESPONSE TIME (computation, overhead, delay), "system starts activity response", EXECUTION TIME, "system completes activity processing", HRT (human reaction time), "user prepares system input", and USER DELAY, up to the next "user initiates system activity".]

Figure 1. Model of system response time (adapted from Shneiderman [20], Teal [21]).

mentioned that human reaction time depends (linearly) on the number of possible alternatives that can occur [23, p.4]. As one result, cognitive activity is limited to a small number of items at any one time; Testa and Dearie discovered in [18] that this number is between five and nine (with more items being grouped, sequenced, or neglected).

Age-related Impact on Performance

It has been evidenced that the accuracy of perception is affected by age, exemplarily for haptic stimuli in [24]: there it was found that the threshold mediated by the Pacinian mechanoreceptors increases by 2.6 dB per 10 years (measurements on the fingertips). Response time when processing ordinary stimuli (e.g. hitting the brake pedal of a car when the traffic light turns red) increases with age. Analysis of traffic accidents in Finland shows a drastic age-proportional increase, caused by the declining speed of information processing and response [25]. On the other hand it has been determined, for instance by L. Breytspraak, that experience with a specific task apparently compensates for the decline with age [26]. In [27], Shaffer and Harrison confirm that (i) human Pacinian corpuscles (PC) decrease in number with advanced age, (ii) vibro-tactile sensitivity involving PC pathways becomes impaired with age, and (iii) older adults (mean age 68.6 years) required significantly greater amplitudes of vibration to achieve the same perceived magnitude of sensation as younger subjects (mean age 23.5 years). Likewise, Smither et al. found that older people experience a decline in the sensitivity of the skin and also have more difficulty discriminating shapes and textures by touch [28].

1 Reaction Times, URL: http://biae.clemson.edu/bpc/bp/Lab/110/reaction.htm#Kinds, retrieved July 29, 2008

The measurement and interpretation of human reaction times has a long history, but was particularly investigated for traditional human-computer interfaces. Since traditional interaction paradigms are mostly not suitable for vehicle handling, evaluations of reaction times have to be repeated in cars, considering the available interaction modalities and facts such as the aging population (as there is evidence that stimulus perception is age-dependent). In order to prevent road casualties and protect test participants and other drivers from accidents, a trace-driven simulation should be used instead of classical on-the-road studies.

3 Experimental Design

Convincing evaluation of human reaction times in the automotive domain, considering visual, auditive, and haptic stimuli, is normally carried out using real driving experiments. We propose a trace-driven simulation approach instead, mainly for two reasons:

(i) Repeatability: When performing real driving journeys for each test person, the ride could not be reproduced because of the high dynamics of road traffic.

(ii) Equality: An identical experiment realization for every test attendee can only be guaranteed with simulation, using a pre-recorded trace of a real trip.

In addition, by preventing road accidents and casualties, and by securing test participants from injuries, the simulation-driven approach provides further benefits over on-the-road studies.

The next paragraphs give an overview of the experimental design, the utilized software and hardware components, and the processing of the experiment itself.

Signal       | Visual                      | Auditory             | Haptic
Turn Left    | Symbol "Left"               | "Turn Left..."       | All 8 left tactors are
             | (superimposed on the video) | (spoken instruction) | activated simultaneously
Turn Right   | Symbol "Right"              | "Turn Right..."      | All 8 right tactors are
             | (superimposed on the video) | (spoken instruction) | activated simultaneously
Lights On*)  | Symbol "Lights on"          | "Lights On..."       | All 6 tactors on the
             | (superimposed on the video) | (spoken instruction) | seat are oscillating
Lights Off*) | Symbol "Lights off"         | "Lights Off..."      | All 6 tactors on the
             | (superimposed on the video) | (spoken instruction) | seat are oscillating

*) The light switch is a binary item; therefore the same (haptic) patterns can be used for switching on and off.

Table 1. Evaluated activities and corresponding feedback signals.

Taping: Prior to the experiment we recorded a driving scenario across the city of Linz, Austria, with controlled and uncontrolled crossings, road tunnels, and freeway components. Waiting times at crossings and insubstantial or pointless driving sections were excluded from the taped run, so that we finally obtained a video of 11 min. 22 sec. in length.

[Figure 2: comparison of perception times; a misconfigured auditory instruction ("Respected sir, at the next crossing please prepare to turn left...") takes considerably longer to perceive than the revised one ("Turn left..."), the haptic signal, or the visual signal.]

Figure 2. Perception times of the individual feedback channels have to be aligned to each other in order to get meaningful results.

Tagging and Instruction Editor: After that, the video was integrated into our evaluation application using the Java Media Framework (JMF), version 2.1.1e. 44 points in the time line were tagged as initial positions for later triggering specific user actions – initially, only the four activities "turning left", "turning right", "low-beam lights on", and "low-beam lights off" were differentiated and evaluated. For each of these activities we defined a visual, auditory, and haptic signal (see Table 1 for details). For the tagging task we internally agreed to tag only actions valid at the specific points in time (this means, e.g., that a test participant cannot receive a "turn left" request in a situation where left turns are impossible in the corresponding video section).

For the assignment of vehicle-control instructions to specific points in the video (to which test participants should later react during the experiment), we implemented the "instruction editor". It is a component of the software framework which enables us to define, modify, and store individual instruction sets per video in an .eli-file (= event list). For the current trace-driven simulation experiment we used this editor only to inspect and adjust the individual trace lines recorded during the journey as described above.

The parameter list of the instruction editor is extendable and currently supports the selection of visual, auditive, and haptic notifications. The duration of a user notification can be set individually or assigned automatically by the software. It is possible to choose a specific interaction modality for each instruction point (e.g. to test one driving situation with haptic instructions only), or to let the application select one randomly (which is preferred and was used in these experiments). Depending on the modality, different visuals, audio files, or vibro-tactile patterns can be declared. Additional parameters can be specified for each modality, for example vibration intensity, frequency, duration, occurrence, etc. for the haptic feedback channel. The principal structure and parameters of an .eli-file (event list), as well as a short example composed with the instruction editor, are shown in Listing 1 below.

Listing 1. Valid task identifiers and their parameters

ASSIGNED IDs:
0 ... Visual Task
1 ... Auditive Task
2 ... Haptic Task
3 ... Random Task (one out of 0, 1, or 2)

STRUCTURE OF INDIVIDUAL TASKS (0, 1, 2, 3):
ID;Task Name;Stop Sign;Trigger Time (ms);Serial ID;Image Path;Label
ID;Task Name;Stop Sign;Trigger Time (ms);Serial ID;Sound Path
ID;Task Name;Stop Sign;Trigger Time (ms);Serial ID;Touch Path
ID;Task Name;Stop Sign;Trigger Time (ms);Serial ID;Image Path;Sound Path;Touch Path;Label

EXAMPLES (SINGLE TASKS ONLY):
0;Right1;s;22484;Turn Right;C:\driveSim\images\turnRight.jpg;right
2;Left1;a;40030;Turn Left;C:\driveSim\haptics\turnLeft.bis
1;Right2;s;69722;Turn Right;C:\driveSim\sound\turnRight.wav
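To make the line format concrete, the following sketch parses such semicolon-separated task lines. The `EliTask` class and its field names are our own illustrative naming, not part of the published framework; only the field order follows Listing 1.

```python
# Illustrative parser for the semicolon-separated task lines of an .eli
# event list (field order as in Listing 1). All names are our own.
from dataclasses import dataclass

# Task IDs as assigned in Listing 1.
MODALITY = {0: "visual", 1: "auditive", 2: "haptic", 3: "random"}

@dataclass
class EliTask:
    task_id: int          # 0 = visual, 1 = auditive, 2 = haptic, 3 = random
    name: str             # e.g. "Right1"
    stop_sign: str        # flag column from the trace line
    trigger_time_ms: int  # position in the video time line
    serial_id: str        # e.g. "Turn Right"
    payloads: tuple       # remaining fields: image/sound/touch paths, label

def parse_eli_line(line: str) -> EliTask:
    fields = line.strip().split(";")
    return EliTask(
        task_id=int(fields[0]),
        name=fields[1],
        stop_sign=fields[2],
        trigger_time_ms=int(fields[3]),
        serial_id=fields[4],
        payloads=tuple(fields[5:]),
    )

task = parse_eli_line(r"0;Right1;s;22484;Turn Right;C:\driveSim\images\turnRight.jpg;right")
print(MODALITY[task.task_id], task.trigger_time_ms)  # visual 22484
```

The variable-length tail (`payloads`) absorbs the per-modality differences between the four line layouts, so one parser covers all task types.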

Sequence Creator: A toolset for defining and organizing a library of individual vibro-tactile patterns, the so-called "tactogram alphabet". A "tactogram" specifies the dynamic behaviour of all vibro-tactile elements (tactors) in a system. Tactograms can be loaded from the file system, changed, and stored again. Pattern files are indicated by the file extension .bis (= board instruction set). This application is predestined to be used as a rapid-prototyping tool for haptic patterns. Individual templates can be defined, with the possibility of varying the following parameters: (i) arbitrary selection of tactors at each step in the instruction set; (ii) vibration frequency, selectable over the full tactor range (in 10 Hz steps from 10 to 2500 Hz, with a nominal center frequency of 250 Hz); (iii) four discrete gain levels; (iv) activation and pause periods, freely configurable with ms resolution (known as the pulse-pause ratio). Additionally, there is no limit on the length and complexity of an instruction list.

A set of on-the-fly defined (or loaded) instructions can be transmitted directly to the tactor system and evaluated instantly. If the perception of the tested pattern is unpleasant, it can be changed immediately at runtime. We have integrated a functionality to verify tactograms without a connected tactor system, simply by inspecting the patterns visually on the "visual tactor board".
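One step of such a tactogram can be captured in a small data structure. The sketch below only encodes the parameter ranges stated above (16 tactors in two strips, 10 Hz frequency steps from 10 to 2500 Hz, four discrete gain levels, ms-resolution pulse/pause periods); the class and field names are our own, not the .bis file format.

```python
# Minimal sketch of one tactogram step, validating the parameter ranges
# stated in the text. Class and field names are illustrative.
from dataclasses import dataclass

NUM_TACTORS = 16            # two strips of eight (see Figure 3)
GAIN_LEVELS = (0, 1, 2, 3)  # four discrete gain levels

@dataclass
class TactogramStep:
    active_tactors: frozenset  # arbitrary subset of tactor indices 0..15
    frequency_hz: int          # 10..2500 Hz in 10 Hz steps
    gain: int                  # one of the four discrete levels
    pulse_ms: int              # activation period, ms resolution
    pause_ms: int              # pause period, ms resolution

    def __post_init__(self):
        if not all(0 <= t < NUM_TACTORS for t in self.active_tactors):
            raise ValueError("tactor index out of range")
        if not (10 <= self.frequency_hz <= 2500 and self.frequency_hz % 10 == 0):
            raise ValueError("frequency must be 10..2500 Hz in 10 Hz steps")
        if self.gain not in GAIN_LEVELS:
            raise ValueError("gain must be one of the four discrete levels")

# "Turn right": all eight tactors of the right strip, driven at the
# 250 Hz nominal center frequency of the C-2 tactors.
turn_right = TactogramStep(frozenset(range(8, 16)), 250, 3, 200, 100)
```

A sequence of such steps would correspond to one pattern file; validating each step at construction time mirrors what the Sequence Creator's visual verification achieves interactively.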

Mapping: For the definition of activities we attempted to find intuitive mappings, and furthermore followed the rule that each of the three signals for a specific activity should be recognizable in approximately the same amount of time (as explained in Figure 2). We discussed our mapping suggestions in detail and changed them several times; finally, we specified the mappings as stated in Table 1. It is quite plain that the time for unique identification of patterns would increase if the number of possible samples were raised (see Testa and Dearie [18]). In the present case, the number of samples is kept constant for the entire experiment and across the different feedback channels.

Hardware Portion and Experiment Processing: The present study was conducted in a parked car; it was the first in a series of data acquisition experiments in vehicles. The system itself has been designed universally, so that further simulated or on-the-road experiments can be conducted with the same setting. An autonomous power supply system was provided, transforming the 12 V on-board DC voltage to 230 V alternating current (required by the notebook computer, vibro-tactile controller, etc.). The utilized vibro-tactile actuators (tactor drivers) were selected according to their capability of providing a strong, pointlike sensation that can be easily felt and localized by persons on their body, even through clothing.

The experiments were conducted in a comfort station wagon (type Audi A80) which was parked in a single garage near the university campus. In preparation for this experiment, the following arrangements were made (see the images in Figure 4 as well as the second picture of Figure 5 for an overview of the experimental setup and placement): (i) the software framework for experiment processing and data acquisition was executed on a notebook computer; (ii) sensors and actuators were connected to this notebook computer via standard USB ports; (iii) a video beamer with high luminous intensity was mounted on the roof of the car, projecting the pre-recorded road journey onto a 2x3 meter projection screen placed ahead of the front windshield, so that test participants could see the entire video while sitting in the driver's seat; (iv) auditive feedback was delivered over stereo headphones (to prevent distraction by unintentional environmental noise); (v) visual instructions were displayed on the projection screen, superimposed on the video of the road journey; (vi) vibro-tactile feedback was given with 16 C-2 linear tactors (arranged in two strips of eight, as shown in Figure 3); (vii) users' reactions at the direction-indicator control or the light switch were captured as electrical signals by an Atmel AVR ATmega8 microcontroller (placed on an STK500 development board and extended with a voltage regulation circuit; see the left picture in Figure 5) and passed on to the computer running the playback and evaluation application; (viii) during video playback, the synchronized trace engine processes events and passes them to a random generator, which selects one of the three feedback modalities and transmits the associated notification signal to the test person in the car (activating either the visual, auditory, or haptic channel). Simultaneously, a timer is started, measuring the delay between the notification and the corresponding user (re)action; the latter is captured from the real turn indicators or light switches.

[Figure 3: setup of the vibro-tactile seat (two stripes of 8 elements each) and visual representation of two patterns: "TURN RIGHT" (8 tactors vibrating simultaneously) and "SWITCH LIGHTS" (same signal for on/off), with no/low/high vibration levels indicated per tactor.]

Figure 3. Setup of the vibro-tactile seat used in the experiment and visual representation of two patterns (or "tactograms") for right turns and switching the lights on/off.

The experiment procedure itself, which takes about 15 minutes, is fully automated (a supervisor only ensures correct processing of the experiment). For each event enqueued by the trace engine, a dataset is stored containing the notification time, the channel used for the feedback, the user's reaction time, and the switch first activated by the user (to determine whether the driver activated an incorrect switch instead of the right one, e.g. a "lights on" switch instead of the "left-turn indicator" switch).
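The per-event cycle described above (random channel selection, notification, reaction timing, dataset storage) can be sketched as follows. All function and field names here are illustrative, not the framework's actual API; the real system receives the switch signal from the microcontroller rather than from a callback.

```python
# Sketch of the per-event cycle of the trace engine: pick one of the three
# feedback modalities at random, notify the driver, and time the delay
# until the first switch actuation. All names are illustrative.
import random
import time

MODALITIES = ("visual", "auditory", "haptic")

def run_event(task_name, notify, wait_for_switch):
    """notify(modality) presents the stimulus; wait_for_switch() blocks
    until the first activated switch is reported and returns its id."""
    modality = random.choice(MODALITIES)  # random generator selects channel
    notify(modality)
    t0 = time.monotonic()                 # timer starts with the notification
    switch = wait_for_switch()
    reaction_ms = (time.monotonic() - t0) * 1000.0
    # One dataset per event: feedback channel, reaction time, and the switch
    # actually hit (to detect wrong responses, e.g. lights instead of the
    # left-turn indicator).
    return {"task": task_name, "channel": modality,
            "reaction_ms": reaction_ms, "switch": switch}

record = run_event("Left1", lambda m: None, lambda: "left_indicator")
```

Storing the first-activated switch alongside the reaction time is what later allows wrong responses to be distinguished from merely slow ones.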

4 Evaluation and Results

For the present study, the positions of the vibro-tactile actuators in the seat (respectively on the driver's body), as well as the activation frequency and intensity, were configured to provide optimal stimulation of the Pacinian corpuscles (the kind of receptors with the highest sensitivity to vibrations). As discussed in the Related Work section, there is evidence

Trait    Min      Max      Mean     Median   Std.Dev.

All (18 subjects, 15 male, 3 female)
Age      18.00    38.00    25.00    25.00    5.12
Weight   50.00    120.00   81.06    75.00    19.29
Size     167.00   197.00   178.94   178.00   6.87
DEY*)    1.00     20.00    7.11     6.00     4.92

*) "DEY" stands for "Driving experience in years".

Table 2. Relevant personal statistics of experiment participants.

that perceived vibration intensity as well as reaction accuracy do not remain constant over a lifetime. Considering these issues, we selected test participants according to their age – a narrow age range (and thus a small standard deviation σ) should improve the simulation results by eliminating age-related distortions. Finally, the experiment was conducted with 18 test persons (15 male, 3 female) – university students, research staff, and friends – in the age range from 18 to 38 years, all with a valid driving licence. Table 2 gives a summary of the test participants' personal statistics.

Briefing: Before starting the simulated driving journey, each test attendee was briefed shortly about the requirements, expectations, and goals of the experiment. Afterwards, the test was started immediately, without a run-in test. This possibly causes longer reaction times for the first few stimulation events and furthermore influences the linear trend lines (which expect a correlation of decreasing reaction times with the progress of the experiment), as depicted in Figure 6.

Results: Figure 7 shows the reaction times separately for each of the three utilized notification channels: vision, sound, and touch (5% confidence interval, 752 datasets).

Figure 4. The garage with projection screen, data-acquisition vehicle, and processing equipment.

Figure 5. ATmega8-16PU microcontroller, placed on an STK500 development system, with external connectors and voltage regulation circuit (left image); schematic representation of the experimental setting for trace-driven simulation in a garage (middle image); video frame with superimposed control window for showing visual notifications (right image).

The sensory channel used for a particular notification was selected randomly, which implies that the number of tasks for the three modalities is not uniformly distributed (depending on the quality of the random number generator). Tasks on the x-axis are sorted according to their occurrence over all test attendees (e.g. for the sense of touch: first, the reaction times of all eighteen "first" haptic stimuli are selected, then the reaction times of all "second" haptic notifications, and so on).
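This regrouping by occurrence index can be illustrated with a short sketch; the function name and sample data are hypothetical, and the subjects' lists have different lengths precisely because channels were assigned randomly.

```python
# Illustrative regrouping used for the x-axis of Figure 7: per modality,
# take every subject's first stimulus, then every subject's second, etc.
from itertools import zip_longest

def by_occurrence(per_subject_times):
    """per_subject_times: one list of reaction times per subject (for one
    modality). Returns the times reordered as all 1st stimuli, all 2nd, ..."""
    out = []
    for column in zip_longest(*per_subject_times):
        out.extend(t for t in column if t is not None)
    return out

# Three hypothetical subjects' haptic reaction times (ms):
print(by_occurrence([[700, 650], [720, 640, 600], [690]]))
# -> [700, 720, 690, 650, 640, 600]
```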

Comparing these three stem diagrams visually already allows us to identify that the fastest responses were given to haptic stimulation, followed by visual and auditive notifications. Furthermore, the linear trend line of reaction times points downwards for all three modalities, meaning that learning improves reaction performance (the significance of the improvement from training can be deduced from the gradient of the trend line; the auditive sensory modality performs a little better than the haptic one, while the decline of the visual channel is nearly zero).
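The gradient used here is the slope of an ordinary least-squares line fitted over the task index. A self-contained sketch (the original analysis tooling is not specified; the function name is hypothetical):

```python
def linear_trend(reaction_times_ms):
    """Ordinary least-squares fit y = a + b*x, where x is the task
    index (0, 1, 2, ...).  A negative slope b means reaction times
    decrease with practice, i.e. a learning effect; the magnitude of
    b indicates how strong the training improvement is."""
    n = len(reaction_times_ms)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(reaction_times_ms) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, reaction_times_ms))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope
```

Applied per modality, a strongly negative slope (auditive) indicates a clear training effect, while a slope near zero (visual) indicates little to none.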

Based on our evaluations (summarized in Table 3), we can state the results in more detail (considering the 5% confidence interval): the mean reaction time to haptic notifications (x̄h = 690.62 ms) is 13.56% faster than to visual (x̄v = 784.25 ms), and 63.56% faster than to auditive

Confidence Interval 3% (768 Datasets, 96.97%)
Trait       Min      Max      Mean    Median   Std.Dev.
           xmin     xmax       x̄        x̃         σ
ALL       281.0  4,829.0     931.4     828.0     466.4
Visual    391.0  4,829.0     843.5     703.0     478.3
Auditive  641.0  3,532.0   1,149.0   1,078.0     325.0
Haptic    281.0  4,156.0     705.4     641.0     341.3

Confidence Interval 5% (752 Datasets, 94.94%)
Trait       Min      Max      Mean    Median   Std.Dev.
           xmin     xmax       x̄        x̃         σ
ALL       281.0  1,985.0     889.2     812.0     349.9
Visual    391.0  1,922.0     784.3     703.0     295.8
Auditive  641.0  1,984.0   1,129.6   1,078.0     269.6
Haptic    281.0  1,625.0     690.6     641.0     255.9

Table 3. Statistics on reaction times [ms] for two confidence intervals, separated for each modality.

(x̄a = 1,129.61 ms) stimulation. The improvement in response times with haptic notifications is very promising and supports our position of using the sense of touch to provide system feedback to the user for time-critical tasks (in a first step at least for less important tasks, with the aim of reducing the workload on the auditive and visual sensory modalities).
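The per-modality descriptive statistics of Table 3 can be reproduced with a short routine. Note that the paper's exact rule for restricting the data to a confidence interval is not spelled out here; the sketch below assumes symmetric percentile trimming of the most extreme samples, which is only one plausible interpretation:

```python
import statistics

def trimmed_stats(samples_ms, keep_fraction=0.95):
    """Discard the most extreme (1 - keep_fraction) of the samples
    symmetrically from both tails (an assumption; the original
    trimming rule may differ), then report the descriptive
    statistics listed in Table 3."""
    data = sorted(samples_ms)
    drop = round(len(data) * (1 - keep_fraction) / 2)
    kept = data[drop:len(data) - drop] if drop else data
    return {
        "min": kept[0],
        "max": kept[-1],
        "mean": statistics.fmean(kept),
        "median": statistics.median(kept),
        "stdev": statistics.stdev(kept),
    }
```

Running this once per modality (and once over all samples) yields one row of the table per call.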


Figure 6. Reaction times for auditive, visual and haptic stimuli for a 5% confidence interval (from top). Mean reaction time is lowest for haptic stimuli; the linear trend line on reaction times faces downwards for all three notification modalities.


Figure 7. Histograms showing reaction times for the three notification modalities sound, vision, and touch (confidence interval as in the left column). Reaction to haptic notifications performs best, followed by visual and auditive sensations.

5 Conclusions

We have implemented a novel vehicle-control system based on vibro-tactile feedback, and evaluated it in a simulated navigation scenario against the commonly used interaction modalities vision and sound. To obtain an all-purpose test environment with similar conditions for each test participant and reproducible results, as well as to prevent casualties and protect drivers from accidents, we used a trace-driven simulation approach to conduct the steering experiments for vehicle navigation instead of an on-the-road evaluation. The simulation trace had been recorded prior to the experiment, together with a movie of the driving trip. During experiment execution in a parked car, the trace engine processes the pre-recorded trace file and executes driving tasks precisely and synchronized with the displayed video, so that test participants reported a "close to reality" driving experience.
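The core of such a trace engine is a real-time playback loop. The following sketch illustrates the idea under the assumption that a trace is a list of (timestamp, event) pairs relative to the start of the recorded drive; the actual trace file format of our system is not reproduced here, and `emit` stands in for whichever routine raises the visual, auditive or haptic notification:

```python
import time

def replay_trace(trace, emit, clock=time.monotonic):
    """Replay pre-recorded events in real time, keeping them
    synchronized with a video that starts at the same instant.
    trace: iterable of (timestamp_s, event) pairs, timestamps
    relative to the start of the recording (assumed format).
    emit:  callback that delivers the notification to the driver."""
    start = clock()
    for timestamp, event in sorted(trace):
        # Sleep only for the remaining time until the event is due,
        # so delays in emit() do not accumulate across events.
        delay = timestamp - (clock() - start)
        if delay > 0:
            time.sleep(delay)
        emit(event)
```

Because the schedule is computed against the wall clock rather than by chaining fixed sleeps, the playback stays aligned with the video even if individual notifications take time to deliver.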

The results of our exercises on trace-driven simulation can be summarized as follows:

(i) Purpose of Haptic Stimuli: The simulation results confirmed our assumption that haptic feedback is eligible to support interaction based on vision or sound. Reaction times on navigational tasks such as turn or light signals delivered via vibro-tactile stimuli are rather similar to those of visually presented or spoken ones, and often better. As a major consequence, we can suggest the usage of vibro-tactile interfaces in vehicles, e.g. for relieving the driver from distraction (which contributes to 25% of crashes [8, p.28]), increasing driving comfort, or reducing cognitive load.

As vibro-tactile stimulation is age-dependent, this highlights the necessity of compensating for impaired proprioception and vibration perception in order to establish a universal haptic interface.

The maximum measured response time over all modalities was 1,985 ms (5% confidence interval); this value is in line with the recommendation of a maximum response time of 2 seconds (as proposed, for example, in [18, p.63], [17], or [20, Chap.11]).

(ii) Efficiency: As shown in this paper, the utilization of trace-driven simulation is viable in human-computer interaction (HCI) studies and experiments. Up to now, user experiments have often been designed spontaneously, processing models with random scripts, and are therefore impossible to repeat. The results from the trace-driven simulation experiment support our assumption that the class of trace-driven applications has great potential for simulating HCI problems.

For the purpose of comparing data and for a qualitative proof of the results presented here, we are currently working on an on-the-road experiment using the same data acquisition system.

Acknowledgements

We would like to acknowledge the valuable help of Martin Mascherbauer and Martin Weniger, both computer science students at the Johannes Kepler University in Linz, for their technical assistance in experiment design and implementation as well as for supporting us in the processing of the experiment.

References

[1] A. Wilson and N. Oliver, "Multimodal Sensing for Explicit and Implicit Interaction," in Proceedings of the 11th International Conference on Human-Computer Interaction (HCII'05), Las Vegas, Nevada. Mahwah, NJ: Lawrence Erlbaum, 2005.

[2] E. Erzin, Y. Yemez, A. M. Tekalp, A. Ercil, H. Erdogan, and H. Abut, "Multimodal person recognition for human-vehicle interaction," IEEE MultiMedia, vol. 13, no. 2, pp. 18–31, 2006.

[3] M. Osadchy and D. Keren, "Image detection under varying illumination and pose," in Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV'01), vol. 2, pp. 668–673, 2001.

[4] L. Torres, "Is there any hope for face recognition? (position paper)," in International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS 2004), Lisboa, Portugal, 21–23 April 2004.

[5] G. Mauter and S. Katzki, "The Application of Operational Haptics in Automotive Engineering," Team for Operational Haptics, Audi AG, Business Briefing: Global Automotive Manufacturing & Technology 2003, pp. 78–80, 2003.

[6] B. L. Hills, "Vision, mobility, and perception in driving," Perception, vol. 9, pp. 183–216, 1980.

[7] M. Dahm, Grundlagen der Mensch-Computer-Interaktion, 1st ed., ser. Pearson Studium. Pearson Education, December 2005, 368 pages, ISBN: 978-3-8273-7175-1.

[8] N.N., "Driver distraction trends and issues," IEE Computing & Control Engineering Journal, vol. 16, pp. 28–30, Feb.–March 2005.

[9] R. Vilimek, T. Hempel, and B. Otto, "Multimodal interfaces for in-vehicle applications," in HCI (3), 2007, pp. 216–224.

[10] M. McCallum, J. Campbell, J. Richman, J. Brown, and E. Wiese, "Speech recognition and in-vehicle telematics devices: Potential reductions in driver distraction," International Journal of Speech Technology, vol. 7, no. 1, pp. 25–33, January 2004.

[11] H. Erdogan, A. Ercil, H. K. Ekenel, S. Y. Bilgin, I. Eden, M. Kirisci, and H. Abut, "Multi-modal person recognition for vehicular applications," in Multiple Classifier Systems, 2005, pp. 366–375.

[12] P. Bengtsson, C. Grane, and J. Isaksson, "Haptic/graphic interface for in-vehicle comfort functions," in Proceedings of the 2nd IEEE International Workshop on Haptic, Audio and Visual Environments and their Applications (HAVE'2003). Piscataway: IEEE Instrumentation and Measurement Society, 2003, pp. 25–29, ISBN: 0-7803-8108-4.

[13] A. Amditis, A. Polychronopoulos, L. Andreone, and E. Bekiaris, "Communication and interaction strategies in automotive adaptive interfaces," Cogn. Technol. Work, vol. 8, no. 3, pp. 193–199, 2006.

[14] V. Harrar and L. R. Harris, "The effect of exposure to asynchronous audio, visual, and tactile stimulus combinations on the perception of simultaneity," Experimental Brain Research, vol. 186, no. 4, pp. 517–524, April 2008, ISSN: 0014-4819 (Print) 1432-1106 (Online).

[15] C. M. Jones, R. Gray, C. Spence, and H. Z. Tan, "Directing visual attention with spatially informative and spatially noninformative tactile cues," Experimental Brain Research, vol. 186, no. 4, pp. 659–669, April 2008, ISSN: 0014-4819 (Print) 1432-1106 (Online).

[16] C. Ho, H. Tan, and C. Spence, "Using spatial vibrotactile cues to direct visual attention in driving scenes," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 8, no. 6, pp. 397–412, November 2005.

[17] R. B. Miller, "Response time in man-computer conversational transactions," in Proceedings of the AFIPS Fall Joint Computer Conference, vol. 33, 1968, pp. 267–277.

[18] C. J. Testa and D. B. Dearie, "Human factors design criteria in man-computer interaction," in ACM '74: Proceedings of the 1974 Annual Conference. New York, NY, USA: ACM, 1974, pp. 61–65.

[19] A. Dix, "Closing the Loop: Modelling action, perception and information," in Advanced Visual Interfaces (AVI'96), T. Catarci, M. F. Costabile, S. Levialdi, and G. Santucci, Eds. ACM Press, 1996, pp. 20–28.

[20] B. Shneiderman and C. Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 4th ed. Pearson Education, Inc., Addison-Wesley Computing, 2005, ISBN: 0-321-19786-0.

[21] S. L. Teal and A. I. Rudnicky, "A performance model of system delay and user strategy selection," in CHI '92: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 1992, pp. 295–305.

[22] J. A. Hoxmeier and C. DiCesare, "System Response Time and User Satisfaction: An Experimental Study of Browser-based Applications," in Proceedings of the Fourth CollECTeR Conference on Electronic Commerce. Breckenridge, Colorado, USA: Collaborative Electronic Commerce Technology and Research (CollECTeR), April 2000.

[23] T. J. Triggs and W. G. Harris, "Reaction Time of Drivers to Road Stimuli," Human Factors Group, Monash University (Accident Research Centre), Department of Psychology, Monash University, Victoria 3800, Australia, Human Factors Report HFR-12, June 1982, ISBN: 0-86746-147-0.

[24] A. J. Brammer, J. E. Piercy, S. Nohara, H. Nakamura, and P. L. Auger, "Age-related changes in mechanoreceptor-specific vibrotactile thresholds for normal hands," The Journal of the Acoustical Society of America, vol. 93, no. 4, p. 2361, April 1993.

[25] K. W. Kallus, J. A. J. Schmitt, and D. Benton, "Attention, psychomotor functions and age," European Journal of Nutrition, vol. 44, no. 8, pp. 465–484, December 2005, ISSN: 1436-6207 (Print) 1436-6215 (Online).

[26] L. Breytspraak, "Center on Aging Studies, University of Missouri-Kansas City," Website, last retrieved July 09, 2008, http://missourifamilies.org/quick/agingqa/agingqa18.htm.

[27] S. W. Shaffer and A. L. Harrison, "Aging of the Somatosensory System: A Translational Perspective," Physical Therapy, vol. 87, no. 2, pp. 193–207, February 2007. [Online]. Available: http://www.ptjournal.org/cgi/content/abstract/87/2/193

[28] J. A. Smither, M. Mouloua, P. A. Hancock, J. Duley, R. Adams, and K. Latorella, Human Performance, Situation Awareness and Automation: Current Research and Trends. Erlbaum, Mahwah, NJ, 2004, ch. Aging and Driving Part I: Implications of Perceptual and Physical Changes, pp. 315–319.