Blind Spot Detection and Warning Systems (BDWS): Concepts of Warning


Kampweg 5

P.O. Box 23

3769 ZG Soesterberg

The Netherlands

www.tno.nl

T +31 346 35 62 11

F +31 346 35 39 77

[email protected]

TNO report

TNO-DV 2010 C419

Blind spot Detection and Warning Systems

(BDWS): concepts of warning

Date 30 November 2010

Authors dr. E.S. Wilschut

ing. V. Meijering

B. Merkus

dr. W. IJsselsteijn

dr. J. Ham

Commissioned by Connekt

Project number 032.32250

Number of pages 89 (incl. appendices)

Number of Appendices 2

All rights reserved by TNO and Connekt.

It is permitted to copy and publish this TNO-authorised report on condition that there is an explicit reference to the rights holders.

© 2010 TNO

TNO report | TNO-DV 2010 C419 3 / 66

Samenvatting

This report outlines what kind of blind spot detection and warning system (BDWS) offers the best way to warn the truck driver and/or the vulnerable road user after a dangerous situation has been detected.

Much is already known in the literature about how people process signals from systems, in which trust, acceptance and the role of Persuasive Technologies play a large part. The primary aim is to determine through theoretical exploration how the driver can best be informed and warned, so that he reacts adequately and accidents can be prevented.

The literature on internal and external motivation shows that the less intrusive the intervention used to achieve a given behavioural change, the better the long-term results. A BDWS should therefore initially intervene as little as possible, for instance as a system that suggests where cyclists may be. But as the situation becomes more urgent or dangerous, the system must act in an increasingly compelling and directive manner.

The trust that a driver has in a system depends largely on his mental model of the behaviour of that system. That trust is very important when warnings require a fast response. Trust is a subjective measure indicating the extent to which the driver believes that the BDWS can detect the blind spot situation reliably, safely, adequately and effectively. To build up this trust, both the ratio between correct detections and false alarms/misses and the predictability of those false alarms/misses are important. Given that detection methods still have considerable limitations at the current state of the art, the warning design must take false alarms and misses into account. When people use interactive technologies, they often respond in the same way as when dealing with other people. This shows, for example, in getting angry at a computer that does not do what is expected, but also in positive reactions when social cues are given. This makes it possible to let a BDWS act as a social actor ("the digital co-driver") and to use targeted social feedback in the warning design, which can strengthen trust in the system.

Based on the theory, five different concepts were drawn up for how the warnings of a BDWS could be designed: levelled warnings, dynamic warnings, adjustable warnings, lateral warnings and warning the vulnerable road user. These concepts can be used independently of one another, but can also be combined. Actively warning the vulnerable road user has many disadvantages and should only be used as a last resort. Despite the advantages and disadvantages of each concept, it is clear that substantial improvements can be achieved by designing the warning scheme with the help of these concepts.

In follow-up research, one or more of these BDWS warning concepts could be developed as a prototype. These warning concepts can be evaluated in a driving simulator under controlled conditions in which the quality of the detection has no influence. At the same time, a driving trial with a combination of sensors can examine how far the quality of detection can still be improved.

A subsequent step is to evaluate the warning concepts that have proved successful on public roads, driving with modified trucks. In these trials the level of trust in and acceptance of this type of BDWS and the driver's workload are evaluated.


Summary

This report describes what kind of blind spot detection and warning system (BDWS)

provides the best way to warn the truck driver and/or the vulnerable road user after

detection of a dangerous situation.

A lot is already known from literature about human information processing of system

signals, in which trust, acceptance and persuasive technologies play an important role.

The main aim is to determine through theoretical research in which way the driver can

be optimally informed and warned, leading to the most adequate reaction and thereby

preventing possible accidents.

From literature about internal and external motivation it is known that when a system

intervenes less to effect a behavioural change, better long-term results are achieved.

Therefore a BDWS should intervene as little as possible in the early stages, e.g. only suggesting where cyclists may be in the blind spot. But when the situation becomes

more urgent and dangerous, the BDWS should warn more strongly and act more

decisively.

Trust in the BDWS depends on the mental model that the driver has formed of the system during a learning phase. Trust is very important when a fast reaction to a warning is required. Trust is a subjective measure that indicates the driver's opinion about the effectiveness, safety, adequacy and reliability of the system's blind spot detection performance. To build up trust, the right ratio between hits and false alarms/misses is important, as is the predictability of the false alarms/misses. Given that the current state of technical performance is still limited, false alarms and misses need to be taken into account. When humans use interactive technology they sometimes react towards it in the same way that they would when interacting with other humans (e.g. getting angry at the computer). This could be used for a BDWS by making it act like a social actor that gives social feedback (the digital co-driver) and thus strengthen trust in the BDWS.

Based on the theory, five different BDWS concepts were described: levelled warnings,

dynamic warnings, adjustable warnings, lateral warnings, and warning the vulnerable

road user. These concepts can be used separately or in combination. Actively warning the vulnerable road user has a number of disadvantages and can therefore only be used as a last resort. Whatever the advantages and disadvantages of each concept, it is clear that improvements can be made by designing the driver warning in blind spot situations with the help of these concepts.

In further research one or more of these BDWS concepts can be developed as a prototype. The concepts can be evaluated in a driving simulator under controlled circumstances where they do not depend on the quality of the actual detection sensors. At the same time the quality of the detection sensors can be improved and tested on a test track or in a field operational test. A next step is to evaluate the successful warning concepts on public roads in a BDWS field operational test with modified trucks. Evaluation in the field operational test can then focus on the level of trust and acceptance and on driver workload.



Contents

Samenvatting .................................................................................................................. 3

Summary ........................................................................................................................ 5

1 Introduction .................................................................................................................... 9

2 What makes BDWS unique? ...................................................................................... 11

3 Persuasive technology .................................................. 12
3.1 Persuasive Technology .................................................. 12
3.2 The three roles of Persuasive Technology .................................................. 13

4 Technology Acceptance .................................................. 16

5 Trust, reliability, and ‘trust calibration’ .................................................. 19
5.1 Trust Calibration .................................................. 20
5.2 The next step for trust and BDWS? .................................................. 21

6 Signal Detection Theory .................................................. 22
6.1 How can you experience a false alarm? .................................................. 23
6.2 What is needed to successfully implement this theory in a BDWS? .................................................. 24

7 Automation and Control: Inform / warn / intervene .................................................. 25
7.1 The relationship between automation and the other aspects in a BDWS .................................................. 27
7.2 Adaptive Automation .................................................. 28
7.3 Current implications for the control dimension .................................................. 29
7.4 The key points of this chapter .................................................. 29

8 Cognitive dimension .................................................. 30
8.1 Task and environment: OWI .................................................. 30
8.2 The environment .................................................. 30
8.3 The (normative) task .................................................. 31
8.4 Vehicle characteristics .................................................. 31
8.5 Observation and attention .................................................. 32
8.6 Characteristics of the road user .................................................. 33
8.7 Workload .................................................. 33
8.8 World view and interpretation .................................................. 34
8.9 Scenario evaluation, decision and action .................................................. 35
8.10 Interruption Reaction Comprehension model .................................................. 35
8.11 Interruption .................................................. 35
8.12 Reaction .................................................. 36
8.13 Comprehension .................................................. 36
8.14 Categorisation .................................................. 36
8.15 The key points in this chapter .................................................. 38

9 The timing of warnings and reaction times .................................................. 39
9.1 Timing of warnings .................................................. 39
9.2 Reaction times .................................................. 40

10 Modality dimension / Human Machine Interface .................................................. 41
10.1 Setting priorities upon several signals, what first? .................................................. 41
10.2 The key points in this chapter .................................................. 43

11 Type of feedback .................................................. 44
11.1 Social feedback .................................................. 44
11.2 Ambient/peripheral feedback .................................................. 44
11.3 Subliminal feedback .................................................. 45
11.4 The key points of this chapter .................................................. 45

12 A human view of a BDWS .................................................. 47
12.1 Keeping a lookout .................................................. 47
12.2 More awareness of the (traffic) situation .................................................. 47
12.3 Sensitivity .................................................. 47
12.4 Making the driver aware of remaining cool .................................................. 47
12.5 Indicate possible dangers .................................................. 48
12.6 Shout, gesture, nudge .................................................. 48
12.7 A tug on the steering wheel .................................................. 48
12.8 Key points in this chapter .................................................. 48

13 The human-machine interface of existing blind spot detection systems .................................................. 49

14 Discussion and conclusion .................................................. 51
14.1 Conclusions from theory .................................................. 51
14.2 BDWS concepts .................................................. 54
14.3 Ideas from the workshop .................................................. 55
14.4 Criteria for BDWS .................................................. 56
14.5 An ideal system versus a practical first version .................................................. 57
14.6 BDWS field test .................................................. 58

15 References .................................................. 60

A HMI checklist .................................................. 67

B Theory-based issues .................................................. 69

16 Signature .................................................. 72

Appendices

A HMI checklist
B Theory-based issues


1 Introduction

The report “Modelling for blind spot detection and warning systems (BDWS)” contains

qualitative criteria to which a blind spot detection and warning system (BDWS) should

comply (Hoedemaeker et al., 2010). A more extensive BDWS should not only inform but also be able to warn or intervene. The criteria are:

1 The system must properly detect the vulnerable road user in the blind spot.

2 The system may not increase the workload of the driver.

3 The information about vulnerable road users in the blind spot must be clearly

observable.

4 A BDWS must be able to clearly distinguish between critical and non-critical

situations.

5 A BDWS that warns must warn such that the driver has time to react.

6 A BDWS that intervenes does so only in very time-critical situations and then

brings the truck to a stop.

7 In a BDWS the manner of warning the driver must be essentially different to the

manner of informing the driver.

When analysing the factors that contributed to the 37 blind spot accidents studied, the

vehicle characteristics or the task load were the most cited. A system that informs the

truck driver that cyclists are adjacent to the vehicle could have prevented many of the

accidents studied. Further analyses of critical situations reveal that a limited number of

possibilities exists in which a critical situation for a blind spot accident occurs. This is

key to the design of a system that has to recognise the critical situation. A system that,

for instance, only gives a warning when the wheels are turned in will not be able to prevent all blind spot accidents, since critical situations also exist when the wheels are not yet turned in. More quantitative requirements for the quality of detection and warning within which the systems would have to work are contained in a second study that includes detailed simulations (Hogema, 2010).

In contrast to the previous two reports, this report looks at the design of the BDWS from the perspective of the characteristics and needs of the driver. It outlines the best way for a BDWS to warn, given what is known about human information processing, trust, acceptance and persuasive technologies. The aim is to determine via theoretical preliminary studies how to inform and warn the driver optimally such that he

BDWS on the driving task is not part of this project since this requires no action from

the driver (criterion 6).

This report is structured as follows. Chapter 2 considers briefly what makes a BDWS

unique compared to other warning systems. In Chapter 3 the role of persuasive

technologies is discussed along with its importance for BDWS. Since information

systems like a BDWS are not by definition accepted when fitted in a truck or car, Chapter 4 discusses technology acceptance models in the light of BDWS.

An important factor in effective influence by and acceptance of automated systems is

the trust that the user has in the system, an issue discussed in Chapter 5. Another factor

is the extent of automation incorporated in a BDWS and thus the extent to which a user

must submit control. This is considered in Chapter 7. Current detection systems are not

able to detect risk situations error-free. The implications of this for the user of a BDWS are discussed in Chapter 6 under signal detection theory. In Chapter 8 the various

cognitive factors of the truck driver are considered and how effective warning can

optimally deal with these. One key aspect here is what the driver has learned, what

situation he finds himself in, whether the driver is attentive to the situation or is

distracted, and what the driver observes. In the design of a BDWS every millisecond is

crucial in avoiding an accident. Chapter 9 looks more closely at this by considering

existing knowledge about the reaction times of the driver and the corresponding timing

of warning signals, which may be presented visually, auditorily, haptically or

combined. The implications of the modality of the warning signal are discussed in

Chapter 10. Other forms of warning signals like social, ambient/peripheral and

subliminal feedback are considered in Chapter 11 while Chapter 12 relates the theory to

different concepts inspired by how a human co-driver would warn the driver. Chapter

13 looks at a number of existing BDWS options. Finally, Chapter 14 draws the main

conclusions from the theory presented, suggests possibilities for subsequent study and presents five BDWS concepts based on the theories considered.


2 What makes BDWS unique?

The following definition of the blind spot is used in this project: “The blind spot is the

area in front of and next to the truck where vulnerable road users are in danger of being

struck by a truck”. The dangerous blind spot situation is not static but develops in time

and space.

The development of the blind spot situation in time is expressed by the fact that the

situation can remain harmless for a long time (a cyclist and truck both go straight at the

crossroads), but can suddenly become critical (truck turns right) and then the time path

to a possible incident is short. The detection of a potentially critical situation is complex

since the blind spot zone changes during a manoeuvre. It is essential that the dangerous

situation is detected properly as early as possible. If the situation is correctly estimated

by the system, the driver must be informed or warned, whereupon he forms a picture of

the situation and decides on the most adequate way of reacting and acting. So

long as there is still time in the traffic situation for the driver to gain a more accurate

world view through the information and so take appropriate action (brake), informing

the driver is probably sufficient but if the situation quickly becomes critical, then the

system has to warn. In this respect it is important that the warning is credible (attention

dips if there are too many warnings) and that the warning can be quickly and easily

interpreted (intuitive). Because the time between informing or warning and the action

must be as short as possible for a successful BDWS, it is important that the driver is

reliably warned, given the very short reaction time of the driver. The driver has to be

able to trust the system blindly and that is only possible if the reliability of the

BDWS is high. Reliability means that:

- The warning can be properly understood by the driver.
- The warning indicates a real chance of possible danger. For instance, if a warning is given for a stationary cyclist, warnings are all too quickly regarded as false alarms and, therefore, unreliable.
- The warning comes in time; in other words, it takes account of the reaction time the driver needs to make a decision, take action (brake) and still avert an accident.
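The inform/warn escalation described above can be sketched as a simple decision rule based on the time left before a possible conflict and the time the driver needs to react and brake. All names, thresholds and the safety margin below are illustrative assumptions, not values from this report:

```python
# Illustrative sketch: choosing between informing and warning based on the
# time budget left for the driver. All thresholds are invented for illustration.

def warning_level(time_to_conflict_s: float,
                  driver_reaction_time_s: float = 1.5,
                  braking_time_s: float = 1.0) -> str:
    """Return 'inform', 'warn' or 'too_late' for a detected blind spot risk."""
    # Time the driver needs to decide, act (brake) and still avert an accident.
    time_needed = driver_reaction_time_s + braking_time_s
    if time_to_conflict_s > 2 * time_needed:
        return "inform"    # ample time: informing is probably sufficient
    if time_to_conflict_s >= time_needed:
        return "warn"      # critical: the system has to warn
    return "too_late"      # a warning can no longer arrive in time

print(warning_level(6.0))  # inform
print(warning_level(3.0))  # warn
print(warning_level(1.0))  # too_late
```

In a real system the time estimates would come from the detection sensors and the manoeuvre state (e.g. whether the truck is turning right), and the thresholds would have to be calibrated against measured driver reaction times.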

Reliability of the BDWS is a key aspect in the concepts for designing the warning to the

driver and the human-machine interface (HMI). This is elaborated in the following

chapters. The development of these blind spot detection systems is clearly aimed at the safety of other vulnerable road users (e.g., airbags on the outside of the car for cyclists). A BDWS is not, therefore, primarily geared to the safety or comfort of the driver, as ABS, ESP and ACC are.

In the event of the blind spot the vulnerable road users themselves can also be warned,

for instance by alerts outside the truck to draw the cyclist's attention to a potentially

dangerous situation. This concept is, however, not discussed in the following chapters

that exclusively pertain to the information processing and warning of the driver but is

incorporated as a concept in the discussion and conclusion of this report.


3 Persuasive technology

In this chapter the general characteristics and underlying factors of persuasive

technology are discussed. The main aim of this chapter is to introduce and present a

structure of underlying factors.

3.1 Persuasive Technology

Persuasive Technologies (PT) are general technologies “explicitly designed to change

someone's attitude or behaviour” (IJsselsteijn et al., 2006; Fogg, 2003). Examples

include systems that give car drivers information about their fuel consumption, anti-RSI software on computers that prompts users to take more breaks, or robots that take the role of

a social actor and compliment or criticise users on their behaviour. Persuasive

technologies are especially important in areas where an urgent change of attitudes or

behaviour of users is desired. This necessity tends to derive from a conflict between the

behaviour and wishes of individuals and the social values we consider important. A

clear example is traffic safety. Nearly everyone knows that responsible traffic behaviour

is crucial for a safe society and yet there are individuals that drive under the influence,

excessively fast or without wearing a seat belt. Despite the widespread consensus about

social values, they are not prevalent enough in people's daily decision-making.

People are the most persuasive tools – they have an unmistakeable social presence and

impact, and are able to judge and use the right moment, mood and context to persuade

another. They also have an intuitive sense of applying the persuasion mechanisms as

described in social psychology. The human power of persuasion and the role this can play in

BDWS are also discussed in chapter 12. However, Fogg (2004) makes it clear that

computers also have certain benefits compared to people when it comes to persuasion.

Computers have greater perseverance, enable anonymity, can use an almost unlimited

stream of data and can use any modality to create a generally persuasive environment.

Furthermore, successful persuasive software can be easily copied and distributed, and

since computers are becoming more embedded in everyday objects, persuasive

technologies can gain access to places where people are neither welcome nor able to gain

physical access.

Increasingly, self-regulating systems are taking over human tasks. Implementing automated systems appears prompted by a wish to reduce human effort and errors and to boost safety. However, an automated system can fail or be unsuitable for complex tasks, causing people to reject its use. What is essential is the right balance of task and

allocation of control. So an automatic braking system will only be effective if the user

trusts the system. A user must be willing to submit control if the technology is to be

effective. A key issue in persuasive technologies and, in this case, the BDWS is how to

optimise the distribution of control given the false alarm ratio1 of the detection system

and the will of the user to submit control to the system.

1 A false alarm can in principle be both a “false positive” (an alarm without cause) and a “false negative” (no alarm while there is cause, i.e. a miss). In this report the false alarm ratio refers to the “false positives”: the number of false alarms divided by the total number of alarms (all observed alarms, including false alarms). The higher this ratio, the larger the share of false alarms in the total number of alarms.
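As an illustration of the footnote's definition, here is a minimal sketch that computes hits, false alarms, misses and the false alarm ratio from a set of hypothetical detection outcomes; the counts are invented, not data from this report:

```python
# Sketch: computing the false alarm ratio as defined in the footnote,
# using hypothetical detection outcomes (not data from this report).

def alarm_statistics(events):
    """events: list of (alarm_given, danger_present) boolean pairs."""
    hits = sum(1 for alarm, danger in events if alarm and danger)
    false_alarms = sum(1 for alarm, danger in events if alarm and not danger)
    misses = sum(1 for alarm, danger in events if not alarm and danger)
    total_alarms = hits + false_alarms
    # Ratio of false alarms to all observed alarms (footnote definition).
    false_alarm_ratio = false_alarms / total_alarms if total_alarms else 0.0
    return hits, false_alarms, misses, false_alarm_ratio

# Hypothetical example: 8 correct alarms, 2 false alarms, 1 miss.
events = [(True, True)] * 8 + [(True, False)] * 2 + [(False, True)]
print(alarm_statistics(events))  # (8, 2, 1, 0.2)
```

Note that misses do not enter this particular ratio; a system could show a low false alarm ratio and still miss many dangerous situations, which is why the report treats both quantities as relevant to trust.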


In general, a BDWS can be regarded as persuasive technology. This puts the BDWS in a new light and allows knowledge from the literature about persuasive technology, and about how people influence each other, to be used in developing a BDWS. This makes the theories

of Fogg (2004) about persuasive technology highly relevant. Fogg (2004) describes

three roles that technology can assume to influence people: as tool, as medium and as

social actor. Insights from social psychology are also relevant in influencing people.

The following chapters refer to this kind of knowledge and its relevance to BDWS.

First, however, a closer look at the three roles that persuasive technology can assume.

3.2 The three roles of Persuasive Technology

Fogg shows that interactive technologies can influence people in three ways: as tool, as

medium and as social actor (Figure 1).

Figure 1 Computers persuade in various ways, depending on their functional role (Fogg 2004, p. 25).

3.2.1 Tool

The first basic function of computers is to act as an aid or tool to facilitate certain

activities, and to ease this or make it more efficient a PT can, for instance, support

certain competences, take over tasks, present the relevant knowledge and make

information clear. Examples include websites that support decision-making

(stemwijzer.nl) or encourage a purchase by making suggestions (bol.com). Through calculations and measurements, a PT can motivate and help achieve certain targets, and give feedback on progress in a particular process (like fitness equipment) or on consumption (like energy use).

As a tool computers can influence and motivate people in various ways. Fogg

distinguishes seven different ways.

“Reduction”. The reduction of complex actions to simple tasks thereby

improving the costs/benefits ratio and encouraging certain behaviours. An

example is the one-click-shopping used in specific webstores.

“Tunnelling”. The supervision of users through a particular process or

experience to gradually persuade someone. People allow themselves to be

supervised in making choices by selectively seeking information. PT can steer

TNO report | TNO-DV 2010 C419 14 / 66

people towards a particular goal through presenting information. Use can also

be made of the foot-in-the-door method as derived from door-to-door salesmen

whereby the persuasion lies in getting people to make a small commitment to a

product/behaviour (like agreeing to a product demo), which significantly boosts

the chance of more commitment (like the purchase of the product) when this is

offered.

“Tailoring”. Information acquired by computers is more persuasive when

tailored to individual desires, interests and context of use, etc. An example

from Google is advertisements that correspond with the search topic in the

Google search engine.

“Suggestion”. By making suggestions at the right moment, the persuasive

power can be reinforced, making the presented information more relevant and

usable. The right timing makes people more susceptible to influence. Like

offering anti-virus software when a virus is detected.

- “Self-Monitoring”. Enabling people to monitor themselves helps achieve goals. This can be done by giving people insight into their progress towards a certain goal (through feedback) or by helping people to set goals (“goal setting”). An example of this is the Watcher as shown in Figure 2, a living-room product that indicates the consumption of electricity and compares the consumption with a target value.
- “Surveillance”. The monitoring of someone by others can influence the behaviour of the monitored person. Social psychology studies reveal that whenever others can observe someone's behaviour, with the possibility of sanctioning or rewarding that person, the person quickly acts according to the expectations of the observer (Turner, 1991). Of all mechanisms the monitoring of others is the most used on the work floor, such as the monitoring of call centre employees and the customer friendliness of their customer calls.

- “Conditioning”. Conditioning enables complex behaviour to be shaped or changed. Rewarding the required behaviour ensures that this behaviour becomes prevalent, and punishment reduces the frequency of unwanted behaviour. This is also referred to as operant conditioning. The reward may be tangible in the form of money but may also be the arousal of positive emotion, like the little games used in banners to attract people to a specific webpage.

Figure 2. The Watcher


Studies on internal and external motivation reveal that the less one intervenes to achieve a specific behavioural change, the better the results in the long term (Lepper & Greene, 1978). For a BDWS too, it makes sense to take this approach. A BDWS should initially intervene as little as possible, for instance as a system that merely suggests where there may be cyclists. But whenever the situation becomes more urgent or dangerous, the system will have to act more imperatively and decisively.

3.2.2 Medium

Computers can also act as media. The consequences of certain behaviour can thus be made visible where the cause-effect relationship is normally poorly perceptible. An example

is a study in which a dike breach is simulated to make clear what the consequences are

of an exacerbated greenhouse effect (Zaalberg et al., 2009). Simulators can also be used

to counteract phobias or practise desired behaviour. The use of these simulated “direct”

experiences can therefore alter attitudes and behaviour. A BDWS does not so much act

as a medium but driving simulators can be used to study the effect of BDWS on truck

drivers and thus correct use of the BDWS and the right reaction to a BDWS warning

signal can be trained.

3.2.3 Social actor

The third perspective of the functional triangle is the role of computer as a social actor.

Whenever people use interactive technologies, they often respond in the same way as when they interact with living beings. This is painfully evident, for instance, when users become angry with their computer when it fails to perform as expected.

Computers can reinforce their presence as a social actor by using one of five different

social cues (Fogg, 2004).

- The first cue is imitating the physical characteristics of a person, such as the

use of faces, eyes, a body or human movement. Physical attractiveness also

influences the persuasive power of a PT.

- Psychological cues are generated by ascribing a personality and

implementing/simulating feelings or empathy in a PT.

- A key human characteristic is the use of language. The application of

interactive language use, spoken language and language recognition can

reinforce the social presence and therefore the persuasive power of a PT.

- Socially dynamic cues imitate human behaviour, like taking turns, cooperating,

complimenting, answering questions and reciprocity.

- The final social cue that a PT can give is to take on a specific social role, such

as that of a doctor to exude authority. Other possibilities include the role of

team player, opponent, teacher, pet or guide.

Possibilities for a BDWS to act as social actor are discussed in chapter 11 under social

feedback.


4 Technology Acceptance

Various models exist that take account of user acceptance of technology. In the light of

the technology acceptance of BDWS, one model (TAM; Davis, 1986) is dealt with in greater detail, with other models and their implications presented subsequently.

The technology acceptance model (TAM) introduced by Davis (1986) is an adaptation

of the Theory of Reasoned Action (TRA) (Ajzen and Fishbein 1980) geared to the

modelling of user acceptance of information systems. According to the model two

constructs are key in the acceptance of computer systems: perceived usefulness (PU) and perceived ease of use (PEOU; Figure 3). PU is defined by Davis as the extent to which someone believes that using the system improves his or her performance, as measured by: “using the system could boost my performance at work”, “using the system at work could boost my productivity”, “using the system could improve effectiveness in my work”, and “I consider the system useful in my work”.

PEOU is defined as the extent to which someone believes the system is easy to use and

this is also measured by four aspects: “learning to use the system would be easy for

me”, “I would find it easy to enable the system to do what I want it to do”, “I would

find it easy to be in complete control of the system”, and “I would find it easy to use the

system”.

Figure 3. Technology acceptance model (Davis, Bagozzi and Warshaw, 1989)

A further inventory of the technology acceptance literature (e.g. Morris & Turner, 2001;

Nielsen, 1993) generates similar underlying constructs: usefulness, effectiveness,

usability, affordability and social acceptability. In brief, to be useful the user must

experience that the system serves a specific goal. To be effective the user must

experience that the system does what it is designed to do. To be usable the user must

experience the system as user-friendly. Affordability relates to a user being able to

afford and maintain the system, and social acceptability concerns broader social issues

relating to whether a user considers the system acceptable or not. According to Nielsen

(1993) these constructs are related as shown in Figure 4.


Figure 4. Factors that influence the acceptance of a system. (from Nielsen, 1993)

However, the acceptance is not only determined by the interface and efficacy of the system. Studies into the underlying factors of PEOU reveal that general computer self-efficacy is a key factor (Venkatesh & Davis, 1996). Self-efficacy is someone's trust in his or her ability to be successful in a particular situation. In the case of a BDWS an underlying problem may be the low computer self-efficacy of future users. According to Venkatesh and Davis (1996), training the target group may in such a case be more effective in boosting acceptance than a better interface design.

Little research has been done into the acceptance of warning systems in vehicles that

use well proven acceptance models such as that of Davis (1989) or Nielsen (1993). In

many cases it is not acceptance but irritation that is discussed, which is only a minor

aspect of the technological acceptance whole. A number of institutions have investigated

the acceptance of support systems by truck drivers by carrying out surveys (Connekt,

BuckConsultants & TNO, 2009; Korse et al., 2003; TransportCanada, 2009). These

surveys mainly point to an excessive ratio of false alarms that provoke irritation and

prompt a number of drivers to switch off the system (Korse et al., 2003). The study by

Connekt and others reveals that the usefulness of the system (the degree of support the driver experiences) and its agreeability (including the false alarm ratio) are related to each other: if a system is felt to be very useful, a greater degree of disagreeableness is accepted (Connekt, BuckConsultants & TNO, 2009; Katteler, 2003).

Other research points to the importance of weighing up warning and irritation in using

auditive warning signals. Repetitive auditive signals combined with regular false alarms

lead to distraction and are considered irritating. This may even lead to the warnings

being ignored (Campbell et al., 2002; Kiefer et al., 1999; ISO, 2005). Campbell et al.

(2002) and Dinges et al. (2005) also conclude that certain quantifiable sound parameters, like speed, volume, the number of repetitions and the frequency, can have an effect on the irritation.

The research described above reveals that the quantity of false alarms and mishits that a

system generates and the modality of the warning signals are related to the irritation and

usefulness experienced in a system, and thus the acceptance of the system. The

importance of false alarms is elaborated in the chapter on the theory of signal detection

(Chapter 6), the modality of the system is looked at in more depth in the modality dimension chapter (Chapter 10), and the reliability of the system is explored in the chapter on trust (Chapter 5).

The key points of this chapter


- Train the target group in the use of BDWS to boost self-efficacy.
- A high false alarm ratio leads to irritation.
- If a BDWS is experienced as very useful, a greater degree of irritation is accepted.
- Repeated auditive warnings combined with regular false alarms lead to distraction and are experienced as irritating.
- Speed, volume, the number of repetitions and the frequency of auditive warnings have an effect on the irritation.


5 Trust, reliability, and 'trust calibration'

Trust in technology plays an important role in the design and correct use of a BDWS system: if the driver does not have enough trust in the system, the information and warnings generated by the system will be ignored (Muir & Moray, 1996). A lack of trust in the BDWS system leads to the system being ignored and potentially critical situations arising. On the other hand, too much trust in the system could lead to the driver overestimating the system's performance. Trust is crucial, therefore, to the proper use of a BDWS system by the driver.

From earlier studies (including de Vries, van den Berg & Midden, in press) we know that the degree of trust in a system is a key factor in the handing over of control. Trust in a system is not always properly calibrated (Lee & Moray, 1992). People can overestimate their own capacities, which leads to sub-optimal choices, tiredness and human error. In contrast, the users' level of trust may be unfounded, which leads to lower sensitivity to errors by the system and reduced alertness. In certain conditions

trust in systems works in the same way as trust in human actors. As with people, smart

complex systems can represent social values shared or otherwise with the user. As with

trust in people we expect shared values to influence the trust in a BDWS (Earle &

Cvetkovich, 1996; Cvetkovich et al., 1999; 2002; 2007). For example, a BDWS that is mandated by the government will create little trust, since it is regarded as a compulsory measure that hinders the real goal of the driver: to transport the cargo quickly from A to B. For trust to be optimal, a BDWS must come to market because the drivers themselves

or an umbrella organisation are behind the measure. Trust differs from confidence in the

sense that confidence arises from previous experience with and performance by a

system while trust relates to a sense of mutual understanding. Figure 5 shows the

mutual relationships between these concepts. Trust is elaborated in the Trust dimension

chapter and the allocation of control in the Control dimension chapter.

Figure 5 Underlying factors in persuasive technology, a model of trust, confidence and allocation of control.

An important element in the trust dimension is not only how much trust the user has in

the system before a warning is given but how much trust there is after the system has

erroneously interpreted a situation and given an erroneous warning. Since in Blind spot Detection Systems the frequency with which the system actually has to operate is minimal, in designing such a system one has to ensure that the system operates with sufficient transparency that errors (especially false alarms) are recognised as such by the driver and leave trust in the system intact.


Trust is a subjective yardstick that indicates the extent to which the driver thinks that

the BDWS can detect the blind spot situation reliably, safely, adequately and

effectively. The trust that a driver has in a system depends largely on his mental model

of this system, one that he builds predominantly during the first physical interactions

with the BDWS – i.e., in the learning phase. The learning phase may comprise different

components like verbal information about the system, reading the manual and actually

driving with the system on. It is during this phase that the driver forms a picture of the

system: the reliability, usability and efficiency. These judgements may fluctuate early

on but will ultimately converge into a stable mental model of the BDWS (Boers &

Hoedemaeker, 1998). Ideally this learning process should occur in a very short period

of time and result in a proper mental model of the efficacy and limitations of the

BDWS.

Even though trust is a central theme in the literature of human-machine interaction,

much of the research takes place in a laboratory in a simulated environment that is,

quite simply, not always able to emulate reality.

5.1 Trust Calibration

To make optimum use of the added value a technological system like the BDWS

provides, there must be a calibrated relationship between the trust the user has in the

system and the actual reliability of the system. This relationship is known as „trust

calibration‟. A mismatch between trust and reliability can lead to errors, inefficient use

of the technology, or even the complete rejection of the technology.
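By way of illustration, this calibration notion can be sketched in a few lines of code. The numeric scales and tolerance below are invented for illustration only; in practice trust is a subjective judgement and reliability an empirical measurement, and the two are not trivially placed on one scale.

```python
# Toy trust-calibration check: compare self-reported trust with the
# measured reliability of the system. Both scales and the tolerance
# are invented assumptions, not values from the literature.

def calibration(trust: float, reliability: float, tolerance: float = 0.1) -> str:
    """Both inputs on a 0..1 scale; returns a calibration label."""
    gap = trust - reliability
    if gap > tolerance:
        return "over-reliance"    # trust exceeds what the system earns
    if gap < -tolerance:
        return "under-reliance"   # distrust wastes the system's value
    return "calibrated"

print(calibration(0.95, 0.70))  # over-reliance
```

Such a toy check only illustrates the terminology; real trust calibration is a dynamic psychological process.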

If the user leans too heavily on the system's capability, i.e., 'over-reliance' (Lee & Moray, 1992), this may cause the user to be less able to recognise errors in the system and increasingly agree with the system, even in the event of an erroneous conclusion.

On the other hand, if a user makes too little use of a system due to excessive mistrust,

the chance is considerable that the value of the system is not fully used (inefficiency), or

that the system falls into complete disuse and thus all potential benefits are negated.

Research into the support of Trust Calibration in automated decision tools through dynamic 'confidence information' indicates that it is possible to boost Trust Calibration (McGuirl & Sarter) by having the system continuously communicate how much confidence it has in its own advice. In brief, the system expresses its own trust in the form of trust in the diagnosis. Results suggest that safety is improved, because not only was the Trust Calibration improved but also the task allocation between user and system.

It is important to note that trust is a social emotion, so an interface that masquerades as a person or as someone the user can trust leads the user to accept the interface as a 'friend', including the influence this implies for the work and situation of the user (see

also Reeves & Nass, 1996). Research suggests that in a closed environment, like a car

or truck, the absence of social connections actually creates a connection between the

user and the non-human controller of the system (Epley, Akalis, Waytz, & Cacioppo,

2008).

Trust plays a crucial role in deciding to let an automated system perform user tasks. An experiment that bears similarities to driving with a route planner, by Midden, de

Vries and Bouwhuis (2008) suggests that trust is reduced if the system scores a high

error ratio in the automated setting. A high error ratio whereby the user performed the


task himself led to less self-confidence in comparison with a low error ratio. The

conclusion was that the user is biased towards his own competence, estimating it higher than that of the system. This makes things difficult for a Blind spot Detection System, because the frequency of a possible danger is very low yet the system is continuously available to the user.

A study more geared to detection systems in vehicles is that of Bliss and Acton (Bliss &

Acton, 2003). It emerges that driving performance can be influenced by the reliability of detection systems in cars. With more reliable detection, fewer 'swerve manoeuvres' were made by the driver. It was emphasised that the realisation of a detection system requires maximisation of the alarm reliability and minimisation of the intrusiveness of the alarm.

5.2 The next step for trust and BDWS?

There is variation in the amount of trust that drivers have in the system, not only among various users but also across various situations and user experiences of the system. If one fully trusts the system this means that the user is

tolerant towards the system but this may carry the danger that the user has too much

trust in the system and thus fails to react if the system makes an error. A key aspect in

this situation is to not only look at how trust manifests itself during use but also prepare

the user for a system he has to trust. Training may give the user insight into the system,

its efficacy, and in particular what it is capable of and what one must be aware of during use.

An important addition for subsequent research in, for example, a laboratory study is to

look at how a user can regain trust in the system after the system has given the user a

false alarm, since a driver continues to operate with the system during his working period and so has no possibility of avoiding working with such a system.

The key points in this chapter

- Over-reliance is the consequence of a user trusting excessively in the efficacy of the system and thus accepting system errors.
- Better transparency in the operation of a system can help the user get to know the system and thus gain more trust in how it works.

- If a user accepts the system, trust in the system is also greater.

- Not only use of a BDWS during work influences trust, but also familiarity with the system and learning to work with it in realistic situations.

- How do you regain trust in the system if the system has made an error and trust is

damaged?


6 Signal Detection Theory

Another aspect in the development phase of a concept for a BDWS is Signal Detection Theory (SDT), which describes a method to distinguish a signal from noise. A central issue

is the extent to which a person is able to detect a stimulus in the context of potentially

distracting factors (distracters) and false alarms. Is a danger correctly detected,

incorrectly detected, detected but not regarded as a danger or even missed? In the case

of a BDWS it is therefore essential to study when a user of a BDWS detects a danger. If the user fails to detect it, the system must be modified such that the user does correctly detect it. In terms of SDT this means adjusting the decision criterion so that the number of missed signals falls (desirable) but the number of false alarms rises (undesirable). Finding an optimum balance is one of the main challenges in the design

of a good BDWS system.

Signal Detection Theory (SDT) is an extensive and much used theory (Wickens, 2002).

Each application of SDT has to weigh up safety (few false alarms and mishits) and

precision (as many hits as possible). This consideration must be repeated in each

application area, for example radar systems or hospital analysis equipment (Wickens &

Parasuraman, 2008).

In the field of BDWS systems it is vital to have as much detection certainty as possible

(the fewest possible wrongly detected objects, false alarms and mishits) without risk to

safety (which makes for more false alarms). A collision is a consequence that has to be

avoided, so in current detection systems greater certainty is preferred through the

reduction of the number of mishits, and thus also a higher number of false alarms. The

downside of this is less trust in the efficacy of the system.

The TNO report (Hoedemaeker, Doumen, Goede, Hogema, Brouwer, & Wennemers,

2010) refers to false alarms as follows: 'A false alarm is a signal from the system without cause (there is no vulnerable road user in the blind spot). The false alarm ratio is normally expressed as the number of times that the signal is given incorrectly divided by the number of times the signal is given correctly (a hit). False alarms determine the specificity of the system: the fewer false alarms, the more specific the system. A mishit is a situation whereby a vulnerable road user is in a dangerous zone of the truck but the system gives no signal. Mishits determine the sensitivity of a system: the fewer mishits, the more sensitive a system is. These two yardsticks are interrelated: if a system is set up to be very sensitive (high sensitivity), it will generate few mishits but probably more false alarms, and vice versa.'
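To make these definitions concrete, the yardsticks can be computed from the four SDT outcome counts (hits, mishits, false alarms, correct rejections). The counts below are invented example values, not BDWS measurements; the false alarm ratio follows the definition quoted above (incorrect signals divided by hits).

```python
# Illustrative SDT bookkeeping for a detection system; the counts are
# invented example values, not measurements from any real BDWS.

def sdt_metrics(hits, mishits, false_alarms, correct_rejections):
    """Compute the yardsticks discussed in this chapter."""
    sensitivity = hits / (hits + mishits)              # few mishits -> high sensitivity
    specificity = correct_rejections / (correct_rejections + false_alarms)
    false_alarm_ratio = false_alarms / hits            # definition used in the report
    return sensitivity, specificity, false_alarm_ratio

# Example: 90 correct detections, 2 mishits, 30 false alarms,
# 400 situations correctly left without a signal.
sens, spec, fa_ratio = sdt_metrics(90, 2, 30, 400)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} false alarm ratio={fa_ratio:.2f}")
# sensitivity=0.98 specificity=0.93 false alarm ratio=0.33
```

The example makes the trade-off tangible: lowering the decision criterion to remove the two mishits would almost certainly raise the count of 30 false alarms further.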

The implementation of SDT in a BDWS is not a standalone implementation but is

strongly linked to other theories like the control dimension (Automation), and the

attention and trust (Trust) that a user may have in the system (Wickens & Parasuraman,

2008). This interrelationship is elaborated in the American report on guidelines for the

design of interfaces for Crash Warning systems (Campbell, Richard, Brown, &

McCallum, 2007). The example below (Figure 6) shows what the implications are for

false alarms and trust when introducing a one-stage or two-stage warning in a Forward

Collision Warning system. What is interesting about this model is that it proposes a one

or two-stage warning system, both of which have advantages and disadvantages and

each its own value in terms of a specific BDWS. The number of warning stages (one or two) refers to the number of warning levels presented to the driver during a driving situation preceding a possible accident. A one-stage warning system generally requests immediate corrective action, while a two-stage system first requests immediate attention and then a possible corrective action. The relationship with trust is clear: the number of false alarms affects the trust a user has in the efficacy of the system. The more false alarms, the less trust there is in a correct detection.

Figure 6 False alarms and effects on step-by-step warning (from Campbell, Richard, Brown, & McCallum, 2007)
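The one-stage versus two-stage distinction can be sketched as a small decision rule. The threshold values and the time-to-conflict input below are invented assumptions for illustration, not values taken from Campbell et al. (2007).

```python
# Hypothetical two-stage warning logic: the cautionary stage requests
# attention, the imminent stage requests immediate corrective action.
# Both time thresholds are invented example values.

ATTENTION_TTC_S = 4.0   # seconds to predicted conflict: cautionary stage
ACTION_TTC_S = 1.5      # seconds to predicted conflict: imminent stage

def warning_stage(road_user_detected: bool, time_to_conflict_s: float) -> str:
    if not road_user_detected:
        return "none"
    if time_to_conflict_s <= ACTION_TTC_S:
        return "imminent"       # request immediate corrective action
    if time_to_conflict_s <= ATTENTION_TTC_S:
        return "cautionary"     # request driver attention
    return "inform"             # presence only, no urgency

print(warning_stage(True, 3.0))  # cautionary
```

A one-stage variant would simply collapse the cautionary and imminent branches into a single warning, trading earlier attention for fewer (potentially false) alerts.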

6.1 How is a false alarm experienced?

To provide extra insight into the relationship between SDT, control and trust in a

system a study looked at whether false alarms are worse than mishits in respect of

compliance with and trust in a system (Dixon, Wickens, & McCarley, 2007). The aim

of the study was to investigate through a follow-and-control task assisted by automatic

diagnosis how errors (in the automation system) had an effect on the compliance and

trust of the operator as well as whether compliance and trust can be regarded as

independent of each other in this context. The results revealed that false alarms have a more damaging effect on performance than mishits. In view of the

potential consequences of a mishit in a BDWS system, the results of this lab study do

not translate as such to a criterion for a BDWS system but they do illustrate that false

alarms have a clear effect on the trust a user may have in a system. In addition, false

alarms tend to be more obvious than mishits if the latter have no serious consequences,

which is often the case (Maltz & Shinar, 2003). For a BDWS the mishits may, however,

have fatal consequences.

The report by Campbell et al. (2007) refers to a maximum number of false alarms of 1 per week for 'inside the lane' and 1 per week for 'outside the lane' for a Lane Departure

Warning system. It also considers detection sensitivity adjustment by the user to enable

the quantity of false alarms to be minimised without significantly influencing the

performance capacity of the equipment. This not only keeps the user involved in the

system but also retains his trust in the system.


6.2 What is needed to successfully implement this theory in a BDWS?

To successfully implement SDT in a BDWS further study is needed of the precise

relationship between SDT, trust and the control dimension; in the area of false alarms

and mishits in detection systems this would concern the relationship between trust in the

system and the extent of false alarms, specifically in BDWS for trucks/buses. For

example, are there different SDT values needed for different warning stages? In the

literature there is plenty on applications in various fields and corresponding values but

not in the specific BDWS case and the corresponding values for detecting danger. Since

this concerns an initial phase in the study of the relationships between SDT and other

theories discussed in this report, a lab study would be the first step towards an improved

BDWS.

The key points of this chapter

- Identification of the relationship between SDT and other theories is essential for an

effective BDWS.

- An effective BDWS must not produce mishits and must generate as few false alarms as possible.


7 Automation and Control: Inform / warn / intervene

The control dimension refers to the extent to which the user has control over the system

and vice versa. Control is a central theme in automation, i.e., the assumption of user

tasks by a technical system. This can lead to less control, especially at the level at which

the automation is applied, but may also lead to greater control at a somewhat more

abstract level. This is often called 'supervisory control', whereby the user oversees the

automated process and controls it at a high level. Automation (and the related control

dimension) can intervene at different levels. In the control dimension it is largely the

question of where to draw the line between different stages of control. In the case of

detection systems in trucks one can ask to what extent the user wants to be informed or warned, or whether the system should intervene. The current report investigates both the inform and warn aspects, but intervention is beyond the scope of this report.

TNO draws the following line between inform and warn:

'Inform means acquiring information about the presence of cyclists in the blind spot, in a certain sense "without judgement". Warn goes a step further. The report (Hoedemaeker et al., 2010) states:
"To be able to warn it is essential to make a distinction between a critical situation in which the driver is warned (e.g., auditively) and a non-critical situation in which the driver is only informed. This distinction can be made by establishing whether the presence of a vulnerable road user in the blind spot will lead to a possible accident and how much time remains until this accident. A non-critical situation concerns only the presence of a vulnerable road user in the blind spot (inform)."
A triangular warning sign thus serves as a warning to show a critical situation and informs when it concerns a non-critical situation.'
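The inform/warn distinction quoted above can be sketched as a minimal decision rule. The two-second time-to-accident threshold is an invented example value, not one proposed by Hoedemaeker et al. (2010).

```python
# Hypothetical inform/warn logic following the quoted distinction:
# non-critical presence -> inform; critical situation -> warn (e.g.
# auditively). The time threshold is an invented example value.

CRITICAL_TIME_S = 2.0  # invented time-to-accident threshold

def bdws_response(road_user_in_blind_spot: bool,
                  accident_possible: bool,
                  time_to_accident_s: float) -> str:
    if not road_user_in_blind_spot:
        return "none"
    if accident_possible and time_to_accident_s <= CRITICAL_TIME_S:
        return "warn"      # critical: e.g. auditive signal
    return "inform"        # non-critical: presence only

print(bdws_response(True, False, 10.0))  # inform
```

The sketch makes explicit that a warn requires both a possible accident and little remaining time, whereas mere presence in the blind spot only informs.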

The difference between inform and warn must be evident for the driver to avoid

confusion about whether it concerns a critical or non-critical situation (Hoedemaeker,

Doumen, Goede, Hogema, Brouwer, & Wennemers, 2010). The design of a BDWS

must be such that this distinction is immediately clear. This is a point of further study,

whereby a more elaborate distinction can be useful. Parasuraman, Wickens & Sheridan

(2000) have proposed a model in this context to distinguish various types and levels of

automation (Table 1).

Table 1. Model of Automation (from Parasuraman, Wickens, & Sheridan, A Model for Types and Levels of Human Interaction with Automation, 2000). The model crosses four stages (information acquisition, information analysis, decision selection, action implementation) with ten automation levels (10 = high, 1 = low); illustrative examples from the original table are given in parentheses.

Level 10: The computer decides everything, controls itself and thus ignores the person. (Route planner that indicates the presence of a traffic jam or congestion at a crossroads; brake action undertaken by the system.)
Level 9: Informs the person only if the computer decides to. (Informs the environment of a change in driving direction.)
Level 8: Informs the person only if subsequently requested.
Level 7: Automatic action and subsequently informs the person. (Trend display gives possible danger points at the point of visual focus.)
Level 6: Gives the person the time to make a decision before itself making a decision automatically.
Level 5: Undertakes the suggestion if the person agrees.
Level 4: Gives an alternative. (Prediction, for example of what the cyclist is likely to do (direction).)
Level 3: Reduces the selection to a small amount.
Level 2: The computer offers a complete set of decision/action options. (A "lock on" function like marking moving objects.)
Level 1: The computer offers no assistance; the person takes all decisions and actions.

There are four different stages of automation with ten levels of automation (Levels of Automation, LOA; see Table 1), from a high LOA (system has complete control, ignores the user entirely) to a low LOA (system gives no assistance whatsoever, user in full control). The four different stages of automation are (with example systems):

- Information acquisition. Information about an object, such as a "lock on" function for radar as in a detection system.
- Information analysis. Predictor displays are at a low LOA level, trend displays at a higher LOA, and a display tool that projects the landing strip on the windscreen of the cockpit is at a high LOA level.
- Decision and action selection. An example is a route planner that shows a traffic jam on the route and offers an alternative.
- Action implementation. For example, an automatic warning from the truck to other road users, or even braking undertaken by the system of the truck itself (high LOA).
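The four stages and ten levels can also be written down as a simple data structure. The level assignments below describe a hypothetical BDWS configuration and are illustrative only, not prescriptions from the model.

```python
# Sketch of the four automation stages of Parasuraman, Wickens &
# Sheridan (2000), each assigned a level from 1 (no assistance) to
# 10 (full autonomy). The example BDWS assignments are hypothetical.

from enum import Enum

class Stage(Enum):
    INFORMATION_ACQUISITION = "information acquisition"
    INFORMATION_ANALYSIS = "information analysis"
    DECISION_SELECTION = "decision and action selection"
    ACTION_IMPLEMENTATION = "action implementation"

# A BDWS that senses and analyses largely autonomously but leaves
# decisions and actions almost entirely to the driver:
example_bdws_loa = {
    Stage.INFORMATION_ACQUISITION: 9,   # system decides what to detect
    Stage.INFORMATION_ANALYSIS: 7,      # automatic analysis, then informs
    Stage.DECISION_SELECTION: 4,        # at most suggests an alternative
    Stage.ACTION_IMPLEMENTATION: 1,     # driver performs all actions
}

for stage, level in example_bdws_loa.items():
    print(f"{stage.value}: level {level}")
```

Writing the configuration down per stage makes explicit that a BDWS need not sit at one single level of automation: it can be highly automated in acquisition and analysis while leaving action entirely to the driver.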


A system like a BDWS can thus contain all stages of automation if necessary, while a more complex BDWS can even use several levels in parallel, in various constellations. Secondary aspects in the model are reliability and cost consequences, so it is important to determine in which area, or areas, of the model the system will have to function. Each

area has a consequence for the signal-noise ratio (see Signal Detection Theory, Chapter

6) as well as the trust a user must have in the system. The more the automation takes

over control from the user, the more trust the user must (be able to) have in the system.

The levels reveal that the user, especially in the information phase, must rely on his own

interpretation of the objective information provided by the system. By nature this gives

the user more trust in the system since he makes the decision himself on the basis of the

information provided. In the decision phase the system takes a decision whereby the

user has to trust the system to make the right choice.

It is clear, then, that this phase distinction can be a tool in structuring a BDWS for different situations in informing, warning and possible intervention. More tangible

information about the application of a more simple form of automation can be gained

from the report on Interfaces for Crash Warning Systems (Campbell, Richard, Brown,

& McCallum, 2007) in which a guideline for display design is used in relation to 5

stages that also fit in with the stages of automation:

No detection of vehicle environment

Detection of vehicle environment

Warning

Approaching threat

Threat

7.1 The relationship between automation and the other aspects in a BDWS

The efficacy of automated systems depends on the relationship with other aspects

discussed in this report:

Trust and automation (with a minor attention component)

Signal Detection Theory and automation

Multimodality and automation

7.1.1 Trust and automation

Trust in and reliability of automation play a role in the detection of an occasional system error when performing several tasks at once. As discussed in chapter 10 of this report, trust in

a system, and particularly the trust a person has in an automated system, is important for

persons and BDWS to work well together. The relationship between automation and

trust is clearly evident in the study by Parasuraman et al. (Parasuraman, Singh, & Molloy, 1997), which shows that an error in the automation is often attributable not to the system itself but to the manner in which people expect an automated process to behave: misinterpretation (over-reliance on automation). To counter this misinterpretation (excessive trust in a system), the study proposes boosting the attention of the user through training during automated tasks. For a BDWS this may mean that attention during a blind spot situation is maintained or increased by training (increasing attention by increasing control over the system).

What happens when an automated process is doubted? How does this affect people that

undertake a task in which trust in an automated process must be high? Answers to these

questions are essential for application in a BDWS. A simulator study (Sarter &

Schroeder, 2001) of pilots able to trust their system in extreme flying conditions (like


ice on the aircraft) reveals that the type of automation can make a difference. Two types

were tested: information on the current status and a decision advice on how to

manoeuvre in icy conditions. Good reliability saw both types improve the performance

of the pilots while lower reliability of the automation that advised on a decision saw the

pilots perform worse.

7.1.2 Signal Detection Theory and automation

In the relationship between Signal Detection Theory and automation, Wickens and

Parasuraman (Wickens & Parasuraman, 2008) indicate the consequences of high false

alarm rates of automation in the design of a system. High false alarm rates are often accepted in a system design because of a need for certainty of detection: it is considered better to detect something falsely than to accept the consequences of a missed detection. This, however, widens the grey area between detection and non-detection and affects the trust of the user. The model of automation (Fig. 2) can be used here to

optimise the ratio of correct detection and false alarms. Clear priority setting for

detection for the different stages and levels in the automation model can help improve

the ratio between correct detection and false alarms.
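The trade-off between correct detections and false alarms can be made concrete with the standard Signal Detection Theory measures; a minimal sketch (the hit and false alarm rates are invented for illustration):

```python
from statistics import NormalDist

def sensitivity_and_bias(hit_rate: float, fa_rate: float):
    """Signal Detection Theory: d' (sensitivity) and c (criterion)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A detector tuned for certainty: almost no misses, many false alarms.
d, c = sensitivity_and_bias(hit_rate=0.99, fa_rate=0.30)
# The negative criterion reflects the liberal bias described above:
# the system prefers a false alarm over a missed detection.
assert c < 0
```

Setting clear detection priorities per stage and level then amounts to choosing, per area of the model, where the criterion c should lie.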

7.1.3 Multimodality and automation

A brief sketch of the situation: a driver gets into a blind spot situation and is informed

by a BDWS that there is a potential danger. The system correctly detects the danger and

warns the driver who can respond. What is not correctly detected, however, and just as

great a danger, is the second cyclist that may well be further away than the first cyclist

but is cycling that bit faster and so will arrive in the blind spot at the same time. Is there

information available that can tell us about how a BDWS has to behave in this

situation?

Studies of cue selection effects (choosing between different detected objects) when

searching for targets indicate that this search can be accelerated but when a second, less

important target is added, the less important one is detected first (Yeh, Wickens, & Seagull,

1999). The prioritising of a target relates to making a decision based on the choices

offered. Because a detected target in a BDWS may present a possible danger, it is

important to know how the user establishes (according to the possibilities given) what

the danger is in the case of several targets. Campbell et al. (Campbell, Richard, Brown, & McCallum, 2007) also examine this and even draw up a guideline for prioritising

several warnings in a detection system.
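The two-cyclist example above suggests that detected targets should be prioritised by their time of arrival in the blind spot rather than by distance alone; a hypothetical sketch (labels and numbers are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Target:
    label: str
    distance_m: float        # distance to the blind spot zone
    closing_speed_ms: float  # speed towards the zone

    @property
    def time_to_zone_s(self) -> float:
        return self.distance_m / self.closing_speed_ms

# The nearer cyclist is not necessarily the more urgent one:
cyclists = [
    Target("cyclist A", distance_m=5.0, closing_speed_ms=2.0),   # 2.5 s
    Target("cyclist B", distance_m=12.0, closing_speed_ms=6.0),  # 2.0 s
]
by_urgency = sorted(cyclists, key=lambda t: t.time_to_zone_s)
most_urgent = by_urgency[0]  # cyclist B, despite being further away
```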

7.2 Adaptive Automation

Adaptive Automation (Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992) is

varying the automation system to maximise the benefits of automation and minimise the

disadvantages. The concept has been around a long time but prototypes are only recent.

Adaptive Automation can be applicable in two ways:

To reduce stress among users in situations where a high workload is detected (e.g.,

distraction, multi-tasking), processing capacity is reduced (e.g., tiredness) or user

stress is measured (like a psycho-physiological reading such as skin conductance). In such

cases an adaptive automation system can take on a fuller degree of control thereby

alleviating the user temporarily.

Function allocation geared to the person rather than the system. Reducing an

automated task to a manual task to give the user back an active role in operating the system (during a task with a low workload). Various options exist to

measure the workload, but acceptance of such systems is a problem.
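Both directions of adaptation could be captured in a single rule; a toy sketch (the thresholds and the normalised workload measure are assumptions, not taken from the cited work):

```python
def adaptive_loa(workload: float, base_loa: int, max_loa: int = 10) -> int:
    """Adaptive Automation sketch: raise the level of automation when
    measured workload is high, hand control back when it is low.
    `workload` is a normalised estimate in [0, 1]."""
    if workload > 0.8:       # overload: system takes on more control
        return max_loa
    if workload < 0.2:       # underload: return the task to the user
        return max(1, base_loa - 2)
    return base_loa          # normal conditions: no change

assert adaptive_loa(0.9, base_loa=5) == 10
assert adaptive_loa(0.1, base_loa=5) == 3
assert adaptive_loa(0.5, base_loa=5) == 5
```

As the text notes, the difficulty lies not in this rule but in obtaining a reliable workload measurement and in user acceptance of such systems.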


7.3 Current implications for the control dimension

For automation too, the relationship with the other subjects is considerable, though little (or even no) research has been done on the influence of automation on the user, the trust that the user has, or the extent to which a detection is recognised by the user as a

danger. A system may have several functions as shown in the various stages of

automation. Further study is recommended to examine how automation is accepted and

to what degree this influences the decisions taken by the driver, the trust in the system

and the extent of detection programmed in the system.

The added value of the control dimension lies in the design of a clear control structure that takes account of the various stages and levels of control, better identification of the relationship with the other subjects, and a more transparent system. In this case this

may produce a more efficient system since clarity exists as to which areas the system

will have to perform specific actions and what consequences this will have for the user.

7.4 The key points of this chapter

- The distinction between inform, warn and intervene must be well structured because

there are many stages in the automation model in which the boundary between inform and warn, for instance, may blur.

- The relationship between automation, trust and SDT is considerable. The effect of

one on the other must be clearly examined.

- Adaptive automation is a form of control that offers possibilities to improve the

relationship between driver and system because the user is able to take temporary control of the system himself.

- In detecting several dangers the priorities per danger must be clear to the driver.


8 Cognitive dimension

The main cause of accidents is the combination of a driver not looking where he should

look and the occurrence of an unexpected event (Dingus et al., 2006). A BDWS system

can focus on these two factors by ensuring that the driver looks where he should look

and making the unexpected event (a cyclist entering the blind spot) less unexpected.

This chapter provides insight into the concepts that are key to a BDWS by considering

different psychological theories and using the OWI (environment-road user interaction)

model as described in an earlier report by TNO (2010). The OWI is used here as a guide

to classify supplementary literature. The Interruption-Reaction-Comprehension (IRC) model (McCrickard et al., 2003) will also be discussed.

8.1 Task and environment: OWI

An earlier study by TNO (2010) generated a model that describes the relationship

between the environment, road users and a blind spot situation – the OWI model (see

Figure 7). The concepts in the model are described briefly and supplemented where

possible.

Figure 7. OWI (environment-road user interaction) model (Hoedemaeker et al., 2010)

8.2 The environment

The environment in the OWI model includes the road being used, the weather and sight

circumstances, the direct environment of the road and the other road users. The

environment determines to a large extent the task performed by a driver. The incorrect

or inadequate observation of the environment quickly produces danger and is one of the

main causes of blind spot accidents. A BDWS system can support the driver in this


aspect by giving more insight into the environment. Such an informing system can make visible to the driver the location of road users that are difficult or impossible to see, in order to prevent a possible accident. The system can also go a step further by making a risk analysis of different environmental factors like sight conditions and traffic intensity. In

a high-risk situation the system can try to free up resources for the driver by making

secondary tasks (phone, radio) impossible thus allowing the driver to better observe the

environmental situation. This is also called workload management. Examples of

workload management systems are the Codrive (Zoutendijk et al., 2003) and AIDE

(Engström, 2006) projects discussed in this chapter. Such a system could be part of a

BDWS to prevent a blind spot situation early. Key here is that the system properly

assesses the traffic risks.

8.3 The (normative) task

The environment and type of vehicle determine the normative task requirements in a

certain traffic situation. This normative task consists of rules that the driver must know

and are partly scripted in the long-term mental models of the road user. The correct

execution of these rules (like when to look in the mirror) should minimise the chance of

a blind spot accident. However, in practice, the normative task is not always perfectly

executed because a truck driver is under a heavy task load. A BDWS with workload

management functions would help here since the resources can be freed up to enable

better adherence to the assimilated rules. A BDWS system can also act as a memory jog

for the assimilated rules by focusing attention on these (like mirrors) if it becomes

apparent that a driver is not complying with the crucial rules (like forgetting to check a

mirror).

The route of the truck can also be regarded as part of the normative task. This route is

mostly predetermined by a route planner. Use can be made of this in a BDWS because it

will be known at an early stage where the truck will turn. Integrating a BDWS with

navigation information is highly recommended since the potential risk (the moment that

a cyclist enters the blind spot and the navigation system indicates that the driver will turn

right) can be assessed very early.

8.4 Vehicle characteristics

The truck, equipped with viewing aids, determines to a great extent what the driver can see of the

environment from the vehicle. But vehicle characteristics influence not only the view of

the environment but also the „behaviour‟ of a vehicle, like acceleration, braking

distance and turning circle as well as where possible displays of the BDWS can be

placed and the range of detection. The braking distance combined with the reaction time

of the driver determine the last moment when a warning signal can be presented by a

BDWS, the timing of which would have to be modified to the type of vehicle and its

load.
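This timing constraint follows from elementary kinematics: the warning must come no later than the reaction distance plus the braking distance before the conflict point. A minimal sketch (the speeds, reaction time and deceleration values are illustrative assumptions):

```python
def last_warning_distance(speed_ms: float, reaction_time_s: float,
                          decel_ms2: float) -> float:
    """Minimum distance to the conflict point at which a warning can
    still lead to a full stop: reaction distance + braking distance."""
    reaction_dist = speed_ms * reaction_time_s
    braking_dist = speed_ms ** 2 / (2.0 * decel_ms2)
    return reaction_dist + braking_dist

# A loaded truck brakes less hard, so the warning must come earlier:
d_empty = last_warning_distance(8.33, 1.5, decel_ms2=5.0)   # ~30 km/h
d_loaded = last_warning_distance(8.33, 1.5, decel_ms2=3.0)
assert d_loaded > d_empty
```

This is why the text argues that warning timing must be adapted to the type of vehicle and its load.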

A major conclusion of the previous TNO report is that vehicle characteristics in many cases (in 13 of the 37 blind spot accidents studied) are decisive in an accident occurring. These included wrongly adjusted (or absent) mirrors or objects on the

dashboard that hindered the view. So in addition to a BDWS warning function an

informing function is also advisable since one cannot assume by definition that existing

aids like mirrors are informative enough. Certainly given that detection systems in the

near future will not be free of false alarms, and thus not 100% trustworthy, a driver will want to know why he is being warned. Absent or poorly adjusted mirrors or other hindrances to the view make such a check impossible, and the driver may then interpret the warning as a false alarm.

8.5 Observation and attention

Most actions executed by road users are visually guided. The truck driver observes the

environment largely visually and selects certain information to execute action. This

selection of information occurs through attention (see van der Heijden, 1992). Both the

attention and the eyes of a road user are able to focus on one place at one moment while

the whole field of view to be covered is much larger. However, information can also be

processed unconsciously without attention. Peripheral stimuli can be processed more

quickly that way (Nikolic & Sarter, 2001). The subconscious can perceive smaller

differences than the conscious (Dijksterhuis & Aarts, 2003) and sometimes works faster

than conscious observation (Bargh, Chaiken, Govender, & Pratto, 1992). The repeated

presentation of certain stimuli can generate habituation and thus more automated and

unconscious processing (Bargh, 1994). Applied to a BDWS this means that the

information or warning does not necessarily need to be presented focally to have an effect on

the behaviour of the driver. Argued from the perspective of the excessive workload of

the driver it is particularly advisable to place a BDWS display in the driver's peripheral field of view.

While processing the information presented may incur a slightly higher workload in the

first instance, in time the information will be largely processed unconsciously and thus

minimise an increase in workload. This is especially true for the informing function of a

BDWS because the consequence of not noticing a warning signal may be too great.

What the road user observes of the environment is partly determined by the

environment in combination with the vehicle characteristics and experience. Attention

(and thus where the focus lies) is partly determined by the external environment. The

attention can be 'drawn in', as in a sudden change in the peripheral field of view or by something very conspicuous, or can be internally 'guided' by the road user through experience or expectation (see Wierwille et al., 1992).

Distraction also plays a role in what a road user observes. Is the road user just concentrating on the driving task, or also occupied with other things like phoning, talking with friends on the bicycle, listening to the radio, etc.? Smiley (2005) defines distraction as

misallocated attention. Distraction can, like attention, come from outside (external) or

from the driver himself (internal). Distraction is important in a BDWS because it leads

to:

- Slower reaction times to warning signals; distracted drivers tend to be 0.4 s

slower (Lee et al., 2000).

- Less capacity to detect events and objects (Young et al., 2003).

- Impaired capacity to make decisions (Young et al., 2003).

Just as we would get irritated by a co-driver who repeatedly warns us of dangers we have already seen, we would get irritated if a BDWS did the same. Because this lessens the

acceptance of the system it is important to detect whether a truck driver is distracted or

not. Distraction detection is already being used in a number of accident warning

systems. These systems are also known as distraction adaptive systems (distraction-

adapted collision warning systems) and generally warn earlier or are more sensitive if a

driver is distracted or cancel a warning if the driver is fully attentive (Roland et al.

2007; Opperud et al., 2005). Research into distraction adaptive systems reveals that

earlier warnings are more useful and valued despite the doubling of the number of


warnings (Roland et al. 2007), and that the cancellation of warnings is more valued than

systems without distraction adaptation. However, if warnings are cancelled this

sometimes creates confusion for the driver because he does not always realise why a

warning has been cancelled. In curve speed warning systems, distraction adaptation has resulted in better performance in maintaining a safe speed in the bend (Roland et al., 2007). No certainty can be given here about whether use of

distraction adaptation in a BDWS is desirable due to the absence of research but it could

help to boost trust in and acceptance of the system, especially when there is a

significant possibility of false alarms. The use of distraction adaptation by cancelling

warnings can reduce the incidence of false alarms, although it is difficult to ascertain

whether the driver is distracted or whether he has already seen the cyclist. This could be an

interesting subject for future research: is it possible to make a reliable assessment of

whether the driver has seen the cyclist?
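The two adaptation strategies described here (warning earlier for a distracted driver, cancelling for an attentive one) could be combined in a single gating rule; a hypothetical sketch (the 1.5 factor and the gaze signal are assumptions; as noted above, reliably establishing whether the driver has seen the cyclist remains an open question):

```python
from typing import Optional

def warning_threshold_s(base_ttc_s: float, distracted: bool,
                        gaze_on_target: bool) -> Optional[float]:
    """Distraction-adapted warning sketch: warn earlier when the driver
    is distracted, suppress the warning when his gaze is already on the
    detected cyclist. Returns the TTC threshold, or None to cancel."""
    if gaze_on_target:
        return None              # driver has (probably) seen the cyclist
    if distracted:
        return base_ttc_s * 1.5  # earlier warning for a distracted driver
    return base_ttc_s

assert warning_threshold_s(2.0, distracted=True, gaze_on_target=False) == 3.0
assert warning_threshold_s(2.0, distracted=False, gaze_on_target=True) is None
```

Note that cancellation without explanation risks the confusion described above; a suppressed warning might still be shown as peripheral information.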

8.6 Characteristics of the road user

The OWI model distinguishes between 'stable' characteristics that do not change very

fast over time and temporary characteristics that may change quickly over time. More

or less stable characteristics of road users are: attitude/persuasion, knowledge,

competence and long-term mental models, the latter also known as 'scripts' and

based on experiences of previous traffic situations and how to act in them (Takemoto,

Kosaka, Noda, Nishitani, Uechi & Sasaki, 2007). The attitude/persuasion of the road

user and knowledge may influence interpretations of the situation. Attitudes regarding

the BDWS can also be counted among these characteristics, such as the truck driver

having the notion that the system shares his goal and corresponding social values (see

chapter 11).

The stable and temporary characteristics influence the workload a road user experiences

in a particular situation. The characteristics of the driver are important in detecting

critical situations, although a difficulty here is the strong interrelationship between the

factors that contribute to the general state of the driver, like stress, emotion and tiredness.

Based on different kinds of research among bus and truck drivers clear psycho-

physiological, speech and subjective changes have been demonstrated when drivers

become tired (Milosevic, 1997). Although not a core task of a BDWS, measuring

tiredness, stress and emotion could be a very useful supplement to a distraction adaptive

system on a BDWS, especially because this makes it possible to optimise timing and

modality, for example, on the basis of user characteristics.

8.7 Workload

The previous TNO report shows that, for the driver, workload is also a key factor

besides vehicle characteristics in the occurrence of a blind spot accident. This may be

related to the intensity of the traffic, an awkward manoeuvre or looking for the route.

Different concepts of the OWI model affect the workload (the previous TNO report

contains a full description).

- The environment

- The normative task

- Secondary tasks

- Stable road user characteristics

- Temporary road user characteristics


Mental workload is the amount of mental resources needed to perform more than one

task in relation to the resources that are available (de Waard, 1996). There is a causal

relationship with distraction since exhausted mental resources increase the risk of

misallocation of attention. But a high workload is not a prerequisite for distraction,

because the misallocation of attention can also result from a low workload, as in day-

dreaming or looking at roadside advertising hoardings (Engström & Victor, 2009).

One way to prevent high workload among truck drivers is workload management. The

notion behind this is the prevention of excessive workload and distraction by

dynamically supporting the driver to manage different driving-related and other tasks.

This is effected especially through controlling the information provided by systems in

the vehicle and by limiting the functionalities available to the driver in demanding or

potentially demanding situations (Engström & Victor, 2009). There are several studies

that have made use of workload management, like the TNO Co-drive (Zoutendijk et al.,

2003) and AIDE (Engström, 2006) projects. These systems use so-called driver-vehicle

environment (DVE) assessment algorithms and tend to monitor the following

parameters (Engström & Victor, 2009):

- The driving load

- The secondary task load

- The traffic situation

- Temporary driver limitations

- Individual driver characteristics
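How such monitored parameters might be combined into a single estimate can be sketched as a weighted sum; a toy illustration only (the weights and normalisation are assumptions, not the actual Co-drive or AIDE algorithms):

```python
def workload_estimate(driving_load: float, secondary_task_load: float,
                      traffic_complexity: float, driver_impairment: float,
                      sensitivity: float = 1.0) -> float:
    """Toy driver-vehicle-environment workload estimate: a weighted sum
    of the monitored parameters, clipped to [0, 1]. All inputs are
    assumed to be normalised to [0, 1]; `sensitivity` stands in for
    individual driver characteristics."""
    weights = (0.35, 0.25, 0.25, 0.15)
    inputs = (driving_load, secondary_task_load,
              traffic_complexity, driver_impairment)
    raw = sensitivity * sum(w * x for w, x in zip(weights, inputs))
    return min(1.0, max(0.0, raw))

# High driving load plus a phone call pushes the estimate up:
busy = workload_estimate(0.9, 0.8, 0.7, 0.2)
quiet = workload_estimate(0.2, 0.0, 0.1, 0.0)
assert busy > quiet
```

A workload manager would use such an estimate to decide when to postpone non-critical information or lock out secondary functions.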

Two conclusions can be drawn from studies into workload management systems

(Engström & Victor 2009). First, the priorities and plans in the displayed information

appear to work well in demanding driving situations and, second, the effect seems to be

highly dependent on the type of information that is “scheduled”. In an optimal situation

a BDWS is integrated with a workload management system such that the workload does not increase too quickly, on the principle that prevention is better than warning, especially in view of the significant chance of false alarms. Such a system can also modify the BDWS feedback to the driver such that it is less

disruptive or more obvious depending on the situation and urgency. This may take

place, for instance, by modifying the modality to the modality where the fewest

resources are used. For this functionality control of other vehicle systems is not

necessary, only the capability to detect the status of other systems or the state of the

driver resources. This makes it generally more easily applicable, also for retrofit

systems.

8.8 World view and interpretation

The world view is a representation of the environment and its subjective interpretation.

Different processes and factors may construct an incorrect world view in the sense that

it does not correspond to the „physical‟ world. A driver may fail to see a cyclist because

he is in the blind spot or because the workload is too high, causing visual information to be

missed. The use of an information display to check where cyclists are may boost the

validity of the world view and give the driver an opportunity to check his world view in

this area. The world view of the cyclist can also be improved, especially the realisation

that a dangerous situation could occur despite the fact that the cyclist has right of way.

This could be aided by the application of a sticker.


8.9 Scenario evaluation, decision and action

On the basis of the world view a driver has formed of the situation and the

corresponding prediction of the imminent future, the driver decides on action. The

consideration of scenarios and the subsequent decision are influenced by the

characteristics of the driver (is he tired, impatient, experienced in such situations, etc.).

These characteristics can be influenced by using a workload management system to

enable better consideration and decisions.

8.10 Interruption Reaction Comprehension model

This section looks at a model geared to notification systems to classify different forms

of BDWS. Three parameters – interruption, reaction and comprehension (McCrickard,

Chewart, Somervell and Ndiwalana, 2003) – are used to identify paths through the

information processing model of Wickens and Holland (2000), see Figure 8. While this

model does not correspond entirely with the OWI model as discussed above, there are

enough commonalities to use it for BDWS. These paths can then help identify and use

the most relevant cognitive concepts and theories. Using this model will not generate

direct recommendations for BDWS, but it does place it better in the existing literature

of research into information processing models. The OWI model is, however, much

more geared to this specific situation, and is the basis for future research.

8.11 Interruption

McCrickard et al. define interruption as an event that sets in motion a shift of attention

focus from task to notification. In the event of an informative BDWS a notification by

the system that overly disrupts the attention on driving is undesirable. Guidelines for in-

vehicle information systems suggest a minimal number of and type of interactions with

displays, limiting the degree of display change and the time that a display is visible

(Ballas et al. 1992; Green 1999; Tufano et al. 1996; Sheridan 1991).

Figure 8. Information processing model (Wickens and Hollands, 2000)

For a warning BDWS, however, a higher degree of interruption can be incorporated because in that

case the avoidance of a possible blind spot accident is key.

8.12 Reaction

The second parameter in the model is a rapid and accurate response to the stimuli given

by the notification system, referred to by McCrickard as reaction. In a BDWS cues are

presented that keep the driver abreast of key information, so it is essential that the driver

makes a distinction between two or more system values. Several studies have

investigated how the reaction to a notification can be quickly observed and processed

by means of pre-attentive processes. In this respect a different colour, shape and

movement are important (Enns and Rensink 1991; Healey et al. 1996; Healey and Enns

1999; Bartram 1998; Bartram et al. 2001; Bartram 2001) and account must also be

taken of this in a BDWS. When a BDWS gives a critical warning, the desired reaction of the driver is to brake the truck. However, it is more likely that the driver will want to check

the reason for the signal, an effect reinforced by the detection being susceptible to false

alarms. To ensure the right reaction from the driver, good comprehension of the

situation and the BDWS is necessary.

8.13 Comprehension

While it is important to react to an informative or warning cue quickly and accurately, it

can also be important to remember, comprehend and later transfer that information.

McCrickard et al. call this the comprehension parameter. This may not be the key

parameter in a BDWS but the driver does have to comprehend where the cyclist is and

possibly remember it because he is occupied with the driving task. Research has been

done into several factors that influence the processing and comprehension of

notifications. The position, area and colour of the notification are important (Chewar et

al. 2002). As stated above, comprehension of the situation and the BDWS are key to the

right reaction to a notification. Comprehension of the BDWS can be boosted by good

training and perhaps through training the right reaction in a simulator. Comprehension

in the situation can be achieved by the informative function of a BDWS, especially the

very important matter of showing the location of cyclists in relation to the truck.

8.14 Categorisation

Informative, warning and combined systems, with and without stages, can be categorised according to the three parameters interruption, reaction and comprehension, based on extremes, as illustrated in Table 2. An informative system must not cause interruption because driving is the primary task and the workload of the driver is already high. For the interruption parameter a “0” must be filled in, so no auditive or haptic signals may be used, just visual information offered peripherally. This information must not change, move or flash too quickly because this may attract the attention of the driver.

Figure 9. Illustration of an informative BDWS.

Depending on how an


informative system is implemented, a certain level of reaction will be desired, so a

reaction parameter greater than zero must be filled in. To simplify matters here we use a

“1” because some reaction is expected. This may be checking the mirrors, taking the

bend more slowly or stopping the truck. More importantly, however, an informative system produces more comprehension of the situation. In this case a “1” must be filled

in. This comprehension can be created by showing relatively complex information, such

as a diagram of the truck and the location of the detected cyclists and pedestrians along

with an indication of the direction and speed of their movement (see Figure 10 for an

example).

A warning system is intended to attract the attention based on the fact that the system

only warns in a critical situation, so use must be made of focal visual signals or clear

flashing, moving, bright signals in the peripheral field of view and clearly perceivable

auditive signals like a voice or a shout (a social cue works better, see also Chapter 3 and

11). Use can also be made of haptic feedback like a vibration in the steering wheel or

seat. For the interruption parameter a “1” can be filled in for a warning system. The

reaction is key here, namely that the driver swerves or brakes. For this parameter, too, a “1” is filled in. To ensure the right reaction, the warning must correspond with the desired behaviour. By

allowing the feedback to come from the direction of the danger, the driver can, for

example, steer immediately in the right direction to avoid a collision. A vibration in the

brake pedal can also indicate the need to depress it or a turning force on the steering

wheel can help the driver to steer away from the danger. Comprehension plays no

significant role here because this has already been generated earlier. A “0” can thus be

filled in. The other two systems can be categorised using similar arguments: a system (3) that warns using more complex information, so that insight into the situation is also given immediately, and a system (4) that shifts from an informative to a warning function as the situation becomes more urgent.

Table 2 Categorisation of BDWS by interruption (I), reaction (R) and comprehension (C) parameters

System  Description                                              I    R    C
1       A system that is purely informative/supportive           0    1    1
2       A system that warns only in critical situations          1    1    0
3       A system that informs and warns at the same time         1    1    1
4       A system that adjusts to the urgency of the situation
        and moves from informative to warning                    0→1  1    1→0
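The categorisation in Table 2 can be encoded directly, which also makes explicit how the parameter profiles map onto the signal choices argued above; a simplified sketch (the mapping rules condense the surrounding text):

```python
# I/R/C profiles for the four system types in Table 2.
PROFILES = {
    1: (0, 1, 1),                # purely informative/supportive
    2: (1, 1, 0),                # warns only in critical situations
    3: (1, 1, 1),                # informs and warns at the same time
    4: [(0, 1, 1), (1, 1, 0)],   # shifts with the urgency of the situation
}

def signal_design(interruption: int, comprehension: int) -> str:
    """Map an I/C pair to the signal choices argued in the text."""
    if interruption == 0:
        return "peripheral visual display, no auditive/haptic signals"
    if comprehension == 1:
        return "salient multimodal warning plus situation display"
    return "salient multimodal warning (auditive/haptic allowed)"

i, r, c = PROFILES[2]
design = signal_design(i, c)
```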

Following this categorisation, paths can be sketched through the information processing model, as shown in Figure 10. For future BDWS elaboration the salient concepts from the information processing model (Wickens and Hollands, 2000) can be related to the concepts of the OWI model. Subsequently, concepts can be identified that are the most central to the different BDWS systems.


Figure 10. Paths through information processing model (McCrickard et al. 2003)

8.15 The key points in this chapter

- A BDWS must make visible the location of road users that are difficult to see or are

not visible.

- A BDWS with workload management functions can be a solution because resources can be freed up to better follow the assimilated rules.

- A BDWS can also focus the attention to aspects of these rules (like the mirrors) if it

is clear that a driver transgresses these crucial rules. By integrating a BDWS with navigation, the occurrence of a risk situation can be assessed early on.

- The timing of the warning signal in a BDWS must be adapted to the type of vehicle and its cargo.

- An informative BDWS display must be placed in the peripheral field of view of the

driver.

- Distraction adaptation can be used by cancelling warnings to reduce the incidence of

false alarms.

- The cyclist must be clearly informed that he/she is in a blind spot.

- The cyclist must be warned in accident critical situations.

- The feedback from the BDWS must be modified to make it less disruptive or very

obvious depending on the situation and urgency.

- An informative display must be adjusted so that the driver can check where cyclists

are and why a warning signal has been given.


9 The timing of warnings and reaction times

9.1 Timing of warnings

For the timing of BDWS warnings comparison can be made with other driver support

systems. In the case of forward collision warning (FCW) the driver is warned if the

possibility exists of a frontal collision with an object or other road user. A study (Abe &

Richardson, 2005) has investigated the effect of different warnings on the brake

reaction of the driver of a vehicle driving at 50 km/h whereby the preceding vehicle

braked hard or normally. The conditions for the warnings were: early, late or no

warnings (baseline). The results led to the following conclusions:

1. In situations where the preceding vehicle braked normally the FCW did not cause

faster braking but when the preceding vehicle braked hard the FCW did cause the

reaction time to decrease and less variation in the braking than for the baseline

conditions in which no warnings were given. The faster reaction came mainly due

to the acceleration pedal being released faster.

2. Trust in the early warnings was greater than in the late warnings even when this had

no direct effect on the brake reaction time.

3. Drivers that experience late warnings tend to brake earlier than the warning that is

given, which reduces the trust experienced in the system since the utility is

underestimated and the late warning is experienced as a missed alarm.

In another study by Abe & Richardson (2006) it was found that when the time headway

(THW) is varied, this determines to a large degree the trust in the FCW warnings, since

when a driver drives a short THW it is even more essential for the warning to be given

in time. What is interesting is that when the THW to the preceding vehicle was greater,

the drivers still had more trust in the early warning while their brake reaction did not

differ from their reaction to the late warnings. This reveals that it is not always possible

to translate trust in the system to parameters of driving behaviour.

In the same study (Abe & Richardson, 2006) a hypothesis was drawn up that also links

the trust to the “normal” reaction time of the driver if no warnings are given. In the

baseline condition the accelerator was released at an average of 0.72 s and indications

were that when the warnings are earlier than average, trust scores better while warnings

given later than 0.7 s will reduce the trust.

So does this mean that the warnings must be given as early as possible to give the driver

as much time as possible to react? It seems logical to assume this but early warnings are

often regarded as irritating or as a false alarm even when they are correct (Janssen et al.,

1993) and so relatively early warnings may negatively affect the perceived efficiency of

a system (Hancock and Parasuraman, 1992). The main aspect of determining the timing

in the design of the BDWS system is finding the balance between these two different

aspects, namely giving a warning that is late enough not to be experienced as

irritating/superfluous while at the same time ensuring that a warning is early enough to

allow adequate driver reaction and trust.


9.2 Reaction times

A key indicator in searching for this balance is the reaction time (RT) of the driver to a

blind spot event and the expectation of drivers that they will be warned earlier than their

own reaction time. If this is not the case, then this mismatch between the timing and

expectation will, according to Abe & Richardson (2006), result in less trust in the

BDWS. In the case of FCW the RT of the driver is relatively easy to determine by

allowing the driver to brake as late as possible, for instance in the driving simulator.

However, the RT of the driver to a blind spot event is not so easy to

determine.

An estimate has been made, however, of the reaction time of the driver to a BDWS

warning, whereby the average reaction time is estimated at 1.5 s (Hoedemaeker et al.,

2010, para. 7.3), but this RT depends on trust in the system and it is stated that when the

truck driver trusts blindly in the signal from the BDWS and thus reacts immediately

without checking why the signal is given, the reaction time is shorter than this average.

If the truck driver has less trust in the system, then 1.5 seconds may be a realistic

estimate. So a potential optimal BDWS system should always warn at least 1.5 s before

the critical moment.
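The timing requirement above can be sketched numerically: warn no later than the estimated 1.5 s driver reaction time before the critical moment, but not much earlier, since early warnings risk being experienced as superfluous. The 0.5 s margin below is an illustrative assumption, not a value from the report.

```python
# Sketch of the timing trade-off from this chapter: a warning must come at
# least the driver's reaction time (estimated at 1.5 s, Hoedemaeker et al.,
# 2010) before the critical moment, but not much earlier, to avoid being
# experienced as superfluous. The 0.5 s margin is an illustrative assumption.
DRIVER_RT = 1.5  # s, estimated reaction time to a BDWS warning
MARGIN = 0.5     # s, assumed buffer above the reaction time

def should_warn(time_to_critical):
    """Warn once the remaining time drops to DRIVER_RT + MARGIN or below."""
    return 0.0 < time_to_critical <= DRIVER_RT + MARGIN

print(should_warn(3.0))  # False: a warning now risks being seen as superfluous
print(should_warn(1.8))  # True: the driver still has at least 1.5 s to react
```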


10 Modality dimension / Human Machine Interface

10.1 Setting priorities upon several signals, what first?

Signalling can inform the driver about the status of the situation he finds himself in.

BDWS is geared to warning the driver by a signal, whether a single signal or a series of

signals or signals coming from different modalities at the same time.

The addition of warning signals in a Blind spot Detection System can come in different

forms – visual like (flashing) light, auditive, haptic (vibration) or a combination of

different forms, known as multi-sensory.

To begin with, a cognitive neuroscientific study of braking response times to unimodal

and multi-sensory signals reveals that subjects braked faster in response to the

combination of auditive and haptic signals, while single (unimodal) signals produced

longer brake response times (Ho, Reed, & Spence, Multisensory In-Car Warning

Signals for Collision Avoidance, 2007).

The efficiency of multi-sensory information processing can be strengthened if the

information presented to the different senses comes from the same spatial location

(Driver & Spence, 2004) at the same point in time (Spence & Squire, 2003).

In good visual circumstances the visual senses dominate the auditive but in poor

visual circumstances hearing becomes more prominent (Alais & Burr, 2004). This

could indicate that for cyclists approaching the vehicle the auditive can be switched

to a combination of visual supported by auditive signals. However, the consequence

of this is that if another auditive task has to be performed (radio on, phone

conversation, ambient noise) the extra signal will cause more stress. A combination

of two different auditive signals combined with visual support can reveal several

objects in a display.

Simple tones are best if direct action is demanded (Campbell, Richard, Brown, &

McCallum, 2007). Whether this applies to the environment if a truck transmits a

warning signal, for instance, is not known. Auditory icons, recognisable ambient

noises that provide intuitive information about the object or the environment they

represent, are recommended in the design of detection systems because they ensure

shorter reaction times.

A study that tested in a simulator the effects of different locations of visual displays

(Head-up/ Head Down Display, HDD) combined with a phone related task (visual

and auditive) in which an unexpected danger on the road was given as a distraction

(Horrey & Wickens, 2004) provides an important indication of how much reaction

time can be gained. The main conclusion is that a HDD incurs the greatest delay in

reaction time (700 ms) compared to a Head-Up Display. The measurements were

done by registering lane position, vehicle control and speed control.

The report by Campbell et al. (Campbell, Richard, Brown, & McCallum, 2007) also

considers the pros and cons of using auditive, visual and haptic warnings (see Figure

11).


Figure 11 Pros and cons of auditive, visual and haptic warnings (from Parasuraman, Wickens, & Sheridan, A

Model for Types and Levels of Human Interaction with Automation, 2000)

Improvements in the modality dimension mean that the reaction times of the user can be

reduced. In this case milliseconds can reduce the braking distance by several

centimetres, which could be the difference between a collision or no collision. In the

quest to reduce reaction times multi-sensory warnings could be more valuable than

simple warnings. The human senses have their limitations but these can be compensated

for by adding several sensory stimuli (Ho & Spence, The Multisensory Driver:

Implications for Ergonomic Car Interface Design, 2008). The use of haptic feedback

appears to be able to boost the spatial perception of the user.
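The claim that milliseconds translate into centimetres can be checked with simple arithmetic: the distance covered during a reaction delay is speed multiplied by time, so at 50 km/h every millisecond saved is worth roughly 1.4 cm. The 40 ms figure below is an assumed example of a multi-sensory gain.

```python
# Worked check of the milliseconds-to-centimetres claim: distance covered
# during the reaction delay is speed * time. The 40 ms gain is an assumed
# example of a multi-sensory improvement, used here only for illustration.
speed_kmh = 50.0
speed_mps = speed_kmh / 3.6            # ~13.9 m/s

saved_s = 0.040                        # 40 ms of reaction time saved
saved_distance = speed_mps * saved_s   # metres travelled during that time
print(round(saved_distance, 2))        # 0.56 m shorter stopping distance
```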

Do auditive display tools have a positive or negative effect in combination with

visual displays (McBride & Ntuen, 1997)? Their effect is positive (some 800ms

quicker reaction) and for the tested combinations significant effects were found for

the response times for meters and flashing light and for meters and red light. The

combination of auditive and visual

signals is also significant but not significantly better. The lights had longer response

times than the meters and dials.

Haptic feedback is the last additional warning tool that results in a shorter reaction time.

Mechanical and electrical haptic displays are currently the best application options

(Chouvardas, Miliou, & Hatalis, 2008). Research has been done into whether tactile

stimulation has an effect on the sensory areas of the brain and what that means for the

processing of stimulus information (Bauer, Oostenveld, & Fries, 2009). The test involved

fixating on a point in the centre of a visual screen while objects appear in the corner of

the screen. The response is to push a button with the left or right index finger connected to a

haptic sensor, depending on the direction of the object that appears. The result was that

reaction times were reduced by 40ms with tactile support. A change in the frequency of

the haptic display could have a further effect on reducing the reaction times. In

combination with auditive signals an effect occurs whereby a low vibration frequency

causes an auditive signal to be perceived as longer (Yoblick & Salvendy, 2007).

Research (Campbell et al., 2007) into multi-sensory applications in detection systems

like Forward Collision, Lane Departure and Road Departure Collision Warning provides

information that can be used in the design of a BDWS. For instance, adding vibrations

in the steering wheel as extra signalling led to a better interpretation of the danger and

improved the performance of participants in an experiment with Lane Departure systems.
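The modality findings above can be summarised in a sketch that selects warning modalities from the situation. This is an illustration only: the 0.5 urgency threshold and the rule set are assumptions, not prescriptions from the cited studies.

```python
# Illustrative sketch combining the modality findings above: informing is
# largely visual, warning goes multi-sensory, hearing gains weight in poor
# visibility, and extra tones are avoided when the cabin is already noisy.
# The urgency threshold (0.5) and rule set are assumptions for illustration.
def select_modalities(urgency, poor_visibility=False, noisy_cabin=False):
    modalities = {"visual"}                # informing: largely visual
    if urgency >= 0.5:                     # warning stage: go multi-sensory
        modalities.add("haptic")           # e.g. seat or wheel vibration
        if not noisy_cabin:
            modalities.add("auditory")     # avoid competing with radio/phone
    if poor_visibility:
        modalities.add("auditory")         # hearing dominates in poor vision
    return modalities

print(select_modalities(0.2))                    # {'visual'}
print(select_modalities(0.8, noisy_cabin=True))  # visual plus haptic only
```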

10.2 The key points in this chapter

- The use of different modality signals in various stages of inform, warn and intervene

affect the reaction time of the driver. Multi-sensory signals give a faster reaction time

than single-modality signals.

- The gradual signalling of the situation sees modalities used in certain sequences. For

informing this is largely visual, for warning auditive and visual. Further research will

have to show what the role haptic feedback may have in improving the reaction time

of the driver.

- The way in which the different modalities are input and how the driver emotionally

reacts (like irritation) must be carefully examined.

- Little is known about the different forms of haptic feedback and the influence on the

reaction time in specific BDWS situations.


11 Type of feedback

This chapter looks at different alternative forms of feedback in comparison with

standard focal factual feedback: social, ambient/peripheral and subliminal.

11.1 Social feedback

Studies reveal that people perform various well-trained tasks better when under observation

– a phenomenon known in psychology as social facilitation (Zajonc, 1965; Bond & Titus,

1983; Guerin, 1993). The effect is even reinforced when the person being observed

expects his performance to be evaluated, known as evaluation apprehension (e.g. Seta & Seta,

1992). Cottrell et al. (1968) found that the effect of a co-driver on a driver is less when

the former is blindfolded. A person in the cabin of a truck therefore has an effect on

driving performance, and the presence of a virtual person can boost the persuasive

force (Vossen et al., 2009).

The study by Vossen et al. tested the effect of positive and negative social feedback

and compared these effects with more common factual feedback. A robot, a so-called

iCat, was used to give the feedback. The results reveal a greater effect from social

feedback compared to factual feedback. Negative feedback is also more effective than

positive feedback. Use can also be made of this in a BDWS. The system will then have

to contain a social element to give the impression that the driver is being observed. A

key concept here is the “agency” of the system, or the degree to which a system is

considered an autonomous social actor. Agency can be increased by, for instance, a

human voice instead of a bleep, or, instead of a flashing LED, an icon with a human

element such as eyes. Or the BDWS can be entirely represented by a virtual agent.

However, Roubroeks et al. (2009) show that social feedback can in some cases lead to

more psychological reactance to persuasion when compared to factual feedback.

According to the psychological reactance theory (Brehm, 1981), persuasive attempts

fail if they are regarded by people as an intrusion into autonomy and this is reinforced

when a social actor gives feedback instead of a neutral message. Whether a social actor

in the case of a BDWS also has a promoting effect cannot be stated with any certainty

because no research has been carried out in this specific application to the authors'

knowledge, nor can any statement be made on the occurrence of possible reactance.

These are subjects for possible future research.

11.2 Ambient/peripheral feedback

A second type of feedback is ambient feedback. According to Miller and Rich (2009)

ambient displays are designed not to be intrusive or irritating. Ishii and Ullmer (1997)

suggest that the use of ambient media like noise, light, air flow and water movement

have to act as interfaces in the background and periphery of human attention. They also

note that while ambient displays are often processed in the background, they can also

shift to the centre of attention. Ambient Technologies are in the “periphery of our

attention”, characterised by a “subconscious comprehension” and possibly encompass

“all our senses”.

Nikolic and Sarter (2001) have done a simulator study in which the effectiveness of

central feedback is compared with two ways of informing pilots with peripheral visual


feedback about unanticipated changes in the status of an automatic pilot. Central

feedback is feedback that has to be centrally processed, with attention and checked. In

other words, this feedback is not processed in the background but in the focus of the

attention of a user. Peripheral visual displays resulted in both a higher level of detection

and lower response times, without influencing the performance of a simultaneous visual

task more than existing feedback mechanisms. According to the authors, possible

applications include improved display design that supports effective attention allocation

in a variety of complex dynamic environments like aviation, process control and

healthcare. The nature of these environments is similar to that of a BDWS and so the

conclusions of this study can be applied to a possible application in a BDWS. Ham et

al. (2009) also show that in product initiated feedback systems ambient persuasion is

more effective and faster to process in high cognitive load than in numerical, central

feedback systems.

However, according to Miura (1985) a high task load results in higher response times

and a reduced UFOV (useful field of view). In the case of a high task load peripheral

visual stimuli are therefore not, or are only poorly, processed. It has even been

suggested that drivers actively reduce the UFOV to be able to better process foveal stimuli

(Miura, 1990). One conclusion of the report by Campbell et al. (2007) supports, by

contrast, the use of peripheral feedback, claiming that higher false alarm rates may be tolerated

if use is made of peripheral, non-intrusive visual feedback, something that a study by

Talmadge et al. (2000) of a LCW system (Lane Changing Warning) also found. In this

study visual feedback given outside the central field of view of the driver had a high

false alarm ratio (42 per hour), but drivers accepted this and it did not lead to further

irritation. When more intrusive (auditive) or focal feedback is used (as in Korse et al.,

2003) an excessive false alarm ratio may cause drivers to switch off the system and lead

to irritation.

Whether and how ambient feedback has to be applied in a BDWS cannot be stated with

any certainty in the reference literature but a certain level of, possibly urgency-adaptive,

ambient feedback does appear to have potential to inform and warn the driver in a

BDWS. How the attention is allocated, what the influence on false alarm acceptance is

and whether ambient feedback is equally applicable for informing and warning would

have to become apparent in future studies. A simple example of how this sort of

feedback could be used is to reduce the volume of the radio or slowly open a window.

That would provide feedback on the situation outside the vehicle (by the intrusion of

ambient noises) without heavily taxing the driver cognitively.
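The radio example above can be sketched as a simple mapping from estimated blind-spot risk to radio volume, letting ambient street noise through without an explicit message. The linear mapping is an assumption for illustration, not a design from the report.

```python
# Illustrative sketch of the ambient-feedback example above: scale the radio
# volume down as the estimated blind-spot risk rises, letting ambient street
# noise through without a cognitively demanding explicit message. The linear
# mapping is an assumption for illustration.
def radio_volume(base_volume, risk):
    """Reduce radio volume linearly with risk, clamped to [0, 1]."""
    risk = min(max(risk, 0.0), 1.0)
    return base_volume * (1.0 - risk)

print(radio_volume(10.0, 0.0))  # 10.0: no risk, radio untouched
print(radio_volume(10.0, 0.7))  # high risk: radio largely attenuated
```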

11.3 Subliminal feedback

Another type of feedback, subliminal feedback, is presented entirely subconsciously,

though little research has been done on the effectiveness of this kind of feedback in a

comparable BDWS system. And the results that do exist show only a very weak effect

(Ham et al, 2009). Application in a BDWS does not appear relevant.

11.4 The key points in this chapter

- The system must contain a social element to give the driver the impression he is

being observed or the system could be represented entirely by a virtual agent.

- Peripheral/ambient feedback has potential to inform and warn the driver.


- The use of subliminal feedback in a BDWS is not currently recommended.


12 A human view of a BDWS

The previous chapters derived possible recommendations for a BDWS from a

theoretical perspective. This chapter takes the opposite perspective – how a

person could assist the driver in practice, as estimated by the researchers. For each

concept the relationship with the theories discussed above is laid out.

Optimally a truck driver should have a co-driver that monitors the blind spot to: 1)

ascertain the extent of the danger, 2) ascertain whether the driver has seen the possible

danger, 3) adjust his warning to the urgency of the danger, 4) convey his warning such

that the driver accepts it and reacts and 5) to be more alert if the situation becomes

vague/more intense. An optimal BDWS should also be able to cover these points.

What is important to the design of a BDWS can be made clear by simply asking: what

would you do if you see that your driver threatens to collide with a cyclist?

12.1 Keeping a lookout

If danger threatens the co-driver will keep an eye on the driver. In many cases a person

will notice this and this fact alone may lead to better driving. Use can be made of this in

a BDWS. The system will then have to contain a social element to give the impression

that the driver is being observed. This could be a human voice instead of a bleep and an

icon with human characteristics instead of a light. See chapter 11 on social feedback.

12.2 More awareness of the (traffic) situation

A co-driver that trusts the driver will not be much occupied with the traffic situation but

if this is not the case, the co-driver will keep a good eye out for possible dangers to

warn the driver in good time. Applied to a BDWS this means that the sensitivity of the

system must be increased if environmental factors increase the risk of a blind spot

accident. This may be through tiredness or other thoughts of the driver, the driver being

distracted by the radio, a phone conversation or a navigation system, or if the road

situation prevents the driver from attending to the blind spot. An adaptive

BDWS is an option in that case. See chapter 3 on automation and chapter 6 on workload

management and distraction adaptation.
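The risk-adaptive sensitivity described above can be sketched as a threshold that drops as environmental risk factors accumulate. This is a hypothetical sketch: the base threshold, factor weights and floor are illustrative assumptions, not values from the report.

```python
# Hypothetical sketch of an adaptive BDWS sensitivity (section 12.2): the
# detection threshold drops as environmental risk factors accumulate. The
# base threshold, factor weights and floor are illustrative assumptions.
BASE_THRESHOLD = 0.8  # detection confidence required before signalling

RISK_FACTORS = {
    "driver_distracted": 0.2,       # radio, phone or navigation system
    "driver_tired": 0.2,
    "complex_road_situation": 0.1,  # attention drawn away from the blind spot
}

def adaptive_threshold(active_factors):
    """More risk means a more sensitive system (lower threshold, floored)."""
    reduction = sum(RISK_FACTORS[f] for f in active_factors)
    return max(0.3, BASE_THRESHOLD - reduction)

print(adaptive_threshold([]))                                     # 0.8
print(adaptive_threshold(["driver_tired", "driver_distracted"]))  # 0.4
```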

12.3 Sensitivity

A person is able to communicate in a subtle way and reveal his inner state as well as be

highly sensitive to such signals. It is not without reason that 80% of human

communication is non-verbal. Given the high workload of a truck driver it may be

important to consider more automatic/unconscious information processing mechanisms

because these require minimal attention and processing is faster in some cases. A

BDWS should, for instance, use more peripheral feedback, also known as ambient

feedback. See chapter 6 on attention and chapter 11 on ambient feedback.

12.4 Making the driver aware of remaining cool

A person is able to ascertain whether someone is concentrating on a certain task or not.

If a driver seems distracted his co-driver can help the driver focus his attention on what

he should be doing. A BDWS should also be able to do this. In this case an adaptive


BDWS not only influences the sensitivity of the system itself but also informs/warns the

driver if he does not have the desired attention. See chapter 6 on workload management.

12.5 Indicate possible dangers

If danger threatens a co-driver can indicate where the possible danger is by showing or

saying where the danger is. A BDWS can also do this by visual or auditive warning

signals from the direction of the danger or in the direction of the danger. See chapter 10

on modality.

12.6 Shout, gesture, nudge

The more imminent the danger the more intensity a co-driver can give to his warning or

choose the most effective modality. In a noisy environment a visual warning, for

instance, can be more effective because it does not compete with the same perceptive

subsystem. Adjusting the modality according to the situation would be a useful BDWS

function. If visual or auditive warning does not work, the co-driver can nudge the

driver. A BDWS can achieve the same effect by vibrating the seat of the driver. See

chapter 3 on control and chapter 6 on workload management and chapter 10 on

modality.

12.7 A tug on the steering wheel

A final remedy a co-driver can use is to take control himself by, for instance, tugging

the steering wheel. A BDWS could also simulate this through giving haptic feedback on

the steering wheel, for example. This would have to be a shock in the direction the

driver has to swerve, to the left in the event of a danger to the right. See chapter 3 on

control and chapter 10 on haptic feedback.

12.8 Key points in this chapter

A BDWS can make use of the way people interact in various ways. This chapter presents

suggestions to implement these in the design of a BDWS.

- BDWS must be able to assess the level of the danger

- Assess whether the driver has seen the possible danger

- Adjust the warning to the urgency of the danger

- Convey the warning such that the driver accepts it and reacts

- BDWS should be made more alert whenever the situation becomes

vague/more intensive

- Apply external human characteristics

- Use more peripheral/ambient feedback

- Apply risk adaptation

- Apply attention adaptation

- Adjust the modality to the urgency

- Show the warning from the direction of the danger

- Use haptic feedback on the steering wheel as a final remedy


13 The human-machine interface of existing blind spot detection systems

Whereas the previous report investigated the detection of objects in the blind spot using

cameras, radar and ultrasonic sound, here we look at the HMI of existing blind spot

detection systems. This list came about through an internet scan of car

manufacturers, which typically develop these blind spot detection systems for passenger cars.

The assistance system of General Motors warns of lane-changing if there is a road user

in the blind spot (see Figure 12). A visual signal warns the driver if there is a car in a

spot that is not visible in the door mirrors. A LED (Light Emitting Diode) lamp in the A

pillar or door mirror is activated on the relevant side of the car. The warning becomes

more intensive if the driver creates a potentially dangerous situation by turning on his

indicator on that side. The LED lamp starts to flash and the driver's seat vibrates on

that side. The system takes account of the speed of each car: the faster the overtaking

vehicle, the earlier the warning is given.

Figure 12 General Motors blind spot detection system

The Blind Spot Monitoring System of Chrysler indicates the presence of another

vehicle in the blind spot, and on which side, by visual symbols in the door mirrors

(Figure 13 A). No other modalities are used.

BMW has an Active Blindspot Detection that supports the driver by vibrations in the

steering wheel. A triangle in the base of the door mirror shows the presence of another

vehicle in the blind spot and the potential danger (Figure 13B). If the driver then also

indicates to change lane, the driver is given an additional warning by a flashing triangle

and vibrations in the steering wheel.


Figure 13 A: Chrysler Blind Spot Monitoring B: BMW Active Blindspot Detection

Volvo has a Blind Spot Information System (BLIS; Figure 14A) that works with

cameras in the door mirror. The driver receives a warning if a vehicle enters the blind

spot zone. Depending on the position of the vehicle the BLIS warning light comes on

left or right in the door pillar next to the door mirror. The controlled zone is 9.5 metres

long and 3 metres wide. BLIS is active at speeds in excess of 10 km/h. It warns the

driver of vehicles driving a maximum of 20 km/h slower or a maximum of 70 km/h

faster. The system is programmed to detect both cars and motorcycles, in daylight and

in the dark. According to Volvo it does not react to parked vehicles, cyclists, crash

barriers, street lighting or objects that resemble these. The cameras on the left and right

work independently of each other. BLIS can be switched off using a button on the

central console unit. The status is shown on the information display.

Figure 14 A: Volvo's Blind Spot Information System (BLIS) and B: Mercedes Blind Spot Assist

The Blind Spot Assist of Mercedes (Figure 14 B) uses six short-range radar sensors

whose range is the vicinity of the car and which are integrated into the front and rear

bumpers. They monitor the immediate vicinity of the rear and both sides of the car.

They literally see more than the driver because they also cover the blind spot (the oblique

area behind the car that is awkward to see via the door mirrors). If the Blind Spot Assist

detects another vehicle in the blind spot, a red warning signal appears in the glass of the

door mirror. The driver knows that he has to be more careful when changing lanes. If

the driver ignores the warning and turns on the indicator, the red warning symbol begins

to flash and a warning signal also sounds.

The technical solutions for the blind spot referred to in the report of Hoedemaeker et al.

(2010; appendix A) do not contain a clear description of the signalling to the driver and

are therefore not described further. Exceptions to this are systems based on video

images, whereby the driver can see part of the street view on a screen mounted on the

dashboard.

The systems described above are mainly geared to detecting other vehicles and not

cyclists, moped riders or pedestrians. Only Volvo indicates that the system also works

for motorcyclists. Interestingly, some of these systems opt for a step-by-step warning

and/or adjust the warning to the speed of the vehicle. A BDWS cannot, however, be

compared entirely to these systems because it is designed for truck drivers and concerns

mainly the safety of vulnerable road users. However, there is value in looking at how

carmakers give form to the HMI of these systems, with a tendency to opt for subtle visual

warnings located in or at the mirror combined, where appropriate, with warnings in

other modalities.


14 Discussion and conclusion

This report is based on theory since the theoretical frameworks enable concepts to be

generated for BDWS from different perspectives. An overview of the unanswered

theoretical questions is given in appendix B.

This chapter summarises the key implications of the theoretical preliminary studies for

BDWS and then describes different concepts for BDWS signalling that emanate from

this theory. The criteria established earlier for BDWS are evaluated according to the

findings in this report and the different technological phases of BDWS are discussed

along with the implications for signalling and trust. Finally, conclusions are drawn and

recommendations for field testing given.

14.1 Conclusions from theory

The current report proposes regarding BDWS as Persuasive Technology, and thereby

actually employing all the persuasive technologies (theory) that people use to influence

each other (e.g., reciprocity, scarcity, consensus – see also Cialdini). This makes the

three roles of persuasive technology relevant (tool, medium, social actor). Furthermore,

the influencing mechanisms that persuasive technology can utilise as identified by Fogg

(2004) are very relevant for the development of innovative BDWS (reduction,

tunnelling, tailoring, etc.). When they use interactive technologies people often react in

the same way as they do with living beings. This is painfully evident when users

become annoyed with their computer for not performing as expected. Interactive

systems can reinforce their presence as social actors by applying five different social

cues (Fogg, 2004): the right physical characteristics (like eyes); psychological cues (like

a certain look); language use; social dynamic cues (like imitating the driver);

assumption of social role (like that of an expert).

In general a BDWS must in the first instance intervene as little as possible, like a

system that suggests where cyclists may be present. But whenever the situation

becomes more urgent or dangerous, the system must act more urgently and decisively.

Theories and models on acceptance of technology are very suitable in analysing

acceptance of BDWS. More specifically in the case of a BDWS an underlying problem

may be the low computer self-efficacy of future users. According to Venkatesh and

Davis (1996) training the target group may in this case be more effective than a better

interface design to boost acceptance.

- An excessive ratio of false alarms provokes irritation.

- If a system is experienced as very useful, a higher degree of disagreeability is accepted.

- Repetitive auditive signals combined with regular false alarms lead to distraction and

are experienced as irritating.

- Campbell et al. (2002) and Dingus et al. (2005) also conclude that certain quantifiable

noise parameters like speed, volume, the number of repetitions and frequency can have

an effect on the irritation.

Signal detection theory (SDT) is also relevant for the analysis of BDWS. An important addition to previous warning concepts could be a two-phase warning. The advantages of single and two-phase warning systems are outlined in Table 1 of this chapter. Identifying

TNO report | TNO-DV 2010 C419 52 / 66

the relationship between SDT and other theories is essential for an effective BDWS. Our theoretical analysis mainly shows that an effective BDWS must produce no mishits and as few false alarms as possible.
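The trade-off that SDT describes can be made concrete with a small sketch (not part of the report; the detection counts below are hypothetical), computing the standard SDT measures from a logged confusion matrix of a warning system:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response criterion c for a warning system.

    d' expresses how well the system separates dangerous from safe
    situations; a negative c means a bias towards warning (more false
    alarms, fewer misses), which is the bias the analysis argues a
    BDWS needs.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical log: 98 detected conflicts, 2 missed conflicts,
# 40 false alarms in 1000 safe passages.
d_prime, criterion = sdt_measures(98, 2, 40, 960)
```

With these counts the system is highly sensitive (d' near 3.8) and slightly liberal (c below zero), i.e. it accepts some false alarms in order to avoid misses.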

The main conclusion with respect to the factor control (inform, warn, intervene) is the need to clearly separate informing, warning and intervening, because in the model for automation there are many stages where the line between informing and warning can overlap. The relationship between automation, trust and SDT is also significant, so the effect of one on the other has to be investigated. This analysis also reveals that adaptive automation is a form of control that offers possibilities to improve the relationship between driver and system, because the user is able to temporarily take over control from the system. An important final conclusion is that when more than one danger is detected, it must be clear to the driver which danger has priority.

As for the cognitive dimension, the conclusion is that an informative system can make visible those road users that are difficult or impossible for the driver to see, and so prevent a possible accident. The system can, however, go a step further by making a risk analysis of different environmental factors like visibility and traffic intensity. Here, too, a BDWS with workload management functions offers a solution, because resources can be freed up to allow partly assimilated rules to be followed. A BDWS can also act as a memory aid for assimilated rules by focusing the driver's attention on relevant aspects (like the mirrors) if it becomes evident that the driver is not complying with crucial rules (like forgetting to check a mirror).

Integrating a BDWS with navigation information is also highly recommended, because a warning can then be given very early – the moment a cyclist enters the blind spot while the navigation system indicates that the driver will turn right. In general, the timing of the warning signal in a BDWS must also be adjusted to the type of vehicle and the cargo of the truck.

This argues for an informing function in addition to a warning function for a BDWS, because it cannot simply be assumed that the existing aids (like mirrors) are adequately informative. Given the already high workload of the driver, it is particularly advisable to place BDWS signals in the driver's periphery and to incorporate distraction adaptation to suppress warnings and so reduce the frequency of false alarms. Measuring tiredness, stress and emotion in addition to a distraction-adaptive system can also be a very useful supplement to a BDWS.

In general, a BDWS can not only monitor the stimuli reaching the driver but also adjust its own feedback so that it is less disruptive or more salient, depending on the situation and its urgency.

The analysis from the perspective of the Interruption-Reaction-Comprehension model suggests that in the case of an informative BDWS it is not desirable for system notifications to excessively disrupt attention to driving. Guidelines for in-vehicle information systems indicate a preference for a minimal number and type of interactions with displays, limiting the degree of display change and the time that a display is visible. For a warning BDWS, however, a higher level of interruption is acceptable because in that case the avoidance of a possible blind spot accident is key. These analyses also reveal that a distinctive colour, form and movement are crucial, and account must be taken of this in a BDWS. More specifically, research has been done into several factors that influence the processing and comprehension of notifications, with the position, area and colour of the notification being important aspects.

The main aspect in determining timing in the design of a BDWS is the balance between two competing demands: the system has to warn late enough not to be seen as irritating or superfluous, but early enough for the driver to react adequately and for the warning to generate enough driver trust. More specifically, a potentially optimal BDWS should always warn at least 1.5 seconds before the critical moment.
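As an illustration (a sketch under assumed values, not a specification from the report), the 1.5-second rule can be turned into a simple trigger that fires when the predicted time to the critical moment drops to the driver's reaction budget:

```python
def should_warn(distance_m, closing_speed_ms,
                reaction_budget_s=1.5, system_margin_s=0.3):
    """Return True when the warning must be issued now.

    Assumes a constant closing speed towards the conflict point.
    The 1.5 s reaction budget comes from the text; the 0.3 s sensing
    and actuation margin is an assumed value for illustration.
    """
    if closing_speed_ms <= 0:
        return False  # not closing on the conflict point: no warning
    time_to_conflict_s = distance_m / closing_speed_ms
    return time_to_conflict_s <= reaction_budget_s + system_margin_s

should_warn(10.0, 4.0)  # 2.5 s to conflict: too early to warn
should_warn(6.0, 4.0)   # 1.5 s to conflict: warn now
```

A later warning than this would leave the driver less than his reaction budget; an earlier one risks being perceived as superfluous.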

Other keys to actual use of a BDWS (allocation of control) are trust and confidence, and their determinants (like overlap in social values and experience with the system, respectively). It is important to take account of over-reliance: the consequence of the user having too much trust in the efficacy of the system and so accepting system errors.

The trust that a driver has in a system largely depends on his mental model of that system, built up mainly during the first physical interactions with the BDWS – i.e., in the learning phase. The learning phase may comprise different components, like verbal information about the system, reading the manual and actually driving with the system switched on. It is during this period that the driver forms a picture of the system: its reliability, usability and efficiency. These judgements will vary at the beginning of the learning phase but will ultimately converge to a stable mental model of the BDWS (Boer & Hoedemaeker, 1998). Ideally, this learning process should occur quickly and result in an adequate mental model of the efficacy and limitations of the BDWS.

In general, greater transparency about the efficacy of a system is a way to let the user become familiar with the system and thus gain trust in how it works. If a user accepts the system, he trusts it more. This is why it is important that trust is influenced not only by using the BDWS (confidence) but also by familiarity and by learning to work with the system in a real situation (trust).

The use of signals through various modalities in the different stages of informing, warning and intervention influences the reaction time of the driver. Multisensory signals produce a faster reaction time than unisensory signals. Step-by-step signalling uses modalities in certain sequences: informing is mainly visual, while warning is auditory and visual. Furthermore, the manner of presentation of the different modalities and how the driver reacts emotionally (e.g., irritation) need more study. Finally, little is known about the different forms of haptic feedback and their influence on reaction time in specific BDWS situations.

Analysis of the literature concerning feedback reveals pros and cons for each type of feedback. Importantly, social feedback (approval or rejection by a social agent) seems to be most influential, so the system will have to contain a social element that gives the driver the impression he is being observed. Perceived social agency (the degree to which a device appears to reveal intentions) can be increased by a human voice rather than a beep, by an icon with a human element like eyes instead of a flashing LED, or by representing the BDWS entirely as a virtual agent. It is also important that a BDWS has a certain amount of ambient feedback potential – possibly urgency-adaptive – to inform and warn the driver.


14.2 BDWS concepts

Different concepts for the form that BDWS signalling could take have been drafted on the basis of theory. These concepts should not be regarded as standalone solutions but as concepts that can be combined and verified simultaneously.

Concept 1: Multi-stage warnings

A BDWS with three different warnings gives (1) an early warning that something may be amiss, (2) a later warning and ultimately (3) an urgent warning or even an intervention. This multi-stage warning can be supported by using different modalities, such as first an auditory signal, then auditory and haptic signals, and finally intervention. The advantage is that the driver can quickly check whether there is danger, and he constructs a better mental model of the system because it often informs but seldom warns. This is also good for calibrating trust. A disadvantage is that this concept gives more signals, and the driver must know the difference between informing and warning. The theoretical argumentation for this concept can be found in chapter 6 (signal detection theory), which presents the research on multi-stage warnings. Chapter 10 (modality dimension) shows that multi-stage warnings in a different modality may be equally effective. It is also relevant here to consider the type of feedback (chapter 11): social feedback, for example, creates a strong effect and is particularly suitable for the later phases of a warning.
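The inform/warn/intervene escalation of this concept can be sketched as a small decision rule (the inputs and thresholds below are illustrative assumptions, not values from the report; the 1.5 s figure echoes the timing chapter):

```python
from enum import Enum

class Stage(Enum):
    SILENT = 0
    INFORM = 1      # early, low-salience cue: a cyclist may be present
    WARN = 2        # urgent auditory/haptic warning
    INTERVENE = 3   # automatic braking or steering intervention

def select_stage(cyclist_detected, turn_indicated, time_to_conflict_s):
    """Map the detected blind-spot situation onto the warning stages."""
    if not cyclist_detected:
        return Stage.SILENT
    if not turn_indicated:
        return Stage.INFORM       # potential danger only: inform (often)
    if time_to_conflict_s > 1.5:
        return Stage.WARN         # conflict predicted: warn (seldom)
    return Stage.INTERVENE        # critical: intervene (very seldom)
```

Because the INFORM branch is reached far more often than WARN or INTERVENE, such a rule naturally produces the "often informs but seldom warns" behaviour the concept aims for.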

Concept 2: Dynamic warnings

This BDWS adjusts the sensitivity and/or timing of the warning to the traffic situation, to the speed/THW of the vehicle, or to the workload of the driver. The advantage of such a concept is probably improved accuracy of the system and thus more trust and acceptance by the driver. The disadvantage is possible lack of clarity about the criteria, making it difficult for the driver to construct a mental model. It is important here to take account of the workload of the driver (see chapter 8, cognitive dimension). Not only is the literature described in chapter 9 (timing) relevant, but also the information on modality (chapter 10): reaction times to stimuli from different modality variations.
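One way such dynamic adjustment could look (a sketch; the scaling factors are assumptions for illustration, not findings from the report) is to lengthen the warning lead time with vehicle speed and estimated driver workload:

```python
def dynamic_lead_time(base_lead_s, speed_ms, workload):
    """Warning lead time adjusted to the situation.

    base_lead_s: minimum lead time (e.g. the 1.5 s from chapter 9).
    speed_ms:    vehicle speed in m/s; a faster approach warns earlier.
    workload:    estimated driver workload in [0, 1]; a busier driver
                 needs more time, so the warning is advanced.
    """
    speed_factor = 1.0 + speed_ms / 30.0      # assumed linear scaling
    workload_factor = 1.0 + 0.5 * workload    # up to 50% extra time
    return base_lead_s * speed_factor * workload_factor

dynamic_lead_time(1.5, 0.0, 0.0)    # baseline: the minimum 1.5 s
dynamic_lead_time(1.5, 15.0, 0.8)   # fast and busy: warn much earlier
```

The monotonic scaling keeps the rule predictable, which matters for the driver's mental model noted as this concept's main risk.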

Concept 3: Self-adjustable warnings

This BDWS lets the truck driver determine when and how he wants to be warned by the system. The advantage of this concept is increased acceptance and perhaps also the realisation that the device does not always warn optimally. The downside is that if several drivers use the same truck, it may be awkward to keep having to readjust the settings. Self-adjustable warnings could be very good for driver trust in the system (see chapter 5) and for driver acceptance (see chapter 4). This concept can also optimally incorporate the cognitive characteristics of the driver, since the driver can adjust the system to his own characteristics and workload (see chapter 8).

Concept 4: Lateral warnings

This BDWS presents the warnings laterally, from the side of the possible conflict: to the right if there is a cyclist on the right-hand side of the cabin and to the left if the cyclist is on the left. The advantage of this concept is that the attention and eyes of the driver are immediately directed to the correct side. The disadvantage is that a blind spot also occurs at the front of the truck. This concept fits the characteristics of the driver described in chapter 8 (cognitive dimension). Driver expectations of the characteristics of the danger are much better served since the sound is more ecologically valid, so trust (chapter 5) and acceptance of the system (chapter 4) can be boosted. An important aspect is that lateral warnings can readily be a component of multimodal warnings, for instance a combination of auditory and haptic warnings. Such a lateral warning could thus be used in concepts 1, 2 or 3.


Concept 5: Warn the vulnerable road user

Passive: A marking or text on the side of the truck can increase the general awareness of vulnerable road users and of drivers about the blind spot (Figure 15), though the size and permanence of this effect are unknown.

Active: In a blind spot situation the vulnerable road user can also be warned by alerts (light/sound) outside the truck that divert his attention to a possibly dangerous situation. However, there are major objections to this concept: 1) it is unclear for the vulnerable road user – the absence of an auditory warning does not mean that the situation is safe, because the truck may not be equipped with a BDWS; 2) it undermines right of way – the cyclist going straight ahead has right of way over a right-turning truck, and that right of way must always be given by the truck driver; 3) it shifts part of the responsibility from the driver to the vulnerable road user; 4) it could provoke risk compensation (behavioural adaptation) by the driver, who might then be less attentive to vulnerable road users or simply take the turn assuming that cyclists will get out of the way; 5) if the number of preventive warnings by this system is large, the cyclist will lose trust in it and the effectiveness of the system will decline.

The advantages – such as the fact that cyclists and moped riders are more manoeuvrable and can stop more quickly than a truck (an estimation), and that the driver needs no extra system or screen in the cabin – do not outweigh the objections. Warning the vulnerable road user should only be a final measure, used when it is virtually certain that an accident will occur.

Figure 15 Example of marking and text (“watch out for blind spot”) on the outside of a truck to alert other

road users to the blind spot.

14.3 Ideas from the workshop

On 7 October 2010 a workshop was held with TU Eindhoven, TNO, SWOV and Robbert Verweij at the Ministry of Transport, Public Works and Water Management in The Hague. The workshop's aim was to discuss the theory chapters of this report and to generate a number of concepts. Trust (trust/confidence) and acceptance were much debated during the workshop, as was the importance of reliable detection by the BDWS, i.e. the ratio between false alarms and mishits. Especially the fact that the blind spot situation is actually very rare makes it difficult for the driver to construct an adequate mental model of the system and thus the right amount of trust (trust calibration). The so-called multi-stage concept could help in this respect: a system that first informs (often), then warns (seldom) and perhaps intervenes (very seldom) was discussed. This multi-stage warning does, of course, need to be technically feasible.

The importance of a BDWS not being a patronising system was also discussed, along with the need to look at how the system would be commercialised at the appropriate time. It must not be a system that makes experienced truck drivers feel limited or undervalued.

During the workshop SWOV sketched an interesting scenario in which not one but two or more cyclists are present in the blind spot. In that case the driver might incorrectly assume that the system is warning of the first cyclist (the only one he himself sees), while a second cyclist is still in the blind spot. The question is how often such a situation occurs, but it does show how important it may be for a BDWS design to give the driver an accurate world view and/or to inform or warn the driver of multiple objects in the blind spot. It is vital that the driver can verify the direction of the warnings. This complete world view could be generated by camera images, but it is better to avoid these so as not to increase the amount of information in the cabin too much (and thus workload and distraction).

The concept whereby the vulnerable road user is actively warned was discussed extensively; the outcome was that the drawbacks of such a system would be too great. Dynamic warnings that take account of driver workload and tasks were also considered interesting. However, the discussion was about whether a BDWS could estimate the workload of the driver itself, or whether that should be done by a separate system placed centrally between the different systems and driver tasks (Engström & Victor, 2009). Dynamic warnings may, for instance, adjust the warning volume to the noise level in the cabin (so that warnings sound louder if the radio is on) or even remove that noise (switch the radio off).

14.4 Criteria for BDWS

In addition to the criteria contained in a previous report (Hoedemaeker, 2010), this report generates new, more specific and comparable criteria:

1 It is recommended that a BDWS reliably detects the location, speed and direction of the vulnerable road user in the blind spot, so that the BDWS can estimate the danger of the situation and whether the situation may also become dangerous in the near future. The system can only predict this if information is known about the path the cyclist will take. These parameters are also important for the informative function of a BDWS as discussed in point 2 below. This is a technical challenge, but proper detection of the vulnerable road user is a prerequisite for properly warning the driver, and thus essential for the driver's trust in and acceptance of the BDWS.

2 A BDWS should both inform and warn. Because the detection of vulnerable road users is not (yet) entirely reliable, and false alarms and mishits have a major influence on trust in and acceptance of the system, the system must indicate every possible danger before the situation becomes critical (inform only). To this end a distinction can be made between at least two stages, but a multi-stage system is also possible.

3 The system may not increase the workload of the driver in the long term. The learning curve of using a new system will mean a temporary increase in workload. Because the workload of a truck driver in blind spot situations is already high, this learning curve must be as short as possible. This can be achieved by intensive training in the use and efficacy of a BDWS, which will also increase acceptance of false alarms and of the system as a whole.

4 An informative BDWS must present non-critical information peripherally so that it does not distract. This information may be relatively complex by nature because it will be processed subconsciously and automatically through assimilation over time. The prognosis is that the workload of the truck driver will increase somewhat while the BDWS is still in early development (option 3) and not yet integrated with the systems in the cabin.

5 A warning BDWS must show critical information centrally and draw the attention of the driver. The information must be presented in the right modality, at the right time and place and at the right level. It is vital that all information supports a rapid decision and action from the driver, that the driver can verify it, and that the system helps construct a correct world view (situational awareness). For quick processing, point 6 is important.

6 Warning, critical information may not be complex, though it may indicate the location of the danger in the warning signal itself: a central visual warning that points in the direction of the danger, an auditory warning from the direction of the danger, a haptic warning on the correct side of the seat, or a counterforce on the steering wheel away from the danger.

7 The informative and warning information/signals must be clearly distinguishable, with a consistent difference between the informative and the warning signal. This can be effected by using different modalities for the two types of signal.

8 A BDWS that warns must do so promptly, to give the driver time to react. Hogema (2010) used simulations to calculate, for different blind spot situations, the place and time at which signalling to the driver must occur to be able to brake in time and prevent an accident, given a certain reaction time of the truck driver. He concludes that in almost every calculated scenario the warning must occur before the turn to the right is made, so before a BDWS can actually be certain that a dangerous situation exists (the truck can always still go straight ahead).

9 Actively warning the vulnerable road user should only be a final measure, used when it is virtually certain that an accident will occur.
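Hogema's simulations (criterion 8) are not reproduced here, but the underlying relation can be sketched with elementary kinematics (the deceleration value is an assumption for illustration): the warning must come no later than the truck's stopping distance before the conflict point.

```python
def min_warning_distance(speed_ms, reaction_time_s, decel_ms2=4.0):
    """Minimum distance before the conflict point at which a warning
    must be given for the truck to brake to a standstill.

    stopping distance = reaction distance + braking distance
                      = v * t_react + v^2 / (2 * a)

    decel_ms2 is an assumed comfortable truck deceleration.
    """
    return speed_ms * reaction_time_s + speed_ms ** 2 / (2.0 * decel_ms2)

# At 10 m/s (36 km/h) with a 1.5 s reaction time the warning must come
# at least 27.5 m before the conflict point.
min_warning_distance(10.0, 1.5)
```

Because this distance grows quadratically with speed, at urban turning speeds the warning point quickly falls before the start of the turn, consistent with Hogema's conclusion.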

14.5 An ideal system versus a practical first version

The technical maturity of the BDWS plays a key role in the degree of reliability of the detection of possible dangers in the blind spot. Moreover, the reliability of the detection determines how the driver is warned and with how many false alarms and mishits. As described in this report, this is important for the driver's trust in and acceptance of the system. A successful BDWS must warn the driver in critical situations such that he can react as fast as possible and trust "blindly" in the detection of the system. In this concluding chapter we therefore list the implications of three options in which the technology is at different phases of development:

1) the optimal BDWS, in which the technology is so advanced that no false warnings are given;

2) a BDWS that still produces false alarms and mishits but is fully integrated with the other systems in the dashboard;

3) the BDWS as a retro-fit system that can be developed in the near future as an after-market system, with false alarms and mishits.

Option 1: perfect detection by the BDWS

In the ideal case of the BDWS always giving correct warnings, the trust of the driver in the system will increase. This means that the reaction times of the driver to a warning are short, provided that the HMI is comprehensible and intuitive. A perfectly effective BDWS could also intervene in the driving task and independently brake or swerve in the most critical situations. If the blind spot situation is detected in time within the "operational envelope" (Hogema, 2010), all blind spot accidents can be avoided by the actions of the driver and automatic intervention. This option is not currently feasible since a perfectly effective BDWS technology is not (yet) possible.

Option 2: not yet perfect detection, but integration of the BDWS in the cabin

Integration of the BDWS in the cabin and with other systems in the truck has significant benefits. For example, the system reduces workload because information and warnings from other systems are held back at moments when the BDWS informs or warns – telephone conversations can be suppressed or the radio volume lowered so that the driver can focus his attention fully on the blind spot situation. This option is feasible, but it is estimated that full integration of systems and a central workload allocator is only possible if undertaken by an ambitious OEM.

Option 3: not yet perfect detection, and the BDWS as retro-fit

This option is feasible in the near future and has the advantage that it can be quickly implemented in existing vehicle fleets, although the advantages of options 1 and 2 will not be immediately achievable: there will be no perfect detection and no integration with other systems and information in the truck. The main issues with this system are: What is the ratio between the number of errors the system makes (mishits) and the quantity of false alarms? What is the timing of the informative and warning signals? Which modality can best carry these signals?

14.6 BDWS field test

In this report we have elaborated different BDWS concepts that guide the design of the HMI for a BDWS, based on what is known about the information processing and the trust of the driver. Some design choices cannot yet be made on the basis of theory and can be better answered in a field test.

In a subsequent BDWS field test study, one or more BDWS warning concepts (the human-machine interface) can be developed as prototypes. An initial evaluation of which concepts draw the driver's attention to the blind spot situation at the right moment without increasing workload can be carried out in a controlled driving simulator experiment. The concepts can then be compared under the same conditions, and the warning concepts will not depend on the quality of the detection.

At the same time the quality of detection can be improved in a field test such that

danger in the blind spot is reliably and promptly recognised. A combination of different

already developed sensors might offer a solution here.

A subsequent step for a BDWS field test is to evaluate the successful and proven warning concepts on public roads, using trucks in which these interfaces are built in along with the detection sensors tested earlier. The level of trust in and acceptance of this type of BDWS can then be tested, along with the workload of the driver. A technical experiment with trucks will have to show the reliability and range of the detection of dangers in the blind spot.


15 References

Abe, G., & Richardson, J. (2005). The influence of alarm timing on braking response and driver trust in low speed driving. Safety Science, 43, 639-654.

Abe, G., & Richardson, J. (2006). Alarm timing, trust and driver expectation for forward collision warning systems. Applied Ergonomics, 37, 577-586.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behaviour. Englewood Cliffs, NJ: Prentice-Hall.

Alais, D., & Burr, D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14, 257-262.

Ballas, J. A., Heitmeyer, C. L., & Perez, M. A. (1992). Evaluating two aspects of direct manipulation in advanced cockpits. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '92), Monterey, CA, 127-134.

Bargh, J. A., Chaiken, S., Govender, R., & Pratto, F. (1992). The generality of the

automatic evaluation effect. Journal of Personality and Social Psychology, 62, 893–

912.

Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In R. S. Wyer Jr. & T. K. Srull (Eds.), Handbook of Social Cognition (2nd ed., pp. 1-40). Hillsdale, NJ: Lawrence Erlbaum Associates.

Bartram, L. (1998). Enhancing visualizations with motion. In Proceedings of the IEEE Symposium on Information Visualization (InfoVis '98), Raleigh, NC, 13-16.

Bartram, L. R. (2001). Enhancing information visualization with motion. Ph.D. thesis,

Simon Fraser University, Canada.

Bartram, L., Ware, C., & Calvert, T. (2001). Moving icons: Detection and distraction. In Proceedings of the IFIP TC.13 International Conference on Human-Computer Interaction (INTERACT 2001), Tokyo, Japan.

Bauer, M., Oostenveld, R., & Fries, P. (2009). Tactile stimulation accelerates behavioral responses to visual stimuli through enhancement of occipital gamma-band activity. Vision Research, 49, 931-942.

Boer, E. R., & Hoedemaeker, M. (1998). Modeling driver behavior with different degrees

of automation: A hierarchical decision framework of interacting mental models.

Proceedings 17th European Annual Conference on Human Decision Making and

Manual Control, Valenciennes, France, December 14-16 1998. pp 63-72.

Bond, C. F., & Titus, L. J. (1983). Social facilitation: A meta-analysis of 241 studies. Psychological Bulletin, 94, 265-292.

Brehm, S., & Brehm, J. (1981). Psychological reactance: A theory of freedom and control. New York: Academic Press.

Brouwer, R. F., & Hoedemaeker, D. M. (2006). Driver support and information

systems: Experiments on learning, appropriation, and effects of adaptiveness. AIDE.


Campbell, J. L., Richard, C. M., Brown, J. L., & McCallum, M. (2007). Crash warning system interfaces: human factors insights and lessons learned. National Highway Traffic Safety Administration, Washington, D.C.

Campbell, J. L., Richman, J. B., Carney, C., & Lee, J. D. (2002). In-vehicle display icons and other information elements. Task F: Final in-vehicle symbol guidelines. Federal Highway Administration, Washington, DC.

Chewar, C. M., McCrickard, D. S., Ndiwalana, A., North, C., Pryor, J., & Tessendorf, D. (2002). Secondary task display attributes: Optimizing visualizations for cognitive task suitability and interference avoidance. In Proceedings of the Symposium on Data Visualization (VisSym '02). Eurographics Association, Barcelona, Spain, 165-171.

Chouvardas, V., Miliou, A., & Hatalis, M. (2008). Tactile displays: overview and recent advances. Displays, 29, 185-194.

Connekt, Buck Consultants & TNO (2009). Anti-ongevallensystemen voor vrachtauto's: grootschalige praktijkproef met het oog op vermindering ongevallen, meer veiligheid en een positief effect op doorstroming. Connekt, Delft.

Crawford, A. (1961). Fatigue and driving. Ergonomics, 4, 143-154.

Cvetkovich, G., & Löfstedt, R. (Eds) (1999). Social trust and the management of risk.

London: Earthscan.

Cvetkovich, G., & Nakayachi, K. (2007). Trust in a high-concern risk controversy: A comparison of three concepts. Journal of Risk Research, 2, 223-237.

Cvetkovich, G., Siegrist, M., Murray, R., & Tragesser, S. (2002). New information and

social trust: Asymmetry and perseverance of attributions about hazard managers. Risk

Analysis, 22(2), 359–367.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: a comparison of two theoretical models. Management Science, 35(8), 982-1003.

Davis, F. D. (1986). A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Doctoral dissertation, MIT Sloan School of Management, Cambridge, MA.

de Waard, D. (1996). The measurement of drivers' mental workload. Traffic Research Centre, Groningen: University of Groningen.

Dijksterhuis, A., & Aarts, H. (2003). On wildebeests and humans: The preferential detection of negative stimuli. Psychological Science, 14, 14-18.

Dinges, D., Maislin, G., Krueger, G., Brewster, R., & Carroll, R. (2005). Pilot Test of Fatigue Management Technologies (RN-FMCSA-RT-05-002). Washington, DC: Federal Motor Carrier Safety Administration.

Dingus, T. A., Klauer, S. G., Neale, V. L., et al. (2006). The 100-car naturalistic driving study, Phase II – results of the 100-car field experiment. Washington, D.C.: National Highway Traffic Safety Administration.


Dixon, S., Wickens, C., & McCarley, J. (2007). On the independence of compliance and reliance: Are automation false alarms worse than misses? Human Factors, 49(4).

Driver, J., & Spence, C. (2004). Crossmodal spatial attention: Evidence from human

performance. In Attention and Performance XV: Conscious and nonconscious

information processing (pp. 311-331). Oxford: Oxford University Press.

Earle, T.C. & Cvetkovich, G. (1996). Social Trust: Toward a Cosmopolitan Society.

Westport, CT: Praeger.

Eijk, R. van (2008). Audio-visual synchrony perception. Eindhoven: Technische Universiteit Eindhoven.

Engström, J., & Victor, T. W. (2009). Real-time distraction countermeasures. In M. A.

Regan, Driver distraction: Theory, Effects, and mitigation (pp. 465-483). Boca Raton:

CRC Press.

Engström, J., Arfwidsson, J., Amditis, A., Andreone, L., Bengler, K., Cacciabue, P. C., Janssen, W., Kussman, H., & Nathan, F. (2006). Towards the automotive HMI of the future: Mid-term results of the AIDE project. In J. Valldorf & W. Gessner (Eds.), Advanced Microsystems for Automotive Applications 2006. Berlin: Springer.

Enns, J. T., & Rensink, R. A. (1991). Preattentive recovery of three-dimensional orientation from line drawings. Psychological Review, 98, 335-351.

Epley, N., Akalis, S., Waytz, A., & Cacioppo, J. (2008). Creating social connection through inferential reproduction: Loneliness and perceived agency in gadgets, gods and greyhounds. Psychological Science, 19, 114-120.

Fereidunian, A., Lucas, C., Lesani, H., Lehtonen, M., & Nordman, M. (2007). Challenges in implementation of human-automation interaction models. 15th Mediterranean Conference on Control & Automation, Athens, Greece.

Fogg, B. (2003). Persuasive Technology: Using Computers to Change What We Think and Do. San Francisco: Morgan Kaufmann.

Green, M. (2000). How long does it take to stop? Methodological analysis of driver

perception-brake times. Transportation Human Factors, 2(3), 195-216.

Green, P. (1999). The 15-second rule for driver information systems. In Proceedings of

the ITS America Ninth Annual Meeting. Washington, DC.

Guerin, B. (1993). Social facilitation. Cambridge, UK: Cambridge University Press.

Ham, J., Midden, C.J.H., & Beute, F. (2009). Unconscious persuasion by ambient persuasive technology: Evidence for the effectiveness of subliminal feedback. Conference Proceedings of Artificial Intelligence and Simulation of Behaviour, Edinburgh, UK.

Hancock, P.A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems (IVHS). Journal of Safety Research, 23, 181-198.


Healey, C. G., & Enns, J. T. (1999). Large datasets at a glance: Combining textures and colors in scientific visualization. IEEE Transactions on Visualization and Computer Graphics, 5(2), 145-167.

Healey, C. G., Booth, K. S., & Enns, J. T. (1996). High-speed visual estimation using preattentive processing. ACM Transactions on Computer-Human Interaction, 3(2), 107-135.

Heijden, A.H.C. van der (1992). Selective attention in vision. London: Routledge.

Ho, C., & Spence, C. (2008). The Multisensory Driver: Implications for Ergonomic Car Interface Design. Aldershot, Hampshire: Ashgate Publishing Limited.

Ho, C., Reed, N., & Spence, C. (2007). Multisensory In-Car Warning Signals for

Collision Avoidance. Human Factors, 49, 1107-1114.

Hoedemaeker, D.M., Doumen, M., De Goede, M., Hogema, J.H., Brouwer, R.F.T., & Wennemers, A.S. (2010). Modelopzet voor Dode hoek Detectie en Signalerings Systemen (DDSS) (TNO-DV 2010 C150). Soesterberg, The Netherlands: TNO Human Factors.

Hogema, J.H. (2010). Dode hoek Detectie en Signalerings Systemen: Simulaties (in press, October 2010). Soesterberg: TNO Human Factors.

Horrey, W., & Wickens, C. (2004). Driving and side task performance: The effects of display clutter, separation and modality. Human Factors, 46(4), 611-624.

IJsselsteijn, W.A., Kort, Y.A.W. de, Midden, C.J.H., Eggen, J.H., & Hoven, E.A.W.H.

van den (2006). Persuasive technology for human well-being: Setting the scene. In W.

A. IJsselsteijn, Y. A. W. de Kort, C. J. H. Midden, B. Eggen, & E. A. van den Hoven

(Eds.), Proceedings of the First International Conference on Persuasive Technology for

Human Well-Being, Berlin: Springer Publishers, 1-5.

Ishii, H., & Ullmer, B. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. In Proceedings of CHI '97 (pp. 234-241). ACM Press.

Janssen, W. H., Alm, H., Michon, J.A., & Smiley, A. (1993). Driver support. In J.A. Michon (Ed.), Generic Intelligent Driver Support. London: Taylor & Francis.

Katteler, H. (2003). Acceptance of lane departure warning assistance (LDWA) systems: Final report. Soesterberg: TNO.

Leser, H., Icke, S., & Komoll, O. (2006). Right-turning trucks and cyclists. Paper presented at the Symposium Safety of Commercial Vehicles, 12-13 October, Neumünster.

Kiefer, R., LeBlanc, D., Palmer, M., Salinger, J., Deering, R., & Shulman, M. (1999). Development and validation of functional definitions and evaluation procedures for collision warning/avoidance systems. Washington, DC: National Highway Traffic Safety Administration.

Korse, M.J., Schermers, G., Radewalt, N.M.D., de Hoog, A., & Alkim, T. (2003). Op koers!? Resultaten van de proef met het Lane Departure Warning Assistant systeem. Den Haag: Adviesdienst Verkeer en Vervoer.

Lee, J. D. (2000). Can collision warning systems mitigate distraction due to in-vehicle

devices? Proceedings of the internet forum on the safety of driver distraction when using in-vehicle technologies .


Lee, J., & Moray, N. (1992). Trust and the allocation of function in the control of automatic systems. Ergonomics, 35, 1243-1270.

Maltz, M., & Shinar, D. (2003). New alternative methods in analyzing human behavior

in cued target acquisition. Human Factors, 45, 281–295.

McBride, A., & Ntuen, C. (1997). The effects of multimodal display aids on human performance. Computers & Industrial Engineering, 197-200.

McGuirl, J., & Sarter, N. (n.d.). Supporting trust calibration in automated decision aids through the presentation of system-generated, dynamic confidence information. Columbus, OH: Institute for Ergonomics, The Ohio State University.

Miller, T., & Rich, P. (2008) ADAPT - audience design of ambient persuasive

technology. Mentored Advanced Project (MAP) Report, August 2008.

Milosevic, S. (1997). Drivers' fatigue studies. Ergonomics, 40(3), 381-389.

Miura, T. (1985). Coping with situational demands: A study of eye movements and peripheral vision performance. In A. Gale et al. (Eds.), Vision in Vehicles, Volume 1 (pp. 205-216). Nottingham: North-Holland.

Miura, T. (1990). Active function of eye movement and useful field of view in a realistic setting. In R. Groner et al. (Eds.), From Eye to Mind: Information Acquisition in Perception, Action, and Reading. Amsterdam: North-Holland.

Morris, M.G., & Turner, J.M. (2001). Assessing users' subjective quality of experience with the World Wide Web: An exploratory examination of temporal changes in technology acceptance. International Journal of Human-Computer Studies, 54, 877-901.

Muir, B., & Moray, N. (1996). Trust in automation. Part II: Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429-460.

Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press.

Nikolic, M., & Sarter, N. (2001). Peripheral visual feedback: A powerful means of supporting effective attention allocation in event-driven, data-rich environments. Human Factors, 43, 30-38.

Opperud, A., Jeftic, Z., & Jarlengrip. (2005). Real-time adaptation of advanced driver assistance systems to driver state. Proceedings of the 2005 International Truck and Bus Safety and Security Symposium.

Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems. Warminster, PA: Naval Air Warfare Center.

Parasuraman, R., Singh, I., & Molloy, R. (1997). Automation-induced monitoring inefficiency: Role of display location. International Journal of Human-Computer Studies, 46, 17-30.

Parasuraman, R., Wickens, C., & Sheridan, T. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30(3), 286-297.


Reeves, B., & Nass, C. (1996). The Media Equation. Cambridge: Cambridge University Press.

Roland, J., Horst, D., Paul, A., Blanco, R., & Di Stasi, L. (2007). Adaptive Integrated

Driver-vehicle interface (AIDE): User Evaluations of Adaptive Warning strategies. IST.

Roubroeks, M., Midden, C., & Ham, J. (2009). Does It Make a Difference Who Says It?

Exploring the Role of a Social Agent for Psychological Reactance. Conference proceedings of Persuasive 2009, Claremont, USA.

Sarter, N., & Schroeder, B. (2001). Supporting decision making and action selection

under time pressure and uncertainty: The case of in-flight icing. Human Factors Vol. 43, 573-583.

Seta, C. E., & Seta, J. J. (1992). Increments and decrements in mean arterial pressure as

a function of audience composition: An averaging and summation analysis. Personality

and Social Psychology Bulletin, 18, 173–181.

Sheridan, T. (1991). Human factors of driver-vehicle interaction in the IVHS environment (Tech. Rep. DOT HS 807 837). Springfield, VA: National Technical Information Service.

Smiley, A. (2005). What is Distraction? International Conference on Distracted Driving.

Spence, C., & Squire, S. (2003). Multisensory integration: Maintaining the perception of synchrony. Current Biology, 13, R519-R521.

Takemoto, M., Kosaka, H., Noda, M., Nishitani, H., Uechi, M., & Sasaki, K. (2007). What causes dangerous driving behaviors when entering a non-signalized intersection? Extracting inner factors and constructing a driver model. Proceedings of the 2007 IEEE Intelligent Vehicles Symposium.

Talmadge, S., Chu, R., Eberhard, C., Jordan, K., and Moffa, P. (2000). Development of

performance specifications for collision avoidance systems for lane change crashes.

National Highway Traffic Safety Administration, Washington D.C..

Transport Canada (2009). Comparison of low speed backing aids: final project report. Transport Canada, Ottawa.

Tufano, D., Knee, H., & Spelt, P. (1996). In-vehicle signing functions and systems concepts. In Proceedings of the 29th International Symposium on Automotive Technology and Automation (ISATA), Dedicated Conference on Global Deployment of Advanced Transportation Telematics/ITS (pp. 97-104). Florence, Italy.

Venkatesh, V., & Davis, F.D. (1996). A model of the antecedents of perceived ease of use: Development and test. Decision Sciences, 27(3), 451-481.

Vossen, S., Ham, J.R.C., & Midden, C.J.H. (2009). Social influence of a persuasive agent: The role of agent embodiment and evaluative feedback. In S. Chatterjee et al. (Eds.), Proceedings of the 4th International Conference on Persuasive Technology 2009. New York: ACM.

Vries, P. de, Midden, C., & Bouwhuis, D. (2003). The effects of errors on system trust, self-confidence, and the allocation of control in route planning. International Journal of Human-Computer Studies, 58, 719-735.

Wickens, C., & Parasuraman, R. (2008). Humans: Still vital after all these years of automation. Human Factors, 50(3), 511-520.


Wickens, C., Mavor, A., Parasuraman, R., & McGee, J. (1998). The future of air traffic

control: Human operators and Automation. Washington, DC: National Academy Press.

Wickens, T. (2002). Elementary Signal Detection Theory. Oxford; New York: Oxford

University Press.

Wierwille, W., et al. (1992). Heavy vehicle driver workload assessment. Task 4: Review of workload and related research. Virginia: NHTSA.

Wisneski, C., Ishii, H., Dahley, A., Gorbet, M.G., Brave, S., Ullmer, B., & Yarin, P. (1998). Ambient displays. In N.A. Streitz, S. Konomi, & H.-J. Burkhardt (Eds.), CoBuild 1998 (LNCS 1370). Springer.

Yeh, M., Wickens, C., & Seagull, F. (1999). Target cueing in visual search: The effects of conformality and display location on the allocation of visual attention. Human Factors, 41, 524-542.

Yoblick, D., & Salvendy, G. (2007). Influence of frequency on the estimation of time for auditory, visual, and tactile modalities: The kappa effect. Journal of Environmental Psychology, 86(2), 157-164.

Young, K., Regan, M., & Hammer, M. (2003). Driver distraction: A review of the literature. Monash University Accident Research Centre.

Zaalberg, R., Midden, C., Meijnders, A., & McCalley, T. (2009). Prevention, adaptation, and threat denial: Flooding experiences in the Netherlands. Risk Analysis, 29, 1759-1778.

Zajonc, R. B. (1965). Social facilitation. Science, 149, 269-274.

Zoutendijk, A., Vonk, T., et al. (2003). Implementing multiple intelligent services in an intelligent vehicle with workload aware HMI. Proceedings of the ITS World Congress, Madrid.


A HMI checklist

The following checklist was made for the EU project CVIS, based on the sources described in the guidelines section of this report, and should be used as a guideline for the HMI to help identify likely problem areas in a design. Because alternative one is designed for maximal effectiveness, some of the guidelines are not (yet) fully applied. Compliance with other guidelines, e.g. the amount of visual and auditory distraction while driving, still has to be tested as the project progresses. Only the relevant items of the guidelines are listed here; for the complete list we refer to Pauwelussen & Hoedemaeker (2008).

General

1. The system should not give rise to potentially hazardous behaviour by the driver or

other road users.

2. The system should not distract or visually entertain the driver.

3. No part of the system should obstruct the driver's view of the road scene.

4. Information relevant to the driving task should be timely and accurate.

5. Information which has the highest safety relevance should be given priority.

6. Systems providing non-safety related dynamic visual information should be capable

of being switched into a mode where that information is not provided to the driver.

7. Information should be presented to the driver about current status, and any

malfunction within the system that is likely to have an impact on safety.

8. In the event of a partial or total failure of the system, the vehicle should remain

controllable, or at least should be capable of being brought to a halt in a safe

manner.

9. The system should not provide the driver with information that could be illegal or could, in aggregate, lead to an illegal action.

Display Use

1. Visual displays should be positioned as close as practical to the driver's normal line of sight.

2. Visual displays should be designed and installed to avoid glare and reflections.

3. Visual messages should be designed such that the driver is able to assimilate the

relevant information with a few glances, which are brief enough not to adversely

affect driving.

4. Frequent short glances at the display are better than occasional long ones.

5. Because people are better at noticing unexpected appearances than at noticing that something expected is missing, it is advised to use signals and messages to inform the user that something was not successful or not correct. No signal or message then means that everything is okay.

Display lay-out

1. The minimum height of characters and text should be 6.4 mm, assuming a viewing distance of 700-750 mm. According to ISO standard 9241-3, the character height should be at least 16 arc min.

2. No more than 5 different and well distinguishable colours should be used. Advised

colours are: green, yellow, red, black and white. Advised combinations are: yellow

on red, black on green and green on black. The use of the colour blue should be

avoided.


3. Attention should be paid to the association with colours: red for danger or alarm,

yellow for take care and green for safe.
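Guideline 1 of the display lay-out list mixes a millimetre minimum with an angular (arc min) minimum; the two are related through the viewing distance. A minimal sketch of the conversion (the 700-750 mm distances and the 16 arc min floor are taken from the guideline; the function name is our illustration):

```python
import math

def char_height_mm(view_dist_mm: float, arc_min: float) -> float:
    """Height of a character subtending `arc_min` minutes of arc at `view_dist_mm`."""
    theta_rad = math.radians(arc_min / 60.0)
    return 2.0 * view_dist_mm * math.tan(theta_rad / 2.0)

# ISO 9241-3 floor of 16 arc min at the stated viewing distances:
for d in (700.0, 750.0):
    print(f"{d:.0f} mm -> {char_height_mm(d, 16.0):.2f} mm")
```

At 700-750 mm the 16 arc min floor corresponds to roughly 3.3-3.5 mm, so the 6.4 mm minimum in the guideline comfortably exceeds the ISO requirement.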

Audio

1. The messages should have consistent and simple vocabulary.

2. If situations arise in which multiple auditory messages have to be presented at the

same time, these messages should not be presented concurrently, but the priority of

each message should be taken in account.

3. A message concerning the safety of the driver or other people, and requiring

immediate action of the driver, should not be presented exclusively by auditory

means, but should also be presented using another sensory channel.

4. System generated sounds, with sound levels that cannot be controlled by the driver,

should not mask audible warnings from within the vehicle or the outside.

5. The driver should have control of auditory information where there is likelihood of

distraction.

6. It is desirable that the signal levels adapt automatically to the frequency spectrum

of the background noise.

7. It is important to make sure that there is no confusion among the various signals in

the vehicle‟s auditory repertoire. When new auditory signals are proposed or

introduced in a vehicle, it should be verified that it will not be confused with

existing signals, particularly if this signal is a safety critical message.

8. Based on the background noise in cars and the audibility needed, a loudness range

between 50 dB(A) and 90 dB(A) should be used. Signals above 90 dB(A) should be

avoided.

9. The recommended frequency range for in-vehicle speech signals is 200 Hz to 8,000 Hz. For tonal signals the recommended range lies between 400 and 4,000 Hz.
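Audio guidelines 6 and 8 together suggest a simple adaptation rule: track the background noise and keep the warning a fixed margin above it, clamped to the recommended 50-90 dB(A) band. A minimal sketch, assuming a 10 dB(A) margin (the margin value and function name are our illustration, not part of the guidelines):

```python
def warning_level_dba(background_dba: float, margin_db: float = 10.0) -> float:
    """Warning loudness: background + margin, clamped to the 50-90 dB(A)
    band recommended above. The 10 dB(A) margin is an assumption."""
    return min(90.0, max(50.0, background_dba + margin_db))

print(warning_level_dba(40.0))  # quiet cab: clamped up to the 50 dB(A) floor
print(warning_level_dba(70.0))  # typical driving noise: background + 10
print(warning_level_dba(85.0))  # loud cab: clamped at the 90 dB(A) ceiling
```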


B Theory-based issues

Chapter 3 – Persuasive Technology: Future research

-A central issue for persuasive technologies, and in this case the BDWS, is how to optimise the allocation of control, given the false-alarm rate of the detection system and the willingness of the user to cede control to the system.

Chapter 4 – Technology Acceptance: Future research

-Future research that looks at the underlying factors of acceptance of a BDWS, using

proven technology acceptance theories, can provide valuable insight into the optimal

design of a BDWS.

Chapter 5 – Signal detection theory: Future research

To successfully implement Signal Detection Theory (SDT) into a BDWS further

research is needed into the precise relationship between SDT, trust and the control

dimension. Further studies on false alarms and mishits in detection systems could

concern the relationship between trust in the system and the extent of false alarms,

specifically in BDWS for trucks/buses. Are different SDT values required for the

various warning stages, for instance? While there is plenty in the literature about

applications in different areas and corresponding values, there is nothing on the specific

BDWS and corresponding values for detecting the danger. This concerns an initial

phase in the study of the relationships between SDT and other theories as discussed in

this report, with lab research being a possible First step in an improved BDWS.
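The SDT quantities referred to here can be computed directly from a system's detection log. A minimal sketch of standard equal-variance SDT (the log-linear correction and the function name are our choices, not taken from this report):

```python
from statistics import NormalDist

def sdt_measures(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    """Sensitivity d' and decision criterion c from a 2x2 outcome table."""
    n = NormalDist()
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = n.inv_cdf(hit_rate), n.inv_cdf(fa_rate)
    d_prime = z_hit - z_fa              # higher = better discrimination
    criterion = -(z_hit + z_fa) / 2.0   # negative = liberal (warn-prone) bias
    return d_prime, criterion
```

For a hypothetical BDWS log with 95 hits, 5 misses, 10 false alarms and 90 correct rejections, this yields d' of roughly 2.9 with a slightly liberal criterion; a system tuned toward fewer false alarms would show a more conservative criterion at the cost of more mishits.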

Chapter 6 – Control dimension - inform/warn/intervene: Future research

Automation also has significant relationships with the other aspects, though little or no research has been done into the influence of automation on the user, the trust the user has, or the degree of detection with which the user recognises a danger. A system can have several functions, such as those given in the various stages of automation. More research is recommended into how automation is accepted and the extent to which this influences the decisions taken by the driver, trust in the system, and the degree of detection programmed into the system.

The control dimension can be distinctive if a control structure is designed that takes account of the various stages and levels of control, better identifies the relationships with the other aspects, and gives the system better transparency. In that case it can improve the efficacy of the system by making clear in which areas the system must perform specific actions and what consequences that has for the user.

Chapter 7 – Cognitive dimension: Future research

-Is it possible to make a reliable assessment of whether the driver has seen the cyclist?

Chapter 8 – Interruption Reaction Comprehension model: Future research

-For future elaboration of BDWS, the pertinent concepts from the information processing model (Wickens and Hollands, 2000) should be related to the concepts from the OWI model. Subsequently, the concepts can be identified that are most relevant to the different BDWS systems.

Chapter 10 – Trust dimension: Future research


-How do you regain trust in the system if the system has made an error and the trust is

damaged?

Furthermore, the trust that a user has in the system varies, not only among different users but also between situations and with the experiences the user has had with the system. If there is full trust in the system, the user is tolerant of it, but excessive trust can itself be a danger if the user does not react when the system makes an error. A key aspect here is not only to look at how trust manifests itself during use, but also to prepare the user to work with a system he must trust. Training can give the user insight into the system, how it works, and particularly what is possible and what to look out for during use.

An important addition for subsequent research, in a lab for example, is to investigate how a user can regain trust in the system after it has given a false alarm. A driver will continue to use the system in his job, so he has no possibility of escaping from working with such a system. There is also little information on how trust develops across the different stages of automation in combination with varying false-alarm rates.

Chapter 11 – Modality dimension / Human Machine Interface: Future research

What is lacking is information on what multisensory signalling does in a realistic environment, with a certain level of stress borne by the user when carrying out various tasks. Adding a third signal, haptic feedback, appears to have many advantages, but the best way of implementing this feedback is not yet known. This could be studied in a controlled lab setting, using a simulator to obtain an overview of the different kinds of haptic feedback in BDWS situations.

Further research could also look at the influence of the environment and environmental noise on reaction times to warnings generated by the detection system, to determine the necessity of preparing or alerting the surroundings to an impending accident by allowing the truck to communicate with its surroundings. Further studies should show what role haptic feedback could have in improving the reaction time of the driver.

Chapter 12 – Type of feedback: Future research

-Whether a social actor also promotes efficacy in a BDWS cannot be stated with any certainty, because no relevant study of this specific application is known to the authors. Nor can anything be said about the incidence of possible reactance.

-The allocation of attention, the influence on false-alarm acceptance, and whether ambient feedback is equally applicable to inform and warn will have to be subjects of future studies.


16 Signature

<City>, <datum> TNO Defence, Security and Safety

<naam afdelingshoofd> dr. E.S. Wilschut

ing. V. Meijering

B. Merkus

dr. W. IJsselsteijn

dr. J. Ham

Head of department Author