Trust in Smart Systems: Sharing Driving Goals and Giving Information to Increase Trustworthiness and Acceptability of Smart Systems in Cars

Frank M. F. Verberne, Jaap Ham, and Cees J. H. Midden, Eindhoven University of Technology, Eindhoven, Netherlands

SPECIAL SECTION: Human Factors and Automation in Vehicles

HUMAN FACTORS, Vol. XX, No. X, Month XXXX, pp. X-X. DOI: 10.1177/0018720812443825. Copyright © 2012, Human Factors and Ergonomics Society.

Address correspondence to Frank Verberne, Department of Human-Technology Interaction, Eindhoven University of Technology, IPO 1.27, P.O. Box 513, 5600 MB, Eindhoven, Netherlands; [email protected].

Objective: We examine whether trust in smart systems is generated analogously to trust in humans and whether the automation level of smart systems affects trustworthiness and acceptability of those systems.

Background: Trust is an important factor when considering acceptability of automation technology. As shared goals lead to social trust, and intelligent machines tend to be treated like humans, the authors expected that shared driving goals would also lead to increased trustworthiness and acceptability of adaptive cruise control (ACC) systems.

Method: In an experiment, participants (N = 57) were presented with descriptions of three ACCs with different automation levels that were described as systems that either shared their driving goals or did not. Trustworthiness and acceptability of all the ACCs were measured.

Results: ACCs sharing the driving goals of the user were more trustworthy and acceptable than were ACCs not sharing the driving goals of the user. Furthermore, ACCs that took over driving tasks while providing information were more trustworthy and acceptable than were ACCs that took over driving tasks without providing information. Trustworthiness mediated the effects of both driving goals and automation level on acceptability of ACCs.

Conclusion: As when trusting other humans, trusting smart systems depends on those systems sharing the user’s goals. Furthermore, based on their description, smart systems that take over tasks are judged more trustworthy and acceptable when they also provide information.

Application: For optimal acceptability of smart systems, goals of the user should be shared by the smart systems, and smart systems should provide information to their user.

Keywords: adaptive cruise control systems, social trust, system trust, acceptance, automation level, shared value similarity

INTRODUCTION

Although it may sound like science fiction today, cars might be able to drive themselves in the near future. In 2010, Google tested fully autonomous cars that do not need a human driver (Markhof, 2010). These experimental cars have been driving autonomously among other human-controlled cars on real roads for 140,000 miles with only occasional human intervention and 1,000 miles without any human intervention.

The example above demonstrates that modern-day, advanced technologies are capable of creating intelligent automation in vehicles. Safer driving, less congestion, and better fuel efficiency could result from intelligent automation. However, would people simply sit back and relax when the car takes over driving tasks? The current research investigates psychological factors that contribute to the acceptability of automation technology, such that users delegate control to smart systems. Throughout this article, the term smart systems is used to refer to intelligent automation technology. Previous studies have shown that trust is a crucial psychological factor that contributes to the acceptability of automation technology (e.g., Lee & Moray, 1992; Lee & See, 2004; Lewandowsky, Mundy, & Tan, 2000; Muir, 1994; Muir & Moray, 1996; Parasuraman & Riley, 1997; Riley, 1996).

Definition of Trust

Although no universally accepted definition of trust exists, a broadly accepted definition of trust has been proposed by Mayer, Davis, and Schoorman (1995), who define trust as

the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party. (p. 712)



Three types of trust have been discussed in psychological literature: general, social, and interpersonal trust. General trust is seen as a personality trait (e.g., Earle, Siegrist, & Gutscher, 2007) and indicates how trustful a person is in general. Social trust is based on social relations and shared values (Siegrist, Cvetkovich, & Gutscher, 2001) and indicates how trustful a person is toward other people and institutions. Interpersonal trust is based on interaction(s) with another person and is seen as an expectation that a person has about another person’s behavior specifically related to an interaction (e.g., Bhattacharya, Divinney, & Pillutla, 1998). In the current study, we were interested in trust before an actual interaction, and specifically in trust in smart systems, so the theory of social trust was used to investigate trust in smart systems.

Research on the media equation hypothesis suggests that people might trust smart systems in the same way as they trust humans. Reeves and Nass (1996) proposed that humans respond socially to technology: Humans apply the same social rules to technology as to other humans. For example, people like a person more when that person is from the same group (minimal group paradigm; Tajfel, 1970; also see, e.g., Turner, Brown, & Tajfel, 1979). Nass, Fogg, and Moon (1996) showed that when participants interacted with a computer that was presented as a team member, that computer was seen as more similar to them and was rated as friendlier than when it was presented as a non–team member. Thus, the effect of being grouped with a computer is comparable to the effect of being grouped with another human. From their studies, Reeves and Nass (1996) conclude that people apply social rules to computers automatically, even though people are aware that computers are different from humans.

Although much research has been conducted on social trust, the research on system trust is mostly about reliability of the system (e.g., Dzindolet, Peterson, Pomranky, Pierce, & Beck, 2003; Lee & Moray, 1992; Moray & Inagaki, 1999; Muir, 1994; Muir & Moray, 1996). In line with Reeves and Nass (1996), we argue that as systems become smarter (and thus more human-like), findings from the field of social trust can be applied and extended to system trust. We do not want to suggest that social trust and system trust are exactly the same (see De Vries, Midden, and Meijnders, 2007, for a discussion regarding trust in humans versus trust in systems). In this research article, both the term trust and the term trustworthiness are used. We consider trust as an affective judgment of the user and trustworthiness as an attribute of the system. Both trust and trustworthiness are anticipatory in nature. Acceptance, in contrast, we consider as the judgment of the usability of the system by a user after use, and acceptability as the judgment of acceptance potential before use (see, e.g., Schuitema, Steg, & Forward, 2010, for a similar distinction between acceptance and acceptability).

In the research on social trust, a salient value similarity (SVS) model of trust has been proposed, which states that people are more likely to trust other people and institutions that have values similar to theirs (Cvetkovich, Siegrist, Murray, & Tragesser, 2002; Poortinga & Pidgeon, 2006; Siegrist, Cvetkovich, & Roth, 2000; Siegrist et al., 2001; Vaske, Abscher, & Bright, 2007). One key component of the SVS model is salient values: the saliency of the person’s goals and values that are relevant in a particular situation. Another key component is value similarity: the similarity of the salient values of the perceiver and the salient values of the person being judged on trustworthiness (Siegrist et al., 2000). Values as used in the SVS model include goals of the other person, so according to the SVS model, mutual social trust will be enhanced when two persons share each other’s goals.

When the user trusts a smart system, it is not necessary that the user immediately hands over all control to that system; the system can take over tasks to different degrees. Sheridan and Verplank (1978) have proposed a continuum of 10 automation levels ranging from the lowest level, at which the human must make all decisions and actions, to the highest level, at which the computer decides everything, ignoring the human. With low automation levels (1–4), only information is given to the user. With intermediate automation levels (5–9), automation technology executes action but still provides information to the user. With the highest automation level (10), action is taken over by automation technology without giving information to the user. From this continuum, we propose that both the provision of information and the execution of action by automation technology determine its automation level (also see Parasuraman, Sheridan, & Wickens, 2000).

The Current Research

The current research focuses on a specific smart system: adaptive cruise control (ACC) systems. ACCs are the improved version of traditional cruise control. ACCs allow users to preset the desired driving speed and following distance (in seconds) to a lead vehicle. As long as the following distance is not violated, ACCs maintain the desired driving speed. When the following distance is violated, ACCs will slow down the vehicle to reach and maintain the preset following distance. In all situations, maintaining the preset following distance is the first priority of ACCs, and maintaining the preset speed, the second. Maintaining following distance and speed can serve different driving goals: Braking and accelerating can be done safely, comfortably, quickly, or energy efficiently. These driving goals can be either shared with potential users or not.

In the current research, we investigate whether describing an ACC as sharing driving goals with a potential user will affect the trustworthiness and acceptability of that ACC. Furthermore, we investigate whether trustworthiness and acceptability of an ACC depend on its automation level. Combining the literature on social trust, the media equation, and automation levels, we provide three hypotheses.

Our first hypothesis is that goal similarity leads to increased trustworthiness and acceptability of an ACC. Just as shared goals lead to social trust, we expect that shared driving goals lead to increased trustworthiness of ACCs. As trust is needed for acceptability, we also expect that shared goals will increase the acceptability of ACCs.

Our second hypothesis is that the level of automation of an ACC will affect both trustworthiness and acceptability. In general, one could argue that systems with a low automation level are perceived as more transparent than systems with a high automation level (Dzindolet et al., 2003). In the current research we expect that, because of higher transparency, an ACC system described as taking over control and giving diagnostic information will be judged more trustworthy and acceptable than an ACC system described as taking over control without giving diagnostic information. With the information, potential users can monitor the behavior of the ACC, thereby increasing transparency of the system.

Our third hypothesis is that trustworthiness mediates the effects of both shared driving goals and of automation level on the acceptability of ACCs. We expect that an increase in acceptability is the result of an increase in trustworthiness. This hypothesis is in line with previous work indicating that trust is necessary for acceptance of automation technology (e.g., Muir & Moray, 1996) and extends this line of reasoning with an antecedent of social trust and automation level.

METHOD

Participants and Design

A total of 59 participants (23 women and 36 men; age M = 27.8 years, SD = 13.4) were randomly assigned to the conditions of a 2 (driving goals: shared versus unshared) × 3 (automation level: ACCinfo vs. ACCinfo+action vs. ACCaction) mixed-model design with driving goals as a between-subjects factor and automation level as a within-subjects factor. The two dependent variables were trustworthiness and acceptability. All participants were native Dutch speakers and had a driver’s license. Our sample consisted mainly of students, so it represents young drivers with limited driving experience. The experiment lasted approximately 30 min, for which participants were paid 5 euros. Gender was not a significant covariate and is therefore not used as a covariate in the analyses presented in the results section. For two participants, scores on multiple variables were outliers (on the basis of outlier criteria of Tukey, 1977), and therefore we excluded these two participants from data analyses, leaving a sample size of 57 participants.
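The paper cites Tukey’s (1977) outlier criteria without spelling out the exact rule; a common reading of those criteria is the 1.5 × IQR fence. The sketch below illustrates that reading in Python only as an example; the function name and the data are hypothetical, not the authors’ analysis code.

```python
import numpy as np

def tukey_outlier_mask(scores, k=1.5):
    """Flag values outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (scores < lower) | (scores > upper)

# Hypothetical example: mean trustworthiness scores for a small sample of participants.
trust_scores = np.array([4.2, 4.6, 3.9, 4.1, 1.2, 4.4, 4.8, 6.9, 4.0])
print(tukey_outlier_mask(trust_scores))  # True marks a potential outlier
```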



Materials

ACCs. We presented participants with descriptions of three ACCs (see Figure 1) that differed in their automation level. One ACC system (ACCinfo) was described as a system that provided to the user only information about when and how hard the user needed to accelerate or brake to reach the driving goal of the ACC. A second ACC system (ACCinfo+action) was described as a system that would take over accelerating and braking of a car to reach the driving goal it was made for, while giving information about when and how hard it would accelerate and brake. A third ACC system (ACCaction) was described as a system that would take over accelerating and braking of a car to reach the driving goal it was made for, without giving information.

We acknowledge that an information-only ACC system such as ACCinfo can be considered not to be an ACC because ACCinfo does not take over accelerating and braking. However, the system was presented to participants as ACCinfo, and it was included in the study to cover automation levels 1 through 4 of Sheridan and Verplank’s (1978) levels of automation. These four levels of automation consist of systems that only provide information. To incorporate these levels of automation in our study, and to acknowledge the informational characteristic of this type of ACC clearly, we labeled this type of ACC as ACCinfo. As we used only descriptions of ACCs, participants did not receive actual information from ACCs or get to experience actual ACC systems.

Figure 1. Example of a description of an ACC system. Meaning of the labels: a = goal ranking of the participant; b = goal ranking of the ACC system; c = list of the driving goals; d = icon depicting automation level of the ACC system (d1 = ACCinfo, d2 = ACCaction, d3 = ACCinfo+action); e = icon (adapted from www.emofaces.com) depicting most important driving goal of the ACC system (e1 = comfort, e2 = energy efficiency, e3 = safety, e4 = speed); f = depiction of the ACC system (adapted from autorepair.about.com, copyrighted by ALLData (www.alldata.com)). In this example, the ACC system shares the driving goals of the participant. An ACC system that did not share the driving goals of the participant had the reversed ranking (no. 1 of participant was no. 4 of system, etc.).

Trustworthiness. Trustworthiness of the ACCs was measured by seven questions (see Appendix A), with a 7-point Likert-type scale (1 = totally disagree, 7 = totally agree). The questions are based on the questionnaire of Jian, Bisantz, and Drury (2000) that measures trustworthiness of automation technology. Answers to these questions were averaged to form a reliable measure of trustworthiness (Cronbach’s alpha = .91). Responses were coded such that higher scores indicate more trustworthiness.

Acceptability. Acceptability of the ACCs was measured by 26 questions (see Appendix B), with a 7-point Likert-type scale. The questions are based on the questionnaire of the technology acceptance model (Venkatesh, 2000) and the acceptance questionnaire of Van der Laan, Heino, and De Waard (1997), which includes emotional and attitudinal elements of acceptance. Answers to these questions were averaged to form a reliable measure of acceptability (Cronbach’s alpha = .97). Higher scores reflect more acceptability of the ACC system.
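Both scales were scored by averaging item responses and checking internal consistency with Cronbach’s alpha. As a minimal illustration of that computation (not the authors’ analysis code), the Python sketch below applies the standard alpha formula to a hypothetical DataFrame of item responses; any reverse-coded items are assumed to be recoded beforehand.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point responses of 5 participants to the 7 trustworthiness items.
responses = pd.DataFrame(np.array([
    [6, 5, 6, 6, 5, 6, 5],
    [4, 4, 3, 4, 4, 4, 3],
    [7, 6, 7, 6, 7, 7, 6],
    [2, 3, 2, 2, 3, 2, 3],
    [5, 5, 4, 5, 5, 4, 5],
]))
print(round(cronbach_alpha(responses), 2))
trust_score = responses.mean(axis=1)  # per-participant trustworthiness score
```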

Procedure

Participants were welcomed, and each participant was seated in a cubicle in front of a computer. Next, four driving goals were presented to participants. The driving goals (with their framing in parentheses) were comfort (relaxed driving, no sudden braking and accelerating), energy efficiency (saving fuel while driving), speed (reaching the desired destination in the least amount of time), and safety (driving with the least risk of accidents). Participants were instructed to rank the driving goals from 1 to 4, 1 being the most important driving goal, 4 being the least important. The order of the goals presented was randomized to exclude a structural influence of a certain order on the goal ranking of the participant. Next, participants received information regarding ACCs, so that all participants understood what ACCs do. Furthermore, participants were introduced to the icons that would be used later in the experiment to depict the driving goal and automation level of the ACCs.

Participants were then presented with descriptions of the three different ACCs (see Figure 1 for an example). Each description included the ranking of the four driving goals for the participant and the ranking of the four driving goals for the ACC system, both presented on the right part of the screen. In the shared driving goals condition, all ACCs had the same ranking as the participant. In the unshared driving goals condition, all ACCs had the reversed ranking to that of the participant (e.g., if the participant ranked speed as the most important driving goal, speed was the least important driving goal for the ACC system). The description also included a picture with icons displaying both the goal of the ACC system and the automation level of the ACC system.

The three ACCs were presented in a random order, and for each ACC system, we made sure all participants understood the driving goals and the automation level of the ACC system by asking two open questions. The first question asked participants to describe the driving tasks to be performed by the ACC system and by the participants themselves while driving with the system. The second question was meant to assess their thoughts about the system sharing their goals or not. Very few participants answered these questions incorrectly, and when we excluded these participants from the analyses, the results did not change significantly. Furthermore, for each ACC system, trustworthiness and acceptability were measured using the questionnaires described earlier. Also, for each ACC system, participants were asked whether the system would be helpful to reach their most important driving goal and how much control the system would take over from them when used in reality. These questions served as manipulation checks. After the experiment, participants were thanked, paid for their participation, and debriefed.

RESULTS

Manipulation Checks

Driving goals. A two-way mixed ANOVA was conducted on the manipulation check questions with driving goals and automation level as factors. Results revealed a main effect of driving goals, F(1, 55) = 27.40, p < .001, ηp² = .33. In the shared driving goals condition, ACCs were seen as more helpful in reaching the participants’ most important driving goal (M = 4.37, SD = 1.07) than in the unshared driving goals condition (M = 2.86, SD = 1.10). These results indicate that our manipulation of driving goals was successful.

Automation level. Mauchly’s test indicated that the assumption of sphericity had been violated, χ²(2) = 15.07, p < .01; therefore, degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity (ε = .80). Results revealed a main effect of automation level, F(1.61, 88.60) = 80.47, p < .001, ηp² = .59. Planned contrast analyses showed that ACCinfo+action (M = 5.23, SD = 1.21) was rated as taking over more control than was ACCinfo (M = 2.54, SD = 1.60), F(1, 56) = 143.56, p < .001, ηp² = .72, and also that ACCaction (M = 5.07, SD = 1.47) was rated as taking over more control than was ACCinfo, F(1, 56) = 78.29, p < .001, ηp² = .58. Results indicated no difference between the perceived amount of control that was taken over by ACCinfo+action and by ACCaction, F(1, 56) = 0.71, ns. These results indicate that our manipulation of automation level was also successful.
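The same 2 (between) × 3 (within) mixed ANOVA procedure, including Mauchly’s sphericity test and the Greenhouse–Geisser correction, recurs throughout the Results. Purely as an illustration of that workflow (not the authors’ analysis code), a long-format analysis in Python could look like the sketch below using the pingouin library; the file name and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x automation level.
# Columns: participant id (pid), between factor (goals), within factor (automation), rating (trust).
df = pd.read_csv("acc_ratings_long.csv")

# Mauchly's test of sphericity for the within-subjects factor.
spher = pg.sphericity(df, dv="trust", within="automation", subject="pid")
print(spher)  # includes W, chi-square, degrees of freedom, and p-value

# 2 x 3 mixed ANOVA; correction=True reports sphericity-corrected p-values
# (Greenhouse-Geisser) for the within-subjects effects.
aov = pg.mixed_anova(data=df, dv="trust", within="automation", subject="pid",
                     between="goals", correction=True)
print(aov.round(3))  # F, p-values, and partial eta-squared per effect
```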

Trustworthiness

Driving goals. A two-way mixed ANOVA was conducted on trustworthiness with driving goals and automation level as factors. Results revealed a main effect of driving goals, F(1, 55) = 8.78, p < .01, ηp² = .14 (see Figure 2). In the shared driving goals condition, ACCs were judged more trustworthy (M = 4.62, SD = 0.81) than in the unshared driving goals condition (M = 3.91, SD = 1.01). All other effects of driving goals were nonsignificant (all Fs < 1.10). Just as humans are trusted more when they share the goals of another, ACCs are judged more trustworthy when they share the driving goals of the user.

Figure 2. The main effect of driving goals (shared vs. unshared) on trustworthiness of ACCs. Error bars indicate one standard error.

Automation level. Mauchly’s test indicated that the assumption of sphericity had been violated, χ²(2) = 19.70, p < .001; therefore, degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity (ε = .77). Results revealed a main effect of automation level, F(1.53, 84.25) = 7.78, p < .01, ηp² = .12 (see Figure 3). Planned contrast analyses showed that ACCinfo was judged more trustworthy (M = 4.54, SD = 1.16) than was ACCaction (M = 3.85, SD = 1.37), F(1, 55) = 9.85, p < .01, ηp² = .15. Furthermore, ACCinfo+action was judged more trustworthy (M = 4.37, SD = 1.22) than was ACCaction, F(1, 55) = 16.27, p < .001, ηp² = .23. Furthermore, results do not suggest a difference between trustworthiness of ACCinfo and ACCinfo+action, F(1, 55) = 0.83, ns, nor an interaction between driving goals and automation level, F(2, 54) = 0.56, ns. Thereby, results suggest that ACCs taking over driving tasks while providing information are judged more trustworthy than ACCs taking over driving tasks without providing information.

Figure 3. The main effect of automation level (ACCinfo, ACCinfo+action, and ACCaction) on trustworthiness of ACCs. Error bars indicate one standard error.

Acceptability

Driving goals. A two-way mixed ANOVA was conducted on acceptability with driving goals and automation level as factors. Results revealed a main effect of driving goals, F(1, 55) = 8.88, p < .01, ηp² = .14 (see Figure 4). In the shared driving goals condition, ACCs were judged more acceptable (M = 4.62, SD = 0.91) than in the unshared driving goals condition (M = 3.84, SD = 1.05). All other effects of driving goals were nonsignificant (all Fs < 1). Thus, ACCs are judged as more acceptable when they share the goals of the user.

Figure 4. The main effect of driving goals (shared vs. unshared) on the acceptability of ACCs. Error bars indicate one standard error.


Automation level. Mauchly’s test indicated that the assumption of sphericity had been violated, χ²(2) = 22.32, p < .001; therefore, degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity (ε = .75). Results revealed a main effect of automation level, F(1.49, 82.18) = 3.05, p < .05 (one-tailed), ηp² = .05 (see Figure 5). It is interesting that planned contrast analyses showed that only the acceptability of ACCinfo+action (M = 4.42, SD = 1.27) was higher than was the acceptability of ACCaction (M = 3.98, SD = 1.44), F(1, 55) = 13.67, p < .01, ηp² = .20. Results do not suggest a difference between the acceptability of ACCinfo (M = 4.23, SD = 1.16) and ACCaction, F(1, 55) = 1.39, ns, nor between the acceptability of ACCinfo and ACCinfo+action, F(1, 55) = 0.99, ns. Also, we did not find a significant interaction effect, F(2, 54) = 1.46, ns. Thereby, results show that ACCs taking over driving tasks while providing information are more acceptable than ACCs taking over driving tasks without providing information.

Figure 5. The main effect of automation level (ACCinfo, ACCinfo+action, and ACCaction) on acceptability of ACCs. Error bars indicate one standard error.

Mediation Analyses

Driving goals. A mediation analysis (following the steps of Baron & Kenny, 1986; also see Preacher & Hayes, 2008) was conducted to reveal the direct (Path c) and indirect effects (Paths a and b) of driving goals on the acceptability of ACCs. A Sobel test (Sobel, 1982) showed that the indirect effect was significant (Sobel z = 2.85, p < .01). The initial effect of driving goals on acceptability (Path c) becomes nonsignificant after controlling for trustworthiness (Path c′; see Figure 6), which shows that trustworthiness mediates the initial effect.

Figure 6. Mediation analysis depicting the coefficients (Bs) of the direct (Path c) and indirect (Paths a and b) effects of shared driving goal on the acceptability of ACCs. Although the direct effect (Path c) is significant, this effect becomes nonsignificant after adding trustworthiness as a mediator (Path c′). *p < .01. **p < .001.

Automation level. A second mediation analysis was conducted to reveal the direct (Path c) and indirect effects (Paths a and b) of automation level on acceptability of ACCs. Because automation level was a within-subjects factor, we used a linear mixed model to follow the steps of Baron and Kenny (1986). A Sobel test showed that the indirect effect was significant (Sobel z = 3.93, p < .001). The initial effect of automation level on acceptability (Path c) becomes nonsignificant after controlling for trustworthiness (Path c′; see Figure 7), which shows that trustworthiness mediates the initial effect.

Figure 7. Mediation analysis depicting the coefficients (Bs) of the direct (Path c) and indirect (Paths a and b) effects of automation level on the acceptability of ACCs. Although the direct effect (Path c) is significant, this effect becomes nonsignificant after adding trustworthiness as a mediator (Path c′). *p < .01. **p < .001.
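For readers who want to reproduce the Sobel tests reported above, the statistic is z = ab / sqrt(b²·SEa² + a²·SEb²), computed from the Path a and Path b coefficients and their standard errors. The Python sketch below is a generic illustration of that formula, not the authors’ analysis code; the coefficient values are placeholders.

```python
from math import sqrt
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Sobel z for an indirect effect a*b, given the two path coefficients and their standard errors."""
    z = (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * (1 - norm.cdf(abs(z)))  # two-tailed p-value
    return z, p

# Placeholder coefficients: Path a (predictor -> trustworthiness) and
# Path b (trustworthiness -> acceptability, controlling for the predictor).
z, p = sobel_test(a=0.70, se_a=0.24, b=0.85, se_b=0.09)
print(f"Sobel z = {z:.2f}, p = {p:.3f}")
```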

DISCUSSION

The current research investigated the influence of driving goals (shared vs. unshared) and automation level on the trustworthiness and acceptability of an ACC system. Therefore, we presented participants with descriptions of three ACCs: one that provided only information (ACCinfo), one that took over driving tasks and that provided information (ACCinfo+action), and one that only took over driving tasks, without providing information (ACCaction). For half of the participants, these ACCs did not share their own driving goals; for the other half, these ACCs did share their driving goals. For every ACC system, trustworthiness and acceptability were measured. Results confirm all three hypotheses.

Our first hypothesis is confirmed; results show that ACCs that share the driving goals of the participants are judged more trustworthy and acceptable than ACCs that do not share the driving goals of the participants. These results are in line with findings from both the SVS model (Cvetkovich et al., 2002; Poortinga & Pidgeon, 2006; Siegrist et al., 2000, 2001; Vaske et al., 2007) and the media equation hypothesis (Reeves & Nass, 1996) and show that social trust and system trust have a similar determinant: shared goals. Thus, two important implications of these results are that the SVS model can be applied successfully to automated systems and that the media equation hypothesis also applies to perceived trustworthiness and acceptability of smart systems.

Our second hypothesis is also confirmed; results suggest that the automation level of an ACC affects both its trustworthiness and acceptability. We show that ACCs that take over actions and provide information are judged more trustworthy and acceptable than are ACCs that take over action without providing information. These findings suggest that real ACCs (that take over driving tasks) not providing information to their users will be trusted and accepted less than real ACCs that do provide information to their users. These results are in line with the model of Sheridan and Verplank (1978), which shows that one factor determining perceived automation level is whether or not information is provided.

Finally, our third hypothesis is confirmed; results suggest that trustworthiness is a mediator for both the effect of shared driving goals and the effect of automation level on the acceptability of ACCs. These results are in line with previous research showing that trust in an automated system is important for acceptability (e.g., Muir & Moray, 1996). Thereby, the results of this study provide a first indication that both sharing goals and providing information increase trustworthiness and acceptability of smart systems.

An important detail in this study was that participants were given only the perception that the ACCs either shared their driving goals or not. The system descriptions did not contain functional differences between the ACC system that shared the driving goals with the user and the ACC system that did not share the driving goals with the user. So whether or not the description of actual ACC behavior indicated sharing the driving goals of the user did not seem to be necessary for creating the perception of shared driving goals. However, when applying these findings in practice, we would expect that, to find similar positive results of shared goals in the long term, the experience with the ACC system should be congruent with the perception of shared goals.

Open questions with regard to smart systems that share driving goals of drivers are how stable driving goals of drivers are and how a smart system could cope with changing driving goals. When a driver encounters an accident on the road, for example, the driving goal of safety would get a higher priority. A smart system could respond to this change of goal by providing options to the user to change the “goal mode” of the system. If the smart system could adapt to the changing goals of the user while driving, acceptance of the system could be enhanced. Experience with the system in a certain “goal mode” could also change the goals of the user. A smart system could suggest a certain goal to be adopted by the user, and when the experience with the smart system is positive, the goal could also become more positive for the user.

What is the conceptual reason that providing information increases acceptability of ACCs? Although our results showed that ACCs that took over driving tasks and provided information were judged more trustworthy and acceptable than ACCs that took over driving tasks without providing information, one could argue that when information was provided, trustworthiness was no longer an issue. As we see it, smart systems (like ACCs) create a situation of uncertainty and risk: Smart systems that take over tasks do not work perfectly accurately and can also make errors. Increasing trustworthiness by sharing the goals of the user is one way to reduce uncertainty and risk. Giving information is another way to reduce uncertainty and risk. Users do not have to rely on trust anymore if accurate and complete information is provided by the smart systems.

Regarding our conclusion that providing information is beneficial for judgments of trustworthiness and acceptability of smart systems, we think that providing information is necessary in the early stages of smart system usage. When users have enough experience with a system, providing information becomes less relevant to maintain trust and acceptance and could be experienced as annoying. However, we think that by offering the option to receive information from a smart system, even after a lot of experience, users’ feelings of control will increase. Thereby, the user will be able to monitor the “inner workings” of the system so that the user will be able to predict the behavior of the system. This increase in feelings of control will be especially relevant for smart systems with high automation levels.

We argue that with the inclusion of automation level in the current study, our results can be extended to trusting humans. That is, trust in another person also depends on the trusting situation. For example, in one situation, a trainer could show a trainee how to accomplish a certain goal and give information to the trainee how he or she could accomplish the goal by himself or herself. In another situation, the trainer could show the trainee only how to accomplish a certain goal without giving information to the trainee. We believe that trust in these situations differs, based on the results of our current study. That is, if the trainer has total control over the situation and does not give information to the trainee, the trainee will be less prone to trust and accept the trainer than when the trainer gives additional information.

A caveat in the current study was that participants only read descriptions of smart systems and based their judgments of the trustworthiness and acceptability of the smart systems solely on that information. We think that it is difficult to make accurate impressions of a smart system without having any experience with the system. Therefore, future research could extend the current finding that descriptions of smart systems can change trustworthiness and acceptability of these systems by investigating how actual experience with smart systems that, for example, do or do not share user goals influences issues such as trustworthiness and acceptability. Also, future research could investigate the behavioral expression of trust and acceptability by measuring users’ responses to an ACC system in a driving simulator. Such a study could assess whether increased trustworthiness and/or acceptability will increase actual use of ACCs.

Furthermore, in the current study, we mainly used undergraduate students who do not have a lot of driving experience. Less experienced drivers will probably differ in their judgments of trustworthiness and acceptability of a new automation technology in a car from more experienced drivers. Especially driving experience with ACC or other automation technologies will probably have an effect on users’ trust in and acceptance of new automotive technologies. Future research could explore the effect of driving experience on users’ trust and acceptance of automation technologies in cars more thoroughly.

In our study, we focused only on whether or not the smart system shared the driving goals of the user. A suggestion for future research is to explore whether it matters what specific driving goal is most important for the user. We think that it matters whether users either have safety or comfort as their most important goal and whether or not a smart system shares these goals. A system that violates the safety goal would probably be seen as more negative than a system that violates the comfort goal, regardless of the ranking of the participant. Future research should look further into this issue.

Although exploratory in nature, we had expected an interaction between shared goals and automation level. We had thought that shared values would not make much of a difference for smart systems that provide only information. Users might be able to simply ignore information they do not like, or shut off the system (meaning that the automation technology is not accepted). Furthermore, malfunctioning of an informational system would not result in a severe safety risk. However, when actual action is taken over by the automation technology, the driving goals of the system might become more important because the users cannot ignore the system anymore (however, they could shut it off). We had thought that when users would have to give up control, they would be more willing to give up control to automation technology only when that technology shares their driving goal. However, no interaction effects were found. A suggestion for further research is to explore this possible interaction more closely.

When applying the findings in practice, our results show that two processes should be taken into account with regard to acceptability of a smart system. Prior to accepting a smart system, either the user will need the opportunity to rely on information that conveys the workings of the system or the user will need to rely on trust based on, for example, shared goals. In this way, trust can (partially) compensate for lack of information. For optimal acceptability of smart systems, the systems should both provide information and share goals with the user.

In conclusion, this study provides the first evidence (to our knowledge) that the SVS model can be applied to smart systems. Furthermore, we provide evidence supporting the media equation hypothesis (Reeves & Nass, 1996), showing that social trust and system trust have a similar determinant: shared goals. Finally, we provide further evidence that trust is important for the acceptability of smart systems. The current research opens new research possibilities: It shows that determinants of trustworthiness and acceptability of smart systems can be studied and may be comparable, to a certain extent, to the determinants of trust that humans have in other humans. Thereby, we started to explore the factors that contribute to people’s decision to sit back and relax when their car itself takes over driving. The current research emphasizes the importance of user goals and automation level for people’s judgments of trustworthiness and acceptability of smart systems.

APPENDIX A

Listed below are the questionnaire items for measuring trustworthiness of ACCs.

1. I am wary of this ACC system.
2. This ACC system is reliable.
3. I would entrust my car to this ACC system.
4. I would be able to count on this ACC system.
5. This ACC system would have harmful consequences.
6. I trust this ACC system.
7. I can assume that this ACC system will work properly.

APPENDIX B

Listed below are the questionnaire items for measuring acceptability of ACCs.

1. This ACC system helps me with driving.
2. This ACC system enables me to drive well.
3. I think this ACC system is useful to have in a car.
4. This ACC system does what I want.
5. This ACC system would be easy to operate.
6. This ACC system would work properly.
7. I would experience this ACC system as pleasant.
8. Using this ACC system makes driving less fun.
9. Using this ACC system would not require much attention.
10. Assuming I had access to this ACC system, I intend to use it.
11. Given that I had access to this ACC system, I predict that I would use it.
12. To what extent would using this ACC system evoke the feeling worry?
13. To what extent would using this ACC system evoke the feeling satisfaction?
14. To what extent would using this ACC system evoke the feeling annoyance?
15. To what extent would using this ACC system evoke the feeling stress?
16. To what extent would using this ACC system evoke the feeling calmness?
17. To what extent would using this ACC system evoke the feeling powerlessness?
18. To what extent would using this ACC system evoke the feeling hope?
19. To what extent would using this ACC system evoke the feeling aversion?
20. To what extent would using this ACC system evoke the feeling fear?
21. To what extent would using this ACC system evoke the feeling joy?
22. All taken together, I would rate this ACC system as bad–good.
23. All taken together, I would rate this ACC system as stupid–smart.
24. All taken together, I would rate this ACC system as unfavorable–favorable.
25. All taken together, I would rate this ACC system as harmful–beneficial.
26. All taken together, I would rate this ACC system as negative–positive.

ACKNOWLEDGMENTS

This research was supported by Research Grant LMVI-08-51 from the Netherlands Organisation for Scientific Research (NWO). We would like to thank the members of the Persuasive Technology lab group of the Department of Human-Technology Interaction at the Eindhoven University of Technology for their comments and our colleague Ron Broeders for feedback on an earlier draft of this article.

KEY POINTS

• Social trust and system trust have a similar determinant: shared goals.

• ACCs described as sharing driving goals with a user are judged more trustworthy and acceptable than ACCs described as not sharing those goals.

• Trust mediates the effects of shared driving goals and automation level on the acceptability of ACCs.

REFERENCES

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.

Bhattacharya, R., Divinney, T. M., & Pillutla, M. M. (1998). A formal model of trust based on outcomes. Academy of Management Review, 23(3), 459–472.

Cvetkovich, G., Siegrist, M., Murray, R., & Tragesser, S. (2002). New information and social trust: Asymmetry and perseverance of attributions about hazard managers. Risk Analysis, 22, 359–367.

De Vries, P., Midden, C., & Meijnders, A. L. (2007). Antecedents of system trust: Cues and process feedback. In M. Siegrist, T. C. Earle, & H. Gutscher (Eds.), Trust in cooperative risk management: Uncertainty and scepticism in the public mind (pp. 241–266). London, UK: Earthscan.

Dzindolet, M. T., Peterson, S. A., Pomranky, R., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 6, 697–718.

Earle, T. C., Siegrist, M., & Gutscher, H. (2007). Trust, risk perception and the TCC model of cooperation. In M. Siegrist, T. C. Earle, & H. Gutscher (Eds.), Trust in cooperative risk management: Uncertainty and scepticism in the public mind (pp. 1–49). London, UK: Earthscan.

Jian, J., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4, 53–71.

Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine system. Ergonomics, 35, 1243–1270.

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46, 50–80.

Lewandowsky, S., Mundy, M., & Tan, G. (2000). The dynamics of trust: Comparing humans to automation. Journal of Experimental Psychology–Applied, 6, 104–123.

Markhof, J. (2010, October 9). Google cars drive themselves, in traffic. New York Times. Retrieved from http://www.nytimes.com

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734.

Moray, N., & Inagaki, T. (1999). Laboratory studies of trust between humans and machines in automated systems. Transactions of the Institute of Measurement and Control, 21, 203–211.



Muir, B. M. (1994). Trust in automation: 1. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37, 1905–1922.

Muir, B. M., & Moray, N. (1996). Trust in automation: 2. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39, 429–460.

Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45, 669–678.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230–253.

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 30, 286–297.

Poortinga, W., & Pidgeon, N. F. (2006). Prior attitudes, salient value similarity, and dimensionality: Toward an integrative model of trust and risk regulation. Journal of Applied Social Psychology, 36, 1674–1700.

Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40(2), 879–891.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York, NY: Cambridge University Press.

Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19–35). Hillsdale, NJ: Lawrence Erlbaum.

Schuitema, G., Steg, L., & Forward, S. (2010). Explaining differences in acceptability before and acceptance after the implementation of a congestion charge in Stockholm. Transportation Research Part A: Policy and Practice, 44, 99–109.

Sheridan, T., & Verplank, W. (1978). Human and computer control of undersea teleoperators (Tech. rep.). Cambridge: Massachusetts Institute of Technology, Man-Machine Systems Laboratory.

Siegrist, M., Cvetkovich, G. T., & Gutscher, H. (2001). Shared values, social trust, and the perception of geographic cancer clusters. Risk Analysis, 21, 1047–1053.

Siegrist, M., Cvetkovich, G., & Roth, C. (2000). Salient value similarity, social trust, and risk/benefit perception. Risk Analysis, 20, 353–362.

Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equations models. In S. Leinhart (Ed.), Sociological methodology 1982 (pp. 290–312). Washington, DC: American Sociological Association.

Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223, 96–102.

Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.

Turner, J. C., Brown, R. J., & Tajfel, H. (1979). Social comparison and group interest in ingroup favoritism. European Journal of Social Psychology, 9, 187–204.

Van der Laan, J. D., Heino, A., & De Waard, D. (1997). A simple procedure for the assessment of acceptance of advanced transport telematics. Transportation Research, C5, 1–10.

Vaske, J. J., Abscher, J. D., & Bright, A. D. (2007). Salient value similarity, social trust and attitudes toward wildland fire management strategies. Human Ecology Review, 14, 223–232.

Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11, 342–365.

Frank M. F. Verberne is a PhD student in the Department of Human-Technology Interaction, Eindhoven University of Technology. He received his MSc in social psychology from Radboud University Nijmegen in 2009.

Jaap Ham is an assistant professor in the Department of Human-Technology Interaction, Eindhoven University of Technology. He received his PhD in social psychology from Radboud University Nijmegen in 2004.

Cees J. H. Midden is a full professor in the Department of Human-Technology Interaction, Eindhoven University of Technology. He received his PhD in social psychology from Leiden University in 1986.

Date received: December 17, 2010. Date accepted: February 1, 2012.
