Social Influence and End-User Training


It is a commonly held axiom that user training is a key element in MIS success [3, 16]. Included among positive outcomes afforded by user training are improved user attitudes, behavior, and performance. Although the typical focus of training programs is their technical content, many practitioners have demonstrated that social factors could be instrumental in the target system's success or failure.1 Indeed, it is recommended that researchers examine how methods of training can enhance motivation to learn and use software [3, 16, 17]. Unfortunately, there is little or no literature that includes actual manipulation of such "soft" variables [16].

The study presented in this article provides such manipulation. More specifically, we wanted to discover the extent to which training outcomes such as attitudes, behavior, and performance are influenced by peers through informal, verbal, word-of-mouth (WOM) communication, rather than derived solely through direct experience or formal channels. This article reports on a deception experiment that employed confederates in three experimental groups.

Training Outcomes

Major streams of research have for some time investigated user attitudes, behavior, and performance, as surrogates to be used for measuring success. MIS researchers have focused on user satisfaction [8] and expectations [11] in exploring user attitudes. Olfman [16] and Bostrom et al. [3] utilize user attitudes as indicators for motivation to use a system. Most user-satisfaction studies have addressed its measurement, its conceptual bases, and its relationships with other variables.

Expectations might be important antecedents of user attitudes, especially when considering user training. Ginzberg [11] found that the contrast between what a user expects and what is delivered was overshadowed by the consistency between expectations and resultant attitudes. That is, early attitudes are likely to persist, somewhat independent of the actual quality of the system.

User behavior has been studied from two general perspectives: acceptance and usage. Acceptance of information technology has been studied from the point of view of the user's intentions to adopt, using the Technology Acceptance Model (TAM) and Theory of Planned Behavior (TPB). These models are

70 July 1995/Vol. 38, No. 7 COMMUNICATIONS OF THE ACM

Social Influence and End-User Training

Dennis F. Galletta, Manju Ahuja, Amir Hartman, Thompson Teo, and A. Graham Peace

1 National news coverage of our study included a front page story in the Wall Street Journal, June 2, 1994, as well as stories in Computerworld (July 18, 1994, p. 86), Computer Reseller News (July 25, 1994, p. 101), Information Week (July 4, 1994, p. 8), and PC Magazine (August, 1994).

based on the Theory of Reasoned Action (TRA) [1] (see [14] for a review of MIS models based on this literature). Another commonly used model is the theory of innovation diffusion [21].

The TRA derivatives support the assertion that perceived usefulness and perceived ease of use affect intentions to use technology. The TAM model is somewhat unique because it includes "external variables," which have not been studied systematically to date. WOM communications might be a highly important external variable.

Innovation-diffusion theory has investigated acceptance of technology by focusing on many variables, studies, and methods. Rogers found that an innovation's rate of adoption was highly dependent not only on perceived attributes of the innovations themselves, but also on communication channels. Evidence of the importance of WOM messages was substantial in his study [21].

A study by Brancheau and Wetherbe [4] provided evidence that the source of greatest influence in all stages of adoption decision-making was work colleagues. As the stages progressed from initial knowledge to persuasion to the decision itself, the percentage of influence attributed to work colleagues rose steadily. Very little of the persuasion was attributed to computer specialists, consultants, vendors, mass media, teachers, and friends.

In summary, one might conclude that both acceptance of technology (measured via intentions) and usage of technology are behavioral variables that help us better understand information systems users. Further, when training is under way, WOM messages may be powerful determinants of the adoption of technology.

User performance is most often studied in the literature of human-computer interaction, a confluence of several fields such as psychology, computer science, and MIS. The literature gained substantial theoretical development in the Keystroke Model work by Card, Moran, and Newell [5]; principles set forth as a result of their research have been applied to a large portion of the experiments conducted in the field.

MIS researchers have traditionally not focused on design alternatives at or near the level of the keystroke, but have evaluated the relationships between performance and several other variables. Examples include data presentation alternatives (e.g., see [24]), users' backgrounds (e.g., see [10]), and training approaches (e.g., see [7, 17]). In the MIS training literature, performance measures such as number and types of errors, and system comprehension, have been used as training outcome measures [3, 22].

In this article, we explore the potential relationship between the domain of attitudes and performance, following a training exercise, working within the WOM paradigm.

Word-of-Mouth

For many years, researchers in marketing have explored the effects of pre-purchase information received by consumers. Researchers have studied the role of WOM messages, how changed expectations affect behavior, and what factors affect the power of WOM messages.

Consumer expectations have been shown to be affected more by WOM messages than by any other factor overall [25]. Interestingly, such messages more strongly affect expectations than past personal experience, advertising, and sales promotion. One reason that consumers allow themselves to be influenced by others is to "learn about products or services by observing others and/or seeking information from others" ([2], p. 474), in particular for uncertainty reduction [21]. Other reasons are identification with or enhancement of one's image among others, or willingness to conform to others' expectations.

Consistent with the work of Ginzberg [11] in MIS described previously, researchers in marketing have found that high expectations can lead to satisfaction, even when they are disconfirmed (e.g., see [18, 23]). It is therefore important to make sure that the WOM message is highly effective.

Many researchers have attempted to understand the effectiveness of WOM messages. Several studies detect stronger effects when information about a product is unfavorable rather than favorable, and when information is verbal rather than written (e.g., see [13]). Finally, Feick and Higie [9] show that preference heterogeneity and lack of coorientation (described later) can diminish the efficacy of WOM messages.

In summary, face-to-face WOM messages have proven to be powerful influences on consumer attitudes and behavior. The results of the MIS and marketing literature suggest that there is promise in attempting to merge the perspectives of both fields. One study that examined the impacts of interpersonal word-of-mouth communication among MIS users concluded that this impact was indeed stronger than the impact of advertising in the diffusion of end-user computing technology [4]. Our study continues the investigation into the effects of such communication on users.

Hypotheses

Based on the research of Bostrom et al. [3] and Davis and Bostrom [2, 7],2 as well as the marketing literature


2 The original models include motivation and usage, which could be classified as attitude and behavior measures, respectively. Our model is explicitly structured around the training outcomes we studied.

Figure 1. Research model of word-of-mouth influence (adapted from [7] and [11]). [Figure: the Target System and Training Approaches feed the Training Outcomes of Attitudes, Behavior, and Performance; Word of Mouth moderates these relationships.]

and the goals of our study, the model presented in Figure 1 is proposed. The model asserts that the relationships between selected training outcomes and the impacts of their antecedents are moderated by WOM peer influence. In our study, both the target system (a direct manipulation interface) and training approach (instruction) are fixed; we vary only the WOM information. In general, WOM messages have been powerful determinants of consumer expectations (e.g., see [25]), attitudes (e.g., see [13]), and intentions to purchase (e.g., see [20]).

H1: Post-training attitudes will be more favorable in the positive word-of-mouth condition than in the negative condition.

H2: Post-training intentions to purchase the software will be greater in the positive word-of-mouth condition than in the negative condition.

While the marketing literature focuses on the purchase decision, in the information systems setting, usage is often the only outcome variable that is appropriate. The purchaser of software is often a completely different party than the user; at issue in many cases is a psychological, rather than an economic, purchase.

Studies in TAM and TPB [14], and the Theory of Reasoned Action [1], demonstrate the importance of studying behavioral intentions as well as behavior. There is evidence to suggest high correlation between the two constructs (see [1]); however, the intention must refer specifically to the behavior. Because our laboratory setting could not meaningfully capture a subsequent intention and behavior, we have chosen to investigate one intention (to use the software in the future) and a slightly different behavior (optional usage at the end of the experimental session). Both variables are of great potential importance in our context, although they might not correlate very highly. We therefore express them as two separate, but related, hypotheses.

H3a: Post-training intentions to use the software again will be greater in the positive word-of-mouth condition than in the negative condition.

H3b: The amount of optional use by subjects will be greater in the positive word-of-mouth condition than in the negative condition.

The final training outcome in our model is a user's performance in using software. All other things being equal, a user with diminished intentions to use the software again is expected to be less motivated in accomplishing an experimental task, and perhaps less committed to learning it.

The importance of motivation for task performance has been discussed by [15] and [17], among others. The following exploratory hypotheses formalize the expected role of motivation and commitment to learning.

H4a: Post-training performance on a comprehension task will be higher in the positive word-of-mouth condition than in the negative condition.

H4b: Post-training performance on the experimental task will be higher in the positive word-of-mouth condition than in the negative condition.

Method

An experiment was conducted to identify the impact of peer influence on new users who learned to perform a task. There were several important guidelines in designing our study. It was important to maximize the effects of WOM communication. That is, it was necessary to base the study on face-to-face communication with peers and to provide multiple opportunities for recipients of WOM communication to become cognitively engaged in the message.

It is difficult to determine the preference heterogeneity of software [9]. The marketing literature tells us that most people will agree on evaluations of a taxi cab ride (low preference heterogeneity), while few will agree on a particular hair style (high preference heterogeneity). It is indeed possible that computer software is perceived very differently by different people, and that it is more like a hair style than a cab ride in the level of agreement likely to be generated by its users. Feick and Higie found that the WOM source's similarity to the recipient is very important when high preference heterogeneity exists, and that the source's experience is very important when there is low preference heterogeneity.

Therefore, we also made use of message sources


Figure 2. The temporary experimental laboratory. [Figure: floor plan of nine numbered workstations, with the confederates' seats marked.]


that were perceived as experienced while having similar values as the recipients, to mitigate the effects of high preference heterogeneity that might exist in evaluations of software.

Subjects and Incentives

Subjects were solicited from several sections of a required MBA class, presenting the opportunity to earn a $4 fee for participating, with a $3 bonus for performing above the mean, and additional prizes of between $9 and $18 if they were one of the top five performers.

All subjects in the sampling frame had received basic training in computing and spreadsheet applications nine months earlier, and since that time had received several assignments to be completed using PCs. Of the approximately 200 subjects we canvassed, 74 volunteered to participate in the study, and 53 actually attended their designated sessions.

The large number of "no-shows" caused the actual groups to be irregularly sized, with slight variation in the number of empty seats per group. Three subjects were dropped from analysis because the background questionnaire revealed they were already experienced with the somewhat rare integrated software package used in the study (described later).

Examination of all demographic measures revealed that the participating subjects mirrored the sampling frame (the single school's MBA program). The average age of the subjects was 27 (standard deviation 3.2) years, and 39 of the 50 subjects were males. The three largest majors represented were finance (42%), marketing (22%), and information systems (10%).

Experimental Design

Three separate types of experimental treatments were designed, termed positive, negative, and control, named after the type of word-of-mouth stimuli presented to the subjects in each group. The control group subjects received no stimuli. Although no hypotheses address the control group, we included that group to provide additional data useful in explaining further the possible rejection of any hypotheses.

Each of the three treatments was administered in four separate sessions due to limitations in the size of the temporary computer lab constructed for the study (see Figure 2). Subjects signed up for a convenient session, and we chose each group's treatment type randomly to ensure that every subject had an equal chance of being assigned to each treatment. The stations were arranged to reduce the ability of


Figure 3. The goal presented to the subjects. [Figure: a Framework III screen containing a text frame, a sales spreadsheet with Likely, Pessimistic, and Optimistic scenarios culminating in net profit, and a linked "Profit Picture" bar graph.]

subjects to see the work of others.

To ensure the three groups were roughly equivalent, we compared the groups on the basis of age, gender, and four other variables (extent of current use of PCs, regularity of use of word processors, regularity of use of spreadsheets, and the extent to which subjects were familiar with 21 computing devices). No differences were significant, except for the age variable.

The average age of subjects in the negative group was about 25, while the average age of subjects in the other two groups was about 28. We then removed the oldest 10 and youngest 9 subjects from our testing to equalize the ages among the groups, and found most of the statistical results (described later) to be similar to those when including all subjects. Also, we computed correlations of the age variable with all of the dependent variables and found none to be significant. Thus, we found no evidence that the age difference affected our results.

Materials

Subjects were given three packets during the course of the experiment.3 The first was distributed to subjects while giving them general instructions, and included an informed consent form for their signature, and a pre-experiment questionnaire. The questionnaire asked about their personal characteristics and past experiences with computing (described earlier).

The second packet contained what was called a "Quick Tour" of using Borland's integrated package Framework III. The Quick Tour was inspired by, but did not closely follow, the "Minimal Manual" approach by Carroll et al. [6]. The goal was for subjects to create a "frame" that contained other frames, including text, a spreadsheet, and a graph tied to the spreadsheet (see Figure 3).

The self-paced training packet contained eight pages of keystroke-by-keystroke instructions to explain how to accomplish the task. Although the act of typing the basic spreadsheet formulas was familiar to the subjects, as were the basic typing functions involved in the text frame, these functions did not dominate the exercise. Besides the primary new feature of organizing a hierarchical document and linking its components, other novel functions were the following: building frames, navigating among frames, building the particular model, addressing ranges, selecting multiple cells, creating the graph, saving the containing frame, and formatting cell information. It is particularly realistic to use a task in which some operations are familiar and others are not, as many new systems represent "upgrades" from old ones.

At the end of the packet, subjects were asked to perform "what-if" analysis twice to determine how the graph and profit results would change by altering some of the assumptions in the spreadsheet (in the top right frame). Subjects were invited to perform this analysis on their own (up to eight times) and further observe the effects of these changes. The number of times (measured automatically using macros) each subject performed "what-if" analysis on their own gave us one measure of behavior (voluntary use, without financial incentive). The macros also saved the files for our inspection later.

The third packet contained a post-experiment questionnaire, which assessed subjects' attitudes, behavioral intentions, and amount of retention. Most measures were new because there were no existing instruments allowing assessment of the particular type of package used. Subjects' reactions to the software were assessed by summing across seven 7-point semantic differential scales (see Appendix). Behavioral intentions were assessed by using two more scales (see Appendix) that asked how likely subjects would be to reuse or purchase the software under the assumption they possessed adequate resources and needs. A quiz consisting of 10 multiple-choice questions (see Appendix) measured comprehension task performance. Upon inspection of the saved spreadsheets, an experimental task performance score was assigned by deducting, from a possible score of 10 points, 1 point for each error in following the instructions.

Pilot Tests

The materials and procedures were pilot-tested in several iterations, resulting in many changes and refinements. The Quick Tour document presented many interesting problems. First, subjects sometimes became confused and needed to backtrack when setting up the screen layout. Instructions and macros were added to allow backtracking; the material was subdivided into what became eight small, independent steps. If a subject became lost, he or she could return to the beginning of that particular step by pressing an Alt+ sequence.

Experimental Procedures

The main part of the study was intended to furnish the positive or negative cues both before and during the training. This timing was chosen because of the


3 All packets can be obtained by contacting the first author.



potentially opposing effects of cognitive dissonance theory and contrast theory [11]. The former would predict that pre-training cues would facilitate development of attitudes consistent with those cues. However, contrast theory would predict that some subjects might then become surprised in the opposite direction, because they might raise or lower their expectations excessively. Therefore, additional outbursts during training were thought likely to reinforce the initial cues, strengthening the overall manipulation.

Upon the arrival of all subjects in a group, the experimenter described what was to be accomplished, and gave the name of the software package to be used. One of the confederates immediately raised his hand, asked for verification of the package's name, and then stated that he was already well familiar with Framework III. In the positive (negative) group, the confederate then expressed his favorable (unfavorable) perceptions of the package according to a prepared script. He also mentioned the high market share Framework holds in Europe (poor market performance in the U.S.). In the control group, the confederate expressed no opinions but merely told of his familiarity with the package. The confederate was then excused from the session and the experiment continued.

The remaining subjects were given the second packet and were asked to sit "at any computer" they wished. The remaining two confederates were always seated closest to the computers and were easily able to position themselves immediately at stations 2 and 6. The experimenter stood in the way of station 9, which was completely unable to run the software due to the lack of a hard drive.4

Macros were used for unobtrusive measurements. Several macros were invoked for a variety of purposes, including capturing the number of times users changed assumptions, saving the working document at various times, and presenting a title and exit screen summarizing the experimenter's verbal instructions.

On average, subjects worked on the task for 28 minutes (standard deviation 7.7). While the subjects worked, confederates in the positive (negative) groups made positive (negative) comments even though subjects were instructed to keep as silent as possible. For example, positive comments included several different loudly whispered expressions of admiration and excitement, while negative comments included groans, sighs, and loudly whispered expressions of how painful the exercise seemed. These scripted outbursts were timed at about 5–6 minute intervals to ensure a proper balance of salience and realism.

The experimenter reacted to the first outburst, asking if there was a question or problem, then reminded subjects to be as quiet as possible. Just before the second outburst (involving obviously sympathetic interaction between the two confederates), the experimenter walked out to the hall for a moment and returned as if he was not aware of the outburst. Just before the third (large) outburst, the experimenter walked quickly to the door as if there was an urgent message and seemed to miss it once again. Depending on timing, the fourth or fifth (minor) outburst was scheduled to occur at about the time when the experimenter was preoccupied distributing packet 3 to the first real subject who completed the task. The experimenter again admonished subjects lightly for talking.

Minor outbursts continued at fairly regular intervals as subjects began to hand in their work. These outbursts were kept brief and were only meant to "renew" the salience of the joy or frustration for the remaining subjects. We considered it necessary to treat the outbursts as unsanctioned behavior because we needed to control the setting as much as possible. If real subjects were allowed to speak out loud, then each group would be different; our level of analysis would then be at the group rather than at our target, the individual level. Therefore, the risk of making subjects uncomfortable with outbursts was considered to be outweighed by the added control from ensuring that all groups within a treatment were exposed to identical messages.

After collecting packet 3, each subject was thanked, paid, and followed out into the hallway. As a manipulation check, the experimenter asked each subject if the outbursts distracted him or her. He then reminded the subject not to discuss any aspect of the experiment (including the outbursts) with anyone else until the experiment was completed (in three days).

The manipulation check was not very effective because subjects seemed reluctant to disclose that they heard the outbursts. About a third of the subjects said they heard nothing, but when the experimenter pointed in the general direction of machines 2 and 6, all but five subjects suddenly acknowledged that they were aware of the outbursts, acting as if they had at first misunderstood the question. This conflict between their initial and revised answers was puzzling


4 Workstation 9 was never intended to be used, but the lack of a ninth station might have raised subjects' suspicions.


to the experimenters, who speculated that the subjects might indeed have been made a bit uncomfortable by the outbursts. Most of the subjects quickly assured the experimenter that the outbursts did not affect their performance.

The procedure met the goals described earlier. Face-to-face WOM statements were provided by using confederates. Cognitive engagement was maximized through consistent and repetitive outbursts. Source experience and peer similarity were simultaneously achieved by using multiple student confederates.

Due to the small number of subjects, univariate t-tests comparing only the positive and negative groups were used to test each hypothesis. This is appropriate because no non-hypothesized relationships were explored.
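As a concrete illustration of this procedure (not the authors' own code), a pooled-variance two-sample t statistic can be computed directly from cell means, standard deviations, and group sizes of the kind reported in the tables. The function name and the use of the rounded Table 1 attitude statistics are our assumptions:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-variance two-sample t statistic for independent groups."""
    # Pooled variance weights each group's variance by its degrees of freedom.
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))  # standard error of the difference
    return (mean2 - mean1) / se              # positive when group 2 scores higher

# Rounded attitude-score statistics from Table 1:
# negative group (n=16) vs. positive group (n=16).
t = pooled_t(31.6, 7.8, 16, 37.1, 8.6, 16)
print(round(t, 2))
```

With the rounded inputs, the statistic comes out near 1.89 on 30 degrees of freedom, consistent with the reported t=1.87; the small discrepancy is attributable to rounding of the published cell statistics.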

Results

Results will be discussed in three sections, covering user attitudes, behavior and behavioral intentions, and performance.

Attitudes. Cell means for the custom-made 7-item attitude scale5 are shown in Table 1. As Hypothesis 1 predicts, the negative group is significantly lower than the positive group.
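Footnote 5 reports a Cronbach's alpha of 0.89 for this custom scale. As a reminder of what that reliability statistic measures, the sketch below computes the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of totals); the tiny respondents-by-items data set is hypothetical, chosen so the result is easy to verify by hand (perfectly consistent items give alpha = 1):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item-score lists."""
    k = len(scores[0])                      # number of items in the scale
    items = list(zip(*scores))              # one tuple of scores per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])  # variance of scale totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 4 respondents x 3 items; items agree perfectly, so alpha = 1.
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```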

Behavior and Behavioral Intentions. Behavior was examined using conventional behavioral intentions surrogates, and also using a new approach: counting the number of voluntary "what-if" analysis iterations performed by subjects. Subjects were asked if they would use the software again, and if they would purchase the software.6 Cell means (see Table 2) both differed as predicted by Hypotheses 2 and 3a. Once again, the control group appears to be more similar to the positive group than to the negative group.

In accordance with Hypothesis 3b, actual behavior was examined. Subjects in the two groups appeared to perform about the same number of additional, voluntary "what-if" analysis iterations (see Table 3). Although the control group mean appears to be larger than either of the experimental group means, even the most liberal testing failed to indicate a significant difference, as did the appropriate, conservative post-hoc comparisons [12].

We also investigated the degree to which the optional iterations correlated with intentions to purchase or use the software. Both correlations were non-significant, and even if they had been significant, only 1% of the variation in the number of optional iterations would be explainable by knowing each of the intentions.
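The "1% of the variation" figure is the squared correlation coefficient: a correlation of about r = 0.1 implies r^2 = 0.01, so knowing an intention would explain only 1% of the variance in optional iterations. A minimal sketch with hypothetical paired scores (chosen for easy hand-checking):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    dx = [v - mx for v in x]
    dy = [v - my for v in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
    return num / den

r = pearson_r([0, 1, 2, 3], [1, 0, 3, 2])  # hypothetical paired scores
print(r, r * r)  # r = 0.6, so r^2 = 0.36: 36% of variance shared
```

By the same arithmetic, the r=0.78 correlation between the two intention items (footnote 6) corresponds to roughly 61% shared variance.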

Performance. The final, exploratory hypothesis was examined by measuring two performance variables: a 10-point quiz score and a 10-point task score. As Table 4 illustrates, mixed results were obtained for Hypothesis 4. Means for the quiz scores for negative and positive group subjects differed significantly in the hypothesized direction, but not the means for the task score.

Discussion

There were significant findings in each of the three categories we examined. These findings appear to support the assertion that word-of-


Table 1. Attitude Scores
Mean scores on semantic differential items indicating overall attitudes about the software (sum of 7 items; 7 = positive; 1 = negative)

                          Negative      Control       Positive      Negative vs. Positive
                          Group n=16    Group n=18    Group n=16    (1-tailed)
  Mean score (std. dev.)  31.6 (7.8)    36.3 (6.5)    37.1 (8.6)    t=1.87; p=0.035*

Table 2. Behavioral Intentions Scores
Mean scores on semantic differential items (7 = positive; 1 = negative)

                             Negative      Control       Positive      Negative vs. Positive
                             Group n=16    Group n=18    Group n=16    (1-tailed)
  Likelihood of using again  4.0 (1.6)     5.1 (1.9)     5.4 (1.8)     t=2.40; p=0.012*
  Likelihood of purchasing   3.4 (2.0)     4.4 (2.0)     4.8 (2.0)     t=1.94; p=0.031*

Table 3. Behavior Scores
Mean number of times subject performed optional task (minimum of 0; maximum of 8)

                    Negative      Control       Positive      Negative vs. Positive
                    Group n=16    Group n=18    Group n=16    (1-tailed)
  Number of times   0.9 (1.4)     1.9 (2.2)     0.9 (1.2)     t=–0.01; ns

Table 4. Performance Scores
Mean scores on quiz and task (higher score indicates better performance)

                           Negative      Control       Positive      Negative vs. Positive
                           Group n=16    Group n=18    Group n=15    (1-tailed)
  Score on 10-item quiz    8.1 (1.3)     8.8 (1.0)     8.8 (0.9)     t=1.85; p=0.038*
  Score on 10-point task   9.7 (0.6)     9.2 (1.4)     9.2 (1.3)     t=–1.42; ns

*significant at the p=0.05 level; ns = not significant at the p=0.05 level

5 Cronbach's alpha = 0.89

6 The two behavioral intention scores were highly correlated (r=0.78; p<0.001), exhibiting satisfactory levels of concurrent validity.


mouth communication can be a significant and important determinant of training outcomes, namely, attitudes, behavior, and performance. Along with a short discussion of each of the major findings, we discuss some of the limitations of the study.

Major Findings. Our subjects who were exposed to unfavorable WOM statements appeared to adopt unfavorable attitudes toward the software, in comparison to the subjects exposed to positive statements. Interestingly, the control group mean appears to be virtually equivalent to that of the positive group, which would suggest that negative WOM comments are more potent than positive comments. This finding is consistent with that of much of the marketing literature, which has found that negative WOM is more salient and hence has more of an impact than positive WOM: "negative information tends to be more diagnostic or informative than positive or neutral information" ([13], p. 460).

There are two alternative explanations for the apparent lack of effect in the positive group. First, our positive treatment might simply not have been as convincing as our negative treatment. Another possible explanation might be that the use of word-of-mouth communication during training may have had offsetting effects. Positive outbursts by the confederates may have indeed created in the subjects a more positive frame of mind, but the outbursts may also have created somewhat of a negative feeling toward the confederates or the setting because of the distractions they created. These conflicting feelings might have neutralized the positive treatment. Nevertheless, the WOM literature would support our first explanation.

Both measures of behavioral intentions appeared to differ as a result of the negative WOM communication. However, actual behavior (voluntary iterations of "what-if" analysis) did not differ between groups. Several alternative explanations can account for this unexpected finding.

Perhaps the additional work was not meaningful for the subjects, or perhaps they simply tried to finish quickly. Another possible explanation is the skewness of the results; nearly half of the subjects performed none of the optional iterations, and almost 10% performed only one of the two required iterations. Finally, different subjects might have possessed different types of motivation for continuing the iterations; experimental artifacts, such as subjects attempting to please (or displease) the experimenters, might have influenced subject behavior more than the treatment did.
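The skewness explanation can be made concrete with a toy computation (all counts invented): when nearly half the subjects perform zero optional iterations, the distribution is strongly right-skewed, which weakens mean-based comparisons between groups.

```python
# Toy illustration of the skewness issue: a distribution of optional
# "what-if" iteration counts with a large spike at zero and a long tail.
# The counts below are invented for demonstration only.
def skewness(xs):
    """Sample Fisher-Pearson skewness coefficient (biased form)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - m) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

# Hypothetical counts: half the subjects did no optional iterations
iterations = [0, 0, 0, 0, 0, 1, 1, 2, 3, 6]
print(round(skewness(iterations), 2))
```

A skewness coefficient well above zero confirms the long right tail; a distribution like this violates the symmetry assumptions behind simple mean comparisons.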

Performance results were also mixed. The quiz score appeared to be significantly different between the positive and negative groups, but not the actual task score. The lack of variability of the task score might explain the failure to detect a significant difference. Over half of the subjects (31) received perfect scores on the task, and only six subjects made more than one error. There was much more variation in the quiz scores (although quiz scores were also quite high). Our pilot testing failed to reveal this lack of discrimination in the task score; future studies should be conducted with more difficult tasks and testing.
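The ceiling effect described above can be illustrated with a toy computation (all numbers invented): when observed scores are capped at the maximum attainable task score, a real between-group difference shrinks in the observed data, making it harder to detect.

```python
# Toy illustration of a ceiling effect: capping scores at the maximum
# compresses the between-group difference a statistical test must detect.
# The latent "true" performance values below are invented.
MAX_SCORE = 10

positive_group = [9, 11, 12, 10, 13, 11, 12, 10]  # hypothetical latent scores
negative_group = [8, 10, 11, 9, 12, 10, 11, 9]

def mean(xs):
    return sum(xs) / len(xs)

latent_diff = mean(positive_group) - mean(negative_group)

# Observed scores cannot exceed the maximum attainable task score
obs_pos = [min(s, MAX_SCORE) for s in positive_group]
obs_neg = [min(s, MAX_SCORE) for s in negative_group]
observed_diff = mean(obs_pos) - mean(obs_neg)

print(latent_diff, observed_diff)  # observed difference is smaller
```

This is why a harder task, which pulls scores away from the ceiling, would give a fairer test of the performance hypothesis.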

While most of the hypotheses were fully or partially supported by the data, there are several other limitations of this study. First, the subjects were MBA students in a laboratory, not working professionals using the software to accomplish real tasks. However, nearly all of the MBA students from whom the sample was drawn have previous work experience, which increases the extent to which we can generalize the results. Also, the time period of the study was extremely short; the entire cycle from the WOM communication to task performance was completed in under an hour. A future field study examining long-term effects could be an especially fruitful venture, although such assessment will be difficult without the strong controls available in an experimental setting. Finally, the random assignment of treatments to groups, rather than subjects to treatments, made us vulnerable to systematic dependent-variable differences. However, our testing revealed that our final results were unaffected by bias in subject assignment.

Implications
In this study we demonstrated how word-of-mouth communication can affect the outcomes of training, even holding constant the characteristics of the software and the training approach. The findings show that the same training method will be less successful when trainees are exposed to negative WOM rather than positive WOM. Trainers concerned with end-user attitudes and performance need to be wary of negative WOM before, during, and after training, because such communication can undermine training efforts. Both researchers and practitioners can benefit from the implications of this experiment. In general, this study has provided evidence that researchers should consider WOM communication as one possibly important determinant of trainee attitudes, behavior, and even performance. Also, there are many unanswered questions pertaining to the preference heterogeneity of software.

Although software would initially appear to be quite heterogeneous in how it is perceived by users, and thus less susceptible to word-of-mouth effects, perhaps another factor could explain the strength of our results. Many of the items offered as examples in the definitions by marketing researchers (e.g., hair styles, restaurant dinners) are highly heterogeneous in the extent to which different people react to the same treatment, and therefore word-of-mouth communications are not as valuable in predicting a given person’s reaction.

COMMUNICATIONS OF THE ACM July 1995/Vol. 38, No. 7

Interestingly, highly complex computer software is likely to be even higher in preference heterogeneity; software opinions tend to be passionate and highly varied. What, then, could possibly explain the value of WOM communication? The answer is likely related to the high amount of effort involved in personally experiencing software. This effort raises the value of others’ negative opinions; a bad report could save a person significant installation and learning time.

Researchers should investigate the reasons positive WOM communication was ineffective in enhancing training outcomes in our study. It would be useful to determine if negative information indeed tends to be more informative, if our positive treatment was not as convincing, or if distraction is a key difficulty in disseminating positive WOM. Perhaps future research could also determine the potential effects of varying characteristics of the source or characteristics of individual subjects.7

Practitioners should probably pay more attention to detecting any rumors and gossip that occur before new software is introduced. The potential effects of negative WOM communications on training outcomes should be of great concern, whether the software is produced or purchased, required or optional. In the case of required software, user attitudes, behavior, and performance can suffer. In the case of optional software, users who have been exposed to negative communications can choose to adopt unsupported packages or fail to adopt any software, both undesirable outcomes.

The new evidence of a performance effect can be important to consider when designing training programs; some time should probably be spent conveying accurate information to users before beginning the sessions, perhaps including videos of peer testimonials.

Although theoretical and empirical findings tell us that positive word-of-mouth information is not likely to help significantly, it might be very valuable to at least maintain active, open, and honest communication with users. Such communication can help detect negative sentiment early enough to correct any misunderstandings and perhaps take the opportunity to make favorable changes where the negative communications are correct.

User attitudes, behavior, and performance are important but elusive information systems training outcomes. By extending our understanding of these variables and their interrelationships, we might avoid needless difficulties when introducing new systems.

Acknowledgments
The authors would like to thank Kieran Mathieson for comments on earlier versions of this article. An earlier version of this article was presented at the Fifteenth International Conference on Information Systems, December 1994, Vancouver. Please contact the first author for a copy of the experimental materials.


Appendix. Questionnaire Items

Software Evaluation

Overall reactions to the software (7-point scales; 1 = negative anchor, 4 = neutral, 7 = positive anchor):

    terrible          1  2  3  4  5  6  7  wonderful
    frustrating       1  2  3  4  5  6  7  satisfying
    dull              1  2  3  4  5  6  7  stimulating
    difficult         1  2  3  4  5  6  7  easy
    inadequate power  1  2  3  4  5  6  7  adequate power
    rigid             1  2  3  4  5  6  7  flexible
    not useful        1  2  3  4  5  6  7  useful

Future Use

How likely are you to use this software again if given the opportunity?

    never             1  2  3  4  5  6  7  definitely

Assuming you have the resources and need, would you purchase this software?

    never             1  2  3  4  5  6  7  definitely

(4 = neutral on both Future Use scales)

Quiz items

Ten multiple-choice questions asked subjects how to:

1. Combine spreadsheets, text, and graphics
2. Size frames
3. “Jump” inside a frame
4. Create a graph
5. Set margin width
6. Underline cells
7. Perform “what-if” analysis
8. Select more than one cell
9. Cancel a selection
10. Describe the overall features available
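As an illustration only (the article does not publish its scoring procedure), the seven software-evaluation items above could be aggregated into a single attitude score by averaging, since every scale runs from the negative anchor (1) to the positive anchor (7) with no reverse-coded items. The item names and responses below are invented:

```python
# Minimal sketch (not from the article) of scoring the 7-point semantic
# differential software-evaluation items: all items run negative (1) to
# positive (7), so a simple mean yields an overall attitude score.
ITEMS = [
    "terrible-wonderful", "frustrating-satisfying", "dull-stimulating",
    "difficult-easy", "inadequate-adequate power", "rigid-flexible",
    "not useful-useful",
]

def attitude_score(responses):
    """Mean of the seven 1-7 item responses; rejects malformed input."""
    if len(responses) != len(ITEMS):
        raise ValueError("expected one response per item")
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("responses must lie on the 1-7 scale")
    return sum(responses) / len(responses)

# Invented example response set
print(attitude_score([5, 6, 4, 5, 6, 5, 4]))
```

Averaging is only defensible when the items form a single scale, which is why instruments like this are typically checked for inter-item reliability before scores are aggregated.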

7 For example, see work by Petty and Cacioppo [19], who investigate source credibility and the ability of a receiver to process a message.


References
1. Ajzen, I., and Fishbein, M. Understanding Attitudes and Predicting Social Behavior. Prentice-Hall, Englewood Cliffs, NJ, 1980.
2. Bearden, W.O., Netemeyer, R.G., and Teel, J.E. Measurement of consumer susceptibility to interpersonal influence. J. Consumer Research 15 (1989), 473–481.
3. Bostrom, R.P., Olfman, L., and Sein, M.K. The importance of learning style in end-user training. MIS Q. 14, 1 (1990), 101–119.
4. Brancheau, J.C., and Wetherbe, J.C. The adoption of spreadsheet software: Testing innovation diffusion theory in the context of end-user computing. Info. Syst. Research 1, 2 (1990), 115–143.
5. Card, S., Moran, T., and Newell, A. The Psychology of Human-Computer Interaction. Erlbaum, Hillsdale, NJ, 1983.
6. Carroll, J.M., Smith-Kerker, P.L., Ford, J.R., and Mazur-Rimetz, S.A. The minimal manual. Human-Computer Interaction 3, 2 (1987–88), 123–154.
7. Davis, S.A., and Bostrom, R.P. Training end users: An experimental investigation of the roles of the computer interface and training methods. MIS Q. 17, 1 (1993), 61–85.
8. DeLone, W.H., and McLean, E.R. Information systems success: The quest for the dependent variable. Info. Syst. Research 3, 1 (1992), 60–95.
9. Feick, L.F., and Higie, R.A. The effects of preference heterogeneity and source characteristics on ad processing and judgments about endorsers. J. Advertising 21, 2 (June 1992), 9–23.
10. Galletta, D.F., Abraham, D., El Louadi, M., Lekse, W., Pollalis, Y., and Sampler, J. An empirical study of spreadsheet error-finding performance. Accounting, Management, and Information Technology 3, 2 (1993), 79–95.
11. Ginzberg, M.J. Early diagnosis of MIS implementation failure. Manage. Sci. 27, 4 (1981), 459–478.
12. Hays, W.L. Statistics, 2nd ed. Holt, Rinehart, and Winston, NY, 1988.
13. Herr, P.M., Kardes, F.R., and Kim, J. Effects of word-of-mouth and product-attribute information on persuasion: An accessibility-diagnosticity perspective. J. Consumer Research 17 (1991), 454–462.
14. Mathieson, K. Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior. Info. Syst. Research 2, 3 (Sept. 1991), 173–191.
15. Moran, T.P. Guest editor introduction: Special issue on the psychology of human-computer interaction. Comput. Surv. 13, 1 (1981), 1–6.
16. Olfman, L. A Comparison of Applications-Based Training Methods for DSS Generator Software. Doctoral dissertation, Indiana University, 1987.
17. Olfman, L., and Bostrom, R.P. End-user software training: An experimental comparison of methods to enhance motivation. J. Info. Syst. 1, 4 (1991), 249–266.
18. Oliver, R.L., and DeSarbo, W.S. Response determinants in satisfaction judgments. J. Consumer Research 14 (Mar. 1988), 495–507.
19. Petty, R.E., and Cacioppo, J.T. The elaboration likelihood model of persuasion. In L. Berkowitz, Ed., Advances in Experimental Social Psychology, vol. 19. Academic Press, NY, 1986.
20. Richins, M.L. Negative word-of-mouth by dissatisfied consumers: A pilot study. J. Marketing 47 (Winter 1983), 68–78.
21. Rogers, E.M. Diffusion of Innovations, 3rd ed. Free Press, NY, 1983.
22. Sein, M.K. Conceptual Models in Training Novice Users of Computer Systems: Effectiveness of Abstract vs. Analogical Models and Influence of Individual Differences. Doctoral dissertation, Indiana University, 1988.
23. Tse, D.K., and Wilton, P.C. Models of consumer satisfaction formation: An extension. J. Marketing Research 25 (May 1988), 204–212.
24. Vessey, I., and Galletta, D.F. Cognitive fit: An empirical study of information acquisition. Info. Syst. Research 2, 1 (1991), 63–84.
25. Webster, C. Influences upon consumer expectations of services. J. Services Marketing 5, 1 (Winter 1991), 5–17.

About the Authors:
DENNIS F. GALLETTA is an associate professor of Business Administration at the Katz Graduate School of Business, University of Pittsburgh. Current research interests include end-user computing and user training, particularly in the areas of end-user attitudes, behavior, and performance. email: [email protected]; URL: http://www.pitt.edu/~galletta

MANJU K. AHUJA is a doctoral candidate in Management Information Systems at the University of Pittsburgh. Current research interests include computer-supported cooperative work, virtual organizations, organizational learning, and management of systems development. email: [email protected]

AMIR HARTMAN is a doctoral candidate at the Katz School of the University of Pittsburgh. Current research interests include social and organizational implications of information technology, sociology of scientific inquiry, and telecommunications policy. email: [email protected]

A. GRAHAM PEACE is a doctoral candidate in Management Information Systems at the University of Pittsburgh. Current research interests include software piracy, privacy issues, accessibility, and systems analysis and design. email: [email protected]

Authors’ Present Address: Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, PA 15260.

THOMPSON TEO is a lecturer in the Faculty of Business Administration at the National University of Singapore. Current research interests include strategic impacts of information technology, adoption and diffusion of information technology, telecommuting, and end-user training. Author’s Present Address: National University of Singapore, Department of Decision Sciences, 10 Kent Ridge Crescent, Singapore 0511; email: [email protected]

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© ACM 0002-0782/95/0700 $3.50
