
Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR

Muhammad F. Walji a,∗,1, Elsbeth Kalenderian b,1, Mark Piotrowski a, Duong Tran a, Krishna K. Kookal a, Oluwabunmi Tokede b, Joel M. White c, Ram Vaderhobli c, Rachel Ramoni d, Paul C. Stark e, Nicole S. Kimmes f, Maxim Lagerweij g, Vimla L. Patel h

a The University of Texas Health Science Center at Houston, United States
b Harvard School of Dental Medicine, United States
c University of California, San Francisco, United States
d Harvard Medical School, United States
e Tufts University, United States
f Creighton University, United States
g Academic Centre for Dentistry Amsterdam (ACTA), The Netherlands
h The New York Academy of Medicine, United States

Article history: Received in revised form 21 January 2014; Accepted 23 January 2014

Keywords: EHR; Usability; Human factors; Methodology

Abstract

Objective: To comparatively evaluate the effectiveness of three different methods involving end-users for detecting usability problems in an EHR: user testing, semi-structured interviews and surveys.

Materials and methods: Data were collected at two major urban dental schools from faculty, residents and dental students to assess the usability of a dental EHR for developing a treatment plan. These included user testing (N = 32), semi-structured interviews (N = 36), and surveys (N = 35).

Results: The three methods together identified a total of 187 usability violations: 54% via user testing, 28% via the semi-structured interview and 18% from the survey method, with modest overlap. These usability problems were classified into 24 problem themes in 3 broad categories. User testing covered the broadest range of themes (83%), followed by the interview (63%) and survey (29%) methods.

Discussion: Multiple evaluation methods provide a comprehensive approach to identifying EHR usability challenges and specific problems. The three methods were found to be complementary, and thus each can provide unique insights for software enhancement. Interview and survey methods were found not to be sufficient by themselves, but when used in conjunction with the user testing method, they provided a comprehensive evaluation of the EHR.

Conclusion: We recommend using a multi-method approach when testing the usability of health information technology because it provides a more comprehensive picture of usability challenges.

© 2014 Published by Elsevier Ireland Ltd.

∗ Corresponding author at: The University of Texas Health Science Center at Houston, School of Dentistry, Department of Diagnostic and Biomedical Sciences, 7500 Cambridge Street, Suite 4.184, Houston, TX 77054, United States. Tel.: +1 713 486 4276; fax: +1 713 486 4435. E-mail address: [email protected] (M.F. Walji).
1 These authors contributed equally to this work.


1. Introduction

Electronic health records (EHRs) are increasingly being adopted by healthcare providers, who are attracted by financial incentives and the promise of improved quality, efficiency and safety [1–5]. However, usability issues are cited as one of the major obstacles [6–8] to widespread adoption. Good usability, the goal of user-centered design [9], is meant to ensure that a technology will empower the user to effectively and efficiently complete work tasks with a high degree of satisfaction and success. Conversely, poor EHR usability has been shown to reduce efficiency, decrease clinician satisfaction, and even compromise patient safety [10–19].

Usability evaluations are conducted at different stages of EHR development and deployment and vary in their approach and focus [20]. Formative usability assessments occur throughout the software development lifecycle. The purpose of these evaluations is to identify and fix usability issues as they emerge during the development stages of a system, assuring the quality of the tool, which in the long run is cost effective in terms of time and effort [21,22]. According to the National Institute of Standards and Technology (NIST) EHR Usability Protocol, "the role of formative testing is to support design innovations that would increase the usefulness of a system and provide feedback on the utility of an application to an intended user" [23]. Summative usability assessments occur after development of a system is complete and before the release of a product or application, where the goal "is to validate the usability of an application in the hands of the intended users, in the context of use for which it was designed, performing the typical or most frequent tasks that it was designed to perform" [23]. Recognizing the importance of usability to safety, summative usability testing is a requirement of the 2014 EHR certification criteria as part of Meaningful Use [24].

A key challenge is the selection of an appropriate method, or combination of methods, to efficiently and effectively assess usability. Evaluation methods vary and may be conducted solely by experts or involve end users (the individuals who will use the product) [25–27]. Expert reviews, such as heuristic evaluation, involve usability specialists who systematically inspect EHR platforms and identify violations of best-practice design [28,29]. These reviews are typically conducted in order to classify and prioritize usability problems [30,31]. Specialists may also build cognitive models to predict the time and steps needed to complete specific tasks [32]. End users, such as clinicians and allied health professionals who will actually use the system in the real environment, may participate in usability testing, in which they are given tasks to complete and their performance is evaluated [33]. Trained interviewers can also elicit useful qualitative data through semi-structured, guided interviews and by administering questionnaires.

Surveys are another method to assess the usability of a product in development. Surveys measure user satisfaction, perceptions and evaluations [34]; they have the advantages of being inexpensive to administer, yielding results quickly, and providing insight into the user's experience with the product. In addition, open-ended survey questions provide deeper insight into various usability concerns.

Each type of usability assessment method has benefits and drawbacks related to ease of conducting the study, predictive power and generalizability [27]. No single approach will answer all questions, because each approach can identify only a subset of usability problems; therefore, it is reasonable to assert that a combination of different techniques will complement each other [12,26,35–37]. To our knowledge, no study has compared usability testing methods incorporating user participation (e.g., user testing, semi-structured interviews, and open-ended surveys) in terms of effectiveness in detecting usability problems in EHRs.

2. Methods

Our investigation is a secondary analysis of data reported in a previous study [38], in which we used two methods – user testing and semi-structured interviews – to identify usability challenges faced by clinicians in finding and entering diagnoses selected from a standardized diagnostic terminology in a dental EHR. User testing results showed that only 22–41% of participants were able to complete a simple task, and no participants completed a more complex task. We previously identified a total of 24 usability problem areas, or themes, related to the interface, diagnostic terminology or clinical workflow.

In this report we re-analyze the qualitative and quantitative data collected in our previous study, and we include the results from a usability survey that we subsequently administered to users concerning the same task within the EHR. Our goal is to identify and compare the frequency and type of usability problems detected by these three usability methods: user testing, interviews and open-ended surveys. A limitation of our study design is that subjects were not randomly assigned to one of the three usability methods (user testing, interviews or surveys); therefore, we could not determine whether the characteristics of the subjects influenced the identification of usability problems.

In the original study, data were collected at two dental schools: Harvard School of Dental Medicine (HSDM) and the University of California, San Francisco, School of Dentistry (UCSF). Both sites used the same version of the axiUm EHR system and the 2011 version of the EZCodes Dental Diagnostic Terminology [39]. Study participants were representative of clinical providers who practice in an academic setting, namely third and fourth year dental students, advanced graduate education students (residents) and attending dentists (faculty) (Table 1).

The methods of our prior work have been reported in detail previously [38]; however, the following is a brief synopsis of the methodologies used:

2.1. User testing method

Investigators from the University of Texas Health Science Center at Houston (UTHealth) conducted a usability evaluation over a three-day period at both HSDM and UCSF with 32 end-users. Participants performed a series of realistic tasks, which had been developed in collaboration with dentists on the research team. During the testing, subjects were asked to think aloud and verbalize their thoughts as they worked through two assigned scenarios. The performance of the participants and their interaction with the system were recorded using Morae Recorder Version 3.2 (TechSmith). Hierarchical Task Analysis (HTA) was used to model and define an appropriate path that users needed to follow to complete each task. The optimal time to complete the tasks was calculated using the keystroke-level model (KLM) [40] and a software tool called CogTool [41].
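For illustration, a KLM estimate is simply the sum of standard operator times over a modeled action sequence; tools such as CogTool automate this style of calculation. The minimal Python sketch below uses the published Card, Moran and Newell operator estimates, but the action sequence is hypothetical, not the study's actual HTA path:

```python
# Keystroke-level model (KLM) sketch: estimate expert task time by
# summing standard operator durations (seconds) from Card, Moran &
# Newell [40]. The action sequence below is invented for illustration;
# it is not the path modeled in the study.
KLM_SECONDS = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with the mouse at a target
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mentally prepare for the next action
}

def klm_time(sequence: str) -> float:
    """Sum operator durations, ignoring spaces used for readability."""
    return sum(KLM_SECONDS[op] for op in sequence if op != " ")

# Hypothetical path: prepare, mouse to the search box, click, type an
# 8-character diagnosis search term, prepare, point at a result, click.
path = "M H P K" + " K" * 8 + " M P K"
print(f"Estimated expert completion time: {klm_time(path):.1f} s")  # ~8.1 s
```

Comparing a participant's recorded completion time against such an expert baseline yields the efficiency measure used in the analysis.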

2.2. Semi-structured interview method

During the same site visits at HSDM and UCSF, investigators also interviewed a convenience sample of 36 participants (members of clinical teams) in thirty-minute sessions, with the goal of capturing the subjects' experiences with the axiUm EHR, specifically the EZCodes dental terminology, workflow and interface. Interviewers used a semi-structured interview format, i.e., asking a series of open-ended questions to prompt unbiased responses. Overlap of participants between the user testing and semi-structured interview methods did occur but was not tracked; it was estimated that less than 50% of subjects participated in both user testing and interviews. The interviews were recorded, transcribed and analyzed for themes. Identified problems were categorized and tabulated using Microsoft Excel 2010.

2.3. Survey method

After the site visits, a survey was conducted with a sample of 35 subjects composed of third and fourth year dental students, advanced graduate education students and faculty at HSDM and UCSF. Questionnaires were sent to subjects, asking them to respond to 29 statements and four open-ended questions regarding a user's view of the dental diagnostic terminology and its use in the EHR. Only the four open-ended questions were analyzed for this study. They focused on the following: (1) the need to add diagnoses missing from the terminology, (2) the need to remove diagnoses from the terminology, (3) the need to re-organize diagnoses, and (4) barriers to entering diagnoses in the EHR.

2.4. Data organization and analysis

Usability problems were identified when users struggled during a task, encountered an overt impediment to their performance, or reported a specific problem or concern. We re-analyzed the collected qualitative data to identify usability problems; we then mapped these problems to the 24 overarching themes that had been pre-identified in our previous work [38]. All problems could be mapped to these previously identified themes. In this study, the 24 themes were grouped into three categories: (1) EHR user interface (10 themes), (2) diagnostic terminology (5 themes), and (3) clinical work domain and workflow (9 themes). We converted the data into tabular form (Table 3), in which each column was headed by a usability method; we ordered the rows by theme, stratified within subject category. This mapping was conducted by one researcher and independently verified by another.

We computed the percentage of usability problems identified by each of the methods. The overlap of themes detected by each method is shown graphically in Fig. 1, and the number of themes identified as a function of the number of users is presented in Fig. 2. Data were analyzed using the statistical software package R (Version 2.11.1).

Fig. 1 – Overlap of usability themes for each method.
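For illustration, once each finding is coded as a (method, theme) pair, the per-method percentages (Table 2) and the cross-method theme overlap (Fig. 1) reduce to simple tallies and set intersections. The minimal Python sketch below uses made-up theme labels; the study's actual coded data are not reproduced here:

```python
from collections import Counter

# Each coded finding is a (method, theme) pair. These few rows are
# invented for illustration; the study coded 187 findings against
# 24 pre-identified themes.
findings = [
    ("user_testing", "time_consuming_entry"),
    ("user_testing", "inconsistent_widget_naming"),
    ("interview", "time_consuming_entry"),
    ("interview", "one_diagnosis_per_treatment"),
    ("survey", "time_consuming_entry"),
]

# Share of all findings detected by each method (cf. Table 2).
per_method = Counter(method for method, _ in findings)
total = sum(per_method.values())
for method, n in per_method.items():
    print(f"{method}: {n}/{total} findings ({100 * n / total:.0f}%)")

# Theme sets per method, and themes found by every method (cf. Fig. 1).
theme_sets = {}
for method, theme in findings:
    theme_sets.setdefault(method, set()).add(theme)
print("found by all methods:", set.intersection(*theme_sets.values()))
```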

3. Results

3.1. Participant demographics

The typical study participant was a male who practiced six sessions (half days) per week and was experienced in using technology. The percentage of males was slightly higher in the survey group than in the user testing and interview groups. In general, all subjects had prior experience with computers. About 75% of subjects in the user testing and interview groups had ample experience using EHRs, but this percentage was lower in the survey group (Table 1).

Table 1 – Participant demographics.

Participants                                  User testing (N = 32)   Interview (N = 36)   Survey (N = 35)
Percentage of males (%)                       55                      58                   62
Average clinic sessions/week                  6                       5                    7
Experienced using computer (%)                85                      86                   80
Experienced with the use of the EHR (%)       76                      75                   65


3.2. Findings

A total of 187 problems were detected by the three methods: 54% through user testing, 28% by interview and 18% by survey. The frequency of usability-problem findings is presented for each method, categorized by theme and grouped by related usability problems, in Tables 2 and 3. The user testing method identified the most themes (83%), followed by the interview (63%), while the survey identified the fewest themes (29%). Taken together, the user testing and interview methods identified 23 of the 24 themes. The survey uniquely identified one theme.

The user testing method identified all (100%) of the user interface-related themes, 80% of the diagnostic terminology-related themes and 67% of the work domain and workflow-related themes. The interview method captured 80%, 60% and 33%, respectively, while the survey method captured 30%, 40% and 22%. The user testing method was thus most effective in identifying problems, followed by the interview and survey methods. For all three methods, the user interface-related category of themes was the most frequently cited, followed by the diagnostic terminology-related and then the work domain and workflow-related themes. As shown in Fig. 1, the three usability-testing methods conjointly identified five of the 24 themes.


The user testing and interview methods had 12 overlapping themes, followed by user testing and the survey method with six overlapping themes, while the interview and survey methods had five overlapping themes. The five common themes identified by all three methods were: (1) time consuming to enter a diagnosis, (2) illogical ordering of terms, (3) search results not matching user expectations, (4) missing concepts or concepts not being included, and (5) limited knowledge of diagnostic term concepts and of how to enter them in the EHR.

Table 2 – Summary of usability problems and themes detected by usability method.

Method         Usability problems (N = 187)   Themes (N = 24)   Interface themes (10)   Diagnostic themes (5)   Work domain themes (9)
User testing   54%                            83%               100%                    80%                     67%
Interview      28%                            63%               80%                     60%                     33%
Survey         18%                            29%               30%                     40%                     22%

As shown in Fig. 2, the number of themes identified increases with the number of study participants but reaches a point of diminishing returns.

Fig. 2 – Themes identified by participants.
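The shape of this curve is consistent with the classic problem-discovery model used in usability engineering, in which the expected proportion of problems found by n participants is 1 − (1 − p)^n for an average per-participant detection probability p. The study did not fit this model; the sketch below, with a purely illustrative value of p, simply shows why returns diminish:

```python
# Classic problem-discovery curve: the expected fraction of usability
# themes surfaced by n participants is 1 - (1 - p)^n, where p is the
# average probability that one participant surfaces a given theme.
# p = 0.2 is illustrative only; it was not estimated from study data.
def expected_fraction_found(n: int, p: float = 0.2) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1, 5, 10, 20, 32):
    print(f"{n:2d} participants -> {expected_fraction_found(n):.0%} of themes expected")
```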

4. Discussion

Overall, a combination of multiple usability methods detected more problems in a dental EHR than any single approach. Given the generic nature of these methods, it is reasonable to assume that this result will apply to any EHR, or even any other health care technology. The user testing method provided both quantitative and qualitative findings, which reflected the system's effectiveness and efficiency, whereas the semi-structured interviews and open-ended surveys provided detailed qualitative data whose analysis reflected users' perception and satisfaction, including ease of use. This investigation also uncovered usability themes that commonly characterize summative problems for EHR systems in development, such as time-consuming data entry [42–48], term names not being fully visible [42], and a mismatch between the user's mental model and the system design [12,42].



Table 3 – Number of usability problems by method and theme. Values are number of findings (%).

                                                                                      User testing   Interview   Survey    Totals
User interface-related themes
  Time consuming to enter a diagnosis                                                 5 (14)         17 (50)     13 (36)   35 (19)
  Illogical ordering of terms                                                         5 (33)         5 (33)      5 (34)    15 (8)
  Inconsistent naming and placement of user interface widgets                         12 (86)        2 (14)      0         14 (8)
  Search results do not match users' expectations                                     9 (70)         2 (15)      2 (15)    13 (7)
  Ineffective feedback to confirm diagnosis entry                                     7 (88)         1 (12)      0         8 (4)
  Users unaware of important functions to help find a diagnosis                       7 (100)        0           0         7 (4)
  Limited flexibility in user interface                                               3 (50)         3 (50)      0         6 (3)
  Inappropriate granularity/specificity of concepts                                   2 (33)         4 (67)      0         6 (3)
  Term names not fully visible                                                        2 (67)         1 (33)      0         3 (2)
  Distinction between the category name and preferred term in terminology unclear     2 (100)        0           0         2 (1)
Diagnosis terminology-related themes
  Some concepts appear missing/not included                                           11 (55)        1 (5)       8 (40)    20 (11)
  Visibility of the numeric code for a diagnostic term                                3 (60)         2 (40)      0         5 (3)
  Abbreviations not recognized by users                                               2 (67)         0           1 (33)    3 (2)
  Users not clear about the meaning of some concepts                                  0              4 (100)     0         4 (2)
  Some concepts not classified in appropriate categories/subcategories                2 (100)        0           0         2 (1)
Work domain and workflow-related themes
  Knowledge of diagnostic term concepts and how to enter them in the EHR limited      14 (70)        2 (10)      4 (20)    20 (11)
  Free text option can be used to circumvent structured data entry                    7 (58)         5 (42)      0         12 (7)
  Only one diagnosis can be entered for each treatment                                0              2 (100)     0         2 (1)
  No decision support to suggest appropriate diagnoses, or alert if inappropriate
  ones are selected                                                                   2 (100)        0           0         2 (1)
  Synonyms not displayed                                                              2 (100)        0           0         2 (1)
  Diagnosis cannot be charted using the Odontogram or Periogram                       0              0           1 (100)   1 (0)
  No historical view of when a diagnosis has been added or modified                   1 (100)        0           0         1 (0)
  No way to indicate state of diagnosis (i.e., differential, working or definitive)   0              3 (100)     0         3 (2)
  Users forced to enter a diagnosis for treatments that may not require them          1 (100)        0           0         1 (0)
Total                                                                                 99 (54)        54 (28)     34 (18)   187 (100)

4.1. User testing method

Among the broad range of usability inspection and testing methods available, user testing with the think-aloud technique is a direct method to gain deep insight into the problems users encounter in interacting with a computer system. The method does not require users to retrieve concepts from long-term memory or to retrospectively report on their thought processes, and it provides not only insights into usability problems but also the causes underlying those problems [35].

The user testing method with the think-aloud technique was of high value in evaluating EHR usability flaws. The performance of users was recorded, and we could accurately compute how fast users completed the assignment in comparison to the time needed by an expert, based on the keystroke-level model. These data provided a more objective assessment and are therefore more persuasive than information obtained from the interviews and surveys.

The user testing method was best at detecting specific, technical performance problems, because users could report the problems they encountered as they performed tasks in real time. This point is clearly illustrated by the theme "inconsistent naming and placement of user interface widgets": 86% of findings for this theme came from user testing, while only 14% came from the interviews and none came from the survey.

4.2. Interview and survey methods

The interview method revealed important concepts missing in the work domain that the other methods were unable to identify. For example, the system allowed the entry of only one diagnosis to support a particular treatment; however, in dentistry multiple diagnoses are often used to support one specific treatment. In addition, participants commented that there was no way to indicate the state of a diagnosis, such as a differential, working or definitive diagnosis. The interview method alone provided this finding, probably because the clinical scenarios in the user testing method were constrained to those with one definitive diagnosis. The survey captured one theme (an additional functionality) that was not found using either user testing or interviews: users expected to be able to chart a diagnosis by directly interacting with the visual representations (odontogram and periogram). Our findings suggest the limited value of relying on open-ended surveys alone to assess EHR usability. However, in combination with user testing and interviews, surveys may help to verify recurring usability problems. A development team, for example, may choose to highly prioritize problems that are detected through multiple methods.

Summary points

What was already known on the topic:

• Poor usability remains a challenge for existing EHRs.
• Numerous usability methods exist that can help to detect problems in EHRs.

What this study added to our knowledge:

• User testing is more effective in detecting usability problems in an EHR compared with interviews and open-ended surveys.
• Supplementing user testing with interviews and open-ended surveys allows the detection of additional usability problems.

5. Conclusion

Although the user testing method detected more and different usability problems than the interview and survey methods, no single method was successful at capturing all usability problems. Therefore, a combination of different techniques that complement one another is necessary to adequately evaluate EHRs, as well as other health information technologies.

Funding

This research was supported by NIDCR Grant 1R01DE021051, "A Cognitive Approach to Refine and Enhance Use of a Dental Diagnostic Terminology."

Author contributions

MW and EK conceptualized the study, collected data, conducted analysis, and wrote and finalized the paper. MP, DT and KK conducted analysis and wrote portions of the paper. RR, PS, NK, RV, ML, OT and JW helped to conceptualize the study methods and reviewed the final manuscript. VP provided scientific direction for conceptualizing the study, guided the data analysis and reviewed the final manuscript.

Competing interests

None.

Acknowledgements

We are indebted to the clinical participants at the Harvard School of Dental Medicine and the University of California, San Francisco School of Dentistry.

References

[1] D. Blumenthal, J.P. Glaser, Information technology comes to medicine, N. Engl. J. Med. 356 (24) (2007) 2527–2534.

[2] B. Chaudhry, et al., Systematic review: impact of health information technology on quality, efficiency, and costs of medical care, Ann. Intern. Med. 144 (10) (2006) 742–752.

[3] R. Hillestad, et al., Can electronic medical record systems transform health care? Potential health benefits, savings, and costs, Health Aff. (Millwood) 24 (5) (2005) 1103–1117.

[4] R. Haux, Health information systems – past, present, future, Int. J. Med. Inf. 75 (3–4) (2006) 268–281.

[5] Institute of Medicine, Health IT and Patient Safety: Building Safer Systems for Better Care, 2011, Available from: http://www.iom.edu/∼/media/Files/Report Files/2011/Health-IT/HealthITandPatientSafetyreportbrieffinal new.pdf (cited 16.07.12).

[6] V.L. Patel, et al., Translational cognition for decision support in critical care environments: a review, J. Biomed. Inform. 41 (3) (2008) 413–431.

[7] J. Zhang, Human-centered computing in health information systems. Part 1: analysis and design, J. Biomed. Inform. 38 (1) (2005) 1–3.

[8] J. Zhang, Human-centered computing in health information systems. Part 2: evaluation, J. Biomed. Inform. 38 (3) (2005) 173–175.

[9] J. Zhang, M.F. Walji, TURF: toward a unified framework of EHR usability, J. Biomed. Inform. (2011).

[10] D.W. Bates, et al., Reducing the frequency of errors in medicine using information technology, J. Am. Med. Inform. Assoc. 8 (4) (2001) 299–308.

[11] J.S. Ash, M. Berg, E. Coiera, Some unintended consequences of information technology in health care: the nature of patient care information system-related errors, J. Am. Med. Inform. Assoc. 11 (2) (2004) 104–112.

[12] T.P. Thyvalikakath, et al., A usability evaluation of four commercial dental computer-based patient record systems, J. Am. Dent. Assoc. 139 (12) (2008) 1632–1642.

[13] L.W. Peute, M.W. Jaspers, The significance of a usability evaluation of an emerging laboratory order entry system, Int. J. Med. Inf. 76 (2–3) (2007) 157–168.

[14] E.M. Campbell, et al., Types of unintended consequences related to computerized provider order entry, J. Am. Med. Inform. Assoc. 13 (5) (2006) 547–556.

[15] A.W. Kushniruk, et al., Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application, Int. J. Med. Inf. 74 (7–8) (2005) 519–526.

[16] R. Koppel, et al., Role of computerized physician order entry systems in facilitating medication errors, J. Am. Med. Assoc. 293 (10) (2005) 1197–1203.

[17] J. Horsky, J. Zhang, V.L. Patel, To err is not entirely human: complex technology and user cognition, J. Biomed. Inform. 38 (4) (2005) 264–266.

[18] J. Horsky, G.J. Kuperman, V.L. Patel, Comprehensive analysis of a medication dosing error related to CPOE, J. Am. Med. Inform. Assoc. 12 (4) (2005) 377–382.

[19] Y.Y. Han, et al., Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system, Pediatrics 116 (6) (2005) 1506–1512.

[20] B. Middleton, et al., Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA, J. Am. Med. Inform. Assoc. (2013).

[21] P. Nykanen, et al., Guideline for good evaluation practice in health informatics (GEP-HI), Int. J. Med. Inf. 80 (12) (2011) 815–827.

[22] M.M. Yusof, et al., Investigating evaluation frameworks for health information systems, Int. J. Med. Inf. 77 (6) (2008) 377–385.

[23] S.Z. Lowry, et al., Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records, National Institute of Standards and Technology, NISTIR 7804, 2012.

[24] Test Procedure for §170.314(g)(3) Safety-enhanced Design, 2013, Available from: http://www.healthit.gov/sites/default/files/170.314g3safetyenhanceddesign 2014 tp approved v1.30.pdf (cited 12.06.13).

[25] A.F. Rose, et al., Using qualitative studies to improve the usability of an EMR, J. Biomed. Inform. 38 (1) (2005) 51–60.

[26] R. Khajouei, A. Hasman, M.W. Jaspers, Determination of the effectiveness of two methods for usability evaluation using a CPOE medication ordering system, Int. J. Med. Inf. 80 (5) (2011) 341–350.

[27] A. Kushniruk, V. Patel, Cognitive and usability engineering methods for the evaluation of clinical information systems, J. Biomed. Inform. 37 (1) (2004) 56–76.

[28] J. Chan, et al., Usability evaluation of order sets in a computerised provider order entry system, BMJ Qual. Saf. 20 (11) (2011) 932–940.

[29] C.M. Johnson, et al., Can prospective usability evaluation predict data errors? AMIA Annu. Symp. Proc. 2010 (2010) 346–350.

[30] R. Khajouei, et al., Classification and prioritization of usability problems using an augmented classification scheme, J. Biomed. Inform. 44 (6) (2011) 948–957.

[31] R. Khajouei, et al., Classification and prioritization of usability problems using an augmented classification scheme, J. Biomed. Inform. (2011).

[32] H. Saitwal, et al., Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis, Int. J. Med. Inf. 79 (7) (2010) 501–506.

[33] R. Khajouei, et al., Effect of predefined order sets and usability problems on efficiency of computerized medication ordering, Int. J. Med. Inf. 79 (10) (2010) 690–698.

[34] J. Viitanen, et al., National questionnaire study on clinical ICT systems proofs: physicians suffer from poor usability, Int. J. Med. Inf. 80 (10) (2011) 708–725.


[35] M.W. Jaspers, A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence, Int. J. Med. Inf. 78 (5) (2009) 340–353.

[36] M. Peleg, et al., Using multi-perspective methodologies to study users' interactions with the prototype front end of a guideline-based decision support system for diabetic foot care, Int. J. Med. Inf. 78 (7) (2009) 482–493.

[37] J. Horsky, et al., Complementary methods of system usability evaluation: surveys and observations during software design and development cycles, J. Biomed. Inform. 43 (5) (2010) 782–790.

[38] M.F. Walji, et al., Detection and characterization of usability problems in structured data entry interfaces in dentistry, Int. J. Med. Inf. (2012).

[39] E. Kalenderian, et al., The development of a dental diagnostic terminology, J. Dent. Educ. 75 (1) (2011) 68–76.

[40] S.K. Card, T.P. Moran, A. Newell, The Psychology of Human-Computer Interaction, Lawrence Erlbaum Associates, London, 1983.

[41] B. John, K. Prevas, D. Salvucci, K. Koedinger, Predictive human performance modeling made easy, in: Proceedings of CHI, Vienna, Austria, 2004.

[42] H.K. Hill, D.C. Stewart, J.S. Ash, Health information technology systems profoundly impact users: a case study in a dental school, J. Dent. Educ. 74 (4) (2010) 434–445.

[43] N. Menachemi, A. Langley, R.G. Brooks, The use of information technologies among rural and urban physicians in Florida, J. Med. Syst. 31 (6) (2007) 483–488.

[44] G.A. Loomis, et al., If electronic medical records are so great, why aren't family physicians using them? J. Fam. Pract. 51 (7) (2002) 636–641.

[45] I. Valdes, et al., Barriers to proliferation of electronic medical records, Inform. Prim. Care 12 (1) (2004) 3–9.

[46] A.R. Kemper, R.L. Uren, S.J. Clark, Adoption of electronic health records in primary care pediatric practices, Pediatrics 118 (1) (2006) e20–e24.

[47] H. Laerum, G. Ellingsen, A. Faxvaag, Doctors' use of electronic medical records systems in hospitals: cross sectional survey, Br. Med. J. 323 (7325) (2001) 1344–1348.

[48] D.A. Ludwick, J. Doucette, Primary care physicians' experience with electronic medical records: barriers to implementation in a fee-for-service environment, Int. J. Telemed. Appl. 2009 (2009) 853524.