
Clinical Questions Raised by Clinicians at the Point of Care: a Systematic Review

Guilherme Del Fiol, MD, PhD1

T. Elizabeth Workman, PhD, MLIS2

Paul N. Gorman MD3

1Department of Biomedical Informatics, University of Utah, Salt Lake City, UT;

2Lister Hill Center, National Library of Medicine, Bethesda, MD;

3Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University,

Portland, OR

Corresponding Author and requests for reprints:

Guilherme Del Fiol, MD, PhD

Assistant Professor

Department of Biomedical Informatics, University of Utah

HSEB 5700; 26 South 2000 East; Salt Lake City, UT, USA 84112-5750

Phone: +1 (801) 213-4129 / Fax: +1 (801) 581-4297

[email protected]

Author contributions and conflict of interests: All authors provided substantial contributions to study

conception and design; data acquisition and analysis; and drafting and revising the manuscript. All

authors approved the manuscript prior to submission. The authors have no conflict of interest to disclose.

A preliminary version of this study was presented at the American Medical Informatics Association

Annual Symposium in Chicago, IL on November 7th, 2012.

Word Count: 3,498


Abstract

Importance: Clinicians raise numerous questions in the context of patient care decision-making but are

unable to pursue or find answers to most of them. Unanswered questions may lead to

suboptimal patient care decisions.

Objective: To systematically review studies that examined the questions that clinicians raise in the

context of patient care decision-making.

Data Sources: MEDLINE, CINAHL, and Scopus through May 2011.

Study selection: Studies that examined clinicians’ questions raised and observed in the context of patient

care were independently screened and abstracted by two investigators. Of 21,710 citations, 72 met the

selection criteria.

Data Extraction and Synthesis: Data were abstracted independently by two investigators. Estimates of

question frequency were obtained by pooling data from studies that employed similar methodology.

Main Outcome(s) and Measure(s): Frequency of questions raised, pursued and answered. Frequency of

questions by type according to a taxonomy of clinical questions. Thematic analysis of barriers to

information-seeking and the impact of information-seeking on decision-making.

Results: In 11 studies, 7,012 questions were elicited through short interviews with clinicians after each

patient visit. In these studies, clinicians raised 0.57 (CI, 0.38 to 0.77) questions per patient seen, pursued

51% (CI, 36% to 66%) of these questions, and found answers to 78% (CI, 67% to 88%) of those they

pursued. Overall, 34% of these questions were about drug treatment and 24% were related to the

potential causes of a symptom, physical finding, or diagnostic test finding. Clinicians’ lack of time and

doubt that a useful answer exists were the main barriers to clinicians’ information-seeking.

Conclusions and Relevance: Clinicians frequently raise questions about patient care in their practice.

While clinicians are effective at finding answers to the questions they pursue, roughly half of the

questions are never pursued. This picture has been fairly stable over time, despite the broad availability of

online evidence resources that can answer these questions. Technology-based solutions should enable


clinicians to track their questions and provide just-in-time access to high-quality evidence in the context

of patient care decision-making. Opportunities for improvement include the recent adoption of electronic

health record (EHR) systems and maintenance of certification requirements.


Introduction

A seminal 1985 study by Covell et al. reported that internal medicine physicians raise two

questions for every three patients they see in office practice.1 Since then, numerous studies have

examined the questions clinicians raise in the course of patient care. In general, these studies have

confirmed that questions arise frequently and often go unanswered, but no systematic review of this

literature exists to date. Unanswered questions are seen as an important opportunity to improve patient

outcomes by filling gaps in medical knowledge in the context of clinical decisions.2-4 In addition,

providing just-in-time answers to clinical questions offers an opportunity for effective adult learning.5 The

challenge of maintaining current knowledge and practices is likely to be aggravated by the expansion of

medical knowledge, increasing complexity of health care delivery, and the growing aging population.6-8

Understanding clinicians’ questions is essential to guide the design of interventions aimed at

providing the right information at the right time to improve care. To increase current understanding, we

conducted a systematic review of the literature on clinicians’ questions. We focused on the need for

general medical knowledge that might be obtained from books, journals, specialists, and online

knowledge resources. The systematic review was guided by four primary questions: 1) How often do

clinicians raise clinical questions; 2) how often do clinicians pursue questions they raise; 3) how often do

clinicians succeed at answering the questions that they pursue; and 4) what types of questions are asked?

We also conducted a thematic analysis of the barriers to clinicians’ information-seeking and the potential

impact of information-seeking on clinicians’ decision-making.

Materials and Methods

The methodology was based on the Standards for Systematic Reviews set by the Institute of

Medicine.9 Study procedures were conducted based on formally defined processes and instruments that

were drafted and piloted by one of the authors (GDF) and refined with input from an expert review panel.


Data Sources and Searches. We searched MEDLINE (1966 to May 26th 2011), CINAHL (1982

to May 26th 2011), and Scopus (1947 to May 26th 2011); inspected the citations of included articles and

previous relevant reviews; and requested citations from experts on this topic. Search strategies (available

in the online supplement) were developed with the assistance of two medical librarians.

Study Selection. We searched for original studies that examined clinicians’ questions as defined

by Ely et al.10: “questions about medical knowledge that could potentially be answered by general sources

such as textbooks and journals, not questions about patient data that would be answered by the medical

record.” We used a broad definition for clinicians that included physicians, medical residents, physician

assistants, nurse practitioners, nurses, dentists, and care managers. We included only studies that collected

questions that arose in the care of real patients.

We excluded studies that met any of the following criteria: 1) data collection outside the context

of patient care, such as surveys and focus groups; 2) focus on the use, awareness, satisfaction, impact, or

quality of information resources without providing data on the frequency of information-seeking or the

nature of the questions asked; 3) questions of individuals not defined as clinicians in our study, such as

patients, medical students, and administrators; 4) needs for specific patient data (e.g., laboratory test

results) that can be found in the patient’s medical record; 5) no data on at least one of the systematic

review primary questions; and 6) articles not written in English.

Abstract screening. One author (GDF) independently reviewed the title and abstract of all

retrieved citations. Two additional authors (TEW, PNG) independently reviewed two random samples of

100 citations. In this phase, articles were labeled as “not relevant” or “potentially relevant.”

Article selection. Two authors (GDF, TEW) independently reviewed the full-text of all citations

labeled as potentially relevant. Included articles were classified into one of five categories based on the

method used to collect clinical questions: 1) Interviews with clinicians after each patient visit or at the end

of a clinic session (After-Visit Interviews); 2) clinicians asked to keep a record of questions as they are

raised in the care of their patients (Self-Report); 3) a researcher directly observes clinicians and records

their questions during routine patient care activities (Direct Observation); 4) analysis of inquiries


submitted to information services, such as drug information services (Information Services); and 5)

analysis of online information resource usage logs (Search Logs). Disagreements between the two

reviewers were reconciled through consensus with a third reviewer (PNG).

Data Extraction. Two authors (GDF, TEW) independently reviewed the included articles to

extract the data into a data abstraction spreadsheet. Next, quantitative data were verified by the two

authors for accuracy. Disagreements were reconciled with the assistance of a third reviewer (PNG).

Data Synthesis and Analysis. For quantitative measures, we aggregated data from published

studies to determine descriptive statistics across these studies. Due to large variation in study methods and

measurements, a meta-analysis of methodological features and contextual factors associated with the

frequency of questions was not possible.
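The pooling approach is not spelled out beyond aggregating data from studies with similar methodology. As a hedged illustration (not the authors' actual computation), one simple way to pool study-level question frequencies into an unweighted mean with a t-based 95% confidence interval, using hypothetical per-study rates:

```python
import statistics
from math import sqrt

def pooled_estimate(rates, t_crit):
    """Unweighted pooled mean of per-study rates with a t-based CI.

    rates: questions-per-patient-seen reported by each study
    t_crit: two-sided critical t value for len(rates) - 1 degrees of freedom
    """
    mean = statistics.mean(rates)
    se = statistics.stdev(rates) / sqrt(len(rates))
    return mean, (mean - t_crit * se, mean + t_crit * se)

# Hypothetical per-study rates, for illustration only (not the review's data)
rates = [0.22, 0.45, 0.57, 0.62, 0.85, 1.27]
mean, (lo, hi) = pooled_estimate(rates, t_crit=2.571)  # t for 5 df
```

A weighted or random-effects pooling would be more appropriate when study sizes differ widely; this sketch only shows the shape of the computation.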

Results

Description of studies

Of 21,710 unique citations retrieved, 811 were selected for full-text screening and 72 articles met

the study criteria (Figure 1). Clinical questions were collected in After-Visit Interviews in 19 studies;

through clinician Self-Report in 11 studies; by Direct Observation of patient care activities in 11 studies;

by analysis of questions submitted to an Information Service in 26 studies; and by analysis of online

information resource Search Logs in 8 studies. Three studies used more than one method. Characteristics

of included studies are listed in eTable 1. The search also identified a systematic review on clinicians’

information-seeking behavior 11 and several informal literature reviews on related topics 6,12-17. No

systematic review of the questions that clinicians raise at the point of care was found. Agreement on

abstract and full-text screening was respectively 99% (kappa=0.88) and 95% (kappa=0.74).
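The reported agreement statistics (kappa = 0.88 and 0.74) are presumably Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch of the computation for two screeners' relevant/not-relevant labels (the labels below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative screening labels: 1 = potentially relevant, 0 = not relevant
a = [1, 1, 0, 0, 0, 0, 0, 0]
b = [1, 0, 0, 0, 0, 0, 0, 0]
kappa = cohens_kappa(a, b)  # raw agreement is 7/8, but kappa is only 0.6
```

This is why the review reports both numbers: with mostly "not relevant" citations, high raw agreement can coexist with a much lower chance-corrected kappa.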


How often do clinicians raise, pursue, and answer their clinical questions?

Table 1 lists the number of questions, the proportion of questions pursued, and the proportion of

pursued questions that were successfully answered. In 20 studies that provided sufficient data, the

frequency of questions ranged from 0.16 to 1.85 questions per patient seen. Frequency of questions

varied according to study methods, with intermediate frequencies in 11 After-Visit Interview studies

(median 0.57; range 0.22 – 1.27), lower frequency of questions in 4 Self-Report studies (median 0.20;

range 0.16 – 0.23), and higher frequency of questions in 5 Direct Observation studies (median 0.85; range

0.24 – 1.85) (Table 2).

The proportion of questions that were pursued was available in 16 studies, with a median of 81%

(23% – 82%) in 3 Self-Report studies, 47% (28% – 85%) in 11 After-Visit Interview studies, and 47%

(22% – 71%) in 2 Direct Observation studies. Finally, most consistent were the reported rates of

successfully answering questions: when clinicians decided to pursue a clinical question, they were

successful approximately 80% of the time across all study types (Table 2).

What types of questions are asked?

Sixty-four studies classified questions using various methods and classification systems. Of these

studies, 48 (75%) employed ad hoc and informal classification approaches, using general categories such

as diagnosis, therapy, etiology, and prognosis. Although these categories had similar names, the

definitions and methods used were poorly defined and varied substantially among studies, precluding

meaningful comparison or aggregation. For simplicity, we have collapsed data from these studies into

approximate categories (eTable 2).

Five studies classified questions according to a formal taxonomy of 64 question types developed

by Ely et al. The question types followed a Pareto distribution, with roughly 30% of the question types

accounting for 80% of the questions clinicians asked. Table 3 lists the 13 most frequent question types


across these five studies. Overall, 34% of the questions asked were about drug treatment and 24% were

related to the potential causes of a symptom, physical finding, or diagnostic test finding.

Studies that focused on drug-related questions classified questions according to various

categories, such as dose & administration, contraindications, and adverse reactions. The most frequent

categories were dose & administration, indication, and adverse reactions (eTable 3).

Other substantial findings

Table 4 summarizes other substantial findings that were recurrent across studies. Clinicians cited

several barriers to pursuing their questions, such as lack of time (cited in 11 studies) and the

perception that a question was not urgent (5 studies) or important (5 studies) for the patient’s care. Eleven

studies reported that the information found by clinicians produced some kind of positive impact on

clinical decision-making. According to 4 studies, clinicians spent less than 2-3 minutes on average

seeking an answer to a specific question. Two studies demonstrated that the perceived frequency of

questions reported by clinicians in surveys was much lower than the frequency obtained through patient

care observations.

Discussion

This is the first systematic review of clinicians’ patient care questions. In nearly three decades

since Covell’s seminal study, over 20 additional studies have addressed these issues, employing differing

methods in a variety of settings. What has emerged from these efforts is a fairly stable picture: clinicians

have many questions in practice (at least one for every two patients they see), and while they find answers

to most (approximately 80%) of the questions they pursue, roughly half of their questions are never pursued and

thus remain unmet. These unanswered questions continue to represent a significant opportunity to

improve patient care and to offer self-directed learning by providing needed information to clinicians in

the context of care.


Study methods

The methods of the included studies varied substantially regarding the definition of clinical

questions, data collection and analysis, care setting, and clinician background. We found important

differences in the results that can be explained in part by these differences.

Direct observation studies can provide more information about the underlying context that

motivates a clinical question and may identify questions that clinicians fail to articulate. However, the

presence of an observer might artificially stimulate 18 or inhibit 19 articulation of clinical questions.

Further, there may be greater variation in how Direct Observation is employed.

After-Visit interviews may have a smaller effect on artificially stimulating questions, but these

studies might miss questions that clinicians fail to articulate. On the other hand, the After-Visit method

may be more consistently applied, resulting in more stable estimates of the frequency of questions.

The Self-Report method is the least expensive and intrusive, but most susceptible to memory or

saliency bias. On the other hand, given the difficult logistics and expense of Direct Observations and

After-Visit Interviews, the Self-Report method may be a useful alternative when the goal is to collect a

large number of clinical questions from a variety of settings.

How often do clinicians raise, pursue, and answer their clinical questions?

Considering differences in methodology, we found fairly stable reports of the frequency of

questions, the pursuit of information, and clinician success in finding answers to the questions they

elected to pursue.

Most of the After-Visit Interview studies were conducted in community clinics and employed

similar methodology. Except for two outliers, these studies reported similar results. Therefore, 0.57 (0.38

– 0.77) seems to be a reasonable estimate of the frequency of recognized clinicians’ questions in

outpatient community settings. The two outliers can be explained by differences in methodology. On the


lower extreme (0.22 questions per patient seen), Norlin et al.20 excluded simple drug reference questions

and questions that were answered during the patient visit. At the other extreme (1.27 questions per patient

seen), 25 residents asked twice as many questions as 11 faculty members, increasing the overall question

frequency.21

The Direct Observation studies were the least uniform regarding data collection methodology and

observation setting, which may explain the wide confidence interval for the frequency of questions. The

Self-Report studies ranged between 0.16 and 0.23 questions per patient seen. Yet, this method is likely to

underestimate question frequency due to recall bias.

According to 13 After-Visit interview and Direct Observation studies, clinicians pursued roughly

half of their questions. The percentage of questions that clinicians answered was similar across studies,

with the median per study type ranging between 80% and 87%. This relatively high success rate may be

explained by clinicians’ ability to selectively pursue questions that can be answered quickly.22 According

to information foraging theory, humans constantly weigh the expected benefits versus the estimated cost

of engaging in certain information-seeking activities.23 This process is more notably observed among

experts and in time-sensitive environments.
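The foraging trade-off described here can be stated as a toy decision rule (our illustration of the cited theory, not a model from the included studies): pursue a question only when the expected payoff of searching exceeds its estimated cost.

```python
def worth_pursuing(p_answer, benefit, search_cost):
    """Toy information-foraging rule: expected benefit must exceed cost.

    p_answer: clinician's estimate that a useful answer exists and is findable
    benefit, search_cost: expressed in the same (arbitrary) utility units
    """
    return p_answer * benefit > search_cost

# A routine dosing question: likely answerable and cheap to look up -> pursue
assert worth_pursuing(p_answer=0.8, benefit=5.0, search_cost=2.0)
# A complex prognosis question: uncertain answer, costly search -> skip
assert not worth_pursuing(p_answer=0.3, benefit=5.0, search_cost=3.0)
```

Under such a rule, the high answer rates reported above are unsurprising: clinicians preferentially select questions with a high chance of a quick answer and a low search cost.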

What types of questions are asked and in what frequency?

According to the five studies that classified questions according to Ely’s taxonomy 24, a relatively

small percentage of question types accounted for a large percentage of the questions asked. This finding

has important implications in the design of information retrieval interventions, in the priorities for

information resource development, and for the optimal structure of information resources. For example,

while typical book chapters list the signs and symptoms under the description of a specific condition,

clinicians most often ask which conditions can lead to a specific sign or symptom.


Despite the many studies that examined the nature of clinical questions, most studies did not follow

a rigorous question classification method and lacked definitions for the question types used, making it

challenging to aggregate the results across studies. Therefore, the findings reported in eTable 2 need to be

interpreted with caution.

Potential impact of EHR systems on clinical questions

We were unable to determine whether decision support tools and electronic health record (EHR)

systems were available to clinicians in the vast majority of the included studies. In addition, no study

directly compared clinical questions among clinicians with versus without access to an EHR. Therefore,

we were unable to assess the impact of these tools on the rate of unanswered questions. Yet, there is some

early evidence that EHR systems with clinical decision support tools and seamless access to online

reference resources are helping clinicians answer simple questions more quickly,25-32 with some recent

evidence of improvement in patient outcomes.33,34

Implications for Practice

Despite encouraging results obtained by recent information retrieval technology and online

information resources,32-34 the rate of unanswered clinical questions has remained remarkably stable over

time. It is possible that gains achieved through widespread use of online information resources are being

offset by busier settings, more complex patients, and increasingly complex medical knowledge.8

As discussed above, it is possible that clinicians self-select simpler and more urgent questions as a

result of estimating the value of information in terms of its perceived benefits and cost, with a high

threshold for engaging in information-seeking.22 Hence, information interventions should allow clinicians

to easily estimate the benefits of the information available versus the cost of seeking the information as

early as possible with minimal cognitive effort.


The results of this study also raise implications for clinicians’ training and lifelong learning. A

systematic review has shown decreasing physician knowledge and performance with increasing years in

practice.35 Traditional approaches to address this issue include continuing medical education (CME).

However, the typical CME program follows a passive learning approach and fails to improve physician

performance and patient outcomes.36,37 Alternate learning interventions could promote just-in-time and

self-directed learning in the context of care as questions arise.38 In the United States, this kind of

approach could be integrated as a part of the requirements for Maintenance of Certification (MOC),

particularly the lifelong commitment to learning through ongoing knowledge self-assessment and practice

performance improvement.39 EHR systems are being considered as critical enablers of MOC-driven

practice improvement.38,40 Hence the EHR could be a natural environment for innovative tools that help

clinicians identify knowledge gaps, address these gaps, and improve practice. In addition, questions and

answers related to a particular patient could be tracked in the patient’s EHR in an application similar to

the problem list. Questions and their answers could be made available to the entire care team, who could

collaboratively seek optimal answers. In academic settings, such a “question tracker” could also be used

as a teaching device, with interesting questions selected for broader discussion in venues such as grand

rounds. This environment could also be used to automatically suggest unasked but relevant questions.

These ideas can feasibly be implemented with currently available EHR and information retrieval

technology. One important accomplishment towards this EHR vision is the recent inclusion of the Health

Level Seven (HL7) Context-Aware Knowledge Retrieval Standard41 as a requirement for EHR

certification in the United States.42 This standard enables the just-in-time delivery of clinical evidence

into EHR systems and may provide a foundation for innovations that can help transform EHR systems

into practice improvement and learning environments.
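For illustration, the HL7 Context-Aware Knowledge Retrieval ("infobutton") standard expresses the clinical context as URL parameters sent to a knowledge resource. A sketch of such a request (the endpoint below is hypothetical; the parameter names follow the standard's URL-based implementation guide):

```python
from urllib.parse import urlencode

# Clinical context for a medication-related question; the knowledge
# resource endpoint is a made-up example, not a real service.
params = {
    "taskContext.c.c": "MLREV",                           # task: medication list review
    "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.88",  # code system OID: RxNorm
    "mainSearchCriteria.v.c": "1191",                     # RxNorm code for aspirin
    "mainSearchCriteria.v.dn": "Aspirin",                 # display name
    "patientPerson.administrativeGenderCode.c": "F",
    "age.v.v": "66",
    "age.v.u": "a",                                       # UCUM unit: years
}
url = "https://kb.example.org/infobutton?" + urlencode(params)
```

An EHR assembles such a URL from the chart context, and the resource returns evidence relevant to the drug, task, and patient; this is the mechanism behind the just-in-time delivery of clinical evidence described above.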

Limitations

This systematic review has several limitations. First, the heterogeneity of the studies compromised

general observations about questions and precluded a meta-analysis of factors associated with the


frequency with which questions are raised, pursued, and answered. Grouping studies with similar

methodology minimized this limitation. Second, the focus of the studies included in this review was on

questions that clinicians recognize. Less is known about unrecognized questions, such as the availability

of new evidence that may alter the management of a particular patient. Last, the most recent study

identified in our review was published in 2011, with most studies being at least five years old. Therefore,

it was not possible to examine the influence of recent trends, such as the growing EHR adoption in the

United States motivated by the EHR Meaningful Use Program.43

Implications for Research

There are several gaps in the literature regarding clinicians’ questions that call for further research.

Studies identified in our review were largely focused on primary care physicians in outpatient settings.

Furthermore, none of the studies directly compared the questions raised and pursued according to factors

such as clinicians’ specialty, years of practice, cognitive style, and care setting. Further research is needed

to investigate these gaps. In-depth knowledge about these effects can be used to personalize information

retrieval solutions to the characteristics of the clinician.

Our review identified only one study that systematically assessed the nature of questions that

clinicians were unable to answer.44 Studies have shown a positive impact of information-seeking on

clinicians’ performance 32 and patient outcomes 33,34. However, this review found no studies that assessed

the association between unanswered questions and inferior clinician performance or patient outcomes.

Further research is also needed to investigate clinical questions in subpopulations of special

interest, such as complex and aging patients. In particular, the study of patient complexity has gained

recent traction, but the literature still lacks a better understanding of clinicians’ questions and information

seeking behavior in the care of these patients.8,45 While several studies in this review were conducted in

the post-Web age, it is still unclear whether we are facing a change in the status quo given the new generation

of clinicians, who may have incorporated the use of information resources as a natural component of their


clinical practice. Finally, research is needed to assess the recent impact of growing EHR43 and online

resource adoption on the rate at which questions are raised, pursued and answered.

Conclusion

This systematic review estimates that clinicians raise between 0.4 and 0.8 questions per patient

seen and that roughly two thirds of these questions are left unanswered. This picture has been fairly stable

over time, despite the broad availability of online evidence resources that can answer these questions.

Unanswered questions are missed opportunities for just-in-time learning and practice improvement.

Potential solutions should help clinicians keep track of their questions and provide seamless and just-in-

time access to high-quality evidence in the context of patient care decision-making. Opportunities for

improvement include the increasing adoption of electronic health record (EHR) systems and maintenance

of certification requirements.


Acknowledgements

The authors would like to acknowledge James J. Cimino1, MD; Kensaku Kawamoto2, MD, PhD;

Leslie A. Lenert4, MD; and Charlene R. Weir3, RN, PhD; for providing insights on the design of this

study (JC, KK, LL) and for reviewing the manuscript (JC, CW). None of the individuals above have

received compensation for their contributions.

1 Chief, Laboratory for Informatics Development, Clinical Center, National Institutes of Health, USA; Senior Scientist, Lister Hill National Center for Biomedical Communication, National Library of Medicine; Adjunct Professor of Biomedical Informatics, Columbia College of Physicians and Surgeons

2 Assistant Professor, Department of Biomedical Informatics, University of Utah; Associate Chief Medical Information Officer, University of Utah Health Sciences Center; Director, Knowledge Management and Mobilization, University of Utah Health Sciences Center

3 Associate Professor, Department of Biomedical Informatics, University of Utah; Associate Director, IDEAS Center of Innovation, Salt Lake City VA Medical Center

4 Chief Research Information Officer & SmartState Endowed Chair in Medical Bioinformatics, Medical University of South Carolina, USA

All authors had full access to all of the data in the study and take responsibility for the integrity of

the data and the accuracy of the data analysis. The authors have no conflicts of interest to declare.

This project was supported by grant number K01HS018352 from the Agency for Healthcare

Research and Quality. The content is solely the responsibility of the authors and does not necessarily

represent the official views of the Agency for Healthcare Research and Quality. The funding source had

no role in design and conduct of the study; collection, management, analysis, and interpretation of the

data; and preparation, review, or approval of the manuscript.


References

1.  Covell DG, Uman GC, Manning PR. Information needs in office practice: are they being met? 

Annals of internal medicine. Oct 1985;103(4):596‐599. 

2.  Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention 

Study Group. JAMA. Jul 5 1995;274(1):35‐43. 

3.  Solomon DH, Hashimoto H, Daltroy L, Liang MH. Techniques to improve physicians' use of 

diagnostic tests: a new conceptual framework. JAMA. Dec 16 1998;280(23):2020‐2027. 

4.  van Walraven C, Naylor CD. Do we know what inappropriate laboratory utilization is? A 

systematic review of laboratory clinical audits. JAMA. Aug 12 1998;280(6):550‐558. 

5.  Green ML, Ciampi MA, Ellis PJ. Residents' medical information needs in clinic: are they being 

met? The American journal of medicine. Aug 15 2000;109(3):218‐223. 

6.  Davidoff F, Miglus J. Delivering clinical evidence where it's needed: building an information 

system worthy of the profession. JAMA. May 11 2011;305(18):1906‐1907. 

7.  Smith R. Strategies for coping with information overload. BMJ. 2010;341:c7126. 

8.  Grant RW, Ashburner JM, Hong CS, Chang Y, Barry MJ, Atlas SJ. Defining patient complexity from 

the primary care physician's perspective: a cohort study. Annals of internal medicine. Dec 20 

2011;155(12):797‐804. 

9.  Institute of Medicine. Finding what works in health care: standards for systematic reviews. 2011. 

10.  Ely JW, Osheroff JA, Ebell MH, et al. Analysis of questions asked by family doctors regarding 

patient care. BMJ. Aug 7 1999;319(7206):358‐361. 

11.  Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of 

information seeking behavior in physicians. Int J Med Inform. Aug 2003;71(1):9‐15. 

12.  Smith R. What clinical information do doctors need? BMJ. Oct 26 1996;313(7064):1062‐1068. 


13.  Coumou HC, Meijman FJ. How do primary care physicians seek answers to clinical questions? A 

literature review. J Med Libr Assoc. Jan 2006;94(1):55‐60. 

14.  Davies K, Harrison J. The information‐seeking behaviour of doctors: a review of the evidence. 

Health Info Libr J. Jun 2007;24(2):78‐94. 

15.  Haug JD. Physicians' preferences for information sources: a meta‐analytic study. Bulletin of the 

Medical Library Association. Jul 1997;85(3):223‐232. 

16.  McKnight M, Peet M. Health care providers' information seeking: recent research. Med Ref Serv 

Q. Summer 2000;19(2):27‐50. 

17.  Verhoeven AA, Boerma EJ, Meyboom‐de Jong B. Use of information sources by family 

physicians: a literature survey. Bulletin of the Medical Library Association. Jan 1995;83(1):85‐90. 

18.  Barrie AR, Ward AM. Questioning behaviour in general practice: a pragmatic study. BMJ. Dec 6 

1997;315(7121):1512‐1515. 

19.  Gerrity MS, DeVellis RF, Earp JA. Physicians' reactions to uncertainty in patient care. A new 

measure and new insights. Medical care. Aug 1990;28(8):724‐736. 

20.  Norlin C, Sharp AL, Firth SD. Unanswered questions prompted during pediatric primary care 

visits. Ambulatory pediatrics : the official journal of the Ambulatory Pediatric Association. Sep‐

Oct 2007;7(5):396‐400. 

21.  Ramos K, Linscheid R, Schafer S. Real‐time information‐seeking behavior of residency physicians. 

Fam Med. Apr 2003;35(4):257‐260. 

22.  Huth EJ. Needed: an economics approach to systems for medical information. Annals of internal 

medicine. Oct 1985;103(4):617‐619. 

23.  Pirolli P, Card S. Information foraging. Psychological Review. 1999;106(4):643‐675. 

24.  Ely JW, Osheroff JA, Gorman PN, et al. A taxonomy of generic clinical questions: classification 

study. BMJ. Aug 12 2000;321(7258):429‐432. 

Page 18 of 31  

25.  Del Fiol G, Haug PJ, Cimino JJ, Narus SP, Norlin C, Mitchell JA. Effectiveness of topic‐specific 

infobuttons: a randomized controlled trial. Journal of the American Medical Informatics 

Association : JAMIA. Nov‐Dec 2008;15(6):752‐759. 

26.  Hauser SE, Demner‐Fushman D, Jacobs JL, Humphrey SM, Ford G, Thoma GR. Using wireless 

handheld computers to seek information at the point of care: an evaluation by clinicians. Journal 

of the American Medical Informatics Association : JAMIA. Nov‐Dec 2007;14(6):807‐815. 

27.  Hoogendam A, Stalenhoef AF, Robbe PF, Overbeke AJ. Answers to questions posed during daily 

patient care are more likely to be answered by UpToDate than PubMed. J Med Internet Res. 

2008;10(4):e29. 

28.  Magrabi F, Coiera EW, Westbrook JI, Gosling AS, Vickland V. General practitioners' use of online 

evidence during consultations. International journal of medical informatics. Jan 2005;74(1):1‐12. 

29.  Magrabi F, Westbrook JI, Kidd MR, Day RO, Coiera E. Long‐term patterns of online evidence 

retrieval use in general practice: a 12‐month study. J Med Internet Res. 2008;10(1):e6. 

30.  Maviglia SM, Yoon CS, Bates DW, Kuperman G. KnowledgeLink: impact of context‐sensitive 

information retrieval on clinicians' information needs. Journal of the American Medical 

Informatics Association : JAMIA. Jan‐Feb 2006;13(1):67‐73. 

31.  Van Duppen D, Aertgeerts B, Hannes K, et al. Online on‐the‐spot searching increases use of 

evidence during consultations in family practice. Patient Educ Couns. Sep 2007;68(1):61‐65. 

32.  Pluye P, Grad RM, Dunikowski LG, Stephenson R. Impact of clinical information‐retrieval 

technology on physicians: a literature review of quantitative, qualitative and mixed methods 

studies. International journal of medical informatics. Sep 2005;74(9):745‐768. 

33.  Bonis PA, Pickens GT, Rind DM, Foster DA. Association of a clinical knowledge support system 

with improved patient safety, reduced complications and shorter length of stay among 

Page 19 of 31  

Medicare beneficiaries in acute care hospitals in the United States. International journal of 

medical informatics. Nov 2008;77(11):745‐753. 

34.  Isaac T, Zheng J, Jha A. Use of UpToDate and outcomes in US hospitals. Journal of hospital 

medicine : an official publication of the Society of Hospital Medicine. Feb 2012;7(2):85‐90. 

35.  Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical 

experience and quality of health care. Annals of internal medicine. Feb 15 2005;142(4):260‐273. 

36.  Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor‐Vaisey A. Impact of formal 

continuing medical education: do conferences, workshops, rounds, and other traditional 

continuing education activities change physician behavior or health care outcomes? JAMA : the 

journal of the American Medical Association. Sep 1 1999;282(9):867‐874. 

37.  Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops: 

effects on professional practice and health care outcomes. Cochrane database of systematic 

reviews. 2009(2):CD003030. 

38.  Dhaliwal G. KNown unknowns and unknown unknowns at the point of care. JAMA Internal 

Medicine. 2013:‐. 

39.  Miller S, Horowitz S. Maintenance of certification: relationship to competence. J Med Licensure 

Discipline. 2003;89(1):7‐10. 

40.  Miles P. Health information systems and physician quality: role of the American board of 

pediatrics maintenance of certification in improving children's health care. Pediatrics. Jan 

2009;123 Suppl 2:S108‐110. 

41.  Del Fiol G, Huser V, Strasberg HR, Maviglia SM, Curtis C, Cimino JJ. Implementations of the HL7 

Context‐Aware Knowledge Retrieval ("Infobutton") Standard: challenges, strengths, limitations, 

and uptake. Journal of biomedical informatics. Aug 2012;45(4):726‐735. 

Page 20 of 31  

42.  Office of the National Coordinator for Health Information Technology (ONC) DoHaHS. Health 

Information Technology: Standards, Implementation Specifications, and Certification Criteria for 

Electronic Health Record Technology, 2014 Edition, Final Rule. In: Office of the National 

Coordinator for Health Information Technology (ONC) DoHaHS, ed. 77. Vol 1712012:130. 

43.  Hsiao CJ, Jha AK, King J, Patel V, Furukawa MF, Mostashari F. Office‐based physicians are 

responding to incentives and assistance by adopting and using electronic health records. Health 

affairs (Project Hope). Aug 2013;32(8):1470‐1477. 

44.  Ely JW, Osheroff JA, Maviglia SM, Rosenbaum ME. Patient‐care questions that physicians are 

unable to answer. Journal of the American Medical Informatics Association : JAMIA. Jul‐Aug 

2007;14(4):407‐414. 

45.  Min L, Wenger N, Walling AM, et al. When Comorbidity, Aging, and Complexity of Primary Care 

Meet: Development and Validation of the Geriatric CompleXity of Care Index. Journal of the 

American Geriatrics Society. 2013;61(4):542‐550. 

46.  Chambliss ML, Conley J. Answering clinical questions. J Fam Pract. Aug 1996;43(2):140‐144. 

47.  Cogdill KW, Friedman CP, Jenkins CG, Mays BE, Sharp MC. Information needs and information 

seeking in community medical education. Acad Med. May 2000;75(5):484‐486. 

48.  Cogdill KW. Information needs and information seeking in primary care: a study of nurse 

practitioners. Journal of the Medical Library Association : JMLA. Apr 2003;91(2):203‐215. 

49.  Dee C, Blazek R. Information needs of the rural physician: a descriptive study. Bull Med Libr 

Assoc. Jul 1993;81(3):259‐264. 

50.  Ebell MH, White L. What is the best way to gather clinical questions from physicians? J Med Libr 

Assoc. Jul 2003;91(3):364‐366. 

Page 21 of 31  

51.  Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME. Answering physicians' clinical 

questions: obstacles and potential solutions. Journal of the American Medical Informatics 

Association : JAMIA. Mar‐Apr 2005;12(2):217‐224. 

52.  Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which 

clinical questions to pursue and which to leave unanswered. Med Decis Making. Apr‐Jun 

1995;15(2):113‐119. 

53.  Gorman PN, Yao P, Seshadri V. Finding the answers in primary care: information seeking by rural 

and nonrural clinicians. Stud Health Technol Inform. 2004;107(Pt 2):1133‐1137. 

54.  Graber MA, Randles BD, Monahan J, et al. What questions about patient care do physicians have 

during and after patient contact in the ED? The taxonomy of gaps in physician knowledge. 

Emerg Med J. Oct 2007;24(10):703‐706. 

55.  Crowley SD, Owens TA, Schardt CM, et al. A Web‐based compendium of clinical questions and 

medical evidence to educate internal medicine residents. Acad Med. Mar 2003;78(3):270‐274. 

56.  Gonzalez‐Gonzalez AI, Dawes M, Sanchez‐Mateos J, et al. Information needs and information‐

seeking behavior of primary care physicians. Ann Fam Med. Jul‐Aug 2007;5(4):345‐352. 

57.  Jennett PA, Lockyer JM, Parboosingh IJ, Maes WR. Practice‐generated questions: a method of 

formulating true learning needs of family physicians. Can Fam Physician. Mar 1989;35:497‐500. 

58.  Patel MR, Schardt CM, Sanders LL, Keitz SA. Randomized trial for answers to clinical questions: 

evaluating a pre‐appraised versus a MEDLINE search protocol. J Med Libr Assoc. Oct 

2006;94(4):382‐387. 

59.  Schilling LM, Steiner JF, Lundahl K, Anderson RJ. Residents' patient‐specific clinical questions: 

opportunities for evidence‐based learning. Acad Med. Jan 2005;80(1):51‐56. 

60.  Schwartz K, Northrup J, Israel N, Crowell K, Lauder N, Neale AV. Use of on‐line evidence‐based 

resources at the point of care. Fam Med. Apr 2003;35(4):251‐256. 

Page 22 of 31  

61.  Davies K. Quantifying the information needs of doctors in the UK using clinical librarians. Health 

Info Libr J. Dec 2009;26(4):289‐297. 

62.  Dorr DA, Tran H, Gorman P, Wilcox AB. Information needs of nurse care managers. AMIA Annu 

Symp Proc. 2006:913. 

63.  Ely JW, Burch RJ, Vinson DC. The information needs of family physicians: case‐specific clinical 

questions. J Fam Pract. Sep 1992;35(3):265‐269. 

64.  Osheroff JA, Forsythe DE, Buchanan BG, Bankowitz RA, Blumenfeld BH, Miller RA. Physicians' 

information needs: analysis of questions posed during clinical teaching. Annals of internal 

medicine. Apr 1 1991;114(7):576‐581. 

65.  Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the "evidence cart". 

JAMA. Oct 21 1998;280(15):1336‐1338. 

66.  Timpka T, Arborelius E. The GP's dilemmas: a study of knowledge need and use during health 

care consultations. Methods Inf Med. Jan 1990;29(1):23‐29. 

67.  Barley EA, Murray J, Churchill R. Using research evidence in mental health: user‐rating and focus 

group study of clinicians' preferences for a new clinical question‐answering service. Health Info 

Libr J. Dec 2009;26(4):298‐306. 

68.  Bergus GR, Emerson M. Family medicine residents do not ask better‐formulated clinical 

questions as they advance in their training. Fam Med. Jul‐Aug 2005;37(7):486‐490. 

69.  Del Mar CB, Silagy CA, Glasziou PP, et al. Feasibility of an evidence‐based literature search 

service for general practitioners. Med J Aust. Aug 6 2001;175(3):134‐137. 

70.  Fozi K, Teng CL, Krishnan R, Shajahan Y. A study of clinical questions in primary care. Med J 

Malaysia. Dec 2000;55(4):486‐492. 

71.  Swain GA, Temkin LA, Gallina JN, Jeffrey LP. Analysis of drug information questions on a 

decentralized pharmacy service unit. Hosp Pharm. Sep 1983;18(9):469‐470, 472. 

Page 23 of 31  

72.  Swinglehurst DA, Pierce M, Fuller JC. A clinical informaticist to support primary care decision 

making. Qual Health Care. Dec 2001;10(4):245‐249. 

73.  Verhoeven AA, Schuling J. Effect of an evidence‐based answering service on GPs and their 

patients: a pilot study. Health Info Libr J. Sep 2004;21 Suppl 2:27‐35. 

74.  Ebell MH, Cervero R, Joaquin E. Questions asked by physicians as the basis for continuing 

education needs assessment. J Contin Educ Health Prof. Winter 2011;31(1):3‐14. 

75.  Graber MA, Randles BD, Ely JW, Monnahan J. Answering clinical questions in the ED. Am J Emerg 

Med. Feb 2008;26(2):144‐147. 

76.  Schaafsma F, Hulshof C, de Boer A, Hackmann R, Roest N, van Dijk F. Occupational physicians: 

what are their questions in daily practice? An observation study. Occup Med (Lond). May 

2006;56(3):191‐198. 

77.  Corcoran‐Perry S, Graves J. Supplemental‐information‐seeking behavior of cardiovascular 

nurses. Res Nurs Health. Apr 1990;13(2):119‐127. 

78.  Ette EI, Achumba JI, Brown‐Awala EA. Determination of the drug information needs of the 

medical staff of a Nigerian hospital following implementation of clinical pharmacy services. Drug 

Intell Clin Pharm. Sep 1987;21(9):748‐751. 


Table 1 – Frequency of clinical questions per patient seen; percentage of clinical questions that were
pursued; and percentage of questions that were answered.

Study | Questions | Patients seen | Questions per patient | % pursued | % pursued & answered | Comments

After-Visit Interview
Chambliss, 1996 [46] | 84 | NR | NR | NA | 54% |
Cogdill, 2000 [47] | 62 | 148 | 0.42 | 32% | NR |
Cogdill, 2003 [48] | 75 | 153 | 0.57 | 85% | NR |
Covell, 1985 [1] | 269 | 409 | 0.66 | NR | NR |
Dee, 1993 [49] | 48 | 144 | 0.33 | NR | NR | Chart review at end of half-day
Ebell, 2003a [50] | 415 | 966 | 0.43 | 68% | 74% |
Ely, 1999 [10] | 778 | 2,467 | 0.32 | 36% | 80% |
Ely, 2005 [51] | 1,062 | NR | NR | 55% | 72% |
Gorman, 1995 [52] | 295 | 514 | 0.57 | 30% | 80% |
Gorman, 2004 [53] | 585 | 705 | 0.83 | 47% | 77% |
Graber, 2007 [54] | 271 | NR | NR | 81% | 87% |
Green, 2000 [5] | 280 | 401 | 0.70 | 29% | NR |
Norlin, 2007 [20] | 193 | 890 | 0.22 | 28% | 98% | Only questions unanswered during the visit
Ramos, 2003 [21] | 274 | 215 | 1.27 | 69% | NR |

Self-Report
Barrie, 1997 [18] | 85 | 376 | 0.23 | 82.4% | 96% |
Crowley, 2003 [55] | 581 | NR | NR | NA | 82% | Self-recorded searches
Ebell, 2003b [50] | 402 | 2,496 | 0.16 | 81.1% | 55% |
Gonzalez-Gonzalez, 2007 [56] | 635 | 3,511 | 0.18 | 22.8% | 86% | Clinicians video-recorded their questions
Jennett, 1989 [57] | 592 | 2,767 | 0.21 | NR | NR |
Patel, 2006 [58] | 253 | NR | NR | NA | 87% | Self-recorded searches
Schilling, 2005 [59] | 158 | NR | NR | NA | 89% | Residents were assigned a patient-specific question to pursue
Schwartz, 2003 [60] | 92 | NR | NR | NA | 54% | Self-recorded searches
Van Duppen, 2007 [31] | 365 | NR | NR | NA | 87% | Self-recorded searches

Direct Observation
Davies, 2009 [61] | 286 | 1,210 | 0.24 | 22.0% | NR | Recorded by volunteer clinical librarians
Dorr, 2006 [62] | 128 | NR | NR | 71.1% | 80% |
Ely, 1992 [63] | 36 | NR | NR | NA | 92% | Observed information-seeking episodes
Hauser, 2007 [26] | 363 | 228 | 1.59 | NA | 68% | Residents followed rounds, collecting & pursuing questions
Osheroff, 1991 [64] | 77 | 91 | 0.85 | NR | NR | Ethnographic method
Sackett, 1998 [65] | 98 | 196 | 0.50 | NA | NR | Clinical team recorded questions they pursued using "evidence cart"
Timpka, 1990 [66] | 85 | 46 | 1.85 | NR | NR | Video-recorded visits reviewed with physicians

Search Logs
Del Fiol, 2008 [25] | 115 | NR | NR | NA | 87% |
Hoogendam, 2008 [27] | 1,125 | NR | NR | NA | 53% |
Magrabi, 2005 [28] | 63 | NR | NR | NA | 73% |
Maviglia, 2006 [30] | 289 | NR | NR | NA | 84% |
Xu, 2005 | 54 | NR | NR | NA | 57% |

Information Service
Barley, 2009 [67] | 84 | NR | NR | NA | 83% |
Bergus, 2006 [68] | 1,618 | NR | NR | NA | 91% |
Del Mar, 2001 [69] | 84 | NR | NR | NA | 82% |
Fozi, 2000 [70] | 78 | NR | NR | NA | 40% |
Swain, 1983 [71] | 158 | NR | NR | NA | 96% |
Swinglehurst, 2001 [72] | 60 | NR | NR | NA | 95% |
Verhoeven, 2004 [73] | 61 | NR | NR | NA | 92% |

* NR = not reported; NA = not applicable (only collected questions that were pursued).


Table 2 – Frequency of clinical questions.

Columns: Questions per patient seen | % of questions pursued | % of questions pursued and answered*

After-Visit Interview (3,274 questions; 7,012 patients seen)
Number of studies: 11 | 11 | 8
Mean (95% CI): 0.57 (0.38 to 0.77) | 51% (36% to 66%) | 78% (67% to 88%)
Median: 0.57 | 47% | 78%

Self-Recorded (1,714 questions; 9,150 patients seen)
Number of studies: 4 | 3 | 8
Mean (95% CI): 0.20 (0.15 to 0.24) | 62% (NP) | 80% (66% to 93%)
Median: 0.20 | 81% | 87%

Direct Observation (909 questions; 1,771 patients seen)
Number of studies: 5 | 2 | 3
Mean (95% CI): 1.01 (0.15 to 1.87) | 47% (NP) | 80% (NP)
Median: 0.85 | 47% | 80%

Information Service (2,106 pursued questions)
Number of studies: 0 | 0 | 6
Mean (95% CI): - | - | 74% (51% to 98%)
Median: - | - | 83%

Search Log (1,767 pursued questions)
Number of studies: 0 | 0 | 7
Mean (95% CI): - | - | 77% (62% to 93%)
Median: - | - | 84%

* Number of questions that were answered divided by number of questions that were pursued.
NP = not possible to estimate due to small number of studies.
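As a check on the summary arithmetic, the pooled estimates in Table 2 can be reproduced from the per-study percentages in Table 1. The paper does not state its confidence-interval method, so the sketch below is an assumption: an unweighted mean across studies with a t-based 95% CI (the hardcoded t critical value 2.365 corresponds to 7 degrees of freedom for the eight after-visit-interview studies that report a "% pursued & answered" value).

```python
import math
import statistics

# "% pursued & answered" from the eight after-visit-interview studies in
# Table 1 that report it (Chambliss 1996, Ebell 2003a, Ely 1999, Ely 2005,
# Gorman 1995, Gorman 2004, Graber 2007, Norlin 2007), in percent.
rates = [54, 74, 80, 72, 80, 77, 87, 98]

n = len(rates)
mean = statistics.mean(rates)
sd = statistics.stdev(rates)   # sample standard deviation
t_crit = 2.365                 # t quantile for a 95% CI, df = n - 1 = 7
half_width = t_crit * sd / math.sqrt(n)
ci = (mean - half_width, mean + half_width)
median = statistics.median(rates)

print(f"mean {mean:.0f}% (95% CI {ci[0]:.0f}% to {ci[1]:.0f}%), median {median:.0f}%")
```

Run as-is, this reproduces the after-visit-interview column of Table 2: a mean of 78% with a 95% CI of 67% to 88%, which suggests the published intervals are indeed t-based unweighted means across studies.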


Table 3 – Clinical questions classified according to the Ely taxonomy. The data include the 13 most
frequent question types across studies, which accounted for 80% of the questions asked.

Question type | Taxonomy code | Gorman, 1995 [52] | Ely, 1999 [10] | Gonzalez, 2007 [56] | Graber, 2007 [54] | Ebell, 2011 [74] | Overall
What is the drug of choice for condition x? | 2.1.2.1 | 13% | 10% | 7% | 10% | 13% | 10%
What is the cause of symptom x? | 1.1.1.1 | 3% | 10% | 20% | 3% | 6% | 10%
How should I treat condition x (not limited to drug treatment)? | 2.2.1.1 | 10% | 6% | 2% | 5% | 15% | 7%
What is the cause of physical finding x? | 1.1.2.1 | 2% | 6% | 15% | 3% | 3% | 7%
What test is indicated in situation x? | 1.3.1.1 | 9% | 8% | 3% | 8% | 6% | 6%
What is the dose of drug x? | 2.1.1.2 | 3% | 8% | 3% | 13% | 2% | 6%
Can drug x cause (adverse) finding y? | 2.1.3.1 | 6% | 4% | 1% | 7% | 8% | 5%
What is the cause of test finding x? | 1.1.3.1 | 4% | 5% | 3% | 2% | 5% | 4%
Could this patient have condition x? | 1.1.4.1 | 1% | 4% | 6% | 1% | 2% | 4%
How should I manage condition x (not specifying diagnostic or therapeutic)? | 3.1.1.1 | 2% | 5% | 4% | 0.4% | 1% | 4%
What is the prognosis of condition x? | 4.3.1.1 | NA | NA | 0.2% | 4% | 6% | 2%
What are the manifestations of condition x? | 1.2.1.1 | NA | NA | 1% | 8% | 2% | 2%
What conditions or risk factors are associated with condition y? | 4.2.1.1 | NA | NA | 1% | 6% | 1% | 2%

* NA = not available


Table 4 – Other substantial and recurring findings.

Barriers to pursuing clinical questions / reasons not to pursue questions:
- Lack of time [1,5,20,21,47,49,51,56,75-77]
- Question is not urgent [5,10,21,48,52]
- Question is not important [20,21,51,56,75]
- Doubt that a useful answer exists [10,20,48,51,52,54,75]
- Forgetting the question [5,56]
- Referral [21,51,56]

Other recurring findings:
- Information found had an impact on clinicians and their decision-making, confirming or changing decisions [5,25,28,46,55,59,65,69,72,73,78]
- Most questions are pursued while the patient is still in the practice [48,52,53]
- Most questions are highly patient-specific and non-generalizable [1,48,52]
- Clinicians used human and paper resources more often than computer resources [1,10,47,48,51-53]
- Clinicians spend less than 2-3 minutes on average seeking information [21,25,30,75]
- The observed frequency of clinical questions is much higher than clinicians' own estimates (once per week versus two out of every three patients seen in Covell et al. [1]; once a week to once a month versus 10.5 questions per half-day period in Schaafsma et al. [76])


Figure 1 – Selection process of studies of clinical questions raised by clinicians at the point of care.