Physiotherapy Theory and Practice (2001) 17, 201–211 © 2001 Taylor & Francis

Evidence-based practice — imperfect but necessary

Robert D. Herbert, Catherine Sherrington, Christopher Maher, and Anne M. Moseley

Evidence-based practice implies the systematic use of best evidence, usually in the form of high quality clinical research, to solve clinical problems. This article considers a series of objections to evidence-based physiotherapy including that (1) it is too time-consuming, (2) there is not enough evidence, (3) the evidence is not good enough, (4) readers of clinical research cannot distinguish between high and low quality studies, (5) clinical research does not provide certainty when it is most needed, (6) findings of clinical research cannot be applied to individual patients, (7) clinical research does not tell us about patients' true experiences, and (8) evidence-based practice removes responsibility for decision making from individual physiotherapists. We argue that, while there is some truth in each of these objections, they need to be weighed against the potential benefits of evidence-based practice. The overwhelming strength of the evidence-based approach to clinical practice is that it takes full advantage of the only potentially unbiased estimates of effects of therapy—those which are derived from carefully conducted clinical research. The evidence-based practice model may be imperfect, but it may be the best model of clinical practice that is currently available.

INTRODUCTION

This article addresses some theoretical and practical issues with the implementation of evidence-based practice. It begins with a brief overview of what is implied by evidence-based practice and discusses how this differs from traditional clinical practice. It then considers some frequently raised objections to the evidence-based practice model.

Robert D. Herbert, Centre for Evidence-Based Physiotherapy and School of Physiotherapy, University of Sydney. Address correspondence to School of Physiotherapy, University of Sydney, P.O. Box 170, Lidcombe NSW 1825, Australia. E-mail: [email protected] Catherine Sherrington, Research Manager, Prince of Wales Medical Research Institute. Christopher Maher, Senior Lecturer, School of Physiotherapy, University of Sydney. Anne Moseley, Lecturer, Rehabilitation Studies Unit, University of Sydney. Accepted for publication April 2001.

WHAT IS EVIDENCE-BASED PRACTICE?

The term "evidence-based practice" is used in a variety of ways. We use the term as it is used by Sackett and colleagues in their influential book on evidence-based medicine (Sackett et al, 2000). These authors conceive of evidence-based practice as consisting of a five-step process that is carried out routinely in clinical encounters. The five-step process involves (1) asking


answerable clinical questions, (2) finding the best evidence with which to answer these questions, (3) critically appraising the evidence (this involves deciding if the evidence is believable and, if so, what it means), (4) applying the evidence to clinical problems, and (5) evaluating the effects of the intervention on individuals (Sackett et al, 2000).

These five steps allude to some of the most important distinctions between evidence-based practice and clinical practice as it is traditionally conducted. First, the process of evidence-based practice begins with an acknowledgment of uncertainty. That is, the evidence-based practitioner strives to explicitly identify knowledge gaps. This contrasts with some traditional models of clinical practice in which uncertainty is seen as a failing and good clinicians are thought to be those who always know what to do, not those who question what they do. In many clinical environments there is an attitude that physiotherapists learn what to do in clinical practice during their formal physiotherapy training (Turner and Whitfield, 1997, 1999). An attitude of uncertainty is likely to better equip health professionals to deal with rapidly changing evidence.

A second distinction is that the process of gathering and synthesising evidence is systematic and critical (Sackett et al, 2000). It involves recording clinical questions that arise in clinical practice, ranking them in order of importance, and then tackling them in an optimal way. Evidence is chosen on the basis of its probable validity. There is an emphasis on deciding if the intervention will produce the desired outcomes without unreasonable risks and at a reasonable cost. This differs from traditional models of practice in which there may be priority given to clinical experience as a form of evidence (Carr et al, 1994; Nilsson and Nordholm, 1992), where clinical research evidence is often happened upon rather than strategically sought out, and where appraisal of the quality of clinical research is superficial or does not occur at all. A systematic approach to the use of evidence from clinical trials helps avoid the temptation to attend only to that evidence which supports preconceived ideas of which therapies are effective.

An implicit assumption in this model of evidence-based practice is that well-conducted clinical research often provides the best information about what interventions are effective and ineffective, how useful a diagnostic test is, or a patient's likely prognosis. That is, where good quality, relevant clinical research is available, it usually takes precedence over theory or personal experience, even the theories or experiences of experts (National Health and Medical Research Council, 2000; but see Greenhalgh, 1999). The role of clinical experience, clinical wisdom, and intuition is primarily in making best use of good evidence to meet individual patients' needs and preferences.

The requirement of good evidence necessarily restricts the focus in evidence-based practice to optimally designed studies. The optimal study design will depend on the type of clinical question. For example, the best evidence about the effects of therapy is provided by randomised trials or systematic reviews of randomised trials (National Health and Medical Research Council, 2000). On theoretical grounds these sorts of evidence are expected to provide relatively unbiased estimates of the effects of therapy. There is some empirical evidence that other sorts of studies, particularly uncontrolled studies or studies with historical controls, tend to produce inflated estimates of the size of treatment effects (Chalmers et al, 1983; Colditz, Miller, and Mosteller, 1989; Linde et al, 1999; Miller, Colditz, and Mosteller, 1989; Sacks, Chalmers, and Smith, 1982; but see also Benson and Hartz, 2000; Concato, Shah, and Horwitz, 2000 and the ensuing letters). Questions about diagnostic tests are usually best answered by studies in which there is independent (blind) comparison of the test with a gold standard test (see Sackett et al, 2000 and paper by Stratford in this issue). There is some empirical evidence that studies that include nonrepresentative patients, lack blinding, or do not use a single gold standard for all subjects tend to overestimate the diagnostic accuracy of a test (Lijmer et al, 1999). Questions about prognosis


are best answered by studies that prospectively monitor well-defined cohorts from an early and uniform point in the course of their condition (see Sackett et al, 2000 and paper by de Bie in this issue). The most difficult questions, those about patients' beliefs and the meanings they attach to their experiences, may be best explored with carefully conducted qualitative research (see Ritchie, 1999 and paper by Ritchie in this issue).

Evidence-based practice does not imply that clinical decisions should be made on the basis of clinical research alone. Key proponents of evidence-based healthcare have emphasised that the evidence provided by clinical research must complement other sorts of information, such as information about individual patients' specific needs and preferences (Sackett et al, 2000; Greenhalgh, 1999). Good clinicians are able to discern these needs and preferences. In the best models of evidence-based practice, evidence about the effects of therapy (or accuracy of diagnostic tests or prognoses) informs, but does not dominate, clinical decision-making. The physiotherapist draws on past clinical experience to apply the results of research to the care of individual patients. The best decisions are made with the patient, not found in journals and books.

OBJECTIONS TO EVIDENCE-BASED PRACTICE

The preceding section has described a model of clinical practice that probably differs significantly from what happens in even the most evidence-based clinical settings. Real world evidence-based practice faces significant practical difficulties. In addition, legitimate philosophical and theoretical objections have been raised against models of evidence-based practice (see, for example, Feinstein and Horwitz, 1997; DiFabio, 1999).

In this section we attempt to confront some of the objections to evidence-based practice. The emphasis will be on objections to the use of systematic reviews and randomised controlled trials in making decisions about therapy, as issues concerning diagnosis and prognosis are covered in other papers in this issue. Our conclusions will be that there are, indeed, some serious practical, theoretical, and philosophical problems with evidence-based practice. Nonetheless, evidence-based practice offers at least one profound advantage over alternative models of clinical practice in that optimal use is made of the least-biased evidence from clinical research. Thus evidence-based practice may be imperfect, but it may be the best model of clinical practice that is currently available.

Evidence-based practice is too time-consuming to be practical

Even with practice and optimal resources, the process of finding and critically appraising the best evidence pertaining to a single clinical question usually takes considerable time. As a consequence, it is not practical to use the best evidence to deal with every uncertainty that arises in every clinical encounter, and even if there was good quality evidence to answer all clinical questions, not all practice could be evidence-based. Any realistic model of evidence-based practice must involve deciding what are the most important clinical questions and finding answers to those questions first. Given this reality, evidence must be used strategically. Time should be devoted to answering questions that are commonly seen in practice, have important consequences, have potential for either beneficial or harmful treatment, or incur considerable cost (Evidence-Based Care Resource Group, 1994). In this issue, Walker-Dilks discusses the issue of secondary sources of information (such as the ACP Journal Club, Evidence-Based Medicine and the Australian Journal of Physiotherapy Critically Appraised Papers). These sources distill the key findings of high-quality papers, usually in one page or less, so they potentially provide a significant time-saving mechanism for busy practitioners.

How much time is and should be spent seeking out and appraising the evidence? Most physiotherapists spend little time reading clinical research (Turner and Whitfield, 1997) and, because few physiotherapists have training in clinical appraisal, reading time may be


spent suboptimally. Rational determination of the amount of time that should be spent seeking out and appraising evidence requires information about both the effectiveness of current clinical practices and about how much of an improvement in effectiveness could be accrued in a given amount of time by searching for and appraising papers. Unfortunately, data on these issues are elusive. Our view is that much of clinical practice is far from optimally effective and that potentially even modest amounts of time spent in the judicious application of evidence to clinical decision making could substantially improve clinical outcomes. As just one example, exercise is prescribed with equal frequency for acute and chronic low back pain (van der Valk, Dekker, and van Baar, 1995), but systematic reviews indicate there is strong evidence that exercise therapy is effective for chronic, but not acute, low back pain (van Tulder, Koes, and Bouter, 1997; Maher, Latimer, and Refshauge, 1999). This suggests that changes in exercise prescription practices could significantly improve outcomes in patients with low back pain. We expect that many practices would converge rapidly on this outcome if scarce time was used to answer key clinical questions.

Most clinicians are busy. Where can they find time to seek and critically appraise the evidence from clinical research? There are numerous possibilities. Time spent in formal continuing education activities (staff seminars, for example) may be better spent by individuals or small groups of physiotherapists answering their own clinical questions. Depending on the clinical setting, case conferences could also be restructured so that they create learning experiences for staff as well as deal with patients' problems. These and other suggestions have been made by Sackett et al (2000). Time spent busily applying ineffective or harmful therapies would be better spent seeking out and critically appraising best evidence.

There is not enough evidence

Ideally, at least from a purely professional point of view, there would be good clinical research answering all important clinical questions. Of course, that is not the case. It has been claimed that there is not enough evidence to practice evidence-based physiotherapy (Bithell, 2000). How much clinical research exists and how much can it assist clinical decision making?

It is difficult to quantify the volume of clinical research in physiotherapy. However it is possible to estimate, at least roughly, the number of relevant randomised trials and systematic reviews. The Centre for Evidence-Based Physiotherapy, with assistance from, among others, the Rehabilitation and Related Therapies Field of the Cochrane Collaboration, has attempted to identify all randomised controlled trials and systematic reviews in physiotherapy and collate these on the Physiotherapy Evidence Database (PEDro; http://ptwww.cchs.usyd.edu.au/pedro). At the time of writing 2,229 randomised or quasi-randomised trials and 297 systematic reviews had been identified (Moseley AM et al, in press; see also Sherrington et al, 2000; Moseley et al, 2001).

There are more than 200 randomised trials and systematic reviews on PEDro pertaining to each of the following subdisciplines of physiotherapy: cardiothoracics, continence and women's health, gerontology, musculoskeletal, neurology, orthopaedics, and sports (Moseley AM et al, in press). This is enough to tackle many fundamental clinical questions, though there are not yet enough trials in most areas of physiotherapy to provide convincing replication on every permutation of therapy in every setting for every patient group. In some areas of physiotherapy, the volume of trials and reviews is not sufficient to have any real impact on clinical practice. However, given the exponential rate of publication of clinical trials and systematic reviews in physiotherapy (Moseley AM et al, in press) this will almost certainly change in the near future.

It is likely that most clinicians have not read all of the high quality evidence that pertains to their own clinical questions. In this sense at least, there is an abundance of evidence. It


is probably reasonable to expect all practising therapists to be aware of key trials and reviews in their area of practice.

The evidence is not good enough

Certain features of clinical trials (such as concealment of randomisation, blinding of subjects and assessors, and adequacy of follow-up) tend to be associated with smaller effect sizes, suggesting that trials that have these features tend to be less biased (Moher et al, 1999). Other trials lack these features, and so we should expect that, on average, they will be biased. In physiotherapy, the typical randomised trial lacks concealment of allocation and has unblinded patients, assessors, and therapists, but does have adequate follow-up (Moseley AM et al, in press). There must be real concern about the capacity of the typical trial to provide an unbiased picture of the effects of therapy. Fortunately, the quality of clinical trials appears to be improving slowly. The median PEDro score for randomised trials in physiotherapy has crept up from 3 in the 1960s to its current value of 5. (If this rate was to continue, most trials would return perfect scores by the turn of the next century.)

Systematic reviews (such as those conducted by the Cochrane Collaboration) synthesise the findings of clinical trials. Ideally, systematic reviews would objectively assess trial quality and then pool the findings of high quality studies to provide less biased and more precise estimates of the effects of therapy. There are some real difficulties that arise, however, when an attempt is made to systematically review clinical trials in all areas of health care. Three such problems are discussed below. The first two issues also are relevant to readers of individual clinical trials.

1. Publication bias. This is the bias that arises because trials with positive findings are more likely to be published than trials with negative findings. Consequently positive studies are more likely to be reviewed, and reviews are likely to contain inflated estimates of treatment effects (Stern and Simes, 1997). Although it is often assumed that exhaustive searching reduces the potential for publication bias, it is possible that this actually increases the potential for publication bias. There are currently no completely satisfactory solutions to the problem of publication bias (Thornton and Lee, 2000).

2. Scoring of study quality. Systematic reviews must take into account the quality of the study if they are to produce unbiased estimates of the effects of treatment. However, the methods for assessing trial quality have not yet been fully validated (Moher et al, 1999), so we cannot yet be sure that mechanisms for rating study quality are truly able to discriminate between trials that are and are not likely to be biased. To further complicate this issue there are a wide variety of quality scales currently available. The number of items in each scale ranges from as few as 3 to as many as 34, with no consensus on the weighting applied to central items such as randomisation, blinding, and withdrawals (Juni et al, 1999). The choice of quality scale may influence the conclusions of a systematic review by influencing the eligibility of particular trials for inclusion in the review or weighting of the trial's findings in the review synthesis.

A practical question for readers of clinical trials is how potentially biased does a study have to be before it should no longer be used for clinical decision-making? The answer should depend on the degree of confidence that is held in other information that pertains to the clinical question at hand. As a working principle, the threshold of quality should be that the study must be able to provide more certainty than the reader already has. Our opinion is that, in practice, there will usually be little point in reading clinical trials that do not meet basic criteria (true randomisation, acceptable follow-up, and blinding where possible).

3. Synthesis of findings. Ideally, systematic reviews are accompanied by meta-analyses


that provide pooled estimates of treatment effects. However, this is only advisable when the individual studies are of sufficient quality and when there is sufficient homogeneity of interventions, outcomes, and findings across studies. When heterogeneity precludes meta-analysis, some authors conduct best-evidence syntheses in which the quality of evidence supporting a conclusion is rated according to a predetermined scale of study quality and consistency of findings. Unfortunately the findings of best-evidence syntheses may depend heavily on the rating system used, and may be unduly sensitive to the findings of individual studies.

The sensitivity of conclusions in systematic reviews to methods of best evidence synthesis is illustrated clearly with a recent review of ultrasound (van der Windt et al, 1999). The review concluded, on the basis of seven randomised trials, that "ultrasound is not effective in the treatment of shoulder disorders" (p. 263). When the more recent trial by Ebenbichler and colleagues (1999) is added to the review, the review's best evidence synthesis methods support the conclusion that there is weak evidence for ultrasound therapy for shoulder disorders. In contrast, use of van Tulder et al's (1999) method of synthesis would lead to the conclusion that there is no evidence of effectiveness, and van Poppel et al's (1997) method would lead to the decision that there is strong evidence that ultrasound is ineffective.

The problem with these methods of qualitative synthesis is that while they use similar descriptors such as "strong," "moderate," or "limited" to describe the level of evidence, the definitions for each descriptor vary. With each method the addition of a single trial of similar quality and precision to the existing trials can change the review conclusion to an extent that seems unjustified. For example, with the van Poppel et al (1997) system the findings of one trial can change the conclusion from "no evidence" to "strong evidence." We recommend that great caution be used by readers of systematic reviews that employ "best evidence" methods of synthesis.

Many readers are unable to discriminate between studies that are probably valid and those that are probably not

Almost all methodological surveys and most systematic reviews in physiotherapy have decried the quality of published research (e.g., Green et al, 2000). Many physiotherapists do not have sufficient training in research methodology to confidently distinguish between studies of high and low quality. Consequently, there is a risk of many readers being misled by potentially biased studies or excluding well-conducted trials.

The eventual solution must be that physiotherapists will develop the skills to critically appraise clinical research. Most undergraduate curricula now teach research methods and, increasingly, explicit critical appraisal of clinical research. In the near future we may be able to expect new graduates to have basic critical appraisal skills. Graduate physiotherapists will have to seek out training in skills of critical appraisal. It is to be hoped that they do so with the same enthusiasm that most physiotherapists apply to the development of new clinical skills.

Some simple strategies may enhance physiotherapists' abilities to identify high quality trials. These include using methodological filters (Guyatt, Sackett, and Cook, 1993; Sackett et al, 2000) or methodological ratings from the PEDro database to screen out low quality research. Secondary sources of publication, such as those referred to earlier, can perform much of the work of critical appraisal for clinicians who lack critical appraisal skills. Some of these (such as Cochrane Systematic Reviews) are quite uniformly of high quality, and can generally be considered to provide an unbiased synthesis of the literature.


When there is clinical uncertainty, randomised controlled trials and systematic reviews often cannot provide certainty

Some therapies appear so unlikely to have useful therapeutic effects that they are of little interest to most therapists. Other therapies have such positive effects that their efficacy is obvious to all (for example, strapping to prevent pain and further injury in acute skier's thumb). There is relatively little benefit in subjecting these therapies to rigorous clinical experimentation. The role of clinical trials and systematic reviews is to provide information about the size of treatment effects where there is reasonable doubt that the treatment has an effect that is large enough to be worthwhile. The value of clinical trials and systematic reviews is that they provide estimates of the size of treatment effects that can be compared to the smallest clinically worthwhile effect (Herbert, 2000a, 2000b). If the effect observed in the trial is clearly larger than the smallest clinically worthwhile effect, the therapy may be clinically useful.

Unfortunately, because trials always involve a finite sample of patients, they cannot tell us with absolute certainty the size of the treatment effect. Instead, they provide us with an estimate of the average treatment effect. The uncertainty associated with this estimate can be described with confidence intervals (commonly the 95% confidence interval). The width of the confidence interval defines the range of values within which the true average effect of treatment probably lies. If all of the confidence interval falls to one side or other of the smallest clinically worthwhile effect, it is possible to be confident that, on average, the therapy has (or does not have) a clinically worthwhile effect (Herbert, 2000a, 2000b). Studies with large numbers of subjects tend, all else being equal, to provide more precise estimates of the size of treatment effects (estimates with narrower confidence intervals) than small studies with few subjects.
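To make this logic concrete, consider a minimal sketch (ours, not from the papers cited above; all summary statistics are invented, and the smallest clinically worthwhile effect of 0.5 points on a 10-point pain scale is an assumption) that computes a 95% confidence interval for a difference in group means and compares it with the smallest clinically worthwhile effect:

```python
import math

def ci_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c, z=1.96):
    """95% confidence interval for the difference between two group means
    (large-sample normal approximation)."""
    diff = mean_t - mean_c
    se = math.sqrt(sd_t ** 2 / n_t + sd_c ** 2 / n_c)
    return diff, diff - z * se, diff + z * se

# Invented trial summary: pain reduction on a 10-point scale.
diff, lo, hi = ci_mean_difference(3.0, 2.5, 60, 2.0, 2.5, 60)
SMALLEST_WORTHWHILE = 0.5  # assumed value; set by patients and clinicians

if lo > SMALLEST_WORTHWHILE:
    verdict = "probably clinically worthwhile"
elif hi < SMALLEST_WORTHWHILE:
    verdict = "probably not clinically worthwhile"
else:
    verdict = "uncertain: the CI spans the smallest worthwhile effect"
print(f"effect {diff:.2f}, 95% CI ({lo:.2f} to {hi:.2f}): {verdict}")
```

With these invented numbers the interval runs from about 0.1 to 1.9 points, spanning the smallest worthwhile effect, so the trial alone cannot settle whether the therapy is clinically worthwhile.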

The problem is that we most need clinical trials when there is most uncertainty. We are likely to be most uncertain when the true size of the treatment effect is close to the smallest clinically worthwhile effect. Yet when the true effect of treatment is close to the smallest clinically worthwhile effect the confidence intervals are likely to span the smallest clinically worthwhile effect, regardless of whether the treatment is clinically worthwhile (Herbert, 2000a). In these circumstances, we cannot know if the treatment effect is large enough to be clinically worthwhile.

Meta-analysis is one solution to this problem. The advantage of meta-analysis is that it can provide estimates of effect size based on large numbers of subjects from several or many trials. Potentially, then, meta-analysis can provide the precision needed to decide if a treatment produces clinically worthwhile effects even if the true value is quite close to the smallest clinically worthwhile effect.
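A rough sketch of why pooling increases precision, assuming a simple fixed-effect inverse-variance model and three invented trials (an illustration of the general technique, not a method described in this paper):

```python
import math

def pool_fixed_effect(effects, ses, z=1.96):
    """Fixed-effect inverse-variance pooling of trial effect estimates."""
    weights = [1.0 / se ** 2 for se in ses]  # more precise trials weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - z * pooled_se, pooled + z * pooled_se

# Three invented trials, each individually too imprecise to be conclusive.
effects = [0.8, 1.2, 0.9]
ses = [0.50, 0.60, 0.45]
pooled, lo, hi = pool_fixed_effect(effects, ses)
print(f"pooled effect {pooled:.2f}, 95% CI ({lo:.2f} to {hi:.2f})")
```

The pooled confidence interval (here about 0.4 to 1.5) is narrower than that of any single trial, which is exactly the gain in precision described above; as noted earlier, pooling is only defensible when the trials are of sufficient quality and homogeneity.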

It is not possible to use the findings of a clinical trial performed on a particular sample to make inferences about the effects of treatment on an individual patient who is not from that sample

There are three subproblems here. These are dealt with in more detail in two recent papers (Herbert, 2000a, 2000b):

First, trials usually only give us reliable information about the average response to therapy, yet obviously some patients will do much better than average and some will do much worse. Thus, some argue, clinical trials cannot tell us about the responses of individuals.

It is true that clinical trials cannot predict how each individual will respond to treatment, but then neither can any other sort of information. Nonetheless, the information provided by trials about the average (or most likely) outcome of therapy is valuable for clinical decision-making because the average response is the response that we should expect in the absence of any other information. It


makes sense to make decisions on the basis of expected outcomes, even though we know that the expected outcome will probably not occur.

Second, the average subject in a trial might differ in important ways from the people we are contemplating treating. In that case it may no longer be true that the average response of the subjects in the trial is the expected response when the therapy is applied. Many clinicians feel uncomfortable about the fact that trials never contain quite the sorts of patients they are interested in, and the unease may be fuelled by a feeling that they can pick, at least roughly, who is and who is not likely to respond well to therapy on the basis of their clinical experience.

Clearly there are two important sources of information about the likely size of the treatment effect that can be brought to bear on clinical decisions. On the one hand, clinical trials and systematic reviews can provide relatively unbiased information about the effects of therapy on the average patient in the trial or review. On the other hand, clinical experience and intuition may be capable of discriminating between patients who are and are not likely to respond to therapy. This suggests a sensible compromise. We can use clinical trials to provide unbiased estimates of the average effect of therapy on the average patient in the trial. Then, when applying the trial findings to a particular patient, the estimate of the effect of therapy can be adjusted up or down based on what clinical intuition says about how much more or less likely the particular patient is to respond to therapy (Herbert, 2000a; see also Glasziou and Irwig, 1995).
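One simple way to picture this compromise (a hypothetical sketch in the spirit of Glasziou and Irwig, 1995, not a formula given by those authors) is to treat clinical judgment as a multiplier on the trial's average effect estimate:

```python
def individualised_effect(average_effect, responsiveness=1.0):
    """Scale a trial's average effect estimate by a clinician's judgment of
    how much more (>1) or less (<1) responsive this patient is likely to be
    than the average trial participant. Purely illustrative."""
    return average_effect * responsiveness

# Invented numbers: the trial reports an average 1.0-point benefit, and the
# clinician judges this patient about half as responsive as the average.
print(individualised_effect(1.0, responsiveness=0.5))  # expect 0.5
```

The point of the arrangement is that the trial supplies the anchor and clinical intuition supplies only the adjustment, which keeps the estimate tethered to relatively unbiased evidence.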

Third, there is a (similar) problem with the diversity of ways in which a therapy can be applied. Differences in patient characteristics, equipment availability, staffing levels, staff training and philosophies, and health care settings mean that the therapy is often not applied in trials exactly as we could or would choose to apply it. Therefore, trials might provide estimates of treatment effects that are unduly pessimistic (if we feel the therapy was applied suboptimally in the trial) or optimistic (if we feel the therapy was applied better than we would be able to apply it). Again, the unbiased estimate of treatment effects provided by clinical trials can be combined with clinical intuition. Estimates of the sizes of treatment effects provided by trials can be adjusted upwards or downwards on the basis of how much more or less effectively we feel we could apply the therapy.

The alternative approach is more nihilistic. Some clinicians have fixed ideas about how a therapy should be administered, and consider that unless a trial is conducted in which the therapy is administered exactly as they would choose to administer it, the trial is not useful. There is an irony here: Diversity of practice arises when there is uncertainty among clinicians about how therapies should be applied. Yet, when there is diversity of practice some practitioners are less likely to be satisfied with the findings of clinical trials because they believe the therapy should be administered as they administer it in their clinical practices. When there is diversity of clinical practice, a more rational way to use clinical trials is to be tolerant about exactly how therapies are delivered in clinical trials. If diversity of clinical practice reflects uncertainty about how a therapy should be administered, we should be satisfied when a therapy is tested as other clinicians feel it would best be administered.

Only patient-centred research can really tell us about people's experiences

We want clinical trials to tell us how a therapy affects a patient in terms that matter to patients. A problem with clinical trials is that they only measure outcomes that the experimenter perceives as important, and they do not permit complete expression of what patients feel when given a particular therapy (Greenhalgh, 1999; Higgs and Titchen, 1998; Ritchie, 1999).

At one level many trials do measure the effects of therapy in terms that patients themselves deem important. Many trials now measure outcomes such as "global perceived effect" or "preference for treatment" because it is thought that measurement of these outcomes gives patients the opportunity to


assign appropriate weighting to their feelings of their responses to therapy. Nonetheless, these single-dimensional outcomes provide little opportunity for patients to express the breadth of their feelings about the effects of therapies. The need for patient-centred outcomes in clinical trials suggests one important way (but not the only way) in which qualitative and quantitative research can complement each other in evidence-based practice. Qualitative research can inform the designers of clinical trials about what consumers see as the important issues when choosing therapies (see paper by Ritchie, this issue). Such considerations probably should be, but rarely are, paramount.

Evidence-based practice removes the clinical decision-making role from clinicians and gives it to managers

There is a view that evidence-based practice takes clinical decision-making out of clinicians' hands. In our view, this is not intrinsically wrong: There is no intrinsic right of therapists to be solely responsible for clinical decision-making. Instead, the justification for clinician-as-decision-maker lies in the reasonable expectation that this provides the best possible care and outcomes.

Nonetheless, Sackett et al (1996) have argued that evidence-based practice does not subjugate responsibility for clinical decision-making. It is true that, in evidence-based practice, good clinical research provides an external measure of effectiveness, and this sort of evidence should take priority over clinical experience alone. That is, good clinical research acts as an external arbiter of effective clinical practice that constrains clinicians' choices. In a more important sense, however, evidence-based practice does not constrain decision-making. Instead, it emphasises the role of clinicians in using evidence to answer their own clinical problems, and removes the constraint of tradition from clinical practice. In evidence-based practice the responsibility for clinical decisions is taken away from how-to textbooks and devolved to individual practitioners and their patients.

SUMMARY AND CONCLUSIONS

We conclude that there are, indeed, some reasonable objections to the practice of evidence-based physiotherapy, although in our opinion, this model of clinical practice has tremendous advantages as well. Evidence-based practice is time-consuming, and the time involved in answering clinical questions does not fit easily into conventional models of clinical practice. Nonetheless, the time spent answering important clinical questions may prove worthwhile in the medium term. There is, unfortunately, not enough evidence to answer all clinical questions well, but there is much that is worthwhile and underutilised. The available evidence is often not of sufficient quality to guide clinical decision-making, and many therapists may have difficulty distinguishing between valid and potentially invalid research. Thus it is important for clinicians to develop skills or strategies that enable discrimination between potentially valid and probably invalid studies. A particularly difficult aspect of evidence-based practice is using trials to make inferences about individual patients. We argue that this is best done by combining unbiased estimates of the effects of treatment provided by clinical trials and systematic reviews with clinical intuition about how well a particular patient will respond to therapy. Unfortunately clinical trials usually measure outcomes of interest to investigators, but currently we do not usually know if these outcomes are of interest to the consumers themselves. Evidence-based practice devolves responsibility for clinical decision-making to therapists and their patients.

In choosing between models of clinical practice we must discern which is best in some sense. Here "best" should mean something like "the model that produces the outcomes most desired by recipients of physiotherapy services." We have argued that there are real problems with current models of evidence-based practice, but we point out that many


of the problems of evidence-based practice are common to other ways of doing therapy as well. For example, clinical practice that is based on clinical experience suffers from the problem that therapists must use their clinical experience to make predictions about individual future patients, just as they must when using good clinical research in evidence-based practice. The overwhelming strength of the evidence-based approach to clinical practice is that it takes full advantage of the only potentially unbiased estimates of effects of therapy—those which are derived from carefully conducted clinical research. There is a theoretical and professional imperative to use this "best evidence." The evidence is combined with, but does not dominate, other information that practitioners glean by communicating well with their patients. Evidence-based practice is, in our view, the best of a number of imperfect models of clinical practice in the sense that it is likely to produce the best outcomes for patients with available resources. Evidence-based practice is imperfect, but necessary.

References

Benson K, Hartz AJ 2000 A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine 342: 1878–1886

Bithell C 2000 Evidence-based physiotherapy: some thoughts on 'best evidence'. Physiotherapy 86: 58–60

Carr JH, Mungovan SF, Shepherd RB, Dean CM, Nordholm LA 1994 Physiotherapy in stroke rehabilitation: bases for Australian physiotherapists' choice of treatment. Physiotherapy Theory and Practice 10: 201–209

Chalmers TC, Celano P, Sacks HS, Smith H 1983 Bias in treatment assignment in controlled clinical trials. New England Journal of Medicine 309: 1358–1361

Colditz GA, Miller JN, Mosteller F 1989 How study design affects outcomes in comparisons of therapy. I: medical. Statistics in Medicine 8: 441–454

Concato J, Shah N, Horwitz RI 2000 Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine 342: 1887–1892

DiFabio R 1999 Myth of evidence-based practice. Journal of Orthopaedic and Sports Physical Therapy 29: 632–634

Ebenbichler GR, Erdogmus CB, Resch KL, Funovics MA, Kainberger F, Barisani G, Aringer M, Nicolakis P, Wiesinger GF, Baghestanian M, Preisinger E, Fialka-Moser V, Weinstabl R 1999 Ultrasound therapy for calcific tendinitis of the shoulder. New England Journal of Medicine 340: 1533–1538

Evidence-Based Care Resource Group 1994 Evidence-based care: 1. Setting priorities: how important is the problem? CMAJ 150: 1249–1254

Feinstein AR, Horwitz RI 1997 Problems in the "evidence" of "evidence-based medicine". American Journal of Medicine 103: 529–535

Glasziou PP, Irwig LM 1995 An evidence based approach to individualising treatment. BMJ 311: 1356–1359

Green S, Buchbinder R, Glazier R, Forbes A 2000 Interventions for shoulder pain (Cochrane Review). In: The Cochrane Library, Issue 4. Oxford: Update Software

Greenhalgh T 1999 Narrative based medicine: narrative based medicine in an evidence based world. BMJ 318: 323–325

Guyatt GH, Sackett DL, Cook DJ 1993 Users' guides to the medical literature: II. How to use an article about therapy or prevention: A. Are the results of the study valid? Journal of the American Medical Association 270: 2598–2601

Herbert RD 2000a Critical appraisal of clinical trials. I: estimating the magnitude of treatment effects when outcomes are measured on a continuous scale. Australian Journal of Physiotherapy 46: 229–235

Herbert RD 2000b Critical appraisal of clinical trials. II: estimating the magnitude of treatment effects when outcomes are measured on a dichotomous scale. Australian Journal of Physiotherapy 46: 309–313

Higgs J, Titchen A 1998 Research and knowledge. Physiotherapy 84: 72–80

Juni P, Witschi A, Bloch R, Egger M 1999 The hazards of scoring the quality of clinical trials for meta-analysis. Journal of the American Medical Association 282: 1054–1060

Lijmer J, Mol B, Heisterkamp S, Bonsel G, Prins M, van der Meulen J, Bossuyt P 1999 Empirical evidence of design-related bias in studies of diagnostic tests. Journal of the American Medical Association 282: 1061–1066

Linde K, Scholz M, Ramirez G, Clausius N, Melchart D, Jonas WB 1999 Impact of study quality on outcome in placebo-controlled trials of homeopathy. Journal of Clinical Epidemiology 52: 631–636

Maher C, Latimer J, Refshauge K 1999 Prescription of activity for low back pain: what works? Australian Journal of Physiotherapy 45: 121–132

Miller JN, Colditz GA, Mosteller F 1989 How study design affects outcomes in comparisons of therapy. II: surgical. Statistics in Medicine 8: 455–466

Moher D, Cook DJ, Jadad AR, Tugwell P, Moher M, Jones A, Pham B, Klassen TP 1999 Assessing the quality of reports of randomised trials: implications for the conduct of meta-analyses. Health Technology Assessment 3: 1–98

Moseley AM, Herbert RD, Sherrington C, Maher CG Evidence for physiotherapy practice: a survey of the Physiotherapy Evidence Database (PEDro). Australian Journal of Physiotherapy (in press)

Moseley AM, Sherrington C, Herbert RD, Maher CG 2001 The extent and quality of evidence in neurological physiotherapy: an analysis of the Physiotherapy Evidence Database (PEDro). Brain Impairment 1: 130–140

National Health and Medical Research Council 2000 How to Use the Evidence: Assessment and Application of Scientific Evidence. Canberra: Biotext

Nilsson LM, Nordholm LA 1992 Physical therapy in stroke rehabilitation: bases for Swedish physiotherapists' choice of treatment. Physiotherapy Theory & Practice 8: 49–55

Ritchie J 1999 Using qualitative research to enhance the evidence-based practice of health care providers. Australian Journal of Physiotherapy 45: 251–256

Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS 1996 Evidence based medicine: what it is and what it isn't. BMJ 312: 71–72


Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB 2000 Evidence-Based Medicine: How to Practice and Teach EBM (2nd ed.). Edinburgh, Scotland: Churchill Livingstone

Sacks H, Chalmers TC, Smith H 1982 Randomized versus historical controls for clinical trials. American Journal of Medicine 72: 233–240

Sherrington C, Herbert RD, Maher CG, Moseley AM 2000 PEDro: a database of randomized trials and systematic reviews in physiotherapy. Manual Therapy 5: 223–226

Stern JM, Simes RJ 1997 Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315: 640–645

Thornton A, Lee P 2000 Publication bias in meta-analysis: its causes and consequences. Journal of Clinical Epidemiology 53: 207–216

Turner P, Whitfield TWA 1997 Physiotherapists' use of evidence based practice: a cross-national study. Physiotherapy Research International 2: 17–29

Turner PA, Whitfield TWA 1999 Physiotherapists' reasons for selection of treatment techniques: a cross-national survey. Physiotherapy Theory & Practice 15: 235–246

Van der Valk R, Dekker J, van Baar M 1995 Physical therapy for patients with back pain. Physiotherapy 81: 345–351

Van der Windt D, van der Heijden G, van den Berg S, ter Riet G, de Winter A, Bouter L 1999 Ultrasound therapy for musculoskeletal disorders: a systematic review. Pain 81: 257–271

Van Poppel MNM, Koes BW, Smid T, Bouter LM 1997 A systematic review of controlled clinical trials on the prevention of back pain in industry. Occupational and Environmental Medicine 54: 841–847

Van Tulder MW, Koes BW, Bouter LM 1997 Conservative treatment of acute and chronic nonspecific low back pain. A systematic review of randomized controlled trials of the most common interventions. Spine 22: 2128–2156

Van Tulder MW, Cherkin DC, Berman B, Lao L, Koes B 1999 The effectiveness of acupuncture in the management of acute and chronic low back pain. A systematic review within the framework of the Cochrane Collaboration Back Review Group. Spine 24: 1113–1123
