
Public release of performance data and quality improvement: internal responses to external data by US health care providers

    H T O Davies

Abstract
Health policy in many countries emphasises the public release of comparative data on clinical performance as one way of improving the quality of health care. Evidence to date is that it is health care providers (hospitals and the staff within them) that are most likely to respond to such data, yet little is known about how health care providers view and use these data. Case studies were conducted in six US hospitals (two academic medical centres, two private not-for-profit medical centres, a group model health maintenance organisation hospital, and an inner city public provider safety net hospital) using semi-structured interviews followed by a broad thematic analysis located within an interpretive paradigm. Within these settings, 35 interviews were held with 31 individuals (chief executive officer, chief of staff, chief of cardiology, senior nurse, senior quality managers, and front line staff). The results showed that key stakeholders in these providers were often (but not always) antipathetic towards publicly released comparative data. Such data were seen as lacking in legitimacy and their meanings were disputed. Nonetheless, the public nature of these data did lead to some actions in response, more so when the data showed that local performance was poor. There was little integration between internal and external data systems. These findings suggest that the public release of comparative data may help to ensure that greater attention is paid to the quality agenda within health care providers, but greater efforts are needed both to develop internal systems of quality improvement and to integrate these more effectively with external data systems.
(Quality in Health Care 2001;10:104–110)

Keywords: quality of health care; quality improvement; comparative performance data; public disclosure

    Background

Quality of care has risen up the health policy agenda in most developed nations over the past two decades or so. Significant quantitative studies have repeatedly shown that the quality of care is often highly variable about a mediocre mean, and that medical errors abound.1–8

Two main strategies to address such deficiencies can be discerned. The first broad strategy encompasses those varied activities internal to health care provider organisations such as continuing medical education, service development, or continuous quality improvement in all its guises. The second approach to forcing quality improvement relates much more to the external pressures that are placed on health care providers, and includes the development of markets or quasi-markets, accreditation, regulatory regimes, and other forms of external accountability. In the past two decades health care in most developed nations, like many other aspects of public life, has seen a steep increase in the amount of external regulatory attention.9–11

External pressure to bring about quality improvements cannot function without quantitative assessments of existing quality. Thus, the rise in external scrutiny has gone hand in hand with the development of an ever greater array of measurement tools for comparing the performance of health care providers. Report cards, provider profiles, comparative health outcomes, consumer reports, and league tables in all shapes and sizes now abound in health care. Although some of these schemes remain confidential, a further trend during the past decade has been the increasingly public nature of the assessment of quality.12 13 Even when reports are not aimed directly at a public audience, they may nonetheless reside in the public domain; more commonly, reports are targeted directly at the public.

Many issues arise in the development and use of such comparative data: for example, data quality, validity, reliability, timeliness, meaningfulness, utility, and potential for dysfunctional effects.14–17 Other debates surround the ability of the public to make sensible use of such data.18–20 Current evidence suggests that most health care stakeholders (for example, enrollees, patients, employees, purchasers) do not actually make much use of comparative performance data,18–22 nor is there much evidence that referring physicians pay much attention to these data when making referral decisions.23 However, some research does suggest that health care providers themselves, those whose care is examined and publicised by external comparisons, may indeed pay some attention to publicly released data.12 24 25 This is clearly an important issue: if health care is to be improved by external scrutiny and the public release of comparisons, then it is change within health care provider organisations that will be needed to deliver such improvements.

This study set out to explore what health care providers think about external comparative data, how these views are changed when such data are made public, and how they respond when the data suggest that all is not well with their practice. In particular, the study sought to shed some light on how (or, indeed, whether) externally generated public reports on health care performance are integrated with internal strategies for identifying and dealing with quality problems.

Quality in Health Care 2001;10:104–110

Department of Management, University of St Andrews, St Andrews, Fife KY16 9AL, UK
H T O Davies, reader in health care policy & management

Correspondence to: Dr H T O Davies [email protected]

Accepted 30 March 2001

www.qualityhealthcare.com

Approach

The study used qualitative case study methods26 27 to explore attitudes to, and reactions to, externally driven comparisons of clinical performance. A qualitative approach was taken because of the desire to expose rich accounts of highly complex and contingent activities. Data gathering primarily involved qualitative semi-structured interviews with key stakeholders located in US health care providers, together with some documentary analysis of internal and external quality reports. The settings accessed, nature of the key informants, interview content, and analysis strategy are all explained below.

    SETTINGS

Data gathering took place in six US hospitals, all located in California. Purposive sampling28 was used to select centres renowned for the quality of their care. This strategy was used in an attempt to identify sites for fieldwork where there was likely to be more quality improvement activity to observe and explore; that is, the interest lay in examining leading edge centres rather than the middle majority or laggards. If the increasing emphasis on external data was bearing any fruit, then it is in these centres that there would be most to explore and learn.

Despite seeking centres with a high reputation, otherwise diverse institutions were included. Thus, two of the six centres selected were academic medical centres of international renown (indicated as Acad in the text), one was a hospital which was part of a group model Health Maintenance Organisation with salaried physicians (GM-HMO), two were private (but not for profit) medical centres (NFP), and one was a public provider safety net hospital (PP). This approach (seeking diverse settings) was taken to help buttress the external validity of the findings; that is, an exploration of provider responses in diverse settings should increase confidence that the findings were not case specific. However, there was never an intention to make detailed comparisons between the individual case studies.

    INFORMANTS

Within each setting interviews were sought with a range of key informants including the chief executive officer (CEO), chief of staff (CS; i.e. senior clinician with management responsibilities), senior quality managers (QM), chief of cardiology (CC; senior managing clinician in the cardiology service line), senior nurse manager (SNM), and two or three front line clinical staff (e.g. senior and junior physicians and the lead nurse within cardiology services).

With the exception of organisation-wide management leaders (CEO, CS, and QM), all informants were drawn from cardiology services. This service line was chosen for a number of reasons. Firstly, there exists a wealth of evidence about appropriate clinical practice in cardiology, for example on the use of many categories of drugs. Secondly, there is ample evidence that actual practice often falls short of ideal practice in a number of areas, for example in the use of low dose aspirin for patients at risk of myocardial infarction or in the timely use of thrombolytic drugs for those suffering from an infarct. Finally, there exists within cardiology both external systems of report cards (for example, the California Hospital Outcomes Project, which reports public data on 30 day mortality after myocardial infarction25 29) as well as confidential data systems designed for internal use (for example, a national register for myocardial infarction supported by Genentech, and the activities of a Health Care Financing Administration (HCFA) sponsored peer review organisation within the State).

    INTERVIEWS

A total of 35 interviews were conducted with 31 individuals from the six hospitals. Interviews were conducted on site and lasted 45–90 minutes. All interviewees (except one) agreed to the interview being taped. In addition, the interviewer (HD) kept contemporaneous notes as a back up and to record additional contextual information. Assurances were given that comments made would not be attributed either to individuals or named institutions.

The interviews were semi-structured in nature, with a standardised preamble being used to introduce the questions. The preamble consisted of a brief description of the areas of interest expressed in as neutral a manner as possible. The bulk of the interview consisted of 31 main questions (supported by pre-set probes), arranged under three broad headings:

+ attitudes and beliefs of health care providers about the role and impact of external comparative data, especially that designed for public release;

+ the use of internal and external data systems to identify and deal with local clinical quality problems;

+ perceptions of the prevailing organisational culture, the place of clinical excellence within this culture, and the extent of organisational trust.

This paper emphasises data gathered in the first two of these areas.

These themes, and the specific questions within them, were developed after extensive reading of the literature in this area and informal discussions with over 40 academic, policy, and practitioner experts (from the USA and the UK). The study interviews were largely open, friendly, and reflective in tone, and an easy rapport almost always developed between the interviewer and interviewee. Most informants seemed both interested in the subject and eager to impart their views.


    ANALYSIS OF DATA

All tapes were reviewed immediately after each interview, with further written notes being prepared as necessary; the interviews were subsequently transcribed. The transcriptions were read through on several occasions by the author to highlight relevant data. Where necessary, the original tapes were replayed and contemporaneous notes were re-examined to clarify meanings and context. A broad thematic analysis,30 located within an interpretive paradigm,31 was used to identify and elaborate key themes. Statements relating to these themes were collated and cross checked to explore both strong themes and diversity within them. As the themes emerged, specific searches were made in the transcripts for countervailing arguments or beliefs and, where these occurred, they are reported. Cross case diversity was not explored.

    Findings

The interviewees were first asked about their overall attitudes towards externally generated comparative performance measures, in particular their views when these data were made public. In the subsequent dialogue, informants were encouraged to reveal their perceptions about the strengths and weaknesses of such systems. Subsequently, interviewees were asked about the quality of care delivered in their own institutions, and were asked to describe the ways in which quality issues were identified and addressed. In particular, the interviewees discussed whether and how they reacted to external reports, and how these external data were integrated into internal quality improvement activities.

OVERALL ATTITUDES TO COMPARATIVE PERFORMANCE DATA

Attitudes to external comparative clinical performance data ranged from open hostility, through indifference and resignation, to reluctant acceptance and even guarded welcome. Negative comments included remarks such as: "I don't think that data that are collated externally have had a positive impact, or any impact. I think they have had zero impact" (QM, PP) and "they're burdensome" (SNM, NFP). More grudging acknowledgements included "It's a pain, but overall the care for the population improves . . . So that's why I think that it [external monitoring] has to be there" (Physician, Acad), and "You get some benchmarks, trusted benchmarks" (QM, Acad), with even some enthusiastic support: "They're welcome because we want to know how we compare . . . it helps us strive for improvement" (CS, NFP).

The range of these responses suggests, at best, ambivalence in welcoming the increasing use of comparative performance measures. Such ambivalence is seen within individuals as well as within organisations: "When it's good news it's 'I love it, it's great, this is me!' If it's not flattering, it's like 'Well, there's something wrong [with the data]'" (CS, NFP).

Respondents who were more accepting of public scrutiny sometimes highlighted the fact that attitudes had shifted somewhat over the past decade (from hostility to greater acceptance) as the availability of comparative performance data had become commonplace: "No, I don't think it bothers us now, we're kind of used to it" (CC, Acad); "We've accepted the reality that it will be public and available" (CC, NFP); and "You just have so many people looking over your shoulder that that's not troubling" (QM, PP).

Respondents were discriminating when welcoming or rejecting external review. For example, some were keen to differentiate between the potential benefits of confidential systems (such as the peer review organisations that provide comparative data within the State), and the much more problematic nature of public release of comparative data (such as the State mandated public release of health outcomes25 29).

    CONCERNS ABOUT THE DATA

Compiling valid, reliable, and meaningful comparative performance data is beset with pitfalls,14 15 32 and those interviewed were quick to raise a range of concerns. The essential fairness of the comparisons, and in particular the extent to which they took account of differences in patient populations or case mix, received considerable criticism: "A lot of the data is specious in that you can explain it away by patient selection etc" (CC, PP); and "Most of the time the data is [sic] not risk adjusted and the general population doesn't understand what this means and so they take it at face value" (QM, NFP).

In addition, others highlighted the poor quality of the underlying data through, for example, inconsistent coding practices: "[This system] relies purely on administrative data, and administrative data's so full of flaws" (QM, Acad); and "[these data are] generated by coders and medical records departments rather than by physicians themselves" (CC, Acad). Thus, apparent differences in performance were dismissed as artefacts of the data systems rather than seen as real clinical differences, and responses were often more concerned with reforming data collection and processing than addressing clinical care issues.

Finally, the long lags between data gathering and the production of official reports came in for considerable and scathing criticism: "Someone may ask you to respond to that information, but it's so old that what we're doing now has nothing to do with what was happening back then" (QM, NFP); and "It takes so long to develop the model and put the data through from all those organisations that, by the time we get it, it's meaningless" (SNM, NFP). At the extreme, delays in the data reaching a public audience bordered on the farcical: "I was pretty astonished to read in a Sunday newspaper that [named unit] was considered probably the best in the city. I always felt it was very deserved. However, the unit had closed three years before the article was written!" (CEO, NFP).

Notwithstanding the many negative comments on data quality, meaningfulness, and timeliness, there was a belief among some of the respondents that improvements were being seen: "over the years the data has gotten better in terms of risk adjustment" (QM, NFP), as well as a grudging acceptance that the deficiencies affected all providers similarly: ". . . it's consistent, we all kinda use it in the same way and recognise its, uh . . . foibles" (SNM, NFP).

    WHAT GETS MEASURED GETS ATTENTION

For all the accusations about the lack of meaning or relevance of the external data, many respondents expressed further concerns that, nonetheless, these data might distort clinical priorities: "We're spending an awful lot of time and a very large amount of very finite resources to create a very elegant model [of post-MI mortality] that really looks at such a small part of what we should be concerned about" (QM, Acad). Thus, even before thoughts were turned to how external data might be used to improve care, study participants worried that "what gets measured gets attention". Clinical issues highlighted by external data sets were thought to attract more institutional attention than was perhaps warranted, perhaps to the detriment of other unmonitored services: "There's a different impetus when you know that the data has the potential to be released" (QM, Acad) and "It really fries people to do something to meet the task, rather than for clinically appropriate reasons" (QM, GM-HMO).

These concerns were not necessarily just academic. One provider reported that it had been pressured by an employers' consortium purchasing group over some of the comparative data and had resisted what it saw as inappropriate priorities: "So we took the data back to [the purchaser] and said 'That goal is not necessarily desirable. You're pushing people to do something counterproductive'" (QM, GM-HMO). In sum, despite what was often seen as the limited information content of these data sets, fears were raised repeatedly that such data might have an inappropriate and disproportionate impact.

QUALITY OF CARE ASSESSMENTS: MEASURED AND PERCEIVED

Interviews then moved on to discuss the level of current quality in the institution concerned, and the means by which quality problems were uncovered and addressed. Initially, most respondents were keen to volunteer that, although there may be quality problems in health care generally, their own institutions were largely exemplary: "Fortunately, this is a good hospital" (CS, NFP); "We do very well in whatever we have looked at" (QM, PP); "It's my absolute belief that we are top in all these areas and that we do a much better job than everybody else" (CS, GM-HMO); and, the ultimate accolade: "This is a good place. I would bring my Mom" (QM, Acad).

Given the level of self-belief in this sample (who were indeed selected because of their high reputations), some welcomed the publication of performance data as a means of extending institutional reputation and for marketing purposes: "I think that [comparative data] are very important to the people that buy our services. It's a very important marketing tool. It's wonderful to say we're number one on all of these things" (CS, GM-HMO).

However, on closer questioning, some interviewees admitted that the external data had not always been so encouraging for their institution, and indicated that external data highlighting potential deficiencies were sometimes influential in prompting further internal investigations: "It's a reality test to assumptions that we might make internally" (QM, GM-HMO); and "I think it really forces you to take a real good look" (CS, NFP). The fact that data were made public was seen as crucial in focusing organisational attention: "They [i.e. comparative data] get reported in the media, so you have to respond to them, you can't ignore them" (SNM, NFP); and, most memorably: "It's a gun to your head" (Physician, Acad).

External comparative data do provide an assessment of performance yet, in identifying quality problems, these data were often seen as offering just one perspective among several. Several respondents raised the importance of softer qualitative judgements in making quality assessments: "It's the opinions of peers that matter more than anything else about quality. Who do people go to for consults?" (CS, GM-HMO); and "It's largely perception . . . our perception that there's something awry" (Physician, Acad). Thus, in identifying targets for quality improvement initiatives, it is the subjective and the informal that are often more influential than the external data: "Clinicians come in to me and say 'I think there's something here, and I think it's bigger than this one patient'" (QM, NFP) and "We benefit from having multiple disparate inputs. When somebody out on the battlefront identifies a problem, then that's valuable" (CS, NFP). Some went as far as to assert that formal comparative data served merely to confirm such impressionistic judgements: "I think it merely reinforces already held opinions just based on other factors, you know, day-to-day experience" (CS, Acad).

    ACTING ON EXTERNAL PERFORMANCE DATA

Notwithstanding widespread concerns about the meaningfulness of external comparisons, providers do at times respond to the public release of comparative data. Given the importance they attach to public perceptions, this is perhaps unsurprising. Action seemed most likely when an organisation was seen to be performing poorly on any given external measure: "Being an outlier does motivate performance. There's no doubt about that" (QM, GM-HMO); "Any time we do get really poor results, we will respond very um . . . very conscientiously" (QM, PP); and "Last time around we went from being the best to the worst in one fell swoop. It obviously got our attention more, shall we say, than if we had been the best" (Physician, GM-HMO).

Action to improve health care quality seemed rather less likely if data showed the organisation to be a middle ranker: "External indicators only have significance to us when we're outside the norm, we'll tolerate middle of the pack" (CS, NFP); and "If you're on the average it doesn't give your hospital or your physicians much of an incentive to look into the areas, so that's not terribly helpful" (QM, PP). However, there were also many instances cited where such complacency would not prevail: "I don't think in the middle of the range is acceptable: we're striving to be the best" (SNM, NFP); "If we're in the middle of the pack it can be very upsetting" (CS, GM-HMO); and "[whether we took action] would depend on our own perception as to whether [the data] were an accurate reflection of what we think is happening" (QM, NFP). However, a belief that actions could occur in the absence of an identified quality problem may be optimistic. When comparative data were largely unexceptional, these data tended not to be seen by front line workers but were filtered out by higher echelons within the organisation: "I wouldn't even see it, unless it was bad" (Physician, PP).

RELATING EXTERNAL DATA TO INTERNAL QUALITY IMPROVEMENT

A strong theme to emerge from many interviews was that external data might "kick start" a process of internal enquiry, but that they were insufficient in and of themselves for complete understanding: "[External data] are the start of a process, you know, that really gets the ball rolling, in terms of an [internal CQI] investigation" (SNM, GM-HMO); and "We respond more to our own data, I think" (CC, Acad). So linkages between external data and internal quality improvement activities were generally weak. The weaknesses of these linkages arise from two distinct sources. Firstly, external data were generally found to be substantially out of date and thus lacking in relevance: "If you're not doing it for yourself [collecting data] and reacting to it immediately, there's a whole time lag and opportunities for improvement that you've missed" (CS, NFP), and "[We] definitely prefer in-house data . . . so that everything is very fresh" (QM, NFP). The second limitation of external data was the only very limited amount of information available, particularly when the external comparisons focused on outcomes rather than processes: "I believe the in-house data more. You just don't get the details [from external data]" (QM, GM-HMO); and "It's the in-house data [that] drives us more than the outside data. I think it's also better data and it's more focused; it has many more elements to it" (CC, Acad).

In these accounts, therefore, external public data gave some impetus, but it was internal systems (or confidential collaborative benchmarking ventures) that provided the necessary clinical detail to allow the unpacking and fixing of defective clinical processes: "We use flowcharting to really drill down on the issue" (SNM, NFP); and "Our best successes [in using data to improve quality] were our own internal ones" (CS, NFP).

Thus, external publicly reported comparative outcomes were seen as sometimes helpful in indicating priorities for further investigation, but they needed to be complemented by home grown, clinically owned, process based data systems. Also required was the provision of practical resources for the analysis, presentation, and interpretation of such data, and a culture that encouraged, valued, and supported continuous quality improvement processes: "We have wonderful, wonderful motivated people, but if we didn't have the resources to do this, we couldn't. So there is resource. There's not only people committed to excellence but there's resources committed to excellence. That's very important." (CS, NFP); and "All the data in the world isn't gonna help if the people at the top don't wanna use it or don't have the resources to use it" (CS, NFP). In the absence of good local data and supportive resources, little quality improvement activity was seen: "We don't do it [benchmarking] and we don't have the resources to do it . . . really, no way, since we don't have ongoing databases" (QM, PP).

ENCOURAGING SERVICE DEVELOPMENT AND PRACTITIONER CHANGE

In none of the organisations were significant financial incentives used as levers for change. More commonly commented upon was the fact that reward structures were sometimes disincentives to high quality, for example salaried physicians attracting additional workload as a consequence of a reputation for excellence, or fee-for-service reimbursement encouraging throughput over excellence: "The major emphasis is on access and throughput . . . I think that outcomes are secondary" (QM, PP).

Although better alignment of physician rewards was thought sensible, few respondents were interested in using financial incentives to drive practitioner change. Instead, the key issues for pressuring change were seen as credible comparative data on quality problems and detailed exploration of clinical processes, coupled with professional and institutional pride: "So I do see physicians taking it very seriously, they do want that data to reflect favourably on them, there's a tremendous pride in their work" (CS, NFP) and "If you are sort of an outlier, that's going to, without anybody saying anything, influence your behaviour" (Physician, GM-HMO). Thus, identifying and dealing with quality issues were seen as indicators of peer esteem and good professional practice: "If you've got the best outcomes [and] least complications, you have a higher standing with your peers. And if you know you've got a problem and you address it, that improves your standing . . . They [physicians] are also very competitive. They want to do the right thing, and they want to do it as well or better than everybody else" (CS, NFP).

The greater openness fostered by the report card movement (in itself legitimising a greater openness within institutions) was thus seen as a very important means of encouraging more reflective practice. The availability of good comparative data can then work to enhance and channel intrinsic motivations: "Physicians are self-correcting, they're very competitive, they always want to be the best. If you show them data and they're not as good as their partner, they tend to try and figure out themselves what's going on . . . We've been trying to use it [comparative data] in a non-punitive, self-correcting mode" (QM, NFP).


    Conclusions

The public release of comparative performance data has grown to greater prominence in health care in many countries. Public policy and considerable private sector activity have both contributed to these trends,13 but relatively little research is available to shed much light on whether and how such a strategy might improve health care quality. Indeed, although many rationales are available and have been articulated, current schemes tend to be vague about the purported mechanisms of action whereby public release will improve health care quality.12 13

This study sought to get inside health care provider organisations to explore the dynamics as they respond to more public scrutiny of what have hitherto been confidential professional matters. It is because current best evidence suggests that health care providers should be the key targets for publicly released comparative performance data12 that it is important to understand the mechanisms by which such data might be actioned.

    The key findings from these interviews can

    be summarised as follows:+ The growing availability of comparative per-formance data, both internally and exter-nally driven systems, have made quality ofcare issues much more visible than hitherto,hoisting them higher up the providersagenda.

    + External data systems turn up the heat onhealth care providermost especially sowhen these data are made publicandencourage them to examine the clinicalissues covered by the measures.

    + The accuracy, validity, and timeliness ofexternal data sets are widely called intoquestion, severely limiting their legitimacyin the eyes of health care providers.

    + Despite perceptions about the inadequacyof the measures, many providers are worriedthat what gets measured gets attentionand thus raise fears that disproportionateattention may be paid to those clinical areason which data are publicly released.

    + External data have greatest impact whenthey indicate that performance is below thatexpected. For some providers, anything lessthan exemplary performance creates a desirefor action. For many others, however, solong as the external data do not indicate thatthey are significantly worse than average, noactions would result.

    + Wherever possible, providers seek verification of any problems identified from outside by reference to internal data sets and to subjective assessments based on "soft" data.33 Internal data sets tend to cover clinical processes in considerable detail, in contrast to external systems, which often focus on health outcomes.

    + Peer pressure, professional pride, and the relentless logic of credible comparative data were seen as the key drivers of changes in individual behaviour, rather than financial or other external incentives.

    + The public release of comparative data offers one way of building pressure on health care providers to prioritise health care quality issues.

    Nonetheless, there is a considerable way to go before these data will be seen as both timely and credible when they appear to criticise local practice. In practice, attempting to win over providers with more credible data, or attempting to shorten the data delivery time to one that is acceptable, may be difficult, and may not even be necessary. This study suggests that the data need only be credible enough to prompt further local investigations. What is clear is that effective local quality improvement activity is predicated on the availability of detailed, process based clinical information and the resources to enable its exploration. Yet currently there is little connection (never mind integration) between internal and external data systems. This would seem to be a lost opportunity. The growing availability of voluntary, bottom up, clinically driven comparative databases, which emphasise a combined analysis of both process and outcomes, may offer some potential to bridge this gap.34

    Caution should be exercised in extrapolating from this analysis to other nations or contexts.35 A study of this type has a number of important limitations. Most obviously, the study took place in California at a time when health care providers were under considerable pressure to cut costs as aggressive managed care began to bite. Nonetheless, most health care providers in developed countries are familiar with stringent financial circumstances. In addition, the accounts presented reflect only the perceptions and conscious constructions of the stakeholders interviewed. Only very limited corroboration of the accounts was sought, for example through sight of quality improvement reports or through cross referencing between interviews in the same organisation. The potential certainly exists for these accounts to be inaccurate or incomplete. Nonetheless, all the participants were willing volunteers for the study (there were no significant refusals) and gave every sign of being engaged with, and thoughtful about, the subject. The academic nature of the study and the independence of the interviewer also contributed to a spirit of open enquiry.

    Despite these notes of caution, there are of course many similarities across health care providers even in different countries. The public release of comparative performance data is an international phenomenon, and commonality of experience in responding to these data may be as important as diversity. Thus, the findings from this study should stimulate debate about the appropriate development of comparative data systems in many countries and settings.

    The public release of comparative clinical performance data has become a de facto health policy in most developed nations. Whereas previous debates have largely revolved around the technical issues of data collection, analysis, and interpretation,36 we now need to be much more concerned with how such data are used, for good or ill, within health systems. For example, it is still far from clear that any benefits arising from the public release



    of comparative data will outweigh both the costs and the harms incurred. Since improved clinical processes within health care provider organisations will be the main way that real improvements are delivered, it is here that we must seek the evidence. It is here too that we need a better understanding of the dynamic interactions between data, organisational systems, and individual health care professionals.

    The author would like to thank all of the interviewees, both within the health care provider organisations and elsewhere, who gave so generously of their time and expertise. During the development of this research Huw Davies was a Harkness Fellow in Health Care Policy at the University of California, San Francisco (UCSF). Thus, this work was supported by The Commonwealth Fund, a New York City based private independent foundation. However, the views presented here are those of the author and not necessarily those of The Commonwealth Fund, its directors, officers or staff. Huw Davies is sincerely grateful to The Fund and the Institute for Health Policy Studies (UCSF) for the opportunities afforded to him by the Harkness Fellowship. In addition, Alison Powell assisted with some of the transcript analysis, for which the author is duly grateful.

    1 Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Quarterly 1998;76:517-63.

    2 Millenson ML. Demanding medical excellence. Chicago: University of Chicago Press, 1997.

    3 Chassin MR, Galvin RW. The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA 1998;280:1000-5.

    4 Bogner MS. Human error in medicine: a frontier for change. In: Bogner MS, ed. Human error in medicine. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1994:373-83.

    5 The President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Quality first: better health care for all Americans. Washington: US Government Printing Office, 1998.

    6 Kohn LT, Corrigan JM, Donaldson M. To err is human: building a safer health system. Washington: Institute of Medicine, 1999.

    7 Leape LL, Berwick DM. Safe health care: are we up to it? BMJ 2000;320:725-6.

    8 Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001;322:517-9.

    9 Brennan TA. The role of regulation in quality improvement. Milbank Quarterly 1998;76:709-31, 512.

    10 Power M. The audit society: rituals of verification. Oxford: Oxford University Press, 1997.

    11 Hood C, James O, Jones G, et al. Regulation inside Government. Oxford: Oxford University Press, 1999.

    12 Marshall MN, Shekelle PG, Leatherman S, et al. The public release of performance data. What do we expect to gain? A review of the evidence. JAMA 2000;283:1866-74.

    13 Davies HTO, Marshall MN. Public disclosure of performance data. Does the public get what the public wants? Lancet 1999;353:1639-40.

    14 Goldstein H, Spiegelhalter DJ. League tables and their limitations: statistical issues in comparisons of institutional performance. J R Stat Soc A 1996;159:385-443.

    15 Davies HTO, Crombie IK. Interpreting health outcomes. J Eval Clin Pract 1997;3:187-200.

    16 Smith P. On the unintended consequences of publishing performance data in the public sector. Int J Public Admin 1995;18:277-310.

    17 Davies HTO, Lampel J. Trust in performance indicators. Quality in Health Care 1998;7:159-62.

    18 Hibbard JH, Jewett JJ. Will quality report cards help consumers? Health Affairs 1997;16:218-28.

    19 Hibbard JH, Jewett JJ, Engelmann S, et al. Can Medicare beneficiaries make informed choices? Health Affairs 1998;17:181-93.

    20 Jewett JJ, Hibbard JH. Comprehension and quality care indicators: differences among privately insured, publicly insured, and uninsured. Health Care Financing Rev 1996;18:75-94.

    21 Hibbard JH, Jewett JJ, Legnini MW, et al. Choosing a health plan: do large employers use the data? Health Affairs 1997;16:172-80.

    22 Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998;279:1638-42.

    23 Schneider EC, Epstein AM. Influence of cardiac surgery performance reports on referral practices and access to care. A survey of cardiovascular specialists. N Engl J Med 1996;335:251-6.

    24 Romano PS, Rainwater JA, Antonius D. Grading the graders: how hospitals in California and New York perceive and interpret their report cards. Med Care 1999;37:295-305.

    25 Rainwater JA, Romano PS, Antonius DM. The California Hospital Outcomes Project: how useful is California's report card for quality improvement? Joint Commission J Qual Improvement 1998;24:31-9.

    26 Stake RE. The art of case study research. Thousand Oaks: Sage Publications, 1995.

    27 Yin RK. Case study research: design and methods. In: Applied Social Research Methods. Series Volume 5. Thousand Oaks: Sage Publications, 1994.

    28 Patton M. How to use qualitative methods in evaluation. London: Sage Publications, 1990.

    29 Romano PS, Zach A, Luft HS, et al. The California Hospital Outcomes Project: using administrative data to compare hospital performance. Joint Commission J Qual Improvement 1995;21:668-82.

    30 Denzin NK, Lincoln YS. The handbook of qualitative research. London: Sage Publications, 2000.

    31 Locke K. Grounded theory in management research. London: Sage Publications, 2001.

    32 Iezzoni LI. The risks of risk adjustment. JAMA 1997;278:1600-7.

    33 Goddard M, Mannion R, Smith PC. Assessing the performance of NHS hospital trusts: the role of "hard" and "soft" information. Health Policy 1999;48:119-34.

    34 Black N. High-quality clinical databases: breaking down barriers. Lancet 1999;353:1205-6.

    35 Davies HTO, Marshall MN. UK and US health care systems: divided by more than a common language. Lancet 2000;355:336.

    36 Nutley SM, Smith PC. League tables for performance improvement in health care. J Health Services Res Policy 1998;3:50-7.
