Searching for unpublished trials in Cochrane reviews may not be worth the effort


COCHRANE METHODS
Editors: Sally Hopewell, Mike Clarke and Julian PT Higgins
www.thecochranelibrary.com
ISSN 2044–4702
September 2010

Editors

Sally Hopewell, Co-Scientific Editor and Technical Editor
UK Cochrane Centre, National Institute for Health Research
Summertown Pavilion, Middle Way
Oxford OX2 7LG, UK
Tel: +44 1865 516300
[email protected]

Mike Clarke, Co-Scientific Editor
UK Cochrane Centre, National Institute for Health Research
Summertown Pavilion, Middle Way
Oxford OX2 7LG, UK
Tel: +44 1865 516300
[email protected]

Julian PT Higgins, Co-Scientific Editor
MRC Biostatistics Unit, Institute of Public Health
University Forvie Site, Robinson Way
Cambridge CB2 0SR, UK
Tel: +44 1223 [email protected]

Registered Methods Groups

Adverse Effects
Yoon Loke, Convenor
School of Medicine, Health Policy and Practice
University of East Anglia
Norwich NR4 7TJ, UK
Tel: +44 1603 [email protected]@cochrane.org

Applicability and Recommendations
Holger Schünemann, Convenor
Clinical Epidemiology and Biostatistics
McMaster University
1200 Main Street W
Hamilton, Ontario L8N 3Z5, Canada
Tel: +1 905 5259140 ext 24931
[email protected]

Bias
David Moher, Convenor
Ottawa Health Research Institute
501 Smyth Road, Box 208
Ottawa, Ontario K1H 8L6, Canada
Tel: +1 613 7377600 ext [email protected]/bmg

Economics
Ian Shemilt, Convenor
School of Medicine, Health Policy and Practice
University of East Anglia
Norwich NR4 7TJ, UK
Tel: +44 1603 591086
[email protected]

Equity
Erin Ueffing, Co-ordinator
Centre for Global Health
Institute of Population Health
University of Ottawa
207-1 Stewart Street
Ottawa, Ontario K1N 6N5, Canada
Tel: +1 613 5625800 ext 1963
erin.ueffi [email protected]@cochrane.org

Individual Patient Data Meta-analysis
Larysa Rydzewska, Co-ordinator
Meta-analysis Group
MRC Clinical Trials Unit
222 Euston Road
London NW1 2DA, UK
Tel: +44 207 [email protected]/cochrane/ipdmg

Information Retrieval
Alison Weightman, Co-Convenor
Head of Library Service Development
Information Services
Cardiff University
Cardiff CF24 0DE, UK
Tel: +44 2920 875693
weightmanal@cardiff [email protected]

Non-Randomised Studies
Barney Reeves, Convenor
Bristol Heart Institute
University of Bristol
Level 7, Bristol Royal Infirmary
Marlborough Street
Bristol BS2 8HW, UK
Tel: +44 117 9283143
[email protected]

Patient Reported Outcomes
Donald Patrick, Convenor
Department of Health Services
Seattle Quality of Life Group / Center for Disability Policy and Research at the University of Washington
Box 359455
Seattle, Washington 98195-9455, USA
Tel: +1 206 [email protected]

Prognosis
Katrina Williams, Convenor
Sydney Children's Hospital
University of New South Wales
Sydney Children's Community Health Centre
Cnr Avoca and Barker Street
Randwick, NSW 2031, Australia
Tel: +61 2 93828183
[email protected]
www.prognosismethods.cochrane.org/en/index.html

Prospective Meta-analysis
Lisa Askie, Convenor
NHMRC Clinical Trials Centre
University of Sydney
Locked Bag 77
Camperdown NSW 1450, Australia
Tel: +61 2 [email protected]@cochrane.org

Qualitative Research
Jane Noyes, Convenor
Centre for Health Related Research
School of Healthcare Sciences
College of Health & Behavioural Sciences
University of Wales, Bangor
Wales LL57 2EF, UK
Tel: +44 1248 [email protected]/cqrmg

Screening and Diagnostic Tests
Constantine Gatsonis, Convenor
Center for Statistical Studies
Brown University, Box G-H
Providence, RI 02912, USA
Tel: +1 401 8639183
[email protected]

Statistical Methods
Doug Altman, Convenor
Centre for Statistics in Medicine
Wolfson College, University of Oxford
Linton Road
Oxford OX2 6UD, UK
Tel: +44 1865 284401
[email protected]

The opinions expressed in the newsletter do not necessarily reflect the opinion of the editors, The Cochrane Collaboration, or anyone other than the authors of the individual articles. Cochrane Methods should be cited as: Hopewell S, Clarke M, Higgins JPT (editors). Cochrane Methods. Cochrane DB Syst Rev 2010 Suppl 1: 1–29.


Table of Contents

From the Editors

Articles
  A new infrastructure for Cochrane Methods
  Methods Application and Review Standards (MARS) Working Group
  Revised core functions of Methods Groups
  Training Working Group
  Risk of Bias tool evaluation: process, survey results and recommendations
  Core reporting of outcomes in effectiveness trials
  Priority setting in The Cochrane Collaboration

Published Methodological Research
  The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews
  Risk of bias versus quality assessment of randomised controlled trials: cross sectional study
  Retrieving randomized controlled trials from MEDLINE: a comparison of 38 published search filters
  Systematic reviews of low back pain prognosis had variable methods and results: guidance for future prognosis reviews
  Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews
  CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials
  The PRISMA Statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration
  AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews
  An evidence-based practice guideline for the peer review of electronic search strategies

Empirical Studies within the Collaboration
  Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: methodological study
  An encouraging assessment of methods to inform priorities for updating systematic reviews
  Reporting and methodologic quality of Cochrane Neonatal Review Group systematic reviews
  Analysis of the reporting of search strategies in Cochrane systematic reviews
  Searching for unpublished trials in Cochrane reviews may not be worth the effort
  An empirical assessment of the validity of uncontrolled comparisons of the accuracy of diagnostic tests
  Including evidence about the impact of tests on patient management in systematic reviews of diagnostic test accuracy

Cochrane Methodology Review Group
  Cochrane Methodology review on recruitment strategies for randomized trials

Cochrane Methods Groups
  Cochrane Adverse Effects Methods Group
  Cochrane Bias Methods Group
  Campbell and Cochrane Economics Methods Group
  Campbell and Cochrane Equity Methods Group
  Cochrane Individual Patient Data Meta-analysis Methods Group
  Cochrane Information Retrieval Methods Group
  Cochrane Non-Randomised Studies Methods Group
  Cochrane Prognosis Methods Group
  Cochrane Qualitative Research Methods Group
  Cochrane Screening and Diagnostic Tests Methods Group
  Cochrane Statistical Methods Group

Campbell Collaboration Methods Groups (C2)

Future Meetings
  Joint Colloquium of the Cochrane and Campbell Collaborations
  COMET Symposium

From the Editors

Welcome to the first issue of Cochrane Methods, the official annual newsletter for methodological issues within The Cochrane Collaboration. Many of you will have seen and contributed to previous issues of the Cochrane Methods Groups Newsletter, which has been in circulation since 1997. After more than 13 years, we have redesigned and renamed the newsletter with the aim of giving greater prominence to the work of Cochrane Methods Groups within the Collaboration, and to help raise their profile more widely.

The Cochrane Collaboration is an international, independent, not-for-profit organization of over 27,000 contributors from more than 100 countries, dedicated to making up-to-date, accurate information about the effects of health care readily available worldwide. Its contributors work together to produce systematic reviews of healthcare interventions, diagnostic tests and methodology, published online in The Cochrane Library. These reviews help providers, practitioners and patients make informed decisions about their own health care and that of others. The role of the Cochrane Methods Groups is primarily to provide policy advice to The Cochrane Collaboration on how the validity and precision of the Cochrane reviews it produces can be improved. In addition, Methods Groups may also carry out additional tasks such as providing training, peer review and specialist advice, contributing to software developments, or conducting methodological research aimed at improving the quality of Cochrane reviews.

This new-look newsletter highlights the work of Methods Groups and other methodological initiatives within The Cochrane Collaboration. It contains news of relevance to Methods Groups, structured abstracts and commentaries on topical methodological issues, reports of recent methodological research from within the Collaboration, details of Cochrane Methodology reviews and updates on the work of individual Methods Groups.

This issue of Cochrane Methods continues to focus on some of the challenging issues facing the methodology of Cochrane and other types of systematic reviews. This year has seen the introduction of a number of changes to the structure and role of Methods Groups within The Cochrane Collaboration, and we begin the issue with a series of short articles outlining some of these important changes, as well as providing news on the recent evaluation of the Cochrane Risk of Bias tool and an update on the training needs of review authors.

As with previous editions, we also include a series of structured abstracts and commentaries on topical methodological issues. This year we include a study examining the relevance of outcome reporting bias (the selective reporting of specific study results) and its impact on Cochrane reviews, and another study looking at methods used for indirect comparisons in systematic reviews. We also include commentaries on two influential reporting guidelines, namely the PRISMA Statement for reporting of systematic reviews and a further revision of the CONSORT Statement for reporting of randomized trials. The implications of these two guidelines and their relevance to The Cochrane Collaboration and Cochrane reviews are discussed.

We are, as ever, very grateful to the many people who have contributed to this newsletter. We should also like to thank The Cochrane Collaboration and the UK Cochrane Centre (part of the National Institute for Health Research) for providing resources to produce it.

Finally, we should very much welcome your comments on the new-look newsletter and your suggestions for future content.

Sally Hopewell, Mike Clarke and Julian PT Higgins (Editors of Cochrane Methods)


Articles

A new infrastructure for Cochrane Methods

Julian Higgins
Methods Groups' representative on the Cochrane Collaboration Steering Group

Correspondence to: [email protected]
MRC Biostatistics Unit, Institute of Public Health, Cambridge, UK.

Several changes have taken place over the last year relating to the ways in which methodologists contribute to the working of The Cochrane Collaboration. The Steering Group funded a discussion meeting about methods in August 2009, which was generously hosted by Jon Deeks and colleagues in Birmingham, UK. Each Methods Group was represented, and we discussed the recommendations of the recent Strategic Review as well as a series of suggestions on how the methods infrastructure might be revised to enhance collaboration and better allow us to contribute to the production of high quality systematic reviews. The four new initiatives described below arose out of this meeting and subsequent discussions in Singapore and elsewhere. The two articles that follow this one describe two further aspects of the new infrastructure: the Methods Application and Review Standards (MARS) Working Group and important changes to the core functions of Methods Groups.

Cochrane Methods Board

The Cochrane Methods Board brings together people in high level methods roles in The Cochrane Collaboration. This includes the Co-Convenors of all Methods Groups, Co-ordinating Editors of the Methodology Review Group, Co-Editors of Cochrane Handbooks, the Editor in Chief, representatives of the Diagnostic Test Accuracy Reviews and Overviews initiatives, and people who represent methods on various other committees. While the full membership of the Board is inclusive, the main entities that contribute to the Board constitute its voting membership (currently comprising 14 Methods Groups, two Handbooks, the Methodology Review Group and the Methods Groups' representative on the Cochrane Collaboration Steering Group).

The purpose of the Methods Board is to provide a broad forum for discussion and formulation of recommendations on methods for Cochrane reviews and other methodological issues faced by The Cochrane Collaboration. It has taken over responsibility from the Handbook Advisory Group for developing methodological guidelines for preparing Cochrane reviews. In addition, it will be asked to approve major methods-related documents, such as shared training materials.

Methods Executive

The Methods Executive comprises eight members of the Methods Board, who will represent the Board on a day-to-day basis. Current members are Mike Clarke (Methodology Review Group), Julian Higgins (Co-Convenor; Methods Groups' representative on the Steering Group), Mariska Leeflang (Screening and Diagnostic Tests Methods Group), Carol Lefebvre (Information Retrieval Methods Group), Jane Noyes (Co-Convenor; Qualitative Research Methods Group), Holger Schünemann (Applicability and Recommendations Methods Group), Ian Shemilt (Campbell and Cochrane Economics Methods Group) and Jonathan Sterne (Bias Methods Group). The Methods Executive acts as a conduit for communication and information flow between the Methods Board and the Steering Group, the Editor in Chief and other Cochrane Executives or Executive Groups (such as those representing the Co-ordinating Editors, Managing Editors, Trials Search Co-ordinators and Fields).

Handbook Editorial Advisory Panel

As mentioned above, the responsibility of the former Handbook Advisory Group for developing methodological guidance for the conduct of Cochrane reviews has moved to the Methods Board. The Handbook Co-Editors are now supported by a smaller group focussed on implementation, rather than development, of this guidance. This new Handbook Editorial Advisory Panel (HEAP) also brings together the Editors of the Interventions Handbook with those of the Diagnostic Test Accuracy Handbook in order to maximize sharing and consistency of guidance across different types of Cochrane review. HEAP includes representation from systematic review methodologists, authors and editorial bases.

Methods Co-ordinator

At its recent meeting in Auckland, the Steering Group approved the creation of a new post of Methods Co-ordinator. This person will provide support to Methods Groups, to the three committees above and to the MARS Working Group. In addition, he or she will work with MARS and the Editor in Chief (and others) on facilitating a range of projects to assess and improve the methodological quality of Cochrane reviews. Examples include a collation of examples of implementation of methods, development of frequently asked questions about methods (with answers), and assisting with the creation of networks of Cochrane Review Group-based individuals with interests in similar methodological areas.

The various changes summarised above illustrate that this is an exciting time, methodologically, for the Collaboration. I very much look forward to working with the new Methods Co-ordinator and with everyone else to ensure that this new infrastructure is appropriately focussed on the continual improvement of the methodological quality of our systematic reviews.

Methods Application and Review Standards (MARS) Working Group

Julian Higgins and Rachel Churchill
Co-Convenors, Methods Application and Review Standards Working Group

Correspondence to: [email protected]
MRC Biostatistics Unit, Institute of Public Health, Cambridge, UK.

When the substantially revised version 5 of the Cochrane Handbook for Systematic Reviews of Interventions was introduced alongside Review Manager 5 in 2008, much new methodological guidance became available to review authors. This included new approaches for assessing and addressing risk of bias in included studies, the introduction of 'Summary of findings' tables, and new guidance on incorporating non-randomized studies, adverse effects, economic outcomes, qualitative evidence and Cochrane overviews, among other special topics. Unfortunately, Review Group editorial teams were required to support review authors trying to implement the new guidance before they were properly familiar with it and could consider the implications for their editorial processes. In response to the challenges experienced by Review Groups in supporting the roll-out of these new methods consistently across The Cochrane Collaboration, a working group was set up to facilitate interaction between Methods Groups and Review Groups. Initially known as the 'CoEds-Methods Working Group', this group is now established as the Methods Application and Review Standards (MARS) Working Group. It also provides support to the Editor in Chief and Cochrane Editorial Unit more widely. With the formation of the Co-ordinating Editors and Methods Boards and their executive groups, as well as the Managing Editors executive group, an efficient and productive collaboration is now possible.

The purpose of the MARS Working Group is to enhance the quality and relevance of Cochrane reviews. It aims to achieve this by providing a forum in which Methods Group representatives, Review Group representatives and the Cochrane Editorial Unit can discuss the introduction of new methods, the specification of methodological quality standards, and processes for monitoring and improving review quality. The current terms of reference for the MARS Working Group are: (i) to enhance communication and understanding between Methods Groups, Review Groups and the Cochrane Editorial Unit; (ii) to ensure, through involvement from an early stage, that methodological guidance (developed by the Methods Board for implementation in the Cochrane Handbook and in Review Manager) is suitable for implementation in Cochrane reviews; (iii) to identify strategies to assist Review Groups to implement the guidance in the Handbook; (iv) to explore and agree on minimum methodological quality standards for Cochrane reviews, and to facilitate their implementation across Review Groups; (v) to develop processes for monitoring and improving the methodological quality of Cochrane reviews; (vi) to consider the implications of surveys and empirical studies relating to the methodological quality of Cochrane reviews; and (vii) to discuss emerging research on methods and standards for the reporting of systematic reviews and their relevance to Cochrane reviews.

On a practical level, the MARS Working Group is playing a key role in facilitating the development of minimum methodological standards for Cochrane reviews, and is overseeing a project to develop a suite of exemplar protocols and reviews. Both of these projects will involve all (or most) of the Methods Groups. In particular, we plan to invite all Methods Groups to provide feedback on the exemplar protocols and reviews to ensure that they illustrate the 'gold-standard' application of the guidance in the Cochrane Handbook. The complementary activities of the Training Working Group (TWG) will enhance the organisation and delivery of core training by Cochrane Centres. The involvement of several members of the TWG in MARS ensures that training requirements relating to methodological guidance are also more effectively communicated.

The members of the MARS Working Group include methodologists, Co-ordinating Editors, the Editor in Chief, Co-Editors of the Cochrane Handbook, a Co-Convenor of the Training Working Group, a Managing Editor from the Managing Editors Executive and a representative from the Information Management System team. The Working Group is likely to be co-convened by a methodologist and a Co-ordinating Editor (currently the two of us).

Revised core functions of Methods Groups

Ian Shemilt

Correspondence to: [email protected]
Economics Group, University of East Anglia, Norwich, UK.

The Cochrane Collaboration Steering Group has recently approved a revised set of core functions for Methods Groups, in parallel with recent changes to the infrastructure that supports methodological input to the Collaboration and its reviews. Essentially, the revised core functions reflect the idea that the focus of needed methodological input varies across the areas of methodology covered by each Methods Group, so that more flexibility is needed in the framework used by Methods Groups to prioritize their activities and outputs.

Under the revised framework, three core functions apply to all Methods Groups:

• Providing policy advice.
• Serving as a forum for discussion.
• Ensuring that the Group functions as part of The Cochrane Collaboration.

Methods Groups may also elect to adopt one or more additional core functions:

• Providing training.
• Hosting a network of Cochrane Review Group-based methods individuals.
• Providing peer review.
• Providing specialist advice.
• Contributing to new products or lines of activity.
• Contributing to software development.
• Conducting Cochrane Methodology reviews.
• Contributing to the Cochrane Methodology Register.
• Helping to monitor and improve the quality of Cochrane reviews.
• Conducting methodological research.
• Communicating Cochrane methodology to external organizations.

Additional core functions are adopted in consultation with the Methods Executive, to reflect the needs of the Collaboration and the aims, scope and resources of the Methods Group, and are reviewed biennially. We anticipate that the current transition to this new core functions framework will appear seamless to other Cochrane entities, underpinned by effective communication and the new Cochrane Methods infrastructure. Further details can be found in The Cochrane Policy Manual, Section 3.5 (www.cochrane.org/policy-manual/welcome).

Training Working Group

Steve McDonald and Phil Wiffen
Co-Convenors, Training Working Group

Correspondence to: [email protected]
Australasian Cochrane Centre, Monash Institute of Health Services Research, Australia.

Training and support for people engaged in all aspects of the preparation and maintenance of Cochrane reviews underpins The Cochrane Collaboration's primary purpose of preparing high quality evidence for decision makers and is essential for the Collaboration's long-term sustainability. Preparing a Cochrane review involves many people (authors, editorial staff, methods experts, consumers, etc.) and requires multiple competencies and skills. Until recently, most training within the Collaboration has focused on authors and been delivered through face-to-face workshops. But as the number and geographic distribution of authors increase and reviews become more complex to prepare and support, there is an urgent need to provide a greater range of training and support opportunities to the various groups of people involved. These trends, coupled with better access to new technologies for delivering training and support, have highlighted the need for a Collaboration-wide approach to determining training priorities and developing appropriate strategies.

The Training Working Group (TWG) has expanded its remit to support everyone involved in preparing and maintaining Cochrane reviews (not just authors) and was given the responsibility by the Cochrane Collaboration Steering Group for developing and implementing a Collaboration-wide training strategy. In April 2010, the TWG met in Oxford to discuss the contents of the training strategy and to identify priority projects. Leading up to the meeting, we identified the competencies and skills required to carry out the various tasks involved in preparing reviews (from title registration to publication), and mapped these to existing training and support.

A full report and funding proposal is being prepared for the Steering Group for consideration later in 2010. Some of the key projects to emerge from the meeting that are likely to feature in the training strategy include:

• Better explanatory information about what is involved in doing Cochrane reviews (linking with the work on minimum competencies for review author teams).

• Expansion of the Online Learning Resources to include additional core topics and new specialized topics.

• Continued development of the Standard Author Training Materials to include specialized topics, multimedia resources and translations.

• Use of webinars to supplement online and face-to-face training.

• Collaboration with the Methods Application and Review Standards Working Group to develop training materials based around minimum review standards and common errors (for authors and editorial teams).

• Expansion of the successful Managing Editors' induction and mentoring scheme to other entity staff (Trials Search Co-ordinators, Field administrators, etc.).

• Development of training packages for technical editing, publication ethics, peer reviewing and consumer refereeing.

As part of its co-ordination function, the TWG will be responsible for setting up a dedicated training website and a trainers' network, and for advising on tools and technologies to facilitate training and support initiatives.

The involvement of Methods Groups in these proposed training projects will be critical, particularly as more advanced and specialized training is made available through webinars, slidecasts and practical examples. Several Methods Groups are currently contributing to the Standard Author Training Materials and it is proposed that the Methods Board will have an oversight role in approving training resources that will be made available through the training website.

Risk of Bias tool evaluation: process, survey results and recommendations

Jelena Savovic

Correspondence to: [email protected]
Department of Social Medicine, University of Bristol, UK.

Version 5 of the Cochrane Handbook for Systematic Reviews of Interventions introduced the Risk of Bias (RoB) assessment tool, which represents a more comprehensive undertaking than previously recommended procedures for assessing study quality. The Bias Methods Group, funded by the Cochrane Opportunities Fund, has conducted an evaluation of the RoB tool. This aimed to obtain feedback from a range of stakeholders within The Cochrane Collaboration regarding their experiences with, and perceptions of, the RoB tool and associated guidance materials, and to recommend improvements where necessary.

We used qualitative and quantitative methods to conduct our evaluation. In the first stage, four semi-structured focus groups were held with 25 participants, both by telephone and at the Cochrane Colloquium in Singapore in October 2009.


Results from the focus groups informed development of two questionnaires that were used in online surveys (distributed through established Cochrane mailing lists) of review authors and of Managing Editors and other Cochrane Review Group staff.

We received 190 responses from review authors who had used the RoB tool, 132 from authors who had not, and 58 from Cochrane Review Group staff. RoB assessments were reported to take an average of 10 to 60 minutes per study to complete: 83% of respondents deemed this acceptable. Most respondents thought that RoB assessments were better than past approaches to trial quality assessment. Most authors liked the standardized approach (81%) and the ability to provide quotes to support judgments (74%). About a third of participants did not like the increased workload, and found the wording describing judgments of RoB to be unclear. Most authors (75%) thought availability of training materials was sufficient, but many expressed an interest in online training.

Following the surveys, we held a meeting in Cardiff on 1 March 2010 of Cochrane methodologists, review authors, Managing and Co-ordinating Editors, and the Cochrane Editorial Unit, just before the UK- and Ireland-based Cochrane Contributors' Meeting. They discussed the findings from the focus groups and online surveys, and developed draft recommendations for improvements to the RoB tool. The main recommendations include:

Immediate to short term:

• Change the wording of bias judgments from 'yes/no' to 'low/high risk of bias'.

• Introduce category headings for selection, performance and detection, attrition, reporting, and other bias.

• Manually split the assessment of blinding into (i) participants and personnel, and (ii) outcome assessment.

• Clarify guidance, particularly for incomplete outcomes, selective outcome reporting, and 'other sources of bias'.

• Produce clearer and more explicit guidance on decision-making for incorporation of risk of bias assessments into meta-analyses.

Medium term (implementation with Review Manager 6 or later):

• Structurally split the assessment of blinding into blinding of participants and personnel (performance bias heading) and blinding of outcome assessment (detection bias heading).

• Weight RoB graphs by study size.

• Provide an algorithm for reaching a summary assessment of risk of bias per study/outcome.

• Develop online guidance and training materials, including an online frequently asked questions bank and examples of assessments.

The evaluation team hopes that these changes will make it easier for authors to use the RoB tool, improve the reliability of assessments and improve the quality of Cochrane reviews.
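To make the recommended structure concrete, the sketch below shows one simple way the per-domain judgments and supporting quotes described above could be recorded. This is only an illustration under our own assumptions: it is not the RevMan data model nor the evaluation team's implementation, and the class and field names are hypothetical.

# Illustrative sketch only: recording per-domain Risk of Bias judgments
# using the category headings and low/high/unclear wording discussed above.
# Not the RevMan data model or the Bias Methods Group's implementation.
from dataclasses import dataclass, field
from enum import Enum

class Domain(Enum):
    SELECTION = "selection bias"
    PERFORMANCE = "performance bias"   # blinding of participants and personnel
    DETECTION = "detection bias"       # blinding of outcome assessment
    ATTRITION = "attrition bias"
    REPORTING = "reporting bias"
    OTHER = "other bias"

class Judgement(Enum):
    LOW = "low risk of bias"
    HIGH = "high risk of bias"
    UNCLEAR = "unclear risk of bias"

@dataclass
class DomainAssessment:
    domain: Domain
    judgement: Judgement
    support_quote: str = ""            # quote from the trial report supporting the judgement

@dataclass
class TrialRiskOfBias:
    trial_id: str
    assessments: list[DomainAssessment] = field(default_factory=list)

# Hypothetical usage:
rob = TrialRiskOfBias("Trial-001", [
    DomainAssessment(Domain.DETECTION, Judgement.HIGH,
                     "Outcome assessors were aware of group allocation."),
])
print(rob)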

Core reporting of outcomes in effectiveness trials

Paula Williamson, Jane Blazeby, Doug Altman and Mike Clarke

Correspondence to: [email protected]
Centre for Medical Statistics and Health Evaluation, University of Liverpool, UK.

Everyone who embarks on a systematic review has probably experienced the frustration of carefully planning the outcomes to be sought and analysed, only to find that the original researchers either did not measure these outcomes or measured them in such different ways that it will be difficult or impossible to compare, contrast or combine them. People trying to use the findings of individual randomized trials are also hampered by inconsistencies in the patient outcomes assessed across the different studies. Indeed, trialists themselves often struggle with identifying the outcomes to collect which would be of most value to eventual users of their research.

One way to address all these difficulties would be through the adoption of an agreed minimum set of core outcomes for each medical condition.1 Consistent measurement and reporting of these core outcomes in all clinical trials would reduce the potential for selective outcome reporting, since trial reports should focus on presenting their findings for the core outcomes. Core outcomes would make it easier to compare results across different studies and would enhance systematic reviews and the combination of results in meta-analyses. Statistical power would be increased and the potential for bias in the overall estimates reduced, because fewer studies would have to be omitted.

Over the last couple of decades, several groups have been working on core outcome sets in specific areas of health care, including rheumatology, pain and maternity care. In January 2010, the COMET (Core Outcome Measures in Effectiveness Trials) initiative was launched at a meeting in Liverpool to try to encourage, highlight and facilitate such activities more widely. More than 100 people, including those working on core outcome sets, journal editors, regulators, consumers, clinicians, policy makers, trial funders, trialists and systematic reviewers, discussed what had already been achieved and the opportunities for the future. The presentations are available from the COMET website (see below). The meeting was made possible by funding from the MRC North West Hub for Trials Methodology Research, and we are now planning a second meeting, in Bristol in early 2011.

The COMET initiative is an international network bringing together individuals and organizations interested in the development, application and promotion of core outcome sets. We aim to collate relevant resources, both applied and methodological, facilitate exchange of ideas and information, and foster methodological research. Further information about COMET can be found at the website www.liv.ac.uk/nwhtmr/ and we should be delighted to hear from anyone interested in this topic. The website will include examples of a matrix of outcomes that could be used within systematic reviews.2 We encourage authors of Cochrane reviews to consider including these in their reviews and, if they wish, to send them to us for the collection of examples. Where a core outcome set has been established in their topic area, Cochrane authors might also wish to draw attention to the use of these outcome measures within their 'Implications for research'.


References
1. Clarke M. Standardising outcomes for clinical trials and systematic reviews. Trials 2007; 8: 39.
2. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 2010; 340: c365.

Priority setting in The Cochrane Collaboration

Mona Nasser

Correspondence to: [email protected]
Institute for Quality and Efficiency in Health Care, Cologne, Germany.

Identifying and prioritizing key topics is crucial to the relevance and applicability of Cochrane reviews for end-users. There have been several attempts within The Cochrane Collaboration to improve the methods used to identify topics for future Cochrane reviews. In a 2008 survey, 29 Cochrane entities reported having made attempts to inform the selection or prioritization of topics for Cochrane reviews.1

At a meeting in 2006, the need for a more strategic approach to improve the prioritization process was recognized by the Cochrane Collaboration Steering Group. This led to the creation of the Cochrane Prioritization Fund, which funded five initiatives to explore prioritization in the production and updating of Cochrane reviews.

The five projects were:

• Delivering on priorities: developing and implementing effective collaboration between a Cochrane Review Group and a Cochrane Field.

• Using practice guidelines to determine review priorities: a pilot project.

• Prioritization of Cochrane reviews for consumers and the public in low- and high-income countries as a way of promoting evidence-based health care.

• Prioritizing Cochrane review topics to reduce the know-do gap in low- and middle-income countries.

• Piloting and evaluation of a patient-professional partnership approach to prioritizing Cochrane reviews and other research.

The findings from these five projects were presented and discussed during a special session at the Cochrane Colloquium in Singapore in October 2009.2 The projects have shown that there are different approaches to identifying and prioritizing topics. Although it is difficult to define one single approach to prioritizing topics for the whole Cochrane Collaboration, central guidance might support and harmonize these activities and avoid duplication of effort. Some of the prioritization research teams have got together and intend to publish a series of articles in the near future. These projects have also shown that there is still a lot of uncertainty regarding the best methods to set a research agenda for Cochrane reviews. Therefore, we have also proposed a new Cochrane Methods Group on prioritization and agenda setting for Cochrane reviews that will work with the James Lind Alliance (www.lindalliance.org) to fill this gap.

References

1. Nasser M, Welch V, Tugwell P, Ueffing E, Doyle J, Waters E. Ensuring relevance for Cochrane reviews: evaluating the current processes and methods for prioritizing Cochrane reviews (unpublished 2010).
2. Jones L. The Prioritisation Fund. Cochrane News 2009; 47: 4–5.


Published Methodological Research

The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews

Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR. BMJ 2010; 340: c365.

Structured Abstract

Background: Selective reporting bias in a study is defined as the selection, based on the study results, of a subset of analyses to be reported.
Objective: To examine the prevalence of outcome reporting bias (the selection for publication of a subset of the original recorded outcome variables on the basis of the results) and its impact on Cochrane reviews.
Design: A nine-point classification system for missing outcome data in randomized trials was developed as part of the Outcome Reporting Bias in Trials (ORBIT) study and applied to a cohort of new Cochrane reviews published in The Cochrane Library (Issue 4, 2006 to Issue 2, 2007). Researchers who conducted the trials included in the Cochrane reviews were contacted and the reason sought for the non-reporting of data. A sensitivity analysis was undertaken to assess the impact of outcome reporting bias on reviews that included a single meta-analysis of the review's primary outcome.
Main results: One hundred and fifty-seven of the 283 (55%) Cochrane reviews assessed did not include data from all eligible trials addressing the primary outcome in the review. The median amount of missing data (missing for any reason) was 10%, whereas 50% or more of the potential data were missing in 70 (25%) reviews. Of the 2562 reports of randomized trials included in the 283 Cochrane reviews, the researchers of 155 (6%) trials measured and analysed the primary outcome for the review but did not report, or only partially reported, the results. For reports that did not mention the primary outcome for the review, the ORBIT classification scheme identified the presence of outcome reporting bias with a sensitivity of 88% (95% CI 65% to 100%) and specificity of 80% (95% CI 69% to 90%), on the basis of responses from 62 trialists. A third of Cochrane reviews (96/283 (34%)) contained at least one trial with a high suspicion of outcome reporting bias for the primary outcome of the review. In a sensitivity analysis undertaken for 81 reviews with a single meta-analysis of the primary outcome of interest, the treatment effect estimate was reduced by 20% or more in 19 (23%) Cochrane reviews. Of the 42 meta-analyses with a statistically significant result, eight (19%) became non-significant after adjustment for outcome reporting bias and 11 (26%) would have overestimated the treatment effect by 20% or more.
Conclusions: Outcome reporting bias is an under-recognised problem that affects the conclusions in a substantial proportion of Cochrane reviews. People conducting systematic reviews need to address explicitly the issue of missing outcome data for their review in order for it to be considered a reliable source of evidence. Extra care is required during data extraction, and review authors should identify when a trial reports that an outcome was measured but no results were reported or events observed.

Commentary

Prepared by Isabelle Boutron

Correspondence to: [email protected]
Departement d'Epidemiologie, Biostatistique et Recherche Clinique, Universite Paris, France.

Chan and colleagues published the first empirical investigation of outcome reporting bias, defined as a selective reporting of favorable outcomes in published studies.1 Several methodological studies have since been published.2–4 These studies showed that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (odds ratio 2.2 to 4.7).3 Discrepancies between the primary outcome published and that reported in the protocol were identified in 40% to 62% of studies.3 When comparing publication data with data recorded in trial registries, discrepancies in primary outcomes were identified in 31% of the studies, mainly favouring statistically significant results.4 Considering the importance of selective reporting bias, a specific item is now dedicated to 'selective outcome reporting' in the Risk of Bias tool of The Cochrane Collaboration.

Kirkham and colleagues evaluated an unselected cohort of Cochrane reviews published in 2006 and 2007 to estimate the prevalence and impact of selective reporting of outcomes. For about one-third of the reports of randomized trials evaluated, the review's primary outcome was partially reported or not reported. Among these reports, half were highly suspected to contain outcome reporting bias. More than one-third of the reviews contained at least one randomized trial report with high suspicion of outcome reporting bias for the review's primary outcome. Furthermore, sensitivity analyses showed that outcome reporting bias had an important impact on meta-analyses, with about 20% of the statistically significant meta-analysis results becoming non-significant after adjusting for outcome reporting bias.


The Outcome Reporting Bias in Trials (ORBIT) project published by Kirkham and colleagues also offers a new classification scheme that could help systematic reviewers identify selective reporting bias. This tool, developed and evaluated by the authors, considers nine different categories classified under four headings related to whether it was (1) clear that the outcome was measured and analyzed, (2) clear that the outcome was measured, (3) unclear whether the outcome was measured, and (4) clear that the outcome was not measured. For each category, review authors should determine whether the risk of bias is null, low or high.

This tool is a very important step for review authors and has advantages in evaluating selective reporting bias without the need to examine the trial protocol. However, use of this scheme relies on the review authors' subjectivity, and further research is needed to evaluate the reproducibility of this classification scheme and develop training materials.
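As a minimal sketch of how a review team might record ORBIT-style judgments per trial and outcome, consider the following. It models only the four headings and the null/low/high risk levels described above; the nine detailed ORBIT categories are not enumerated, and all class and field names are our own illustrative assumptions, not the published tool.

# Illustrative sketch only: recording an ORBIT-style judgment for one
# trial/outcome pair. Not the ORBIT classification tool itself.
from dataclasses import dataclass
from enum import Enum

class Heading(Enum):
    MEASURED_AND_ANALYSED = 1   # clear the outcome was measured and analysed
    MEASURED = 2                # clear the outcome was measured
    UNCLEAR = 3                 # unclear whether the outcome was measured
    NOT_MEASURED = 4            # clear the outcome was not measured

class Risk(Enum):
    NULL = "null"
    LOW = "low"
    HIGH = "high"

@dataclass
class OutcomeReportingAssessment:
    trial_id: str
    outcome: str
    heading: Heading
    risk: Risk
    notes: str = ""

# Hypothetical usage:
assessment = OutcomeReportingAssessment(
    trial_id="Trial-042", outcome="pain at 12 months",
    heading=Heading.MEASURED, risk=Risk.HIGH,
    notes="Outcome listed in the methods but no results reported.")
print(assessment)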

References

1. Chan AW, Hrobjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 2004; 291: 2457–65.
2. Chan AW, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ 2005; 330: 753.
3. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008; 3(8): e3081.
4. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009; 302: 977–84.

Risk of bias versus quality assessment of randomised controlled trials: cross sectional study

Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Krebs Seida J, Klassen TP. BMJ 2009; 339: b4012.

Background: The methodological quality of studies included in a systematic review can have a substantial impact on the results of a review and the validity of its conclusions.
Objective: To evaluate the inter-rater agreement and validity of The Cochrane Collaboration Risk of Bias tool for assessing the internal validity of randomized trials, compared with the Jadad Scale and the Schulz approach to assessing allocation concealment, and to assess the relationship between risk of bias and effect estimates.
Design: This was a cross sectional study of a sample of 163 reports of randomized trials in child health. Two reviewers independently assessed trials using the Risk of Bias tool; one reviewer assessed trials using the Jadad and Schulz approaches. The inter-rater agreement between reviewers assessing trials with the Risk of Bias tool was measured (weighted kappa), and the time taken to apply the tool was compared with the other approaches to quality assessment. The degree of correlation between overall risk of bias and the overall quality scores was assessed, and the magnitude of effect estimates was compared for studies classified as being at high, unclear, or low risk of bias.
Main results: The inter-rater agreement between reviewers on individual domains of the Risk of Bias tool ranged from slight (kappa = 0.13) to substantial (kappa = 0.74) depending on the domain assessed. The mean time taken to complete the tool was significantly longer than for the Jadad Scale and Schulz approach: the average time taken for one reviewer to complete the tool was 8.8 minutes (SD 2.2) per study, compared with 0.5 minutes (SD 0.3) for the Schulz approach and 1.5 minutes (SD 0.7) for the Jadad approach. There was low correlation between overall risk of bias and the Jadad scores (P = 0.395) and the Schulz approach (P = 0.064). Effect sizes differed between studies assessed as being at high or unclear risk of bias compared with those at low risk.
Conclusions: The inter-rater agreement varied across domains of the Risk of Bias tool. Generally, agreement was poorer for those items that required more judgment. There was low correlation between assessments of overall risk of bias and the Jadad Scale and Schulz approach to allocation concealment. Overall risk of bias, as assessed by the Risk of Bias tool, differentiated effect estimates, with more conservative estimates for studies at low risk.
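The agreement figures above are weighted kappa statistics. As a rough, self-contained illustration of what such a statistic measures (this is not the authors' analysis, and the ratings below are invented), one might compute it with scikit-learn as follows:

# Illustrative only: computing a weighted kappa for two raters' ordinal
# risk-of-bias judgments on a small set of trials. Data are hypothetical.
from sklearn.metrics import cohen_kappa_score

# One Risk of Bias domain, one judgment per trial, from two independent raters.
rater_1 = ["low", "low", "unclear", "high", "low", "unclear", "high", "low"]
rater_2 = ["low", "unclear", "unclear", "high", "low", "low", "high", "unclear"]

# Linear weights penalize a low-vs-high disagreement more heavily than a
# low-vs-unclear one; the labels argument fixes the ordinal ordering.
kappa = cohen_kappa_score(
    rater_1, rater_2, labels=["low", "unclear", "high"], weights="linear"
)
print(f"Weighted kappa: {kappa:.2f}")  # 0 is chance-level agreement, 1 is perfect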

Commentary

Prepared by Lise Lotte Gluud

Correspondence to: [email protected]
Department of Internal Medicine, Gentofte University Hospital, Hellerup, Denmark.

The interesting study by Hartling and colleagues published in the BMJ is most definitely worth reading. The study highlights a potential problem for systematic reviewers and those who read and rely on systematic reviews. Part of the strength of Cochrane reviews is that they provide a thorough assessment of the quality of the available evidence. There is increasing evidence to support the importance of this assessment, but we still have not reached complete agreement on how exactly to perform the assessment of quality or how to deal with low quality trials.

In line with previous research,1 Hartling and colleagues found a low correlation between quality scores. The findings support recommendations to base quality assessments on components rather than composite scores. Likewise, the inter-rater agreement varied even for this group of experienced researchers, even though a guideline for how to perform the quality assessment was developed and applied systematically. Accordingly, additional research is necessary to improve the quality assessment of randomized trials. Continued improvement of the Cochrane Handbook for Systematic Reviews of Interventions is also essential. Furthermore, it may be argued that since the quality assessment of trials is not (yet) perfect, exclusion of randomized trials defined as having low quality of bias control is debatable. Since the extent and direction of the influence of bias vary in different research areas, tools to help weigh trials according to the quality of bias control may be a feasible way forward.

Reference
1. Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999; 282: 1054–60.


Retrieving randomized controlled trials from MEDLINE: a comparison of 38 published search filters

McKibbon KA, Wilczynski NL, Haynes RB, the Hedges Team. Health Information and Libraries Journal 2009; 26: 187–202.

Background: People search MEDLINE for trials of healthcare interventions to inform clinical decisions, or to produce systematic reviews, practice guidelines, or technology assessments. Finding all relevant randomized trials to address a specific question can be challenging due to the large volume of available articles and the imperfections of indexing in bibliographic databases.
Objective: To provide comparative data on the sensitivity, specificity and precision of search filters designed to retrieve randomized trials from MEDLINE.
Design: Search filters designed to identify reports of randomized trials in MEDLINE were found by searching PubMed (1991 to May 2008), bibliographies of published papers and the internet. A test database of reports of randomized trials, compiled from handsearching 161 clinical journals indexed in MEDLINE, was used to calculate the sensitivity, specificity and precision of each search filter.
Main results: Thirty-eight search filters were identified for use in MEDLINE. The number of terms and the sensitivity, specificity and precision of each search filter varied considerably. Twenty-four of the 38 search filters had statistically significantly higher sensitivity than retrieval using the single term randomized controlled trial.pt. (sensitivity 93.7%); six of the 24 filters had a sensitivity of at least 99%. Four other filters had specificities (non-retrieval of non-randomized trials) that were either no different from or better than the single term randomized controlled trial.pt. (specificity 97.6%). Precision (the proportion of retrieved articles that were randomized trials) was poor: only two filters had precision statistically similar to that of the single term (56.4%); all others were lower. Search filters with more search terms and high sensitivity often had lower specificity.
Conclusions: A number of search filters to identify reports of randomized trials in MEDLINE exist. The data in this study will allow users to identify the most appropriate search filter to suit their information needs, depending on the simplicity of searching desired and the performance required.

Commentary

Prepared by Julie Glanville and Carol Lefebvre

Correspondence to: [email protected]
York Health Economics Consortium Ltd, University of York, UK.

This paper reports on the performance of 38 search filters to identify randomized controlled trials (RCTs) in MEDLINE. The filters were tested using a gold standard set of 1587 reports of RCTs identified from 161 MEDLINE-indexed journals handsearched for the year 2000. The most sensitive single-term filter was randomized controlled trial.pt., with 93.7% sensitivity and 56.4% precision (Number Needed to Read [NNR] = 1.77). Six other filters offered sensitivity of 99% or over, but with considerable reductions in precision (range: 5.6% to 10%; NNR range: 10 to 18). The authors offer a look-up table, ranked by sensitivity, so that searchers can select filters which meet their specific sensitivity and precision requirements.

This type of study offers valuable performance data: many filters are validated only within their own gold standards, and the performance of some filters has never been tested. The more additional validation that occurs, in particular of a range of similar filters across one or more gold standards, the clearer the picture of search filter performance and the easier it may become to select the filter that meets specific needs. The findings of this performance assessment broadly support a previous performance assessment.1
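For readers less familiar with these metrics, the sketch below sets out the standard definitions and how NNR follows from precision. The counts are hypothetical, chosen only so that sensitivity and precision roughly match the single-term figures quoted above; the true-negative count is arbitrary and is not intended to reproduce the published specificity.

# Illustrative sketch only: standard definitions behind filter performance
# figures (sensitivity, specificity, precision, Number Needed to Read).
def filter_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp: relevant (RCT) records retrieved; fp: irrelevant records retrieved;
    fn: relevant records missed; tn: irrelevant records correctly excluded."""
    sensitivity = tp / (tp + fn)   # proportion of all RCT records retrieved
    specificity = tn / (tn + fp)   # proportion of non-RCT records excluded
    precision = tp / (tp + fp)     # proportion of retrieved records that are RCTs
    nnr = 1 / precision            # records screened per relevant record found
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "NNR": nnr}

# A precision of 0.564 implies an NNR of about 1.77 (1 / 0.564), as quoted above.
print(filter_performance(tp=564, fp=436, fn=38, tn=5000))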

The finding that randomized controlled trial.pt. is a relatively sensitive search term to identify reports of RCTs in MEDLINE is valuable corroboration that it is now easier to identify such records in MEDLINE. This reflects the successful efforts of the US National Library of Medicine (NLM) and The Cochrane Collaboration over the last 15 years to achieve better access to such reports. Which filter to use to identify the trials missed by the term randomized controlled trial.pt. will depend on review authors' requirements with respect to levels of sensitivity and their tolerance for the resulting trade-off in precision/specificity. For example, the 'Clinical Queries sensitive' filter developed by the study authors has 99.2% sensitivity, 70.1% specificity and 10% precision, whereas the 'Cochrane Highly Sensitive Search Strategy, sensitivity-maximizing version', from the Cochrane Handbook for Systematic Reviews of Interventions (referred to in the paper as HSS-Sensitive), has a slightly lower sensitivity of 98.4% but higher specificity (77.9%) and precision (13%).

The factors which inform the selection of search filters are a largely unexplored process, which may be assisted by the use of critical appraisal tools such as the UK InterTASC Information Specialists Sub-Group Search Filter Appraisal Checklist (www.york.ac.uk/inst/crd/intertasc/critical appraisal search filters.htm).2 Such checklists provide structured assessments of the focus and methods used to develop and validate search filters and can also usefully be applied to studies such as this. For example, the ISSG Search Filter Appraisal Checklist suggests that the development of the gold standard used for performance testing should be evaluated. The gold standard used in this study was developed by extensive handsearching. The journals which were selected for handsearching, however, were those with a high impact factor. The titles and abstracts of those journals may, therefore, be of higher quality in terms of reporting research methods clearly than journals which have lower impact factors. This, in turn, would make those records easier for an indexer to identify and index correctly and easier to identify with free-text search terms. If that is the case, then the search filters may over-perform for high impact factor journals and the true sensitivity of the filters may be lower than reported in this study.

The authors also noted that their own search filters (the Clinical Queries RCT strategies) might over-perform in this gold standard because they were developed using this gold standard and so might be expected to perform well. In addition, the records in the gold standard are all indexed in MEDLINE, so the performance of the filters in un-indexed records, e.g. in-process records, is unknown.

The reports selected for the gold standard had to meet the authors' definition of a randomized controlled trial. This definition includes random allocation of patients and requires that outcomes be reported for at least 80% of participants, with the analysis consistent with the study design. The single-term filter 'randomized controlled trial.pt.' had a precision of only 56.4% in searching this gold standard. This indicates that nearly 44% of the records retrieved by this term, all of which had been indexed by NLM indexers as RCTs, were considered not to be RCTs according to the definition used by the study authors. The authors' definition may differ from that used when selecting trials for Cochrane reviews. In that case, the filters' performance when used for Cochrane reviews may be different to that presented in this study.

All the records in the gold standard were published in 2000. It is unclear to what extent the findings of this study might be generalisable to searches to retrieve more recent or older records, as styles of reporting change over time, in particular with respect to initiatives such as CONSORT and guidance on use of the word 'randomized' in the title or abstract of a journal article.

The authors also note that 'although indexing vocabulary changes over time, our careful review of changes that could affect classification of RCTs showed that few pertinent adjustments occurred since 2000.' They do not comment on the change in NLM indexing policy, introduced in 2006, to cease 'redundant indexing' of RCTs and Controlled Clinical Trials with the additional 'parent' term Clinical Trial, and the effect this may have had on the performance of the filters they tested.3 Similarly, they do not comment on the fact that some, but not all, of the filters included various techniques to exclude animal studies and how this might affect comparative performance.

There are additional issues which need to be considered before adopting a specific filter. The authors of the study note that they have translated or adapted some of the filters from different interfaces. This translation process is not described in detail, so before adopting a strategy the translation should be checked against the original version. Incorrect translation or adaptation may impact on performance. Search filters are designed very specifically and translations need to take this into account carefully.

This study is broadly useful, but the choice of search filters for use in Cochrane reviews needs to be informed by an assessment of the relevance of the gold standard to these reviews and of any adaptations made to the filters by the authors. The development of further gold standards, which more closely reflect the eligibility criteria for studies for inclusion in Cochrane reviews, could be helpful.

Declarations of interest: the authors of this commentary are authors of some of the search filters tested in this study, and of filters which were used as a basis for some of the other filters.

References
1. Glanville JM, Lefebvre C, Miles JN, Camosso-Stefinovic J. How to identify randomized controlled trials in MEDLINE ten years on. [Erratum in Journal of the Medical Library Association 2006; 94: 354]. Journal of the Medical Library Association 2006; 94: 130–6.
2. Glanville J, Bayliss S, Booth A, Dundar Y, Fernandes H, Fleeman ND, Foster L, Fraser C, Fry-Smith A, Golder S, Lefebvre C, Miller C, Paisley S, Payne L, Price A, Welch K, on behalf of the InterTASC Information Specialists' Sub-Group. So many filters, so little time: the development of a search filter appraisal checklist. Journal of the Medical Library Association 2008; 96: 356–61.
3. Tybaert S. MEDLINE/PubMed End-of-Year Activities. NLM Technical Bulletin 2005; 346: e4.

Systematic reviews of low back pain prognosis had variable methods and results: guidance for future prognosis reviews

Hayden JA, Chou R, Hogg-Johnson S, Bombardier C. Journal of Clinical Epidemiology 2009; 62: 781–96.

Background: Systematic reviews of prognostic research can help clinicians educate patients and can be used to target specific interventions to modify prognostic factors. However, there is limited guidance available on how to conduct systematic reviews of prognosis, and their design and conduct can vary substantially.

Objective: To identify, describe, and synthesise systematic reviews of low back pain prognosis, and to explore the potential impact of review methods on the conclusions of the review.

Design: MEDLINE, EMBASE and CINAHL were searched (up to June 2007) to identify systematic reviews of prognosis of low back pain. One reviewer extracted information from each included review on characteristics of the review question, review methods and analysis; a second reviewer checked the extracted data. Two reviewers independently assessed review quality.

Main results: Seventeen systematic reviews of prognosis of low back pain were identified. The review questions and selection criteria varied and included both focused and broad reviews of prognostic factors. A quarter of reviews did not clearly define search strategies. The number of included prognosis studies per review ranged from three to 32 (median 17; interquartile range 10 to 22). Seventy percent of reviews assessed the quality of the included studies, but assessed a median of only four out of six potential biases. All reviews reported associations based on statistical significance, using a variety of strategies for syntheses. Only a small number of important prognostic factors were consistently reported: older age, poor general health, increased psychological or psychosocial stress, poor relations with colleagues, physically heavy work, worse baseline functional disability, sciatica, and the presence of compensation. There were discrepancies across reviews in selection criteria, which influenced the studies included, and various approaches to data interpretation influenced the review conclusions about evidence for specific prognostic factors.

Conclusions: There is an immediate need for methodological research in the area of prognosis systematic reviews. Due to methodological shortcomings in the primary and review literature, there remains uncertainty about the reliability of conclusions regarding prognostic factors for low back pain.

Commentary

Prepared by Katrina Williams

Correspondence to:[email protected]’s andChildren’sHealth, UniversityofNewSouthWales, Australia.

Hayden and colleagues conducted a review of systematic reviews about the prognosis of low back pain to examine the potential impact of review methods on findings. To do this, they used sound methods for conducting a review of systematic reviews and applied current best practice for assessing the quality of the included prognosis systematic reviews. The collective expertise of the authors in this content area also allowed them to assess the clinical nuances of the population included, outcomes used and associations or prognostic factors reported.

This study is important in relation to primary prognosis studies and prognosis systematic reviews across all health topics. Currently there is renewed interest in prognosis research and evidence on prognosis.1 As treatments and diagnostic tests have developed, it has become increasingly clear to clinical and policy decision-makers that prognosis and prognostic factors (factors that are associated with or influence outcome) underpin all decisions about investigation pathways and intervention choices. To guide decision-making, high quality, transparently synthesised evidence about prognosis and prognostic factors will need to become an integral part of diagnostic test and intervention research as we move towards assessing 'clinical utility' of health evidence.2

Hayden and colleagues identified considerable heterogeneity in the systematic review methods across the 17 reviews of low back pain prognosis, leading to variation in the studies included in each review. Although no individual factors were identified that clearly influenced systematic review findings, the degree of variability speaks clearly to the need for reporting standards for primary prognosis studies, guidance on methods for systematic reviews of prognosis, and further methodological work in the field. As such, the research by Hayden and colleagues is relevant to work currently underway internationally in areas described under 'research framework' on the Prognosis Methods Group's website (http://prognosismethods.cochrane.org/whats-new).

The review of reviews by Hayden and colleagues also presents findings that resonate with findings of reviews of systematic reviews of interventions, where variations in systematic review methodology and risk of bias of included studies, as well as the presentation of information about different populations, with different outcomes and treatments, present problems for interpretation and application of 'evidence'. Experience from prognosis, diagnostic test and intervention reviews of systematic reviews may come together 'as more than the sum of the parts' in working towards solutions to this problem.

References
1. Hemingway H. Prognosis research: why is Dr. Lydgate still waiting? Journal of Clinical Epidemiology 2006; 59: 1229–38.
2. Simon RM, Paik S, Hayes DF. Use of archived specimens in evaluation of prognostic and predictive biomarkers. Journal of the National Cancer Institute 2009; 101: 1446–52.

Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews

Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG. BMJ 2009; 338: b1147.

Background: It is generally accepted that evidence from well designed head-to-head randomized trials provides the most rigorous and valid research evidence on the relative effects of different interventions. However, evidence from these trials is often limited or not available, and evidence from indirect comparisons may be necessary.

Objective: To determine which methods have been used for indirect comparisons in systematic reviews of competing healthcare interventions and to identify any methodological problems in these applications.

Design: Systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used were identified by searching PubMed (up to October 2008). Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. One reviewer extracted data and a second reviewer checked each study.

Main results: Eighty-eight systematic reviews involving indirect comparisons were identified. In 13 of the 88 reviews the indirect comparison was informal, with no calculation of relative effects or testing for statistical significance. In six reviews, results from different trials were compared without using a common treatment control. Forty-nine reviews used an adjusted indirect comparison using classic frequentist methods and 18 reviews used more complex methods. The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18 out of 30 reviews). Evidence from head-to-head comparison trials was not systematically searched for or was not included in nine reviews.

Conclusions: Identified methodological problems included an unclear understanding of the underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of the basic assumptions underlying indirect and mixed treatment comparisons is crucial to resolve these methodological problems.

Commentary

Prepared by Kristian Thorlund, Milo Puhan, Jason Busse and Gordon Guyatt

Correspondence to: [email protected] Trial Unit, Copenhagen University Hospital,Denmark.

Song and colleagues have done an impressive job assessing indirect and multiple treatment comparisons published up to 2007. Their recommendations are both intelligent and intelligible, and will be highly valuable to systematic review authors embarking on conducting indirect and multiple treatment comparisons. However, as this field is evolving rapidly, some additions may be needed to their recommendations. Most recent multiple treatment comparisons have employed Bayesian methods that facilitate ranking of treatments according to their relative efficacy and safety profile. Presumably, this trend will continue. Since treatment rankings will undoubtedly influence health policy, it is important that recommendations are made to ensure validity and adequate reporting of treatment rankings.

There are a number of limitations associated with treatment rankings that review authors should be aware of. A recent commentary on a multiple treatments meta-analysis of 12 new-generation antidepressants pointed out that the exclusion of all placebo-controlled trials in the original publication had a dramatic impact on the treatment ranking.1 Although more research is needed, this example illustrates how sensitive treatment rankings can be to the systematic review inclusion/exclusion criteria. Other well-known limitations are the risks of overestimation (or underestimation) due to bias and imprecision.2–5 Including an overestimated comparative treatment effect in a multiple treatment comparison will cause other treatments to receive an artificially lower relative ranking (and vice versa for underestimated comparative effects). Estimates of heterogeneity may also be over- or underestimated due to bias or imprecision.6,7 In this case, credible intervals may become artificially wide or narrow, and thus distort treatment rankings. Lastly, Bayesian analysis necessitates that priors are elicited for the heterogeneity parameter. Although the conventionally used truncated flat prior (the uniform distribution) is believed to be a 'vague' prior, this may not always be true when only a few trials are available per comparison.8

To ensure that adequate inferences are drawn from treatment rankings, authors should disclose sensitivity to inclusion/exclusion criteria, to high risk of bias trials and comparisons, and to the choice of prior distribution. Treatment rankings should always be interpreted according to the overall risk of bias and imprecision. To assess the overall risk of bias, one can follow the criteria outlined in the GRADEprofiler9 for each direct comparison and assume that 'the chain of evidence is no stronger than its weakest link.' To interpret treatment rankings in relation to precision, one should present treatment rankings with treatment effect point estimates and credible intervals. This can be achieved by presenting treatment rankings, direct, indirect and pooled (combined) estimates in one table.
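As a concrete illustration of what the abstract describes as an 'adjusted indirect comparison using classic frequentist methods' (often attributed to Bucher and colleagues), and of the recommendation above to tabulate direct, indirect and pooled estimates together, here is a minimal sketch with invented log odds ratios and standard errors; it is not a reconstruction of any analysis in the paper.

```python
import math

def adjusted_indirect(lor_ac, se_ac, lor_bc, se_bc):
    """Adjusted indirect comparison of A vs B through a common comparator C:
    the indirect log odds ratio is the difference of the two direct estimates,
    and its variance is the sum of their variances."""
    lor_ab = lor_ac - lor_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    return lor_ab, se_ab

def inverse_variance_pool(estimates):
    """Fixed-effect inverse-variance pooling of (estimate, standard error) pairs."""
    weights = [1 / se**2 for _, se in estimates]
    pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Invented summary statistics (log odds ratios and standard errors).
direct_ab = (-0.30, 0.15)                      # A vs B head-to-head meta-analysis
indirect_ab = adjusted_indirect(-0.50, 0.12,   # A vs C
                                -0.25, 0.10)   # B vs C
pooled_ab = inverse_variance_pool([direct_ab, indirect_ab])

for label, (est, se) in [("direct", direct_ab), ("indirect", indirect_ab), ("pooled", pooled_ab)]:
    print(f"{label:>8}: logOR = {est:.2f}, 95% CI {est - 1.96*se:.2f} to {est + 1.96*se:.2f}")
```

Note that combining the direct and indirect estimates in the final step presumes the consistency assumption discussed in the paper; where that assumption is doubtful, the two estimates should be presented separately rather than pooled.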

References

1. Ioannidis JP. Ranking antidepressants. Lancet 2009; 373: 1759–60.
2. Ioannidis J, Lau J. Evolution of treatment effects over time: empirical insight from recursive cumulative metaanalyses. Proceedings of the National Academy of Sciences of the United States of America 2001; 98: 831–6.
3. Thorlund K, Devereaux PJ, Wetterslev J, Guyatt G, Ioannidis JP, Thabane L, Gluud LL, Als-Nielsen B, Gluud C. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses? International Journal of Epidemiology 2009; 38: 276–86.
4. Trikalinos TA, Churchill R, Ferri M, Leucht S, Tuunainen A, Wahlbeck K, Ioannidis JP, The EU-PSI project. Effect sizes in cumulative meta-analyses of mental health randomized trials evolved over time. Journal of Clinical Epidemiology 2004; 57: 1124–30.
5. Wood L, Egger M, Gluud LL, Schulz KF, Juni P, Altman DG, Gluud C, Martin RM, Wood AJ, Sterne JA. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 2008; 336: 601–5.
6. Jackson D. The implications of publication bias for meta-analysis' other parameter. Statistics in Medicine 2006; 25: 2911–21.
7. Ioannidis JP, Patsopoulos NA, Evangelou E. Uncertainty in heterogeneity estimates in meta-analysis. BMJ 2007; 335: 914–6.
8. Thorlund K, Steele R, Platt R, Shrier I. Important under-recognised issue in the use of Bayesian mixed-treatment comparisons for evaluating healthcare interventions: prior sensitivity. BMJ Rapid response 2009; 16 December: www.bmj.com/cgi/eletters/338/apr03 1/b1147# 227603.
9. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schunemann HJ; GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336: 924–6.

CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials

Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. BMJ 2010; 340: c869.

Overwhelming evidence shows the quality of reporting of randomized trials is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to randomized trials, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias.

A group of scientists and editors developed the CONSORT (CONsolidated Standards Of Reporting Trials) Statement to improve the quality of reporting of randomized trials. It was first published in 1996 and updated in 2001. The Statement consists of a checklist and flow diagram that authors can use for reporting a randomized trial. Many leading medical journals and major international editorial groups have endorsed the CONSORT Statement. The Statement facilitates critical appraisal and interpretation of randomized trials.

During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT Statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001, alongside the 2001 version of the CONSORT Statement.

After an experts meeting in January 2007, the CONSORT Statement has been further revised and was published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias.

The explanatory and elaboration document intended to enhance the use, understanding, and dissemination of the CONSORT Statement has also been extensively revised. It presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included.

The CONSORT 2010 Statement, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomized trials.


Commentary

Prepared by Gerd Antes

Correspondence to: [email protected] Cochrane Centre, Freiburg, Germany.

Full and transparent reporting of the results of clinical trials is essential for assessing the quality of healthcare interventions. Inadequate reporting of trials is common, and it impedes the use of trial results in healthcare research and practice.1 Underreporting of trial results is highly detrimental to the evidence base for medical decision-making because it is likely that bias is introduced into systematic reviews if they are built on a patchy or distorted body of evidence.

Consequently, a series of reporting guidelines have been developed during the past 15 years (www.equator-network.org). The pioneering first step of this framework was the CONSORT Statement (CONsolidated Standards Of Reporting Trials) in 1996 for the publication of randomized controlled clinical trials. It has now been published as the 2010 (substantial) update of the Statement,2 after the last revision in 2001. A few particularly relevant new items have been introduced: registration is now required before inception, and researchers must state where the protocol can be accessed (if this is possible) and where the funding comes from.

The CONSORT Statement has received broad acceptance and support within the scientific community and from medical journal editors. However, progress in the quality of reporting is far from what could be expected.3 Several investigations have assessed whether the quality of reporting has improved since publication and revision of the CONSORT Statement. Although these have shown that improvements have occurred, the quality of reporting is far from satisfactory, even for items that are crucial for the assessment of trial quality. Essential items like sample size estimation (45%), the randomization procedure (34%) or the concealment of treatment allocation (25%) are described in an unacceptably low number of reports.3

Even among high impact journals, fewer than 50% of the investigated journals recommend that authors comply with the CONSORT Statement. Of those, only a minority have procedures that support adherence to the guidance in the CONSORT Statement. Among non-English language journals, the situation is even more irritating, although CONSORT has been translated into 10 other languages (www.consort-statement.org/database/consort-statement). Of 30 German journals which produced a considerable yield of reports of randomized trials from handsearching, not one of them even mentioned the CONSORT Statement in their author guidelines.

Even after almost 15 years, the CONSORT Statement has not achieved the endorsement and adherence it deserves, in spite of impressive evidence of benefit. The quality of literature-based systematic reviews is so heavily dependent on the reporting quality of trial results that The Cochrane Collaboration should increase its efforts to motivate editors and journals to implement procedures which directly adhere to the CONSORT Statement and the corresponding checklist.

References

1. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008; 3(8): e3081.
2. Schulz KF, Altman DG, Moher D, the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340: c332.
3. Antes G. The new CONSORT statement. BMJ 2010; 340: c1432.

The PRISMA Statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. PLoS Medicine 2009; 6(7): e1000100.

Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.

Since the development of the QUOROM (QUality Of Reporting Of Meta-analyses) Statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of healthcare interventions.

The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In the Explanation and Elaboration document, the meaning and rationale for each checklist item is explained. For each item, an example of good reporting and, where possible, references to relevant empirical studies and methodological literature are provided.

The PRISMA Statement, this explanation and elaboration document, and the associated website (www.prisma-statement.org) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

Commentary

Prepared by Harriet MacLehose and David Tovey

Correspondence to: [email protected] Editorial Unit, London, UK.

The recent proliferation of reporting guidelines reflects deficiencies in all types of published articles that can lead to invalid conclusions and misinterpretation. As systematic reviews are increasingly used to inform decision-making in clinical practice and health policy, it is crucial that they are conducted and reported in a manner that minimises bias and promotes understanding.

After 10 years, the QUOROM (QUality Of Reporting Of Meta-analyses) Statement has evolved into the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) Statement. PRISMA, like QUOROM, aims to improve the quality of reporting of systematic reviews, but the guidelines have been updated to reflect developments in systematic review methodology, including clearer guidance on evaluating risk of bias of included studies. This is particularly important given the finding by Moher and colleagues in 2007 that 'For therapeutic reviews ... only half of the non-Cochrane Reviews assessed the quality of included studies (43/87; 49.4%).'1 The acronym alone reflects the evolution in terminology, from 'meta-analysis' to 'systematic review', with the former term now being reserved for quantitative pooling of results.

The PRISMA Statement was developed through an inclusive, consensus process, informed by evidence whenever possible, involving review authors, methodologists, clinicians, medical editors, and consumers. The PRISMA Statement companion paper provides an explanation for the inclusion of the checklist items along with examples. Most items can be implemented in practice, although at least one item – systematic review registration – is not yet always practical.

The PRISMA Statement may bring challenges for review authors and editors: the new checklist is longer than the QUOROM checklist (increased from an 18-item to a 27-item checklist) and has lengthy associated documentation; the new checklist may result in longer articles (an adverse effect noted by the PRISMA authors); and, unlike other reporting guidelines, it has had a name change, which may contribute to the challenges in uptake as many people are familiar with QUOROM. Fortunately for authors and editors, the latest versions of PRISMA and other reporting guidelines are now hosted in one place by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network (www.equator-network.org). Importantly, the PRISMA Statement is not intended as a quality measure for systematic reviews, concentrating as it does on reporting rather than conduct. However, since these are inextricably linked, there is no doubt that it will be widely used as a component of any evaluation of review quality.

The Co-ordinating Editors of Cochrane Review Groups have endorsed the PRISMA Statement to improve the reporting of Cochrane reviews. Compliance with most requirements will not be a cause for concern – authors are required to report on most of the items by the nature of the structured format of Cochrane reviews. However, Cochrane review titles do not identify the review as a systematic review and not all Cochrane reviews include a flow diagram for search results. Work is ongoing to ensure that these will become features of future Cochrane reviews, incorporated into the Review Manager software where possible, as The Cochrane Collaboration continues to ensure that it retains its reputation for creating reviews that meet the highest possible standards for quality, transparency and completeness of reporting.

Reference

1. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Medicine 2007; 4(3): e78.

AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. Journal of Clinical Epidemiology 2009; 62: 1013–20.

Background: Systematic reviews have become a standardised way to assess and summarise healthcare research; however, the underlying quality of these reviews has received relatively little attention. AMSTAR has been developed as a tool for evaluating the quality of systematic reviews.

Objective: To measure the agreement, reliability, construct validity, and feasibility of a measurement tool to assess systematic reviews (AMSTAR).

Design: A random sample of 30 systematic reviews, including 11 Cochrane and 19 non-Cochrane reviews, was selected. Each review was assessed by two reviewers using: the enhanced quality assessment questionnaire (Overview Quality Assessment Questionnaire [OQAQ], originally developed by Oxman and Guyatt, 1991); the Sacks' instrument, 1987; and the newly developed measurement tool AMSTAR. The reliability (inter-observer kappas of the 11 AMSTAR items), intraclass correlation coefficients (ICCs) of the sum scores, construct validity (ICCs of the sum scores of AMSTAR compared with those of the other instruments), and completion times were assessed.

Main results: The inter-observer agreement of the individual items in AMSTAR was high, with a mean kappa of 0.70 (95% confidence interval [CI] 0.57 to 0.83). However, items four (publication status), seven (report of assessment of scientific quality), and nine (appropriate method to combine studies) scored fair to moderate at 0.38, 0.42, and 0.45, respectively. The mean kappa statistics recorded for the other instruments were 0.63 (95% CI 0.38 to 0.78) for the enhanced OQAQ tool and 0.40 (95% CI 0.29 to 0.50) for the Sacks' instrument. The ICC of the total score for AMSTAR was 0.84 (95% CI 0.65 to 0.92), compared with 0.91 (95% CI 0.82 to 0.96) for OQAQ and 0.86 (95% CI 0.71 to 0.94) for the Sacks' instrument. AMSTAR took a mean of 14.9 (95% CI 17.0 to 12.8) minutes to complete, OQAQ took 20.3 (95% CI 22.5 to 18.0) minutes, and the Sacks' instrument took 34.4 (95% CI 37.3 to 31.6) minutes.

Conclusions: AMSTAR has good agreement, reliability, construct validity, and feasibility. These findings need confirmation by a broader range of assessors and a more diverse range of reviews.
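For readers unfamiliar with the statistic behind the inter-observer agreement reported above, the following minimal sketch computes Cohen's kappa for a single AMSTAR item rated by two assessors; the 30 yes/no judgements are invented for illustration and do not come from the study.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters scoring the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected if the two raters had scored independently.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum((counts1[c] / n) * (counts2[c] / n)
                   for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no judgements for one AMSTAR item across 30 reviews.
rater_a = ["yes"] * 20 + ["no"] * 10
rater_b = ["yes"] * 18 + ["no"] * 10 + ["yes"] * 2
print(f"Cohen's kappa: {cohen_kappa(rater_a, rater_b):.2f}")  # about 0.70
```

Kappa discounts the agreement that would occur by chance alone, which is why an item can show high raw agreement but only fair to moderate kappa when one response dominates.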

Commentary

Prepared by Steff Lewis

Correspondence to: [email protected] Health Sciences, University of Edinburgh, UK.

AMSTAR is a measurement tool to assess the methodological quality of systematic reviews. It is an 11 point scale which, the authors of this paper claim, has good inter-rater agreement and is reasonably quick to use.

Cochrane review authors spend a lot of time assessing the quality (or risk of bias) of individual studies but, as yet, the Collaboration does not routinely assess the methodological quality of the reviews it produces. If the Collaboration were to assess the quality of its reviews, is AMSTAR something it could use?

When assessing individual trials, The Cochrane Collaboration has recommended assessing individual domains of quality, or risk of bias, such as allocation concealment and blinding of outcome assessors. However, AMSTAR provides a total quality score (out of 11), which goes against this advice. AMSTAR assumes that each of its domains is equally important. In providing a single numeric summary, the details of the areas where a particular systematic review falls short are lost.

AMSTAR asks several questions relating to searching for and documenting the studies in the review, but only asks one question relating to the statistical methods used to summarise the studies: 'Were the methods used to combine the findings appropriate?' As a statistician, I have spent weeks of my life assessing this particular point in Cochrane reviews, and it is hard for me to accept that all the multitude of errors that I have seen can be summarised into one sentence. If I were using AMSTAR, I would want a separate checklist of mistakes to look for, so that I could assess this one point.

In summary, I think this paper describes a good study assessing the inter-rater agreement and usability of the AMSTAR scale. However, I'm not sure that the AMSTAR scale is something that The Cochrane Collaboration should use in its current form.

An evidence-based practice guideline for the peer review of electronic search strategies

Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. Journal of Clinical Epidemiology 2009; 62: 944–52.

Background: Systematic reviews require complex and highly sensitive electronic literature search strategies; however, there are no current guidelines for their peer review. Poor search strategies may fail to identify existing evidence because of poor sensitivity, or may increase the resources required to conduct reviews as a result of inadequate precision.

Objective: To create an annotated checklist for the peer review of electronic search strategies.

Design: A systematic review of the literature was conducted to identify existing instruments that evaluate or validate the quality of literature searches in any discipline, and to identify which elements of electronic search strategies have a demonstrable impact on search performance. A survey of people experienced in systematic review searching was also conducted to gather expert opinion regarding both the impact of various search elements on the search results and the importance of each element in the peer review of electronic search strategies.

Main results: The checklist for the peer review of electronic search strategies includes six elements on which there was strong consensus: the accurate translation of the research question into search concepts; the correct choice of Boolean operators and of line numbers; the adequate translation of the search strategy for each database; the inclusion of relevant subject headings; and the absence of spelling errors. Seven additional elements had partial support and are included in this guideline.

Conclusions: This evidence-based guideline facilitates the improvement of search quality through peer review, and thereby the improvement in quality of systematic reviews. It is relevant for librarians and information specialists, journal editors, developers of knowledge translation tools, research organizations, and funding bodies.

Commentary

Prepared by Julie Glanville

Correspondence to: [email protected] Health Economics Consortium Ltd, University of York, UK.

This paper provides much needed guidance on the parameters which could be most useful when assessing the quality of searches. This is a helpful paper because of its focus on the types of searching used to inform systematic reviews, which tend to have an emphasis on recall (sensitivity). The guidance was developed specifically for database searches.

The guidance was developed by systematically reviewing the information retrieval literature to identify the evidence on important elements in database search strategy performance. Once key elements which might impact on search strategy performance had been identified, searchers experienced in conducting literature searches to inform systematic reviews were surveyed to obtain their ratings of the importance of the elements.

The elements which emerged as most important to successful sensitive searches included ensuring that the research question was captured by the search, using the correct combination of Boolean operators and proximity operators, checking for spelling and syntax errors, checking that line numbers were correct, checking that translation between search interfaces had been achieved correctly, and ensuring that the subject headings used were sensitive enough to capture the research question. Some issues were assessed as unimportant to the quality of the search, including search term redundancy, combining subject headings and free text in a single search statement, and using additional database-specific fields.

The guidance recommends that peer review of search strategies should be conducted near the beginning of the review process to ensure the search strategy is fit for purpose and the review is informed by the best possible searches. The guidance offers a structure for standardised assessment, but when used in practice it is likely to need expansion. Practical guidance needs to be more detailed, as each recommendation contains several aspects which might need to be assessed. The guidance is pitched at experienced information professionals, but even they might benefit from an annotated version of the guidance with explanations and examples, for example, further explanation of why using the MeSH term REHABILITATION/ in searches is not recommended, and more examples of how to limit safely in specific databases.

The guidance can assist Trials Search Co-ordinators and other searchers to improve the quality of their own searches and provides a clear structure for peer reviewers who have been asked to review a strategy.


Empirical Studies within the Collaboration

This section aims to highlight some of the current methodological research being carried out within The Cochrane Collaboration. To register ongoing methodological research within The Cochrane Collaboration please contact [email protected].

Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: methodological study

Simon Lewin, Claire Glenton and Andrew Oxman

Correspondence to: [email protected] Knowledge Centre for the Health Services, Oslo,Norway.

Background: Complex interventions are made up of characteristics such as elements that may act both independently and interdependently, complex systems for intervention delivery, interventions that are difficult to describe and replicate, complex explanatory pathways, or uncertainty about the mechanism of action of the intervention. Randomized trials are sometimes used to evaluate complex interventions, whilst qualitative approaches can contribute to both their development and evaluation. The use of multiple, integrated approaches may be particularly useful in evaluating the effects of complex health and social care interventions, as these involve social or behavioural processes that are difficult to explore using quantitative methods only.

Objective: To examine the use of qualitative approaches alongside randomized trials of complex healthcare interventions.

Methods: A systematic sample of 100 trials, from 492 trials published in English during 2001 to 2003 by the Cochrane Effective Practice and Organisation of Care Review Group, was analysed. Two reviewers extracted data describing the randomized controlled trials and qualitative studies, the quality of the studies, and how, if at all, qualitative and quantitative findings were combined.

Summary of main results: Thirty trials had associated qualitative work and 19 of these were published studies. Fourteen qualitative studies were done before the trial, nine during the trial and four after the trial. Thirteen studies reported an explicit theoretical basis and 11 specified their methodological approach. Approaches to sampling and data analysis were poorly described. For 20 trials there was no indication of integration of qualitative and quantitative findings at the level of analysis or interpretation. The quality of the qualitative studies was highly variable.

Conclusions: Qualitative studies alongside randomized controlled trials remain uncommon. The findings of qualitative studies seemed to be poorly integrated with those of trials and often had major methodological shortcomings.

Reference

Lewin S, Glenton C, Oxman AD. Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: methodological study. BMJ 2009; 339: b3496.

An encouraging assessment of methods to inform priorities for updating systematic reviews

Alex Sutton, Sarah Donegan, Yemisi Takwoingi, Paul Garner, Carol Gamble and Alison Donald

Correspondence to: [email protected] of Health Sciences, University of Leicester, UK.

Background: Systematic reviews can become rapidly out of date as new research evidence emerges. Arbitrary update strategies, such as updating all reviews according to a perpetual rota, may result in inefficient use of resources in slowly developing fields or delayed incorporation of knowledge in rapidly evolving fields. Update prioritization strategies devised for a collection of existing reviews, such as those of Cochrane Review Groups, may offer a more effective way of keeping clinical recommendations up to date and accurate with limited resources.

Objective: To consider the use of statistical methods that aim to prioritize the updating of a collection of systematic reviews based on preliminary literature searches.

Methods: A new simulation-based method estimating statistical power and the ratio of weights assigned to the predicted new and old evidence, and the existing Barrowman n approach, were used to assess whether the conclusions of a meta-analysis are likely to change when the new evidence is included. Using only information on the number of subjects randomized in the 'new' trials, these were applied retrospectively, by removing recent studies, to the 12 systematic reviews of the 67 reviews in the Cochrane Infectious Diseases Group Database which met the inclusion criteria for the study.


Summary of main results: When the removed studies were reinstated, inferences changed in five of them. These reviews were ranked, in order of update priority, 1, 2, 3, 4 and 11 (Barrowman n approach) and 1, 2, 3, 4 and 12 (simulation-based power approach). The low ranking of one significant meta-analysis by both methods was due to unexpectedly favourable results in the reinstated study.

Conclusions: This study demonstrates the feasibility of the use of analytical methods to inform update prioritization strategies. Under conditions of homogeneity, Barrowman's n and simulated power were in close agreement. Further prospective evaluation of these methods should be undertaken.

Reference

Sutton AJ, Donegan S, Takwoingi Y, Garner P, Gamble C, Donald A. An encouraging assessment of methods to inform priorities for updating systematic reviews. Journal of Clinical Epidemiology 2009; 62: 241–51.

Reporting and methodologic quality of Cochrane Neonatal Review Group systematic reviews

Khalid Al Faleh and Mohammed Al-Omran

Correspondence to: [email protected] of Pediatrics, King Saud University, Saudi Arabia.

Background: The Cochrane Neonatal Review Group is one of the 51 Review Groups registered with The Cochrane Collaboration. Members of the Group prepare reviews of the results of randomized trials of interventions for the prevention and treatment of disease in newborn infants. In preparing their reviews, review authors follow systematic methods summarised in the Cochrane Handbook for Systematic Reviews of Interventions and in a checklist developed by the Group's editors for neonatal reviews. Assessment of the methodological quality of systematic reviews, and how well they are reported, is essential in judging whether the findings warrant a change in clinical practice. The most commonly used tools for the assessment of review quality are the QUality Of Reporting Of Meta-analyses (QUOROM) Statement, published in 1999 and recently updated as the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) Statement (see page 13), and the Overview Quality Assessment Questionnaire (OQAQ).

Objective: To assess the methodological and reporting quality of systematic reviews published by the Cochrane Neonatal Review Group and to evaluate whether the publication of the QUOROM Statement is associated with an improvement in review quality.

Methods: A random sample of all neonatal reviews published in the Cochrane Database of Systematic Reviews Issue 4, 2005 was selected for analysis. Two reviewers independently extracted data and assessed review quality, using the items of the QUOROM Statement to assess the quality of reporting and total scores of the OQAQ to assess methodological quality.

Summary of main results: A sample of 61 of the 210 neonatal reviews was analysed. Eighty-two per cent were published after the publication of the QUOROM Statement. Most of the reviews published before the QUOROM Statement had been updated, and the most recent version was used for the analysis. Overall, the reviews were of good quality with minor flaws, based on OQAQ total scores. Areas needing improvement include abstract reporting, a priori plans for assessing and handling heterogeneity, assessment of publication bias, reporting of agreement among review authors, documentation of trial flow, and discussion of possible biases in the review process. Reviews published after the QUOROM Statement had significantly higher quality scores.

Conclusions: The systematic reviews produced by the Cochrane Neonatal Review Group are generally of good quality with minor flaws, but efforts should be made to improve the quality of reports. Readers should assess the quality of published reports before implementing the recommendations.

Reference
Al Faleh K, Al-Omran M. Reporting and methodologic quality of Cochrane Neonatal review group systematic reviews. BMC Pediatrics 2009; 9: 38.

Analysis of the reporting of search strategies in Cochrane systematic reviews

Adriana Yoshii, Daphne Plaut, Kathleen McGraw, Margaret Anderson and Kay Wellik

Correspondence to: [email protected] Science Center Libraries, University of Florida-Jacksonville,USA.

Background: The Cochrane Handbook for Systematic Reviews of Interventions provides instructions for documenting a systematic review's electronic search strategy, listing seven elements that should be included. Comprehensive reporting of the search strategy is important in enabling readers to evaluate the search when critically appraising the review's quality.

Objective: To determine to what extent these instructions for reporting electronic search strategies have been followed in recently published Cochrane reviews.

Methods: Sixty-five reviews added to the Cochrane Database of Systematic Reviews in the first quarter of 2006 were examined for their adherence to the instructions in the Cochrane Handbook for reporting electronic search strategies. A further 18 reviews were excluded as their searches were conducted only in the specialized registers of Cochrane Review Groups.

Summary of main results: No review reported all seven recommended elements. Four reviews (6%) included six elements. Twenty-one (32%) included five or more elements and 44 (68%) included four or fewer. Three reviews reported only two elements. The 65 reviews came from 41 Cochrane Review Groups.

Conclusions: The instructions from the Cochrane Handbook for reporting search strategies are not being consistently followed by groups producing Cochrane reviews.

Reference
Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association 2009; 97: 21–9.


Searching for unpublished trials in Cochrane reviews may not be worth the effort

Mieke L van Driel, An De Sutter, Jan De Maeseneer and Thierry Christiaens

Correspondence to: [email protected] of Health Sciences and Medicine, Bond University,Australia.

Background: Minimizing bias related to the studies included in a systematic review is an important issue in Cochrane reviews. This includes attempts to locate unpublished studies, as studies with significant results are more likely to be published. However, unpublished studies may lack information, be of poor methodological quality, or both, and their inclusion could introduce bias rather than prevent it.

Objective: To assess the value of searching for unpublished data by exploring the extent to which Cochrane reviews include unpublished data and by evaluating the methodological quality of included unpublished trials.

Methods: The reference lists of all completed Cochrane reviews published since 2000 in the Cochrane Database of Systematic Reviews Issue 3, 2006 were checked for the inclusion of unpublished studies. All 116 references from a random sample of 61 of the 292 reviews which included unpublished trials were studied. MEDLINE, CENTRAL and the websites of pharmaceutical companies were searched for formal publications of these trials. The methodological quality of trials marked as 'unpublished data only' was assessed on three items relating to the control of bias: allocation concealment, blinding and withdrawals.

Summary of main results: Of the 2689 completed Cochrane reviews, 292 (12%) included references to unpublished data. Unpublished trials made up 9% of all trials included in the sample analysed. Thirty-eight per cent of the unpublished trials were found to have been published. Allocation concealment was rated as unclear or not adequate in 54%, blinding was not reported in 39%, and in 43% the randomization procedure was unclear. In 43% of reviews, the reported withdrawal rates were above 20%. Trials that were eventually published had larger mean population sizes than those that remain unpublished (P = 0.02). Methodological quality and publication bias were mentioned in half of the reviews and explored in a third.

Conclusions: A minority of Cochrane reviews include unpublished trials, and many of these are eventually published. Truly unpublished studies are of poor or unclear methodological quality. It may be better to invest in regular updating of reviews than in extensive searching for unpublished studies.

Reference
van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. Journal of Clinical Epidemiology 2009; 62: 838–44.

Thomas C Chalmers M.D. Award 2009

The Thomas C Chalmers M.D. prize is awarded annually for the best oral or poster presentation at the Cochrane Colloquium. In 2009, in Singapore, the best oral presentation was awarded to Yemisi Takwoingi, Jac Dinnes, Mariska Leeflang and Jon Deeks for their study entitled 'An empirical assessment of the validity of uncontrolled comparisons of the accuracy of diagnostic tests'. The best poster presentation was awarded to Lukas Staub, Sarah Lord and Nehmat Houssami for their study entitled 'Including evidence about the impact of tests on patient management in systematic reviews of diagnostic test accuracy'.

An empirical assessment of the validity of uncontrolled comparisons of the accuracy of diagnostic tests

Yemisi Takwoingi, Jac Dinnes, Mariska Leeflang and Jon Deeks

Correspondence to: [email protected] Health, Epidemiology and Biostatistics Unit, University ofBirmingham, UK.

Background: Cochrane reviews of diagnostic test accuracy aim to provide evidence to support the selection of diagnostic tests by comparing the performance of tests or test combinations. Studies that directly compare tests within patients or between randomized groups are preferable but are uncommon. Consequently, between-study uncontrolled (indirect) comparisons of tests may provide the only evidence of note. Such comparisons are likely to be prone to bias, like indirect comparisons between healthcare interventions, and may be more severely so because of considerable heterogeneity between studies and the lack of a common comparator test.

Objective: To estimate the bias and reliability of meta-analyses of uncontrolled comparisons of diagnostic accuracy studies compared to meta-analyses of comparative studies.

Methods: Meta-analyses that included test comparisons with both comparative studies and uncontrolled studies were identified from a cohort of higher quality diagnostic reviews (Dinnes et al 2005) indexed in the Database of Abstracts of Reviews of Effects up to December 2002, supplemented by more recent searches. The hierarchical summary ROC model was used to synthesize pairs of sensitivity and specificity in each meta-analysis and to estimate and compare accuracy measures for both the uncontrolled test comparisons and the comparative studies.

Summary of main findings: Ninety-four comparative reviews were identified, of which 30 provided data to conduct both direct and uncontrolled test comparisons. The degree of bias and variability of relative sensitivities, specificities and diagnostic odds ratios between comparative and uncontrolled comparisons was analysed. Further results will be available at the Colloquium.

Conclusions: Test selection is critical to health technology assessment. In the absence of comparative studies, selection has often relied on comparisons of meta-analyses of uncontrolled studies. Limitations of such comparisons should be considered when making inferences on the relative accuracy of competing tests, and in encouraging funders to ensure future test accuracy studies address important comparative questions.
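The comparative measures mentioned above can be written down in a few lines. The sketch below computes relative sensitivity, relative specificity and the relative diagnostic odds ratio for two hypothetical tests whose summary accuracy comes from separate (uncontrolled) sets of studies; the numbers are invented for illustration and are not results from the study.

```python
def diagnostic_odds_ratio(sens, spec):
    """DOR = odds of a positive result in diseased / odds of a positive result in non-diseased."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Hypothetical summary accuracy of two tests, each pooled from its own set of studies.
test_a = {"sens": 0.85, "spec": 0.90}
test_b = {"sens": 0.75, "spec": 0.92}

relative_sensitivity = test_a["sens"] / test_b["sens"]
relative_specificity = test_a["spec"] / test_b["spec"]
relative_dor = diagnostic_odds_ratio(**test_a) / diagnostic_odds_ratio(**test_b)
print(f"relative sensitivity = {relative_sensitivity:.2f}, "
      f"relative specificity = {relative_specificity:.2f}, relative DOR = {relative_dor:.2f}")
```

In an uncontrolled comparison, any difference between the two study sets (spectrum of patients, reference standards, thresholds) is confounded with the apparent difference between the tests, which is exactly the bias the study sets out to quantify.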


Reference

Takwoingi Y, Dinnes J, Leeflang M, Deeks J. An empirical assessment of the validity of uncontrolled comparisons of the accuracy of diagnostic tests [abstract]. 17th Cochrane Colloquium; 2009 Oct 11-14; Singapore: 11.

Including evidence about the impact of tests on patient management in systematic reviews of diagnostic test accuracy

Lukas Staub, Sarah Lord and Nehmat Houssami

Correspondence to: [email protected] Clinical Trials Centre, University of Sydney, Australia.

Background: Systematic reviews (SRs) provide more precise estimates of test sensitivity and specificity than single studies. Their interpretation requires consideration of the impact of test results on patient management and the consequences for patient outcomes.

Objective: To describe concepts for the inclusion of data about patient management as an extension to SRs of test accuracy.

Methods: We apply standard epidemiological principles and present examples to define key concepts for reporting test impact on patient management in SRs of test accuracy.

Summary of main findings: Review authors should state assumptions about changes in management and consequences for patient outcomes due to detection of 'extra' true-positives (TP)/false-negatives (FN)/true-negatives (TN)/false-positives (FP) when comparing the sensitivity and specificity of two tests: a) if assumptions that all extra cases will receive the specified management change are straightforward, no further evidence about management is needed; b) if uncertainty exists, additional data may be required to estimate what proportion of extra cases will receive a change in management. These data may be found in accuracy studies and can be summarised to aid the interpretation of SRs for clinical practice. Unfortunately, they are often not clearly or adequately reported. The GRADE approach can then be used to judge evidence about the effects of these management changes on patient outcomes.

Conclusions: Patient management cannot always be inferred from test accuracy results, but it is relevant for the interpretation of these results. For example, if the index test is more sensitive and less specific than the comparator, information about the proportion of extra true-positives who will receive a change in treatment may be important when weighing up treatment benefits against the harms of extra false-positives. Review authors can identify situations where empirical evidence about the changes in management will assist interpretation of accuracy results and present available data in a table with a summary estimate.
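To make the idea of 'extra' true-positives and false-positives concrete, here is a minimal sketch comparing two hypothetical tests at an assumed disease prevalence; the sensitivities, specificities and prevalence are illustrative values, not data from the abstract.

```python
def extra_cases_per_1000(sens_index, spec_index, sens_comp, spec_comp, prevalence):
    """Extra true-positives and false-positives per 1000 patients when the
    index test replaces the comparator, at an assumed disease prevalence."""
    diseased = prevalence * 1000
    non_diseased = (1 - prevalence) * 1000
    extra_tp = (sens_index - sens_comp) * diseased
    extra_fp = ((1 - spec_index) - (1 - spec_comp)) * non_diseased
    return extra_tp, extra_fp

# Hypothetical example: the index test is more sensitive but less specific.
extra_tp, extra_fp = extra_cases_per_1000(
    sens_index=0.90, spec_index=0.80,
    sens_comp=0.80, spec_comp=0.90,
    prevalence=0.10,
)
print(f"Extra true-positives per 1000: {extra_tp:.0f}")   # 10
print(f"Extra false-positives per 1000: {extra_fp:.0f}")  # 90
```

Whether those 10 extra detected cases justify the 90 extra false-positive results then depends on the proportion of each group whose management actually changes, which is precisely the additional evidence the authors argue should be brought into accuracy reviews.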

Reference

Staub LP, Lord SJ, Houssami N. Including evidence about the impact of tests on patient management in systematic reviews of diagnostic test accuracy [abstract]. 17th Cochrane Colloquium; 2009 Oct 11-14; Singapore: 74–5.

Cochrane Methodology Review Group

Nicola McDowell and Mike Clarke

Correspondence to: [email protected] Methodology Review Group, UK Cochrane Centre, UK.

In 2009, the editorial base for the Cochrane Methodology Review Group moved from its original home in Oslo, Norway, to Dublin, Ireland and Oxford, England. We are very grateful for the considerable input to the Group by Elizabeth Paulsen and Marit Johansen, who stepped down as Managing Editor and Trials Search Co-ordinator respectively at that time. The editorial team continues to be co-ordinated by Mike Clarke and Andy Oxman, supported now by Nicola McDowell (Managing Editor) and Sarah Chapman (Trials Search Co-ordinator), based in Oxford. The other editors are Paul Glasziou, Peter Gøtzsche, Gordon Guyatt (Criticism Editor), Peter Juni, Philippa Middleton and Karen Robinson.

During the last year, one Cochrane Methodology Review was updated (see below) and the protocol for a new review looking at the impact of the CONSORT guidelines on reporting of trials was published in The Cochrane Library. This brings the total number of Cochrane Methodology reviews to 14 protocols and 14 full reviews.

The Cochrane Methodology Register (CMR) continues to grow year on year, with infrastructure support from the UK Cochrane Centre, which is part of the National Institute for Health Research, and a grant from The Cochrane Collaboration during 2009. Sally Hopewell and Anne Eisinga at the UK Cochrane Centre work with Mike on CMR, and by mid 2010 it contains more than 13,000 references to studies and other reports relevant to the methods of systematic reviews and other evaluations of health and social care. Over a thousand records were added in the last twelve months.

If you are interested in contributing to the work of the Cochrane Methodology Review Group, as an author, referee or in some other way, please contact the Managing Editor ([email protected]).

Cochrane Methodology review on recruitment strategies for randomized trials

Mike Clarke

Correspondence to: [email protected] Methodology Review Group, UK Cochrane Centre, UK.

In the last year, one of the Cochrane Methodology reviews1 underwent a major face lift and updating, with a new author team taking over responsibility for this review of ways to improve the recruitment of participants into research studies, assisted by an award from the National Institute for Health Research's incentive scheme for Cochrane reviews. Shaun Treweek and colleagues ran new searches and focused the scope of the research on recruitment to randomized trials, producing an updated review with 27 included studies and identifying strategies which might help boost the number of people agreeing to take part in trials.
Finding studies that are about recruitment to trials, rather than the trials themselves, is not easy. As well as looking in the Cochrane Methodology Register, the team of authors searched MEDLINE, EMBASE, the educational database ERIC, and The Campbell Collaboration's SPECTR collection of controlled trials in education and other non-healthcare areas. They checked 10,000 potentially relevant references, and worked their way through more than 180 full articles, before settling on the 27 reports that were suitable for inclusion in the review. There were 24 studies with interventions targeted at the potential participants for randomized trials and three studies where the interventions were aimed at people recruiting others to trials. As well as recruitment to real trials, the review also includes research in which strategies were tested for hypothetical trials, to see if people would say that they would be willing to be randomized, even if a trial was not immediately available to them. The authors did find some such studies. The whole collection of research allowed nine categories of intervention to be assessed.
Studies among potential participants filled seven of these categories: open versus blinded randomized trial, placebo versus another comparator, conventional randomized trial design versus another design, modifications to the consent process, modifications to the approach made to potential participants, financial incentives to participants, and reminders about the trial. The two categories for the studies that targeted recruiters were modifications to their training, and greater contact between the trial co-ordinator and the trial sites.
For most of the interventions investigated, the updated Cochrane Methodology review was able to draw on a single trial only, but some of these did suggest that the intervention tested would increase recruitment. The promising interventions were telephone reminders to non-responders for a trial of ways to help people return to work after illness; opt-out procedures requiring potential participants to get in touch with the researchers if they did not want to be contacted about a trial of decision aids for colorectal cancer screening; and mailing a home safety questionnaire to potential participants for an injury prevention trial. A trial of vitamin and mineral supplementation, and another of hormone replacement therapy, both found that making the trial open rather than blinded boosted the number of people willing to take part.
The effects of the other strategies tested by studies in the review were less clear. One strategy based on payments to participants worked in the context of a hypothetical trial, but the review authors are not sure if this would translate into a real trial. The study, which was published in 2004, presented pharmacy students in the USA with a range of hypothetical trials involving different levels of potential harm. It found that increasing the incentive payment by several hundred dollars increased the number of people who said they would be willing to be randomized.
A further update of the review is taking place now, bringing in information from more recent research and completing the thorough investigation of some studies that were identified for the current version but for which more information needed to be collected. This update should be ready and published in The Cochrane Library in the coming year. In the meantime, you can listen to the lead author, Shaun Treweek, discuss the review in The Cochrane Collaboration's special collection of material for International Clinical Trials Day 2010 (www.cochrane.org/podcasts/international-clinical-trials-day-2010/recruitment-strategies-clinical-trials).

Reference

1. Treweek S, Mitchell E, Pitkethly M, Cook J, Kjeldstrøm M, Taskila T, Johansen M, Sullivan F, Wilson S, Jackson C, Jones R. Strategies to improve recruitment to randomised controlled trials. Cochrane Database of Systematic Reviews 2010, Issue 1. Art. No.: MR000013. DOI: 10.1002/14651858.MR000013.pub4.


Cochrane Methods Groups
The science of research synthesis is still young and evolving rapidly. Methods Groups have been established to develop methodology and advise The Cochrane Collaboration on how the validity and precision of systematic reviews can be improved. For example, the Statistical Methods Group is assessing ways of handling different kinds of data for statistical synthesis, and the Applicability and Recommendations Methods Group is exploring important questions about drawing conclusions regarding implications for practice, based on the results of reviews.
There are 14 registered Methods Groups and, although their main role is to provide policy advice to The Cochrane Collaboration, they may also carry out additional core functions such as providing training, peer review and specialist advice, contributing to software developments, or conducting methodological research aimed at improving the quality of Cochrane reviews (see pages 16–20). Reports on the activities from most of the 14 registered Methods Groups are given below. Contact details for each of the Methods Groups can be found on the inside front cover of this edition of Cochrane Methods.

Registered groups

Adverse Effects
Applicability and Recommendations
Bias
Economics
Equity
Individual Patient Data Meta-analysis
Information Retrieval
Non-Randomised Studies
Patient Reported Outcomes
Prognosis
Prospective Meta-Analysis
Qualitative Research
Screening and Diagnostic Tests
Statistical Methods

Cochrane Adverse Effects Methods Group

Yoon Loke, Andrew Herxheimer, Su Golder and Sunita Vohra

This has been, as usual, a busy year for the Cochrane Adverse Effects Methods Group. We had workshops at the Cochrane Colloquium in Singapore in 2009 and the UK- and Ireland-based Cochrane Contributors' Meeting in Cardiff in March 2010, which were excellent opportunities to help review authors in tackling adverse effects. Later in the year, for those who are unable to attend our workshops in person, we plan to run 'webinars' with the help of our friends at the Canadian Cochrane Centre. This will certainly be a new challenge in transferring our usual hands-on workshop to a virtual setting. The first webinar, on data extraction for adverse effects, was scheduled for 17 June 2010 (after Cochrane Methods went to press) (http://ccnc.cochrane.org/cochrane-canada-live-webinars). If you have any suggestions on what you would like us to cover in future webinars, please contact us via our website.
We are glad to welcome Sunita Vohra from the University of Alberta, Canada, who has kindly agreed to join us as one of the Co-Convenors. We are also very pleased to have Su Golder (MRC Fellow in Health Services Research) back from maternity leave, and look forward to further methodological expertise from Sunita and Su in improving the ways in which we review adverse effects.
We welcome questions, suggestions, and new ideas from review authors and editors; please e-mail any of us. Contact details and specific areas of interest are given on our webpage: http://aemg.cochrane.org/contact-us.

Cochrane Bias Methods Group

David Moher, Doug Altman, Jonathan Sterne, Isabelle Boutron and Lucy Turner

Over the past five years, the Bias Methods Group (BMG) has continued to raise awareness of methodological discussion and codes of practice for dealing with bias in systematic reviews. The BMG is active within the Cochrane community, hosting workshops and training sessions, giving presentations, and conducting priority topic research and methods reviews. The BMG has experienced a 30% increase in membership over the past year, with 115 members in 18 countries who share an interest in bias.
In 2009, the BMG welcomed a fourth Co-Convenor, Dr Isabelle Boutron, whose expertise is proving to be a considerable asset to the Group. Dr Boutron's work on the risk of bias is contributing to the current evaluation of the Risk of Bias (RoB) tool (see page 4) led by the BMG alongside Jelena Savovic, Julian Higgins and David Tovey. The objective of the tool is to provide an easily accessible, comprehensive means of assessing and reporting bias in reviews. The update of the RoB tool has involved focus group sessions, discussion meetings and an extensive survey completed by Cochrane authors, editors and Review Group staff. The updated and improved version of the tool is due to be published in the next edition of the Cochrane Handbook for Systematic Reviews of Interventions. We aim to present results of the evaluation at the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010, where we also hope to run a workshop at which further discussion of proposed improvements to the RoB tool can take place.
We are delighted to announce that Co-Convenor Professor Jonathan Sterne has been elected as a member of the Collaboration's recently established Methods Executive. The Methods Executive will provide advice on methodological issues to the Cochrane Collaboration Steering Group and provides a focal point for Collaboration-wide methods discussions and initiatives (see page 2).
The Collaboration has also established a new Methods Board (see page 2), which will be the body responsible for developing methods guidance and strengthening communications among Methods Groups, the Methodology Review Group and various other individuals with methods roles in the Collaboration. We look forward to working with the Methods Board to establish a network of Review Group-based methodologists who will take responsibility for the way that Cochrane reviews address bias.
Current, in-progress, Cochrane Methodology reviews by BMG members include:

• Adjusted indirect comparison for estimating relative effects of competing healthcare interventions (Fujian Song).
• Comparison of protocols to published articles for randomised controlled trials (Rebecca Smyth).
• CONsolidated Standards Of Reporting Trials (CONSORT) and the quality of reporting of randomized controlled trials (David Moher).

Recently completed BMG Cochrane Methodology reviews:

• Checking reference lists to find additional studies for systematic reviews [pending publication] (Tanya Horsley).
• Publication bias in clinical trials due to statistical significance or direction of trial results (Sally Hopewell).
• When and how to update systematic reviews (David Moher).

The BMG is also involved in initiatives related to the reporting of health research. Without adequate reporting, it is impossible to identify and assess risk of bias in primary studies. Our convenors are among the founders of the EQUATOR Network (www.equator-network.org), which seeks to improve the quality of scientific publications by promoting transparent and accurate reporting of health research. Our members have also been integral to the recent 2010 update of the CONSORT Statement (www.consort-statement.org) (see page 12) and the new PRISMA Statement (see page 13).
The BMG will be holding our next meeting at the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010 and we should like to invite all those interested to attend. We thank our current and potential funders, the Canadian Institutes of Health Research, without whose support our continued progress would not be possible. For further information about the Group, please visit the BMG website at www.ohri.ca/bmg or contact our Research Co-ordinator, Lucy Turner: [email protected].

Campbell and Cochrane Economics Methods Group

Ian Shemilt

Economics is the study of the optimal allocation of limited resources for the production of benefit to society. The Campbell and Cochrane Economics Methods Group (CCEMG) focuses on the development and application of approaches to evidence synthesis that combine economics and systematic review methods: both the role of economics methods in the evidence review and synthesis process and, conversely, the role of evidence review and synthesis methods in economic evaluation.
For Cochrane and Campbell reviews, we advocate that authors should at least comment on economic aspects of interventions and, if possible, evaluate included evidence from an economic perspective. Additionally, we support systematic approaches to incorporating searches for, and critical summaries of, evidence on resource use, costs and cost-effectiveness into reviews, alongside evidence on beneficial and adverse effects. The primary objective here is not to synthesise precise estimates of incremental resource use, costs or cost-effectiveness, but to explore economic trade-offs between interventions and to present findings in formats that facilitate new economic analyses.
Economic methods guidelines are published in Part 3, Chapter 15 of the Cochrane Handbook for Systematic Reviews of Interventions. We encourage Cochrane authors and Review Groups to become familiar with these guidelines and to seek specialist advice and peer review from the CCEMG for any protocols and reviews which have economics components, including those incorporating measures of resource use, costs and/or cost-effectiveness as primary or secondary outcomes. We also invite Cochrane contributors to access economics methods training at colloquia and other network events, or via our website (www.c-cemg.org).
In 2009 and 2010, CCEMG Co-Convenors have edited a new Wiley-Blackwell book, Evidence-Based Decisions and Economics (April 2010), which describes how the activities and outputs of evidence synthesis, systematic review, economic analysis and decision-making interact within and across different spheres of health and social policy and practice, and profiles the latest methods proposals and controversies in the field (http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1405191538.html). Other activities and outputs have included: a new web-based tool and supplementary guidance for use in reviews to adjust estimates of costs collected from included studies to a common target currency and price year (eppi.ioe.ac.uk/costconversion/default.aspx; a sketch of this kind of adjustment is given below); work on incorporating evidence on resource use and costs into Summary of findings tables (see the forthcoming 'GRADE Guidelines' series in the Journal of Clinical Epidemiology); and provision of free-trial access to the Health Economic Evaluations Database for Cochrane authors and Trials Search Co-ordinators (www.c-cemg.org).
Miranda Mugford stepped down as Chair of Co-Convenors of the CCEMG in May 2010 and was replaced by Luke Vale. We thank Miranda for her leadership and many contributions to the CCEMG over almost 20 years since its inception and wish her well in her retirement from academia. To contact the CCEMG, please e-mail Ian Shemilt ([email protected]).
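As a purely illustrative aside, the sketch below shows one common way such a cost adjustment can be performed: inflate the cost to the target price year using a GDP deflator, then convert currencies using purchasing power parities (PPPs). The deflator and PPP figures are invented placeholders, and the exact methods used by the CCEMG tool may differ.

```python
# Illustrative sketch of adjusting a cost estimate to a common price year and
# currency. All index values below are invented placeholders; real analyses
# would draw deflators and PPPs from sources such as the IMF or OECD.

# Hypothetical GDP deflator index for the source country (base year arbitrary).
GDP_DEFLATOR = {2005: 95.0, 2010: 105.0}

# Hypothetical PPP conversion factors: local currency units per international dollar.
PPP_PER_INTL_DOLLAR = {"EUR": 0.85, "GBP": 0.70}

def adjust_cost(cost, source_currency, source_year, target_currency, target_year):
    """Inflate a cost to the target price year, then convert via PPP."""
    inflated = cost * GDP_DEFLATOR[target_year] / GDP_DEFLATOR[source_year]
    in_intl_dollars = inflated / PPP_PER_INTL_DOLLAR[source_currency]
    return in_intl_dollars * PPP_PER_INTL_DOLLAR[target_currency]

# Example: a cost of 1,200 euros reported at 2005 prices, re-expressed in
# pounds sterling at 2010 prices.
print(round(adjust_cost(1200, "EUR", 2005, "GBP", 2010), 2))
```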

Campbell and Cochrane Equity Methods Group

Peter Tugwell, Mark Petticrew, Vivian Welch, Jordi Pardo Pardo, Elizabeth Kristjansson and Erin Ueffing

At the end of 2009, the Cochrane Health Equity Field became a Cochrane Methods Group. This change recognises our work in systematic review methods around equity, and facilitates discussion of these methodological challenges within the Collaboration. Our thanks go out to those who supported us!
Our aim is to improve the quality of Campbell and Cochrane reviews on interventions to reduce socioeconomic inequalities in health and to promote their use to the wider community. Ultimately, this will help build the evidence base on such interventions and increase our capacity to act on the health gap between rich and poor.
Staff: This year, we were pleased to welcome Jordi Pardo Pardo to our team as a Knowledge Translation Specialist.
Extrapolation: There is a need for improved guidance on how policy-makers, clinicians, practitioners, and the public can apply (extrapolate) the results from systematic reviews to disadvantaged groups. The question that these stakeholders really have is 'In my setting/population, will this intervention have the same effects that it had in the studies in the systematic review?' We have been awarded a grant from the Cochrane Opportunities Fund for our work in this area, and plan to hold a meeting/workshop at the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010.
Non-randomized methods: We are collaborating with the Cochrane Non-Randomised Studies Methods Group to host a two-day workshop session in Ottawa, Canada. Participants will discuss methodological issues that arise when doing systematic reviews that include non-randomized studies, such as 'Is some evidence (irrespective of the risk of bias) always better than none?'
Equity 101: We are developing training modules/workshops on basic equity principles and how the differential effects of interventions on disadvantaged populations can be considered in systematic reviews. Please contact us to participate in a session.
Open Equity Meeting: Friday, 22 October 2010 at 07:30 (during the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone). We welcome your contributions and involvement.
Logic models: The role and value of theory in systematic reviews are sometimes contested. Logic models describing mechanisms of action, with consideration of context and policy, social and cultural environments, are one method of including theory. Analytic frameworks, with their map of relationships and outcomes, are also useful for critiquing linkages in evidence in systematic reviews. We will be giving a workshop on logic models in collaboration with the Cochrane Public Health Review Group: please join us at the Joint Colloquium.
Membership/Contact: The Equity Methods Group has nearly 400 members from 35 countries. For more information or to join our listserv, please contact Erin Ueffing ([email protected]) or visit www.equity.cochrane.org.

Cochrane Individual Patient Data Meta-analysis Methods Group

Larysa Rydzewska, Jayne Tierney, Lesley Stewart, Mike Clarke and Maroeska Rovers

The Individual Patient Data (IPD) Meta-analysis Methods Group has 73 members (35 active, 38 passive) from 17 countries, with interests spanning a wide range of health care, including cancer, epilepsy, stroke, perinatal care, and malaria, and research questions in prevention, treatment, rehabilitation and prognosis. With this diversity in mind, we are considering changing the name of the Group to 'Individual Participant Data', retaining the 'IPD' abbreviation.
During our meeting at the Cochrane Colloquium in Freiburg in October 2008, attended by many new members of the Group, one of the main issues discussed was the difficulty in obtaining funding for IPD projects. Although there are well established advantages of collecting IPD for systematic reviews, this is not always apparent to funders. Also, it is not clear to them that, for some types of systematic review, such as those of prognostic studies, the collection of IPD may be the only way to perform reliable meta-analyses. Therefore, we have been compiling a list of both criticisms and positive feedback from funders for use by our members. We are also planning a collection of articles about current topics in relation to IPD, with examples from the Methods Group. This should bring the literature on this topic up to date and provide examples and information that can be cited in future funding applications. Furthermore, an article on the rationale, conduct and reporting of IPD meta-analyses has recently been published by a member of the Group.1
At the Cochrane Colloquium in Singapore in October 2009, and again at the UK- and Ireland-based Cochrane Contributors' Meeting in March 2010, we ran training workshops on when and how to use IPD in systematic reviews. This training helps review authors to decide whether an IPD approach is appropriate to their own review question and circumstances, and provides practical guidance on all aspects of the IPD approach. We intend to run this workshop during the forthcoming Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010. We also ran a further training workshop at the UK- and Ireland-based Contributors' Meeting in March 2010, led by Catrin Tudur-Smith, on statistical methods for the meta-analysis of IPD. This workshop covered methods for modelling IPD, combining IPD and aggregate data, and estimating treatment-covariate interactions.
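To make the flavour of these methods concrete, here is a minimal, hypothetical sketch of the simple two-stage approach to estimating a treatment-covariate interaction from IPD: fit the interaction within each trial, then pool the trial-specific estimates with inverse-variance weighting. The simulated data, variable names and effect sizes are invented for illustration and are not drawn from any Cochrane review or from the workshop material.

```python
# Hypothetical two-stage IPD analysis of a treatment-by-age interaction:
# stage 1 fits a logistic model within each trial; stage 2 pools the
# trial-specific interaction estimates with fixed-effect inverse-variance
# weighting. Data are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
trials = []
for trial_id in range(5):
    n = 400
    age = rng.normal(60, 10, n)
    treat = rng.integers(0, 2, n)
    # True model: modest benefit of treatment that grows with age.
    logit = -0.5 + treat * (-0.3 - 0.02 * (age - 60))
    outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    trials.append(pd.DataFrame({"trial": trial_id, "age": age,
                                "treat": treat, "outcome": outcome}))
ipd = pd.concat(trials, ignore_index=True)

# Stage 1: within-trial interaction estimates.
estimates, variances = [], []
for _, df in ipd.groupby("trial"):
    fit = smf.logit("outcome ~ treat * age", data=df).fit(disp=0)
    estimates.append(fit.params["treat:age"])
    variances.append(fit.bse["treat:age"] ** 2)

# Stage 2: fixed-effect inverse-variance pooling across trials.
w = 1 / np.array(variances)
pooled = np.sum(w * np.array(estimates)) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"Pooled treatment-by-age interaction (log-odds): "
      f"{pooled:.3f} (SE {pooled_se:.3f})")
```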


As part of the organizational changes in the way that Methods Groups participate in The Cochrane Collaboration (see page 3), we have adopted the following as our core functions:

• Providing training.
• Providing peer review on the IPD element of Cochrane reviews.
• Providing specialist advice on IPD.
• Contributing to the development of software relevant to using IPD meta-analyses in Review Manager (RevMan).
• Conducting Cochrane Methodology Reviews relevant to IPD topics.
• Contributing to the Cochrane Methodology Register.
• Helping to monitor and improve the quality of Cochrane reviews.

We are also holding regular teleconferences in which the convenors can discuss issues relevant to the organization of the Group, including, for example, the preparation of the papers on IPD methods. If you would like to join the IPD Methods Group or are interested in finding out more, please contact Larysa Rydzewska ([email protected]) or visit www.ctu.mrc.ac.uk/cochrane/ipdmg, where you will find searchable databases of completed and ongoing IPD meta-analyses and methodology research projects, as well as general information about IPD meta-analyses.

Reference

1. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct and reporting. BMJ 2010; 340: c221.

Cochrane Information Retrieval Methods Group

Julie Glanville, Carol Lefebvre, Jessie McGowan, Alison Weightman and Bernadette Coles

There are approximately 190 members of the Information Retrieval Methods Group (IRMG), many of whom have been active in a number of the projects outlined below. Our Co-ordinator (BC) continues to maintain members' contact details in Archie. In November 2009 we transferred our discussion list to the Collaboration mailing lists system supported by the German Cochrane Centre, and in March 2010 we transferred our website to the new web system, also supported by the German Cochrane Centre. We welcome feedback on the website (irmg.cochrane.org) from IRMG members and others.
In March 2010, Julie Glanville became an additional Co-Convenor of the IRMG. She brings a wealth of experience in information retrieval research and teaching in the context of systematic reviews and is co-author of the Searching for Studies chapter of the Cochrane Handbook for Systematic Reviews of Interventions.
The Co-Convenors and members of the IRMG continued to serve on various Cochrane Collaboration policy advisory groups relevant to information retrieval, including the Handbook Advisory Group, the Publishing Policy Group, the Quality Advisory Group and the Trials Search Co-ordinators Executive. There have been a number of changes in policy advisory group structures over the last year, with existing groups being wound down and new groups and committees being formed. For example, the IRMG is represented on the newly formed Methods Executive (see page 2).
Two of the Co-Convenors (CL and JG), together with another member of the IRMG, updated the Searching for Studies chapter in the Cochrane Handbook for Systematic Reviews of Interventions.1 The revised chapter contains Collaboration policy on study identification for Cochrane reviews and information on search methods and sources to search, together with revised versions of the Cochrane Highly Sensitive Search Strategies for identifying reports of randomized trials in MEDLINE.
Following on from the three-day Cochrane Collaboration Steering Group-funded meeting in Cambridge in July 2008 to explore approaches and identify solutions to meet the training and support requirements across the Collaboration, the Co-Convenors have been involved in revising their training materials, based on the Searching for Studies chapter of the Cochrane Handbook for Systematic Reviews of Interventions. These slides will contribute to Collaboration-wide training materials for Cochrane Centres and others to use when training review authors.
Progress continues to be made on the PRESS project (Peer Review of Electronic Search Strategies), led by Co-Convenors (CL and JMcG) together with other IRMG members, to develop guidance for evaluating search strategies for systematic reviews. In addition to the full project report published by the project funder, the Canadian Agency for Drugs and Technologies in Health (CADTH),2 an evidence-based practice guideline for peer-reviewing search strategies has been published,3 as has a checklist.4 The peer review forum, which forms the final element of this project, is still under development at the pilot stage, and how this might be implemented across Review Groups in the Collaboration will be discussed with IRMG members, Trials Search Co-ordinators and others when the pilot is complete. All Cochrane Trials Search Co-ordinators and all members of the IRMG were invited to contribute to the survey that underpinned this project.
The funding application to undertake an audit of search strategies in new and/or updated Cochrane reviews, reported in the previous issue of this newsletter, was not successful, but it is hoped that an audit will take place in the near future.
Several members of the IRMG, including two of the Co-Convenors (JG and CL), are involved in updating the Cochrane Methodology Review on handsearching versus electronic searching to identify reports of randomized trials, which was first published in 2003, to provide advice to the Cochrane Collaboration Steering Group, through the Monitoring and Registration Group, on the value of handsearching.
Filters for importing records from The Cochrane Library into ProCite, Reference Manager and EndNote continue to be updated on the IRMG's website (http://irmg.cochrane.org/filters-reference-management-software-procite-reference-manager-and-endnote). If you are aware of any filters for importing records from The Cochrane Library into any other reference management software, please contact Bernadette Coles ([email protected]).
Work on expanding and updating the web resource of search filters compiled by the InterTASC Information Specialists' Sub-Group (ISSG) continues (www.york.ac.uk/inst/crd/intertasc). Two of the Co-Convenors (JG and CL) are editors of the site and many members of the IRMG are contributors. It currently records known search filters, filter design projects in progress, and research on the development and use of search filters. There are also critical appraisals for some of the filters, which have been carried out using an appraisal checklist developed by the ISSG.5 We should like to encourage searchers to test the filters in practice where possible, so that the results can be recorded on the website.
Several members of the IRMG have been involved in a project, led by one of the Co-Convenors (AW), to build up the Specialized Register for the recently registered Cochrane Public Health Review Group (PHRG). In line with the underlying principles of public health, PHRG reviews have a significant focus on equity and the Specialized Register is being developed so that equity-related studies can easily be identified. A particular effort is being made to identify studies for reviews that reflect the needs of low- and middle-income countries. The work is supported by the Welsh Assembly Government, Cardiff University and the EPPI Centre, and the Public Health Review Group's editorial base at the University of Melbourne.
Funding was awarded from the Cochrane Opportunities Fund to the IRMG and other Cochrane Groups for a project, led by one of the Co-Convenors (AW), to develop global resources for literature searches. This included a list of databases of value for locating evaluation studies previously identified as hard to access, particularly from low- and middle-income countries (the LMIC Database). The list is currently available via the EPOC Group, IRMG and Public Health Group websites.
The UK Medical Research Council (MRC) has awarded funding for a two-year project on search filter performance. The project is being led by two of the Co-Convenors (CL and JG) and is due for completion in May 2012.
Members of the IRMG have continued to be active in developing areas within The Cochrane Collaboration, including adverse events, diagnostic test accuracy, economic evaluation and the project to develop a new Cochrane Register of Studies (CRS). Two of the Co-Convenors (CL and JMcG) assisted in updating the Cochrane Glossary. Members of the Campbell IRMG continue to be members of the Cochrane IRMG. The Searching for Studies chapter of the Cochrane Handbook for Systematic Reviews of Interventions was used extensively as the basis for the revised version of the Campbell Information Retrieval Policy Brief. The IRMG discussion list is used to notify members of activities, such as the annual IRMG meeting at Cochrane Colloquia, and to circulate the minutes. It has been used to find possible collaborators in projects associated with information retrieval, including those listed above. To join the list, please contact Bernadette Coles ([email protected]).
Co-Convenors and members of the IRMG conducted a number of workshops at recent Colloquia and further workshops are planned for the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in 2010, including two specific IRMG workshops: one for review authors on study identification and one on clinical trials registers. An open meeting of the IRMG was held during the Cochrane Colloquium in Singapore in 2009 and a further meeting is planned for the Joint Colloquium in Keystone in October 2010. Infrastructure support for time and funding for Colloquium attendance of the Co-Convenors is provided by Cardiff University, the UK Cochrane Centre and the University of Ottawa. Support for the time of the Co-ordinator and administrative assistance, with funding for Colloquium attendance, is provided by Cardiff University and Cancer Research Wales.

References

1. Lefebvre C, Manheimer E, Glanville J, on behalf of the Cochrane Information Retrieval Methods Group. Chapter 6: Searching for studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.2 [updated September 2009]. The Cochrane Collaboration, 2009. Available from: http://www.cochrane-handbook.org.
2. Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. PRESS: Peer Review of Electronic Search Strategies (Technology report number 477). Ottawa: Canadian Agency for Drugs and Technologies in Health, 2008.
3. Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. Journal of Clinical Epidemiology 2009; 62: 944–52.
4. McGowan J, Sampson M, Lefebvre C. An Evidence-Based Checklist for the Peer Review of Electronic Search Strategies (PRESS EBC). Evidence Based Library and Information Practice 2010; 5: 149–54.
5. Glanville J, Bayliss S, Booth A, Dundar Y, Fernandes H, Fleeman ND, Foster L, Fraser C, Fry-Smith A, Golder S, Lefebvre C, Miller C, Paisley S, Payne L, Price A, Welch K, on behalf of the InterTASC Information Specialists' Sub-Group. So many filters, so little time: the development of a search filter appraisal checklist. Journal of the Medical Library Association 2008; 96: 356–61.

Cochrane Non-Randomised Studies Methods Group

Barney Reeves

In last year's Cochrane Methods Groups Newsletter, I described the inclusion of non-randomized studies (NRS) in reviews about the benefits of healthcare interventions as an 'underlying tension in the Collaboration'. Chapter 13 in the Cochrane Handbook for Systematic Reviews of Interventions appears to have heightened this tension and the risk that Cochrane Review Groups are choosing to go their own way. In order to address this directly, the Non-Randomised Studies Methods Group (NRSMG) hosted an invited workshop in Ottawa in June 2010. The meeting brought together methodologists, review authors and senior representatives of organizations that commission reviews, such as the National Institute for Health and Clinical Excellence (NICE) and the Agency for Healthcare Research and Quality (AHRQ), to discuss the most important issues facing authors who want to include NRS. The meeting is expected to provide the basis for revisions to Chapter 13 of the Cochrane Handbook and to prioritize methodological research questions that need to be answered in order to provide better guidance in areas that are currently evidence free.
The NRSMG has also been investigating how to bring the assessment of the risk of bias in non-randomized studies into line with the existing Risk of Bias (RoB) tool, which has been implemented across Cochrane reviews of effectiveness. We reasoned that this is important in order to provide a 'level playing field' for assessing the risk of bias. It should be possible, since the main bias domains are the same, although other domains may have to be added. The NRSMG workshop at the Cochrane Colloquium in Singapore in October 2009 piloted an initial attempt to adapt the RoB tool to non-randomized studies. Further discussions were held within the NRSMG, and between the NRSMG and the Bias Methods Group, resulting in the NRSMG being represented at the workshop in Cardiff to evaluate the RoB tool (see page 5). The workshop endorsed the principle that assessing risk of bias in non-randomized studies should follow the same principles. However, the Bias Methods Group's immediate priority has to be revising the RoB tool to take into account authors' feedback (primarily in reviews that include randomized trials only). One issue that has emerged is the need to distinguish varying degrees of risk of bias between different types of non-randomized studies: users of reviews will not be greatly helped by uniformly red RoB graphs in Cochrane reviews. Unfortunately, this is something which cannot be accommodated by the current RevMan structure (designed to accept only high, low or unclear response options). These deliberations led to further revisions to the NRSMG training workshop at the UK- and Ireland-based Cochrane Contributors' Meeting in Cardiff in March 2010, which will be repeated at the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010.
Selection bias is one of the key bias domains, currently covered by the sequence generation and allocation concealment items. Assessment of risk of bias in non-randomized studies clearly also needs a confounding item, and this has been another area of activity. A proposal for a discussion workshop has been submitted for the Joint Colloquium at Keystone. We expect that the workshop will inform the design of a training workshop on this subject in 2011.
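To illustrate the kind of graded assessment discussed above, here is a small hypothetical sketch of how per-domain risk-of-bias judgements might be recorded with more than the three response options RevMan currently allows. The domain names, grading scale and summary rule are illustrative only and do not represent an agreed NRSMG or BMG standard.

```python
# Hypothetical sketch: recording per-domain risk-of-bias judgements on a graded
# scale rather than only "low" / "high" / "unclear". Domains and grades are
# illustrative and do not represent an agreed Cochrane standard.
from collections import Counter

GRADES = ["low", "moderate", "serious", "critical", "unclear"]

study_assessments = {
    # study id -> {domain: judgement}; studies and judgements are invented
    "Smith 2007 (cohort)": {
        "confounding": "serious",
        "selection of participants": "moderate",
        "missing outcome data": "low",
    },
    "Jones 2009 (controlled before-after)": {
        "confounding": "critical",
        "selection of participants": "serious",
        "missing outcome data": "unclear",
    },
}

def overall_judgement(domains):
    """Take the worst non-'unclear' grade across domains (a simple summary rule)."""
    graded = [g for g in domains.values() if g != "unclear"]
    if not graded:
        return "unclear"
    return max(graded, key=GRADES.index)

for study, domains in study_assessments.items():
    print(study, "->", overall_judgement(domains), Counter(domains.values()))
```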

Cochrane Prognosis Methods Group

Doug Altman, Riekie de Vet, Jill Hayden, Richard Riley, Katrina Williams and Susan Wolfenden

Over the past year, the Convenors of the Cochrane Prognosis Methods Group have been working together to establish the processes required to produce high quality methods for undertaking systematic reviews and meta-analyses of prognosis studies, and to provide advice to review authors wishing to write prognosis systematic reviews or to incorporate prognosis information into their intervention or diagnostic reviews. This information will be provided through Cochrane newsletters, the Cochrane Prognosis website (www.prognosismethods.cochrane.org/en/index.html) and via e-mail to the Prognosis Review Network. If you are interested in joining this network and/or the Cochrane Prognosis Methods Group, please contact Katy Sterling-Levis ([email protected]).
Progress to date includes the establishment of a research framework for the Prognosis Methods Group. The research framework is currently being updated following feedback from members of the Prognosis Methods Group and the latest version will soon be available on the Prognosis Methods Group website. The aim of the research framework is to identify research priorities for the Group, and we encourage members of the Group to provide feedback on the framework, identify where their research activities may lie in the matrix that is provided, and send this feedback to the Prognosis Methods Group Co-ordinator, Greta Ridley, at [email protected].
The Prognosis Methods Group meeting held at the Cochrane Colloquium in Singapore in 2009 identified specific key priorities for the research agenda, including:

• Consideration of baseline risk stratification for intervention reviews and trials.
• To what extent can diagnostic test accuracy systematic review methods be applied to prognostic studies, including risk of bias, literature searching, etc.?
• What study designs should be searched for and included in prognosis systematic reviews? Is there a hierarchy of study design methods?
• Reporting guidelines for prognosis studies and reviews.
• Consensus regarding nomenclature in prognosis research.

A Convenors' task list is currently being developed to clarify and progress the research and administrative activities of the Group. Public meetings involving Convenors of the Prognosis Methods Group have been organized to raise the profile of prognosis research and the need for funding.

A database of prognosis studies and a methodological resource for the Prognosis Methods Group are currently being developed as a result of funding received by Dr Jill Hayden (Prognosis Methods Group Convenor) from the Nova Scotia Health Research Foundation in Canada (www.nshrf.ca), to form the Methodology Resource Group, a subgroup of the Prognosis Methods Group. This subgroup will help to co-ordinate and facilitate methodological research relevant to prognosis reviews. The subgroup aims to develop and maintain databases of relevant methodological studies, protocols, and systematic reviews of prognosis.

Discussions between the Screening and Diagnostic Tests Methods Group, the Diagnostic Test Accuracy Working Group and the Prognosis Methods Group are ongoing in regard to a combined Methods Group journal.

Cochrane Qualitative Research Methods Group

Janet Harris

The Cochrane Qualitative Research Methods Group (CQRMG) has completed draft guidance for integrating qualitative research into intervention reviews (www.joannabriggs.edu.au/cqrmg/tools.html). The guidance is being reviewed by the Cochrane Collaboration Steering Group. It will be piloted through collaboration with the Cochrane Consumers and Communication Review Group, using the protocol 'Peer support strategies for improving the health and well-being of individuals with chronic disease' as an exemplar.
Our latest workshop, at the UK- and Ireland-based Cochrane Contributors' Meeting in Cardiff in March 2010, showed that we have two distinct groups within The Cochrane Collaboration that have different training needs. There is a need for introductory sessions introducing Cochrane editors, statisticians, and those with primarily quantitative research backgrounds to the potential utility of qualitative research for some Cochrane intervention protocols. Qualitative and mixed methods researchers, meanwhile, want opportunities to develop skills in synthesis. We have pointed the latter group to the qualitative synthesis workshops that are organized by various Convenors of the CQRMG. The University of Sheffield co-ordinates Evidence Synthesis of Qualitative Research in Europe (ESQUIRE); the next ESQUIRE workshop is 7 to 9 September 2010 (esquiresheffield.pbworks.com). The Catholic University of Leuven organizes a three-day systematic review course, the third day of which consists of an introduction to different methodologies of qualitative evidence synthesis, a critical appraisal workshop and a workshop on meta-aggregation of qualitative data. Please e-mail [email protected] if you or one of your colleagues are interested in participating.
The CQRMG has started offering technical assistance to several Cochrane Review Groups, focusing on protocol development for reviews incorporating qualitative evidence. If you have queries regarding technical support, please e-mail one of us or visit www.joannabriggs.edu.au/cqrmg/convenors.html.

Cochrane Screening and Diagnostic Tests Methods Group

Petra Macaskill, Constantine Gatsonis, Roger Harbord and Mariska Leeflang

The work of the Cochrane Screening and Diagnostic Tests Methods Group continued to diversify during 2009 as the number of diagnostic reviews that are planned, or underway, has increased. Members of our Methods Group have contributed their time and expertise to deal with the growing need for peer reviewers to comment on protocols. It is very pleasing to see the increasing level of activity.
We have also contributed to a range of training activities run by the support units (the UK Support Unit (UKSU), based in Birmingham, and the Continental Europe Support Unit (CESU), based in Amsterdam) in the UK, Continental Europe, Australasia and Canada. The strong links between our Methods Group and the support units, with some members common to both, have helped to ensure the quality and success of these initiatives. The input of our Methods Group members was particularly important for the training workshop for methodologists held in Birmingham in July 2009. This workshop was attended by methodologists from around the world, many of whom are responsible for statistical aspects of Cochrane reviews. The mix of presentations and practical work was very successful, and the workshop was useful in building links between participants (including presenters). It also highlighted the successful completion of key aspects of the diagnostic initiative, such as the integration of diagnostic reviews into Review Manager software.
Methods Group members were again involved in presenting diagnostic training workshops at the Cochrane Colloquium held in Singapore in 2009. We plan to offer a comprehensive range of training workshops under the umbrella of our Methods Group at the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010. These workshops will be conducted in collaboration with the support units.
We would like to thank everyone who has contributed to the activities of our Methods Group over the past year. Much has been achieved over past years, but there is still much to do. We look forward to seeing many of you at the Joint Colloquium in 2010.

Cochrane Statistical Methods Group

Joseph Beyene, Doug Altman, Steff Lewis, Joanne McKenzie and Georgia Salanti

The Statistical Methods Group (SMG) has contributed to several activities of The Cochrane Collaboration over the past year, including training and research. One of the highlights of last year's activities was the successful award of funding from The Cochrane Collaboration's Opportunities Fund for the SMG to run a short course for statisticians. The two-day course, aimed at statisticians who provide support to Cochrane Review Groups (CRGs) or Centres, was held from 4 to 5 March 2010 in Cardiff, UK, and was attended by thirty-four participants. The course was taught by several experienced statisticians and methodologists with a mix of didactic and interactive sessions, and addressed a wide range of advanced issues in meta-analysis and the assessment of bias (see www.smg.cochrane.org for more details). The course ended with an open and informal discussion about general issues related to the statistical contribution to CRGs. Georgia Salanti presented the results of a survey of CRG statisticians, which stimulated discussion, and participants exchanged helpful ideas drawing from their own CRG experiences. The course evaluation forms showed that participants responded favourably to the course and felt that similar courses should be organized by the SMG more frequently in the future. The material of the course will be made available on the SMG website.
Many SMG members facilitated workshops on a range of topics at the Cochrane Colloquium in Singapore in 2009 and at other regional meetings and symposia.
We thank all those who have contributed to the SMG activities over the previous year and look forward to seeing many of you at the Joint Colloquium of the Cochrane and Campbell Collaborations in Keystone in October 2010.
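For readers less familiar with the kind of statistical synthesis the SMG supports, here is a minimal sketch of a standard DerSimonian-Laird random-effects meta-analysis on invented log odds ratios. It is offered purely as an illustration of the technique, not as course material or SMG guidance.

```python
# Minimal DerSimonian-Laird random-effects meta-analysis on invented data.
# Effect estimates are log odds ratios with their variances; none of these
# numbers come from a real review.
import numpy as np

y = np.array([-0.35, -0.10, -0.42, 0.05, -0.25])   # study log odds ratios
v = np.array([0.040, 0.025, 0.060, 0.030, 0.050])  # within-study variances

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = 1 / v
fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - fixed) ** 2)

# Method-of-moments estimate of the between-study variance tau^2.
k = len(y)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooling with the combined variances.
w_re = 1 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"tau^2 = {tau2:.3f}; pooled log OR = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```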


Campbell Collaboration Methods Groups (C2)

Terri Pigott, Peter Tugwell, Erin Ueffing and Ryan Williams

The Campbell Methods Group supports the production of Campbell Collaboration reviews by improving the methodology of research synthesis and disseminating guidelines for state of the art review methods.
The Campbell Methods Group has subgroups that play a key role in helping the editors to ensure the quality of Campbell's systematic reviews. The subgroups serve as forums for discussion on research models, and provide advice on specific topics of methodology and methods policy. They also provide training and support to review authors, editors, and those who wish to undertake a systematic review. The subgroups are Economics, Equity, Information Retrieval, Process and Implementation, Statistics Methods, and Training.
For the first time, the Campbell and Cochrane Collaborations are coming together for a Joint Colloquium in Keystone, Colorado on 18 to 22 October 2010. Working with leading methodologists from both the Campbell and Cochrane Collaborations, Ian Shemilt has organized a joint symposium on Cochrane-Campbell Methods for the Colloquium on Monday 18 October at 09:30. Please mark your calendars and save the date. We will be discussing key methods issues relevant to both Collaborations, such as fixed versus random effects models, inclusion of non-randomized studies, and publication bias.
We will also be holding an open meeting of the Campbell Methods Group at the Joint Colloquium on Tuesday 19 October at 07:30. You are most welcome to join us; your contributions would be greatly appreciated as we discuss Campbell methods, activities, and updates. Further methods meetings and training workshops on systematic review methods are also planned. This year's workshops range from introductory topics, such as 'Introduction to systematic reviews in the Campbell Collaboration', to the more advanced 'Meta-regression with dependent effect size estimates'. Please visit the Colloquium website for an updated schedule. We look forward to seeing you in Keystone.

For more information about the Campbell Methods Group, please contact Erin Ueffing ([email protected]) or visit our website at www.campbellcollaboration.org.


Future Meetings

Joint Colloquium of the Cochrane and Campbell Collaborations

Keystone, Colorado
18 to 22 October 2010

This year will see the first ever Joint Colloquium between the Cochrane and Campbell Collaborations. It aims to focus on raising evidence-based decision-making to new heights. Colorado is known for its towering mountain peaks and natural beauty, and this meeting aims to be a coming together of multi-disciplinary learning and sharing at its best.
As part of the Colloquium, an open Joint Symposium on Cochrane-Campbell Methods will be held on Monday 18 October between 9.30am and 1.45pm. This is being organised jointly by the Cochrane Methods Board and the Campbell Methods Coordinating Group. Leading Cochrane and Campbell methodologists will present key current methods issues and challenges faced in the preparation and maintenance of Cochrane and Campbell reviews for discussion. The symposium will also discuss how we can improve collaboration between Cochrane and Campbell methodologists, including the potential for more joint activities and outputs.
The Cochrane Methods Board will meet (by invitation only) on Monday 18 October between 7.30am and 9.30am, continuing from 4.30pm to 6.30pm.
More information is available at www.colloquium.info.

COMET Symposium

Bristol, UK
2011

The COMET initiative (Core Outcome Measures in Effectiveness Trials) was launched in January 2010 (see page 5) to facilitate the development and use of sets of core outcome measures in health care. A symposium is now being planned for early next year. This will be an opportunity to hear from people who have developed core outcome sets, as well as those who are using them in research and practice. There will be ample opportunity to discuss the processes for developing core outcomes, and the challenges of doing so. Further information, including registration details, will be available from the COMET website: www.liv.ac.uk/nwhtmr.


Acknowledgements
The editors of Cochrane Methods should like to thank all the contributors for making this issue possible. We should also like to thank Sarah Chapman for help in preparing structured abstracts and Anne Eisinga for her careful proof-reading. Thanks are also due to the UK National Institute for Health Research, which provided the core funding to the UK Cochrane Centre to produce this newsletter, and to The Cochrane Collaboration for providing funding towards printing costs.

Availability of Cochrane Methods
Additional copies of Cochrane Methods may be obtained free of charge from the UK Cochrane Centre, which is based at:

UK Cochrane Centre
National Institute for Health Research
Summertown Pavilion
Middle Way
Oxford OX2 7LG
UK

Cochrane Methods is also available electronically via The Cochrane Collaboration website at www.cochrane.org/newslett/index.htm and via The Cochrane Library website at www.thecochranelibrary.com

Comments and Feedback
If you want to make sure that you receive the next issue of Cochrane Methods, please contact us at the address below. Comments are welcome: let us know what you liked, what you did not like, and any suggestions you have for the next issue.

Thank you!

Maria Burgess
UK Cochrane Centre
National Institute for Health Research
Summertown Pavilion
Middle Way
Oxford OX2 7LG
UK
Tel: +44 1865 516300
Fax: +44 1865
[email protected]

The Cochrane Library
The Cochrane Library is available at www.thecochranelibrary.com. It contains six databases: the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE), the Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Methodology Register (CMR), as well as the Health Technology Assessment Database and the NHS Economic Evaluation Database. In addition, The Cochrane Library contains information about the Collaboration and Cochrane entities. Information about how to subscribe is available from:

Jennifer Coates
Cochrane Library Customer Services Advisor
John Wiley & Sons Ltd
1 Oldlands Way
Bognor Regis
West Sussex PO22 9SA
UK

Tel: +44 1243
[email protected]/view/0/HowtoOrder.html

The Cochrane Collaboration
A wide range of Cochrane Collaboration information is available from www.cochrane.org, including the abstracts from all the Cochrane reviews in the current issue of The Cochrane Library, details of Cochrane e-mail lists, opportunities to download Cochrane software (including Review Manager 5), contact details for all Cochrane entities, copies of previous editions of the Cochrane Methods Groups Newsletter and much more.

International Cochrane e-mail list: CCINFO
This moderated list offers an excellent means of keeping informed about the activities and policies of The Cochrane Collaboration. The list is used for announcements and discussion of matters relevant to the Collaboration as a whole. To subscribe, go to the following webpage: lists.cochrane.org/mailman/listinfo/ccinfo

Cochrane Centre Internet Sites
There are 14 Cochrane Centres around the world; to speak to someone about The Cochrane Collaboration, please contact your local Centre.

Australasian Cochrane Centre
www.cochrane.org.au

Brazilian Cochrane Centre
www.centrocochranedobrasil.org

Canadian Cochrane Centre
www.ccnc.cochrane.org

Chinese Cochrane Center
www.ebm.org.cn

Dutch Cochrane Centre
www.cochrane.nl

French Cochrane Centre
e-mail: [email protected]

German Cochrane Centre
www.cochrane.de

Iberoamerican Cochrane Centre
www.cochrane.es

Italian Cochrane Centre
www.cochrane.it

Nordic Cochrane Centre
www.cochrane.dk

South African Cochrane Centre
www.mrc.ac.za/cochrane

South Asian Cochrane Centre
www.cochrane-sacn.org

UK Cochrane Centre
www.cochrane.ac.uk

United States Cochrane Center
www.cochrane.us