Factors influencing the development of evidence-based practice: a research tool

Kate Gerrish1, Peter Ashworth2, Anne Lacey3, Jeff Bailey4, Jo Cooke5, Sally Kendall6 &

Elaine McNeilly7

Accepted for publication 5 September 2006

1Kate Gerrish BN MSc PhD RN

Professor of Nursing

Centre for Health and Social Care Research,

Sheffield Hallam University, Sheffield

Teaching Hospitals NHS Trust, Sheffield, UK

2Peter Ashworth PhD FBPsS

Professor of Educational Research

Faculty of Development and Society,

Sheffield Hallam University, Sheffield, UK

3Anne Lacey MSc RN

Senior Research Fellow/Director (Sheffield)

Trent RDSU, ICOSS, University of Sheffield,

Sheffield, UK

4Jeff Bailey BSc

Research Co-ordinator

Barnsley Hospital NHS Foundation Trust,

R&D Department, Barnsley, UK

5Jo Cooke MA RN HV

Primary and Social Care Lead

Trent RDSU, University of Sheffield,

Sheffield, UK

6Sally Kendall BSc PhD RN HV

Professor of Primary Health Care Nursing

Faculty of Health and Human Sciences,

University of Hertfordshire, Hatfield, UK

7Elaine McNeilly BSc

Research Assistant

Faculty of Health and Human Sciences,

University of Hertfordshire, Hatfield, UK

Correspondence to Kate Gerrish:

e-mail: k.gerrish@shu.ac.uk

GERRISH K., ASHWORTH P., LACEY A., BAILEY J., COOKE J., KENDALL S. & MCNEILLY E. (2007) Factors influencing the development of evidence-based practice: a research tool. Journal of Advanced Nursing 57(3), 328–338

doi: 10.1111/j.1365-2648.2006.04112.x

Abstract

Title. Factors influencing the development of evidence-based practice: a research tool

Aim. The paper reports a study to develop and test a tool for assessing a range of

factors influencing the development of evidence-based practice among clinical

nurses.

Background. Achieving evidence-based practice is a goal in nursing frequently cited

by the profession and in government health policy directives. Assessing factors

influencing the achievement of this goal, however, is complex. Consideration needs

to be given to a range of factors, including different types of evidence used to inform

practice, barriers to achieving evidence-based practice, and the skills required by

nurses to implement evidence-based care.

Methods. Measurement scales currently available to investigate the use of evidence

in nursing practice focus on nurses’ sources of knowledge and on barriers to the use

of research evidence. A new, wider ranging Developing Evidence-Based Practice

questionnaire was developed and tested for its measurement properties in two

studies. In study 1, a sample of 598 nurses working at two hospitals in one strategic

health authority in northern England was surveyed. In study 2, a slightly expanded

version of the questionnaire was employed in a survey of 689 community nurses in

12 primary care organizations in two strategic health authorities, one in northern

England and the other in southern England.

Findings. The measurement characteristics of the new questionnaire were shown to

be acceptable. Ten significant, and readily interpretable, factors were seen to

underlie nurses’ relation to evidence-based practice.

Conclusion. Strategies to promote evidence-based practice need to take account of

the differing needs of nurses and focus on a range of sources of evidence. The

Developing Evidence-Based Practice questionnaire can assist in assessing the specific

‘evidencing’ tendencies of any given group of nurses.

Keywords: evidence-based practice, instrument development, nursing, research

implementation, survey design

JAN: RESEARCH METHODOLOGY

328 © 2007 The Authors. Journal compilation © 2007 Blackwell Publishing Ltd

Introduction

Over the past 15 years, evidence-based practice has emerged

as a major policy theme in Western healthcare systems. The

increased emphasis internationally on clinical and cost-

effectiveness in health policy has highlighted the need for

quality health services to be built upon the use of best

evidence (McKenna et al. 2004). Various governments have

introduced initiatives to support the development of evidence-based healthcare systems in which decisions made by

healthcare practitioners, managers, policy makers and

patients are based on high quality evidence. Activity has

focused on developing evidence-based guidelines for clinical

interventions. For example, in the United States of America

(USA) the Agency for Healthcare Research and Quality

(http://www.ahrq.gov) leads national efforts in the use of

evidence to guide healthcare decisions. The establishment of

the National Institute for Health and Clinical Excellence

(http://www.nice.org.uk) in England, the Scottish Intercollegiate Guidelines Network (http://www.sign.ac.uk), and the National Institute for Clinical Studies (http://www.nicsl.com.au) in Australia have similar responsibilities for developing

evidence-based guidelines and providing information on the

clinical and cost-effectiveness of interventions.

Developing evidence-based guidelines is just one step in a

complex process of ensuring that nurses actually base their

practice on evidence. Achieving evidence-based practice

requires skill on the part of nurses to appraise research

evidence in order to decide whether it is appropriate to use.

The evidence then needs to be translated into a form that can

be implemented in practice and, following implementation,

the change needs to be evaluated (Gerrish 2006). Whereas the

publication of systematic reviews of research and national

clinical guidelines makes some aspects of the process easier,

implementing change can still be challenging (Collett & Elliot

2000). In recognizing the importance of evidence-based

practice to contemporary health care this paper reports on

the development and testing of a questionnaire designed to

identify factors which influence the development of evidence-

based practice in nursing.

Background

Despite widespread calls for nursing to be evidence-based,

there is a lack of clarity regarding the concept of evidence-

based practice. Sackett et al.’s (1996) definition of evidence-

based medicine is one of the most widely cited:

Evidence-based medicine is the conscientious, explicit, and judicious

use of current best evidence in making decisions about the care of

individual patients. The practice of evidence-based medicine means

integrating individual clinical expertise with the best available

external evidence from systematic research. (p. 71)

More recently, Sackett et al. (2000) acknowledged the need

also to take account of patient values.

Whereas Sackett’s definition of evidence-based medicine

has been applied to evidence-based practice in nursing (for

example, Ingersoll 2000, DiCenso et al. 2004), there is some

concern that the definition is too restrictive. Debates focus on

the perceived over-emphasis on research evidence, especially

that derived from randomized controlled trials, to the neglect

of other sources of evidence, the devaluing of patient

experiences and values, and the largely atheoretical medically

dominated model of evidence which is contrary to nursing’s

disciplinary focus on theory-guided practice (DiCenso et al.

2004).

There is general consensus that a broader definition of

evidence should be considered which takes account of other

ways of knowing that inform nursing practice (Lomas et al.

2005). For example, although Rycroft-Malone et al. (2004)

acknowledge the relationship of research, clinical experience

and patient experience as the core of evidence-based practice,

they argue that the evidence-base for nursing should also

include information derived from the local context. Clinical

experience as a source of evidence is elaborated by Gerrish

(2003) who, in drawing upon the work of Liaschenko and

Fisher (1999), differentiates between scientific, empirically

based knowledge, patient knowledge developed through an

understanding of how patients are located within the healthcare system and knowledge derived from the personal

biography of individual patients.

Nolan (2005) draws attention to the international growth

of policies promoting user participation which are underpinned by a belief that users should be active shapers of

knowledge and subsequent action. He argues that evidence-

based practice should encompass this tacit expertise of

patients in addition to that of professionals and research –

this moves beyond taking account of patient preferences to

valuing the knowledge that patients bring to the nurse–

patient interaction.

Fawcett et al. (2001) argue for a more theory-guided

approach to evidence-based practice in which multiple

patterns of knowing in nursing are acknowledged. Drawing

upon Carper’s typology of ways of knowing (empirical,

ethical, personal and aesthetic) they caution against the

virtually exclusive emphasis on empirical theories in evidence-based practice and argue for a more holistic approach

in which different ways of knowing provide different lenses

for critiquing and interpreting different kinds of evidence.


Research examining the implementation of evidence-

based practice in nursing has focused primarily on research

evidence, in particular on the barriers nurses encounter in

using research. These studies have consistently identified

that the major obstacles that nurses experience in seeking

to implement research findings relate to insufficient time to

access and review research reports, a shortfall in critical

appraisal skills, together with a lack of authority and support

to implement findings (Funk et al. 1991a, Bryar et al.

2003, McKenna et al. 2004). Researchers have also come

under criticism for not presenting their research to clinical

audiences in a way that is easy to understand and in which

the implications for practice are made clear (Nolan et al.

1998).

Much of the responsibility for evidence-based practice has

been placed on individual practising nurses. However,

although it is recognized that all nurses have a professional

responsibility to base their care on the best available

evidence, implementing evidence-based practice in healthcare

settings is a complex undertaking (Royle & Blythe 1998). It

has been argued that healthcare organizations should support

the development of a culture of evidence-based practice and

provide resources for its implementation (DiCenso & Cullum

1998, Gerrish & Clayton 2004). Consideration of this

broader context has highlighted the importance of the

leadership styles of senior clinical nurses in promoting a

ward/team culture that is patient-centred, values members

and promotes a learning environment to support evidence-

based practice (McCormack et al. 2002). Some models for

promoting evidence-based practice also emphasize the need

for facilitation by external and internal change agents to

support the process of change and identify the importance of

the personal characteristics of the facilitator, the style of

facilitation and the role of the facilitator in terms of authority

(Harvey et al. 2002).

Existing questionnaires used to examine evidence-based

practice have focused on research utilization, in particular

nurses’ ability to access and appraise research reports and

implement research findings in practice. The Barriers to

Research Utilization Questionnaire developed in the USA

by Funk et al. (1991a) has been used extensively over the

past 15 years in a number of countries including Australia

(Retsas & Nolan 1999, Hutchinson & Johnson 2004),

Finland (Oranta et al. 2002, Kuuppelomaki & Tuomi

2005), Ireland (Glacken & Chaney 2004), Sweden

(Kajermo et al. 1998), and the United Kingdom (UK)

(Dunn et al. 1998, Nolan et al. 1998, Closs & Bryar

2001). It has also been used to examine research utilization

in specific groups of nurses, for example, community nurses

(Bryar et al. 2003), specialist breast care nurses (Kirshbaum

et al. 2004) and forensic mental health nurses (Carrion

et al. 2004). The questionnaire identifies 29 items considered to be barriers to research utilization. Respondents are

asked to rate on a 5 point Likert scale the extent to which

they perceive each item to be a barrier. Factor analysis

grouped the items around four factors, the nurse’s research

values, skills and awareness, the quality of the research, the

way in which research is communicated and the charac-

teristics of the organization (Funk et al. 1991b). Inter-

national comparisons of published findings indicate that

nurses experience broadly similar barriers to using research

in terms of the ranking of individual items.

Some studies have sought to replicate the factor analysis.

Whereas the original four factors identified by Funk et al.

were confirmed by Hutchinson and Johnson (2004), other

studies have identified different groupings of items: Retsas

and Nolan (1999) – three factors, Marsh et al. (2001) – three

factors, Kirshbaum et al. (2004) – three factors, Kuuppelomaki and Tuomi (2005) – six factors. Closs and Bryar

(2001) and Marsh et al. (2001) undertook extensive testing of

the instrument and independently raised questions about the

content and construct validity of the scale for use in the UK.

Several other questionnaires have been developed to

examine research utilization; however, they have not been

used as extensively as the Barriers questionnaire in order to

test the validity and reliability of the instruments in other

settings (for example Lacey 1994, Rodgers 1994, Hicks

1996, Estabrooks 1998, McKenna et al. 2004). Moreover,

within the context of evidence-based practice they focus on

the use of research findings rather than a broader definition

of evidence identified as important in the literature and

referred to above. Although Estabrooks (1998) considered

a broad range of sources of information that nurses draw

upon, including professional and patient expertise, this was

in order to examine the extent to which sources of research

evidence were used rather than to acknowledge the

contribution of diverse sources of evidence. Nevertheless,

parts of Estabrooks’ instrument have the potential to be

used to examine a broader definition of evidence-based

practice.

From a review of the literature and existing instruments,

there appeared to be a need for a questionnaire which would

examine factors influencing evidence-based practice where in

addition to research evidence, other forms of evidence were

considered. The definition of evidence-based practice which

informed the development of the questionnaire in this study

was adapted from Sackett’s definition referred to above

which emphasizes the interplay of research evidence, clinical

expertise and patient preferences. However, the definition of

evidence was extended to include research products such as


national guidelines and local information such as protocols

and audit reports.

This paper reports the design and testing of the psychometric properties of a wide-ranging questionnaire designed to

measure several aspects of evidence-based practice. Two

surveys were undertaken in order to test the instrument, one

involving hospital nurses and the second nurses working in

the community.

The studies

Aims

The aims of the two studies were:

• to develop and validate the Developing Evidence-Based

Practice (DEBP) questionnaire as a comprehensive measure

of evidence-based practice in England;

• to determine the important factors influencing the devel-

opment of evidence-based practice, using a composite

measuring tool, the DEBP questionnaire.

The results of the first aim are presented in this paper.

Results based on the application of the questionnaire will be

published elsewhere.

Study 1: Hospital nurses

Study 1 provided an opportunity to survey two contrasting

hospital sites. The nurse respondents were located in two

acute hospitals in northern England [a university teaching

hospital and a district general hospital (DGH)], within the

same strategic health authority. It built upon earlier research

(Nolan et al. 1998) undertaken in the teaching hospital which

had developed an anglicized version of the ‘Barriers to

Research Utilization’ scale. However, the current study took

a much broader view of evidence-based practice and included

the use of different sources of evidence and a self-appraisal of

skills in finding and using evidence. Data were collected

during 2002–2003.

Participants

The sample was drawn from the records of qualified nursing

staff at each hospital. All nurses were included in the sample

except those from two directorates at the teaching hospital

that were participating in another research study related to

evidence-based practice. This resulted in a sample of 728 at

the teaching hospital, and 683 at the DGH. Of these, 330

were returned at the teaching hospital, and 274 at the DGH,

a response rate of 45% and 40%, respectively. The useable

achieved sample was 598, after the exclusion of questionnaires without information about the respondents’ grades.

Study 2: Community health nurses

In the second study, a slightly expanded version of the

questionnaire was used, with an additional eight items which

in each case simply increased the lists of sources of evidence,

barriers and facilitators to employing evidence in practice,

and personal skills (see Table 2 for all items including the

additions, which are italicized). These minor modifications

arose from testing the content validity of the instrument

originally used for hospital nurses for use with community

health nurses. The respondents were community health

nurses in 12 primary care trusts (PCTs) in two strategic

health authorities, one in northern England and the other in

southern England. Data were collected during 2005.

Participants

A random sample of 1600 community health nurses was

drawn from the records of qualified staff in the 12 PCTs.

Equal numbers of health visitors, district nurses, community

nurses, practice nurses and school nurses were sampled. The

overall response rate was 47% with responses for each of the

five community health nursing groups as follows: health

visitors 57%, district nurses 55%, community nurses 40%,

practice nurses 37%, school nurses 43%. The usable sample

was 689, after the exclusion of questionnaires without

information about the respondents’ post.

Construction of the questionnaire

The DEBP questionnaire has five main parts, each one derived

from somewhat different sources. Twenty-two items, 16 of

which are anglicized versions of items of the Estabrooks scale

(Estabrooks 1998) about sources of knowledge, constitute the

first section of the questionnaire. Each item was scored on a

5-point scale from never (score 1) to always (score 5).

Permission to use these items was obtained from the author.

The second, third and fourth sections of the DEBP

questionnaire examine barriers to achieving evidence-based

practice. Feedback from the earlier study in the teaching

hospital that had utilized an anglicized version of the North

American ‘Barriers’ questionnaire identified a number of

problems with this instrument that necessitated the development of a quite different set of items, albeit ones that still

examined barriers. The new items took account of a broader

understanding of evidence by including questions on organizational information (defined as care pathways, clinical

protocols and guidelines) in addition to questions on research

evidence. Emphasis was also placed on changing practice

based on evidence rather than just the implementation of

research findings. Additionally, in contrast to the original


Barriers scale which asked respondents to comment on ‘the

nurse’ in a generic sense, the new items used a personal ‘I’ or

‘my’ to ensure respondents were reporting their own experi-

ence rather than that of nurses in general. This new ‘Barriers’

scale consisted of 19 items with 5-point response scales.

These are divided into two groups of barriers and one group

concerned with colleague relations which facilitate evidence-

based practice (scored in the opposite direction to the other

two groups, but intended to reduce the apparent negativity of

the ‘barriers’ items of the questionnaire). The scoring used the

5-point scale technique of section 1, with a score of 1 for

‘agree strongly’.

Finally, a fifth section was devised consisting of eight items

asking nurses to rate themselves on skills of finding and

reviewing evidence, and using evidence to effect change.

Ratings on a 5-point scale ranged from ‘complete beginner’

(score 1) to ‘expert’ (score 5).

So the DEBP tool consisted of

• Section 1. Bases of practice knowledge (22 items).

• Section 2. Barriers to finding and reviewing evidence (10

items).

• Section 3. Barriers to changing practice on the basis of

evidence (five items).

• Section 4. Facilitation and support in changing practice

(four items).

• Section 5. Skills in finding and reviewing evidence (eight

items).

The core of the DEBP questionnaire hence consists of 49

items, designed as a paper-based tool for self-completion. It

was initially piloted with 20 nurses who worked in hospital

settings and minor modifications were subsequently made to

two items to improve clarity. Prior to study 2 the content

validity of the questionnaire was considered by a panel of four

experts in community health nursing and minor modifications

made to the questionnaire. This included four additional

sources of knowledge, two additional barriers and two further

skills considered to be relevant to the practice of community

health nursing. The revised questionnaire was piloted with

five community health nurses but no changes were required.

Data collection

Questionnaires were addressed individually and distributed

via the external post or internal mail system at each site,

depending on the preference of the organization. An

addressed envelope was enclosed for return of the questionnaire. In study 1, reminders were posted around the hospital

site to maximize response, but no individual reminders were

sent to maintain anonymity of responders. Ward managers

were asked to encourage completion of the questionnaires,

but it was stressed that this was entirely voluntary. In study 2

targeted reminders were sent to non-respondents to maximize

the response rate.

Ethical considerations

The study was approved by the relevant research ethics and

governance committees at each site. A participant information sheet giving details of the study accompanied the

questionnaire. Consent to participate was assumed on the

basis of a returned, completed questionnaire.

Data analysis

The data for all items employed in both studies 1 and 2 were

analysed using SPSS version 13. Initial analysis suggested no

alteration in the structure of results over the timespan in

which data were collected; this justified bringing together the

findings of the two studies in validating the instrument.

Results

Measurement characteristics of the ‘Developing Evidence-

Based Practice’ Questionnaire

Responses were treated as five-point scale items, with ‘high’

and ‘low’ being assigned, as indicated above. The questionnaire has five major sections. The measurement features of

the sections are given in Table 1. To be noted in particular is

the column of values of reliability. Reliability in this context

means internal consistency. It refers to the extent to which the

scores on the items correlate with each other and, if they do,

this means that we can regard the items within a scale as all

being about the same thing. It then becomes justifiable to

treat the items as constituting a Likert scale, since it is

meaningful to sum up the scores of each item to give a scale

score for the individual. The questionnaire was tested as to

coherence of scales and subscales using intercorrelation of

items and Cronbach’s Alpha as indicators of reliability. The

most widely used index of internal consistency, Cronbach's Alpha, is equivalent to the average of all possible split-half correlation coefficients. The values of α for each scale and subscale can be seen in Table 1a. All α values are acceptable as estimates of reliability (the conventional value of α regarded as indicating a level which avoids false positive reliability estimates is 0.7). The five sections of the questionnaire can be assumed to be reliable. However, the pattern of

intercorrelations in Table 1b is such that it would be

inappropriate to employ all 49 items as a ‘scale’ (see also

factor analytic evidence below).
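The internal-consistency statistic described above can be sketched in a few lines. The following is an illustrative reimplementation of Cronbach's alpha from a respondents-by-items score matrix, not the analysis code used in the study:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

A value of 0.7 or above, as reported in Table 1, is conventionally taken to justify summing the items into a single Likert scale score.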


It is to be noted that the five-point scoring technique, which

is employed for all items of the DEBP questionnaire, is a

deviation from the agree/disagree scoring originally suggested

by Funk et al. (1991a). As a check on the effect of this

modification, in study 1, the relevant items were recoded with

1 for ‘agree’ and ‘agree strongly’ and 0 for other responses.

The results were equivalent, though the five-point scoring

technique is more sensitive (as one might expect).
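The recoding check can be reproduced with a small helper. This is a hypothetical sketch: it assumes the 5-point codes run from 1 ('agree strongly') to 5, as in the scoring described above, and the `agree_codes` parameter is an assumption for illustration, not part of the published instrument:

```python
import numpy as np

def dichotomize(responses, agree_codes=(1, 2)):
    """Recode 5-point responses as 1 for 'agree strongly'/'agree', 0 otherwise.

    Assumes code 1 = 'agree strongly' (the scoring used for the barriers
    sections), so codes 1 and 2 count as agreement.
    """
    return np.isin(np.asarray(responses), agree_codes).astype(int)
```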

Comparison of scores on section 1 of the DEBP

questionnaire with Estabrooks (1998)

As part of the effort to validate the DEBP questionnaire, the

results in study 1 for items of section 1 matching Estabrooks’

items were compared with those she reports in her original

paper (1998). The correlation between the rank-orders of the

means for the items in the two studies yields a value of

Spearman’s ρ = 0.897, which is significant at the P < 0.01 level (one-tailed). Tests of the difference between means in the two studies showed none to be significant at the 0.01 level using t

(two-tailed for degrees of freedom of around 590). These

findings indicate that the anglicized version of the Estabrooks

questionnaire functioned in a manner akin to the original

instrument, and that the responses of the nurses in this study

were similar to those of the original Canadian sample. The

comparability of the first section of the questionnaire with

Estabrooks’ results gives evidence of construct validity.
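The rank-order comparison of item means can be illustrated with a short helper. This is a sketch (with no correction for tied ranks), not the analysis code used in the study:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the ranks of x and y.

    Simple sketch with no correction for tied values.
    """
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.corrcoef(ranks(x), ranks(y))[0, 1]
```

Applied to the two sets of item means, a rho of 0.897 indicates close agreement in rank order between the anglicized items and Estabrooks' originals.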

Factor analysis

The mean and standard deviation for each item were

calculated, and the Pearson correlation of each item with

each other item was calculated. On this basis, an exploratory

factor analysis (Lawley & Maxwell 1971, Pett 2003) was

carried out. A factor analysis economises on the number of

variables used to account for a matrix. So, in the present case,

the very large number of intercorrelations between the 49

items of the questionnaire can be summarized by calculating

the ‘position’ of a smaller number of imaginary parameters to

which each of the actual items can be, as it were, correlated.

These imaginary parameters are ‘factors’. The correlation of

an item with a factor is the ‘loading’ of the item on the factor.

The principal components algorithm was used to specify

factors.

There are a large number of equivalent mathematical

solutions to the question of where the imaginary parameters can go. The decision about the preferred solution (i.e. the specification of the rotation) was made using a conventional set of criteria. The solution was calculated in which

factors (a) are not correlated with each other (orthogonal),

and (b) have loadings which are as high as possible or

near-zero.

The varimax (Kaiser 1958) rotated factor matrix is given in

Table 2. The factor analysis was carried out using only the

questionnaire items common to both studies. Factor analysis

was based on the 10 principal components with initial

eigenvalues greater than one. The relative strength of each

factor within the matrix as a whole is indicated in the final

row of Table 2.
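The extraction-and-rotation procedure described here (principal components retained by the eigenvalue-greater-than-one rule, then varimax rotation) can be sketched as follows. This is an illustrative reimplementation of the standard algorithms, not the statistical-package output used in the paper:

```python
import numpy as np

def extract_components(X):
    """Principal-component loadings for components with eigenvalue > 1.

    X: (respondents x items) data matrix. Loadings are eigenvectors of the
    item correlation matrix scaled by the square roots of their eigenvalues.
    """
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    keep = eigvals > 1.0                       # Kaiser criterion
    order = np.argsort(eigvals[keep])[::-1]    # strongest component first
    vals = eigvals[keep][order]
    vecs = eigvecs[:, keep][:, order]
    return vecs * np.sqrt(vals)                # loadings matrix

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation (Kaiser 1958): an orthogonal rotation that drives
    loadings towards values that are either high or near zero."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        LR = L @ R
        grad = LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(L.T @ grad)
        R = u @ vt                             # nearest orthogonal rotation
        new_crit = s.sum()
        if new_crit <= crit * (1 + tol):
            break
        crit = new_crit
    return L @ R
```

Because the rotation is orthogonal, each item's communality (its sum of squared loadings) is unchanged; only the distribution of loadings across factors is simplified for interpretation.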

The version of the questionnaire employed in study 2 had

eight items in addition to those in study 1. The factor analysis

was carried out only on the items common to the two studies,

and Table 2 reports the factor loadings of these items in

regular font. The eight additional items also appear in Table 2

but are printed in italics. The data provided in the table for

each of the additional items are not factor loadings but are

correlations between the item scores and factor scores

(Mulaik 1987). Correlations greater than +0.3 or less than −0.3 are reported in the table. The new items fall into the

expected groupings.

The interpretation of the factors was undertaken by

inspection of the items which had the highest loadings on

each factor. The factors relate neatly to the five sections.

There is overlap between sections 2 and 3 – which both

have to do with barriers to evidence-based practice. The

particular kind of evidence matters, and section 1 also generates factors specific to the kind of evidence under consideration.

Table 1 Measurement characteristics of the Developing Evidence-Based Practice Questionnaire

(a) Descriptive statistics and reliability of sections of the questionnaire and overall questionnaire

Section    n     Number of items*   Mean      SD      Reliability (Cronbach α)
1          1282  18                  59.900    6.718  0.788
2          1286   9                  28.664    6.084  0.843
3          1286   5                  16.596    3.990  0.805
4          1286   3                  10.291    2.014  0.730
5          1287   6                  17.198    4.448  0.913
Overall*   1279  41                 132.672   15.001  0.874

(b) Intercorrelations between sections of the questionnaire (Pearson)

Section    2      3      4      5
1          0.145  0.087  0.184  0.229
2                 0.582  0.160  0.373
3                        0.248  0.211
4                               0.197

*Using only items employed in both studies.
All correlations are significant at the 0.01 level, 2-tailed. n for each section is between 1281 and 1286.


Table 2 Rotated factor matrix (loadings shown in parentheses after each item; the original presents these in columns for factors 1–10)

Section 1. Bases of practice knowledge
1. Information I learn about each patient/client as an individual (0.687)
2. My intuitions about what seems to be ‘right’ for the patient/client (0.560, 0.372)
3. My personal experience of caring for patients/clients over time (0.525, 0.464)
4. What has worked for me for years (0.851)
5. The ways I have always done it (0.822)
6. Information my fellow practitioners share (0.556)
7. Information senior clinical nurses share, e.g. clinical nurse specialists, nurse practitioners (0.458)
8. What doctors discuss with me (0.797)
9. New treatments and medications that I learn about when doctors prescribe them for patients (0.761)
10. Medication and treatments I gain from pharmaceutical or equipment company representatives (0.401)
11. Information I get from product literature (0.329, 0.343)
12. Information I learned in my training (0.622)
13. Information I get from attending in-service training/conferences (0.719)
14. Information I get from local policy and protocols (0.767)
15. Information I get from national policy initiatives/guidelines (0.435, 0.531)
16. Information I get from local audit reports (0.601)
17. Articles published in medical journals (0.763)
18. Articles published in nursing journals (0.758)
19. Articles published in research journals (0.733)
20. Information in textbooks (0.664)
21. Information I get from the internet (0.606)
22. Information I get from the media (0.589)

Section 2. Barriers to finding and reviewing evidence
23. I do not know how to find appropriate research reports (0.592, 0.551)
24. I do not know how to find organisational information (guidelines, protocols, etc.) (0.722, 0.373)
25. I do not have sufficient time to find research reports (0.806)
26. I do not have sufficient time to find organisational information (guidelines/protocols, etc.) (0.754)
27. Research reports are not easy to find (0.464, 0.511)
28. Organizational information (protocols, guidelines, etc.) is not easy to find (0.448, 0.316, 0.363)
29. I find it difficult to understand research reports (0.810)
30. I do not feel confident in judging the quality of research reports (0.806)
31. I find it difficult to identify the implications of research findings for my own practice (0.426, 0.683)
32. I find it difficult to identify the implications of organizational information for my own practice (0.450, 0.443)

Section 3. Barriers to changing practice on the basis of evidence
33. I do not feel confident about beginning to change my practice (0.818)
34. The culture of my team is not receptive to changing practice (0.847)
35. I lack the authority in the workplace to change practice (0.763)
36. There are insufficient resources (e.g. equipment) to change practice (0.433, 0.613)
37. There is insufficient time at work to implement changes in practice (0.327, 0.693)

Section 4. Facilitation and support in changing practice
38. Nursing colleagues are supportive of my changing practice (0.819)
39. Nurse managers are supportive of my changing practice (0.837)


The interpretability of the factors, in the light of the intended meaning of the sections of the DEBP questionnaire, constitutes construct validation. The interpretations are as follows:

Factor 1. Skill in finding, reviewing and using different sources of evidence. This factor provides construct validation for section 5.

Factor 2. Barriers to, or facilitators of, personal efficacy in the context of the organization, including team culture and personal authority. This factor includes part of each of the 'barriers' sections.

Factor 3. Published information as a source of knowledge used in practice. A subset of section 1.

Factor 4. Focal concern or interest in the effective use of research. Part of section 2.

Factor 5. The availability of formal information (research and organizational information), and disposable time to implement the recommendations. This factor includes part of each of the 'barriers' sections.

Factor 6. Knowledge gleaned from training, conferences, and local and national reports and audits. A subset of section 1.

Factor 7. Personal experience. A subset of section 1.

Factor 8. Informal information gleaned in the course of daily work, including interprofessional conversations. A subset of section 1.

Factor 9. The facilitating or hindering effect of colleagues in changing practice. This factor gives construct validity to section 4.

Factor 10. Client/patient contact and the nurse's personal knowledge and experience. A subset of section 1.
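The loadings behind these interpretations come from exploratory factor analysis with varimax rotation (Kaiser 1958). As a minimal sketch, assuming synthetic stand-ins for the 49 Likert items rather than the study's survey data, scikit-learn's `FactorAnalysis` extracts a comparable rotated loading matrix; small loadings are conventionally suppressed when reporting, as in Table 2.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey: 300 respondents x 49 Likert items
responses = rng.integers(1, 6, size=(300, 49)).astype(float)

# Ten factors with varimax rotation, mirroring the published analysis
fa = FactorAnalysis(n_components=10, rotation="varimax", random_state=0)
fa.fit(responses)

loadings = fa.components_.T  # 49 items x 10 varimax-rotated factors
# Suppress loadings below ~0.3 for reporting, as in the published matrix
reported = np.where(np.abs(loadings) >= 0.3, np.round(loadings, 3), 0.0)
```

Each factor is then interpreted from the items that load strongly on it, which is how the ten labels above were derived.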

In Table 3, we present the results of a further analysis in which each of the factors is treated as a scale. The reliability of each factor, regarded as a Likert scale, is given as Cronbach's α. Values of α > 0.7 are generally regarded as indicating the reliability of a scale. However, α is sensitive to the number of items in a scale. Reliability values for factors 8 and 10 are likely to be low due to the small number of items contributing to each factor. Three additional items are associated with factor 8 in study 2.
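Cronbach's α for a factor treated as a scale can be computed directly from the item responses. The sketch below is a minimal illustration using simulated Likert items driven by a shared trait (the figures in Table 3 come from the study's own data, not from this simulation).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
# Six items sharing a common underlying trait, so they intercorrelate;
# responses are rounded and clipped to a 1-5 Likert range.
trait = rng.normal(size=(500, 1))
items = np.clip(np.round(3 + trait + 0.8 * rng.normal(size=(500, 6))), 1, 5)

alpha = cronbach_alpha(items)  # > 0.7 conventionally indicates reliability
```

Because α rises with the number of items (other things being equal), short scales such as factors 8 and 10 tend to yield lower values even when the items cohere.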

Since it does seem that several of the factors cut across or subdivide the sections of the DEBP questionnaire, we considered whether the 10 factors should function as sections, replacing the existing five. In the event, this would be premature: we are insufficiently confident of the reliability of factors 8 and 10 pending further data.

Table 2 (Continued)

Questionnaire item (loadings on rotated factors 1–10)

40. Doctors with whom I work are supportive of my changing practice   0.697
41. Practice managers are supportive of my changing practice   0.564

Section 5. Self-assessment of skills
42. Finding research evidence   0.759
43. Finding organizational information   0.815
44. Using the library to locate information   0.622  0.393
45. Using the internet to search for information   0.594  0.337
46. Reviewing research evidence   0.818
47. Reviewing organizational information   0.873
48. Using research evidence to change practice   0.792
49. Using organizational information to change practice   0.787

Percentage variance of matrix due to each factor   10.622  10.622  10.622  10.622  10.622  10.622  10.622  10.622  10.622  10.622

Additional items included in the questionnaire for study 2 are in italics.

Table 3 Characteristics of the factors, treated as scales

Factor   n      Number of items defining the factor   Mean     SD      Cronbach's α
1        1287   6                                     17.20    4.148   0.913
2        1285   9                                     30.34    6.665   0.871
3        1286   7                                     20.95    4.065   0.820
4        1286   7                                     22.86    5.216   0.859
5        1285   6                                     18.26    4.205   0.798
6        1286   4                                     14.61    2.485   0.731
7        1284   4                                     12.67    2.348   0.716
8        1287   3                                     10.57    1.767   0.689*
9        1286   3                                     10.29    2.014   0.730
10       1286   3                                     11.71    1.613   0.539*

*Reliability values as low as this may be due to the small number of items contributing to the factor. Three additional items are associated with factor 8 in study 2.


Discussion

Study limitations

One of the main disadvantages of using a self-completed postal questionnaire is the potential for a low response rate (Robson 2002). Previous surveys examining barriers to research utilization have experienced relatively poor response rates. In study 1, the response rates of 45% for the teaching hospital and 40% for the DGH are not dissimilar to the response rate of 44% reported by Bryar et al. (2003) in a large study involving two hospitals and four community settings, or the 40% achieved by Funk et al. (1991b) in their original study of barriers to research utilization. In study 2 the response rate was slightly higher, at 47%, which may reflect the effect of targeted reminders to non-respondents.

Although the response rates compared favourably with many similar studies, they may nevertheless conceal some response bias. It was noted, for example, that in the community study the response rate varied considerably between professional groups (57% for health visitors compared with 37% for practice nurses). It is possible that nurses who were less favourable towards using evidence in their practice might have been less likely to respond, thus biasing the achieved sample. However, such response bias would be more of a concern in interpreting the overall findings of the study (to be reported elsewhere) than in assessing the reliability and validity of the tool.

A further limitation might be that the DEBP questionnaire was changed slightly between the two studies by the addition of some items. Although these changes were made to enhance face validity, after consulting users from community settings, it might be suggested that they would have altered the psychometric properties of the questionnaire. However, this was not found to be the case: the new items related well to the established factor structure.

Discussion of results

A large enough sample size (n = 1287 in total) was achieved for adequate testing of the tool. The psychometric properties of the DEBP suggest that it is a reliable instrument with 10 identifiable factors, although it is not a single scale. The conventions used to test the psychometric properties of the questionnaire were drawn from well-established sources and demonstrate high reliability (α > 0.7) for each of the five sections and for eight of the 10 factors when treated as scales. The lower values for the two remaining factors (8 and 10) are likely to be due to the small number of items defining those factors.

The 10 factors are in some cases consistent with the different elements and sub-sections of the tool, but in other cases provide over-arching concepts drawn from different elements of the tool. Factor 2, for example, highlights personal and organizational difficulties in using evidence-based practice, which range from a lack of personal knowledge to a lack of empowerment to challenge established practice. Factors 7 and 8 emphasize the role of personal experience and informal sources of information in nurses' application of evidence-based practice. This aspect of knowledge utilization has been disregarded in many previous tools measuring evidence-based practice.

The need to promote the use of appropriate evidence in nursing practice has been widely acknowledged, along with an associated need to test and evaluate the extent of evidence-based practice. As pointed out earlier in the paper, a variety of instruments have been used to do this, particularly the 'Barriers' scale (Funk et al. 1991a), but these tools have been either untested or found to be lacking in validity (Closs & Bryar 2001, Marsh et al. 2001). Many of the previously developed tools are also historically located in a time when information technology and access to electronic information in clinical settings were limited. Much has changed in the last decade, with computer access close to patient care increasingly available, and protocol-based care now integrated into many clinical areas. We therefore set out to develop and test a more comprehensive tool (DEBP).

The study has provided evidence of validation of the DEBP questionnaire for investigating factors associated with evidence-based practice among nurses in England. Notwithstanding the need for additional studies in the UK and beyond to validate the instrument further, the inclusion of sources of knowledge and skills ratings alongside the 'barriers' scale adds considerably to its usefulness, and factor analysis suggests that the scales are consistent. Although the DEBP questionnaire has been shown in this study to be a valid instrument with reliable scales, the questionnaire as a whole does not constitute a single scale: the component sections are too diverse in meaning for this. One significant modification to earlier 'barriers to research utilization' questionnaires is the inclusion in the current questionnaire of organizational information as a source of evidence. This reflects the increased emphasis placed on nurses in the UK to draw upon national and local evidence-based guidelines and clinical protocols, rather than assuming that nurses would, or indeed should, interpret the significance of findings in published research papers for their practice.


Conclusion

The development and testing of the DEBP questionnaire reported in this article suggest that the instrument is a valid and reliable measure, although further testing is required to establish its validity and reliability fully. The generalizability of the DEBP questionnaire has been shown to extend to nurses in hospital and community settings in England. However, its validity in other countries remains to be demonstrated. Before adoption elsewhere it will be important to review the cultural appropriateness and content validity of items in the different sections of the questionnaire, as different barriers to evidence-based practice may be important in some countries. For example, Oranta et al. (2002) identified that one of the greatest barriers to evidence-based practice for nurses in Finland was the fact that most research papers were published in a language other than Finnish (section 2 of the questionnaire). Indeed, the preponderance of English-language journals may present particular challenges for those whose first language is not English. Moreover, the high turnover of staff in some parts of South Africa is seen to militate against sustaining change in respect of evidence-based practice (section 3 of the questionnaire) (Garner et al. 2004, McInerney 2004).

The questionnaire could be used as an outcome measure in 'before and after' intervention studies that aim to assess the impact of service development, training or other innovations on the extent of evidence-based practice. Organizations wishing to build research capacity will also find the tool useful in measuring progress. Because the tool has five sections and 10 identifiable factors, it may be possible to analyse the nature of the change being measured over time. Policies can then be tailored to address the particular barriers and organizational factors highlighted as being problematic. It would also be interesting to test its relevance to other professions, such as allied health professions and social work. Comparisons between the professions regarding the implementation of evidence-based practice would then be possible.

Author contributions

KG and AL were responsible for the study conception and design of the manuscript, and KG, PA and AL were responsible for the drafting of the manuscript. KG, AL, JB, JC, SK and EM performed the data collection, and KG, PA and AL performed the data analysis. KG, AL, JB and SK obtained funding, and JB, EM and JC provided administrative support. PA provided statistical expertise. KG, PA, AL, JC, SK and EM made critical revisions to the paper.

Acknowledgements

We are grateful to Professor Carole Estabrooks, University of Alberta, Canada, for granting permission to adapt and use some questions from an instrument she had developed to examine research utilisation.

What is already known about this topic

• Existing instruments for assessing evidence-based practice have focused on examining research utilization rather than taking a broader view of the different sources of evidence that can inform practice.
• Nurses experience major barriers to implementing research findings due to insufficient time to access and review research reports and a shortfall in critical appraisal skills, together with lack of authority and support to implement findings.
• Whereas evidence-based guidelines and protocols are increasingly available to support evidence-based practice, the extent to which nurses use these various forms of evidence is not clear, nor is how skilled they are in accessing them.

What this paper adds

• The development and initial validation data for a new measure of a range of factors involved in evidence-based practice, for use with hospital and community nurses in England.
• The questionnaire could be used as an outcome measure in 'before and after' intervention studies that aim to assess the impact of service development, training or other innovations on the extent of evidence-based practice.
• Further research is needed to test the validity of the instrument in other countries, and it would also be interesting to test its relevance to other professions such as allied healthcare professions and social work.

References

Bryar R., Closs S., Baum G., Cooke J., Griffiths J., Hostick T., Kelly S., Knight S., Marshall K. & Thompson D. (2003) The Yorkshire BARRIERS project: diagnostic analysis of barriers to research utilisation. International Journal of Nursing Studies 40, 73–85.

Carrion M., Woods P. & Norman I. (2004) Barriers to research utilisation among forensic mental health nurses. International Journal of Nursing Studies 41, 613–619.

Closs J. & Bryar R. (2001) The BARRIERS scale: does it 'fit' the current NHS research culture. NT Research 6, 853–865.

Collett S. & Elliot P. (2000) Implementing clinical guidelines in primary care. In Implementing Evidence-Based Changes in Healthcare (Evans D. & Haines A., eds), Radcliffe Medical Press, Abingdon, pp. 235–256.

DiCenso A. & Cullum N. (1998) Implementing evidence-based nursing: some misconceptions. Evidence-Based Nursing 1, 38–40.

DiCenso A., Prevost S., Benefield L., Bingle J., Ciliska D., Driever M., Lock S. & Titler M. (2004) Evidence-based nursing: rationale and resources. Worldviews on Evidence-Based Nursing First Quarter 2004, 69–75.

Dunn V., Crichton N., Roe B., Seers K. & Williams K. (1998) Using research for practice: a UK experience of the BARRIERS Scale. Journal of Advanced Nursing 27, 1203–1210.

Estabrooks C. (1998) Will evidence-based nursing practice make practice perfect? Canadian Journal of Nursing Research 30, 15–36.

Fawcett J., Watson J., Neuman B., Hinton Walker P. & Fitzpatrick J. (2001) On nursing theories and evidence. Journal of Nursing Scholarship Second Quarter 2001, 115–119.

Funk S.G., Champagne M.T., Wiese R.A. & Tornquist E.M. (1991a) The Barriers to Research Utilization Scale. Applied Nursing Research 4, 39–45.

Funk S.G., Champagne M.T., Wiese R.A. & Tornquist E.M. (1991b) Barriers to using research findings in practice: the clinician's perspective. Applied Nursing Research 4, 90–95.

Garner P., Meremikwu M., Volmink J., Xu Q. & Smith H. (2004) Putting evidence into practice: how middle and low income countries 'get it together'. British Medical Journal 329, 1036–1039.

Gerrish K. (2003) Evidence-based practice: unravelling the rhetoric and making it real. Practice Development in Health Care 2, 99–113.

Gerrish K. (2006) Evidence-based practice. In The Research Process in Nursing (Gerrish K. & Lacey A., eds), 5th edn. Blackwell Science, London, pp. 491–505.

Gerrish K. & Clayton J. (2004) Promoting evidence-based practice: an organisational approach. Journal of Nursing Management 12, 114–123.

Glacken M. & Chaney D. (2004) Perceived barriers and facilitators to implementing research findings in the Irish practice setting. Journal of Clinical Nursing 13, 731–740.

Harvey G., Loftus-Hill A., Rycroft-Malone J., Titchen A., Kitson A., McCormack B. & Seers K. (2002) Getting evidence into practice: the role and function of facilitation. Journal of Advanced Nursing 37, 577–588.

Hicks C. (1996) A study of nurses' attitudes towards research: a factor analytic approach. Journal of Advanced Nursing 23, 373–379.

Hutchinson A. & Johnson L. (2004) Bridging the divide: a survey of nurses' opinions regarding barriers to, and facilitators of, research utilisation in the practice setting. Journal of Clinical Nursing 13, 304–315.

Ingersoll G. (2000) Evidence-based nursing: what it is and what it isn't. Nursing Outlook 48, 151–152.

Kaiser H.F. (1958) The varimax criterion for analytic rotation in factor analysis. Psychometrika 23, 187–200.

Kajermo K., Nordstrom G., Krusebrant A. & Bjorvell H. (1998) Barriers to and facilitators of research utilisation, as perceived by a group of registered nurses in Sweden. Journal of Advanced Nursing 27, 798–807.

Kirshbaum M., Beaver K. & Luker K. (2004) Perspectives of breast care nurses on research dissemination and utilisation. Clinical Effectiveness in Nursing 8, 47–58.

Kuuppelomaki M. & Tuomi J. (2005) Finnish nurses' attitudes towards nursing research and related factors. International Journal of Nursing Studies 42, 187–196.

Lacey E.A. (1994) Research utilisation in nursing practice: a pilot study. Journal of Advanced Nursing 19, 987–995.

Lawley D.N. & Maxwell A.E. (1971) Factor Analysis as a Statistical Method, 2nd edn. Butterworth, Oxford.

Liaschenko J. & Fisher A. (1999) Theorizing the knowledge that nurses use in the conduct of their work. Scholarly Inquiry for Nursing Practice: An International Journal 13, 29–40.

Lomas J., Culyer T., McCutcheon C., McAuley L. & Law S. (2005) Conceptualizing and Combining Evidence for Health System Guidance. Canadian Health Services Research Foundation (CHSRF). Retrieved from http://www.chsrf.ca.

Marsh G., Nolan M. & Hopkins S. (2001) Testing the revised barriers to research utilisation scale for use in the UK. Clinical Effectiveness in Nursing 5, 66–72.

McCormack B., Kitson A., Harvey G., Rycroft-Malone J., Titchen A. & Seers K. (2002) Getting evidence into practice: the meaning of 'context'. Journal of Advanced Nursing 38, 94–104.

McInerney P. (2004) Evidence-based nursing and midwifery: the state of the science in South Africa. Worldviews on Evidence-Based Nursing 1, 4207.

McKenna H., Ashton S. & Keeney S. (2004) Barriers to evidence-based practice in primary care. Journal of Advanced Nursing 45, 178–189.

Mulaik S.A. (1987) A brief history of the philosophical foundations of exploratory factor analysis. Multivariate Behavioural Research 22, 267–305.

Nolan M. (2005) Reconciling tensions between research, evidence-based practice and user participation: time for nursing to take the lead. International Journal of Nursing Studies 42, 503–505.

Nolan M., Morgan L., Curran M., Clayton J., Gerrish K. & Parker K. (1998) Evidence-based care: can we overcome the barriers? British Journal of Nursing 7, 1273–1278.

Oranta O., Routasolo P. & Hupli M. (2002) Barriers to and facilitators of research utilisation among Finnish Registered Nurses. Journal of Clinical Nursing 11, 205–213.

Pett M.A. (2003) Making Sense of Factor Analysis in Health Care Research: A Practical Guide. Sage, London.

Retsas A. & Nolan M. (1999) Barriers to nurses' use of research: an Australian hospital study. International Journal of Nursing Studies 36, 335–343.

Robson C. (2002) Real World Research: A Resource for Social Scientists and Practitioners. Blackwell, Oxford.

Rodgers S. (1994) An exploratory study of research utilisation by nurses in general medical and surgical wards. Journal of Advanced Nursing 20, 904–911.

Royle J. & Blythe J. (1998) Promoting research utilisation in nursing: the role of the individual, organisation, and environment. Evidence-Based Nursing 1, 71–72.

Rycroft-Malone J., Seers K., Titchen A., Harvey G., Kitson A. & McCormack B. (2004) What counts as evidence in evidence into practice? Journal of Advanced Nursing 47, 81–90.

Sackett D., Rosenberg W., Muir Gray J., Haynes R. & Richardson W. (1996) Evidence-based medicine: what it is and what it isn't. British Medical Journal 312, 71–72.

Sackett D., Straus S., Richardson W., Rosenberg W. & Haynes B. (2000) Evidence-Based Medicine: How to Practice and Teach EBM. Churchill Livingstone, Edinburgh.
