
The development and psychometric testing of the Satisfaction with Simulation Experience Scale

Tracy Levett-Jones a,⁎, Michelle McCoy a, Samuel Lapkin a, Danielle Noble b, Kerry Hoffman a, Jennifer Dempsey a, Carol Arthur a, Jan Roche a

a School of Nursing and Midwifery, The University of Newcastle, Callaghan, NSW 2308, Australia
b School of Nursing and Midwifery, The University of Newcastle, Ourimbah Campus, Ourimbah, NSW 2308, Australia

⁎ Corresponding author. Tel.: +61 2 4921 6559.

Article info

Article history: Accepted 5 January 2011; available online xxxx.

Keywords: Simulation; Student satisfaction; Human patient simulation manikin; Fidelity; Nursing student

Summary

Aim: This paper reports the development and psychometric testing of the Satisfaction with Simulation Experience Scale, an instrument designed to measure and compare differences in satisfaction levels between nursing students exposed to medium and high fidelity human patient simulation manikins.

Background: Student satisfaction is important to engaged and meaningful learning and it facilitates active and purposeful participation in simulation experiences. There are suggestions that student satisfaction may have some correlation with performance. Few studies have explored in a rigorous way the impact of manikin fidelity on nursing students' satisfaction with simulation experiences.

Method: The items for the Satisfaction with Simulation Experience Scale were identified following a critical review of the literature. Content validity was established by use of an expert panel. During 2009 and 2010 the instrument was tested with second year (n = 268) and third year (n = 76) nursing students from one Australian university. Exploratory factor analysis with varimax rotation was used to determine construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Differences in satisfaction levels between groups were analysed using independent t-tests. Responses to an open-ended question were categorised using thematic content analysis.

Results: The scale demonstrated satisfactory internal consistency (alpha 0.77). Exploratory factor analysis yielded a three-component structure termed Debriefing and Reflection, Clinical Reasoning, and Clinical Learning; each subscale demonstrated high internal consistency (0.94, 0.86 and 0.85 respectively). Mean satisfaction scores were high for each group; however, statistically significant differences were not apparent between second or third year students exposed to medium and high fidelity manikins. Content analysis identified 13 main categories, including supplementing versus replacing clinical placements and the need for increased exposure to simulation sessions.

Conclusion: The results of this study indicate that simulation is highly valued by students, irrespective of the level of fidelity. This raises questions about the value of investing in expensive simulation modalities. The Satisfaction with Simulation Experience Scale was reliable and valid for this cohort. Further research in different contexts would be valuable in extending upon this work.

© 2011 Elsevier Ltd. All rights reserved.


Introduction

It is acknowledged that self-reported levels of satisfaction alone do not provide a comprehensive assessment of the overall impact of simulation experiences. Nevertheless, satisfaction is an important unit of measure. Student satisfaction is important to engaged and meaningful learning and it facilitates active and purposeful participation in simulation experiences (Prion, 2008). There are also suggestions that student satisfaction may have some correlation with performance (Bremner et al., 2006). To date, many studies evaluating nursing students' satisfaction with simulation have tended to be small scale or localised studies using instruments that have not been psychometrically tested. In addition, few studies have explored in a rigorous way the impact of manikin fidelity on nursing students' satisfaction levels. This paper profiles a study that (a) developed and psychometrically tested the Satisfaction with Simulation Experience (SSE) Scale; and (b) compared satisfaction levels in second and third year bachelor of nursing students exposed to medium and high fidelity human patient simulation manikins (HPSMs). This paper forms part of a larger mixed-methods simulation study funded by the Australian Learning and Teaching Council that also examined knowledge acquisition, clinical reasoning ability and communication skills; these results are reported separately.

Background

Simulation is broadly defined as an educational technique in which elements of the real world are appropriately integrated to achieve specific goals related to learning or evaluation; simulation is an educational strategy, not a technology (Gaba, 2004). Simulation sessions frequently use HPSMs to enhance fidelity. Fidelity refers to how authentic or life-like the manikin and/or simulation experience is. Low fidelity HPSMs include simple task trainers such as intravenous (IV) arms and resuscitation torsos, and anatomically correct full body static manikins that replicate the external anatomy and joint movement of humans but have no interactive capacity. Medium fidelity HPSMs are full body manikins, usually with embedded software. They are controlled by an external, hand-held device and have limited physiological responses such as a palpable pulse and blood pressure on one arm, and verbal noises limited to breath sounds, coughing, vomiting, groaning and one-syllable words. High fidelity HPSMs are defined as 'life-like' manikins with embedded software that can be remotely controlled by computer (usually in a separate control room) to allow for individualised, programmed scenarios that allow the operator to set physiological parameters and respond to students' interventions with changes in voice, heart rate, respiratory rate, blood pressure, oxygen saturation level and other physiological signs.

Simulation provides authentic and clinically relevant opportunities for students to engage in experiential learning (Cioffi, 2001), and the use of HPSMs, especially medium and high fidelity, has increased markedly over the last decade. The literature (Comer, 2005; Feingold et al., 2004; Jeffries, 2007; Larew et al., 2006) cites a number of benefits of simulation including:

• Opportunities for active involvement in challenging clinical situations that involve unpredictable simulated patient deterioration.

• Exposure to time-sensitive and critical clinical scenarios that, if encountered in a 'real' clinical environment, students could normally only passively observe.

• Opportunities to integrate clinical skills, content knowledge, interprofessional communication, teamwork, physical assessment, nursing therapeutics and critical thinking in a realistic but non-threatening environment.

• Opportunities to make mistakes and learn from them without risk to patients.

• Opportunities for repeated practice of requisite skills, and for formative and summative assessment.

• Debriefing and immediate opportunities for reflection that can enhance the conditions for learning.

• Remediation provided in a supportive environment.

A number of studies have reported on levels of satisfaction with simulation experiences, with results indicating that overall nursing students tend to be highly satisfied with these types of learning activities irrespective of the type of manikin used (Abdo and Ravert, 2006; Bremner et al., 2006; Schoening et al., 2006). Kardong-Edgren et al. (2008) examined student satisfaction in first year students exposed to three simulation sessions using medium fidelity HPSMs, and high levels of student satisfaction were reported. When using high fidelity HPSMs for mock cardiac arrest training, Bruce et al. (2009) also found a high level of student satisfaction. A number of studies have identified increased student satisfaction when lectures or tutorials are supplemented with simulation sessions. For example, a project comparing lectures supplemented with the use of medium fidelity HPSMs with lectures only resulted in a 91% student satisfaction level in the intervention group compared with 68% for the control group (Sinclair and Ferguson, 2009). The impact of manikin fidelity on student satisfaction is less clear, however. In a study by Jeffries and Rizzolo (2006), student satisfaction was higher when using high fidelity HPSMs than when using either low fidelity HPSMs or a paper-based case study. By contrast, no statistically significant difference was found in student satisfaction comparing high and low fidelity HPSMs for life support training in a study by Hoadley (2009). This is supported by Kardong-Edgren et al. (2009), who also found no significant difference in student satisfaction when comparing high and medium fidelity HPSMs. These contradictory findings have important cost–benefit implications and call for further research, particularly as there is currently an impetus to invest in high fidelity HPSMs despite the increasing fiscal constraints being experienced by many universities and health services.

Research design

This study used a mixed method design which captured both quantitative and qualitative data. In disciplines such as nursing, the phenomena studied are often complex, and mixed-method approaches can expand the impact and enhance the flexibility of research designs (Sandelowski, 2000). Recognising that all methods have limitations, many researchers (Cherryholmes, 1992; Creswell, 2003) believe that the biases inherent in any single method may help to neutralise or cancel out the biases of other methods. The mixed methods approach adopted for this study presented different slices of reality and thus allowed more holistic understandings to emerge.

There were four phases to this study:

• Development of the SSE (Satisfaction with Simulation Experience) Scale.

• Psychometric testing of the SSE Scale.

• Comparison of differences in SSE scores between second and third year students exposed to high and medium fidelity HPSMs.

• Thematic content analysis of qualitative data from the open-ended question on the SSE Scale.

Phase 1: development of the SSE scale

The SSE Scale was developed and evaluated in accordance with DeVellis's (2003) method of scale development and in line with the following processes:

Development of item pool – The initial item pool for the Scale comprised 70 closed-ended questions and one open-ended question drawn from the literature related to simulation. DeVellis (2003) suggests that over-inclusion of items in the initial stage of item development is important to ensure all options are evaluated.

Method of measurement – The SSE Scale required participants to rate their level of agreement with each statement with 1 = strongly disagree; 2 = disagree; 3 = unsure; 4 = agree; and 5 = strongly agree. Rating scales allow participants to rate their attitudes, opinions or perceptions (de Vaus, 2004; DeVellis, 2003), and they allow for "direction and intensity" (p. 357) of the measurement taken (de Vaus, 2004). The main disadvantage of rating scales is that they are sometimes poorly developed and refined before en masse use. A rigorous development and psychometric testing stage must be completed to ensure that the scale is both valid and reliable. These issues can be addressed in part by the use of an expert panel (DeVellis, 2003).
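As a concrete illustration of this response format, here is a minimal sketch of how verbal ratings could be mapped to the 1–5 scores used in the analysis. The score values come from the text above; the function and names are purely illustrative, not the authors' code.

```python
# Illustrative encoding of the SSE Scale's 5-point rating format.
LIKERT_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "unsure": 3,
    "agree": 4,
    "strongly agree": 5,
}

def encode_rating(response: str) -> int:
    """Map a verbal rating to its numeric score (1-5)."""
    return LIKERT_SCORES[response.strip().lower()]

assert encode_rating("Agree") == 4
```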


Review of item pool by an expert panel – Measurement of a population's attitudes, beliefs or opinions can be problematic, and validation of the relevance of the items included in a scale is improved by the use of an expert panel (DeVellis, 2003). The item pool for the SSE Scale was evaluated by an expert panel consisting of nine academics with expertise in simulation and/or instrument development. One reviewer was from a non-English speaking background. The background to and aim of the study were presented to the expert panel and they were then asked to review each item for clarity, conciseness, relevance, ambiguity and unnecessary repetition. This approach enhances a scale's content and construct validity (Kardong-Edgren et al., 2010). Content validity assesses the appropriateness of the scale's items and the comprehensiveness of measurement. It relies on clear research aim/s and a thorough review of the literature (Heard and Harris, 2004). Construct validity determines whether a scale adequately represents the concept being evaluated and measures what it is intended to measure (DeVellis, 2003). Demonstration of the construct validity of the SSE Scale was achieved by ensuring that the Scale reflected the main concepts drawn from the literature, and feedback from the expert panel helped to determine the items to be included in the scale. From the initial item pool of 70 items, 10 were deleted and 11 rephrased; thus the final Scale consisted of 60 items: 59 closed-ended and one open-ended question. The Scale comprised three subscales with questions related to the simulation session (37 items), debriefing (15 items) and overall impressions (7 items).

Pre-testing – In order to determine the instrument's quality and provide information about the readability of the scale and/or individual items (DeVellis, 2003; Litwin, 1995), a pre-test was conducted with a sample of eight third year nursing students. No revisions to the scale were required as a result of this process.

Context and participants

The study was conducted in Australia in a regional school of nursing that offers a Bachelor of Nursing program across three campuses. In 2009 and 2010, following ethics approval, second year (N = 353) and third year (N = 203) nursing students undertaking their first simulated learning experience were informed about the study by advertisements placed on Blackboard™, a web-based platform, and invited to participate by completing the SSE Scale. An information statement was provided and students were asked to sign a consent form prior to participating. For third year students the simulation session was an optional activity; for second year students it was a required course component. However, completion of the SSE Scale was voluntary for both groups. Students were provided with the opportunity to revise their knowledge of fluid balance (the focus of the simulation) using either an online or paper-based case study prior to commencing the simulation. The validity and authenticity of the simulation scenario were assessed by an expert panel comprised of four academics and three clinicians.

Working in pairs, students were randomly allocated to either a medium fidelity HPSM (Laerdal's Megacode Kelly™ with VitalSim capability) or a high fidelity HPSM (Laerdal's 3G SimMan™). A fully immersive approach was used, meaning that the participants were required to respond independently and without guidance from the educator, who was either located in a control booth (high fidelity) or in the same room (medium fidelity) (Arthur et al., 2010). Irrespective of location, the educator did not provide instruction to the students during the scenario, apart from an orientation to the environment and equipment. A 20 minute scenario requiring students to use their clinical reasoning ability to identify and respond to a deteriorating 'patient' with hypervolaemia and pulmonary oedema was presented, and a 20 minute debrief followed.

The SSE survey was completed immediately following the debrief and took approximately 10 minutes. In order to maintain anonymity, each survey was numerically coded for data entry and all identifying information was removed prior to access by the researchers. Two hundred and sixty-eight second year students and 76 third year students completed the SSE survey from across the three campuses, giving response rates of 76% and 37% respectively.

The participants' ages ranged from 20 to 54 years. School leavers, those in the 19–22-year-old age group, comprised 45% of the participants, and mature age students (23 years and older) 55%. Most (85%) were women. The majority of participants (75%) identified Australia as their country of birth. The remainder were from Korea, China, Canada, England, the Philippines, Singapore, Botswana, Zambia or New Zealand.

Missing data were coded, and participant responses with greater than 20% missing data were removed prior to data analysis; any remaining missing responses were imputed with the mean response for the respective item prior to data analysis (Fink, 2003). Of the 268 questionnaires completed by second year students, 58 had more than 20% missing data; data from all of the third year questionnaires were complete and included.
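To make the screening rule concrete, the following is a minimal sketch of the missing-data handling described above, assuming responses sit in a pandas DataFrame with one column per SSE item (an assumed layout; the original analysis was conducted in SPSS).

```python
# Illustrative only: drop respondents with > 20% missing items, then
# mean-impute any remaining gaps (per Fink, 2003, as described above).
import pandas as pd

def clean_responses(responses: pd.DataFrame, max_missing: float = 0.20) -> pd.DataFrame:
    share_missing = responses.isna().mean(axis=1)        # fraction of unanswered items per respondent
    cleaned = responses.loc[share_missing <= max_missing].copy()
    return cleaned.fillna(cleaned.mean())                # replace gaps with each item's mean
```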

Phase 2: psychometric testing of the SSE Scale

In reviewing the psychometric properties of the scale, redundant items were removed and further analysis of the SSE conducted. The methods of analysis used to facilitate this process included an initial reliability measure of internal consistency using Cronbach's alpha coefficient. A correlation matrix was then examined to allow for the removal of items with poor correlations. This process was followed by the use of factor analysis. The Statistical Package for the Social Sciences (SPSS, Version 19) was used to facilitate statistical analysis.

Internal consistency reliability

Internal consistency is an important part of scale development because the results gained from an instrument with high internal consistency are more reliable (DeVellis, 2006). Cronbach's alpha coefficient (Cronbach alpha) is a commonly used indicator of scale reliability and in particular internal consistency. The alpha coefficient measures the "correlations among the items (of a scale)" (DeVellis, 2006, p. S52). If there are high correlations between each of the items (i.e. they are interrelated and measure the same variable) the alpha coefficient will be high. This means that a scale can then be deemed unidimensional; that is, the items within the scale measure the same core subject (Greenwood, 2004). Thus, the internal consistency of the entire SSE Scale and each subscale was measured by Cronbach's alpha coefficient. This process allowed for a preliminary evaluation of the items within the scale and also for the removal of redundant or repetitive items. Field (2005) suggests that there is much debate about a suitable cut-off for a Cronbach alpha coefficient value. This study considered an alpha value of 0.7 or more as satisfactory (DeVellis, 2006). The alpha coefficient of each subscale and the full Scale were measured to establish initial values and at this stage were not satisfactory (refer to Table 1).

Table 1
Internal consistency values for SSE and subscales.

                               α
Entire SSE scale               0.666
Simulation subscale            0.638
Debriefing subscale            0.604
Overall impressions subscale   0.303

The reliability analysis also provided hypothetical information about what the potential alpha would be if an item was removed from the scale. This showed that removal of 12 items would improve the overall alpha of the scale. Consequently these items were not included in further analysis.
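For readers unfamiliar with these statistics, the following sketch shows how Cronbach's alpha and the "alpha if item deleted" diagnostic could be computed. It is illustrative only; the study itself used SPSS, and the items-as-columns DataFrame layout is an assumption.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a set of scale items (one column per item)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha with each item removed in turn."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )
```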

Correlation matrix

The next step was to examine the correlation matrix of the SSE for items with high (greater than 0.9) or low (less than 0.4) correlations. A high correlation coefficient between two items suggests that the items are too similar or redundant and should be removed or rephrased (Field, 2005). Examination of the correlation matrix of the SSE Scale identified 5 items with coefficients greater than 0.9 (i.e. too similar) and 11 with coefficients less than 0.4 (redundant); these were subsequently removed.
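A hedged sketch of this screening step follows. The 0.9 and 0.4 thresholds come from the text; the rule for which member of a too-similar pair to drop is an illustrative assumption, since the paper does not specify one.

```python
# Illustrative item screen: flag items correlating > 0.9 with another item
# (too similar) or < 0.4 with every other item (redundant).
import pandas as pd

def screen_items(items: pd.DataFrame, hi: float = 0.9, lo: float = 0.4) -> list:
    corr = items.corr().abs()
    drop = set()
    for name in corr.columns:
        others = corr[name].drop(name)             # correlations with all other items
        if others.max() > hi:
            drop.add(max(name, others.idxmax()))   # drop one of the pair (arbitrarily, the later name)
        if others.max() < lo:
            drop.add(name)                         # item barely relates to any other item
    return sorted(drop)
```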

Exploratory factor analysis

In order to better understand the underlying dimensions of the SSE Scale and to ascertain its construct validity, exploratory factor analysis was employed. From this, three new subscales were subsequently developed.

Factor analysis allows for the grouping of related items in a scale into a smaller number of factors or categories (DeVellis, 2003, 2006; Harris and Heard, 2004). DeVellis (2006) states that this allows for an assessment of dimensionality within the scale, that is, whether the items within it are measuring the same or different concepts. In this study exploratory factor analysis (EFA) was conducted, as this was the first testing of the SSE Scale and EFA allows for testing when there is no predefined underlying construct (DeVellis, 2006).

A varimax rotation was used for factor rotation; this orthogonal rotation is best suited when the underlying factors are assumed to be independent of one another (Field, 2005).

The rotated component matrix was examined for items with loadings greater than 0.6 on one factor, with eigenvalues greater than 1.0 considered relevant (Field, 2005; Harris and Heard, 2004). A factor analysis of the 31 remaining items revealed 8 factors with eigenvalues greater than one, together accounting for 68% of the variance. However, only three of these factors were extracted, as these three factors independently accounted for 20%, 11% and 10% of the variance. Furthermore, a scree plot indicated a leveling off at factors 4, 5 and 6. Similarly, the rotated component matrix showed high item loadings on factors one to three. Based upon the concepts evident in the factors, they were labeled:

1. Debriefing and reflection
2. Clinical reasoning
3. Clinical learning.
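As an illustration of this workflow (varimax rotation, the eigenvalue-greater-than-one screen, and the 0.6 loading threshold), the following sketch uses the third-party factor_analyzer package. The original analysis was performed in SPSS, so this is an assumed reimplementation rather than the authors' procedure.

```python
# Illustrative EFA with varimax rotation on an items-as-columns DataFrame.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(items: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()                  # for the Kaiser criterion / scree plot
    print("factors with eigenvalue > 1:", sum(e > 1 for e in eigenvalues))
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    # Keep items loading > 0.6 on at least one extracted factor, as described above.
    return loadings[(loadings.abs() > 0.6).any(axis=1)]
```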

Following the psychometric testing of the SSE Scale, 18 items were retained; these are listed in Table 2 along with the mean SSE score and standard deviation for each.

Table 2
Final SSE scale items with means (M), standard deviations (SD) and number of respondents (N).

Debrief and reflection
  The facilitator provided constructive criticism during the debriefing (M = 4.402, SD = 0.108, N = 344)
  The facilitator summarised important issues during the debriefing (M = 4.498, SD = 0.073, N = 344)
  I had the opportunity to reflect on and discuss my performance during the debriefing (M = 4.453, SD = 0.110, N = 344)
  The debriefing provided an opportunity to ask questions (M = 4.437, SD = 0.096, N = 344)
  The facilitator provided feedback that helped me to develop my clinical reasoning skills (M = 4.448, SD = 0.087, N = 343)
  Reflecting on and discussing the simulation enhanced my learning (M = 4.474, SD = 0.098, N = 344)
  The facilitator's questions helped me to learn (M = 4.504, SD = 0.118, N = 344)
  I received feedback during the debriefing that helped me to learn (M = 4.514, SD = 0.071, N = 344)
  The facilitator made me feel comfortable and at ease during the debriefing (M = 4.525, SD = 0.076, N = 344)

Clinical reasoning
  The simulation developed my clinical reasoning skills (M = 4.369, SD = 0.099, N = 344)
  The simulation developed my clinical decision making ability (M = 4.286, SD = 0.089, N = 344)
  The simulation enabled me to demonstrate my clinical reasoning skills (M = 4.236, SD = 0.092, N = 344)
  The simulation helped me to recognise patient deterioration early (M = 4.283, SD = 0.159, N = 344)
  This was a valuable learning experience (M = 4.638, SD = 0.059, N = 343)

Clinical learning
  The simulation caused me to reflect on my clinical ability (M = 4.657, SD = 0.102, N = 343)
  The simulation tested my clinical ability (M = 4.586, SD = 0.093, N = 343)
  The simulation helped me to apply what I learned from the case study (M = 4.186, SD = 0.347, N = 343)
  The simulation helped me to recognise my clinical strengths and weaknesses (M = 4.748, SD = 0.124, N = 343)

Reliability revisited

The alpha values for the new Scale and subscales were satisfactory after removal of the poorly fitting items (refer to Table 3).

Table 3
Internal consistency values for revised SSE scale and subscales.

                                     α
SSE scale                            0.776
Debriefing and reflection subscale   0.935
Clinical reasoning subscale          0.855
Clinical learning subscale           0.850

Phase 3: analysis of differences in SSE scores

Mean SSE scores for each item and each group were high overall (refer to Tables 2 and 4).

Table 4
Group means – second and third year.

              N     M      SD     Std. error mean
Second year   210   4.496   0.373   0.026
Third year    76    4.445   0.416   0.048

To determine whether there were any differences in SSE scores between groups, independent t-tests were conducted. The second year high fidelity group had higher SSE scores (mean = 4.515) than the second year medium fidelity group (mean = 4.415); however, this difference was not statistically significant, t(208) = −1.586, p > 0.05 (refer to Table 5).

Table 5
t-tests for difference in mean SSE scores with standard deviation for second year medium and high fidelity groups.

                  N     M      SD     Std. error mean
Medium fidelity   43    4.415   0.370   0.056
High fidelity     167   4.515   0.372   0.029
t(208) = −1.586, p > 0.05


Similarly, the third year high fidelity group had slightly higher SSE scores (mean = 4.472) than the third year medium fidelity group (mean = 4.415). Once again the difference between the means was not statistically significant, t(74) = −0.586, p > 0.05 (refer to Table 6).

Table 6
t-tests for difference in mean SSE scores with standard deviation for third year medium and high fidelity groups.

                  N     M      SD     Std. error mean
Medium fidelity   36    4.415   0.447   0.079
High fidelity     40    4.472   0.360   0.057
t(74) = −0.586, p > 0.05
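The group comparisons above can be reproduced with an independent-samples t-test; a minimal scipy sketch (not the authors' SPSS procedure, and with assumed variable names) is:

```python
# Illustrative between-group comparison of mean SSE scores.
from scipy import stats

def compare_groups(medium_scores, high_scores, alpha: float = 0.05) -> bool:
    """Independent-samples t-test on two arrays of per-participant mean SSE scores."""
    t, p = stats.ttest_ind(medium_scores, high_scores)
    df = len(medium_scores) + len(high_scores) - 2   # degrees of freedom, as reported in the text
    print(f"t({df}) = {t:.3f}, p = {p:.3f}")
    return p < alpha                                  # True if the difference is significant
```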

Phase 4: content analysis of the open-ended question

The open-ended question on the SSE asked participants if they wished to make any further comments about their simulation experiences. This very general question was revealing, as it allowed participants to share their perspectives and comment on the aspects of the simulation experience that they felt were most important. One hundred and sixty-six participants answered this question, and responses were collated and categorised. This allowed for inferences to be made about the characteristics and meaning of participants' responses. The categories are reported in order of frequency in Table 7 and illustrated with verbatim quotes.

Table 7
Categories identified from content analysis.

Need for more simulation experiences
  "We should have simulations much more often – at least once per week."

Simulation should complement but not replace clinical placements
  "It should not replace the valuable clinical placement hours but should definitely be incorporated throughout the program in every clinical course."

More valuable than clinical laboratory experiences
  "I learnt more in this hour than I ever did in the clinical labs just practicing skills."

Stress
  "It was a stressful experience but well worth it."
  "Working in pairs made it less stressful."

Application of knowledge and skills to the simulation
  "The simulation helped to bring everything together – knowledge, skills and clinical reasoning."

Clinical reasoning
  "The simulation helped me to think like a nurse."
  "The simulation helped me to put into practice what I have learned about clinical reasoning."

Importance of cue collection and interpretation
  "I learned the importance of collecting cues and other assessments – not just vital signs."
  "Now I know how and why to interpret fluid balance charts properly."

Fidelity
  "The high fidelity manikin mimicked a real-life patient very well."

Value of learning experience
  "This was a brilliant learning experience. I learnt more in the simulation than in hours and hours of tutorials and lectures."

Preparation for practice
  "It helped me to see where and how I need to develop my clinical skills to become a registered nurse."
  "I think I could manage a similar situation in practice now."

Safe environment to learn
  "The simulation mirrored a 'real-life' situation where you knew your mistakes were learning opportunities that could not affect a real patient."

Clinical learning
  "I gained more from this simulation than placement because I had to do the thinking and make the decisions – I couldn't just watch."
  "The simulation motivated me to learn more about the deteriorating patient."
  "I learned that to intervene effectively one must do a thorough assessment first."
  "I learned how to communicate effectively with doctors using ISBAR (introduction, situation, background, assessment, recommendation)."

Value of debriefing
  "The educator provided valuable feedback and the debrief made me more confident in my clinical ability."

From the participants' responses it is evident that they highly valued the simulation experience. They described it as brilliant, fantastic, awesome and motivating; perhaps somewhat influenced by the fact that this was a novel experience and the first time each had been exposed to a simulation session. The most frequently occurring comment was that, because the simulation provided a highly valued learning experience, it should be integrated into every clinical course and be timetabled weekly. It is noteworthy that although the experience was valued by all of the participants who responded, none believed that simulation should be used as a replacement for clinical placement hours, although many did view it as more valuable than the time spent in clinical skills laboratories. The participants acknowledged that the simulation caused them to accept responsibility for 'patient' care rather than being a passive observer, and they saw the experience as an opportunity to apply their knowledge and skills and make independent clinical decisions – opportunities that most placements did not afford. Many of the participants commented on how the simulation allowed them to apply what they had learned about clinical reasoning, and they described how the simulation reinforced the importance of accurate and timely patient assessment, careful interpretation of cues and interprofessional communication. The participants also saw the simulation as a 'safe' and supported opportunity to make and learn from their mistakes. The debrief was identified as a valuable learning opportunity facilitated by educators who challenged students, provided valuable feedback, supported them to learn and grow, and helped them to feel more confident of their abilities. However, a number suggested that both the debrief and the simulation should be longer.

Discussion

Pike (1991) suggests that satisfaction exerts a greater influence on academic performance than performance exerts on satisfaction. Educational psychologists have also found that student satisfaction helps to build self-confidence, which in turn helps students develop skills and acquire knowledge. Thus it is important that educational initiatives such as simulation are appropriately evaluated for their capacity to create satisfying and engaging learning experiences. Although investment in simulated learning environments has escalated in Australia and internationally (Arthur et al., 2010), few rigorous instruments exist for evaluating student satisfaction or the impact of level of fidelity on student satisfaction. This paper has profiled the development and psychometric testing of the SSE Scale. Results indicate that it is a valid and reliable instrument. The results of this study further demonstrate that while SSE scores were generally high for both years and for both the medium and high fidelity HPSM groups, the differences between medium and high fidelity groups were not statistically significant. This is an important finding and should be factored into decision making by those investing in simulated learning environments. If student satisfaction is high irrespective of the level of fidelity, then it may be prudent to invest in more medium fidelity manikins, thus allowing a greater number of students access to simulation experiences. However, it should be noted that this was the first simulation experience for all of the participants, and it would be interesting to investigate whether their perspectives changed following their next simulation session, particularly if they were exposed to a different level of fidelity.



The results from the content analysis add another dimension to this discussion and further illuminate some of the reasons underpinning student satisfaction. It is evident from the participants' responses that many valued their first simulation experience more highly than clinical skills laboratories, tutorials and lectures; their responses also supported the increased scheduling of simulation experiences. Certainly, academics involved in or observing simulation sessions are invariably impressed by the capacity of these experiences to challenge, motivate and engage students, and by the meaningful and memorable learning that quality simulation sessions evoke. However, research suggests that the use of simulation as a teaching strategy is currently constrained by a number of factors, including cost, adequacy of facilities and equipment, and availability of staff with expertise and experience in conducting simulation sessions (Arthur et al., 2010).



The participants further commented that while simulation experiences should supplement clinical placements, they should not be used as a replacement. These perspectives need to be considered in light of the decreased availability of quality clinical placements and the increasing pressure for simulation experiences to replace some required clinical placement hours, as is already occurring in some countries (McCallum, 2006; Nehring et al., 2001; Nursing and Midwifery Council, 2007).

Both the quantitative and the qualitative data highlighted the importance of clinical reasoning to student learning. Clinical reasoning is defined as the process by which nurses collect cues; process the information; come to an understanding of a patient problem or situation; plan and implement interventions; evaluate outcomes; and reflect on and learn from the process (Levett-Jones et al., 2010). Competent practice requires not only psychomotor skills and knowledge but also sophisticated thinking abilities, and nurses with poor clinical reasoning skills often fail to detect impending patient deterioration, resulting in a "failure-to-rescue" (Aiken et al., 2003). As many simulation sessions focus on management of the deteriorating patient, the SSE Scale has proven to be an important, relevant and somewhat unique evaluation instrument.

Limitations

The response rate for the SSE Scale was satisfactory for second year participants but less so for third year participants. However, it is acknowledged that even a good response rate does not necessarily protect against bias. As with any survey, there is a possibility that participants may differ in character or attitudes from non-participants, and one could speculate that those with strong views regarding their simulation experiences may have been more likely to respond to the survey. The extent to which this is significant cannot be ascertained but should be taken into consideration. It should also be noted that, although fairly typical of the student cohorts from which they were drawn, the survey participants cannot be assumed to be representative of a larger population outside the study context. An additional limitation of the SSE Scale is that responses were based on self-report. Responses obtained in this manner may be subject to social desirability bias that may skew answers towards more acceptable norms. It was anticipated, however, that the anonymous nature of the survey improved the likelihood that participants responded candidly.

Conclusion

This paper has demonstrated that the SSE Scale is a valid and reliable instrument. However, instrument development is an iterative process requiring additional studies to support evidence of psychometric integrity. Further use of the SSE Scale in different contexts and with diverse cohorts will provide additional evidence of the validity and reliability of the Scale. It should be noted that while satisfaction is a useful measure of student engagement, only a limited number of studies have shown a direct correlation between students' reports of satisfaction with simulation and other outcome measures, and this warrants further investigation. Lastly, the 'gold standard' for evaluating the effectiveness of simulation sessions is undoubtedly clinical performance: that is, does the learning translate to clinical practice and does it result in improved patient outcomes (Kardong-Edgren et al., 2010)? Future research should consider this important but challenging work.

References

Abdo, A., Ravert, P., 2006. Student satisfaction with simulation experiences (electronic version). Clinical Simulation in Nursing Education 2, e13–e16.

Aiken, L.H., Clarke, S.P., Cheung, R.B., Sloane, D.M., Silber, J.H., 2003. Educational levels of hospital nurses and surgical patient mortality. JAMA 290 (12), 1617–1620.

Arthur, C., Kable, A., Levett-Jones, T., 2010. Human patient simulation manikins and information communication technology use in Australian schools of nursing: a cross-sectional survey. Clinical Simulation in Nursing, e1–e9. doi:10.1016/j.ecns.2010.03.002.

Bremner, M., Aduddell, K., Bennett, F., VanGeest, J., 2006. The use of human patient simulators: best practice with novice nursing students. Nurse Educator 31 (4), 170–174.

Bruce, S., Scherer, Y., Curran, C., Urschel, D., Erdley, S., Ball, L., 2009. A collaborative exercise between graduate and undergraduate nursing students using a computer-assisted simulator in a mock cardiac arrest. Nursing Education Perspectives 30 (1), 22–27.

Cherryholmes, C., 1992. Notes on pragmatism and scientific realism. Educational Researcher 14, 13–17.

Cioffi, J., 2001. Clinical simulations: development and validation. Nurse Education Today 21 (6), 477–486.

Comer, S., 2005. Patient care simulations: role playing to enhance clinical understanding. Nursing Education Perspectives 26 (6), 357–362.

Creswell, J., 2003. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 2nd ed. Sage, Thousand Oaks, CA.

de Vaus, D., 2004. Structured questionnaires and interviews. In: Minichiello, V., Sullivan, G., Greenwood, K., Axford, R. (Eds.), Research Methods for Nursing and Health Science. Pearson Education, French Forest, pp. 347–392.

DeVellis, R., 2003. Scale Development: Theory and Applications. Sage Publications, Thousand Oaks.

DeVellis, R., 2006. Classical test theory. Medical Care 44 (11), S50–S59.

Feingold, C., Calaluce, M., Kallen, M., 2004. Computerised patient model and simulated clinical experience: evaluation with baccalaureate nursing students. The Journal of Nursing Education 43 (4), 156–163.

Field, A., 2005. Discovering Statistics Using SPSS. Sage Publications, Thousand Oaks.

Gaba, D., 2004. The future vision of simulation in health care. Quality & Safety in Health Care 13 (Suppl. 1), i2–i10.

Greenwood, K., 2004. Measurement: concepts, tools and issues. In: Minichiello, V., Sullivan, G., Greenwood, K., Axford, R. (Eds.), Research Methods for Nursing and Health Science. Pearson Education, French Forest, pp. 319–346.

Harris, L., Heard, R., 2004. Using multivariate analysis. In: Minichiello, V., Sullivan, G., Greenwood, K., Axford, R. (Eds.), Research Methods for Nursing and Health Science. Pearson Education, French Forest, pp. 598–623.

Heard, R., Harris, L.M., 2004. Experimental, quasi-experimental and correlational quantitative research designs. In: Minichiello, V., Sullivan, G., Greenwood, K., Axford, R. (Eds.), Research Methods for Nursing and Health Science. Pearson Education, French Forest, pp. 124–150.

Hoadley, T., 2009. Learning advanced cardiac life support: a comparison study of the effects of low- and high-fidelity simulation. Nursing Education Perspectives 30 (2), 91–95.

Jeffries, P., 2007. Simulation in Nursing Education. National League for Nursing, New York.

Jeffries, P., Rizzolo, M., 2006. Designing and Implementing Models for the Innovative Use of Simulation to Teach Nursing Care of Ill Adults and Children: A National, Multi-Site, Multi-Method Study. National League for Nursing, New York.

Kardong-Edgren, S., Starkweather, A., Ward, L., 2008. The integration of simulation into a clinical foundation of nursing course: student and faculty perception. International Journal of Nursing Education Scholarship 5 (1), Article 26. doi:10.2202/1548-923X.1603.

Kardong-Edgren, S., Lungstrom, N., Bendel, B., 2009. VitalSim vs. SimMan: comparing BSN student learning and satisfaction outcomes. Simulation Forum, University of Technology, Sydney.

Kardong-Edgren, S., Adamson, K.A., Fitzgerald, C., 2010. A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing 6, e25–e35.

Larew, C., Lessans, S., Spunt, D., Foster, D., Covington, B., 2006. Innovations in clinical simulation: application of Benner's theory in an interactive patient care simulation. Nursing Education Perspectives 27 (1), 16–21.

Levett-Jones, T., Hoffman, K., Dempsey, J., Jeong, S., Noble, D., Norton, C., Roche, J., Hickey, N., 2010. The 'five rights' of clinical reasoning: an educational model to enhance nursing students' ability to identify and manage clinically 'at risk' patients. Nurse Education Today 30 (6), 515–520.

Litwin, M., 1995. How to Measure Survey Reliability and Validity. Sage Publications.

McCallum, J., 2006. The debate in favour of using simulation education in pre-registration adult nursing. Nurse Education Today 27 (8), 825–831.

Nehring, W., Ellis, W., Lashley, F., 2001. Human patient simulators in nursing education: an overview. Simulation & Gaming 32 (2), 194–204.

Nursing and Midwifery Council, 2007. Simulation and Practice Learning Project. Nursing and Midwifery Council, London.

Pike, G., 1991. The effects of background, coursework, and involvement on students' grades and satisfaction. Research in Higher Education 32 (1), 15–31.

Prion, S., 2008. A practical framework for evaluating the impact of clinical simulation experiences in prelicensure nursing education (electronic version). Clinical Simulation in Nursing 4 (5), e69–e78.

Sandelowski, M., 2000. Combining qualitative and quantitative sampling, data collection, and analysis in mixed methods studies. Research in Nursing & Health 23 (3), 246–255.

Schoening, A., Sittner, B., Todd, M., 2006. Simulated clinical experience: nursing students' perceptions and the educator's role. Nurse Educator 31 (6), 253–258.

Sinclair, B., Ferguson, K., 2009. Integrating simulated teaching/learning strategies in undergraduate nursing education. International Journal of Nursing Education Scholarship 6 (1), Article 7.