Testing of the Patients’ Insights and Views of Teamwork (PIVOT) Survey: A Validity Study


Patient Education and Counseling 96 (2014) 346–351


Beverly W. Henry a,b,*, Deborah M. Rooney c, Susan Eller d, John A. Vozenilek e, Danielle M. McCarthy f,g

a Center for Simulation Technology and Immersive Learning, Northwestern University, Chicago, USA
b College of Health and Human Sciences, Northern Illinois University, DeKalb, USA
c Department of Learning Health Sciences, University of Michigan, Ann Arbor, USA
d Center for Immersive Learning, Stanford University School of Medicine, Stanford, USA
e Jump Trading Simulation and Education Center, OSF Healthcare, Peoria, USA
f Department of Emergency Medicine, Northwestern University, Chicago, USA
g Center for Healthcare Studies, Northwestern University, Chicago, USA

* Corresponding author at: Center for Simulation Technology and Immersive Learning, Northwestern University Feinberg School of Medicine, 240 E Huron, 1st Floor, Suite 1-200, Chicago, IL 60611, USA. Tel.: +1 815 753 6157; fax: +1 815 753 6169. E-mail addresses: [email protected], [email protected] (B.W. Henry).

ARTICLE INFO

Keywords: Patient participation; Delivery of health care; Surveys; Teamwork assessment; Patient engagement; Health care teams

ABSTRACT

Objective: To explore patient observations of teamwork-related behaviors, such as inter-team communication, through a newly designed survey.

Methods: In this cross-sectional study, 101 participants (86 patients and 15 caregivers) recruited from the emergency department (ED) of an urban, academic medical center (>85,000 visits/year) completed the 16-item Patients' Insights and Views Observing Teams (PIVOT) Survey. We evaluated validity evidence through descriptive statistics and analyses, including a many-facet Rasch model, to determine associations between questionnaire items and sociodemographic characteristics.

Results: Participant responses provided evidence that survey items performed well and reflected patients' awareness of team behaviors such as inter-team communication, coordination, and keeping teammates informed. Participants also reported on the consistency of information from team members and on knowing what people's jobs were on the team. Rasch analysis largely supported that the PIVOT items reflected the intended content area and that the rating scales functioned adequately, supporting evidence of response processes. High internal consistency (Cronbach alpha, r = .87) supported evidence of internal structure. As expected, response patterns differed by ED visit acuity level and length of stay.

Conclusions: The PIVOT survey offered a means to collect patient and caregiver observations of health care teams.

Practice implications: PIVOT survey responses may contribute to evaluation of teamwork behaviors.

© 2014 Elsevier Ireland Ltd. All rights reserved.


1. Introduction

Patient surveys are frequently used in clinical settings as assessment measures to solicit patients' feedback about their care and their care providers' communication. Examples include the Communication Assessment Tool (CAT) of physician interpersonal skills and communication with patients; the Picker Patient Experience Questionnaire for hospital patients and their families on quality of care issues such as coordination, continuity, and communication; and the Patient Perception Scale of maternal satisfaction with communication, safety and respect after operative delivery [1–3]. These and other surveys are used not only in research, but also in practice, as patient input and engagement has the potential to improve systems of care and outcomes [4,5].

Although patient satisfaction survey results have been positively associated with quality indices and care process, evidence of the impact of patient surveys on systems improvement is limited [6,7]. In part, this may be due to surveys that address an expansive range of items across a variety of constructs and populations. For instance, the commonly used Press-Ganey® survey collects data on patient satisfaction with communication, care providers, and the care environment from samples of discharged patients.


Keeping in mind the limitation in survey design of having too broad a scope of topics, we focused on the single construct of teamwork and sought to investigate it through the perspective of the patient. This topic is of interest because improving clinical teamwork has been named a priority area for improving the safety and quality of health care [8–10]. Also, while teams have been assessed by trained observers or team member self-report [11–13], research suggests patients may be aware of and able to assess teamwork-related behaviors [14–17]. Whereas patients have most often been asked only about provider–patient communication, we propose that patients' perceptions of teamwork may be useful to improvement efforts and may relate to their satisfaction with care experiences [14,16,17].

Our research objective has been to evaluate the validity of a survey tool used to assess teamwork from the perspective of the patient. Over the past two years, our research team engaged patient advocates, physicians, nurses, experts in healthcare communication, and patients themselves in a multidimensional project to develop a survey tool aimed at assessing patient perceptions of teamwork. In the first stage, we sought to determine whether or not patients could report observations of health care teams in a manner distinct from their own care experience and treatment [18]. The results of interviews with experts and patients/caregivers supported initial item generation to determine item importance. Second, we conducted a web-based survey of health care providers and patient advocates to focus survey items for usefulness with team improvement efforts, and we finally determined wording for items and the response scale through three rounds of cognitive interviews with Emergency Department (ED) patients and caregivers [19]. The result of this prior work is an instrument called the Patients' Insights and Views Observing Teams (PIVOT) Survey. This 16-item survey is intended to allow the patient to provide feedback regarding their observations of team interactions and coordination. In this stage, we present validity evidence of the responses on the PIVOT survey from a sample of ED patients.

2. Methods

2.1. Study design

We conducted a preliminary validation study with the 16-item Patients' Insights and Views Observing Teams (PIVOT) survey in the ED. The ED was chosen as the setting for the study because all patients are exposed to a team of healthcare providers. All study procedures were reviewed by the university IRB and determined to be exempt. Participants indicated their informed consent by completion of the survey.

The study continued for six weeks, from June 2012 to August 2012. Participants were recruited from the ED at an urban, academic medical center (>85,000 annual visits). Two to three days per week, research assistants (RAs) scheduled a 60–120 min period to collect data. During these periods, RAs tracked the volume of ED patients and the subsequent increase in discharge orders through monitoring of the electronic record. Once patients were confirmed ready for discharge, one of the four RAs approached patients and family caregivers for study recruitment. The RAs explained the purpose of the study and asked for volunteers to privately complete the PIVOT survey about their experience in the ED. At the midpoint of the data collection period, the research team reviewed the preliminary results and the procedures with the RAs. Though refusals were minimal, data collection processes were refined to maximize recruitment opportunities during peak patient discharge periods, reassure participants that survey responses were not shared with ED staff, and track refusals. RAs were available on rotating schedules covering weekdays and were on-site 27 times for a total of 36 h over the study period. The available hours were in the morning, afternoon, and early evening, and the patients recruited represent a convenience sample.

2.2. Survey

The survey consisted of 19 items related to the ED experience. The first 16 items were the Patients' Insights and Views Observing Teams (PIVOT) Survey instrument, developed through an iterative process and reported previously [18,19]. Participants selected survey response options using a 5-point frequency scale, anchored as 1 (not at all), 2 (rarely), 3 (sometimes), 4 (often), and 5 (all the time), or an added "no opportunity to observe" option. Next followed the 3-item Patient Perception Scale (PPS), also rated on a 5-point scale [3]. The third section consisted of an additional six items to capture demographic information, including participants' self-reported health, role in the clinical encounter (patient or caregiver), sex, age, ethnicity, and educational achievement. Also, the RAs acquired patient data from the electronic patient chart system on length of stay (LOS) and triage acuity level based on the Emergency Severity Index (ESI; Levels 1–5).

Prior to analyses, ratings for four negatively worded PIVOT items were reverse-coded to align with the direction of the remaining 12 positively worded items: item 5 (I felt as if team members talked in front of me as if I wasn't there); item 6 (I felt that team members told me conflicting things); item 9 (I saw team members treating each other with a lack of respect); and item 10 (I heard arguments between team members, inside or outside the room).
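As a concrete illustration, reverse coding on a 5-point scale maps a rating x to 6 − x. Below is a minimal Python (pandas) sketch; the column names are hypothetical and the study's actual analysis was run in the Facets software, not this code.

```python
import pandas as pd

# Hypothetical PIVOT responses; column names are illustrative, not from the paper.
responses = pd.DataFrame({
    "item_5": [1, 4, 5, 2],   # negatively worded item
    "item_6": [2, 5, 5, 1],   # negatively worded item
    "item_1": [5, 4, 5, 3],   # positively worded item, left unchanged
})

# Items 5, 6, 9, and 10 were reverse-coded in the study; two shown here.
NEGATIVE_ITEMS = ["item_5", "item_6"]

# Reverse-code a 1-5 frequency rating: 1 <-> 5, 2 <-> 4, 3 stays 3.
responses[NEGATIVE_ITEMS] = 6 - responses[NEGATIVE_ITEMS]
print(responses)
```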

2.3. Measurement and analyses

We evaluated validity evidence relevant to (1) test content, (2) response processes, (3) internal structure, and (4) relationships to other variables [20,21]. Participant ratings were analyzed using the Facets software v. 3.68.2 (MESA Press, Chicago, IL, 2011), using an eight-facet (participant × health status × role × gender × education × acuity × length of stay × item) Rasch model.

We selected a Rasch model because such models have been commonly used to analyze rating scale data and have demonstrated their value in studies that evaluated psychometric properties of faculty measures [22,23] and patient ratings [1]. Rasch models, which fall within the family of modern measurement models called item response theory (IRT) [24,25], provide several improvements in scaling items and people when compared to traditional methods based on classical test theory (CTT). For example, the Rasch model transforms both participants' ability and items' difficulty onto the same metric, so person ability and item difficulty can be meaningfully compared. This shared scale also allows researchers to examine potential effects across item, person, and/or facet (e.g., how specific items, participants, or participants from specific groups performed). In addition, unlike those defined in CTT, Rasch model parameters are not sample- or test-dependent. This provides significantly greater flexibility in situations where different samples are used, such as in validation studies. Finally, Rasch models offer a built-in quality control measure that can be used to gauge the precision of measures via item-level standard errors of measurement (SEM), a function that is particularly important in studies that make inferences about relatively small samples [26]. A more complete description of Rasch measurement models can be found in Hambleton et al. [27].
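For orientation, a common form of the many-facet Rasch rating-scale model is shown below, following Linacre's formulation with a single additional facet written out for brevity; the paper itself does not print the equation, so the exact parameterization used in the Facets analysis is an assumption here.

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

Here \(B_n\) is the tendency of participant \(n\) to give high ratings, \(D_i\) is the difficulty of endorsing item \(i\), \(C_j\) is the effect of facet element \(j\) (e.g., an acuity level or LOS band), and \(F_k\) is the threshold of rating category \(k\) relative to category \(k-1\).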

2.3.1. Validity testing

A preview of the analyses we conducted to evaluate sources of validity evidence, with supporting references, is provided below.

Table 1. Demographic attributes of the PIVOT Survey sample in the ED.

Attribute | N | Valid percent
Sex
  Male | 44 | 47
  Female | 50 | 53
Health status
  Poor | 3 | 3.1
  Fair | 11 | 11.3
  Good | 23 | 23.7
  Very good | 39 | 40.2
  Excellent | 21 | 21.6
Participant role
  Patient | 86 | 85.1
  Caregiver | 15 | 14.9
Education
  Some high school | 3 | 3.0
  High school degree | 11 | 11.0
  Trade school | 5 | 5.0
  Some college | 12 | 12.0
  College degree | 38 | 38.0
  Grad school | 30 | 30.0
  Other | 1 | 1.0
Acuity level (a)
  2 | 23 | 26.7
  3 | 32 | 37.2
  4 | 30 | 34.9
  5 | 1 | 1.2
Ethnicity
  American Indian | 2 | 2.1
  Asian | 4 | 4.2
  Black | 22 | 22.9
  Hispanic | 13 | 13.5
  White | 54 | 56.3
  Other | 1 | 1.0
Language
  English | 91 | 91.9
  Spanish | 7 | 7.1
  Chinese | 1 | 1.0

(a) Applies to patients only.


- To evaluate evidence relevant to test content and each item's relationship to a single construct, we analyzed participants' responses using Rasch indices, including observed averages, item-measure correlations, and item outfit statistics [28].
- To evaluate evidence based on response processes (individual response patterns and fit between the participants' responses and the construct being measured), we followed the four essential guidelines presented by Linacre [29] and Wolfe and Smith [24].
- To evaluate evidence of internal structure, we employed a traditional method based on classical test theory used to evaluate reliability: internal consistency estimated by Cronbach alpha and inter-rater reliability estimated by the intraclass correlation coefficient (ICC) [30]; a minimal computational sketch follows this list.
- To evaluate evidence based on relationships to other variables, we compared measures of the PIVOT scale to measures of the PPS, which has been previously validated as a patient-centered assessment of satisfaction through items on communication, safety, and respect [3]. Further, we examined rating differences across the facets described earlier (health status, role, education, acuity, and length of stay).
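To make the classical-test-theory step concrete, here is a minimal Python sketch of Cronbach's alpha for a complete participants-by-items matrix. The data and shapes are illustrative, not the study data, and the ICC(2,k) computation (a two-way ANOVA decomposition per Shrout and Fleiss [30]) is omitted for brevity.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix with no missing data."""
    k = ratings.shape[1]                               # number of items
    item_variances = ratings.var(axis=0, ddof=1)       # per-item sample variance
    total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy 1-5 ratings from six hypothetical participants on four items.
toy = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(toy):.2f}")  # high alpha for these consistent toy data
```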

3. Results

3.1. Demographics and PIVOT responses

A total of 101 participants completed the survey. RAs reported that throughout the data collection, refusals were infrequent, and records revealed that during the second half of data collection, 48 of 56 individuals agreed to participate. The refusals were evenly divided between males and females, for an estimated 86 percent response rate during the second half of data collection. Participant demographic information is displayed in Table 1. The majority of participants were patients (N = 86), and mean LOS was approximately four hours (range 45–1024 min). As noted in Table 1, participants' self-reported health status tended to be good to excellent, and participants' ages covered a wide range (18–88 years old) with a mean of 48.2 years.

The distribution of responses on the PIVOT and PPS items is shown in Table 2, in the order the items were presented on the survey. The mean scores for frequency of observations of the PIVOT items ranged from 2.54 to 4.69 on a 5-point scale and do not include responses of 'no opportunity.' Standard errors of measure were low, ranging between .07 and .15. The positively worded team-related behaviors observed least often were related to team organization (items 12, 15). These two items also received the most ratings indicating the participant had 'no opportunity' to observe the activity (n = 17 each). The four negative team-related behaviors were observed on occasion, though least frequently (items 5, 6, 9, and 10).

3.2. Validity evidence

Specific findings related to the four sources of validity evidence, (1) test content, (2) response processes, (3) internal structure, and (4) relationships to other variables, are described below.

3.2.1. Evidence relevant to test content

The Rasch item-measure correlations were all positive and ranged from 0.28 to 0.57 (Data Supplement 1). Review of Rasch item outfit statistics indicated that 15 of 16 items had outfit statistics that fell below 2.0. Only item 9 (I saw team members treating each other with a lack of respect) had a mean-square outfit statistic above this threshold (2.06), indicating this item was associated with a higher amount of unexpected variance and noise caused by outlying ratings. These findings indicated that the items, with the exception of item 9, seemed to measure the intended construct.
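For reference, the outfit mean-square used for this screening is, following Linacre [28], the average squared standardized residual over the N participants who rated item i:

\[
\mathrm{Outfit\ MnSq}_i = \frac{1}{N}\sum_{n=1}^{N} z_{ni}^{2},
\qquad
z_{ni} = \frac{x_{ni} - E[x_{ni}]}{\sqrt{\operatorname{Var}(x_{ni})}},
\]

where the expectation and variance of rating \(x_{ni}\) are taken under the fitted Rasch model. Values near 1.0 reflect the amount of noise the model expects, while values above roughly 2.0 (as for item 9) flag unmodeled, outlier-sensitive variation.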

3.2.2. Evidence relevant to response processes

Person fit. Beginning with the underfitting ratings, 10 (10.1%) participants had standardized outfit statistics higher than the ideal (2.0), indicating a high amount of rating variability (error). Reviewing overfitting responses, 7 (7.1%) participants had standardized outfit statistics below the acceptable boundary of −2.0. Analysis indicated a majority (82.8%) of participants had standardized outfit statistics within acceptable limits, suggesting inferences from this group of participants aligned with the "average" participant.

Scale functioning. The number of observations in each category was well over the minimum of 10 (Data Supplement 2). This suggested there were an adequate number of ratings and that they were reasonably spread over the categories used to capture participants' opinions of team-related behaviors. Used observation counts ranged from 75 (category 1 – Not at all) to 769 (category 5 – All the time). Although the distribution of observations was reasonably smooth and unimodal, it was skewed toward higher ratings, with the majority of ratings (55%) falling in category 5 (All the time). Average measures increased with each higher category and ranged from −0.12 (category 0 – No opportunity) to 1.05 (category 5 – All the time), supporting the idea that higher categories correspond to "more" of the variable (the condition called monotonicity). For all but one category, the outfit mean-square (Outfit MnSq) values fell below 2.0. The exception, category 1 (Not at all), had an outfit mean-square value at 2.0, indicating that there was some noise (error) associated with that particular category.

Table 2. Descriptive statistics for ratings on the PIVOT Survey items in the ED.

PIVOT item | Observed average (SD) | SEM | Total N | 1 | 2 | 3 | 4 | 5 | No opportunity
1. I liked the way the team worked together | 4.07 (1.53) | – | 101 | 0 | 2 | 8 | 22 | 59 | 10
2. I thought team members enjoyed working together | 3.85 (1.75) | .08 | 99 | 0 | 2 | 3 | 27 | 52 | 15
3. I thought the team coordinated patient care well | 4.16 (1.33) | .09 | 100 | 4 | 9 | 9 | 0 | 57 | 6
4. I thought team members kept each other informed | 3.96 (1.57) | .08 | 101 | 1 | 2 | 14 | 17 | 57 | 10
5. I felt as if team members talked in front of me as if I wasn't there (a) | 4.40 (1.36) | .11 | 101 | 2 | 2 | 3 | 11 | 77 | 6
6. I felt that team members told me conflicting things (a) | 4.47 (1.26) | .12 | 99 | 2 | 3 | 4 | 7 | 79 | 4
7. I thought team members supported each other to get the work done | 3.99 (1.65) | .08 | 100 | 3 | 2 | 1 | 26 | 57 | 11
8. I felt team members were considerate of one another | 4.03 (1.68) | .08 | 100 | 2 | 2 | 1 | 21 | 62 | 12
9. I saw team members treating each other with a lack of respect (a) | 4.48 (1.43) | .11 | 100 | 2 | 2 | 1 | 1 | 87 | 7
10. I heard arguments between team members, inside or outside the room (a) | 4.69 (1.11) | .15 | 100 | 0 | 6 | 0 | 0 | 89 | 5
11. I was told the names of the people who worked on the patient care team | 4.00 (1.50) | .09 | 100 | 9 | 3 | 9 | 17 | 58 | 4
12. I was told how the team worked as a whole | 2.54 (1.89) | .07 | 100 | 27 | 3 | 16 | 12 | 25 | 17
13. People told me what their jobs were on the team | 3.93 (1.48) | .09 | 100 | 9 | 7 | 7 | 21 | 53 | 3
14. I knew who was in charge | 3.82 (1.64) | .08 | 101 | 8 | 5 | 8 | 16 | 56 | 8
15. I heard team members use each others' names as they worked together | 2.74 (1.82) | .07 | 100 | 14 | 12 | 17 | 15 | 25 | 17
16. I felt there was good communication between team members | 4.12 (1.44) | .09 | 100 | 1 | 1 | 9 | 23 | 58 | 8
PPS 1. I felt well informed due to good communication with the doctor(s) | 4.33 | – | 99 | 1 | 6 | 7 | 30 | 55 | NA
PPS 2. I felt I was treated with respect by the doctor(s) | 4.34 | – | 99 | 0 | 3 | 4 | 22 | 70 | NA
PPS 3. I felt safe at all times | 4.67 | – | 99 | 0 | 2 | 3 | 21 | 73 | NA

5-point frequency scale for PIVOT items: 1 = not at all; 2 = rarely; 3 = sometimes; 4 = often; 5 = all the time. Rating for PPS items: 5-point Likert scale with 1 = strongly disagree to 5 = strongly agree.
(a) These items were reverse-coded to view comparably with the remaining positively worded items.


3.2.3. Evidence relevant to internal structure

Internal consistency, estimated by Cronbach alpha, was found to be high (r = .87), indicating all items contributed to measuring a common construct to a reasonably high degree. Inter-rater reliability, estimated by intraclass correlation (two-way random, average measures), was also found to be high, ICC(2,101) = 0.85, 95% CI [0.79, 0.89], suggesting high agreement in participants' ratings for each item.

Table 3. Indices relevant to relationship to other variables: differences in overall ratings across facets.

Facet / Group | Observed average | Model SEM | Overall P value
Health | | | .43
  Excellent | 4.1 | .05 |
  Very good/poor | 3.9 | .03/.11 |
  Good/fair | 3.7 | .04 |
Role | | | .18
  Caregiver | 4.0 | .06 |
  Patient | 3.8 | .02 |
Education | | | .97
  High school graduate | 4.6 | .10 |
  Some college | 4.1 | .07 |
  Trade school | 4.0 | .09 |
  College degree | 3.8 | .03 |
  Some high school | 3.6 | .11 |
  Graduate school | 3.5 | .04 |
Acuity | | | .001
  N/A (caregiver) | 4.3 | .23 |
  Levels 4/3/2 | 4.0 | .04/.04/.05 |
  Level 5 | 3.8 | .18 |
Length of stay | | | .001
  Long (8–12 h) | 4.2 | .13 |
  Very long (>12 h) | 4.0 | .08 |
  Moderate (4–8 h) | – | .06 |
  Short (2–4 h) | – | .04 |
  Very short (<2 h) | 3.4 | .04 |

3.2.4. Evidence based on relationship to other variables

Pearson correlation of the summed PIVOT ratings to summed PPS ratings was moderate, r(101) = .43, p < 0.01. Moderately correlated ratings across the 16 PIVOT and the three PPS items indicate a positive association between patients' responses on the PIVOT items and on the PPS, offering evidence of convergent validity.
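A convergent-validity check of this kind reduces to a Pearson correlation between participants' summed scale scores. A minimal sketch with made-up totals (not the study data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant totals; the study used 101 real participants.
pivot_totals = np.array([72, 65, 80, 58, 77, 69])  # summed 16 PIVOT items
pps_totals = np.array([13, 11, 15, 10, 14, 12])    # summed 3 PPS items

r, p = pearsonr(pivot_totals, pps_totals)          # convergent-validity check
print(f"r = {r:.2f}, p = {p:.3f}")
```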

Examination of rating differences across participant groups defined by the facets described earlier (health, role, education, acuity, and LOS) clearly supported evidence of generalization for some groups, but less so for others. As summarized in Table 3, evaluation of rating differences across each facet indicated there were no statistical differences in overall ratings by self-reported health status (p = 0.43), role (p = 0.18), or education (p = 0.97). There were, however, significant differences in overall ratings across acuity (p = 0.01) and LOS (p = 0.01).

4. Discussion and conclusion

4.1. Discussion

The objective of this study was to evaluate validity evidence for the use of the PIVOT Survey to solicit participants' views of health care teams. After administering the PIVOT Survey to a sample of ED patients and caregivers, we evaluated a broad range of validity evidence based on the Standards framework [20,21]. Evidence relevant to response processes, internal structure, and relationships to other variables was well supported by our findings. Evidence relevant to test content was supported, although Rasch item fit statistics indicated that one item, item 9 (I saw team members treating each other with a lack of respect), may not lend information to the construct or measurement model and could be considered for removal. Results indicated no significant differences in response patterns between groups of participants defined by role (patient or caregiver), self-reported health status, and education. This suggests that items may not be treated differently across these groups and supports evidence of generalization across the variety of populations identified in this study. Response patterns did differ between groups based on the treatment variables of acuity and LOS, perhaps reflecting a greater opportunity for patients who are sicker or stay in the ED longer to observe more team interactions. We were able to make these inferences with a high degree of confidence, as the associated standard errors of measure were very low, indicating a high level of precision even with a smaller sample size [26].

Similar to other studies that evaluated patient-reported measures of care, responses from participants tended to be positively skewed, as patients tend to respond positively about care providers [1,3]. In future applications of the PIVOT survey, one might attempt to address skewed responses, for example by collapsing or combining rating categories. With the Communication Assessment Tool (CAT), for instance, researchers collapsed patient ratings into two categories: one for the highest level of responses and a second for all others [1].
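A collapse of that kind is a one-line transformation. A sketch in Python with a hypothetical column name, mirroring the CAT's top-box scoring [1]:

```python
import pandas as pd

# Hypothetical PIVOT ratings on the 1-5 frequency scale.
ratings = pd.DataFrame({"item_1": [5, 4, 3, 5, 2, 5]})

# Top-box collapse: the highest category vs. all lower categories.
ratings["item_1_topbox"] = (ratings["item_1"] == 5).astype(int)
print(ratings)
```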

In addition to the validity evidence evaluated in this study, the findings also highlight potential areas of improvement for team-based behaviors in the emergency department. For example, a high percentage of patients did not feel that they were informed about how the ED team worked as a whole (item 12). If the PIVOT Survey revealed this as a problem area, the deficiency could be addressed at a system level. Previous research shows that the simple intervention of providing patients with ED information sheets summarizing the ED purpose and how the team works significantly increased patients' satisfaction and perceptions of the quality of care [5,31]. In general terms, the PIVOT Survey may serve as a mechanism for patient input about their views of teamwork and engage patients in the quality improvement process. Patient engagement in planning and evaluating health services has been noted as a positive step in improving the quality of care [3,32].

Conceptually, this survey focuses on patients' observations of teamwork behaviors. Although the survey does not provide details of the exact circumstances of the observations, it does inform us that patients and caregivers are in fact observing, and forming opinions on, the team interactions that occur in their presence in the ED. Our research team is not aware of any other survey instruments that assess specific team interactions from the patient perspective. Having patients assess team interactions appears to be a promising approach for providing team feedback. This survey could also be used in conjunction with other team assessment tools and teamwork training programs administered by skilled observers or by team members themselves, such as the Communication and Teamwork Skills (CATS) assessment [11] or the Safety Attitudes Questionnaire [13]. In this way, physicians may respond more positively to patient satisfaction metrics that reflect different viewpoints.

In this study, PIVOT Survey scores, intended to measure patients' opinions of observed teamwork behaviors, were positively associated with the PPS scale scores, which measured patients' perceived quality of care. This indicates some relationship between these two areas of patients' views. Furthermore, research into the relationship between clinician–patient communication and health outcomes provides a direction for further study into how patients' views of teamwork may influence clinical care parameters, such as the development of trust between patient and care team or motivation to follow discharge instructions [32].

4.1.1. Limitations

There are a number of limitations related to the interpretation and application of the findings from this study. The first limitation is the small participant sample size. Although Linacre suggested 100 participants would be adequate for 95% confidence within ±0.5 logits for most purposes [33], the study may have benefited from a larger sample. The second limitation is the homogeneity of the sample. Although our best attempts were made to capture responses from a broad sample of participants, the sample was from a single institution. Also, to limit the occurrence of careless, incomplete, or inaccurate responses due to the distraction of acute illness, the sample did not include participants with critical illness requiring hospital admission. Future expansion of the study to include other institutions would increase the sample size and, ultimately, improve the variability of ratings.

In spite of these limitations, the study, combined with previous work, supports validity evidence relevant to test content and offers additional evidence relevant to response processes, internal structure, and relationships to other variables.

4.2. Conclusion

Evidence from this initial validity study indicates the Patients' Insights and Views Observing Teams (PIVOT) Survey may be used to measure health care team behaviors in the emergency department as observed by patients and caregivers of varied health status and education levels. These findings also indicate that the PIVOT survey could be used to engage patients in the evaluation of care teams. Further exploration of the PIVOT survey may offer insights into patients' views in other settings, such as EDs serving people of different demographic characteristics. In addition, these results, combined with prior stages in our development of the PIVOT survey [18,19], indicate the potential for the PIVOT survey to be considered for the assessment of other health care teams, such as obstetrics, intensive care, or ambulatory group visits. Further research with the PIVOT survey may reveal the functionality of using this instrument across a broader range of settings to measure patients' perceptions. The PIVOT survey could provide a unique means for monitoring teamwork through patient and caregiver observation and improve patient participation and engagement in care.

4.3. Practice implications

How this tool can be optimally employed in clinical practice is yet to be determined; however, we believe that it could be a valuable feedback tool for team assessment. Although most ED providers and clinicians in other team-based practice settings are aware that patients can hear what happens on the other side of the curtain, this is often forgotten in the midst of a busy shift. In addition, it is unknown how these patient observations may relate to other metrics, such as patient satisfaction, patient confidence in their care team, or patient adherence to medical recommendations.

Funding

Not applicable.

Conflict of interest

All authors affirm that there are no actual or potential conflicts of interest, including any financial, personal, or other relationships with other people or organizations within three years of beginning the submitted work, that could inappropriately influence, or be perceived to influence, their work.

Acknowledgments

The authors thank the following for their contributions to this research project: Alex Gerard, Lauren Davis, Nick Seivert, and Anna Nannicelli, for their roles in collecting PIVOT surveys in the ED.

Appendix A. Supplementary data

Supplementary material related to this article can be found, in the online version, at http://dx.doi.org/10.1016/j.pec.2014.06.002.

References

[1] Makoul G, Krupat E, Chang C. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns 2007;67:333–42.
[2] Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care 2002;14:353–8.
[3] Siassakos D, Clark J, Sibanda T, Attilakos G, Jeffreys A, Cullen L, et al. A simple tool to measure patient perceptions of operative birth. BJOG 2009;116:1755–61.
[4] Committee on Quality of Health Care in America, Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
[5] Coulter A. Patient engagement – what works? J Ambul Care Manage 2012;35:80–9.
[6] Guss DA, Leland H, Castillo EM. The impact of post-discharge patient call back on patient satisfaction in two academic emergency departments. J Emerg Med 2013;44:236–41.
[7] Glickman SW, Boulding W, Manary M, Staelin R, Roe MT, Wolosin RJ, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes 2010;3:188–95.
[8] Emanuel L, Berwick D, Conway J, Combes J, Hatlie M, Leape L, et al. What exactly is patient safety? In: Henriksen K, Battles JB, Keyes MA, Grady ML, editors. Advances in patient safety: new directions and alternative approaches, Vol. 1: Assessment. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Available at http://www.ahrq.gov/downloads/pub/advances2/vol1/advances-emanuel-berwick_110.pdf [accessed 10.08.13].
[9] The Joint Commission (TJC). Sentinel event statistics data – root causes by event type (2004–fourth quarter 2010); May 25, 2011. <http://www.jointcommission.org/Sentinel_Event_Statistics/> [accessed 10.08.13].
[10] Uhlig PN, Brown J, Nason AK, Camelio A, Kendall E. John M. Eisenberg Patient Safety Awards. System innovation: Concord Hospital. Jt Comm J Qual Improv 2002;12:666–72.
[11] Frankel A, Gardner R, Maynard L, Kelly A. Using the Communication and Teamwork Skills (CATS) assessment to measure health care team performance. Jt Comm J Qual Patient Saf 2007;33:549–58.
[12] Cooper S, Cant R, Porter J, Sellick K, Somers G, Kinsman L, et al. Rating medical emergency teamwork performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation 2010;81:446–52.
[13] Sexton JB, Helmreich RL, Neilands TB, Rowan K, Vella K, Boyden J, et al. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research. BMC Health Serv Res 2006;6:44–54.
[14] Rathert C, Brandt J, Williams ES. Putting the 'patient' in patient safety: a qualitative study of consumer experiences. Health Expect 2012;15:327–36.
[15] Mercer LM, Tanabe P, Pang PS, Gisondi MA, Engel KG, Donlan SM, et al. Patient perspectives on communication with the medical team: pilot study using the Communication Assessment Tool-Team (CAT-T). Patient Educ Couns 2008;73:220–3.
[16] Fan VS, Burman M, McDonell MB, Fihn SD. Continuity of care and other determinants of patient satisfaction with primary care. J Gen Intern Med 2005;20:226–33.
[17] Govindarajan P, Larkin GL, Rhodes KV, Piazza G, Byczkowski TL, Edwards M, et al. Patient-centered integrated networks of emergency care: consensus-based recommendations and future research priorities. Acad Emerg Med 2010;17:1322–9.
[18] Henry BW, McCarthy DM, Nannicelli AP, Seivert NP, Vozenilek JA. Patients' views of teamwork in the emergency department offer insights about team performance. Health Expect 2013. http://dx.doi.org/10.1111/hex.12148 [Epub ahead of print, 7 October].
[19] Henry BW, Rooney DM, Eller S, McCarthy DM, Seivert NP, Nannicelli AP, et al. What patients observe about teamwork in the emergency department: development of the PIVOT questionnaire. J Participat Med 2013;5:e4.
[20] American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 1999.
[21] Downing S. Validity: on the meaningful interpretation of assessment data. Med Educ 2003;37:830–7.
[22] Slatt LM, Steiner BD, Hollar DW, Chessman AW, Xin J, Hedgpeth MW. Creating a multi-institutional family medicine clerkship examination: lessons learned. Fam Med 2011;43:235–9.
[23] Iramaneerat C, Myford CM, Yudkowsky R, Lowenstein T. Evaluating the effectiveness of rating instruments for a communication skills assessment of medical residents. Adv Health Sci Educ 2009;14:575–94.
[24] Wolfe EW, Smith Jr EV. Instrument development tools and activities for measure validation using Rasch models: Part II – validation activities. J Appl Meas 2007;8:243–90.
[25] Rasch G. Probabilistic models for some intelligence and attainment tests (Copenhagen, Danish Institute for Educational Research) [expanded edition with foreword and afterword by B.D. Wright]. Chicago, IL: The University of Chicago Press; 1960/1980.
[26] Hambleton RK, Swaminathan H. Item response theory: principles and applications. Boston, MA: Kluwer-Nijhoff Publishing; 1985.
[27] Hambleton RK, Swaminathan H, Rogers HJ. Fundamentals of item response theory. Newbury Park, CA: Sage; 1991.
[28] Linacre JM. What do infit and outfit, mean-square and standardized mean? Rasch Meas Trans 2002;16:878.
[29] Linacre JM. Optimizing rating scale category effectiveness. In: Smith Jr EV, Smith RM, editors. Introduction to Rasch measurement: theory, models and applications. Maple Grove, MN: JAM Press; 2004. p. 258–78.
[30] Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979;86:420–8.
[31] Krishel S, Baraff LJ. Effect of emergency department information on patient satisfaction. Ann Emerg Med 1993;22:568–72.
[32] Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician–patient communication to health outcomes. Patient Educ Couns 2009;74:295–301.
[33] Linacre JM. Sample size and item calibration stability. Rasch Meas Trans 1994;7:328.