
IMPROVING ATTITUDES ABOUT EXIT EXAMS THROUGH A BETTER UNDERSTANDING OF THE EDUCATIONAL GOALS AND MOTIVATIONAL

FUNCTIONS THAT UNDERLIE THEM

by

LAURA S. WOODWARD

DISSERTATION

Submitted to the Graduate School

of Wayne State University,

Detroit, Michigan

in partial fulfillment of the requirements

for the degree of

DOCTOR OF PHILOSOPHY

2007

MAJOR: PSYCHOLOGY (SOCIAL)

Approved by:

Advisor Date

© COPYRIGHT BY LAURA S. WOODWARD

2007 All Rights Reserved


DEDICATION

When I think about the different people who helped me acquire the skills to complete the

monumental task of a dissertation, there are so many that I will not be able to list

them all. I would like to dedicate this to my mother who gave me a goal-setting journal instead

of a diary in the eighth grade. It meant that my dreams became something I could break down

and accomplish in concrete steps. For example, that year I became the first girl to represent our

middle school in the MathCounts competition. Completing this project traces back to my practice with a goal

book so long ago. In addition, many people have offered me encouragement and hope during

this process and this dissertation is dedicated to them. I especially remember the funny

mnemonic that a supervisor told me that she used to study for Qualifying Exams. I also

remember the friend who told me to stop studying every day at around five and then start fresh in

the morning. Finally, I would also like to dedicate this to my kind husband who has been such

a great support to me.


ACKNOWLEDGMENTS

This dissertation would not be possible without the interesting conversations and

mentorship from each of these experts. First, I would like to recognize my advisor who has

helped me to make the transition from a college student into a professional. Second, I would also

like to recognize the Conjunction Function research group which introduced me to the concept of

attitude functions. Some former members include Craig Wendorf and Sharon Hughes. That

interest in attitudes grew with discussions with Kalman Kaplan and David Williams about

attitudes and persuasion. Third, a number of people have encouraged me to learn more about

assessment of motivation and learning on a college level including Jan Collins Eaglin, Stuart

Karabenick, Jina Yoon and Cary Lichtman. Fourth, I would like to thank Robert Partridge, David

Williams and Sebastiano Fisicaro for their statistical guidance and support. Finally, I would like

to thank the following authors for their permission to use their scales in my study: Stuart

Karabenick, Noel Entwistle and Michael Middleton.


TABLE OF CONTENTS

Chapter Page

DEDICATION................................................................................................................................ ii

ACKNOWLEDGMENTS ............................................................................................................. iii

LIST OF TABLES......................................................................................................................... vi

LIST OF FIGURES ...................................................................................................................... vii

CHAPTER 1: INTRODUCTION................................................................................................... 1

CHAPTER 2: METHOD .............................................................................................................. 20

CHAPTER 3: RESULTS.............................................................................................................. 35

CHAPTER 4: DISCUSSION........................................................................................................ 49

APPENDIX A: Rotated Component Matrix (a) of the Multiple-Function Scale ......................... 57

APPENDIX B: Focus Group Summary of Items by Function and General Theme..................... 60

APPENDIX C: Argument Strengths of Potential Bullet Points for the Message......................... 63

APPENDIX D: Items of the Additional Scales Included in the Analysis .................................... 65

APPENDIX E: Manipulation Checks on Message Perception..................................................... 67

APPENDIX F: Effect Sizes (Eta Squared) for the Relationship Between Ad Attitude and Each

Non-cognitive Variable................................................................................................................. 68

APPENDIX G: Order Effects for the Practical Exam ................................................................... 69

APPENDIX H: Demographics ..................................................................................................... 70

APPENDIX I: Item Preference in the Messages .......................................................................... 71

APPENDIX J: HIC Approval ........................................................................................................ 72


REFERENCES ............................................................................................................................. 76

ABSTRACT.................................................................................................................................. 90

AUTOBIOGRAPHICAL STATEMENT..................................................................................... 91


LIST OF TABLES

TABLE PAGE

Table 1. Scales for the Direct and Proxy Measures..................................................................... 24

Table 2. Factor Structure for the Direct Instrument .................................................................... 26

Table 3. Direct Attitude Function Measure Items........................................................................ 27

Table 5. The Dependent Attitude Variables ................................................................................. 30

Table 6. Manipulation checks for the two exams ......................................................................... 31

Table 7. Relationship between the proposed covariates and the dependent variables................ 33

Table 8. Repeated Measures Analysis of Variance of Direct Function Measures on Exam

Attitudes ........................................................................................................................................ 38

Table 9. Exam Attitudes by Utilitarian [U] and Cognitive [C] Attitude Function Strength....... 38

Table 10. Marginal Means of Exam Attitudes for the Direct Measures of Utilitarian [U] and

Cognitive [C] Functions ............................................................................................................... 40

Table 11. Linear Trend Test Where the Attitude Function and Message Match ........................ 42

Table 12. Repeated Measures Analysis of Variance of Proxy Measures on Exam Attitudes ..... 43

Table 13. Exam Attitudes by Surface [SU] and Deep [D] Cognitive Orientation ..................... 43

Table 14. Marginal Means of Exam Attitudes for the Proxy Measures of Surface [SU] and

Deep [D] Cognitive Orientation ................................................................................................... 45

Table 15. Linear Trend Test Where the Attitude Function and Message Match ........................ 47

Table 16. Effect Sizes (η²) of the Matched Conditions ................................................................. 48


LIST OF FIGURES

FIGURE PAGE

Figure 1. Two persuasive messages about exit exams served as the attitudinal objects. From left,

the learning ad, then the practical ad. .......................................................................................... 22

Figure 2. Scree plot of Eigenvalues for the factor analysis of the direct instrument: Two main

factors are indicated by the bend in the plot................................................................................. 26

Figure 3. Means of Exam Attitudes by Utilitarian [U] and Cognitive [C] Attitude Function

Strength for the Direct Measure. .................................................................................................. 39

Figure 4. Marginal Means of Exam Attitudes for the Direct Measures of Utilitarian [U] and

Cognitive [C] Functions ............................................................................................................... 41

Figure 5. Means of exam attitude across all the proxy functional tertiles................................... 44

Figure 6. Marginal Means of Exam Attitudes for the Proxy Measures of Surface [SU] and Deep

[D] Cognitive Orientation ............................................................................................................ 46


CHAPTER 1

INTRODUCTION

Universities are facing growing pressure to offer proof that students are performing at

expected target levels. Education is increasingly viewed as an investment, and accountability

approaches encourage universities to justify that students are learning specific skills (Yudof,

2004). At the same time, academia is experiencing an uncertain future, as resources become

more scarce, public funding is reduced, and greater pressure regarding accountability is exerted

(Ramsden, 1998). For example, the Higher Learning Commission of the North Central

Association, which accredits our university, now requires an assessment of student academic

achievement in various areas of study (Young & Lakey, 2004).

Accountability for learning in higher education, although currently a hot topic, is not

new. Since the 1970s, a political agenda of accountability has entered the academic realm

(Ohmann, 2000). Similar summative evaluation is evident in the No Child Left Behind Act, the

current federal legislation for K-12, as well as in other legislation proposed by the Department of

Education specifically regarding higher education (Lane, 2004). Since the 1970s, there has been

less funding available to universities, even as more program justification is required through

institutional data (Watt, Lancaster, Gilbert, & Higerd, 2004). In addition to governmental

requirements, the emphasis placed on accountability is evident in university advertisements

aimed at parents, students, and governmental agencies which note the proportion of students who

graduate, the number who go on to graduate school, and other indicators of student success.

Accountability also manifests itself in the importance placed on rankings published by U.S.

News & World Report or the Princeton Review. Universities with low rankings face several


consequences, including diminished application levels, even if they manage to maintain federal

funds.

As a result of these accountability trends, the use of exit exams is on the rise.

Historically, testing has not been a popular subject with students (Brim, Glass, Neulinger,

Firestone, & Lerner, 1969). Mandated testing, especially in the absence of funding to support

evaluation programs, can be a hard sell for university administrators. Practically, colleges may

damage their public image if the initiation of testing is viewed negatively by the student body. In

the realm of K-12, the American Federation of Teachers (2003) has been critical of

accountability approaches which link funding to indicators of student success because of

problems with assessment methods. If these proposals are enacted as they have been in public

schools, and funding sources become dependent upon high scores from students, improving

student test performance will become very important. As many know from experience, student

attitudes toward tests can influence performance: A bad attitude toward an exam can translate

into poor performance. Research shows that messages included before an exam can influence

both student attitudes and performance. For example, Steele (1997) found that student test scores

on standardized exams could be depressed by priming certain attitudes before the exam.

Increased testing, as mandated by the federal government, may not be greeted favorably

by students (Education Week, 2002; Higgins, 2004; Schantz, 2000) who likely will not

appreciate having to take more tests. Because student attitudes toward these exams can influence

performance, administrators may find themselves struggling for the best way to present news of

this impending requirement. The literature indicates that attitudes toward academics can be

measured (Biggs & Leung, 2001; Entwistle, 1987; Schmeck, Geisler-Brenstein, & Cercy, 1991)

and that attitudes can be predicted and shaped (Katz, 1960; Shavitt, 1992; Snyder & Debono,


1985). On a pragmatic level, articles giving advice to educators grappling with new requirements

have suggested improvement of student motivation toward learning through changing instruction

methods. For example, Gueck (2003) has suggested the use of teaching techniques which build

intrinsic interest in the material. Similarly, Ramsden (1992) has encouraged the facilitation of

deep approaches to learning because they lead to higher grades, better long-term retention of

facts, and better organization of study habits. The present research is aimed at showing that

student attitudes toward university exit exams may be improved by the way in which the

message is presented. Messages that meet the motivational needs of students were

expected to be more effective at persuading students about the value of newly required exams

than messages that do not speak to those needs.

Persuasion regarding exit exams can be approached from different theoretical traditions.

On the one hand in the educational literature, it can be approached from the perspective of

learning styles. The learning styles approach looks at the best way to motivate a particular type

of learner. On the other hand, it can also be approached from social psychological theorizing

about functions underlying student attitudes. Although the educational and psychological fields

are very different, some of the ways they would go about motivating a student are similar. The

next section will describe the history of the educational approach toward motivating a student. It

will be followed by a description of how attitudes and psychological approaches can complement

the educational paradigm.

Education Researchers Developed Non-cognitive Predictors of Academic Performance

Improved understanding of student attitudes toward academic testing is not a

new goal. Many a professor has wondered about the best way to motivate their students to

accomplish more. Master professors have developed complex techniques of drawing in and


inspiring their students to learn. However, these techniques are often hard to describe because

they are accomplished in such an intuitive way. Hence, the puzzle of the unmotivated student has

inspired much research, both in education journals and in disciplinary journals such as those in psychology.

As a result, a number of measures have been created in an attempt to better understand

what motivates students. Brown and Holtzman (1966) developed one of the first learning style

inventories. It featured two unusual scales: favorable attitudes toward teachers and acceptance of

the cognitive orientation. This scale marked the beginning of the measurement of non-cognitive

variables and their influence on student academic performance.

Early educational work in the learning styles area took the shortcut of using broad but

indirect personality trait-like variables as measures of non-cognitive dimensions to predict

student approach to studying and learning. Indirect measurement involves using broad

personality measures to predict specific behavioral accomplishments. In contrast, direct

measures add a degree of specificity by targeting the object(s) of focal interest. Wechsler, who is

famous for his work on intelligence testing, offered an interesting commentary in 1951 about

misuses of measures like his. He criticized the use of indirect testing because of the danger of

misinterpretation when the results of a general test are used to predict a person’s beliefs or

performance on a more specific issue. His main criticism of indirect tests was this reduction of

accuracy. However, this approach was taken by most of the researchers in this area as a starting

point.

For example, in the United Kingdom, Entwistle and Entwistle (1970) began to study

psychological approaches toward academics. They found that introversion led to better study

habits, but high motivation to do well improved the performance of extroverts. Similarly, Entwistle

and Wilson (1977) explored the complicated motivating force of anxiety upon academic


performance. They found that strong performance, consistent use of study techniques, and high

motivation were related to fear of failure in a complicated way, generally with a positive

correlation except in the case of extremely high fear of failure, which correlated with ineffective

studying and poor grades.

Similarly, Biggs (1970), in Australia, began work on a measure to capture non-cognitive

variables. He found a relationship between anxiety and memory. Biggs used a behavioral

approach to describe learning processes in relation to arousal. Arousal was broken down into

different domains. One domain is called “utilizing,” which describes a surface, grade-oriented,

unquestioning acceptance of information presented and is found to lead to assessment anxiety.

Another domain is labeled “internalizing,” which is described as a deep, intrinsic interest in the

course content, a determination to understand, and an openness to different perspectives on the

material (Biggs, 1976).

In addition, Schmeck, Ribich and Ramanaiah (1977) developed the Inventory of Learning

Processes which looks at the tactics that students use to learn in different situations. Of interest

here is their scale that measured what they called the synthesis/analysis approach. Synthesis

items deal with integration of meaning from various sources and abstracting that meaning into

useful themes. The synthesis scale was later renamed the Deep Processing Scale (Schmeck,

1983). Early validation research found a correlation between the Inventory of Learning Processes

and measures of academic achievement such as reading comprehension, measured by the

Nelson-Denny (Schmeck & Phillips, 1982).

As the research continued, a categorization of student academic orientation emerged.

This categorization distinguished intrinsic/deep from extrinsic/surface (Biggs & Leung, 2001;

Entwistle, 1987; Schmeck et al., 1991). Intrinsic learners see learning as inherently motivating


and are motivated by the learning process. In contrast, extrinsic learners are more motivated by

the secondary, or “economic” benefits of education. For them, good grades are important in

helping to secure jobs, for example. Entwistle’s typology helps to describe the differing

responses that students have to various teaching methods. For example, this theory can help

explain responses to a professor’s experimental technique in my college of offering A’s

regardless of performance. Although the professor intended this as a way to reduce performance

anxiety, attendance lessened when surface benefits, such as grades, were removed. However, other

students continued to work for the benefit of learning; they evidently were not motivated by

external rewards. Entwistle’s theory can explain the motivations behind these different levels of

student effort.

The differentiation between deep and surface approaches to learning was developed by

Marton and Saljo in 1976. In their study, they gave students an ambiguous task and had them

describe how they went about studying for it. The researchers found that the approaches reflected

different levels of processing. Deep learning was found to be related to an intention to

understand while surface learning was related to an intention to reproduce. The surface approach

was not necessarily associated with minimal effort, nor was the deep approach associated with

greater effort. Instead, the focus was on the motivation behind the effort. Marton and Saljo’s

terminology gave a language to the research teams interested in non-cognitive, motivational

variables, such as the Entwistle, Schmeck, and Biggs research teams.

Based on this deep/surface distinction, Entwistle and Ramsden (1983) developed the

Approaches to Studying Inventory (ASI), which explored students’ motivations toward

learning. They identified three factors: deep, surface, and strategic approaches to learning.

Entwistle and McCune (2004) describe the deep approach as one that is focused on the ideas,


comprehension, critical use of evidence, and intrinsic learning for learning’s sake. In contrast,

the surface approach reflects an extrinsic motivation in which the student is geared toward

meeting the syllabus requirements and avoiding failure.

Three main researchers have actively pursued the deep/surface distinction in their

research: Entwistle, Schmeck, and Biggs. These researchers have adopted similar language in the

measures they developed; a comparison of the scales suggests good convergent validity. For

example, the correlation was .64 between Entwistle’s deep approach and Schmeck’s elaborative

processing. Also, it was .50 between Entwistle’s surface approach and Schmeck’s surface

processing (Entwistle & Waterston, 1988, p. 260).
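Convergent-validity coefficients of this kind are ordinary Pearson correlations between respondents' scores on two scales intended to tap the same construct. The following sketch is illustrative only and is not part of the dissertation: the scale scores are invented, and the variable names are hypothetical stand-ins for a deep-approach score and an elaborative-processing score.

```python
import numpy as np

# Hypothetical per-respondent scale scores on two instruments that are
# intended to measure the same construct. Values are invented for
# illustration; they are not data from any of the cited studies.
deep_approach = np.array([3.1, 4.2, 2.8, 4.5, 3.9, 2.5])
elaborative_processing = np.array([2.9, 4.0, 3.2, 4.4, 3.6, 2.7])

# Pearson correlation between the two score vectors. A high positive r,
# like the .64 and .50 values reported above, is evidence of convergent
# validity between the scales.
r = np.corrcoef(deep_approach, elaborative_processing)[0, 1]
print(round(r, 2))
```

With these invented scores the two scales track each other closely, so the resulting correlation is strongly positive, as would be expected of two valid measures of one construct.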

Research has continued on these measures and they have been adapted to improve their

reliability and factor structure. Recent revisions indicate that Schmeck’s 18-item subscale for

deep processing has an alpha of .92 (Schmeck, Geisler-Brenstein, & Cercy, 1991, p. 355). In

addition, the Biggs measure has an alpha of .62 for deep motivation and .72 for surface

motivation using a five-item scale for each measure (Biggs, Kember, & Leung, 2001, p. 135).

Finally, the Entwistle measure has reliabilities of .84 for the deep approach and .80 for the

surface approach (McCune & Entwistle, 2000, p. 2).
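The alpha coefficients quoted above are Cronbach's alpha, an internal-consistency estimate computed from item variances and the variance of the summed scale score. As a minimal sketch (not from any of the cited studies, and assuming an items-in-columns score matrix), the coefficient can be computed as follows:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Items that rise and fall together across respondents push alpha toward 1, which is why the .92 reported for the 18-item deep processing subscale indicates high internal consistency, while values in the .62 to .84 range for the shorter scales are more modest.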

An Analysis of the Educational Approaches

A recent critique of the Entwistle measure (Richardson, 2004) notes that it demonstrates

reasonable stability over time, moderate convergent validity with scores on other questionnaires,

and reasonable levels of criterion-related validity and discriminant power. In personal

communication (2005), Richardson recommended the Entwistle measure over the Biggs and

Schmeck scales. A detailed critique of the three measures is available in Researching Student

Learning by John T.E. Richardson (2000). In short, he indicates that both the Biggs and


Schmeck models have problematic factor structures. Attempts to replicate the Biggs factor

structure have failed, and the factor structure appears to differ substantially across different

ethnic groups. Similarly, the Schmeck factor structure fails to replicate across studies. Richardson

(2000) also criticized its focus upon levels of cognitive processing which has been abandoned in

current memory research. In contrast, Richardson indicates that the Entwistle measure has a

reproducible factor structure and satisfactory psychometric properties.

Although these measures have furthered our understanding of variation in student

motivation, a limitation of these measures is that they examine individuals out of situational

context. Despite limitations in precision, this has been a simplified methodological shortcut

which has furthered research across many theoretical traditions. In psychology, trans-situational

individual differences have been the mainstay of personality research simply because they are

easier to capture given the limitations of our measurements (Shoda & Mischel, 1996). But the

interaction between situation and personality trait occurs within predictable patterns. The

differentiation of meaningful patterns of interaction is the new task of personality theorists as it

will yield a more accurate prediction of behavior (Mischel & Shoda, 1995). A method that

analyzes personality variables within the framework of the situation should afford a more

accurate understanding of a student’s response to the demands of a particular type of situation.

For example, a person may be differently motivated when it comes to the idea of a test than to

the idea of a discussion.

The Need for a Direct Assessment of Non-cognitive Variables

The direct approach is an important one to consider. Essentially it takes the view that

people have various attitudes and motives which are contextually sensitive. It is a common

misperception that people behave the same way in different situations (Heider, 1958; Ross,


1977). However, attitudes and behaviors can vary in relationship to situations (Firestone, Kaplan

& Moore, 1974). For example, a person inclined toward yelling probably will not be found

yelling in a hospital room full of sleeping babies. Similarly, students have different motivational

approaches toward different situations.

In this vein, Pintrich’s (1991) Motivated Strategies for Learning Questionnaire [MSLQ]

is one of the first direct measures in the non-cognitive variables area, as it frames its questions not

as general personality descriptions but as personality within situational context. It limits the student

to evaluating him or herself within one specific type of situation, as it asks the individual to

apply the questions to a particular class. Although it was normed in introductory psychology

classes, which eliminated situational variance, the measure can be adapted to fit any classroom.

This seemingly small difference allows the MSLQ to be more direct than other measures of

non-cognitive variables as it asks students how they are motivated in particular classes, reducing

the error inherent in indirect measurement that seeks answers to broad general statements across

multiple situations.

With roots at the University of Michigan, the survey assesses a number of areas of

motivation that can influence a student’s learning and academic performance. The MSLQ

measures self-efficacy, extrinsic academic orientation, interest in the subject, and other variables.

These variables are intended to measure self-regulated learning: students’ ability to monitor,

regulate, and reflect on learning. From the self-regulated learning paradigm, students are not

viewed as simply passive users of one particular learning style. Rather, they actively note the

extent to which they can learn in a particular situation and can make adaptations to increase their

learning. One aspect of being a self-regulated learner is one’s score on the intrinsic scale, which

is similar to the measurement of being a deep learner. This is contrasted to the extrinsic aspect of


being a low self-regulated learner, what Dweck and Leggett (1988) described as a ‘helpless’

student who avoids challenges and only enjoys tasks they do well. Self-regulated learners are

what Dweck and Leggett would call ‘mastery-oriented’ students who seek out challenging tasks.

To put it in the earlier educational terminology, intrinsic self-regulated learners are like deep

learners, with a goal of learning, while the extrinsic, non-self-regulated learners are described as

performance-oriented students, with a surface learning goal.

Direct measurement, available in such measures as the MSLQ, allows for a more accurate

assessment of the bases of attitudes within a particular setting than measures which ask for a

general motivational disposition across multiple situations. The literature indicates that students

are motivated differently across situations (i.e., Entwistle & Ramsden, 1983). Perhaps they have

a deep approach toward major-field course readings, but a surface approach toward required

classes outside their major. A generalized measurement of attitude toward academics would not

capture this variation in attitude.

The MSLQ is direct and has a scale to measure intrinsically self-regulated (deep)

/extrinsically-regulated (surface) learning. This move toward direct measurement in the

educational literature will be interesting to watch. However, for now, the MSLQ captures the

intrinsic/extrinsic concept only incompletely; the most recent version of this measure has only a

few items that deal with the intrinsic/extrinsic (deep/surface) construct (Karabenick, personal

communication, 2004).

The ability to differentiate intrinsic learners from extrinsic learners is important because

of its implications for motivation. When we understand what functions motivate a person’s

attitude, we are in a better place to change the attitude. Knowing that a person is only interested

in the extrinsic practical benefits of an education is helpful when determining how to persuade


him or her. Many of the educational measures in the literature deal with logistical concerns such

as study skills, yet they ignore issues of motivation. In a political climate of increased testing,

knowledge regarding what influences attitudes toward exit exams can potentially improve

student test results.

Specifying the object of the attitude can further improve persuasion. Perhaps a person is

only interested in knowing what to study in order to pass a chapter exam, but is interested in

learning more from a non-graded exit exam. Similarly, another student might be interested in the

intrinsic learning opportunities for improvement in a chapter exam, but only interested in job

market opportunities available to a student attending a program with exit exams. The situation

can change the functional orientation.

Knowing more about a student’s motivation can facilitate the development of more

positive attitudes toward exit exams. An extrinsically-oriented student may be responsive to one

kind of message while an intrinsically-oriented student may be more responsive to another.

Correspondingly, having more information such as a specification of the type of object students

are responding to can further strengthen message persuasiveness. Message perception can be

enhanced by attending to both motivation and attitude object.

Similarities to the Field of Attitude Functions in Social Psychology

The change reviewed above from indirect to more direct measurement of motivation has

parallels in social psychological approaches to attitude measurement. An early pioneer in this

area was Daniel Katz. Like the educational researchers, he was focused upon the motivational

bases of attitudes, “The reasons people hold the attitudes they do” and the process of attitude

change (1960, p. 170). Like the educational researchers, he noted that “The same attitude can

have a different motivational basis in different people” (p. 167). For example, a student may


have the same attitude toward rigorous academic work as does a classmate but be motivated by

different goals. Furthermore, he asserted that, “Unless we know the psychological need which is

met by the holding of an attitude we are in a poor position to predict when and how it will

change” (p. 170).

Given Katz’s line of reasoning, one student may be more motivated by the reward of

good grades, while the other may be more motivated by learning for its own sake. Katz (1960)

offered descriptions of a number of different functions that an attitude could hold and said that

persuasion would be enhanced by appeals that targeted the function that motivated a person.

When we consider the educational literature, deep and surface approaches to learning could be

categorized into the Katz motivational functions. The deep approach would fall under the gestalt

function of cognition/knowledge. In comparison, the surface approach would fall under the

behavioral function of utility.

In an evaluation of this literature, Eagly and Chaiken (1993) noted that much enthusiasm

followed the Katz functional approach. However, researchers, while originally enamored by the

Katz (1960) typology of motivational functions, found a stumbling block when they attempted to

measure the motivations that underlie people’s attitudes. For example, research was limited

because the functions were idealized and researchers predicted the attitude functions to be

mutually exclusive. Yet in actuality, an attitude may reflect the simultaneous operation of

several functions. In addition, some researchers originally tried to measure the functional

motivations of attitudes by using extant personality measures such as the F scale (Adorno,

Frenkel-Brunswik, Levinson and Sanford, 1950) and the MMPI (Hathaway & McKinley, 1942)

to indirectly infer the person’s functionally ego-defensive roots for racial attitudes. These

methods were very limited in what they could predict about a person, and so the research


momentum was slowed. Shavitt and Nelson (2002) have noted an early disconnect between the

theory and readily testable models, in addition to a lack of acceptable techniques for measuring

functions. Eagly and Chaiken (1998) echoed this commentary by noting that there was not a lot

of work in this area during the period of 1965-1985 due to the lack of accepted methods to

operationalize the functions.

However, eventually, progress was made with the development of the Need for Cognition

Scale (Petty & Cacioppo, 1979) to measure the cognitive function and the Self-monitoring Scale

(Snyder & DeBono, 1985) to measure the social-adjustive function. Although these measures

moved the field forward through offering standardized measurement, they took the broad trait

approach which was limited in its ability to capture the motive functions operative in particular

situations. This approach is similar to the trait approach which was taken in education.

Attitude Functions in Relation to Exit Exams

Two of the functions that Katz (1960) suggested are of direct relevance to the topic of

exit exams. One is the cognitive function and the other is the utilitarian function. The cognitive

function would appear to cover much the same domain as Entwistle’s deep approach to learning.

Influenced by the existential psychological paradigm, this is a focus upon learning for the love of

it. Cacioppo, Petty and Kao (1984) suggest that people are motivated by the need to think about

something, to give schematic meaning to an ambiguous world. Similarly, in the field of higher

education, Entwistle and Waterston (1988) suggest that some students actually seek out learning

for its own sake. They seek out meaning, find relationships between ideas, use evidence, and

find interest in ideas. The professor’s dream student, this type of student finds an intrinsic

interest in the material at hand. Such a student who values cognition in itself as a motivation may


look at the value of exit exams from a learning perspective and be most persuaded by arguments

that implicate their role in strengthening the learning process.

In comparison, the utilitarian function appears to cover what Entwistle (1987) has

characterized as the surface approach to learning. Influenced by the behaviorist paradigm, the

utilitarian function is focused on the attitude as a predictor of specific rewards and punishments

that acting on an object may offer. Influenced by the integration issues of his time, Katz (1960)

offered this example of the utilitarian function from the realm of education: A person who

refuses to let his or her child be bussed to an integrated school may fear that the child might

somehow be harmed by attending that school. So the utilitarian attitude function involves

liking or disliking the attitude object as specified by the practical focus on what the person is

going to gain or lose from the attitudinal object. The utilitarian function places an emphasis on

what the physical or symbolic attributes intrinsic to the object afford for a person. So people

evidencing a utilitarian function might be concerned with the costs or benefits their child might

face as a consequence of this policy. Katz (1960) suggests that appeals addressing these

utilitarian concerns would be particularly effective in persuading parents to permit the bussing of

their children to an integrated school.

Similarly, Entwistle (1987) suggests a pragmatic goal of students who are focused on

class work as a means to an end. These students are focused on the behavioral rewards. Their

sense of educational purpose is to concentrate only on what they specifically have to learn. They

are likely to read little beyond what is required to attain a grade satisfactory for extrinsic goals,

and worry about keeping up. So a utilitarian-oriented student may base his or her attitude toward

an exit exam on the beliefs about the practical benefits and be more persuaded by a description

of personal benefits that would come from taking the exam.


The Need for Direct Measures of Attitude Function

Although indirect approaches can identify the underlying functional motivations behind

attitudes toward academics, they are considered proxy measures because they do not ask about a

specific situation or issue. Although the indirect functional approach is useful because of its

target on personal motivation, it loses sight of the problem of situationally-linked attitudes.

To improve attitudes about exit exams, it may be useful to discover the motivations of

students toward this particular object before attempting to help them adapt to a testing

requirement. A direct approach toward attitudes regarding exit exams is indicated by the

attitudinal literature where there has been a move toward more direct measures of their

underlying function. Pioneer efforts include those of Herek (1986), in his research on attitudes

toward homosexuality, as well as Shavitt (1992), in her research on attitudinal reactions to

consumer goods.

Early on, Herek asked students to discuss their attitudes specifically in relationship to the

topic of homosexuality, utilizing qualitative methodology. He asked participants to write essays

about their attitudes toward homosexuality and then content analyzed them for functional

themes. Using these techniques, Herek (1986) was able to categorize participants’ attitude

functions.

One difficulty of these essay assessment techniques is Herek’s (1986) practice of asking

the participants to state where their attitudes come from. Research on heuristics (Tversky &

Kahneman, 1974) indicates that it is questionable whether most participants would know where

their attitudes come from. People tend to make errors when asked how they make decisions.

Except for distinctive attitude objects, attitudes may have been established gradually over a

period of years as a result of a variety of sources of information and/or direct encounters. A


checklist might make it easier for a participant to remember (from church, from my professor,

from my friends, and so on) and allow for easier replication, but these attitude roots would be

difficult to standardize because they would be different for different people. In addition, these

methods were time-consuming to score and vulnerable to scorer biases. To address these issues,

Herek later developed the Attitude Functions Inventory. Unfortunately, these developments

have yet to be evaluated in terms of testing specific functional hypotheses.

This was further explored in Shavitt’s (1989) more standardized but still direct approach.

She measured student attitudes specifically in relationship to various products. She asked

participants to list their thoughts on a number of consumer goods which were pre-selected to

elicit certain functional themes. Shavitt (1989, 1990) proposed the existence of direct linkages

between attitude functions and specific classes of objects. Evidence supporting such linkages

was gathered when participants were asked to list their thoughts regarding various objects, a

technique developed by Petty and Cacioppo (1981). She found that some attitudes serve only one

function while others are multifunctional (Ennis & Zanna, 2000; Shavitt, 1992). For example, a

flag was found to serve a value-expressive function of patriotism, while an air conditioner served

a utilitarian function of cooling. Other objects have more than one function, while some attitude

objects have more functions than others. Furthermore, Herek moved the field forward by directly

assessing attitudes, since indirect assessments are unable to capture the motivational bases of attitudes

toward important and complex objects, such as homes or relationships to others, which serve

many functions. For example, Ennis and Zanna (2000) found that attitudes toward an automobile

served four functions. This was echoed in our research (Woodward & Firestone, 2003) that

showed automobiles serve utilitarian, social-adjustive, cognitive and values-based functions.


The matching hypothesis (Petty & Wegener, 1998; Shavitt, 1992; Snyder & DeBono,

1985) asserts that persuasion regarding these object-linked functions can be particularly effective

when it targets the function served by the object. Evidence for the matching hypothesis was

evident in Snyder and DeBono’s (1985) finding that high self-monitors are drawn toward

products which enhance their social acceptance, while low self-monitors are less influenced by

socially enhancing types of products. The matching hypothesis was further supported in

Cacioppo and Petty’s (1982) finding that people high in the cognitive function, “need for

cognition,” were more likely to prefer a complicated version of a cognitive task than people low

in the cognitive function. The cognitive underpinnings of the matching hypothesis were better

documented when Shavitt and Nelson (2000) found that people were more likely to remember

thoughts about an object that matched their personal functional leaning; however, this memory

further increased when the object itself also matched this functional category. These results

support the need for measures of attitude function specific to the attitude object studied when

assessing the matching hypothesis. A person who generally has utilitarian attitudes may respond

with very value-oriented attitudes when presented with a flag, but will most likely respond

differently with other objects.

The Gap in both Psychological and Educational Literatures

The direct approach to understanding message acceptance based on a person’s attitude

function can greatly illuminate the process by which students can be helped to accept new

academic requirements. In our experience (Woodward & Firestone, 2003), specifying the

function(s) served by an attitudinal object increases the precision of attitude measurements.

More precise information about attitudinal functions improves our ability to predict how a

person will respond to a persuasive communication. Researchers have found stronger


correlations between attitude function and attitude toward a persuasive message when they are

both focused to relate to a specific situation than when one is a general measure and the other is

specific (Fishbein and Ajzen, 1972). For example, Woodward and Firestone (2003) found that a

direct measure of social-adjustive attitudinal function of automobiles showed a stronger

relationship to the degree of liking toward a functionally pitched advertisement for a

prestige-oriented car than did the Self-Monitoring Scale, a proxy measure of social-adjustive

attitude function.

One of the purposes of the present research was to develop a more direct measure of

student approaches to exit exams. Ideally, such a measure would combine motivational focus

offered by Entwistle’s research (1987), with the direct approach of Pintrich’s research (1991).

Furthermore, it would be informed by the functional literature of the psychology of attitudes. The

functional research area is useful because it suggests potentially efficacious avenues of

motivation for improving acceptance of exit exams. By gaining a deeper understanding of that

situation-motivation interface, we will more likely be able to influence attitudes toward the

exams.

Hypothesis

The idea that message-favorability can be enhanced when the text of the message

matches the reader’s dominant attitude function is often labeled the matching effect. A principal

hypothesis of the present study was that the matching effect would prevail. That is, higher

functional ratings in cognition would predict higher acceptance of exams described in terms of a

cognitively-framed message. Similarly, higher functional ratings for utility would predict higher

message acceptance of the practical message. Students were expected to be more appreciative of

messages with a strong match to their functional motivation than those with a poor fit.


The secondary hypothesis was that the direct measurement of attitudes toward exit exams

would be more highly related to message acceptance than indirect measurement. The direct

measure was piloted in this study and captures attitudes specifically to exit exams. In

comparison, the proxy measure developed by Entwistle (1987) captures a generalized attitude

toward education. The two types of measures vary in degrees of specificity, including one direct

measure and one proxy measure. Trend tests explored the relationship between attitude function

strength and the degree to which the message was received favorably. The direct functional

assessment was expected to prevail (as indicated by a stronger trend) over the proxy measure at

predicting message acceptance. So the direct measure was expected to have the stronger

relationship to message response, while the proxy measure, Approaches and Study Skills

Inventory for Students, ASSIST (Entwistle, Tait, & McCune, 2000), was expected to have a

weaker relationship to message favorability.


CHAPTER 2

METHOD

Participants

Participants for this study were 244 undergraduate students attending an urban commuter

campus. The participants received extra credit in their psychology course for their research

participation. Of the participants, 18% were male, the average age was 22, and 32% were

first-year students. Further demographic information assessing gender, age, ethnicity, major field

of study, year in school, student status, and experience with exit exams was tangential to the

principal focus of the study and are included in Appendix H.

The number of participants was large enough to allow for the effect of interest to be

found. Research has indicated that effect sizes in attitudes research are usually larger than

moderate, at d = .56 (Richard & Bond, 2001, p. 1). According to Murphy and Myors (1998, pp.

56-57), the number of participants needed for an adequately powered study at this effect size is

134.
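The required sample size can be cross-checked against the standard normal-approximation formula for a two-group comparison, n per group = 2((z_alpha + z_beta)/d)^2. The sketch below is illustrative only: the function name is mine, the study's actual design is more complex than a simple two-group contrast, and Murphy and Myors' tables rest on somewhat different assumptions, so the result need not equal 134 exactly.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for detecting effect size d with a
    two-tailed two-group comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # criterion for alpha = .05, two-tailed
    z_beta = z.inv_cdf(power)           # criterion for power = .80
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

needed = n_per_group(0.56)  # per-group n at the cited effect size
```

With 244 participants, the present sample comfortably exceeds both this approximation and the tabled value of 134.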

Design

A mixed design was employed; it consisted of two between-subjects variables and one

within-subject variable. This crossed two different types of attitude function measures (utilitarian

and cognitive) with two corresponding persuasive messages. The between-subjects variables

were trichotomizations of scores on two measures which assessed utilitarian and cognitive

attitude function. First, this was measured with a direct measure. Second, this was replicated

using a proxy measure of attitude function by Entwistle, Tait, and McCune (2000) that is seen as

less direct. Then, for the within-variable, each participant was asked to respond to two persuasive

messages, one describing a practically-framed exam and one describing an intellectually-oriented


exam. These were designed to relate to one or the other attitude functions and were shown to all

of the participants. Response to each message was assessed through answers to a series of

evaluative scales. The survey concluded with manipulation checks and demographic data.

Within Subjects Variables

Each participant viewed persuasive messages about two different exit exams. These

exams were each described to meet the motivational needs of two kinds of students. The first

was tailored to be attractive to students who were focused on the practical concerns regarding

testing (utilitarian function). The second was designed to appeal to those who enjoy thinking

about things (cognitive function). The messages were composed of theme-supportive bullet

points that participants in an earlier pilot study found to be of roughly equal strength (see

Appendix C). Each was crafted to engage different motivational functions in the students. These

messages are shown in Figure 1.

Students were asked to provide evaluative ratings of the messages as well as give their

attitude toward two proposed exit exams. This design was influenced by the Petty, Harkins and

Williams (1980) study that asked participants to rate essays supporting the adoption of senior

comprehensive exams, presumably by writers applying to attend a journalism program. In this

study, the researchers asked participants to look at the idea of a new exam rather than an

established one with which they had experience. They did this because some attitude theorists

have suggested that there can be a difference between a person’s attitude toward an object and

their attitude toward the message that describes/promotes that object. For example, a person

might love a Pepsi advertisement, but hate the syrupy soft drink. In this case, there would be a

difference between the person’s attitude toward the object and his or her attitude toward the

message. This difference between attitude toward object and attitude toward message should be


Figure 1. Two persuasive messages about exit exams served as the attitudinal objects. From top, the learning exam, then the practical exam.


minimal when the attitude object is a novel one, with which the respondent has no direct

experience. Measurement of a new attitude allows the researcher to avoid the difficulty of trying

to change highly ego-involved attitudes that tend to be more resistant to modification (Chaiken &

Tordesillas, 1995). For example, a consumer’s personal aversion to Pepsi may be very difficult

to change. This technique is evident in advertising when marketing executives use the phrase,

“new and improved.” Attitudes toward something new can be more susceptible to influence than

entrenched preferences.

The design was originally conceptualized as having two levels of a dependent variable.

The first was toward the message and the second was toward the object. To improve

susceptibility to influence, this design focused on measuring attitudes toward a new type of

exam, rather than one that students had personal experience taking. In this design, asking

participants to rate message quality in addition to their acceptance of the ideas put forth allowed

the option of measuring each attitude separately. This allowed for the capture of the dependent

variable in case there was a difference between liking for the message and persuasion regarding

its object, potentially increasing the sensitivity of the design.

Between Subjects Variables

The main interest of this study was in validating a new, direct measure of the

motivational functions underpinning attitudes. Validation will result from comparison of the

measure with an existing measure, as well as by demonstrating replication of the matching

effect with the new measure. The strength of the utilitarian and cognitive motivations toward the

attitude object defined the between-subjects variable. Each participant was categorized according

to the strengths of each attitude function, resulting in a three (high, moderate, or low

utilitarian-orientation) by three (high, moderate, or low

cognitive-orientation) attitude function strength research design. Once the new measure is found

to be similar to the surface and deep approaches to learning measured with items from the

ASSIST (Entwistle, Tait, & McCune,

2000), the second goal was to contrast the two types of cognitively-oriented measures and

utilitarian-oriented measures that vary in their level of directness.

Participants were grouped based upon direct and proxy measurements of function

strength. Respondents were placed in groups based on scores on the following scales: cognitive

and utilitarian scores on a new direct measure of attitude function toward exit exams, and proxy

measure of deep and surface attitudes toward academics derived from the ASSIST (Entwistle,

Tait, & McCune, 2000). Table 1 describes the two levels of directness, that is, how well the

measure specifically targets the idea of exit exams. The direct measure was very direct as it

specifically asks about exit exams. The second measure, ASSIST (Entwistle, Tait, & McCune,

2000), was less direct because it asks about attitude in general toward learning.

Table 1. Scales for the Direct and Proxy Measures

Measure | Cognitive Function | Utilitarian Function

The direct measure | Cognitive scale of our new piloted measure | Utilitarian scale of our new piloted measure

The proxy measure | Deep scale of the ASSIST (Entwistle, Tait, & McCune, 2000) | Surface/Apathetic scale of the ASSIST (Entwistle, Tait, & McCune, 2000)

For analytic purposes, data distributions for the strength of cognitive and utilitarian

functions in both levels of directness were trichotomized. Participants were sorted on the basis of

a bivariate distribution of utilitarian and cognitive attitude strength scale scores on the two attitude function

measures. While participants would not necessarily be sorted into the same high/mod/low


categories for each measure, it was expected that there would be some overlap as the measures

are correlated significantly (r = .525 between the utilitarian and the cognitive measures).

The first and most direct variables were scores on a recently piloted direct measure of

attitude function toward the object of exit exams. The direct measure asked participants about

their attitudes in relationship to a particular attitudinal object, the exit exam in the major field,

rather than just academics in general.

The direct measure was developed in three steps. First, four focus groups were held to

garner student opinion about exit exams. The groups included a discussion of current exams, a

discussion of the new exam, and a hands-on trial of a sample exam. Participants also discussed

the peculiarities of this exam: that completion would be required, but students would not be

required to pass the exam. Instead the exam would be used to help the department gauge the

effectiveness of its curriculum. Nine students participated. Their thematically categorized

commentary is included in Appendix B.

Second, a measure was created based on commentary shared by the focus group

participants. The measure was piloted in an online assessment and 410 students participated,

exceeding Kline’s (1994) suggestion of a minimum of 100 participants when analyzing

instrument factor structure. A factor analysis of the entire instrument is available in Appendix A.

The analysis included the principal components analysis extraction method and the Varimax

rotation method with Kaiser Normalization. Varimax was selected because an orthogonal

solution yielded a simple structure (Kline, 1994).
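The Kaiser criterion used alongside the scree plot (retain components whose eigenvalues exceed 1) can be illustrated on a small invented correlation matrix with two item clusters; the numbers below are hypothetical stand-ins, not the pilot data.

```python
import numpy as np

# Hypothetical 4-item correlation matrix: items 1-2 and items 3-4 form
# two clusters, mimicking a two-factor (e.g., cognitive/utilitarian) structure.
R = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)          # eigenvalues of the correlation matrix
n_factors = int(np.sum(eigenvalues > 1.0))   # Kaiser criterion: retain eigenvalues > 1
```

Here two eigenvalues exceed 1, so two components would be retained, mirroring the two-factor bend seen in the scree plot in Figure 2.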

Third, a measure was created (the direct measure) to include the second and third factors

of the larger piloted instrument because they were most relevant to the existent literature in the


educational field. The resultant measure had two factors: cognitive and utilitarian. The factor

structure is available in Table 2 and the Scree plot of Eigenvalues indicating two factors is

available in Figure 2.

Table 2. Factor Structure for the Direct Instrument

Rotated Component Matrix(a)

Item | Component 1 | Component 2

5. Depends on how well it helps me think about the material in different ways. | .825 | .282

13. Depends on how well it challenges me to think. | .820 | .320

17. Is based on how much I get to put my knowledge to work. | .774 | .396

10. Depends on how well this process improves my knowledge of the material. | .763 | .418

38. Is related to its ability to trigger deep thinking about the subject. | .732 | .395

25. Depends on how well it indicates whether I am learning all I can. | .668 | .535

28. Is related to its ability to tell me what I need to work on to learn more about. | .571 | .589

23. Is related to how well I will do on it. | .366 | .844

21. Is based primarily on what the grade can do for my career. | .259 | .826

30. Relates to how much effort it would take to prepare for it. | .411 | .742

1. Depends on if my score on it will help me meet my goals. | .385 | .720

16. Depends on what the test can do for me. | .438 | .688

12. Depends on how much time it will take away from my other responsibilities. | .302 | .622

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. (a) Rotation converged in 3 iterations.

Figure 2. Scree plot of Eigenvalues for the factor analysis of the direct instrument: Two main factors are indicated by the bend in the plot.


Table 3. Direct Attitude Function Measure Items

My attitude toward exams...

Direct Cognitive Function

Is related to its ability to trigger deep thinking about the subject.
Depends on how well it challenges me to think.
Depends on how well it indicates whether I am learning all I can.
Is based on how much I get to put my knowledge to work.
Depends on how well it helps me think about the material in different ways.
Is related to its ability to tell me what I need to work on to learn more about.
Depends on how well this process improves my knowledge of the material.

Direct Utilitarian Function

Is based primarily on what the grade can do for my career.
Is related to how well I will do on it.
Depends on if my score on it will help me meet my goals.
Depends on what the test can do for me.
Relates to how much effort it would take to prepare for it.
Is related to how difficult it would be to do well on it.
Depends on how much time it will take away from my other responsibilities.

Note. These items were rated on two nine point scales: “To what extent is this true of you?” and “To what extent does this statement fit your thinking?”

The cognitive subscale of seven items had a Cronbach’s alpha of .84 for the pilot data

(n=410) collection and .936 for the dissertation data (n=244). Sample items began, “My attitude

about exams...” and continued “Depends on how it challenges me to think,” “Depends on

whether I am learning all I can,” and “Depends on how well it helps me think about the material

in different ways.”

The utilitarian subscale of seven items had a Cronbach’s alpha of .80 for the pilot data

collection (n=410) and .910 for the dissertation data (n=244). Questions followed the same format

as the cognitive items but items covered more practical concerns such as, “Is based primarily on

what the grade can do for my career,” “Depends on what the test can do for me,” or “Relates to

how much effort it would take to prepare for it.” The items for the scales are shown in Table 3.
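The reported reliabilities follow the usual formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal pure-Python sketch on invented ratings (the data below are hypothetical, not the pilot or dissertation samples):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondents' item-score lists.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings (rows = respondents, columns = items on a 9-point scale)
data = [
    [7, 8, 7],
    [3, 2, 3],
    [5, 5, 6],
    [9, 8, 9],
]
alpha = cronbach_alpha(data)
```

Items that rise and fall together across respondents, as here, yield a high alpha, which is the pattern reflected in the .84 to .94 values reported for the two subscales.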


Each attitudinal statement was paired with two nine-point unipolar scales recording (a)

how well it fit their thinking and (b) the importance of the particular attribute. The first scale

ranged from one as “not at all true of me” to nine as “very true of me.” The second ranged from

one as “not at all important” to nine as “very important.” These two statement types were

expected to, and in fact yielded, highly equivalent scores. The correlation between the two scores

was r(242) = .886, p < .001, for the cognitive items and r(242) = .880, p < .001, for the utilitarian

items.
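The equivalence check between the two rating scales is an ordinary Pearson correlation. A sketch with invented paired ratings (the numbers are hypothetical, not the dissertation data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired "true of me" and "important" ratings for one item
true_of_me = [2, 4, 5, 7, 9]
important = [1, 4, 6, 6, 9]
r = pearson_r(true_of_me, important)
```

When the two ratings track each other closely, as they did here at r = .88 and above, summing them into a single score per item is defensible.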

Each individual’s scores of direct utilitarian and cognitive functional assessment were

calculated by summing responses to individual items of the two scales attached to each relevant

item statement. The range for the cognitive scores was 106, from 20 to 126, and tertile cut points

were 75 and 93. The range for the utilitarian scores was 117, from 9 to 126, and tertile cut points

were 83 and 99.
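The tertile grouping can be sketched as follows; the rank-based cut rule and the scores below are illustrative stand-ins for the observed distributions and cut points reported above, not the actual procedure or data.

```python
def tertile_cuts(scores):
    """Return two cut points splitting sorted scores into rough thirds
    (a simple rank-based rule for illustration)."""
    s = sorted(scores)
    n = len(s)
    return s[n // 3], s[(2 * n) // 3]

def trichotomize(score, low_cut, high_cut):
    """Assign a score to a low / moderate / high group."""
    if score <= low_cut:
        return "low"
    if score <= high_cut:
        return "moderate"
    return "high"

# Hypothetical summed scale scores for twelve respondents
scores = [12, 18, 25, 31, 40, 44, 52, 60, 67, 71, 80, 95]
low_cut, high_cut = tertile_cuts(scores)
groups = [trichotomize(s, low_cut, high_cut) for s in scores]
```

Each participant received one such label per measure, yielding the three-by-three between-subjects grouping described under Design.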

The second pair of independent variables comes from a proxy measure of attitude toward

academics, the ASSIST (Entwistle, Tait, & McCune, 2000). The relevant scales, shown in Table

4, are the deep subscales, based on 16 items with internal consistency (alpha) of .84 (Centre

for Research on Learning and Instruction, 1997, p. 6) and the surface subscales based on 16


Table 4. The Proxy Measure: Approaches and Study Skills Inventory for Students (ASSIST)

(Entwistle, Tait, & McCune, 2000)

Deep Approach

Seeking meaning
4. I usually set out to understand for myself the meaning of what we have to learn.
17. When I’m reading an article or book, I try to find out for myself exactly what the author means.
30. When I am reading I stop from time to time to reflect on what I am trying to learn from it.
43. Before tackling a problem or assignment, I first try to work out what lies behind it.

Relating ideas
11. I try to relate ideas I come across to those in other topics or other courses whenever possible.
21. When I’m working on a new topic, I try to see in my own mind how all the ideas fit together.
33. Ideas in course books or articles often set me off on long chains of thought of my own.
46. I like to play around with ideas of my own even if they don’t get me very far.

Use of evidence
9. I look at the evidence carefully and try to reach my own conclusion about what I’m studying.
23. Often I find myself questioning things I hear in lectures or read in books.
36. When I read, I examine the details carefully to see how they fit in with what’s being said.
49. It’s important for me to be able to follow the argument, or to see the reason behind things.

Interest in ideas (related sub-scale)
13. Regularly I find myself thinking about ideas from lectures when I’m doing other things.
26. I find that studying academic topics can be quite exciting at times.
39. Some of the ideas I come across on the course I find really gripping.
52. I sometimes get ‘hooked’ on academic topics and feel I would like to keep on studying them.

Surface Apathetic Approach

Lack of purpose
3. Often I find myself wondering whether the work I am doing here is really worthwhile.
16. There’s not much of the work here that I find interesting or relevant.
29. When I look back, I sometimes wonder why I ever decided to come here.
42. I’m not really interested in this course, but I have to take it for other reasons.

Unrelated memorizing
6. I find I have to concentrate on just memorizing a good deal of what I have to learn.
19. Much of what I’m studying makes little sense: it’s like unrelated bits and pieces.
32. I’m not really sure what’s important in lectures, so I try to get down all I can.
45. I often have trouble in making sense of the things I have to remember.

Syllabus-boundness
12. I tend to read very little beyond what is actually required to pass.
25. I concentrate on learning just those bits of information I have to know to pass.
38. I gear my studying closely to just what seems to be required for assignments and exams.
51. I like to be told precisely what to do in essays or other assignments.

Fear of failure (related sub-scale)
8. Often I feel I’m drowning in the sheer amount of material we’re having to cope with.
22. I often worry about whether I’ll ever be able to cope with the work properly.
35. I often seem to panic if I get behind with my work.
48. Often I lie awake worrying about work I think I won’t be able to do.


items exhibiting an internal consistency (alpha) of .80. Scores on these instruments were used to group the participants into tertiles. Each item was rated on a five-point Likert-type scale, as originally proposed by its designers. Students were grouped into high, medium, and low on the ASSIST’s (Entwistle, Tait, & McCune, 2000) subscales for deep and surface cognition. In this sample, the measures were highly reliable: Cronbach’s alpha was .963 for the 16-item Deep scale and .941 for the 16-item Surface scale. Deep scores ranged over 58 points, from 21 to 79, with tertile cut points at 53 and 62; Surface scores ranged over 63 points, from 13 to 76, with tertile cut points at 47 and 55.
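The tertile grouping can be sketched as follows. This is a minimal illustration with hypothetical scores; the cut points are those reported above, and the handling of a score falling exactly on a cut point is an assumption, not something the dissertation specifies.

```python
# Minimal sketch of the tertile grouping (hypothetical scores).
# Cut points from the text: Deep scale 53 and 62; Surface scale 47 and 55.
# Boundary handling (<= on the cut point) is an assumption.
def tertile(score, low_cut, high_cut):
    """Classify a summed scale score as low, medium, or high."""
    if score <= low_cut:
        return "low"
    if score <= high_cut:
        return "medium"
    return "high"

deep_scores = [21, 54, 63, 79]                     # hypothetical participants
deep_groups = [tertile(s, 53, 62) for s in deep_scores]
# deep_groups -> ['low', 'medium', 'high', 'high']
```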

Dependent Variables

Two classes of dependent variables were assessed. The first was the student’s rating of messages regarding an upcoming exit exam (called here the Ohio Exit Exam versus the Iowa Exit Exam). The second was the favorability of the student’s response toward the attitude objects themselves. Both are shown in Table 5. Scores were calculated by summing responses on nine-point Likert-type scales.

Table 5. The Dependent Attitude Variables

Message Acceptance
  How convincing did you find this message?
  Does this message make vital points?
  How interested are you in what this message is saying?

Exam Appeal
  How appealing is this exam?
  How important do you find this exam?
  How do you feel about taking this exam?

Message response and attitude toward the object of the message allowed for the assessment of the functional matching effect at two seemingly distinct levels. That is, did a match between attitude function and message content increase susceptibility to message influence and the appeal of the object? This design allowed the researchers to pursue the experiment even if the manipulation of having a “new” exam did not reduce the difference between message and object favorability. Preliminary analyses revealed that the manipulation of having a “new” exam was effective: there was little difference between exam appeal and message acceptance.

Although the two variables were originally intended to be analyzed individually, the slight difference between them (a mean difference of 0.763 for the practical exam and 1.021 for the learning exam) did not indicate that they measured different concepts. For the practical exam message, the correlation between appeal and acceptance was r = .84, df = 242, p < .001; for the intellectual exam, the correlation was r = .79, df = 242, p < .001. Because these were not separate constructs, the items from both scales were summed to create one variable, henceforth named “exam attitude.” Exam attitude was measured twice as a repeated variable: once for the intellectually oriented message and once for the practically framed message.
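The collapse into a single “exam attitude” score can be sketched as follows. The ratings are hypothetical, and `pearson_r` is an illustrative helper; the study’s actual analyses were run in SPSS.

```python
# Sketch of checking that appeal and acceptance behave as one construct
# (high correlation) before summing them into "exam attitude".
# Data are hypothetical; the study's actual analysis was run in SPSS.
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

appeal     = [12, 18, 21, 9, 15]   # hypothetical sums of three 9-point items
acceptance = [13, 17, 22, 8, 16]

r = pearson_r(appeal, acceptance)            # a high r justifies combining
exam_attitude = [a + b for a, b in zip(appeal, acceptance)]
```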

Manipulation Checks

The study concluded with manipulation checks to assess the degree to which messages

met the goals intended by the researchers. Results for each message were the sum of three

nine-point Likert-type scales. Items are included in Table 6.

Table 6. Manipulation checks for the two exams

The Practical Exam
  How practical do you see this exam being for students?
  How convenient do you find this exam?
  To what extent does this test seem to be valuable for your career?

The Intellectual Exam
  How cognitively oriented do you find this exam?
  To what extent do you think this test has academic value?
  How valuable do you find this exam from the perspective of learning?

Other Measures Considered as Predictors of Attitude toward the Practical and Learning

Exams

Although there were no specific predictions about their performance, a few other

variables were included in the study to allow for consideration of their similarity to the

independent variables. From the education literature, three scales were used from the Motivated

Strategies for Learning Questionnaire (MSLQ) (used with permission from Karabenick, personal

communication, 2004) and one was adapted from the Goal Orientation Scale (GOS) (Midgley,

Kaplan, Middleton & Maehr, 1998) with changes to make it age appropriate. The three scales

from the MSLQ were the Classroom Approach Mastery, the Approach Mastery, and the Extrinsic scales. All but the Extrinsic scale were intended to relate to the cognitive measure; the Extrinsic scale was expected to relate to the utilitarian measure. Furthermore, two items (Lichtman, personal communication,

2005) were included for each function. These scales are documented in Appendix D.

Proposed but Ultimately Abandoned Covariate

Earlier research has found that attitudes toward exams can be related to student

achievement levels in school (Brim, Glass, Neulinger, Firestone, & Lerner, 1969). Because this relationship could interfere with the relationship this experiment aimed to capture, a covariate was proposed to control for academic achievement. Academic achievement levels were measured in three ways: GPA

from student records, self-reported GPA, and self-reported ACT score. The assumption was that

there would be a linear relationship between scholastic achievement and exam attitude, which is

a requirement for ANCOVA (Wildt & Ahtola, 1978). However, this assumption was not met for

any of the three indicators of academic achievement. Instead, the relationship between academic

achievement levels and exam attitude was very small, as indicated in Table 7. Pearson’s r was calculated as a linear measure of the relationship, while η² was calculated as an indicator of nonlinear relationships. Because the assumptions for the ANCOVA were not met, the covariate was abandoned, and academic achievement was excluded from the analysis.

Table 7. Relationship between the proposed covariates and the dependent variables

Relationship                               r       η²     N     Significance

Attitude Toward the Learning Exam
  Grade point average from the records    -.057    .141   124   ns
  Self-reported grade point average       -.026    .066   218   ns
  Self-reported ACT score                  .074    .170    75   ns

Attitude Toward the Practical Exam
  Grade point average from the records    -.033    .133   123   ns
  Self-reported grade point average        .039    .114   217   ns
  Self-reported ACT score                  .174    .218    74   ns

Note. This relationship was captured using Pearson’s r to explore linear relationships and partial η² (computed in SPSS) to explore other relationships. None of these relationships were significant.

Procedure

Participants were offered extra credit for participation in this study, and they were given

business cards with the study’s website address on them. Data were collected through an online survey, which participants could access at their convenience on any computer.

One page contained an information sheet. Included in this sheet was a priming statement,

“Students end up taking a lot of different exams during their time in college. Tests can serve

different purposes. Some are better at some things than others. For example, some are more

oriented toward fact retrieval while others evaluate your ability to think on your feet rather than


rote memory. We are interested in what you think about different types of tests.” By clicking “I

agree,” each participant consented to the process. No IP addresses were collected for the purposes of this study, so responses remained anonymous.

Participants first responded to items from the ASSIST’s deep and surface approach-to-learning subscales (Entwistle, Tait, & McCune, 2000), which served as the proxy measure. Then, participants answered questions regarding their views on various attributes and features of exit exams as a direct assessment of the strength of attitude function. Finally, participants answered questions

from the MSLQ, GOS and Lichtman’s items. The next step was for participants to view the first

of two randomly ordered messages. (Presentation of results collapses over these two orders as

no meaningful/significant differences in response were associated with order.) Each message

was followed by a six-item attitude measure (Table 5) and the six-item manipulation check

measures (Table 6). The study concluded with demographic items (Appendix F).

Participants were encouraged to contact the principal investigator for additional

information about the study. No post-investigation debriefing was provided, as the study involved neither deception nor harm to participants. Information about the results was made available on

request.


CHAPTER 3

RESULTS

Data analysis addressed two goals: 1) testing the matching hypothesis, which was supported for the direct measure and partially supported for the proxy measure, and 2) comparing direct measurement against proxy measurement of attitude functions, which favored the direct measure. Support for the first hypothesis was gathered by completing linear trend tests on two mixed-design ANOVAs, one utilizing direct measurement of attitude function and the other utilizing a more indirect measurement. Support for the second hypothesis was garnered from a comparison of the effect sizes of the two linear trend tests, which demonstrated a larger effect for the direct measure.

Manipulation Checks

Manipulation-check scales indicated that the messages were perceived as matching the

functional categories intended by the researchers. When items reflecting the exam’s practical benefits, convenience, and career value were summed, the practical exam had a higher mean (M = 19.15) than the learning exam (M = 16.72), F (1, 239) = 24.58. Similarly, the learning exam was viewed as more cognitive, academic, and learning oriented (M = 18.64) than the practical exam (M = 16.48), F (1, 239) = 15.58. (See Table 6 for the items and Appendix E for the analysis.)

Main Effects

Although there were no predictions regarding main effects, they were significant for the direct [F Utilitarian (2, 231) = 4.386, p < .05; F Cognitive (2, 231) = 5.452, p < .01] but not the proxy [F Surface (2, 231) = 1.660, ns; F Deep (2, 231) = 2.963, ns] measures of attitude. An analysis of the data plots (Figure 3) indicates that participants in the higher tertiles gave generally more favorable ratings of the messages.


Order Effects

At the end of the study, participants were exposed to two persuasive messages about exit

exams. To reduce order effects, both messages were displayed side-by-side in the same browser

window. The order in which each message was placed and rated was randomly counterbalanced

to allow for an analysis of possible order effects (Rutherford, 2001). One version of the survey

displayed the Iowa exam to the right of the Ohio exam. This was followed by evaluation items

on the following page for first, the Iowa, and second, the Ohio exam. The second iteration of the

survey displayed the Ohio exam to the right of the Iowa exam. This was followed by evaluation

items on the following page for first, the Ohio, and second, the Iowa exam. This attempt at

reducing order effects was successful: there were no significant order effects when the data were analyzed using ANOVA. The results are summarized in Appendix G.
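The counterbalancing scheme described above can be sketched as follows (a hypothetical illustration; the version structure and names are assumptions for the sake of the example):

```python
# Hypothetical sketch of the two counterbalanced survey versions: each
# participant is randomly assigned one version, which fixes both the
# side-by-side placement and the order in which the exams are rated.
import random

VERSIONS = (
    {"right_side": "Iowa", "rated_first": "Iowa"},
    {"right_side": "Ohio", "rated_first": "Ohio"},
)

def assign_version(rng=random):
    """Randomly pick one of the two counterbalanced versions."""
    return rng.choice(VERSIONS)
```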

The Relationship of Functional Match to Ratings of Message Favorability

For the first hypothesis, a match between score on the grouping variable (a measure of

attitude function strength) and score on the dependent variable was expected. That is,

message-favorability was expected to be enhanced when the persuasive language of the message

matched the reader’s attitude function. Support for the first hypothesis was found by conducting

two mixed design ANOVAs. The first represented the direct attitude function measurement

(Table 3) while the second represented the proxy attitude function measurement (Table 4). For

the first ANOVA, two between-subject factors were used as the grouping variables: utilitarian

attitude function and cognitive attitude function. For the second ANOVA, the two between-

subject factors were deep attitude toward learning and surface attitude toward learning. These

factors were each broken down into three levels of strength by trichotomizing the distribution of

obtained scores into low, moderate, and high tertiles. The higher the tertile, the stronger the evaluation of that motive for the individual. For both of these ANOVAs, the dependent variables were the same: exam attitude favorability toward the practically framed and the learning-framed exams.

Interactions were of primary importance for this analysis. Attitudinal favorability toward

the practical exam was expected to increase across the utility function importance tertiles. In

contrast, evaluation of the practical exam was not expected to exhibit a significant main effect

across the cognitive tertiles. Similarly, favorability of reaction to the learning themed exam was

expected to increase across the tertiles of the cognitive function. Evaluation of this

learning-framed exam was also expected to yield a nonsignificant trend across the utilitarian

function attitude importance tertiles. Expectations were met for the direct measure, but only

partially met for the proxy measure.

The Direct Measure

As predicted, there was a significant two-way interaction of utilitarian attitude tertiles and

message type on exam attitudes [F (2,231) =7.58, p<.01]. Similarly, there was a significant

two-way interaction of cognitive tertiles and message type on exam attitude [F (2,231) =5.76,

p<.01]. The three-way interaction of message type, cognitive tertiles, and utilitarian tertiles on exam attitude was not significant. Please refer to Table 8.

Table 8. Repeated Measures Analysis of Variance of Direct Function Measures on Exam Attitudes

Between Subjects
Source                             df     F        p
Direct Utilitarian Tertiles (U)     2     4.386    *
Direct Cognitive Tertiles (C)       2     5.452    **
U x C                               4     1.849    NS
S within group error              231     (163.059)

Within Subjects
Source                             df     F        p
Message (A)                         1     0.275    NS
A x U                               2     7.575    **
A x C                               2     5.757    **
A x U x C                           4     1.078    NS
A x S within group error          231     (105.77)

Note. Values enclosed in parentheses represent mean square errors. S = subjects. * p < .05. ** p < .01.

Table 9. Exam Attitudes by Utilitarian [U] and Cognitive [C] Attitude Function Strength

Practical Exam Attitude
              U Low            U Moderate       U High
C Low         32.980 (n=49)    28.133 (n=15)    45.500 (n=14)
C Moderate    31.750 (n=20)    33.871 (n=31)    38.219 (n=32)
C High        32.000 (n=11)    37.500 (n=32)    39.165 (n=36)

Learning Exam Attitude
              U Low            U Moderate       U High
C Low         31.000 (n=49)    29.533 (n=15)    30.714 (n=14)
C Moderate    33.500 (n=20)    33.645 (n=31)    35.063 (n=32)
C High        39.636 (n=11)    42.625 (n=32)    38.444 (n=36)

Note. Higher numbers indicate more favorable ratings of the message.

[Figure 3 comprised two panels: “Practical Exam Attitude by Attitude Function Strength,” plotting exam attitude across the direct utilitarian tertiles with separate lines for the low, moderate, and high cognitive tertiles; and “Learning Exam Attitude by Attitude Function Strength,” plotting exam attitude across the direct cognitive tertiles with separate lines for the low, moderate, and high utilitarian tertiles.]

Figure 3. Means of Exam Attitudes by Utilitarian [U] and Cognitive [C] Attitude Function Strength for the Direct Measure.

Exam attitude increased directly with the strength of the person’s attitude function where

this function was relevant to the content of the message. Mean attitude toward the practical exam

was 32.24 for those respondents who were low in utilitarian attitude strength, 33.17 for those

with medium utilitarian attitude strength, and 40.96 for those high in utilitarian attitude strength.

Attitude toward the learning exam was 30.42 for those respondents who scored low in cognitive

attitude strength, 34.07 for those with medium cognitive attitude strength, and 40.24 for

participants with high cognitive attitude strength. (See Figure 4 and Table 10.)

Table 10. Marginal Means of Exam Attitudes for the Direct Measures of Utilitarian [U] and Cognitive [C] Functions

Exam Attitude by Utilitarian Function, Pooling Over the Cognitive Function Groups
                            U Low            U Moderate       U High
Practical Exam Attitude     32.243 (n=80)    33.168 (n=78)    40.962 (n=82)
Learning Exam Attitude      34.712 (n=80)    35.268 (n=78)    34.740 (n=82)

Exam Attitude by Cognitive Function, Pooling Over the Utilitarian Function Groups
                            C Low            C Moderate       C High
Practical Exam Attitude     35.538 (n=78)    34.613 (n=83)    36.222 (n=79)
Learning Exam Attitude      30.416 (n=78)    34.069 (n=83)    40.235 (n=79)

This pattern was confirmed by two linear trend tests (see Table 11) which indicated that

when there was a match of function to exam message, persuasion was enhanced. For the learning

exam, a linear trend was evident for the cognitive function [F (1,231) =21.263, p<.01].

Furthermore, for the practical exam, a linear trend was also significant [F (1,231) = 17.176, p<.01] for the utilitarian function.
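The linear trend (polynomial contrast) test can be sketched as follows. This is a simplified between-groups version with contrast weights -1, 0, +1 over ordered tertiles; the dissertation’s mixed-design error term differs, so this sketch will not reproduce the reported F values exactly.

```python
# Simplified sketch of a linear trend (polynomial contrast) test across
# ordered tertile means, using weights -1, 0, +1 for low/moderate/high.
# Illustrative only: the mixed-design error term used in the dissertation
# differs, so the reported F values will not be reproduced exactly.
def linear_trend_F(means, ns, mse):
    """F (1 numerator df) for a linear contrast across group means."""
    weights = (-1, 0, 1)
    estimate = sum(w * m for w, m in zip(weights, means))
    contrast_var = mse * sum(w ** 2 / n for w, n in zip(weights, ns))
    return estimate ** 2 / contrast_var

# Practical exam attitude across the utilitarian tertiles (Table 10 means)
F = linear_trend_F([32.243, 33.168, 40.962], [80, 78, 82], 136.510)
# F is large and positive, consistent with the rising trend reported above
```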


Figure 4. Marginal Means of Exam Attitudes for the Direct Measures of Utilitarian [U] and

Cognitive [C] Functions

[Figure 4 comprised two panels: “Exam Attitude by Utilitarian Function Across the Cognitive Function Tertiles” and “Exam Attitude by Cognitive Function Across the Utilitarian Function Tertiles,” each plotting practical and learning ad reception across the low, moderate, and high direct tertiles.]

Table 11. Linear Trend Test Where the Attitude Function and Message Match

The Effect of Utilitarian Function Tertiles on Practical Exam Attitude
Source                     df     F         p
Practical Exam Attitude     1     17.176    **
Error                     231     (136.510)

The Effect of Cognitive Function Tertiles on Learning Exam Attitude
Source                     df     F         p
Learning Exam Attitude      1     21.263    **
Error                     231     (132.318)

Note. Values enclosed in parentheses represent mean square errors. S = subjects. * p < .05. ** p < .01.

In contrast, exam attitude was unrelated to attitude function strength in cases when the

message describing the exam was irrelevant or unrelated to the function. Attitude toward the

practical exam was 34.71 for low cognitive attitude strength, 35.27 for medium cognitive attitude

strength, and 34.74 for high cognitive attitude strength. Attitude toward the learning exam was

35.54 for low utilitarian attitude strength, 34.61 for medium utilitarian attitude strength, and

36.22 for high utilitarian attitude strength (see Figure 4). For the practical exam, a linear trend was absent for the cognitive function [F (1,231) = 0.10, ns]. Furthermore, for the learning exam, a linear trend was absent for the utilitarian function [F (1,231) = 0.00, ns].

The Proxy Measures

The same analysis strategy was applied to the proxy measure of attitude function. In this

case, expectations for the proxy measure were only partially met. There was a significant

two-way interaction between surface attitude tertiles and message [F (2,231) =3.31, p<.05]. In

addition, there was a significant two-way interaction between deep tertiles and message [F (2,231) = 4.21, p < .05]. The three-way interaction of message with deep tertiles and surface tertiles was also significant. Please refer to Table 12.

Table 12. Repeated Measures Analysis of Variance of Proxy Measures on Exam Attitudes

Between Subjects
Source                           df     F        p
Proxy Surface Tertiles (SU)       2     1.660    NS
Proxy Deep Tertiles (D)           2     2.963    NS
SU x D                            4     0.24     NS
S within group error            231     (178.741)

Within Subjects
Source                           df     F        p
Message (A)                       1     0.741    NS
A x SU                            2     3.314    *
A x D                             2     4.210    *
A x SU x D                        4     3.317    *
A x S within group error        231     (103.720)

Note. Values enclosed in parentheses represent mean square errors. S = subjects. * p < .05. ** p < .01.

Table 13. Exam Attitudes by Surface [SU] and Deep [D] Cognitive Orientation

Practical Exam Attitude
              SU Low           SU Moderate      SU High
D Low         33.821 (n=28)    37.429 (n=28)    35.375 (n=24)
D Moderate    34.793 (n=29)    40.409 (n=22)    32.452 (n=31)
D High        38.692 (n=26)    33.375 (n=24)    35.357 (n=28)

Learning Exam Attitude
              SU Low           SU Moderate      SU High
D Low         34.071 (n=28)    29.107 (n=28)    29.708 (n=24)
D Moderate    41.207 (n=29)    32.455 (n=22)    35.452 (n=31)
D High        38.115 (n=26)    37.917 (n=24)    36.429 (n=28)

Figure 5. Means of exam attitude across all the proxy functional tertiles.

[Figure 5 comprised two panels: “Practical Exam Attitude by Learning Goal” and “Learning Exam Attitude by Learning Goal,” each plotting exam attitude across the indirect surface tertiles with separate lines for the low, moderate, and high deep tertiles.]

Exam attitude increased directly with the strength of the person’s attitude function where the exam message was relevant to (supportive of) this function, for the deep scale but not the surface scale. Mean attitude toward the practical exam was 35.77 for those respondents who

were low in surface attitude strength, 37.07 for those with medium surface attitude strength, and

34.40 for those high in surface attitude strength. Mean attitude toward the learning exam was

30.96 for those respondents who were low in deep attitude strength, 36.37 for those respondents

who were moderate in deep attitude strength, and 37.49 for those respondents who were high in

deep attitude strength (see Figure 6 and Table 14).

Table 14. Marginal Means of Exam Attitudes for the Proxy Measures of Surface [SU] and Deep [D] Cognitive Orientation

Exam Attitude by Surface (SU) Cognitive Orientation, Pooling Over the Deep (D) Groups
                            SU Low           SU Moderate      SU High
Practical Exam Attitude     35.769 (n=83)    37.071 (n=74)    34.395 (n=83)
Learning Exam Attitude      37.798 (n=83)    33.159 (n=74)    33.863 (n=83)

Exam Attitude by Deep (D) Cognitive Orientation, Pooling Over the Surface (SU) Groups
                            D Low            D Moderate       D High
Practical Exam Attitude     35.542 (n=80)    35.885 (n=82)    35.808 (n=78)
Learning Exam Attitude      30.962 (n=80)    36.371 (n=82)    37.487 (n=78)

This pattern was confirmed by a linear trend test which indicated that when there was a

match of function to exam message, persuasion was enhanced for the deep variable. For the

learning exam, a linear trend was significant [F (1,231) = 12.377, p<.01] for the deep function.

However, for the practical exam, the linear trend was nonsignificant for the surface function [F

(1,231) = 0.529, NS]. See Table 15.


Figure 6. Marginal Means of Exam Attitudes for the Proxy Measures of Surface [SU] and Deep [D] Cognitive Orientation

[Figure 6 comprised two panels: “Exam Attitude by Deep Cognitive Style over the Surface Cognitive Groups,” plotted across the proxy deep tertiles, and “Exam Attitude by Surface Cognitive Style over the Deep Cognitive Groups,” plotted across the proxy surface tertiles, each showing practical and learning ad reception.]

Table 15. Linear Trend Test Where the Attitude Function and Message Match

Linear Trend Test for the Effect of Surface Tertiles on Practical Exam Attitude
Source                     df     F        p
Practical Exam Attitude     1     0.529    NS
Error                     231     (136.510)

Linear Trend Test for the Effect of Deep Tertiles on Learning Exam Attitude
Source                     df     F         p
Learning Exam Attitude      1     12.377    **
Error                     231     (132.318)

Note. Values enclosed in parentheses represent mean square errors. S = subjects. * p < .05. ** p < .01.

Exam attitude was expected to be unrelated to attitude function strength in cases where the exam message was irrelevant to the function, for both proxy measures. However, it was found to be unrelated for the deep, but not the surface, measure. Attitude toward the practical exam

was 35.54 for low deep attitude strength, 35.89 for medium deep attitude strength, and 35.81 for

high deep attitude strength. Attitude toward the learning exam was 37.80 for low surface attitude

strength, 33.16 for medium surface attitude strength, and 33.86 for high surface attitude strength

(see Figure 6). For the practical exam, a linear trend was absent for the deep function [F (1,231)

=0.019, ns]. However, contrary to expectations, for the intellectual exam, a linear trend was

significant [F (1,231) = 4.722, p<.05] for the surface function.

Effect Sizes

The second hypothesis asserted that the direct function measurements would demonstrate a stronger relationship than the proxy measurements in the matched conditions. The strength of each relationship was calculated for comparison purposes using eta squared for each linear trend test. This index captures the strength of a relationship and is commonly used to compare relationships across multiple scales (Cohen, 1969). Because this was a factorial ANOVA, eta squared was calculated as the ratio of the sum of squares of the effect to the sum of the effect and error sums of squares, that is, partial eta squared (Tabachnick & Fidell, 2001, p. 54). These calculations are summarized in Table 16. In short, the direct measure had a stronger effect than the proxy measure.
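In symbols, the effect size reported in Table 16 can be written as the standard partial η² expression; the one-degree-of-freedom simplification below is added here as a check (for the utilitarian trend it reproduces the tabled value):

```latex
\eta^2_{p} \;=\; \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
\;=\; \frac{F}{F + df_{\text{error}}} \quad \text{(for a 1-}df\text{ contrast)},
\qquad \text{e.g.}\;\; \frac{17.176}{17.176 + 231} \approx .069 .
```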

Table 16. Effect Sizes (η²) of the Matched Conditions

Practical Exam
  Direct Utilitarian Attitude Function          0.069
  Proxy ASSIST Surface Cognitive Orientation    0.002

Learning Exam
  Direct Cognitive Attitude Function            0.084
  Proxy ASSIST Deep Cognitive Orientation       0.020

Note. Effect sizes were calculated from the linear trend analyses of the matching effect. K = 1.


CHAPTER 4

DISCUSSION

Results show support for the idea that attitude function influences the ways in which

messages about the object of the attitude are received. Overall, the results strongly support one

out of two of the hypotheses; they partially support the other.

Hypothesis one part A stated that the direct measure of attitude functions would predict

student response to an announcement regarding an upcoming exit exam. Results strongly

supported the hypothesis regarding the matching effect for the direct attitude measure. A strong

linear trend showed that messages regarding practical exams were more favorably perceived by

students who had higher utilitarian attitude function scores. Similarly, a strong linear trend

demonstrated that messages regarding the learning-oriented exam were more favorably received by students who had higher cognitive attitude function scores. This, plus the failure to find such linear trends for variation in the irrelevant attitude function, provides support for the direct

approach of assessing the latent functions served by attitudes. Such objective, standardized

measures of multiple attitude functions can facilitate further research, as well as our

understanding of the processes underlying the matching effect.

The finding that direct attitude function measure would predict student response to a

message is similar to findings throughout the attitude function literature supporting the matching

hypothesis. The direct instrument identified those with a cognitive function and those with a

utilitarian function specifically in relationship to exit exams. This adds support for the strength of

the matching hypothesis, offered by many researchers (i.e., Petty & Cacioppo, 1979; Petty &

Wegener, 1998; Snyder & DeBono, 1985) and particularly for the strength of the direct approach

(Shavitt, 1992; Shavitt & Nelson, 2002; Woodward & Firestone, 2003). The direct approach refers to the idea that people hold different attitude functions for different objects, and that a measure which assesses function in relation to just one object class will be more effective than one which generalizes across diverse object classes.

Hypothesis one part B stated that a proxy measure of attitude would also predict student

response to an announcement regarding an upcoming exit exam. The results offered partial

support for the second hypothesis regarding the matching effect for the proxy attitude measure.

First, a strong linear trend supported the idea that messages regarding the learning-oriented exam

were more favorably received by students who had a deeper approach to cognition. Second, the

non-significant linear trend failed to offer support for the idea that messages regarding practical

exams were more favorably received by students who evidenced a greater surface approach to

cognition.

The ASSIST’s (Entwistle, Tait, & McCune, 2000) measure of surface approach to

cognition did not perform as expected. It may not have performed as well because part of the

content of the scale extends to domains at some remove from the utilitarian construct. Although

theoretically comparable to the utilitarian function literature, a more complete examination of the

individual items reveals subscales which cover a broader range of experience, including a

pragmatist ideal, as well as qualities more clearly related to being a poor student. Particularly

divergent from the utilitarian construct were the subscales, “unrelated memorizing,” and “fear of

failure.” An example of a problematic item from the “unrelated memorizing” subscale is “I’m

not really sure what’s important in lectures, so I try to get down all I can.” An example of

problematic item from the “fear of failure” subscale is “Often I feel I’m drowning in the sheer

amount of material we’re having to cope with.” Such subscales cloud the measurement of the

more specific, ends-oriented utilitarian concerns which are captured in the other subscales. Perhaps this combination of constructs reduced the precision of the measure, which may be one reason why the surface scale was not as strong a predictor of reaction to the practical exam.

In addition, the direct measure of utilitarian function which was developed for this

investigation was conceptualized as an overlapping construct with the cognitive attitude function.

There is less overlap between the deep and surface scales from the ASSIST (Entwistle, Tait, &

McCune, 2000). That we conceptualized these scales in an overlapping way reflects an American

perspective that a search for knowledge and a focus on practical ends can be overlapping

concerns.

Finally, there may be a difference in the type of student who participated in the norming used to develop the two measures. The ASSIST (Entwistle, Tait, & McCune, 2000) was

developed in Scotland. Although a number of initiatives have been developed to improve the

diversity of universities in the UK, norming of the deep/surface measure may reflect an

educational system in which there were fewer working-class students enrolled in universities. In

contrast, this sample represents students at a public university serving an industrial region of the

Midwest where working-class values are common. Thus, the deep/surface measure may reflect a

less utilitarian-oriented college system than the one where this study was completed.

Hypothesis two stated that the direct measure of attitude would offer a stronger relationship to exam attitude than the proxy measure. This hypothesis was supported. Measures of effect size indicated that the

direct measure of attitude function had a stronger effect size than the proxy measure when

comparing performance of the direct utilitarian measure to the proxy surface measure, as well as

when comparing the direct cognitive scale to the proxy deep scale.
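The effect-size comparison here reduces to simple arithmetic: eta squared (the measure tabled in Appendix F) is the effect's sum of squares divided by the total sum of squares. A minimal sketch with illustrative numbers, not the study's actual sums of squares:

```python
def eta_squared(ss_effect: float, ss_total: float) -> float:
    """Proportion of total variance attributable to the effect."""
    if ss_total <= 0:
        raise ValueError("total sum of squares must be positive")
    return ss_effect / ss_total

# Illustrative values only, not taken from the study's ANOVA tables:
print(eta_squared(8.0, 100.0))  # → 0.08
```

A larger eta squared for the direct measure than for the proxy measure is what "stronger effect size" means in the passage above.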


That the direct measure should outperform the proxy measure offers support for the idea

that trait measures are stronger when they specify the situation than when they generalize across

multiple situations. This combines with earlier research to offer parallel support for this

behavioral prediction while demonstrating a relationship between the direct approach (Ajzen &

Fishbein, 2005; Fishbein & Ajzen, 1972) and the attitude function literature (Herek, 1986; Katz,

1960; Shavitt, 1992; Woodward & Firestone, 2003).

In addition to the main hypotheses, the results of this study were shaped by three design attributes that may have improved the sensitivity of the design: an intervention against order effects, the measurement of responses to a new stimulus, and the measurement of attitudes before message evaluations.

First, revealing the two messages at the same time may account for the nonsignificant order effects. To reduce order effects, the first view of the messages showed them side-by-side in the same browser window. Our previous research (Woodward & Firestone, 2003) found an order effect in which the first message viewed tended to be received more favorably than the second. This design intervention aimed to make participants consider both messages at the same time and thus reduce the preference for the first message viewed.

Second, asking students about a new exam rather than one they have previously

experienced may have helped to reduce the difference between message appeal and object

appeal. In this design, ratings of message quality and acceptance of exit exams were measured

separately, but a very small difference was found between the two measurements. This may be

due to perceptions students had of this “new” requirement. Because their attitudes were new, this

may have reduced the difference expected between message and object appeal, avoiding the


difficulty of trying to change pre-formed attitudes which may be more resistant to modification

(Chaiken & Tordesillas, 1995).

Third, asking participants about their attitudes toward tests before they rate a message

about exit exams may heighten the distinctiveness of the messages and may enhance the captured

effect. Ordinary advertisements in the real world may demonstrate a weaker link between

attitude and object ratings because a reminder of one's attitudes before message exposure may temporarily enhance participants' likelihood of choosing an exam which matches their attitudes.

Limitations and future directions

A first weakness of this design has to do with field validity. Although it offers strong

support for the idea that attitudes can be shaped by messages which match the goals of a

participant, it does so in a structured setting with messages about hypothetical exams. In the real

world, exams are usually existing ones, and the reactions students feel toward them can be much

stronger. Replications using exams actually in use at universities could strengthen the field validity of this research design.

A second weakness of this design concerns the relationship between attitude and behavior. Although research has shown that attitudes and behavior tend to be related (Ajzen & Fishbein, 2005), it remains to be seen whether enhanced attitudes toward the exams will similarly enhance exam performance or the intention to study for them. Further research is needed to understand this relationship.

A third weakness of this design is its practical relevance to current persuasive

approaches. Usually, advertisements target multiple attitude functions because they cannot be

targeted to specific types of people. Although niche advertising is a growing part of the industry,


a clearly slanted message may fail to engage people outside its target group. Most communicators want to avoid putting all their eggs in one basket and use mixed messages because they deem them more inclusive of different types of people.

However, there is a gap in the literature regarding typical mixed message approaches. It

is unclear whether these are effective or whether a “something for everyone” approach actually

serves to water down the message and reduce its appeal to message recipients. Research has, however, shown support for taking the risk of using functional techniques and targeted messages as part of a niche marketing approach (Shavitt & Nelson, 2002). These researchers examined

Advil’s “Can Do Generation” advertising campaign as a successful use of functionally matched

persuasion. The “Can Do Generation” specifically targets people who are interested in utilitarian

goals in regard to a pain reliever choice.

However, persuasion in education is not as cutting-edge as in the pharmaceutical industry. Web technology aimed at revealing targeted messages to specific population segments has been a goal of companies such as Amazon and MySpace and is evident in cookie-based page-linking. Universities willing to use a similar computerized targeting process

would benefit from this type of persuasive approach; however, it is not yet commonly used on

university campuses. In educational practice, individuals who work with students on study skill

enhancement regularly use short questionnaires to help them offer the students targeted advice.

Helping educational facilitators to be more effective in motivating students regarding exams

could be a usage of the instrument piloted in this design. Further research is needed to see how

this information could be applicable to a university market.

Despite the limitations of the current study, the results give support for the

experimenter’s hypothesis that attitude functions can predict exam attitudes and that direct


attitude function measurement is stronger than proxy measurement. Specifically, it appears that

students with a strong cognitive attitude function prefer messages which are aimed at learning.

Furthermore, students with a strong utilitarian attitude function prefer messages which are aimed

at practicality. Finally, an assessment of attitude function that directly relates to the topic of exit exams is a stronger predictor of exam attitude than an indirect, proxy measure.

The primary goal of this study was to integrate educational and social psychological

research by bringing concepts derived from each field together within a single investigation. The

study addressed specific concepts related to student attitudes toward exit exams, a new

requirement in university accreditation practices. The findings of this study help to offer

strategies for bolstering students’ motivation to perform on these exams from both educational

and social psychological perspectives. From the social psychological perspective, a primary goal

was to develop a useable measure of the motives that underlie attitudes and to provide construct

validation of this measure through replication of the matching effect in persuasion. This goal of the present study was also strongly influenced by research on classroom motivations and cognitive orientations conducted by educational investigators (Biggs & Leung, 2001; Dweck & Leggett, 1988; Entwistle, 1987; Marton & Saljo, 1976; Pintrich, 1991; Schmeck et al., 1991) who sought to understand how student attitudes relate to student motivation and performance.

These results are similar to previous research comparing direct to proxy measures of

attitude (Shavitt, Lowrey & Han, 1992; Woodward & Firestone, 2003) which indicate that

attitudinal functions are better understood as an interaction between the person and the situation.

Shavitt, Lowrey and Han found that the relationship between attitude function and attitudinal

object is an interaction between the participant’s functional type and type of object measured.

They found that people tended to explain their attitudes toward a product using arguments which


matched their attitude function when the object was multifunctional, but arguments which

matched the object type when the object only served one function. Similarly, Woodward and

Firestone found that direct measures of attitude are more predictive of exam attitude than less

direct measures of attitude. They observed responses to a multifunctional object and found that

an attitudinal measure which specified the attitude object offered stronger predictive power than one which generalized across multiple objects.


APPENDIX A

Rotated Component Matrix(a) of the Multiple-Function Scale

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 13 iterations.

Components: 1 Fairness, 2 Cognitive, 3 Goals, 4 Improve Program, 5 Social Adjustive, 6 Convenience

35. Depends on whether people who take it have an equal chance of doing well. .71 .24 .24 .14 .15
34. Is primarily related to its fairness. .67 .21 .26 .27 .14
27. Is influenced by how fair I think that it is. .64 .11 .12 .16 .41
39. Is based on the idea that everyone should be able to have an equal shot. .63 .31 .29 .23
4. Is determined by how well the test allows people who have studied effectively to perform well. .62 .39 .21 .27 .20
9. Depends on how understandable it is for everyone who takes it. .56 .36 .15 .31 .15
41. Is mostly related to whether review notes are available to help me pass the test. .56 .35 .31 .13 .34
22. Depends on how well it is organized. .55 .43 .30 .25 .12
33. Is based on how accurate an indicator it is of what I know. .52 .26 .27 .47
45. Reflects whether study materials are available so I can keep my grades up. .50 .30 .46 .20 .39
5. Depends on how well it helps me think about the material in different ways. .24 .69 .11 .20 .18
13. Depends on how well it challenges me to think. .22 .67 .29 .14 .16
17. Is based on how much I get to put my knowledge to work. .13 .63 .27 .39 .16 .12
10. Depends on how well this process improves my knowledge of the material. .34 .61 .27 .14 .13 .13
14. Depends on if the test reflects ideas we learned in the required readings. .45 .57 .31
4. Reflects whether the test accurately represents material covered in class. .42 .51 .41
38. Is related to its ability to trigger deep thinking about the subject. .22 .51 .47 .22
7. Reflects my comfort level with the test format. .39 .50 .36 .12 .14
11. Depends on if the test will be used to improve my department. .47 .26 .25 .32 .23
6. Is related to how people who understand the information do on it. .37 .39 .25 .30 .11
21. Is based primarily on what the grade can do for my career. .26 .14 .68 .26 .21
1. Depends on if my score on it will help me meet my goals. .13 .27 .65 .11 .19 .17
16. Depends on what the test can do for me. .13 .24 .63 .16 .24 .22
23. Is related to how well I will do on it. .48 .25 .61 .24
18. Is related to the way others will see my degree based on the test. .10 .58 .34 .34
2. Is influenced by whether prep materials are available to help me master the material. .37 .40 .50
3. Relates to how much effort it would take to prepare for it. .46 .15 .49 .24 .24 .23
31. Depends on what the test can do to help all students. .38 .27 .12 .60 .21 .13
24. Depends on if it can improve teaching. .30 .23 .20 .60 .25 .15
25. Depends on how well it indicates whether I am learning all I can. .39 .45 .23 .54
32. Depends on how well the test lives up to traditions in my field of study. .33 .19 .35 .51 .24 .20
46. Is related to what scores on it will do to improve our academic program. .32 .25 .35 .51 .28
36. Is based on how well it reflects the cutting edge in the field. .22 .32 .27 .48 .16 .31
28. Is related to its ability to tell me what I need to work on to learn more about. .44 .41 .22 .47
42. Depends on the respect that completing the test will give me. .12 .15 .32 .46 .35 .25
26. Reflects what my close friends are likely to think. .11 .78 .16
20. Is based primarily on what my classmates think about the test. .12 .13 .74
29. Is strongly affected by the beliefs and opinions of those whom I respect and admire. .21 .14 .35 .58 .21
15. Depends on the opinions of people who have been through the test before. .26 .18 .31 -.11 .54 .20
19. Is based on how well the test weeds out the slackers. .11 .30 .38 .53
3. Reflects what people I know at highly prestigious schools think. .18 .38 .51
8. Depends on how expensive it is to take. .17 .20 .16 .49 .42
43. Is related to how interesting it is to puzzle through. .32 .41 .45 .34
44. Is related to whether it is held in a convenient location. .14 .27 .26 .70
37. Is related to how long it takes me to get through it. .21 .22 .14 .16 .68
12. Depends on how much time it will take away from my other responsibilities. .13 .24 .34 .26 .55
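The rotation reported above was produced by statistical software (varimax with Kaiser normalization). For readers who want to reproduce the general procedure, a standard SVD-based varimax implementation is sketched below in Python with numpy; it omits the Kaiser normalization step and uses made-up loadings, so it will not reproduce the exact matrix above.

```python
import numpy as np

def varimax(loadings, tol=1e-6, max_iter=100):
    """Varimax rotation of a component loading matrix.

    Standard textbook algorithm: repeatedly take the SVD of the gradient
    of the varimax criterion and accumulate the orthogonal rotation.
    Sketch only; omits the Kaiser normalization used in the study's output.
    """
    L = np.asarray(loadings, dtype=float)
    n, k = L.shape
    R = np.eye(k)          # cumulative rotation matrix
    var_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / n)
        )
        R = u @ vt
        var_new = s.sum()
        if var_new - var_old < tol:  # criterion stopped improving
            break
        var_old = var_new
    return L @ R

# Hypothetical two-component loadings, not the scale's actual output:
example = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.3, 0.8]])
rotated = varimax(example)
```

Because the rotation is orthogonal, each item's communality (row sum of squared loadings) is unchanged; only the distribution of loading across components simplifies.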


APPENDIX B

Focus Group Summary of Items by Function and General Theme

Cognitive Learning more

Pros • It might motivate us to do better in our classes. 7/30/04 • It could motivate us to remember more, after the final. Sort of a second final. 7/30/04 • It might make me want to know that much more. 7/30/04 • I may end up reviewing more, remembering more, and ultimately learning more. 8/1/04 • I like the process of reviewing my notes for a test. I think I learn more. 8/1/04

Prestige Program reputation

Cons • Will these tests be used to prove that my program is a good one or a bad one? 8/1/04

Pros • It may improve the way we are looked at by other universities. 8/1/04 • It might help me get a better job or get into grad school. 8/1/04 • I’m in a competitive field where a school’s image is everything. Image is the difference

between writing a weekly advice column and working for National Geographic. 8/1/04 • In law it is so important what people think of you. Being able to say I came from a good

program will help my career. 8/1/04 • Being able to say I did well in a difficult program can make all the difference in a job

interview. 8/1/04

Utility Giving me feedback

Cons • Stressful- You wonder if you picked the right answer 6/23/04 • It makes me anxious. 8/1/04

Pros • You can determine your own proficiency in a subject. 7/30/04 • This lets you see what you really learned. It gives you feedback. 8/1/04 • It is a step completed. 7/30/04 • It gives you an idea about when people graduate. 7/30/04 • It gives you practice. 7/30/04 • Am I in a program that is giving me the tools to compete? This test might let me know

that. 8/1/04


Logistics Cons

• Inconvenience 6/23/04 • It would be more convenient if it were online, or not scheduled outside of class time.

7/30/04 • I don’t want to have to shift my schedule around. 8/1/04 • It would be more convenient if it were scheduled during class 8/1/04 • How much time will it take me? 8/1/04 • Do I have to schedule this while I am at work? I don’t want to explain to my boss. They

try to be understanding but it gets old after a while when I have to take time off. 8/1/04 • Could it be online so I don’t have to deal with traffic? 8/1/04 • I don’t like HAVING to take it. 7/30/04 • An added requirement makes me feel rebellious. 8/1/04

Measurement concerns Cons

• What will they put on the test? Who decides what’s important? 7/30/04 • I don’t like tests that don’t reflect what I learned in class. If I made A’s in class, then I

shouldn’t fail this test. 6/23/04 • It’s not so accurate- it doesn’t necessarily let you know if the person knows what really

works 7/30/04 • I may have taken the classes a long time ago. It doesn’t just measure my memory but

when I took the class. 7/30/04 • They should measure more than just content- the semester taken and time of day. There

are more issues than just what is retained. 7/30/04 • A test doesn’t necessarily measure how I am with patients. 8/1/04 • Yeah, you have the academics and that’s needed. But there is also how you are with

patients. A test doesn’t measure that. 8/1/04 • This is testing student knowledge, not necessarily what was taught. 6/22/04 • There is no way to differentiate between whether it wasn’t taught, it was taught poorly, or

it wasn’t learned. 6/22/04 • Is it repetitive? I have to take so many tests already. 8/1/04


Values Altruism

Pros • It won’t hurt anyone. 7/30/04 • It shows that the department cares about improving itself. 7/30/04 • It might make it better for future students. 7/30/04 • It may improve the program. 8/1/04 • It might help our department get better funding. 8/1/04 • Maybe it will replace the teaching evaluations. That may be fairer. People tend to rush

through those unless they have an ax to grind. This might be less biased. 7/30/04

Exclusion Cons

• Paper tests are limited when testing different types of people. 6/22/04 • Bias in regards to gender, race or age 6/23/04 • Tests are limited in what they measure. Is the test a fair one or does it exclude certain

people? 8/1/04 Quality Assurance

Cons • I don’t like spending classroom time to prepare for the test, instead of on subjects

normally covered. We did that for the MEAP’s and I really think we missed out on some interesting presentations to get ready for this boring test. 6/23/04

Pros • I like to be able to learn about the quality of my class. I like to take a test and learn that I

have the background I needed. It’s like a pat on the back and I know about the quality of the class I took. 8/1/04

• I think this is important. Because, I mean, somebody needs to be checking up on those professors and keeping track of what goes on- the quality of their teaching. 8/1/04

• Those teachers who just read from the book- somebody needs to know. If everyone in your class is failing the test, then something is wrong with the teaching. 8/1/04

• We need to know what has been taught. 6/22/04 • We need to know that people are equipped with tools and accurate information. 6/22/04 • And when it comes to board certification, maybe we need certification. I don’t

necessarily want a professor who can pass a test but not teach. As a reporter, I’m not going to kill someone with my lack of knowledge. But a doctor or a physical therapist, that’s more important. 8/1/04


APPENDIX C

Argument Strengths of Potential Bullet Points for the Message

Function Description Mean*
Cognitive 5. The risk of failing the exam is a challenge strong students would welcome. 4.52
Cognitive 31. The exams would increase student fear and anxiety enough to promote studying. 4.71
Cognitive 27. The exams improved scores on achievement tests at other universities. 4.96
Cognitive 22. The Educational Testing Service would not market the exams unless they had great educational value. 4.99
Cognitive 1. Exam difficulty is preparation for later competitions in life. 5.01
Cognitive 9. Comprehensive exams improve student long-term memory of classroom material. 5.01
Cognitive 32. The exams would allow students to compare their performance to that of students at other schools. 5.11
Cognitive 14. Reviewing for exit exams gives students a big picture. 5.22
Cognitive 18. Reviewing for the test sharpens student knowledge for their careers. 5.45
Cognitive 38. Exit exams give students feedback about the quality of their education. 5.49
Cognitive 12. Exit exams will give students feedback about their strengths and weaknesses. 5.60

Cognitive 24. Tests show me the areas I need to study. 5.65
Cognitive 35. Exams provide students a chance to test their knowledge. 5.73
Prestige 36. By not administering the exams, a tradition dating back to the ancient Greeks is being violated. 3.22
Prestige 2. Most of my friends think exit exams are a good idea. 3.62
Prestige 1. Parents wrote to administrators in support of the exit exams. 3.86
Prestige 23. If the exams were instituted, your university would become the American Oxford. 4.09
Prestige 6. My major advisor took a comprehensive exam and now has a prestigious academic position. 4.18
Prestige 15. The _National Accrediting Board of Higher Education_ would give the University its highest rating if the exams were instituted. 4.77
Prestige 33. Adopting the exams would allow the university to move up in the rankings. 4.82
Prestige 19. Departmental reputation has been improved by the initiation of exit exams. 4.88
Prestige 28. Prestigious universities use comps to maintain academic excellence. 5.04
Utilitarian 7. Exit exams cut costs by eliminating the need for extra tests. 4.47
Utilitarian 3. Alumni would contribute more if the exams were instituted, allowing a tuition increase to be avoided. 4.87
Utilitarian 34. Schools with the exams attract the best corporations to recruit students for jobs. 4.90

Utilitarian 25. Having taken an exit exam can help in a job interview. 4.97


Utilitarian 29. Salaries are higher for graduates of schools with these exams. 4.98
Utilitarian 16. Improved student funding is available at universities that have the exams. 5.13
Utilitarian 2. The university has arranged testing to be very flexible and convenient. 5.24
Utilitarian 37. University exit exams help students perform better on entrance exams for graduate work or job placement. 5.43
Utilitarian 11. Graduate and professional schools prefer undergraduates who have done well on this comprehensive exam. 5.59
Values 26. This exam is very accurate. 4.62
Values 4. Teaching quality has improved at schools with the exams. 4.76
Values 17. Students have found the test to be impartial and even-handed. 5.01
Values 13. Exit exams are fairer guides to instructor performance than teaching evaluations, which can be biased. 5.08
Values 21. The exam is high quality. 5.11
Values 3. This test measures necessary skills. 5.35
Values 8. This exam does not discriminate against students from diverse groups. 5.36

* To help the researchers create messages about the exams of roughly equivalent strength,

participants were asked to rate each of these messages about an exit exam on two nine-point

scales. The first measured “How strong do you find this argument?” The second measured,

“How important does this argument seem to you?” The number noted above is the mean of the

two scales for all the participants. N=410. This technique was effective. In the final study

(N=244), the mean difference between the two messages was only .641. The average rating of

the practical exam was 35.60, while the average rating of the learning exam was 34.96. Both

ranged from 6 to 54.
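The scoring described in this note can be sketched as follows. The function and the ratings below are hypothetical, illustrating only the arithmetic: each participant's strength and importance ratings for an argument are averaged, then averaged over participants.

```python
from statistics import mean

def argument_strength(ratings):
    """Mean of (strength, importance) rating pairs, averaged over participants.

    `ratings` is a list of (strength, importance) tuples, one per participant,
    each on a nine-point scale. Hypothetical data, not the study's.
    """
    return mean((s + i) / 2 for s, i in ratings)

ratings = [(5, 6), (4, 4), (6, 5)]
print(round(argument_strength(ratings), 2))  # → 5.0
```

With such per-argument means in hand, arguments of roughly equal strength can be grouped into messages, which is how the near-equal message totals above (35.60 vs. 34.96) were achieved.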


APPENDIX D

Items of the Additional Scales Included in the Analysis

Function Served

Scale Item(s)

Utilitarian Extrinsic Scale from the Motivated Strategies for Learning Questionnaire (used with permission from Karabenick, personal communication, 2004)

• My main goal in this course is to get a good grade.

Items suggested by Lichtman (personal communication, 2005)

• I am interested in using my education to get a good job or advance in my career.

• My family, friends or teachers encouraged me to come to college.

Cognitive Approach Mastery from the Motivated Strategies for Learning Questionnaire (used with permission from Karabenick, personal communication, 2004)

• An important reason why I do the work in this course is because I like to learn new things.

• I like coursework best when it really makes me think.

• In this class, I prefer course material that arouses my curiosity even if it is difficult to learn.

• When I have the opportunity in this class, I choose course assignments that I can learn from even if they don't guarantee a good grade.

• I like course work that I learn from, even if I make a lot of mistakes.

Classroom Approach Mastery from the Motivated Strategies for Learning Questionnaire (used with permission from Karabenick, personal communication, 2004)

• In this course, learning new ideas and concepts is very important.

• An important reason why I do the work in this course is because I like to learn new things.

• In this course, how much you improve is really important.

• In this course, the instructor thinks how much you learn is more important than your grades.

• I like course work best when it really makes me think.

• In a class like this, I prefer course material that arouses my curiosity, even if it is difficult to learn.

• When I have the opportunity in this class, I choose course assignments that I can learn from even if they don't guarantee a good grade.


Task Goal Orientation, adapted from the Goal Orientation Scale by Midgley, Kaplan, Middleton & Maehr, 1998, with changes to make it age appropriate (used with permission from Middleton, personal communication, 2007)

• I like school work that I’ll learn from, even if I make a lot of mistakes.

• An important reason why I do my school work is because I like to learn new things.

• I like school work best when it really makes me think.

• An important reason why I do my work in school is because I want to get better at it.

• I do my school work because I’m interested in it.

• An important reason I do my school work is because I enjoy it.

Items suggested by Lichtman (personal communication, 2005)

• I am interested in learning new things. • I wanted to come to college.


APPENDIX E

Manipulation Checks on Message Perception

Univariate Repeated Measures Analysis on the Practical Questions
Within subjects: Message, df = 1, F = 24.584**; Error, df = 239 (28.704)
Dependent variable means: Practical Message 19.146, Learning Message 16.721

Univariate Repeated Measures Analysis on the Learning Questions
Within subjects: Message, df = 1, F = 15.583**; Error, df = 239 (35.735)
Dependent variable means: Learning Message 18.638, Practical Message 16.483

Note. Values enclosed in parentheses represent mean square errors. * p<.05. ** p<.01.
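With only two within-subjects conditions, a repeated-measures F of this kind equals the square of a paired-samples t statistic. A minimal sketch with hypothetical ratings, not the study's data:

```python
import math

def paired_f(cond_a, cond_b):
    """F (= t squared) for a two-condition within-subjects comparison."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    m = sum(diffs) / n                                  # mean difference
    var = sum((d - m) ** 2 for d in diffs) / (n - 1)    # variance of differences
    t = m / math.sqrt(var / n)                          # paired-samples t
    return t ** 2

# Hypothetical per-participant ratings of the two messages:
practical = [19, 21, 18, 20, 17, 22]
learning = [16, 18, 17, 17, 15, 19]
print(round(paired_f(practical, learning), 2))
```

The within-subjects error term in parentheses above plays the role of the difference-score variance in this two-condition special case.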


APPENDIX F

Effect Sizes (Eta Squared) for the Relationship Between Ad Attitude and Each Non-cognitive Variable

Practical Ad | Learning Ad
Utilitarian Attitude Function 0.069 | Cognitive Attitude Function 0.084
Lichtman's Student Questions 0.014 | Lichtman's Student Questions 0.030
Extrinsic from the MSLQ 0.006 | Task Goal Orientation 0.156
Extrinsic from the MSLQ 0.005 | Approach Mastery 0.122
Extrinsic from the MSLQ 0.004 | Class Approach Mastery 0.052
ASSIST Surface Learning 0.002 | ASSIST Deep Learning 0.020

Note: Effect sizes demonstrate the strength of the linear trend analyses of the matching effect.

K=1


APPENDIX G

Order Effects for the Practical Exam
Between subjects: Order, df = 1, F = 3.095, NS; Error, df = 238 (146.844)
Attitude toward the practical exam: Order 1 (Intellectual First) M = 34.50, SD = 1.00, N = 146; Order 2 (Practical First) M = 37.32, SD = 1.25, N = 94

Order Effects for the Intellectual Exam
Between subjects: Order, df = 1, F = 1.939, NS; Error, df = 239 (145.600)
Attitude toward the intellectual exam: Order 1 (Intellectual First) M = 35.84, SD = 1.00, N = 146; Order 2 (Practical First) M = 33.62, SD = 1.24, N = 95

Note. Values enclosed in parentheses represent mean square errors.


APPENDIX H

Demographics

Gender
Male 44
Female 189
Undeclared 11

Ethnicity
Undeclared 73
African American 42
Asian 8
Biracial 2
Caucasian 86
East Indian 9
Hispanic 11
Middle Eastern 12
Native American 1

Total 244


APPENDIX I

Item Preference in the Messages

Item from the Practical Message (Number Who Preferred It Over the Other Items)

The Iowa helps students to perform better on entrance exams for graduate work or job placement. 111
Top corporations prefer undergraduates who have done well on this exit exam. 90
The university has arranged testing to be very flexible and convenient. 21
Improved student funding is available at universities that have the exams. 9

Item from the Learning Message (Number Who Preferred It Over the Other Items)

The Ohio gives students feedback about their strengths and weaknesses. 107
Exit exams like this one improve students’ long term memory of classroom material. 61
The Ohio gives students feedback about the quality of their education. 34
The exam provides students a chance to test their knowledge. 27


APPENDIX J

HIC APPROVAL


REFERENCES

Adorno, T.W., Frenkel-Brunswik, E., Levinson, D.J., & Sanford, R.N. (1950). The

Authoritarian Personality. New York: Harper.

Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In D. Albarracín (Ed.), The Handbook of Attitudes (pp. 173-224). Mahwah, NJ: Lawrence Erlbaum Associates.

American Federation of Teachers. (2003). Student Persistence in College: More Than

Counting Caps and Gowns. http://www.aft.org/pubs-reports/higher_ed/student_persistence.pdf

Anderson, Deborah S., & Kristiansen, Connie M. (2001). Measuring attitude functions.

The Journal of Social Psychology: 130, 419-421.

Biggs, J., Kember, D., & Leung, D. Y. P. (2001). The revised two-factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133-149.

Biggs, J. B. (1970). Faculty Patterns in Study Behaviour. Australian Journal of

Psychology, 22, 161-174.

Biggs, J. B. (1976). Dimensions of study behaviour. British Journal of Educational Psychology, 46, 68-80.

Bishop, F. (1967). The anal character: A rebel in a dissonance family. Journal of

Personality and Social Psychology, 6, 23-36.

Jarvis, W. B. G., & Petty, R. E. (1996). The need to evaluate. Journal of Personality and Social Psychology, 70, 172-194.

Brim, O.G.; Glass, J.R.; Neulinger, J.; Firestone, I; & Lerner (1969). American Beliefs &

Attitudes about Intelligence. New York: Russell Sage Foundation.


Brown, W.F. & Holtzman, W.H., (1966). Manual of the Survey of Study Habits and

Attitudes, New York: Psychological Corporation.

Buchanan, Tom. (2002). Online Assessment: Desirable or Dangerous? Professional

Psychology - Research & Practice. 33:148-154.

Cacioppo, J. T., Harkins, S. G., & Petty, R. E. (1981). Cognitive Responses in Persuasion (pp. 31-54). Hillsdale, NJ: Erlbaum.

Cacioppo, J. T., Petty, R. E., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48, 306-307.

Cacioppo, John T., Kao, C.F., Petty, R.E., & Rodriguez, R. (1986). Central and

Peripheral Routes to Persuasion: An Individual Difference Perspective. Journal of Personality

& Social Psychology, 51, 1032-1043.

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116-131.

Cavanagh, Sean. (2002). Exit-exam trend prompts scrutiny of consequences. Education

Week: 22, 1-3.

Check, J. F. (1982). Relative merits of test items as perceived by college students. College Student Journal, 16, 100-104.

Christie, Richard. (1991). Authoritarianism and related constructs. In John P. Robinson,

Phillip R. Shaver, and Lawrence S. Wrightsman, (Eds.) Measures of Personality and Social

Psychological Attitudes. San Diego: Harcourt Brace Jovanovich. 501-571.

Cohen, J. (1992). Quantitative methods in psychology. Psychological Bulletin, 112,

155-159.


Cohen, Jacob. (1969). Statistical Power for the Behavioral Sciences. New York:

Academic Press.

Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles. New York: John Wiley and Sons.

Devine, P.G. (1995). Are racial stereotypes really fading? The Princeton Trilogy

revisited. Personality and Social Psychology Bulletin: 21, 1139-1150.

Dweck, Carol S. & Leggett, Ellen L.(1988). A Social-Cognitive Approach to Motivation

and Personality. Psychological Review. 95, 256-273.

Eagly, A. H. (1992). Uneven progress: Social psychology and the study of attitudes. Journal of Personality and Social Psychology, 63, 693-710.

Eagly, A.H. & Chaiken, S. (1998). Attitude structure and function. In D.T. Gilbert, S.T. Fiske, & G. Lindzey (Eds.), The Handbook of Social Psychology. Boston: McGraw-Hill, 269-322.

Ennis, Richard. & Zanna, Mark P. (2000). Attitude function and the automobile. In G.R.

Maio and J.M. Olson, (Eds.) Why We Evaluate: Functions of Attitudes. Mahwah, NJ: Erlbaum,

395-416.

Entwistle, N. & McCune, V. (2004). The conceptual bases of study strategy inventories. Educational Psychology Review, 16, 299-409.

Entwistle, N. & Waterston, S. (1988). Approaches to studying and levels of processing in

university students. British Journal of Educational Psychology, 58, 258-265.

Entwistle, N.J. & Wilson, J.D. (1977). Degrees of Excellence: The Academic Achievement Game. London: Hodder and Stoughton.

Entwistle, N.J. & Entwistle, D. (1970). The relationships between personality, study

methods and academic performance. British Journal of Educational Psychology, 40, 132-143.

Entwistle, N.J. & Ramsden, P. (1983). Understanding Student Learning. New York:

Nichols.

Entwistle, N.J. (1987). A model of the teaching-learning process. In J.T.E. Richardson, M.W. Eysenck, & D. Warren Piper (Eds.), Student Learning: Research in Education and Cognitive Psychology. New York: Taylor and Francis.

Entwistle, N. J., Tait, H. & McCune, V. (2000). Scoring Key for the Approaches and

Study Skills Inventory for Students (ASSIST). Retrieved online at

http://www.ed.ac.uk/etl/questionnaires/ASSIST.pdf on 9/29/2006.

Fazio, R.H. (1990). A practical guide to the use of response latency in social

psychological research. In Hendrick, C. and Clark, M.S. (Eds.) Research Methods in Personality

and Social Psychology. Newbury Park, CA: Sage Publications, 74-97.

Firestone, I., Kaplan, K., & Moore, M. (1974). The attitude gradient model. In A. Harrison (Ed.), Explorations in Psychology. Monterey: Brooks/Cole, 248-26

Fishbein, M. & Ajzen, I. (1972). Attitudes and opinions. Annual Review of Psychology,

23, 487.

Four More Years: How Will Colleges Fare? Chronicle of Higher Education, 51, p 20.

Freud, S. (1930). The Ego and the Mechanisms of Defense. New York: International

Universities Press.

Greenwald, A.G. (1989). Why attitudes are important: Defining attitude and attitude theory 20 years later. In A.R. Pratkanis, S.J. Breckler, & A.G. Greenwald (Eds.), Attitude Structure and Function. Hillsdale, NJ: Erlbaum, 429-440.

Hathaway, S.R. & McKinley, J.C. (1942). A multiphasic personality schedule

(Minnesota): III The measurement of symptomatic depression. Journal of Psychology, 14, 73-84.

Heider, F. (1958). The Psychology of Interpersonal Relations. New York: Wiley.

Herek, G.M. (1986). The instrumentality of attitudes: Toward a Neofunctional theory.

Journal of Social Issues, 42, 99-114.

Herek, G.M. (1987). Can functions be measured? A new perspective on the functional

approach to attitudes. Social Psychology Quarterly, 50, 285-303.

Herek, G.M. (2000). The social construction of attitudes: Functional consensus and

divergence in the US public's reactions to AIDS. In G. Maio & J. Olson (Eds.), Why We

Evaluate: Functions of Attitudes. Mahwah, NJ: Lawrence Erlbaum. 325-364.

Higgins, L. (2004). High schoolers' apathy could help kill MEAP. Detroit Free Press,

May 3, 2004.

Hirschman, E.C., Scott, L., & Wells, W.B. (1998). Journal of Advertising, 27, 33-50.

Hughes, S. & Firestone, I. (1998). The Effect of Mood on the Processing, Representation,

and Potency of Persuasive Communications. Dissertation in Social Psychology. Detroit: Wayne

State University.

Johar, J.S. & Sirgy, M.J. (1991). Value-expressive versus utilitarian advertising appeals: When and why to use which appeal. Journal of Advertising, 20, 23-33.

Karabenick, S.A. (2003). Seeking help in large college classes: A person-centered approach. Contemporary Educational Psychology, 28, 37-58.

Katz, D. (1960). The functional approach to the study of attitudes. Public Opinion Quarterly, 24, 163-204.

Kelman, H.C. (1958). Compliance, identification, and internalization: Three processes of

attitude change. Journal of Conflict Resolution, 2, 57-78.

Kelman, H.C. & Hovland, C.I. (1953). "Reinstatement" of the communicator in delayed measurement of opinion change. Journal of Abnormal and Social Psychology, 48, 327-335.

Kline, Paul. (1994). An Easy Guide to Factor Analysis. New York: Routledge.

Lane, Kristina. (2004). Accountability Plan for Higher Education Would Be a Mistake,

Report Says. Black Issues in Higher Education, 21, 6.

Lowrey, T.M., Englis, B.G., Shavitt, S., & Solomon, M.R. (2001). Response latency verification of consumption constellations: Implications for advertising strategy. Journal of Advertising, 30, 30-39.

Maio, G.R. & Olson, J.M. Eds. (2000). Why We Evaluate: Functions of Attitudes.

Mahwah, NJ: Erlbaum.

Maio, G.R. (1998). Values as truisms: Evidence and implications. Journal of Personality and Social Psychology, 74, 294-311.

McCune, V. & Entwistle, N. (2000). Patterns of response to an approaches to studying inventory across contrasting groups and contexts. European Journal of Psychology of Education, 15, 33-48.

Midgley, C., Kaplan, A., Middleton, M., & Maehr, M.L. (1998). The Development and

Validation of Scales Assessing Students’ Achievement Goal Orientations. Contemporary

Educational Psychology, 23, 113–131.

Millar, M.G. & Millar, K.U. (1990). Attitude change as a function of attitude type and argument type. Journal of Personality and Social Psychology, 59, 217-228.

Mischel, W. & Shoda, Y. (1995). A cognitive-affective system theory of personality: Reconceptualizing situations, dispositions, dynamics, and invariance in personality structure. Psychological Review, 102, 246-268.

Murphy, K.R. & Myors, B. (1998). Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests. Mahwah, NJ: Lawrence Erlbaum.

Pettigrew, T.F. & Meertens, R.W. (1995). Subtle and blatant prejudice in Western Europe. European Journal of Social Psychology, 25, 57-75.

Petty, R.E., Harkins, S.G., & Williams, K.D. (1980). The effects of group diffusion of cognitive effort on attitudes: An information processing view. Journal of Personality and Social Psychology, 38, 81-92.

Petty, R. E., & Wegener, D. T. (1998). Matching versus mismatching attitude functions:

Implications for scrutiny of persuasive messages. Personality and Social Psychology Bulletin,

24, 227-240.

Petty, R.E., Tormala, Z.L., Hawkins, C., & Wegener, D.T. (2001). Motivation to think and order effects in persuasion: The moderating role of chunking. Personality and Social Psychology Bulletin, 27, 332-344.

Petty, R.E. & Cacioppo, J.T. (1981). Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York: Springer-Verlag, 54-59.

Petty, R.E. & Wegener, D.T. (1998). Attitude change: Multiple roles for persuasion variables. In D.T. Gilbert, S.T. Fiske, & G. Lindzey (Eds.), The Handbook of Social Psychology. Boston: McGraw-Hill, 323-374.

Petty, R.E. & Cacioppo, J.T. (1979). Issue involvement can increase or decrease persuasion by enhancing message-relevant cognitive responses. Journal of Personality and Social Psychology, 37, 1915-1926.

Pintrich, P. R., Smith, D. A. F., Garcia, T., and McKeachie, W. J. (1991). A Manual for

the Use of the Motivated Strategies for Learning Questionnaire (MSLQ), National Center for

Research to Improve Postsecondary Teaching and Learning, Ann Arbor, MI.

Pomerantz, E. M., Chaiken, S., Tordesillas, R.S. (1995). Attitude Strength and Resistance

Processes. Journal of Personality and Social Psychology, 69, 408-419.

Pool, G.J., Wood, W., & Leck, K. (1998). The self-esteem motive in social influence: Agreement with valued majorities and disagreement with derogated minorities. Journal of Personality and Social Psychology, 75, 967-975.

Pratkanis, A.R., Breckler, S.J., & Greenwald, A.G. Eds. (1989). Attitude Structure and

Function. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Priester, J.R. & Petty, R.E. (1996). The gradual threshold model of ambivalence: Relating the positive and negative bases of attitudes to subjective ambivalence. Journal of Personality and Social Psychology, 71, 431-449.

Ramsden, P. (1992). Learning to Teach in Higher Education. New York: Routledge, 44-85.

Ramsden, P. (1998). Learning to Lead in Higher Education. New York: Routledge.

Ohmann, R. (2000). Historical reflections on accountability. Academe, 86, 24-30.

Richard, F. D., & Bond Jr., C.F. (2001). Empirically-based Effect Size Estimates from

One Hundred Years of Social Psychology. 13th Annual American Psychological Society

Convention, Toronto, Ontario, Canada.

Richardson, J.T.E. (2000). Researching Student Learning: Approaches to Studying in Campus-Based and Distance Education. Buckingham, UK: SRHE/Open University Press.

Richardson, J.T.E. (2004). Methodological issues in questionnaire-based research on

student learning in higher education. Educational Psychology Review, 16, 347-358.

Richardson, John T. E. (2005). Instruments for Obtaining Student Feedback: A Review of

the Literature. Assessment and Evaluation in Higher Education, 30, 387-415.

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the

attribution process. In L. Berkowitz (Ed.), Advances in experimental social psychology (vol. 10).

New York: Academic Press.

Rutherford, Andrew. (2001). Introducing ANOVA and ANCOVA: A GLM Approach.

Thousand Oaks, CA: Sage Publications.

Sarnoff, I. (1960). Psychoanalytic Theory and cognitive dissonance. In R.P. Abelson, E.

Aronson, W.J. McGuire, T.M. Newcomb, M.J. Rosenberg, & O.H. Tannenbaum (Eds.), Theories

of Cognitive Consistency: A Sourcebook. Chicago: Rand McNally. 192-200.

Sarnoff, I. & Corwin, S.M. (1959). Castration anxiety and the fear of death. Journal of Personality, 27, 374-385.

Schmeck, R.R. & Phillips, J. (1982). Levels of processing as a dimension of difference

between individuals. Human Learning, 1, 95-103.

Schmeck, R.R. (1983). Learning styles of college students. In R. Dillon and R.R.

Schmeck, (Eds.) Individual Differences in Cognition, 233-279.

Schmeck, R.R., Geisler-Brenstein, E., and Cercy, S.P. (1991). Self-concept and learning:

The revised inventory of learning processes. Educational Psychology, 11, 343-362.

Schmeck, R.R., Ribich, F.D. & Ramanaiah, N. (1977). Development of a self-report

inventory for assessing individual differences in learning processes. Applied Psychological

Measurement, 1, 416-431.

Scoring Key for the Approaches and Study Skills Inventory for Students. (1997). Centre for Research on Learning and Instruction, p. 6. Accessed online at http://www.ed.ac.uk/etl/questionnaires/ASSIST.pdf

Shavelson, Richard J.; Webb, Noreen M.; & Rowley, Glenn L. (1989). Generalizability

theory. American Psychologist, 44, 922-932.

Shavitt, S. & Nelson, M. (2002). The role of attitude functions in persuasion and social judgment. In J.P. Dillard & M. Pfau (Eds.), The Persuasion Handbook: Theory and Practice. Thousand Oaks, CA: Sage, 137-153.

Shavitt, S. & Nelson, M.R. (2000). The social identity function in person perception: Communicated meanings of product preferences. In G.R. Maio & J.M. Olson (Eds.), Why We Evaluate: Functions of Attitudes. Mahwah, NJ: Lawrence Erlbaum Associates, 37-57.

Shavitt, S. (1989). Operationalizing functional theories of attitude. In Pratkanis, A.R.,

Breckler, S.J, Greenwald, A.G. (Eds.) Attitude Structure and Function. Hillsdale, New Jersey:

Lawrence Erlbaum Associates, 311-338.

Shavitt, S. (1990). The role of attitude objects in attitude functions. Journal of

Experimental Social Psychology, 26, 124-148.

Shavitt, S. (1992). Evidence for predicting the effectiveness of value-expressive versus utilitarian appeals: A reply to Johar and Sirgy; Comment. Journal of Advertising, 21, 47-51.

Shavitt, S., Lowrey, T.M., & Han, S. (1992). Attitude functions in advertising: The interactive role of products and self-monitoring. Journal of Consumer Psychology, 1(4), 337-364.

Shavitt, S., Lowrey, P., & Haefner, J. (1998). Public attitudes toward advertising: More favorable than you might think. Journal of Advertising Research, 38, 7-22.

Shimp, Terrence A., Hyatt, Eva, M., & Snyder, David J. (1991). A critical appraisal of

demand artifacts in consumer research. Journal of Consumer Research: 18, 273-283.

Shimp, Terrence A., Hyatt, Eva, M., & Snyder, David J. (1993). A critique of Darley and

Lim’s “Alternative Perspective.” Journal of Consumer Research: 20, 496-501.

Shoda, Y., Mischel, W. (1996). Toward a Unified, Intra-Individual Dynamic Conception

of Personality. Journal of Research in Personality, 30, 414-428.

Sirgy, M.J., Grewal, D., Mangleburg, T.F., Park, J., Chon, H., Claiborne, C.B., Johar, J.S., & Berkman, H. (1997). Assessing the predictive validity of two methods of measuring self-image congruence. Journal of the Academy of Marketing Science, 25, 229-241.

Sirgy, M.J., Johar, J.S., Samli, A.C., & Claiborne, C.B. (1991). Self-congruity versus functional congruity: Predictors of consumer behavior. Journal of the Academy of Marketing Science, 19, 363-375.

Smith, M.B., Bruner, J.S., & White, R.W. (1956). Opinions and Personality. New York: John Wiley & Sons.

Snyder, M. & DeBono, K.G. (1985). Appeals to image and claims about quality: Understanding the psychology of advertising. Journal of Personality and Social Psychology, 49, 586-597.

Snyder, M. (1974). The monitoring of expressive behavior. Journal of Personality and

Social Psychology, 30, 526-537.

Snyder, M., & Gangestad, S. (1986). On the nature of self-monitoring: Matters of

assessment, matters of validity. Journal of Personality and Social Psychology, 51, 125-139.

Steele, Claude M. (1997). A Threat in the Air: How Stereotypes Shape Intellectual

Identity and Performance. American Psychologist, 52, 613-629.

Tabachnick, Barbara G. & Fidell, Linda S. (2001). Computer-assisted Research Design

and Analysis. Boston: Allyn and Bacon.

Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Walton, R. (1982). Comments by three students. In S.B. Anderson & L.V. Coburn (Eds.), Academic Testing and the Consumer: New Directions for Testing and Measurement, 16. San Francisco: Jossey-Bass, December.

Watt, C., Lancaster, C., Gilbert, J., & Higerd, T. (2004). Performance funding and quality enhancement at three research universities in the United States. Tertiary Education and Management, 10, 61-72.

Weschler, I.R. (1951). Problems in the use of indirect methods of attitude measurements. Public Opinion Quarterly, 15, 133-138.

Wildt, A.R. & Ahtola, O.T. (1978). Analysis of Covariance. Thousand Oaks, CA: Sage Publications.

Woodward, L.S. & Firestone, I.J. (2003). A Functional Approach to Persuasion: Thesis

for the completion of Master of Arts in Psychology (Social). Detroit: Wayne State University.

Young, J. & Lakey, B. (2003). Evaluating Competency Associated with Completing the

Psychology Major at Wayne State University. Report to the Psychology Department’s

Undergraduate Committee. Detroit, Wayne State University.

Zajonc, R.B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology Monograph Supplement, 9(2, Pt. 2), 1-27.

ABSTRACT

IMPROVING ATTITUDES ABOUT EXIT EXAMS THROUGH A BETTER UNDERSTANDING OF THE EDUCATIONAL GOALS AND MOTIVATIONAL FUNCTIONS THAT UNDERLIE THEM

by

LAURA S. WOODWARD

May 2007

Advisor: Dr. Ira Firestone

Major: Psychology

Degree: Doctor of Philosophy

A new, direct method of assessing the strength of the motivational functions that may underlie attitudes toward exit exams was developed and experimentally validated. Those with

high motivational scores on the cognitive scale favored the learning-framed message regarding

exit exams significantly more than those with lower scores. Furthermore, scores on the utilitarian

motivational scale were unrelated to attitude toward the cognitive message. Conversely, those

with high scores on the utilitarian scale favored the practically-framed message significantly

more than those with lower scores, while scores on the cognitive scale were unrelated to attitude

toward the practical message. In addition, the direct measure was compared to a proxy measure from the educational field, the ASSIST deep and surface learning scales, which correspond respectively to the cognitive and utilitarian scales. The direct method of assessment was a better predictor of message favorability than the indirect method.

AUTOBIOGRAPHICAL STATEMENT

LAURA WOODWARD

EDUCATION

Ph.D. graduation expected May 2007. Wayne State University, Detroit, Michigan.
• Dissertation: Improving Attitudes about Exit Exams through a Better Understanding of Educational Goals and Motivational Functions that Underlie Them, proposed October 26, 2005.

Master of Arts, awarded Winter 2003. Wayne State University, Detroit, Michigan.
• Master's thesis: A Functional Approach to Persuasion, defended December 2002.

EXPERIENCE

Learning Specialist, Academic Success Center, Wayne State University, Detroit, Michigan. 2000-present.
• Trained tutors, staff, and faculty in ways to improve college student learning.
• Improved departmental reporting of student retention trends by encouraging empirical strategies in current staff research efforts.
• Assisted students through classes, workshops, and individual meetings to improve their strategies for academic success.

Graduate Teaching Assistant, Department of Psychology. 1998-2000.
• Taught classes in Health Psychology, Statistics, and Social Psychology.
• Pioneered new teaching methods, including improved visual aids and online practice opportunities.

PUBLICATIONS

• Reeves, R. & Woodward, L. (2006). Reconceptualizing at risk: A discussion of findings. E-Source for College Transitions, 4(1), 3-5. Retrieved 9/24/2006 from http://www.sc.edu/fye/esource
• Woodward, L. (2006, May). [Review of the book Public Education in New Mexico]. Education Review. Retrieved 4/28/2006 from http://edrev.asu.edu/brief/index.html

PRESENTATIONS

• Reaves, R., Woodward, L., & Collins-Eaglin, J. (November 6-8, 2005). Retaining the Academically-Talented Student. Presented at the 12th National Conference on Students in Transition, Costa Mesa, California.
• Schoeberlein, S. & Woodward, L. (November 6-8, 2005). Managing Test Anxiety: A Mindfulness-Based Cognitive and Skills-Building Intervention and Evaluation. Presented at the 12th National Conference on Students in Transition, Costa Mesa, California.
