
Transforming the Experience of Students through Assessment (TESTA)

Case Study of a Change Process

BA (Hons) Primary Education, University of Winchester

Tansy Jessop, Penny Lawrence and Helen Clarke, July 2011

Context

The BA (Hons) Primary Education degree is a large, complex programme, recruiting about 200 students in each year of a four-year degree. It also has a three-year accelerated route. The degree in Initial Teacher Education (ITE) is accredited by the Training and Development Agency for Schools (TDA), inspected by Ofsted, and leads to a degree award with a recommendation of Qualified Teacher Status. About 55 lecturers teach on the programme, some of them part-time. The student demographic is mainly female, white British and from the south of England. The programme has strong practice elements within school placements, placing the relationship between theory and practice at the heart of the student experience.

The BA Primary team requested to take part in the TESTA process, a methodology used to map programme-wide assessment in the TESTA National Teaching Fellowship Project (www.testa.ac.uk). Eight programmes from four universities are participating in the original TESTA research. The philosophy of TESTA is premised on the view that changing modular assessment is insufficient to address programme-wide features of assessment on a whole degree. A whole-programme approach is intended to ensure coherence, progression and sequencing of assessment, and to enable team members to work with each other, across modules, for the benefit of student learning on the whole programme.

TESTA research was one of many development activities the BA Primary team was pursuing to evaluate and further develop the programme in preparation for a new version of the degree. Importantly, the team was in the process of pausing to evaluate the first four completed years of a new BA Primary programme, and TESTA provided one angle on this broader evaluation. Other drivers for participation from colleagues on the BA Primary were a growing desire to value the student voice; perceptions of assessment overload; the dominance of a focus on modular assessment, sometimes to the exclusion of in-class work; and perceptions of students’ increasingly strategic approaches to learning.

Methodology

There are three main methods of data collection in the TESTA process, which are triangulated in a case profile. The methods draw on both qualitative and quantitative research traditions. The purpose of the research is to construct a rich and detailed picture of the typical final year experience of assessment on the whole programme, providing evidence for teams to make targeted interventions. The list below summarises the three methods.


- Programme Audit: maps features of assessment using documentary evidence and a workshop-style session with the programme leader and key members of the team.

- Assessment Experience Questionnaire (AEQ): a 28-question survey developed by Professor Graham Gibbs, comprising nine assessment and study behaviour scales of three questions each, linked to relevant pedagogic research, plus one question about overall satisfaction.

- Focus Groups: held with final year students, exploring their perceptions of assessment and feedback.

A key aspect of the TESTA process is presenting findings to programme teams, and creating space for these to be contextualised by ‘insider’ teaching experiences. This social process is central to bringing about change because it engenders critical dialogue about evidence related to the pedagogy of assessment on the programme. Course teams have found the TESTA data compelling and useful, prompting stimulating discussion of programme-level issues in ways that had largely not happened before. The methodology entails devising changes based on the evidence, which may be evaluated by a further iteration of TESTA research.

For the sake of comparability with other undergraduate degree programmes, we decided to restrict the audit of the four-year BA Primary programme to assessment up to the end of the third year. We administered questionnaires and conducted focus groups with final year students, i.e. those in the fourth year of the degree, to capture student perceptions towards the end of their studies.

Methodology in action on the BA Primary

1. The researchers mapped assessment with key programme players (the audit).

2. They statistically analysed 98 AEQ questionnaires from final year students.

3. They conducted six focus groups with 37 final year students.

4. They discussed case findings with three programme team groups (circa 45 lecturers).

5. The Director of Initial Teacher Education, the Programme Leader and the Research Officer devised interventions, working with the programme team.

Key findings on the BA Primary

About 50% of the final year cohort (98 students) completed the Assessment Experience Questionnaire (AEQ) in May 2010. The AEQ measures student perceptions of assessment on a Likert scale where 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. A ‘good’ score is 4 and above, i.e. students agree with the statement; the exception is the Surface Approach scale, where one would hope that students disagreed (a score of 2 or below). Scores on the BA Primary AEQ were similar to those of the eight programmes from TESTA’s four partner universities in the main research sample. The only score at the equivalent of 4 = agree was the Surface Approach score, which indicated that students disagreed that they were taking a surface approach to their studies. Features of the BA Primary which stood out are reflected in the following summary:

AEQ highlights: summary of issues

- Quantity of Effort: students on the BA Primary believed they were working harder and more regularly than students on other TESTA programmes (3.83, against the TESTA mean of 3.69).

- Coverage of Syllabus: more than on other TESTA programmes, they felt that the assessment pattern helped them to cover all the topics and not ‘spot’ (3.41, against the TESTA mean of 2.85).

- Appropriate Assessment: they felt that the assessment took them beyond memorisation and surface approaches to learning (3.86, against 3.79).

- Quantity and Quality of Feedback: but they did not rate the quantity and quality of feedback highly (2.83, against 3.38).

- Use of Feedback: nor did they feel encouraged to use their feedback (3.16, against 3.79).

- Clear Goals and Standards: and they were not clear what the assessment required of them, or what a ‘good’ piece of work looked like (2.88, against the TESTA mean of 3.46).

- Overall Satisfaction: students sat on the fence (3.50).

The audit data provided plausible explanations for some of these responses, but not all. Six focus groups with students provided texture and explanatory information. In the next section we look at the three key areas where students identified problems, and at the ways in which the programme team has sought to address them: i) quantity and quality of feedback; ii) use of feedback; and iii) clear goals and standards.

The rationale for focusing on the three areas which raised challenges for the BA Primary programme is that these areas have prompted changes in practice; this case study focuses on that change process. Clearly, there are many areas where the BA Primary is exemplary, and a case study of change is not intended to cast a shadow over excellent practice, but to show the value of good practitioners working with programme-level data to improve practice. The complexity of making these changes, in the context of an existing multi-layered and intricate system that is familiar and works reasonably well, should not be underestimated.

i) Quantity and quality of feedback (QQF)

Before TESTA

Students’ apparent dissatisfaction with the quantity and quality of feedback (AEQ score = 2.83) is puzzling, given the high volumes of written and oral feedback they receive. The Programme Audit showed that BA Primary students can expect to have received 7,996 words of written feedback and 7 hours 49 minutes of oral feedback by the end of the third year of the degree. On average, a student receives just over 200 words of written feedback and about 13 minutes of oral feedback on each assessed university task. The BA Primary is investing significant staff time in something which seems not to be highly valued by students. There are several plausible explanations for this anomaly in the data.

There is a huge discrepancy between the amount and type of oral feedback students get on school experience and at university. During school placement, oral feedback is almost immediate, and much more extensive. The AEQ was administered during university time, which may have led students to focus on the low volumes at university rather than the high volumes at school. Reasons for low take-up of oral feedback during the university part of the degree are speculative. The large size of the programme may be a factor. Students may also fail to recognise oral feedback when it is given, or lack the confidence to be proactive in taking up tutorial opportunities. The administration of returned assessments by the faculty office may distance lecturers from students at the time of return. Although most tutors make sure they are in their offices on the published return date and provide opportunities for students to sign up for a tutorial, only a minority of students take up the opportunity.

Students described variations in the quality of feedback they receive, linked to the tone of feedback, its developmental value, and how personal or standardised it was. It was striking how emotionally invested students were in their feedback, aligning with the view that in mass higher education, with increasing student-to-staff ratios, students place more ‘relationship’ value on feedback than on other dimensions of learning and teaching (Nicol 2010).

In focus groups, students reflected on variations in the tone and style of feedback:

With some markers, your feedback will be positive with a few targets … whereas others go into it looking for what’s wrong with it and just give you ‘This isn’t good’ all the way through.

There are some tutors you want to mark your essay and they’re really good at feeding back and being really positive and they will set good targets that you can achieve, which is important.

Some people had really good tutors who gave quite positive feedback, but still gave you good tips and things to do, whereas other tutors - I know I had one particular tutor who wasn’t particularly helpful.

Some described their own tendency to remember only negative comments:

It’s always the negatives you remember, as we’ve all said. It’s always the negatives. We hardly ever pick out the really positive points because once you’ve seen the negative, the negatives can outweigh the positives.

Many students were dissatisfied with standardised and impersonal electronic feedback:


You know that twenty other people have got the same sort of comment.

A lot of the comments have been copied and pasted onto the feedback sheet... it’s just so generic.

They wanted to be known by their tutors and to get to know them, and they focused these expectations on feedback:

They don’t know you as a person. You don’t get to know them.

It was like ‘Who’s Holly?’ It’s that relationship where you’re just a student.

It’s like they say ‘Oh yes, I don’t know who you are. Got too many to remember, don’t really care, I’ll mark you on your assignment’.

Students used quite emotional language about receiving written feedback:

When you hand it in you’re so nervous that you’re going to get it back with all these red marks saying that it’s wrong...

You’d actually dread going to pick up assignments I think - even if you’ve passed, but you’ve got some really negative comments.

Some seemed to lack the courage and confidence to approach tutors for a tutorial:

Sometimes I almost feel like I’m burdening them (the tutors) with asking for a meeting.

Others who did found oral feedback an invaluable resource:

I look back at the first essay I ever wrote and then look at my essays now, I really see how I’ve grown over the course, but I kind of feel that that’s because of me again being proactive and going to see people about the essays.

If you take the initiative and go and ask them why, then they’re quite good, aren’t they? They’ll sit and talk to you about why, but they don’t unless you actually go and see them.

After TESTA

The main outcome of the TESTA process has been a reduction in summative assessment on 15-credit modules from two tasks to one, with more linked two-stage assessments which have a strong formative element. The rationale for this development sprang partly from the conclusion that too much summative assessment had led to heavy workloads and diminished opportunities for oral feedback, discussion, and meaningful dialogic feedback. The incorporation of more multi-stage assessment is deliberately designed to encourage feedback comments to feed forward.


A second, less tangible, output from the TESTA discussion with lecturers has been a renewed focus on student motivation and identity in relation to how feedback nurtures professional and academic qualities, mirroring students’ focus on the ‘whole child’ in the school context. The perception that some feedback was overly negative prompted much discussion among lecturers about the ‘feedback sandwich’ (the positive-negative-positive framing of comments), as well as some discussion about the need for honest feedback, tempered with a gentle tone. Students’ reluctance to accept challenging feedback, and their perception of it as negative, may be part of the problem. Focus group data showed that many students feel exposed by the assessment tasks on the programme, and that they lose confidence and motivation easily in the face of difficult or negative comments.

A third outcome of the TESTA research emerged from discussion with lecturers about standardised ‘cut and paste’ feedback to students. In contrast to the assumption that this was mainly a product of workload (shortcuts enabled by technology), lecturers described a situation where standardised feedback had been encouraged to accommodate perceived external and regulatory scrutiny. This had encouraged a culture of low-risk, cautious feedback, which lacked authenticity and uniqueness in the eyes of students. The programme team have since discussed with their managers the value of standardisation which leads to ‘tick box’ comments, and agreed to write more authentic, unique and relational feedback.

The programme is exploring opportunities for a less centralised return system and has trialled distributing work at different locations. The rationale for these changes is to increase privacy and confidentiality by discouraging queuing, and to bridge the distance between lecturers and students in the feedback process.

Summary of Changes

1. Whole-programme reduction in summative assessment, in parallel with an increase in multi-stage formative assessment.

Rationale: to reduce assessment overload and mark orientation, and to allow feedback to feed forward to the next task.

Anticipated outcomes:
a) more space for explicit oral, dialogic and developmental feedback;
b) feedback has a clear target linked to the next task;
c) greater attention to written feedback by students.

2. Renewed emphasis on nurturing students’ confidence and motivation through balancing the tone and content of feedback.

Rationale: feedback has motivational and emotional consequences for students (Boud, 1995), impacting on their subsequent achievement.

Anticipated outcomes:
a) encouraging feedback leads to better motivation and effort by students;
b) students grow in confidence and enjoy the course more;
c) improvements in tutor-student relationships and tutorial take-up;
d) modelling good feedback practice in the university context helps students to put this into practice in the school context.

3. Reducing standardised ‘tick box’ feedback in favour of specific, unique and authentic feedback.

Rationale: students disregard feedback which sounds the same as their friends’ or similar to feedback on previous assessed work, because it lacks the particularity of their unique piece of work.

Anticipated outcomes:
a) more students attend to their feedback;
b) more lecturers enjoy writing feedback;
c) specific developmental feedback leads to improvements in student achievement.

4. Assessed work returned through alternative channels and in less public locations.

Rationale: students on this large programme collect work as a cohort, which can be a lengthy and impersonal process.

Anticipated outcomes:
a) collection of student work is streamlined;
b) students feel more positive about collecting work;
c) students feel more at ease with seeking tutorial dialogue.

ii) Use of Feedback

Before TESTA

Students report not paying enough attention to their feedback and not really trying to understand what it means, recording a low mean score (3.16) on the AEQ for Use of Feedback. There are 51 occasions on which students receive feedback (36 summative, 15 formative); they receive more than seven hours of oral feedback and nearly 8,000 words of written feedback, yet they claim not to use it. What could be going wrong here?

Firstly, they could be receiving feedback too late to act on it, with the result that they are unable to apply it to the next task. The audit indicated that it took an average of 28 days to return feedback to students, by which time many will have forgotten the task or moved on to new tasks. The timing of assessment at the end of modules exacerbates slow returns. A further factor, particular to the programme, has been the policy of withholding marks and feedback during school placements in order not to upset students who have performed badly.


A second reason may be that there are many varieties of assessment (13 on the audit), randomly sequenced across modules, so that students may find it hard to transfer knowledge from one type to the next. There are credible pedagogic rationales for different varieties of assessment, but the tensions between meeting different learning outcomes and introducing new processes and content simultaneously need to be managed. Students may experience difficulty if new varieties are not supported by formative practice runs, or by logical, linked sequences.

A third plausible reason may relate to how useful students find written feedback, and to the limitations inherent in this transmission model of communication. Issues about how specific, detailed and developmental the feedback is, and about how transferable it is across modules and tutors, may interfere with its use. It is not clear that coordinated strategies are in place across the programme to actively engage students in using their feedback, for example through reflective processes or through showing how feedback has been used in the next submitted assessment.

The mark-oriented culture among students may contribute to their eyes straying to the mark rather than digesting the comments. Deliberate strategies, such as providing feedback which must be responded to before the mark is released, may be required for students to pay attention to the feedback. Another successful, if ethically daring, practice on a programme at another university has been to make online feedback to individuals public to the whole cohort, with a ‘right to reply’.

The administration of assessment returns in bunches may also reduce the impact of particular comments on each task. Students are more likely to engage with and use feedback from individual assessments returned in staggered fashion across the semester. In focus groups, students said that they were disinclined to use feedback after four weeks:

By the time we got our feedback back I’ve kind of gone past that and forgotten what I did.

Assessment tended to be too late to make any effect.

Receiving feedback for multiple tasks in bunches diminished its value:

You have so many back at the same time that it’s just a bit overwhelming really.

If I had feedback from each individual one quicker it would have helped each assignment, rather than getting five back at the same time and you think ‘OK, so that’s on five assignments I haven’t done well on that’.

Students found feedback from tasks which were not linked to other tasks less useful:

It’s really difficult because your assignments are so detached from the next one you do for that subject.

It jumps all over the place. So you have to be really dedicated to trawl all the way back through your previous assignments and actually make the links and connections.

Some feedback helped students to make the links, but other feedback didn’t:

Some people mark them and give you a lot of good critical feedback and you can pick it up and use it for another one, whereas some people just write things that are really brief and not relevant and you can’t really use it in any other essay.

Students felt that regular formative feedback on smaller tasks would be more useful:

If each week you were given a little bit of focus that would be more useful for me than these big bulky 3,000 or 4,000 word essays at the end of a module and it all depends on it to pass.

There needs to be some more regular feedback, rather than one assignment.

They found oral feedback more useful than written:

The actual written feedback on the essays, I don’t think that in itself has been overly helpful. It’s actually going to someone who actually goes through what you’ve done.

Oral is much better. I’d much rather sit down and get into a discussion with someone because then if you don’t understand something you can still ask why or say you don’t understand.

I did get verbal feedback that was really helpful and I could apply that for the rest of my assessments and it helped me to do a lot better.

They rated peer feedback highly, whether formal or informal:

X and I quite frequently exchanged our dissertations and would read them through and comment on our work and feed back to each other and we found that really useful.

Peer assessment really does help.

We used peer assessment as in we had to do a presentation as a group and then our peers asked us questions to assess us and I found that a really useful experience actually. That was actually one of my favourite ways of being assessed.

After TESTA

The programme has directly addressed the four-week return period by instituting three weeks as the norm for returning feedback, and has recently been commended by the Pro Vice-Chancellor (Academic) for reaching and even exceeding this target. This change has been part of a broader university review of feedback return, informed partly by TESTA findings. During school placements, students may now collect work in school holidays, and generic cohort feedback is published on the Learning Network within two to three weeks of submission to ensure that all students receive some feedback, even though they may not be able to collect it during the placement because of travel distances. The University’s plans to move to electronic submission of assessment will simplify the return of assessment during placement.

A significant development arising directly from TESTA has been a programme-wide review of the sequencing of assessment, both to reduce bottlenecks in the system and to sequence varieties so that feedback feeds forward to the next assessment task of a similar type. Mapping assessment sequences has also helped the programme to clarify types of assessment in relation to learning outcomes, leading to a more aligned and balanced assessment diet.

Summary of Changes

1. Students receive feedback in three weeks or less.

Rationale: to ensure that students receive feedback in time to act on it, and while the task is reasonably fresh in their minds.

Anticipated outcomes:
(a) students pay more attention to their feedback;
(b) they are motivated to use feedback for the next task;
(c) they receive feedback before working on the next task.

2. Programme-wide mapping of the timing of assessments.

Rationale: to reduce bunching of assessment returns so that students pay more attention to feedback comments on individual assessments.

Anticipated outcomes:
(a) students pay more attention to each piece of feedback;
(b) they have time and space to reflect on and use feedback for their next task.

3. Programme-wide mapping of assessment variety.

Rationale: students need planned cycles of assessment which lead to mastery of skills in particular assessment types.

Anticipated outcomes:
(a) sequences help students to master various types of assessment;
(b) student confidence and achievement improve;
(c) students use feedback from the cycles to improve in their next task.

4. Provision of generic online feedback during school placements.

Rationale: delaying feedback to protect students from disappointment while they are on school placement weakens their use of feedback.

Anticipated outcomes:
(a) students will get feedback in better time to remember the task and attend to it;
(b) receiving feedback on teaching practice will help students bridge the theory-practice divide.


iii) Clear Goals and Standards

Before TESTA

Students on the BA Primary recorded a low score for Clear Goals and Standards (AEQ = 2.88), which indicates that they are not clear about expectations for assessed work. The programme has good documentation with clear criteria, specific to modules and even to tasks. Tutors are diligent in explaining the nature of assessment tasks in lead lectures, and they make slots available for tutorial consultations. There is a good number of formative tasks (15), a key factor in students coming to know what ‘good’ is. Self and peer assessment are practised to a limited extent on some modules, contributing to the development of students’ awareness of standards. Similarly, some module tutors show students exemplars of good practice from previous years. Moderation practices are strongly embedded so as to assure common standards among lecturers. In spite of these positive practices, students give the programme the thumbs down for clarifying goals and standards. What might underlie their claim that they do not understand what is expected of them?

Firstly, students may not be able to make sense of written criteria and guidelines because of their language and format. Interpreting guidelines is a subtle skill, informed by cycles of practice and feedback, and by active engagement with criteria. Teacher-directed explanations in lead lectures may also baffle some students if examples or active engagement strategies are not used, and the timing of these explanations may be quite distant from submission. Strategies such as re-writing criteria in their own words, marking mock assignments using the criteria, or guided self and peer review are helpful. These practices may be happening at module level, but they are not currently programme-wide.

The high variety of tasks (n=13), randomly sequenced across modules, may be a factor in students’ confusion about goals and standards. New assessment types are more difficult to describe in criteria, and it is harder to reach a shared understanding of them among tutors for whom they may also be novel. They are usually more complex than essays and exams, and more difficult to assess reliably (Knight and Yorke 2003).

Audit data show that the BA Primary has 15 formative assessment tasks, which should help students to feel clear about expectations, but perhaps not all come with feedback or are compulsory. Optional formative assessment has low take-up because of the strong mark-oriented culture of most students. This is a frustrating and complex problem, because it straddles the tension between independent learning and mandating tasks; while students themselves may give mental assent to formative assessment, they rate it lower in priority than summative assessment. Changing the culture may require linked formative-summative cycles, so that formative work feeds forward in ways which enable students to meet both their instrumental and their learning goals.

The programme adopts good mechanisms for standardising marking between tutors, but there may be limits to their effectiveness because of the size of the programme and the number of part-time tutors. Moderation itself is not a ‘silver bullet’ for ensuring common standards, as it is a social process requiring interpretation among groups of people who might have different perceptions of quality (Bloxham 2009). Most humanities, arts and social science programmes draw criticism from students, to a greater or lesser degree, for variations in marking standards.

In focus groups, students described written criteria for assessment tasks as helpful:

Marking criteria helps guide you towards what’s good.

In every module handbook we have had guidelines. Basically we’ve been given the marking criteria, which can be good in some ways. You can kind of look back and say ‘Right, I’ve done this and that’.

But they recognised the limitations of written criteria:

There is a check list and from what I got I thought I was doing everything on it, but it’s the depth in which I was doing it that was wrong.

They welcomed oral clarification about tasks in lectures:

Usually we have a lecture just after they’ve given an assignment out, when they go through it, which is always helpful.

They felt that they bore some responsibility for hazy conceptions of tasks:

Maybe it’s our fault as well – but the tutor assumes that you’ve read the handbook so you know exactly when the hand-in date is and what you’re doing.

The handbook does lay out each week what we’re doing in the seminars and if we made more use of it then we’d probably know.

Students valued online forums for clarifying goals and standards:

There have been forums for the assignments and that’s come more in place when we’ve got a new Learning Network over the past year or so. It’s been really good in terms of students can put up questions so everyone can access it.

It goes onto a forum and that’s really good. That was really from the start, so I think that works really well. It depends how proactive students are at asking.

One-to-one lecturer clarifications were very useful to them, but required some initiative and effort:

There is quite a bit sometimes in the criteria in the handbook, but it’s very much reliant on you being proactive, which isn’t a bad thing. To go off and ask questions, because everybody will have their own individual questions in going to see a lecturer.

It’s great that you can go and see the lecturers in your own time ... they’re very open – well I found anyway. You can go to them and get further guidance on it. But that really requires you to be proactive.

It was only through talking to my tutor that I really felt I understood. She talked it through properly and I was like ‘Oh, I get it’.

In spite of these examples, students described not understanding the distinction between different levels of achievement:

You think ‘That’s great, knowing good and excellent, but what makes it good and what makes it excellent?’

Some did not know what was expected, particularly the first time or with more unusual tasks:

It wasn’t clear at all what they expected you to do.

We’d never had to do anything like that before. We had to do that book review didn’t we, and I remember leaving home and being at uni and I just didn’t have a clue.

If I hadn’t done a portfolio before I wouldn’t have had a clue what to put in it. I don’t feel like our tutor really told us much of what to put in.

They could not predict their grades:

I have no idea what grade I’m going to get. Up until I actually get it in my hand and have a look ‘Passed’, that’s when the relief sets in. But I think before that there’s no way of knowing really what you’ve got.

They asked for a more uniform approach to providing examples of past work to see what was required:

If it’s an assignment which in their opinion is quite hard then they might have brought in past years’ but it’s not that often.

They’ve got the portal there, so why not put all of this (exemplars) on there and actually help us pass and give us the levels and the marking criteria and everything on the subject information page so we can access it at any time.

With general assignments there are no examples or anything about what we should be doing.

Some felt that lecturers varied in their standards and expectations:

Everyone has different expectations. Like the lecturers have different expectations. If you do the same thing as someone else and it’s marked by somebody different.

We’ve seen lectures about what critical justification is, but to one lecturer it’s one thing ‘This is what I think. This person agrees with me. This person doesn’t’, but to another lecturer it’s something completely different.

After TESTA

The programme’s decision to increase formative and multi-stage assessments while reducing summative tasks (as described in (i) and (ii)) should help students to clarify goals and standards. Low-risk dry runs with feedback help students to improve their grasp of high-stakes summative tasks. The BA Primary team has decided to retain the current variety of assessment to ensure that students meet the range of learning outcomes for a career in primary teaching, but better sequencing and timing of tasks will ensure more logical flow and greater mastery of these different tasks. The reduction in summative assessment across the whole programme is likely to create more opportunities for oral and dialogic feedback to clarify standards and expectations. The programme has also committed to more widespread demonstration of examples of good practice.

Summary of Changes

1. Increase in formative and linked multi-stage tasks.

Rationale: to ensure that students practise tasks and attend to feedback, in order to clarify expectations before being assessed for marks.

Anticipated outcomes:
(a) students use formative feedback to clarify goals and standards;
(b) students get more than one opportunity to master different types of assessment;
(c) students grow in confidence and knowledge about what is expected.

2. Better sequencing of varieties of assessment tasks.

Rationale: to improve students’ mastery of different assessment processes and types through cycles of practice and feedback on similar assessment types.

Anticipated outcomes:
(a) students will become clearer about creative and different forms of assessment through cycles of practice and feedback.

3. Reduction in summative assessment points.

Rationale: to create space for lecturers and students to engage in dialogue about assessment goals and standards.

Anticipated outcomes:
(a) students take up opportunities to clarify expectations orally with lecturers;
(b) lecturers have more availability for oral tutorial time and for in-class clarifications.

4. More widespread use of exemplars.

Rationale: to provide a transparent means for students to learn from past examples and improve their grasp of expectations.

Anticipated outcomes:
(a) students use exemplars as a touchstone for clarifying assessment requirements;
(b) students who find criteria difficult to understand are helped by seeing the outworking of criteria in an assessment artefact.

Next steps

The TESTA case study has mapped an evidence-informed process of assessment change on a whole programme, attempting to contextualise change on the broader canvas of a large, complex and highly successful professional programme. Research literature on the conditions of assessment which support learning (Black and Wiliam 1998; Boud 2000; Knight and Yorke 2003; Gibbs and Simpson 2004) has undergirded the process. Empirical data from the particular programme and a cohort of final year students have contributed to the fine-grained texture and authenticity of the evidence-to-intervention process. So far, on process measures, the verdict on using TESTA as a change tool and as evidence for periodic review is positive:

Involvement in TESTA has prompted significant, enjoyable discussion between tutors, and with you and your team - as a catalyst for change. Our sequence map is evolving, our conversations continue... and we've got lots of support for massive changes in the new version of the degree (Programme Leader, BA Primary).

The measurable impact of the TESTA process will come in the form of improved student experience and learning, which will be assessed in the medium term by repeating the TESTA methodology. What this case study has shown is the process of giving teams robust, independent evidence to discuss, of the kind the TESTA methodology provides, and its catalytic effect on changing assessment patterns.

Biographies

Dr Tansy Jessop is the Project Leader of TESTA, and a Senior Fellow at the University of Winchester, where she works in the Learning and Teaching Development Unit.

Penny Lawrence is a Research Officer in the Faculty of Education, Social Care and Health, an Early Years specialist, film maker, and EdD student at Winchester.

Dr Helen Clarke is Head of the BA Primary Education Programme, Chair of the Faculty Learning and Teaching Committee, and a Science Educator.


References

Black, P. & Wiliam, D. (1998) ‘Assessment and Classroom Learning’, Assessment in Education: Principles, Policy and Practice, 5(1): 7-74.

Bloxham, S. (2009) ‘Marking and moderation in the UK: false assumptions and wasted resources’, Assessment & Evaluation in Higher Education, 34(2): 209-220.

Boud, D. (1995) ‘Assessment and Learning: Complementary or Contradictory?’, in Knight, P. T. (ed.) Assessment for Learning in Higher Education, 35-48. Birmingham: Routledge Falmer.

Boud, D. (2000) ‘Sustainable Assessment: Rethinking assessment for the learning society’, Studies in Continuing Education, 22(2): 151-167.

Gibbs, G. & Simpson, C. (2004) ‘Conditions under which assessment supports students’ learning’, Learning and Teaching in Higher Education, 1(1): 3-31.

Gibbs, G. & Dunbar-Goddet, H. (2007) The effects of programme assessment environments on student learning. Higher Education Academy. http://www.heacademy.ac.uk/assets/York/documents/ourwork/research/gibbs_0506.pdf

Gibbs, G. & Dunbar-Goddet, H. (2009) ‘Characterising programme-level assessment environments that support learning’, Assessment & Evaluation in Higher Education, 34(4): 481-489.

Knight, P. T. & Yorke, M. (2003) Assessment, Learning and Employability. Maidenhead: Open University Press.

Nicol, D. J. & Macfarlane-Dick, D. (2006) ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice’, Studies in Higher Education, 31(2): 199-218.

Nicol, D. (2010) ‘From monologue to dialogue: improving written feedback processes in mass higher education’, Assessment & Evaluation in Higher Education, 35(5): 501-517.

TESTA (2009-12) Transforming the Experience of Students through Assessment. Higher Education Academy National Teaching Fellowship Project. www.testa.ac.uk (accessed 15 April 2011).