Engaging with student writing & providing meaningful feedback

A Chapter Sampler

Routledge . Taylor & Francis

www.routledge.com

Contents


20% Discount Available

You can enjoy a 20% discount across our entire range of Routledge books. Simply add the discount code F023 at the checkout.

Please note: This discount code cannot be combined with any other discount or offer and is only valid on print titles purchased directly from www.routledge.com. Valid until 31st October 2019.

1. Chapter 3: Academics’ experiences of engaging with student texts
From: Academics Engaging with Student Writing, by Jackie Tuck

2. Chapter 4: Using assessment and feedback to empower students and enhance their learning
From: Innovative Assessment in Higher Education, edited by Cordelia Bryan and Karen Clegg

3. Chapter 9: The relational dimension of feedback
From: Designing Effective Feedback Processes in Higher Education, by Naomi Winstone and David Carless


Academics’ experiences of engaging with student texts

From: Academics Engaging with Student Writing: Working at the Higher Education Textface, 1st Edition

By Jackie Tuck

3 Academics’ experiences of engaging with student texts

What is work like for academic teachers engaging with undergraduate writing in the disciplines? And what are the consequences for them and for students? Analysis of multiple sources of data gathered in the study revealed the complexity entailed in the routine practices of participants and the considerable variety both between and within the practices of individuals. Such detail and variation are rarely documented for a number of reasons. Firstly, as discussed in Chapter 1, they are simply taken for granted as part of the everyday labour of academia – too familiar to be of note even to those who spend a large proportion of their time doing this work. Secondly, many of the routine practices involved in work with student writing and writers are relatively invisible empirically and institutionally for reasons which will be explored in the chapter. Thirdly, academic teachers’ practices around student writing, however expansive, are often ephemeral, and the permanent record of such activities, for example, in the form of marked assignments and feedback communications, is frequently (though not always) terse in comparison. This is important because it is the textual trace of such activity which tends to ‘count’ both in institutional terms and also for the student presented with written feedback on their writing, particularly in contexts where there are limited opportunities for follow-up discussion. As noted in Chapter 2, research on feedback is often confined to a study of such textual traces, that is, the comments themselves, and even more often to students’ experience of reading them.

Any exploration of a social practice from the ‘inside’ outwards quickly reveals that there are no neat boundaries around social (including pedagogical) activities and practices – the blurring of work and personal life has already been noted. What ‘counts’ as work with student writing was not the same for everyone in the study, and could not be taken for granted a priori, but had to be explored as open-mindedly as possible. This point can be illustrated using an example from the case study of James (NU1),1 whose talk focused throughout two interviews on his feedback practices, and within this on features of student writing such as spelling, but also ‘grammar’,2 paragraphing and punctuation. It is only when we are winding up the first interview that James refers in passing to developing new disciplinary and academic vocabulary in students, which he does through his deliberate ways of talking in lectures:

[T]he way you introduce new terms to [students], it’s all about trying to improve their literary skill . . . that’s one of the main points of lectures is not so much the content but how it’s delivered . . . [It] reflects obviously their oral work first and foremost . . . but also their written work . . . so I’m not informal in my language I try and introduce as many specialist terms as I can.

This aspect of James’s practice had not emerged when, earlier in the same interview, I had asked an open question (asked in all first interviews) about what he did as part of face-to-face teaching which related to student writing. This example illustrates the fact that some practices around writing seem to surface more readily at moments when attention is allowed to wander away from ‘writing’ – in this case because the interview was coming to an end, allowing for ‘off-topic’ chat. It also shows that some practices which impinge on student writing may not initially be thought worthy of mention, even by those doing them, thus serving to illuminate the benefits of an in-depth and open-ended approach.

Profile: James

James is a senior lecturer in Human Geography at NU1. He is in his fifties and a senior member of departmental staff, having been in his current post for about twenty years. He estimates that about half his working time is spent on teaching undergraduates, with the other half on postgraduate supervision, administration and research. However, his main priority is teaching: “I do tend to do my curricular work and when that’s concluded I turn my attentions to scholarship.”

James believes that the discipline of Geography has been associated in recent years with a range of genres of academic writing – including those which dovetail well with an “employability” agenda (such as the writing of reports and assessments) and those which are influenced by what James describes as “the postmodern turn” in human geography, including personal and reflective writing. Nevertheless, James frequently uses the word ‘essay’ rather than ‘assignment’ or other descriptor. Anonymous marking is the norm in James’s institution, and he believes “it works all right for essays and tests and fairly routine assessments [on] the [taught] modules.” However, a number of written assignments are set which are exceptions to this rule: dissertations, fieldwork, group work, presentations and reflective diaries.

James identifies strongly with the teaching-led character of his institution and is positive about practices in his department around student writing. He is generally in favour of institutional policies which standardise practice with regard to written assessments, such as anonymous script marking, guidelines about referencing or rules for contact time around dissertations. James cares about language and values its skilled use. He uses quite emotionally charged vocabulary to describe poor writing (“terrible”; “horrible”; “drivel”). He feels that standards in students’ writing are declining, for example, as a result of new technologies. He finds “grammatical or phraseology problems . . . not understanding what a paragraph is” particularly irritating. But James does, albeit reluctantly, spend time on these matters: “I quite often have to teach them about paragraphs . . . I always comment in writing on a piece of work in first year if there are spelling or grammatical issues.” On the other hand, despite his explicit interest in language, and his consistent willingness to put time in to address language issues in his students’ writing, James makes a distinction between his work around student writing as a geographer and the focus on writing he’d expect in English: “one of the issues we’re very conscious of here is we’re not an English department.”

Even setting aside the methodological complexities of deciding what ‘counts’ as work with student writing for participants and researcher, there are challenges in placing boundaries which are inherent in the nature of any phenomenon when viewed as a social practice. When viewed through the specifics of a particular situation and text, it becomes clear that it is not easy to decide where the teacher’s work with a student text begins and ends, either temporally or spatially. Under the influence of ethnographically oriented studies and of research informed by Actor Network Theory and Activity Theory, writing is now commonly recognised as a networked activity which almost always involves more than one social participant, real and/or textually invoked, bringing complexity to the ‘who’ of writing (e.g., Manion and Selfe, 2012; Lillis and Curry, 2010; Ivanic and Satchwell, 2007). This is the case even in academic contexts where there is a presumption of sole and independent authorship – manifest, for example, in the requirement to declare that a thesis or article is all one’s own work. Similarly, work focusing on text trajectories has shown that there is no way of clearly delimiting the ‘when’ or ‘where’ of what is happening when texts are produced and used (Blommaert, 2005). So the aim of the study was to cast the net as widely as possible: data gathered threw light on a broad range of activities which participants felt were connected – even indirectly – with student writing: this incorporated setting, reading, responding to and assessing writing but went beyond an interaction with texts to include a broad sweep of pedagogic activity, and conversations with colleagues and students, within and outside classrooms and offices.

At the same time, the study sought to linger on details and be alert to what was most significant for participants themselves. In this respect, study participants were key in shaping one of the central empirical foci of the study: the hidden ‘core’ of work engaging individually with student texts for assessment: reading, judging, making marks on scripts, awarding grades, giving feedback. These core activities were hidden, in part, because they are generally conducted alone and/or silently and are therefore not amenable to observation either by researchers or by colleagues and managers. The opportunity to be asked about practices which are felt to be in normal circumstances invisible to others was probably a factor in bringing participants back to this aspect of work at the ‘textface’: they may have been motivated to share experiences of work which they felt others (colleagues, managers and students) may not have noticed. This chapter, therefore, focuses on this relatively unexplored core of academic teachers’ work as they engage with students’ written texts but without viewing such solitary and even lonely practices in isolation from the institutional contexts in which they take place. Subsequent chapters will progressively open up the empirical lens to look in greater depth at the wider pedagogic activities around student writing evidenced in the study. Another dimension of practice addressed here is that of emotion. Perhaps as a consequence of a focus on deep-rooted questions of power and identity, a number of studies drawing on an academic literacies paradigm have addressed emotion as an important aspect of academic writing for students, for example, Rai (2009) and Baker and Cremin (2016). However, this dimension has been more fully explored in relation to students engaged in writing at university and to academics writing for publication (Lillis and Curry, 2010) than in relation to teachers as they engage with student writing, which is the focus here.

Marking student writing: hidden labour

Assessment, feedback giving and marking

An overriding theme emerging in academic teachers’ perspectives was a sense of practice around student texts as “labour-intensive” (Tom, OBU) and entailing “a lot of time” (Dan, RGU), effort and “psychological energy” (Diane, NU2; Robert, P92). Decisions about practice were not merely isolated pedagogical choices but made in the context of participants’ universities as workplaces: the demands placed on them as employees and the time, money and recognition allocated (or not) to particular activities. A good example of how this perspective emerged is provided by the term ‘marking’, which is relatively scant in higher education research literature but used extensively by academic teachers in the study (alongside terms which also feature heavily in policy and research, such as ‘feedback’ and ‘assessment’). Participants used the term ‘marking’ to refer to the practice of reading and annotating (making marks on) students’ scripts, awarding a grade (mark) and summarising feedback for the student; it also referred (as it does generally in the vocabulary of teachers) to this process as a task and to the physical scripts which are central to the task, as in the phrase “a pile of marking” (Diane, NU2). In interviews across the study, the word ‘marking’ in this sense frequently collocated with the word ‘pile’ and sometimes with the word ‘batch’. Importantly, this combination of shades of meaning for the word ‘marking’ highlights that the feedback relationship is not, from a teacher perspective, usually a ‘teacher/learner’ dyad, as is often the focus, but one in which the teacher responds to several – or many – students’ texts, a perspective rare in published empirical research to date. The study presented here offers a different orientation: the emergence of ‘marking’ in the words and worlds of participants highlights teachers’ experience of the assessment of student writing as labour (see also Tuck, 2012).

Profile: Sue

Sue, in her fifties, teaches for Distance Learning University (DLU). She works part time from her cottage home in a picturesque English coastal village, tutoring Science and Environmental Science at levels one and two. Her main responsibilities are to deliver monthly face-to-face group tutorials; to support students via online forum, email and telephone; and to mark coursework assignments, which all contribute to formal assessment outcomes. Sue has worked in the role for a number of years: she is viewed by colleagues and views herself as highly experienced. She also acts as a peer moderator. Her academic field is Geology, but her current employment does not involve research.

Sue works with groups of fifteen to twenty-five undergraduates (though she may have up to three groups at any one time). In this institutional context, Sue and an individual student may never meet since attendance at tutorials is optional. Hence, submission of assignments and their return to the student with grade and commentary is a central part of the communicative exchange between teacher and learner. At DLU, coursework is not marked anonymously: Sue knows the name of the student writer and will mark a series of assignments by the same person. She is therefore in a position to compare an assignment with others completed before it and to comment on students’ response to feedback and progress through the course.

Sue is positive about her job but also finds aspects of it time-consuming and frustrating. A large part of this frustration seems to be linked for Sue with the experience of marking assignments where students’ written skills are not at an appropriate level:

[T]hey walk straight into level two and they can’t write at all and suddenly I’m a foundation tutor on a level two course, and there’s only so many hours you can devote to one student; if they say 45 minutes to mark an assignment that’s ridiculous.

She sometimes feels taken for granted by students, sometimes by the university. Her role involves no direct input to the curriculum, materials or structure of the programme or in setting assessments. She therefore works with the guidance provided by the university’s ‘central academics’, many of whom Sue may only rarely, if ever, meet in person. She frequently distances herself from these colleagues, mainly through the use of personal pronouns ‘we/us’ and ‘they/them’:

I love my job . . . I love being able to do what I should be doing . . . but I object when people . . . have a go at me when I’m feeling I can’t do more . . . [tutors should spend] a reasonable time that you set not them.

What do academics do when they are marking student texts?

The answer to this question may be obvious, at one level, to any reader of this book who works routinely with undergraduates who have to write as part of assessment in their discipline. We think we know what we do when we are marking. However, by building up a ‘rich picture’ through analysis of interviews, texts and in some cases audio self-recordings, the study uncovered a wide range of activities directly associated with script marking, many of which go beyond the immediately obvious stages of collecting and reading a student’s script, writing comments, assigning a grade and returning the script. The accumulation of detail afforded by the ethnographic approach revealed the blurred boundaries between the notionally discrete activities of reading, writing and grading. The sheer complexity of such practices can be illustrated using descriptive summary notes from one detailed case study, that of Sue (DLU), working from home, mainly at her own PC in an upstairs bedroom.

Figure 3.1 Sue’s workstation

Preparation for a ‘batch of marking’: setting up the track changes system to suit the particular purposes of marking; setting up keyboard shortcuts to be able to cut and paste standardised feedback grids into students’ assignments; checking online forums and course website for any errata published or specific advice on dealing with tricky aspects of the assignment; emailing the student group to give advice about meeting the deadline and handling the electronic system; letting students know which assignments have been received; checking students have made online postings which form part of submitted assignments; checking to make sure they have not shared original material with other students; reading an early assignment “to get a feel for the mark scheme and a feel for the type of answer that the students may give”.

Preparing for work with an individual student assignment: checking the website a student plans to use for a project to see if it is suitable and finding others to recommend if not; checking a student has submitted all parts of the assignment and in the correct format; cutting and pasting student work all into one document.

The marking process itself: moving amongst several different documents on screen (spreadsheet of marks, student assignment, feedback form, tutor notes and assessment criteria and electronic course materials) and also paper-based documents (assignment guidelines and course books); dealing with occasional technical hitches (e.g., in one case, a problem with a shortcut key and accidental deletion).

Reading the text: variously reading and rereading sections of the student’s text to try to work out what he or she is trying to say; going over mathematical problems to find out where a student’s calculation has gone wrong or a set of data to identify why it doesn’t seem right; checking a student’s source in course materials to confirm a suspicion of ‘cutting and pasting’ by the student.

Writing script comments: turning track changes off and on to avoid crowding the student’s text with unhelpful markings; correcting student’s formatting thrown out by the use of track changes; typing, sometimes undoing and retyping, comments in the student’s text and in grid boxes she pastes in; cutting and pasting exemplars into student’s text; composing and typing own exemplar text into a student’s text where not available in tutor notes or not liked by the tutor; highlighting ‘errors’; making changes to students’ sentences and paragraphs to improve the ‘written expression’, either at the level of ‘grammar’, vocabulary or formatting; placing ticks on the student’s text. Sue’s usual practice is to distinguish her own feedback commentary from the words supplied by ‘central academics’, through the use of different fonts or colours on student scripts. In the extract in Figure 3.2 from a first-year assignment, her own comment is in red (shown here as the first two sentences, in lighter font) and text provided by the central course team is in blue (shown here as the last two sentences, in lighter font).

Writing assessment feedback summary form: looking back over the student’s assignment and at her own comments and ticks; looking for things to ‘praise’; typing and retyping; spellchecking her own words; looking at previous assignment feedback summaries to see if student had responded to earlier advice; checking website to see if student has collected previous assignment; “going on to the student website so that I could see what resources were on offer”.

Grading the assignment: reading and rereading passages (sometimes “over and over again”) to decide if they demonstrate the learning outcomes; writing the final mark; reviewing grades given, especially to check if she has been “mean”. Sue waits before settling finally on an assignment grade until she has rechecked online forums to see if there has been any relevant discussion about how parts of the assignment should be marked and until she has satisfied herself that there has been no collusion amongst students. Sue has two weeks between receiving the scripts and her deadline for returning them.

Much of the detail here would differ for different participants in the study; however, this complex mixture of tasks, many of which are invisible to others (students, colleagues and the institution), includes many activities typical of the textual practices revealed in the study around student writing.

The influence of institutional context

In this list of activities, the focus is on the ‘micro’ detail of textual practice. However, these are situated details which strongly reflect the institutional context in which Sue is working, her role within that context and how she understands that role. For example, because she is a part-time distance learning tutor, she has not set the assignment herself and hence spends time looking out for errata or other guidance. She also adopts a practice of swapping between font colours to distinguish between her own and the institutional voice in feedback, partly because she regards it as “only professional” not to pass off others’ words as her own, and partly because there are times when she finds the model text provided by central course teams inadequate, especially where the Geology parts of the courses are concerned, and therefore edits it, carefully owning and disowning different parts of the text through different colours and fonts (see Figure 3.2). At times Sue also talks about the advantages of being able to relinquish authority for the wording of such texts:

If you want to take issue with that [feedback text taken from course model answers] I can just say ‘nothing to do with me’ . . . if it’s in red I put my hands up and say shoot me!

Figure 3.2 Sue’s script comments

The specific assessment regime in her institution means that Sue has the opportunity to check previous assignments by the same student, so she sometimes makes longitudinal comments on the student’s progress from one assignment to the next, for example, “You have successfully resolved the issues I made on your first assignment” and “You have greatly improved your written work and explained yourself much more clearly, well done” – though it is her own choice to make use of this affordance rather than an obligation. The role of institutional context and individual role within that context, as it permeates the minute-to-minute practices of academics with students’ texts, emerged from the study as it brought to light huge variation from one participant to another (even where they were working in the same institution). What is ‘routine’ in some contexts is not ‘routine’ in others (e.g., with other assignments, other student levels and cohorts, in other departments and disciplines, etc.) or for other individuals.

The boxed text that follows shows a series of short portraits of marking routines, drawn from case studies of individuals based in very different institutional and disciplinary contexts, assessing specific written assignments. Though these glimpses are less detailed than the activities listed for Sue, they nevertheless show considerable complexity in the practices engaged in by these academic teachers (and this is without even commenting on the thinking involved):

Marking Routines

Emma: RGU, Computer Science, second-year3 technical report

Since Emma is not responsible for setting or introducing this assignment to students, her first ‘official’ engagement with student writing for this unit is when fifty technical reports are sent to her electronically for assessment. Before starting to mark individual scripts, Emma checks an ‘automarker’ to see if the computer programs the students were tasked with producing have worked at a technical level. She then opens a couple of scripts at random to get a “feeling for what the majority thought they needed to do”. She then reads each one as a .pdf document on screen, with her marking criteria to hand (criteria which she explicitly asked the unit leader to draft for the purpose but which are not shared with students), and types one or two paragraphs of feedback directly into the system; no comments are made on the student’s text itself. As she works, she checks students’ discussion of technical issues against the automarker report to see if they have commented appropriately on any problems. In one iteration of this routine (i.e., in one academic year), Emma received the scripts very late (two months after the course finish and weeks after the official date for return to students) due to factors beyond her control and had a very short time to turn them around, so she worked intensively on them in the office to get them back to students. At other times she marks at home.

Tom: OBU, Law, second-year essay

Tom’s usual routine is to set an assignment at the end of a tutorial or seminar; students submit by 5 p.m. the day before the following week’s session: there are usually eight, or fewer, scripts in a batch. He collects the assignments as email attachments, usually loading them onto a memory stick to take home; occasionally he prefers to print them off and mark by hand. If marking on screen, he uses “track changes” to “correct grammar and the like”, and the “insert comments” function to annotate the text. Tom sprinkles comments in the margins, but there is no formal paperwork or feedback sheet to complete. Nor is there any reference to “assessment criteria”: the mark, expressed on an alphabetical scale, e.g., B+, C–, represents a global judgment. Students’ names are on the assignments. Typically, Tom spends the evening assessing and giving feedback on the eight assignments for the relevant class, returning them to students by email, usually before the next day’s session which will cover the same topic. Tom feels that there is no time in class to discuss students’ actual scripts, and these are normally put to one side in favour of more general discussion. However, misconceptions he identified during marking will be raised.

Diane: NU2, Sports Science, first-year essay

After reading and adding marginal comments and annotations to thirty assignments, marked a few at a time, Diane adopts detailed procedures to “objectify the process” of awarding grades. First, she jots notes down on a separate piece of paper about how an assignment rates against each assessment criterion; criteria have been developed alongside curriculum and have been shared in advance with students. These notes are then converted into structured written feedback written by hand on a top sheet and used to decide a level for each criterion. Levels are then fed into a spreadsheet devised by Diane which calculates a final percentage based on different weightings. Diane passes a couple of the earliest-marked assignments to a colleague, informally, to check if marks seem appropriate. Before deciding on grades, Diane again goes through and checks that she is happy and that she has awarded similar grades to assignments of similar quality, partly to counter the effect of marking a large number of assignments in smaller batches of three or five: “You’ve got to go back and check and make sure that you’re not feeling a bit better today and not had a rough day that’s going to affect your judgment.” This work is usually done at her kitchen table, at home in the evening, “when the children have gone to bed”. A sample of the assignments is then handed to another colleague for formal moderation before Diane assigns a grade and returns the assignment to the student. The final grade is not awarded until the external examiner has viewed 10 percent of the scripts.

In these brief thumbnail sketches of routine activities with student texts, we can see clearly the influence of institutional context in shaping particular practices and decisions. For example, the prominent role played by assessment criteria, official paperwork and moderation procedures in Diane’s context contrasts sharply with the informal framing of Tom’s routine, as does the presumption of anonymity in Diane’s context when viewed alongside the relative intimacy of Tom’s context, where there are fewer students, they are known personally and names appear on scripts. Diane’s way of working is based on an assumption of predetermined assessment criteria; Emma has to approach a colleague to generate these to help her mark assignments, and these are for her reference only, not shared with students: these different practices reflect the different emphasis given in each institution to transparency. Timescales for the work also vary hugely and can fluctuate for individuals, sometimes being distinctly suboptimal as Emma’s example shows.


Common threads: (lack of) time, space and recognition

Amidst this complex and varied picture, strong themes in common connected participants across the study. Unsurprisingly, one of the key themes emerging in the experiences of academic teacher participants was time. Detailed ethnographic descriptions such as that of Sue’s textual practices go part of the way to explaining why this work is experienced as taking up a lot of time, and this perception was indeed echoed strongly by all but one of fourteen participants based in diverse institutions. Practices often took shape amidst conflicts over the ways participants should be spending precious time; dilemmas were heightened when it came to work around student writing because this was felt to be a particularly time-consuming aspect of academic work. As Emma (RGU) put it when describing the workload associated with students’ written work across the academic year: “I only have peaks, not valleys . . . sometimes very steep”. The challenges of finding time emerged particularly strongly in relation to the more solitary aspects of work around student writing. Deborah (P92U) associates painstaking work with students’ texts with “arts subjects” including her own discipline of History, as opposed to subjects such as Engineering:

To prepare students adequately to write well, which includes giving detailed thoughtful feedback on what they have written, takes a lot of time . . . you need to spend a lot of time looking in detail at how students have expressed themselves.

However, the experience of the “hugely time-consuming” (Deborah, P92U) nature of engagement with students’ written texts was echoed by many, if not all, participants, irrespective of discipline or type of institution. This is summed up by Sports Science specialist Diane (NU2): “that’s the thing about marking, it takes for ever . . . there’s so much of it.”

Participants in diverse contexts expressed a belief that there was insufficient time allocated for this work.4 For example, Sue at DLU nearly always exceeded what she held to be the official time allocation of forty-five minutes per (2,000-word) script and reported occasionally taking up to three hours to mark one assignment. By contrast, in NU2, a face-to-face institution with larger student-teacher ratios for assessment, Paul was allocated twenty minutes to mark a 3,000-word assignment. Despite these different models of ‘delivery’ and hugely different timings, both Sue and Paul experienced the time allocation as inadequate for the task in hand. For example, commenting on the twenty-minute allocation, Paul said: “I don’t think I could read the three thousand words in twenty minutes to do it credit really, to mark it, to consider it, to comment and give feedback.” Sue felt the inadequate time allocation in her institution would have negative consequences for students: “You’re not going to . . . help the student as much as you’d like to; there’s not enough hours.” Thus, while the benchmarks of time allocation for work with students’ texts varied hugely from one participant’s context to another, the lived experience of insufficient time to ‘do credit’ to students’


work was a common thread running across the study. This problem is described by Gibbs (2006:12):

A lecturer may find that she has fifty hours allocated to give one lecture a week to a hundred students for ten weeks . . . but no time at all to mark a . . . hundred exam scripts at the end of the course.

Gibbs goes on to discuss the overall deleterious effect on students in terms of “radical surgery” to the volume of assessment and, in particular, feedback (2006:12). These details also raise basic questions about the nature of the reading which is taking place in these various marking contexts with varying time pressures, given that the time allotted has to accommodate a complex mixture of reading, annotation, feedback comments and grading as described.

Another strong emerging characterisation of participants’ experience of this work with student texts, closely linked with its time-consuming nature, was their sense that it was institutionally invisible. One significant consequence for participants of the time involved in marking, together with its solitary aspect (and the benefits of working uninterrupted), seemed to be that it was often pushed to the margins of the working day and so also to marginal working spaces: participants talked about late-night or holiday-time marking at home (Tom, Dan, Diane, Sue, Angela, Pam and Emma), at the kitchen table (Diane and Pam), in a corner of the family living room (Pam), in bed (Angela), or in hotel rooms (Russell) as well as in the office after the end of ‘normal’ working hours (Mike and Dan) or very early in the morning (Angela). They also referred to snatching time for marking on train journeys (Deborah and Mike) or in cafés (Mike) and when travelling to conferences (Emma and Russell). A field note records that Tom (OBU) commented trenchantly that there was “no chance” he would ever do marking within normal working hours. These routine activities are often hidden at a literal level – because they are conducted “out of sight” of others – at home and out of normal working hours. A frequent issue raised by participants was an accompanying feeling that the work was also “out of mind” in institutional terms. Much of the work which participants talked about for the study was not formally recorded at all and thus was even more ‘invisible’ to audit and evaluation processes in their institutions. Few of the interlinked activities listed for Sue, for example, were directly traceable in feedback texts. Sometimes, written comments underrepresented the totality of feedback precisely because they were time-consuming to produce: Emma comments in respect of one assignment:

We try to get them in [to the assessment system] fairly quickly and give them a lot of oral feedback and type in only a couple of things because they need to keep going otherwise the teaching block is over

Some aspects of the work, such as trawling through Internet sites to find a student’s source, or talking face-to-face with a colleague about a borderline assignment, are barely if at all represented in permanent written form. Deborah


(P92U), for example, describes how she “follows” a student’s “trail” on the Internet to find out where he or she has been getting information about what the term ‘empiricism’ means. In doing so, she identifies some better sites the student could use (since the latter appears to be comfortable with online sources) and recommends them in feedback. As she explains: “it’s not just the words [of feedback], it’s all the stuff that’s gone on behind the words” which constitutes her practices around student writing. Deborah’s activities are designed to increase the usefulness of feedback for students, yet much of this effort is lost to view institutionally.

Issues of resourcing in terms of time had corresponding implications for the extent to which individuals felt their work with student writing was recognised by others. For some, this view was reflected in references to “unpaid work” (Martin, RGU), or being “voluntary workers” (Sue, DLU); in other cases, participants talked about whether this work was valued by the institution in a broader sense. For example, Diane (NU2) comments:

[We] are praised for the effort we put in, in turning student work around in a timely manner with the numbers that we’ve got . . . but as an overall department structure and process . . . they’re not working with us so well in terms of helping us to do this and making it as valued as it could be.

Deborah (P92U) expressed a similar sense that such activity was not recognised at higher levels in her institution:

I don’t think [senior management] even see it . . . they think we ask [students] to write two and a half thousand word essays and . . . then give them minimal feedback . . . that seems to be what they think we do.

Emma (RGU) comments that her efforts to give helpful feedback to student writers are “completely irrelevant” to the university. Sue (DLU) shares a moderator’s response to her feedback on one assignment which reads: “I do hope that the student will bother to read [it] because there is so much on the feedback sheet.” Sue appears to read this comment as a dismissal of her efforts. As course leader, Martin (RGU) experienced difficulties in finding staff to supervise student dissertations because

it will not appear anyway in their tally of what they’ve done that year. It’s invisible because grants and publications are all that an academic is assessed on, obviously, but that’s the world one lives in, and I’m fortunate that I think a topic like [specialist subject], it has a lot of good will and that’s what we harness.

Given this “obvious” state of affairs, Martin’s role as course leader is to resource the need for this work by harnessing colleagues’ “good will”. This extract provides an interesting glimpse into the system of value operating in Martin’s institution in which work with student writing does not count in the evaluation of an academic.


Perspectives on the emotional dimension of work with student writing

“You wonder why you’re doing it”

The portraits of marking routines here focus intentionally on what participants do on a routine basis with student scripts, with a clear focus on such activity as work. This second section focuses more particularly on how they feel about what they do: here the data was overwhelmingly negative. Emma, for example, describes the experience of working through fifty scripts on the same topic as “horrendous because it’s fifty times the same stuff” and “boring and tedious”. Diane described often marking scripts three at a time before going off to do something else, and “in batches of maybe six at a time at the most” because she cannot bear to mark more than this in one go: if she did “it would bore [her] to tears.” Similar discontentment about the lack of intellectual reward such work seemed to entail for participants surfaced regularly in accounts across the study: writing work often appeared to be associated with boredom and a sense of dullness. This sometimes applied to student academic writing broadly – described by Dan (RGU) as “dry” to teach and by Robert (P92U) as “dull”. This experience of tedium surfaced even more frequently in relation to the assessment of writing. Participants frequently referred in interviews to marking and feedback giving in terms such as “tedious and disheartening”, “a pain” (Angela, OBU), “galling” (James) or “soul destroying” (Tom, OBU). James commented that marking exam scripts was “the worst part of the job”. Deborah described a colleague who uses a countdown on his Facebook page just to cope with the boredom of marking and jokes that every academic she knows “feels like slashing their wrists two thirds of the way through a pile of marking”.

A sense of weary boredom emerged in other sorts of data, too. For example, Mike (NU1) recorded himself talking while marking two students’ essays. The assignment is:

1500 Word Essay: “Modernity and the metropolis: discuss the transformation of urban social life as represented in [insert name of artist/artwork here]”

Students are supplied with a choice of Impressionist paintings to focus on but can choose their own. At interview Mike describes the task as a “traditional, boring essay”, which involved reading some rather “dull literature”. However, he is also hoping that the painting interpretation element of the assignment will enable him to assess a “broadened definition of scholarship . . . the ability to creatively interpret the world as well as to mechanically cite what others have known about it”. He comments that half of the task is about “hoop jumping” (“I know what they need to cite, because I’ve set it all; I know exactly what references they need to put in because I know what’s in the library, because I’ve bought the books”) and the other about “you [the student] as a creative interpretive being making sense of the painting”. The reality of the assessment reading experience emerging from the transcript of the audio recording suggests, however, a much heavier emphasis


on the mechanics of scholarship than on any element of creativity, as illustrated in the following detailed transcription extract:

Time (mins: secs)

Transcript of Mike’s speech; words also written down indicated in bold.

1:38 here we go (.) ok the first comment I’m making is actually on line one where there’s a reference to X which could be anybody really that they’re citing, possibly Schopenhauer I suppose, but they haven’t referenced the author in their citation so I’m writing ‘who is this a quote from’ (.) big question mark

2:25 I’m correcting some grammar here (.) this is about Impressionism (.) and experience of the city and they haven’t capitalised [enunciated very clearly] Impressionism

2:50 ok so the first web link comes up already now er which is to the phrase X which is clearly just lifted off a website it is cited so it’s not erm poorly cited it’s just not filling me with joy that they’ve just read a website

3:25 ok it’s a little bit better than I’d feared (.) they got the right Bonaparte . . . ok that’s fine

3:40 oh dear ‘installs himself as Emperor in 1951’ ‘check (.) your (.) dates’ I have written (.) a hundred years out

4:07 oh dear [sighs] they’re now citing an author who (.) wouldn’t even have been a twinkling in his daddy’s eye when- in 1935 yet he’d apparently written a book (.) then (.) ‘X not born then’ [underlines his words?]

4:42 ok they’ve got the basic point about geometry (.) cleanliness (.) clean streets they get a tick for that

4:57 oh::: my:: what a good one to choose (.) now I’m getting people citing their lecture notes (.) [tuts] er ‘not really a source to be cited (.) use published sources’ (.) quite poor practice for second years (.) you do sometimes see first years doing it (.) er just stuff they’ve jotted down when you’ve been rambling on about something and it comes back to haunt you (.) as this one just has

6:42 ok so there’s some repetition here now talking again about hot water systems this is the third paragraph making the same point

7:25 ok there’s a classic mistake here (.) the key character in all of these essays is X and they’ve spelled [slowing down speech] his name wrong (.) which means they quite possibly have not been reading about him (.) yet trying to pass off to me that they have

8:26 stylistically it’s interesting all of the (.) quotations are italicised for some strange reason (.) and they’re still getting the dates wrong: ‘X’ published in [enunciated very clearly] 1935 apparently (.) one of the world’s earliest e-books then is it

9:45 [sighs] oh dear (.) ok here there’s just some misunderstanding about what the reconstruction efforts were about

10:30 and they’re misspelling the word ‘cited’ throughout spelling it with an s (.) I realise people’s spelling is (.) variable and I’m not actually deducting any marks for this erm but one thing it does demonstrate to me quite clearly (.) is that they’re not reading enough because (.) if they were they would know how to spell the word cited as in ‘cited in work elsewhere’ they’d know how to do it and (.) they don’t.


11:15 and the content of the paper itself is not leading to contradiction (.) erm or contradicting my view

12:08 ok so finished reading this one now [pages turning] [exhales loudly] unbelievable (.) apparently X wrote two books in 1935 (.) [pages turning]

Mike’s tone throughout the fifty-minute recording (during which he marks two essays and writes feedback) seems weary and sarcastic in comparison with his talk generally and in interviews. The most audibly animated moments occur at points in the recording when he finds some of the things he is expecting to find in the students’ work in terms of references to the relevant literature. These were not strong scripts, were the last two of a “batch” of thirty scripts, and it was “the end of a long day” in the office for Mike, so it is likely that this recording represents a relatively low point in the “rollercoaster of experience” (English, 2011) of marking. However, these detailed audio-recorded data provide insight into a significant element of the lived reality of the “traditional, boring essay” for the reader/assessor. Mike’s sense of boredom seems a long way from the excitement, enthusiasm and energy for teaching he expresses in an interview:

I feel it’s a really important job, teaching students in an exciting, enthusiastic, energizing way, ok it doesn’t happen for every student, but for some students it changes their lives . . . it happened to me.

There are also indications of a sense of distrust – the possibility that the student is trying to “pass off” an impression of wider reading, a lack of which seems to be betrayed by misspellings. The lack of enthusiasm and engagement the assignment appears to inspire comes across in Mike’s written feedback to the student, in which he comments:

Your essay has many of the right ingredients. However, your discussion fails to really lift off and offer a sense of evaluation, or really get to grips with the art work which you have chosen.

It is no wonder that after collection, “piles” of unmarked scripts sometimes “lurk” on the floor in Mike’s office, waiting to be tackled (see Figure 3.3).

Tom (OBU) comments that when he sits down in the evening (like Diane at NU2, when “family time” is over), he thinks to himself:

I’ve got two solid hours ahead of me and I’m not really going to get anything out of it myself . . . but you do it, that’s the job, pour a glass of red wine and get on with it . . . students never appreciate that you’ve given up your evening.


Tom’s comment here is telling in its reference to students: the weariness of this text work seemed to be intensified for some by doubts that it would register with – or have any palpable benefits for – students. Most participants expressed a doubt that many or any of the students would even read their feedback or respond to the advice given. A sense of wasted effort provides a downbeat thread of feeling running across participants’ accounts. For example, Russell (DLU) felt that he sometimes repeated the same feedback many times before a student was able to hear and respond to it, if he or she ever did: “you can say it ’til you’re black and blue, and they don’t do it”, a mixed metaphor which suggests battling resistance as well as exasperation. Pam (DLU) comments that: “the nervous students want to take my advice but can’t ’cause they’re too nervous and the confident students just look at the mark, ignore everything I’ve said and carry on as normal.” James (NU1) describes the “disappointing” feeling that he is “flogging a dead horse with a student and not making much impression on their way of thinking”. Angela expresses fears that despite students having a lot of comments on their written work they can learn from, “you just end up going into the ether.” Robert’s (P92U) notion is that when students receive assessed work, they turn to the front page and glance at the mark: “if somebody’s written in on the front, and they go away, and they say ‘what did you get?’, ‘fifty-seven’, ‘that’s great’ and then that’s it.” Mike comments in an interview: “Students pick up this carefully crafted feedback they see sixty-two and then they put it back on the pile and then they go home. They don’t read feedback.” His phrasing here suggests work done painstakingly and on an individualised basis along with a lack of reciprocation in students’ responses and so of work with few rewards.

Figure 3.3 Mike’s office

At times the best-case scenario participants seemed to be able to imagine was “hope . . . that they will read it and learn from it . . . at least there’s a chance” (Deborah, P92U). Figure 3.4 shows a short extract from a series of extensive paragraphs of feedback on a second-year History portfolio assignment for Deborah. The student has written approximately 3,200 words; Deborah has added 2,000 words of feedback in colour. (Deborah’s text is in brown in the original version. Student’s text shown here in italics; Deborah’s text shown without italics except where used for emphasis.)

Over the course of the assignment, Deborah’s feedback comments suggest increasing frustration with the student’s work. This emotive dimension of Deborah’s experience of marking is not overtly expressed in words but is strongly hinted at in non-verbal aspects of her written feedback, for example, here signalled by a crescendo of non-standard punctuation such as many exclamation marks and the use of block capitals and italics for stress. These features may be an attempt to convey interpersonal elements of the spoken voice in written form. The exaggerated punctuation, along with her use of imperatives such as “Read it again”, seem to indicate that Deborah feels her advice is not getting through: there is a sense of failure of ‘take-up’, perhaps a failure of ‘voice’ in the sense used by Blommaert (2005:68–78). While some research on students’ responses to feedback has shown that students do – at least sometimes – apparently read it and care about its content (e.g., Higgins et al., 2002), nevertheless this study showed that the feeling that students do not read it and/or do not benefit can be very strong amongst academic teachers and is likely to shape their practices as a result. Although they occupy a powerful position in their engagement with students’ texts, since as assessors their verdict will be heard, their power as teachers to ensure ‘take-up’ of their advice by students is less assured.

Figure 3.4 Deborah’s feedback


Profile: Deborah

Deborah is a professor in History at P92U. She spent many years as head of department, until stepping down recently to “concentrate on research”, and still plays an active role teaching undergraduates. She spends about two-thirds of her time on teaching and one-third on research. Interview and textual data suggest that she takes both activities very seriously, particularly feedback on written work, and that these twin commitments compete fiercely for her time, often at great personal cost.

Deborah’s department uses a wide variety of assessment genres including essays, reviews, seminar papers, document tests and exercises, presentations, Internet-based research, projects, dissertations and end-of-year exams. However, Deborah comments that in practice, the “default model” of academic writing required of students in her context is “the two and a half thousand word essay with the usual stuff about introductions and conclusions and logical development of argument and referencing and so forth”. In interviews she mostly talks about her work on a “core”, compulsory, level-two module in which traditional essays do not feature. Students complete a series of portfolio tasks and may completely redraft, following formative feedback, before the portfolio is finally assessed. Deborah explains that anonymous marking was introduced across the institution in response to student concerns about favouritism and prejudice, but this is one of the cases where anonymity is not possible. On some modules the teacher-student ratios are very low – Deborah can have up to two hundred assignments to mark. In other cases, as in the portfolio module, ratios are better, with about fifty students.

Deborah proudly refers to her department’s successful performance in the NSS. She objects to what she sees as institutional perceptions that historians aren’t interested in developing teaching and learning. Her allegiances generally seem to lie with the department rather than the wider university, and she often comments on an apparent lack of understanding of her department and its “historians” on the part of institutional managers. To some extent she also celebrates this sense of marginality: “We’re on a different site, and we do a different sort of thing in a different sort of way.” Deborah’s approach to student writing is in part informed by a strong sense that writing is a disciplinary matter:

we decided at an early stage we were not going to deliver “skills”, we were going to totally embed [writing] as part of good historical practices, that to be a good historian you must have these skills . . . that you can’t make an argument, which is what history is all about, unless you can organise your material effectively.


A sense of futility emerged in other examples of participants’ talk around specific texts. For example, Emma (RGU) brings to the second interview an assignment for a second-year module which has received a “low mark” (52%). The student has completed a practical task followed by a “technical report”: Emma comments that the student “rescues himself” with the practical part, but his writing is “relatively . . . meagre”. This is the first report of this kind students have been asked to produce and something which they have not yet come across in reading either. She also provides a “typical snip of feedback”, a paragraph of about 130 words, including the following comment: “The flow chart . . . is confusing and wrong.” Commenting on this feedback in an interview, Emma explains that the student has not used the set of conventions and symbols typically employed in flow charts (e.g., diamond-shaped boxes to indicate where different outcomes must be considered – see Figure 3.5). Emma tells me this type of chart will be important and useful to the students if they take up careers as computing engineers. When probed delicately to find out where she thinks the student might learn what to do next time, Emma laughs loudly and expresses doubts about the effectiveness of this “snip” of feedback:

I’m not sure that the student, by getting it wrong and then by getting short remarks on it which tell him that’s not good, actually can really improve to be honest.

Figure 3.5 Emma’s student’s flow chart


Emma tells me students can come to see lecturers for an explanation of what their flow charts should have looked like but does not know if the “responsible lecturer” here will pick this issue up or if he or she will circulate examples of good work, as happens in “many other courses”. There is a sense here that the feedback on its own is scarcely meaningful for students but that routine requires the student to come forward if he or she wishes to make sense of it. No wonder then that Emma comments: “You wonder why you’re doing it.”

Rewarding moments and strategic benefits

Despite these negative experiences, participants also occasionally found more positive rewards in work with students’ texts, particularly at moments where they had stepped outside the usual routines for writing and asked students to do so. This sometimes occurred where participants had departed from routine practice by choosing to set an experimental assignment in an unfamiliar genre. Departure from routine process was also likely to result in greater satisfaction, for example, where students were asked to engage with a purely formative written assessment. What made the time and effort worthwhile varied for individuals at different times. This could be simply an opportunity to link written assignments with learning for the teacher as well as the student. Seeing students’ writing improve, particularly where participants felt able to directly relate this to their own efforts as academic teachers, was also an important source of personal satisfaction and reward. Some even expressed pleasure. For example, Angela (OBU), although she had found working with one student’s writing rather onerous and was not sure how much her advice had helped him, nevertheless wrote on his final essay for her: “It has been a pleasure to see your essays improve.” Sue (DLU) writes on one script: “Well a very good second assignment and a joy to mark, thank you”. There were also other practical and strategic benefits which provided a positive rationale for participants in spending time and energy in paying attention to student writing. For example, Emma (RGU) explains that she has introduced innovations in a third-year assignment because master’s students who also take the module, often overseas students, benefit greatly from the chance to practice this sort of research-oriented writing in English in a UK setting. This makes her life “considerably easier” at a later stage when she is trying to support their dissertations and help them understand “what scientific means in terms of writing”.
Paul (NU2) was willing to read repeated drafts of second-year assignments on a module where students worked with an external organisation. Where academic teachers had a sense that their efforts were likely to be worthwhile – for whatever reason – they were often prepared to draw on their personal resources to “do something about [student writing]” (Tom, OBU). In Chapter 7 the book will return to explore in greater depth the nature of the rewards and satisfactions which participants experienced and worked to achieve in some parts of their practice.

Conclusion: the labours of Sisyphus

However, the occasional satisfactions referred to briefly in this chapter were generally rare in data for the study and were often mixed with or overwhelmed


by other, less positive feelings. Cumulatively across the fourteen cases, a sense emerged that much work with students’ texts is made up of rather dull tasks to be endured rather than enjoyed, in which the costs far outweigh the benefits for academic teachers and where actual benefits to students are perceived to be severely limited; this was particularly so for marking formally assessed writing. Deborah (P92U) offers a graphic description of the experience of marking in particular which powerfully captures the despair engendered by a sense of fruitless effort: “the actual doing it [as opposed to one-on-one meetings with students afterwards] is Chinese water torture, it’s horrible, it’s awful . . . it’s Sisyphean, isn’t it?” Deborah’s use of the word “torture” is perhaps melodramatic, but it is useful in highlighting a theme that runs through the study: it is not so much the work, or the time, but above all the sense that no one is listening which can make this process of marking torturous and ‘soul destroying’. Deborah’s classical reference conjures up the image of the mythical King Sisyphus rolling a boulder endlessly up a hill in Hades, only to find it crashing to the bottom at the end of every day, ready to be pushed to the top again the next. The tutor’s voice as assessor may be authoritative and her verdict on the assignment (almost) final – but she feels she cannot make her voice as a teacher heard or make her work count.

The emphasis in this chapter has been on the practical, including some examples where participants in the study were able to turn their time and effort to practical or strategic advantage. Work around writing is time- and effort-consuming, and individuals negotiated ways to resource this aspect of their practice in part through their – and others’ – perceptions of what was ‘worthwhile’. However, a view of textual practices as social practice acknowledges that what we do is also intimately bound up with who we perceive ourselves and are perceived to be in any given context. The generally negative and sometimes positive experiences described here are bound up with the way participants see themselves and are seen by others in their contexts. Accounts in this chapter suggest that marginal work can be experienced as marginalising to the individuals who undertake it (signalled, e.g., in participants’ sense of such work being unrecognised or invisible). Thus practices around student writing involve questions of professional identity, in turn linked to ways in which different identities are valued by participants and by others in their contexts. In Chapter 5 I will return to the notion that academic teachers working at the ‘textface’ are continually weighing up costs and benefits as they invest their time but will expand the notion to include more explicitly the identity costs and benefits that are incurred when such work is undertaken in particular institutional contexts.

This chapter has focused on academic teachers’ working practices as they engage with students’ mainly assessed texts. It also brings to light some of the strong feelings – usually negative – experienced by those doing such work and their perception of the costs of this work in terms of time and effort, along with a perceived lack of impact, reward and recognition. Exploring such practices through the teacher-focused lens of “marking” offers insights which are rare in education research and which reveal work at the ‘textface’ as a particularly demanding and draining form of academic labour. An accumulation of detailed, ‘thick’ description enables both complexity and variety to emerge and goes some way towards

76 Academics’ experiences of engaging with student texts

explaining the burdensome quality of such work as experienced by study partici-pants. As data analysis has shown, participants seemed to frequently feel that they were going through the routine motions of higher education pedagogic practice, while far from sure about their effectiveness. The data – particularly interviews and talk around text when marking – demonstrate the potential for boredom, even alienation, in work around writing in the disciplines. A number of authors have described this condition of alienation in relation to reading and writing at university for students (e.g., Ashwin and McClean, 2005; Lillis, 2001; Mann, 2000). The next chapter returns to this notion of alienation in connection with the practices of assessment which are so closely bound up with writing in aca-demic contexts. I turn more specifically to the institutional dimension of practice around student writing which emerged in the study and examine the ways in which individuals’ practices were powerfully shaped by the material conditions of their various university assessment regimes and by their accompanying discourses of assessment in contemporary higher education.

Notes

1 Throughout the book I use short, pseudonymous acronyms to indicate the institutional location of participants. See Table 1 for how these correspond to different institutional types.

2 The word 'grammar' was used by different participants with different meanings, but usually it carried connotations of a superficial or merely technical aspect of writing. However, this shorthand belies the actual complexity of grammatical choices in academic writing and their implications for meaning, widely recognised in linguistics, and, as Turner (2011) has shown, downplays the work involved in developing and improving grammar in student writing as meaning making.

3 Where participants used the terminology "first year, second year" and so on to denote levels of study, this has been preserved.

4 The exception was James (NU1).


Using assessment and feedback to empower students and enhance their learning

From: Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, 2nd Edition

Edited by Cordelia Bryan & Karen Clegg

A Chapter Sampler


4 Using assessment and feedback to empower students and enhance their learning

Sally Brown

Introduction

Assessment matters more today than ever before. If we want to support students' engagement with learning, a key locus of enhancement must be around assessment, and in particular exploring how it can be for rather than just of learning, since it is a crucial, nuanced and highly complex process.

Graham Gibbs somewhat controversially (but still pertinently) argued nearly a decade ago that:

Assessment makes more difference to the way that students spend their time, focus their effort, and perform, than any other aspect of the courses they study, including the teaching. If teachers want to make their course work better, then there is more leverage through changing aspects of the assessment than anywhere else, and it is often easier and cheaper to change assessment than to change anything else.

(Gibbs, 2010)

Having worked for more than four decades in higher education teaching, learning and assessment, it is these kinds of approaches to enhancing courses and programmes of study that I care deeply about, and so I offer five propositions to ensure that assessment and feedback are integral to learning and genuinely contribute to it, rather than being simply a means of capturing evidence of achievement of stated outcomes.

1. Assessment must serve student learning.
2. Assessment must be fit for purpose.
3. Assessment must be a deliberative and sequenced series of activities demonstrating progressive achievement.
4. Assessment must be dialogic.
5. Assessment must be authentic.

Assessment must serve student learning

In former times, assessment was (and in some locations still is) seen as a process detached from everyday student life, something that happens once learning has finished, but many today recognise the importance of assessment being a means through which learning happens, particularly those who champion the assessment for learning movement in higher education, including Bloxham and Boyd (2007) and Sambell, McDowell and Montgomery (2012), who led a multi-million pound initiative, the Assessment for Learning Centre for Excellence in Teaching and Learning, exploring the extent to which reviewing and revising the assessment and feedback elements of curriculum design, and changing students' orientation towards assessment through fostering enhanced assessment literacy, can have a high impact on their engagement, retention and ultimate achievement.

Assessment literacy implies helping students develop a personal toolkit, through which they can better appreciate the standards, goals and criteria required to evaluate outputs within a specific disciplinary context (Sambell et al, 2012). This is likely to involve enabling students to:

• make sense of the complex and specialist terminology associated with assessment (such as weightings, rubrics, submission/resubmission, deferrals, referrals and condonements);

• encounter a variety of potentially new-to-them assessment methods such as vivas, portfolios, posters, pitches, critiques and assessed web participation, as well as to get practice in using them; and

• be strategic in their behaviours, putting more work into aspects of an assignment with high weightings, interrogating criteria to find out what is really required and so on.

While an appropriate balance of formative and summative assessment needs to be achieved, it is formative assessment that has the potential to shape, advance and transform students' learning, while summative assessment is necessary to demonstrate fitness for practice and competence. The developmental formative guidance students receive through using exemplars, getting peer feedback and undertaking incremental tasks can lead to improved performance in final assessments, while end-point summative assessment is principally designed to evaluate students' final performances against the set criteria, resulting in numbers, marks or grades. Most assessors, of course, use a mixture of formative and summative assessment to achieve both ends, but to make a difference to students' lives we must be clear about what function is predominant at any particular time.

Bloxham and Boyd (2007) proposed a number of crucial assessment for learning principles:

1. Tasks should be challenging, demanding higher order learning and integration of knowledge learned in both the university and other contexts. This means in practice that assignments should not just focus on recall and regurgitation under exam conditions but instead should be designed to showcase how students can use and apply what they have learned.


2. Learning and assessment should be integrated: assessment should not come at the end of learning but should be part of the learning process. We need to ensure that every assessed task is productive of learning throughout a course of study. One of the particular problems we face currently is managerial drives to reduce the amount of assessment to make it 'more efficient'. While we do not want to grind staff or students down by over-assessing, if you agree with me that assessment has a core role in engendering learning, we must regularly review programme assessment strategies to ensure that there is sufficient to help engage students without being overwhelming.

3. Students are involved in self-assessment and reflection on their learning; they are involved in judging performance: this is a lifelong learning skill, since experts and professionals all need to become adept at gauging personal performance and continuously enhancing it. Rehearsal and guidance are essential to ensure that disadvantaged students, who typically tend to underestimate their abilities, are supported to make accurate and meaningful judgements matching their performance against criteria.

4. Assessment should encourage metacognition, promoting thinking about the learning process, not just the learning outcomes: research clearly shows that students tend ultimately to be more successful if they interrogate not just the performance criteria but also how they themselves are learning.

5. Assessment should have a formative function, providing 'feedforward' for future learning which can be acted upon. There should be opportunity and a safe context for students to expose problems with their study and get help, and an opportunity for dialogue about students' work: a principal focus by markers on correcting errors and identifying faults is less helpful to learning than the offering of detailed and developmental advice on how to improve future performances in similar assignments and in future professional life.

6. Assessment expectations should be made visible to students as far as possible: old-fashioned (and mean) assessment approaches sometimes sought to catch out unwary students and keep the workings of assessment in a metaphorical black box, but this is counterproductive. All elements of judgement need to be transparent, not in order to make it easy for students but to help students understand what is required of them to succeed. If we play fair with students, there is a lower chance of them adopting unsuitable or unacceptable academic practices, because they will trust our systems and judgements.

7. Tasks should involve the active engagement of students, developing the capacity to find things out for themselves and learn independently: it's important to prepare twenty-first-century students for careers and wider lives where they are not dependent on limited information sources provided by others, but instead seek out knowledge and advice and, even more importantly, systematically and appropriately evaluate their information sources.


8. Tasks should be authentic: worthwhile, relevant and offering students some level of control over their work. When students can see the relevance and value of what they are being asked to do, they are less likely to adopt superficial approaches which focus just on getting the required mark, and instead engage more deeply in learning for its own sake.

9. Tasks are fit for purpose and align with important learning outcomes: too often we ask students to undertake tasks that are proxies for assessments that genuinely demonstrate the achievement of the actions we delineate in our learning outcomes, because they seem to be easier to assess, or because we have always done it that way!

10. Assessment should be used to evaluate teaching as well as student learning. How well students perform the tasks we set them can give us some indication of how effectively we've taught them, how well we've designed the assessments and how effective we have been in supporting students, although of course it's not a perfect measure.

(After Bloxham & Boyd, 2007, with my gloss).

Summing up these factors underpinning effective assessment, Sambell et al (2017) subsequently proposed a cyclical model of assessment for learning that:

• emphasises authentic and complex assessment tasks;
• uses 'high-stakes' summative assessment rigorously but sparingly;
• offers extensive 'low-stakes' confidence-building opportunities and practice;
• is rich in formal feedback (e.g. tutor comment, self-review logs);
• is rich in informal feedback (e.g. peer review of draft writing, collaborative project work);
• develops students' abilities to evaluate their own progress and direct their own learning.

I have worked over the years with many course teams who have used this model as a framework by which they can periodically review their modules and programmes to maximise their effectiveness, and this approach can readily be adopted by others wishing to spring clean their assessment design and delivery.

Assessment must be fit for purpose

Fitness for purpose implies a deliberative and measured approach to decision making around assessment design, so that choices of methodologies and approaches, orientation, agency and timing are all made in alignment with the purposes of the task with that particular cohort of students, at that level, in that context, in that subject discipline and at that stage in their academic careers. It can be counterproductive, for example, to offer too many early, feedback-free summative tasks in a programme, which can simply serve to undermine students' confidence and self-efficacy, while extensive opportunities offered to students at the end of a programme to discuss potential improvements can be pointless if they are offered after the student has completed all work necessary to graduate. Some kinds of formal exams can be quite helpful for final assignments, but learning outcomes claiming to test interpersonal, problem-solving and communication skills may well be better demonstrated through practical skills, which need to be tested practically, rather than students being asked to write about them. Portfolios are likely to be better at demonstrating evidence of achievement of a range of skills and capabilities than multiple-choice questions ever could (Brown and Race, 2012, include a table of pros and cons of different forms of assessment for different purposes).

As practitioners, we are making crucial design decisions about assessment, based on the level of study, the disciplinary area and professional discourses in which our students (and subsequently our graduates) are working. Where we are in control of such decisions (and this is not the case in all nations) it is incumbent upon us to adopt purposeful strategies to enhance the formative impact of assessment. This could include, for example:

• thinking through the overall diet of assessment for a programme, to avoid stress-inducing bunching of hand-in dates and to ensure that any unfamiliar types of assignments are introduced with rehearsal opportunities and the chance to practise in advance of summative assignments;

• considering making the first assignment of the first semester of the first year one on which every student making a reasonable attempt would achieve at least a pass mark, as a means of fostering confidence, while allowing high flyers to score higher marks. Failing the very first assignment on a course is a very dispiriting experience and one to be avoided;

• avoiding excessive numbers of very demanding summative tasks too early, which might undermine confidence, while at the same time ensuring that plenty of opportunities are included early on for students to gauge how they are doing through informal activities;

• helping students become confident in their own capabilities by giving them opportunities to develop and demonstrate interpersonal skills, through, for example, small-scale group tasks which rely on everyone contributing;

• enabling students to build up portfolios of evidence that demonstrate achievement of learning outcomes in a variety of media, for example, testimonials from work-placement supervisors, reflective commentaries, videos of them demonstrating skills in the lab, annotated photos of studio outputs showing how these have been achieved and so on.

Similarly, some assignments are oriented towards assessing theory and underpinning knowledge, while others are focused primarily on practical application of what has been learned, or indeed both. Many would be reassured to know that airline pilots are assessed for safety through simulations before being declared fit to fly professionally. Practical skills are central to the assessment of physiotherapists and sports scientists, and most of us would be more trusting of our engineers, doctors and architects if we were confident they understand both underlying technical or scientific principles and their implementation.

Who is best placed to assess?

In terms of who should assess, most nations make less use than would be beneficial of assessment agents other than tutors. Students' meta-learning is enhanced by being involved in judging their own and each other's performance in assessed tasks (Boud, 1995; Falchikov, 2004; Sadler, 2013), helping them not just to become more effective during their studies, but also to build lifelong skills valuable after graduation, through the capacity to monitor their own work rather than relying on someone else to do it. Furthermore, students nowadays undertake study in a variety of practice contexts, so employers, placement managers, internship supervisors and service users can all play a valuable part in assessing students' competences and interpersonal skills.

Much conventional assessment is end-point, and while summative assessment is necessarily final, there can be substantial benefits to integrating incremental and staged tasks culminating in the capstone task at the end, rather than giving students a single shot at the target, leading to high stress levels and risk of failure. With an increasing focus on student wellbeing and good mental health, we have a duty of care as practitioners to ensure that the assignments we set, while remaining testing and challenging, do not disadvantage any groups of students disproportionately (in the UK particularly this means making reasonable adjustments for existing disabilities or special needs, for example, to make sure each student has a fair chance of demonstrating their capabilities). Dweck (2000) argues that students who have static views of their capabilities should be encouraged to adopt more flexible perspectives that enable them to move beyond thinking they are simply stupid or dreadful at a particular subject, seeking instead to use strategies to help improve their competence and thereby build self-efficacy.

Students with more of an entity theory of intelligence see intellectual ability as something of which people have a fixed, unchangeable amount. On the other end of the spectrum, those with more of an incremental theory of intelligence see intellectual ability as something that can be grown or developed over time.

(Yeager & Dweck, 2012, p. 303).

What works best on substantial tasks like final projects and dissertations, for example, is to have some element of continuous assessment of work in progress, with feedback and advice along the way, avoiding ultimate 'sudden death'. Equally, the absence of early assessment can be problematic, since students, particularly the tentative and under-confident, can find their anxiety building if they have no measure of how they are doing in the early parts of a programme, so modest self- or computer-marked assignments such as multiple-choice questions can be efficient and effective at helping them gauge their own progress.

The five elements that collectively make up the fit-for-purpose model can work together coherently to help assessment achieve its first aim: to contribute to, rather than distract from, student learning.

Assessment must be a deliberative and sequenced series of activities demonstrating progressive achievement

If assessment is indeed to be the fulcrum around which change management of student learning can be levered, then nothing can be left to chance. Alverno College in the USA is one of the most cited examples of a progressive and highly developed approach to curriculum design, delivery and assessment, and is probably the most influential series of active interventions demonstrating what can be achieved with a very systematic and articulated college-wide strategy for advancing student learning, capability and confidence, as Vignette One demonstrates:

Vignette One The Alverno model

Alverno College is a small women's higher education institution in Milwaukee, Wisconsin, in the USA, where the curriculum is ability based and focused on student outcomes integrated in a liberal arts approach, with a heritage of educational values. It is rooted in the Catholic tradition and designed to foster leadership and service in the community, and over the years it has established a global reputation as a unique provider of integrated learning opportunities within a tightly focused learning community. Marcia Mentkowski (2006), in the first edition of this book, defined the Alverno approach to assessment as learning as: 'a process … integral to learning, that involves observation, analysis/interpretation, and judgement of each student's performance on the basis of explicit, public criteria, with self-assessment and resulting feedback to the student' (p. 48), 'where … education goes beyond knowing, to being able to do what one knows' (p. 49).

Mentkowski says that Alverno’s assessment elements, such as making learning outcomes explicit through criteria or rubrics, giving appro-priate feedback, and building instructor, peer- and self- assessment, may seem directly adaptable at first. Yet the adaptability of Alverno’s inte-gration of these conceptual frameworks shapes whether and how any one institution’s faculty can apply or elaborate – not adopt or replicate – elements of a curriculum developed by another faculty (pp. 49– 50).


Alverno’s success is built on a systematic approach to integrating deeper cultural qualities where: the curriculum is a source of both indi-vidual challenge and support to the development of both autonomy and orientation to interdependence in professional and community service and where education is seen as developing the whole person  – mind, heart and spirit – as well as intellectual capability (p. 52). This accords well with Dweck’s approach and current thinking about techniques that can help students find strategies that through feedback can help them believe that they can improve.

What students need the most is not self- esteem boosting or trait labelling; instead, they need mindsets that represent challenges as things that they can take on and overcome over time with effort, new strategies, learning, help from others, and patience.

(Yeager & Dweck, 2012, op. cit., p. 312)

The following learning principles have shaped the elements of Alverno’s assessment as learning:

• ‘If learning that lasts is active and independent, integrative and experiential, assessment must judge performance in contexts related to life roles.

• If learning that lasts is self-aware, reflective, self-assessed and self-regarding, assessment must include explicitness of expected outcomes, public criteria and student self-assessment.

• If learning that lasts is developmental and individual, assessment must include multiplicity and be cumulative and expansive.

• If learning that lasts is interactive and collaborative, assessment must include feedback and external perspectives as well as performance.

• If learning that lasts is situated and transferable, assessment must be multiple in mode and context.’ (p. 54)

In a highly effective cycle, Mentkowski describes how students learn through using abilities as metacognitive strategies to recognise patterns, to think about what they are performing, to think about frameworks and to engage in knowledge restructuring as they demonstrate their learning in performance assessments (p. 56).

A particular feature of the Alverno approach to assessment is their commitment to the integration of assessment across programmes, curricula and the whole institution, rather than allowing separate, piecemeal assessment design, and they argue that this approach enables it to:

• be integral to learning about student learning;
• create processes that assist faculty, staff and administrators to improve student learning;
• involve inquiry to judge programme value and effectiveness for fostering student learning;
• generate multiple sources of feedback to faculty, staff and administrators about patterns of student and alumni performance in relation to learning outcomes that are linked to curriculum;
• make comparisons of student and alumni performance to standards, criteria or indicators (faculty, disciplinary, professional, accrediting, certifying, legislative) to create public dialogue;
• yield evidence-based judgements of how students and alumni benefit from the curriculum, co-curriculum and other learning contexts;
• guide curricular, co-curricular and institution-wide improvements.

(Adapted from the Alverno Student Learning Initiative; Mentkowski, 2006)

The Alverno approach has been widely admired and emulated internationally: it influenced my work in the late 1990s and impacted profoundly on much other work in the UK, for example, particularly the programme-level assessment approaches of the kinds adopted at the Bradford University-hosted Programme Assessment Strategies (PASS) project, led by Peter Hartley (McDowell, 2012).

Assessment must be dialogic

All too often, students perceive assessment as something that is done to them, rather than a process in which they engage actively. As Sadler argues:

Students need to be exposed to, and gain experience in making judgements about, a variety of works of different quality … They need to create verbalised rationales and accounts of how various works could have been done better. Finally, they need to engage in evaluative conversations with teachers and other students [and thus] develop a concept of quality that is similar in essence to that which the teacher possesses, and in particular to understand what makes for high quality.

(Sadler, 2010)

What we can do here is move away from monologic approaches so that students truly become partners in learning. Vignette Two illustrates a very practical and impressive means by which art and design students can engage fully in the ongoing critique process during the production of assessed work:

Vignette Two Encouraging dialogic approaches to assessment by using a simple feedback stamp to provide incremental feedback on work in progress

(Richard Firth and Ruth Cochrane, Edinburgh Napier University)

Within the context of the creative industries, students do not always recognise when feedback is being given, because it is embedded in ongoing conversations about their work in progress. This is often reflected in students' responses to student satisfaction surveys, since they do not always perceive advice and comments given in class as being feedback. In this field, students often work for extended periods on tasks and briefs, often characterised by creative and 'messy' processes, and assessment times can be lengthy. Orr and Shreeve (2018) describe the challenge of anchoring and negotiating shared understandings of sometimes abstract, conceptual and subjective feedback, so Richard Firth and Ruth Cochrane set out to find a very practical way to open constructive dialogues with students that would capture ephemeral conversations in the studio, since they were keen to help students recognise that the feedback they received was deliberately and purposefully designed to focus on their own individual improvement.

They recognised the importance of engaging students proactively in meaningful feedback dialogues (Carless et al, 2011) to help them get to grips with the tacit assumptions underpinning the discipline and better understand fruitful next steps. They recognised that opening up dialogues about the quality of work and current skills can help to strengthen students' capacities to self-regulate their own work (Nicol & Macfarlane-Dick, 2006), which are key to their future professional practice in the creative industries.

The originators’ approach involved the use of an actual rubber stamp to ‘anchor’ a diverse range of tutor formative feedback on, for example, sketching, note making and the use of visual diagrams. The stamp is used frequently and iteratively as the module unfolds, with tutors sitting beside students, commenting purposefully on their sketch books, presentation boards and three-dimensional prototype models, and printing directly on to student sketchbooks or similar, with marks on five axes indicating progress to date on each from low (novice), at the centre, to high (expert), at the periphery. These axes, covering the essential cyclical, indicative elements of the design process, in the example of product design comprise:

• research: the background work the student has undertaken in preparation; this can include primary and secondary research drawing on theoretical frameworks from other modules;
• initial ideas: this covers the cogency and coherence of the student's first stab at achieving solutions; this is likely to include an evaluation of the quantity, diversity and innovative nature of those ideas;
• proto(typing) and testing: the endeavours the student has made to try out provisional solutions to see if they work; this could include user testing, development, infrastructure and route to market;
• presentation: an evaluation of how effective the student has been in putting across ideas or solutions in a variety of formats, including two- and three-dimensional, virtual and moving images;
• pride: a reflective review of the student's professional identity as exemplified in the outcome in progress; indicators might include punctuality, organisation, care and engagement.

Ensuing discussions enable tutors' tacit understandings to become explicit for the student, so they can be translated into improved performance/outcomes (Sadler, 2010). The visual build-up of the stamp's presence during a student's documented workflow helps everyone to see the links between formative and summative assessments, so that feedforward from the former to the latter is clear. This is deemed by tutors and students to be much more productive than extensive written feedback provided after the work has been submitted. This regular dialogic review enables rapid feedback at each stage of the design process in a format that is familiar to both design tutors and students working in the creative field. It thereby avoids awkward and problematic misapprehensions about desired outcomes and the level of work required: meaning no shocks or nasty surprises when it comes to the summative assessment! It also builds over time and across programme levels, helping to develop a sense of a coherent, integrated and incremental feedback strategy that builds developmentally throughout the programme (for an extended account, see Firth et al, 2018).

Assessment must be authentic

The very act of being assessed can help students make sense of their learning, since for many it is the articulation of what they have learned and the application of this to live contexts that brings learning to life. Authentic assessment implies using assessment for learning (Sambell et al, 2017) and is meaningful to students in ways that can provide them with a framework for activity, since assessment is fully part of the learning process and integrated within it. The benefits of authentic assessment can be significant for all stakeholders, because students undertaking authentic assessments tend to be more fully engaged in learning and hence tend to achieve more highly, because they see the sense of what they are doing (Sadler, 2005).

University teachers adopting authentic approaches can use realistic and live contexts within which to frame assessment tasks, which help to make theoretical elements of the course come to life, and employers value students who can quickly engage in real-life tasks immediately on employment, having practised and developed relevant skills and competences through their assignments. In addition, such assignments can foster deeper intellectual skills, as Worthen argues:

Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.

(Worthen, 2018)

A useful lens through which to view authenticity in assessment is to consider the kinds of questions employers might ask at interview, and to use these to frame authentic assignments. For example, graduating students could be asked to describe an occasion when they had:

• worked together with colleagues in a group to produce a collective outcome;

• worked autonomously with incomplete information and self-derived data sources;

• taken a leadership role in a team, and ‘could you tell us your strategies to influence and persuade your colleagues to achieve a collective task?’;

• communicated outcomes from project work orally, in writing, through social media and/or through a visual medium.

In this way, learning outcomes around graduate capabilities could be embodied in assignments that truly tested them, with concomitant opportunities for students to develop self-knowledge and confidence about their professionally related competences.

Conclusions

Too often, student enthusiasm, commitment and engagement are jeopardised by assessment that forces students to jump through hoops whose value they fail to perceive. Assessment can and should be a significant means through which learning happens: a motivator, an energiser and a means of engendering active partnerships between students and those who teach and assess them. Students can be empowered by being supported in their learning through the assignments we set. What we do as practitioners and designers of assessment can have positive or negative impacts on students’ experiences of higher education, so I argue strongly that we have a duty to get it right by adopting purposeful, evidence-informed and, where necessary, radical approaches to assessment alignment, design and enactment.

Sally Brown

References

Bloxham, S. and Boyd, P. (2007) Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead: Open University Press.

Boud, D. (1995) Enhancing Learning Through Self-Assessment. London: RoutledgeFalmer.

Brown, S. (2015) Learning, Teaching and Assessment in Higher Education: Global Perspectives. London: Palgrave Macmillan.

Brown, S. and Race, P. (2012) Using effective assessment to promote learning. In: Hunt, L. and Chambers, D. (eds) University Teaching in Focus. Abingdon: Routledge, pp. 74–91.

Carless, D., Salter, D., Yang, M. and Lam, J. (2011) Developing sustainable feedback practices. Studies in Higher Education 36 (4): 395–407.

Dweck, C. S. (2000) Self-Theories: Their Role in Motivation, Personality and Development. Lillington, NC: Taylor & Francis.

Falchikov, N. (2004) Improving Assessment through Student Involvement: Practical Solutions for Aiding Learning in Higher and Further Education. London: Routledge.

Firth, R., Cochrane, R., Sambell, K. and Brown, S. (2018) Using a Simple Feedback Stamp to Provide Incremental Feedback on Work-in-Progress in the Art Design Context. Enhance Guide 11. Edinburgh: Edinburgh Napier University.

Gibbs, G. (2010) Using Assessment to Support Student Learning. Leeds: Leeds Met Press.

McDowell, L. (2012) Programme Focussed Assessment: A Short Guide. Bradford: University of Bradford.

Mentkowski, M. (2006) Accessible and adaptable elements of Alverno student assessment-as-learning: strategies and challenges for peer review. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. Abingdon: Routledge, pp. 48–63.

Mentkowski, M. and associates (2000) Learning That Lasts: Integrating Learning, Development and Performance in College and Beyond. San Francisco, CA: Jossey-Bass.

Nicol, D. J. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.

Orr, S. and Shreeve, A. (2018) Art and Design Pedagogy in Higher Education. London: Routledge.

Sadler, D. R. (2013) Making competent judgments of competence. In: Blömeke, S., Zlatkin-Troitschanskaia, O., Kuhn, C. and Fege, J. (eds) Modeling and Measuring Competencies in Higher Education: Tasks and Challenges. Rotterdam: Sense Publishers, pp. 13–27.

Sadler, D. R. (2010) Beyond feedback: developing student capability in complex appraisal. Assessment and Evaluation in Higher Education 35 (5): 535–550.

Sadler, D. R. (2005) Interpretations of criteria-based assessment and grading in higher education. Assessment and Evaluation in Higher Education 30: 175–194.


Sambell, K., McDowell, L. and Montgomery, C. (2012) Assessment for Learning in Higher Education. Abingdon: Routledge.

Sambell, K., Brown, S. and Graham, L. (2017) Professionalism in Practice: Key Directions in Higher Education Learning, Teaching and Assessment. London and New York, NY: Palgrave Macmillan.

Worthen, M. (2018) The misguided drive to measure ‘learning outcomes’. New York Times 23 February. www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html?smid=tw-share (accessed 4 January 2019).

Yeager, D. S. and Dweck, C. S. (2012) Mindsets that promote resilience: when students believe that personal characteristics can be developed. Educational Psychologist 47 (4): 302–314.


The relational dimension of feedback

From: Designing Effective Feedback Processes in Higher Education

A Learning-Focused Approach

By Naomi Winstone & David Carless

A Chapter Sampler

Chapter 9

The relational dimension of feedback

The somewhat hidden nature of much assessment and feedback practice can make it easy to forget that the process involves people, not just pieces of work. Students may submit an assignment through the LMS, identified only by their student registration number. The marker may read this student’s work as one out of a large number of assignments and might type comments onto an electronic version of the student’s assignment. The marked work might then be made available to the student via the LMS, with the marker’s identity perhaps also being hidden. Crucially, the marker may never see that student’s work again, so has little opportunity to see the impact of their feedback on students’ subsequent learning.

The Latin root of the word ‘assessment’, ad sedere, translates as ‘to sit beside’. It is therefore rather worrying that in many contemporary higher education environments, the whole process could take place without any interpersonal interaction whatsoever. In the current climate, paying attention to the relational dimension of the feedback process is perhaps even more important, given recent changes to staff-student ratios and class sizes, and increasing diversity in student demographics (Rowe, 2011). It is not just students who can find the assessment process disheartening; many teachers report feelings of anxiety and frustration in response to assessment and feedback processes (Molloy, Borrell-Carrió, & Epstein, 2013). Because so many elements of higher education have become depersonalised, recognition of human relations within the feedback process is arguably more important than ever. The relational dimension of feedback is an important part of new paradigm feedback approaches, because emotions influence students’ reception and processing of feedback, and their motivation to take subsequent action (Värlander, 2008). As argued by Watling and Ginsburg (2019, p. 79), “Teacher-learner relationships based on trust create safety for learners to engage with feedback”. Furthermore, the relational dynamics within feedback processes provide the context in which students’ sense-making of feedback information takes place (Esterhazy, 2018).

The complex relationships between emotion and response to feedback

The primary focus of this book is the creation of environments that facilitate students’ uptake of and learning through feedback. In this context, it is essential for us to understand emotional barriers, defences, and threats to self-efficacy in the feedback process. Emotional responses can prevent the educational benefit of feedback from being realised (Carless, 2006), where feedback can be “obscured by emotional static” (Chanock, 2000, p. 95). Students in a study by Ferguson (2011) reported that overly critical feedback was too upsetting to look at; such reactions can lead to what has been termed “academic paralysis” (Nash, Crimmins, & Oprescu, 2016, p. 596), whereby negative emotion has a detrimental impact on academic motivation. Many experienced academics report a similar feeling when receiving critical comments through peer review; sometimes, they report a need to put them to one side until the initial emotional reaction has subsided. Crucially, students may be less likely to use comments if they have a negative emotional impact or are demotivating (e.g. Poulos & Mahony, 2008), and this effect can be exacerbated for international students who are entering a new academic culture (e.g. Tian & Lowe, 2013). Indeed, international students are more likely than home students to find feedback critical and upsetting (Ryan & Henderson, 2018). Beyond the motivational effect, emotion can also have a direct impact on cognitive processing of feedback. For example, positive emotions can broaden the focus of attention, whilst negative emotion can narrow it (Huntsinger, 2013). However, the broader literature on emotion and learning tells us that a simple distinction between positive and negative emotions, in which negative emotions lead to detrimental outcomes, is far too simplistic to account for the impact of emotions on behaviour in the context of feedback.

The relationship between emotion and feedback is complex; for example, while praise can elicit positive emotions, it can concurrently undermine motivation to invest future effort (e.g. Cassidy et al., 2003). Pekrun’s Control-Value Theory (CVT) offers some indication of why these complex interactions between emotion and response to feedback might occur. This theory identifies a suite of ‘achievement emotions’ which can be elicited in the context of assessment, defined as “emotions that are directly linked to achievement activities or achievement outcomes” (Pekrun et al., 2011, p. 37). Pekrun (2006) suggests that emotions differ not only according to valence (positive/negative), but also their activation potential (whether they are ‘activating’, in terms of initiating action or effort, or ‘deactivating’, in terms of inhibiting action or effort). For example, enjoyment and pride are emotions that are positive in valence and activating in effect. In contrast, contentment and relief are also positive in valence, but deactivating in their effect, as they often do not promote further action (Pekrun et al., 2007). Similarly, shame and anxiety are negative in valence and activating in effect, whereas hopelessness and disappointment are negative deactivating emotions (Pekrun, Elliot, & Maier, 2006).


The crucial contribution of CVT to our understanding of the impact of emotion within assessment contexts is the recognition that the outcome of achievement emotions interacts with emotional valence. Thus, whilst it might be perceived that positive emotions are motivationally beneficial, positive emotions such as relief can actually discourage further effort and learning. This distinction is crucial for our exploration of the influence of relational factors on students’ uptake of feedback, because it leads us to expect that two students may display a similar behaviour (limited engagement with feedback) on the basis of polarised emotional experiences (e.g. one student may be relieved, a positive deactivating emotion; and one student may be disappointed, a negative deactivating emotion).

Also within the framework of CVT, Peterson, Brown, and Jun (2015) used a diary method to track which achievement emotions were commonly experienced within an assessment cycle. Those students who performed better on the assessment reported higher levels of positive emotions and lower levels of negative emotions, with the opposite being true for more poorly performing students. Importantly, many students who had performed well reported feeling ‘chilled’, which, as a positive deactivating emotion, might limit engagement with feedback. In another study, ‘feedback-seekers’ (those who positively anticipate performance feedback) demonstrated fewer negative and more positive emotions in response to negative feedback, and greater hope in response to constructive criticism (Fong et al., 2016). Thus, one recommendation for supporting students’ uptake of feedback is to support cognitive reframing as a way of managing the emotional response to feedback, hopefully leading to superior cognitive processing of feedback (Raftery & Bizer, 2009).

The influence of feedback on identity and self-efficacy

The impact on emotion is just one part of the relational dimension of feedback. Feedback can also influence the way in which a student views themselves as a learner, and their own levels of confidence and self-belief. As argued by Lave and Wenger (1999, p. 31), “learning and a sense of identity are inseparable. They are aspects of the same phenomenon”. As well as feedback having the potential to threaten self-esteem (Mutch, 2003), a student’s level of self-esteem can also influence the ways in which they respond to feedback. In particular, findings from a study of mature students by Young (2000) suggest that students with high and medium levels of self-esteem respond to feedback with a sense of agency to act upon it; in contrast, the impact of feedback on students with low self-esteem is more negative, leading some students to question whether they should be at university at all.

Students’ beliefs about their own ability are also likely to influence their preconceptions of what grade they expect to achieve on their work. Where students hold such an expectation, there is clear potential for discrepancy between expectation and reality (see Chapter 7). If expectations are met, then positive emotions such as pride are likely to be experienced; in contrast, doing worse than one expects is likely to lead to feelings of disappointment and shame (e.g. Kahu et al., 2015). For example, Ryan and Henderson (2018) reported that students who had received a grade lower than expected were more likely to experience sadness, anger, and shame than students who had achieved a grade that was higher than expected. These differential emotional reactions can lead to differential behavioural responses to feedback. In a study where physicians were presented with multi-source feedback, they exhibited a positive emotional response when the feedback aligned with their own self-perceptions of their competence, and a negative emotional response when there was misalignment (Sargeant et al., 2008). Importantly, alignment between feedback and self-perceptions resulted in constructive engagement with and use of feedback. In contrast, where misalignment occurred, there was evidence of less adaptive responses to feedback. For example, one physician who received feedback that did not align with his own self-perception remarked: “I did not get a good report … I didn’t sleep for several nights after that” (Sargeant et al., 2008, p. 280).

In the context of our relational focus, receiving feedback that is discrepant with one’s own self-concept is not just important because it can lead to negative emotions; this mismatch between one’s own self-concept and the evaluation of another can result in the student adjusting their beliefs in their own capability to succeed in future. Albert Bandura (1997, p. 3) described “beliefs in one’s capabilities to organize and execute the course of action required to produce given attainments” as their level of self-efficacy. Evidence attests that self-efficacy is positively related to academic attainment (e.g. Honicke & Broadbent, 2016), and feedback itself is an important influence on raising self-efficacy, if encouraging, and especially if from key figures such as teachers and lecturers (Bandura, 1997). In contrast, negative feedback can lower self-efficacy, leading to “debilitating effects on future performance on similar tasks” (Ilgen & Davis, 2000, p. 562). Another important influence on self-efficacy is the experience of mastery: achieving and doing well in a particular task. This suggests that encouraging students to feel that they can improve through feedback is facilitated where they have experienced low-stakes opportunities to receive formative feedback (van Dinther et al., 2014). It also suggests that the student–teacher relationship is crucial to the formation of positive beliefs in one’s ability that are not detrimentally affected by feedback experiences. Whereas some students, particularly in the early stages of their university experience, crave encouraging comments, it is important to recognise that in some contexts, individuals desire critical feedback (see, for example, Fishbach & Finkelstein, 2012). Just as interactive cover sheets can be used for students to request feedback on specific elements of their work (see Chapter 6), they could also enable students to request comments of a preferred tone.

Power relations in the feedback process

Boud and Molloy (2013) identified as one challenge with feedback the process of students and their work being judged by their teachers, whereby comments perceived to be judgemental, and not made with students’ best interests at heart, do not inspire action on the part of students. This process of casting judgement on students’ work from a position of authority, a judgement that may ultimately influence a student’s overall performance, places teachers in a position of power, with assessment being “a primary location for power relations” (Reynolds & Trehan, 2000, p. 267). The power asymmetry between teacher and student is exacerbated by the fact that the former occupies a dual role: facilitator of learning, whilst also passing judgement on the quality of students’ work (Higgins, Hartley, & Skelton, 2001). The power differential can exacerbate the emotional impact of teachers’ judgements (Carless, 2015a). Of course, this position of expertise can give the marker credibility in the eyes of students, leading them to value the judgements made on their work. For example, a student participating in a qualitative study conducted by Small and Attree (2016, p. 2089) reported that:

… it’s when you get feedback from the likes of, you know, this person who is a University Masters Professor person. You know? … and it’s someone you hold a lot of respect for … and they’re highly qualified to comment. And when you get a good comment from someone like that, you’re like, wow. They said the word good. So, yeah, it really invigorates you to go again to try again for your next blog and try to work out what is required so that you can get those extra marks.

Conversely, the power imbalance between teacher and student can detrimentally affect students’ uptake of feedback, if students feel reluctant to ask for further discussion to understand feedback because they perceive academics to be too busy, with much more important work to do (Small & Attree, 2016).

Furthermore, international students may feel that it is disrespectful to engage in dialogue about feedback, which might be perceived as challenging academic authority (e.g. Tweed & Lehman, 2002). Where emotional and relational factors limit dialogue in the feedback process, this may act as an impediment to a new paradigm approach to feedback in which student involvement is critical. Given that dialogue is such an important part of the feedback process (see Chapter 6), how might we facilitate the development of positive student-teacher relationships that overcome the challenges inherent in this power imbalance?

We’re in this together: building trust and relationships through feedback

Opening up one’s work to evaluation by another person, particularly someone held in a position of esteem or authority, exposes oneself to judgement and, hopefully, constructive criticism. This process has the potential to be particularly problematic if the feedback process is characterised by feedback as ‘telling’ (Sadler, 2010). This model results in the student receiving a unilateral declaration, without any opportunity for dialogue to co-construct the meaning of the teacher’s thoughts about the work.


Building strong relationships within the feedback process is important because it can support students to feel confident to engage in meaningful dialogue with their teachers, through which they can learn more from the feedback by unpacking its meaning and discussing future actions. Students often report that they wish to engage in one-to-one dialogue with their teachers (Blair & McGinty, 2013), but relationships need to be established in order for students to feel confident to do so (Poulos & Mahony, 2008). A student in Sutton and Gill’s (2010) study reported that: “We know we need to approach this tutor to get help but we cannot because we are all scared” (Sutton & Gill, 2010, p. 9). In contrast, where students do feel that they have a strong relationship with their teacher, they describe a sense of agency and volition to engage with and use feedback (Rowe, Fitness, & Wood, 2014).

High quality relationships between students and teachers can also buffer against the negative emotional impact of receiving critical comments on one’s work, and lead to greater acceptance of the developmental advice (e.g. Lizzio & Wilson, 2008). A feedback culture characterised by trust, empathy, and authentic guidance can enable students to overcome inhibiting emotional reactions in response to feedback, resulting in more meaningful uptake of the advice (Carless, 2013). Creating such a culture is not easy, particularly when, as disciplinary experts, it can be difficult for teachers to place themselves in the position of novice students experiencing inculcation into the conventions of an academic discipline (Värlander, 2008). One way to develop a feedback culture involving mutual perspective-taking between teachers and students is through ‘feedback preparation activities’ (Värlander, 2008), whereby students can explore through dialogue the processes of giving and receiving feedback, which also affords staff insight into students’ concerns about receiving feedback. These processes are fundamental to the development of feedback literacy, and the Developing Engagement with Feedback Toolkit (Winstone & Nash, 2016), discussed in Chapter 2, contains activities that serve this function.

A further reason why relationships are important in the feedback process is that the massification of higher education is limiting opportunities for personalised and sustained dialogue with teachers (Nicol, 2010). Placing focus on the relational dimension of feedback has the potential to support students to feel that they are individuals, not just registration numbers or faces within a large cohort. While many universities worldwide have moved towards anonymous marking, such practices can prevent teachers from providing personalised comments to students (Forsythe & Johnson, 2017). It is also evident that the provision of anonymous feedback limits students’ perceptions of their relationship with their teacher and leads them to perceive the feedback as being less useful (Pitt & Winstone, 2018). It is beneficial for students to feel that their teacher genuinely cares for them as an individual. In a series of focus groups exploring students’ experiences of receiving constructive criticism, Fong et al. (2018) reported that students perceive feedback as constructive rather than critical if they perceive that the teacher cares about their work and their progress, and holds authority and expertise. If unnecessarily harsh, however, that authority becomes counterproductive.

A recent meta-analysis of 78 studies by Fong et al. (2019) suggests that if negative feedback is delivered in person, intrinsic motivation is enhanced relative to neutral feedback, whereas negative feedback that is not delivered in person reduces intrinsic motivation. The authors argue that this reflects the fact that, in person, care can be expressed more easily, and the critique is perceived to be less threatening. This important meta-analysis reminds us that the affective and motivational impact of feedback is dependent on the relational features of the feedback environment. As we shall see later in the chapter, the beneficial characteristics of in-person feedback do not necessarily require meeting face-to-face. We now turn to two examples from the literature which demonstrate how the relationship between emotional responses to feedback and uptake of that feedback is influenced by relational factors.

Key examples from the literature

The two examples we have selected from the literature illustrate, through the voices of students, the issues of emotion, power, and self-efficacy that we have explored thus far. The first (Shields, 2015) provides insight into the affective impact of feedback on first-year undergraduate students. The second (Pitt & Norton, 2017) has implications for managing the ‘emotional backwash’ of feedback.

Shields (2015) approached the relational dimension of feedback from the perspective of students’ transition to university, and the processes through which students come to develop a sense of belonging in their new academic environment. She deliberately avoids a deficit approach, in which students are seen as unable to ‘cope’ with negative feedback; instead, the focus is on the role of feedback in fostering ‘belongingness’ and a sense of competence.

To this end, Shields interviewed 24 first-year undergraduate students from a post-1992 university in the UK studying on one of two modules: an optional study skills module available to all Humanities and Social Science students, and a Psychology module. The assessment for the former module involved a portfolio, and for the latter, two research reports. The students were interviewed at the start of their second semester, and were invited to discuss their experiences of the feedback they had received during their first semester at university. Students were invited to bring that feedback to the interview as a stimulus for discussion. This study is of particular value because building relationships in the feedback process is likely to be enhanced by understanding the lived experience and ‘social reality’ of students. Such an approach may have a positive impact on student retention.

The analysis revealed that students can hold quite fragile academic identities, and that the assessment process is a strong influence on their developing identity as a learner. The students revealed that waiting for their first piece of feedback at university is a particularly anxiety-provoking time, as illustrated by this participant:


I think feedback is important if you can get it as soon as possible because you’re already anxious as to how good the work is and the longer it takes to get feedback you start thinking of all sorts of things like maybe I didn’t do it quite well and then you have got other things that you are working on.

(Shields, 2015, p. 618)

If the first feedback that students receive is on a high-stakes piece of summative work, then they may be waiting a long time for some information that might help them to benchmark their work, and the approach they might be taking for other assignments, against expected standards and assessment criteria. When feedback does come, it can create anxiety, meaning that “it takes students a long time to engage with feedback when it has a detrimental impact on their confidence, particularly on their first assignments” (Shields, 2015, p. 620). Thus, here we can see that assessment design is important not just in facilitating opportunities to act upon feedback (see Chapter 5), but also in ensuring that students can gain confidence early on in their programme of study. For this reason, Shields (2015) suggests that it is likely to be highly beneficial for new students to get feedback early on, as a result of completing low-stakes tasks, as “the finality of submitting assignments without any chance to improve or with little sense of being able to evaluate your own assignments is likely to increase anxiety and lessen confidence” (p. 622). This serves to illustrate the value of staged assessment designs, which are discussed in more depth in Chapter 5.

Students in this study also revealed how feedback can influence their self-esteem and associated feelings of competence. Crucially, it is not simply the case that negative feedback can lead to them feeling they lack the necessary academic competence; the difficulties they might experience in understanding what feedback is asking them to do can also diminish their self-esteem, as revealed by a student who, unable to understand the meaning of the marker’s feedback comments, asked “is she saying I’m being stupid?” (p. 619). In supporting students’ uptake of feedback, learner identity is a critical factor.

A striking feature of the findings reported by Shields is the extent to which students experience the power differential between teacher and student. It appears that part of students’ fragile learner identity comes from the power that teachers have in pointing out what is wrong in students’ work, where students find it difficult to see the distinction between themselves ‘being wrong’ and an aspect of their work being ‘wrong’. The ‘red pen’ effect is a further example of the power exerted over students by those marking their work, as described by this student:

They are writing all over my work and it is like mangled up and most of the lecturers use red pen and I don’t know it kind of gets to me if I open it up and it’s covered in red crosses and marks and it’s horrible. It’s like my work is bleeding.

(Shields, 2015, p. 620)


Particularly in the early stages of a university course, the importance of building relationships between markers and students cannot be overstated. As students’ identities as learners influence their engagement with feedback (Shields, 2015), the relational dimension is important in developing new paradigm approaches to feedback.

Our second example extends the work of Shields by shifting the focus from first-year to final-year undergraduates, demonstrating how emotional responses to feedback can influence student uptake at a later stage of the academic journey. Pitt and Norton (2017) explored the experiences of final-year undergraduates when receiving feedback on their work. The study is built on a recognition that the learning potential of feedback can only be realised if students engage with and act upon comments, thus situating this work firmly within a new paradigm approach. The first author interviewed 14 final-year undergraduate students on a Sports Studies programme, using a phenomenographic approach. Students were invited to bring examples of feedback on what they perceived to be a ‘good’ and a ‘bad’ piece of work that they had completed at any stage of their programme. The interviews began by discussing the ‘good’ piece of work, followed by discussion of the ‘bad’ piece of work. In each case, students were invited to summarise the marker’s feedback and to identify the suggested directions for improvement. Next, they were asked to discuss how the feedback made them feel, how they reacted, and how they implemented the feedback.

Pitt and Norton (2017) identified three patterns of emotional response to feedback: motivating positive feedback, motivating negative feedback, and demotivating negative feedback. These categories resonate with the distinction between activating and deactivating emotions in CVT, discussed earlier in the chapter (Pekrun, 2006). There was evidence in Pitt and Norton's analysis of negative emotion being both activating and deactivating, as these student responses serve to illustrate:

Saying I didn't do so well makes me feel bad and spurs me on to wanting to get a better mark next time.

(Pitt & Norton, 2017, p. 504)

If I see a negative comment I blank it out of my mind instead of maybe looking over it and going right, that's what I actually needed to do.

(Pitt & Norton, 2017, p. 504)

In this sense, Pitt and Norton's study is a good example of the complex relationship between emotion and response to feedback.

The influence of feedback on students' competence beliefs and self-efficacy was also evident in Pitt and Norton's findings. The final-year undergraduates in this study showed a similar response to the first-year undergraduates in Shields' (2015)


study, whereby the judgements of markers can be internalised as representing stable features of their own competence, rather than of the work itself; one student expressed that "if I've got bad feedback I think I'm obviously not good at the subject. Basically if the tutor's saying I'm no good at it then obviously I think I'm not" (Pitt & Norton, 2017, p. 506). This illustrates how feedback can have a direct impact on students' self-worth and self-efficacy; however, Pitt and Norton's analysis also revealed that comments can have a positive impact on raising students' self-efficacy, if they communicate a belief that students are very much capable of making the recommended improvements, as illustrated by these student narratives:

The feedback made me realise my weakness but also the fact that with the right preparation I could do it right.

(Pitt & Norton, 2017, p. 508)

It actually made me think 'actually I can do this', instead of thinking 'I did all right'. I need good support, someone to tell me 'yes you can do it'. They obviously believe that I can do it, which is kind of pleasing for me.

(Pitt & Norton, 2017, p. 507)

Pitt and Norton (2017) introduce the concept of 'emotional backwash' to represent students' affective responses to feedback. Their analysis illustrates that feedback can be motivating or demotivating, and that markers hold the power to influence students' self-efficacy and competence beliefs in both positive and negative ways. It is likely that facial expressions and verbal cues can enhance the communication of markers' perceptions; such cues are likely to be harder to convey through written feedback. We now turn to examine a case study of practice where these interpersonal factors enhance the relational dimension of feedback.

Box 9.1 Key research findings

• The relationship between emotional responses to feedback and subsequent uptake is complex (Pekrun, 2006).

• The power imbalance between teachers and students can lead to a sense of distance, where students are reluctant to engage in dialogue with teachers (Small & Attree, 2016).

• It is important to build relationships between teachers and students so that students feel comfortable approaching teachers for guidance and further feedback (Sutton & Gill, 2010).

• Receiving feedback in person may minimise the negative effects of critical feedback on intrinsic motivation (Fong et al., 2019).

• Students can find it difficult to differentiate between critical feedback on their work and critical feedback on themselves as learners (Pitt & Norton, 2017; Shields, 2015).

• If students do not understand how to enact feedback, this can have a negative impact on their self-esteem and self-efficacy. Conversely, where markers express a belief that students can improve, students' self-efficacy can grow (Shields, 2015).

The case: Putting a face to the name through video feedback

Context

This case focuses on the work of Dr Emma Mayhew, an Associate Professor in the Department of Politics and International Relations at the University of Reading, UK. In her third-year undergraduate British Foreign and Defence Policy module, Emma introduced video/screencast feedback to two different cohorts: those taking the module in 2013–2014 and those taking the module in 2014–2015. Each year around 30 students were enrolled on the module. Emma recognises the powerful learning potential of feedback, viewing timely, constructive, and motivating feedback as central to the facilitation of student engagement, learning, and attainment.

The feedback design

The assessment for the module involved students writing two 3,000–3,500-word essays. Emma wanted to personalise the feedback process by having a more direct link with her students than could be provided by text-based feedback, and wanted her students to get a better sense of the feedback process, and how she was constructing her feedback to them. Emma decided to introduce video feedback on students' first essays each year. In order to capture the video, she used Camtasia, an inexpensive piece of screen capture software. The student saw their essay on the left-hand side of the screen, and Emma's face on the right-hand side of the screen. The student followed Emma through the process of providing the feedback as she scrolled down. Each recording lasted for between five and ten minutes, and was saved as a simple MP4 file which was released to students via the LMS.

Student response

Emma used an anonymous questionnaire to explore the student response to video feedback. She asked students to respond to a series of statements using a simple five-point Likert scale with additional open-ended questions. In total, 50 out of a possible 60 students completed the questionnaire, with their responses providing insight into their experience of the process (see Box 9.2).


Box 9.2 Student responses to video feedback

• 90% said that they preferred video to written feedback.

• 81% said that they would prefer video feedback rather than written feedback on their next essays.

• 100% thought that they should receive at least one piece of video feedback at university.

• 72% of students reported that being able to see the marker's face made the feedback feel more personal.

• 86% said that the video feedback helped them to clarify areas they did not understand, and 84% said that there was less scope for misunderstanding in comparison to written feedback.

• 88% of students felt that they received more detailed comments on their work than they might have done with written feedback. Video feedback contained an average of 1,360 words in a typical eight-minute video, three or four times more than students would typically receive in a word-processed document.

• 78% of students felt that video feedback prompted them to look back over the subject matter more than written feedback.

• 87% felt that they would perform better in subsequent work following video feedback, in comparison to written feedback.

While these findings are promising, it is not clear whether students' beliefs about the impact of video feedback actually translated into behaviour. Such insight could have been achieved by building into the process an opportunity for students to respond to the feedback.

Enabling factors

Emma worked in a department that fosters innovation in learning and teaching, with particular encouragement for innovations in the electronic management of assessment. Thus, the departmental feedback culture was supportive of experimentation, favouring innovation rather than adherence to the status quo. Emma was already familiar with the use of the technology supporting screen capture. The Camtasia platform is simple to use and has been widely deployed for the delivery of screencast feedback.

Emma's values were also an enabling factor in this case; she recognised that the provision of screencast feedback did not necessarily reduce the amount of time spent marking students' work, yet saw that this was still a highly beneficial strategy in terms of building a stronger relationship with her students in the marking process, and providing clearer and richer feedback information. In this sense, the practice neither increases nor decreases workload, but results in a more satisfying process. Emma also made the process more personal through her creativity; when


recording her feedback, she sat in front of her Christmas tree during December marking and her Easter display during March marking. Emma decided that she would not attempt to create perfect video files. In one instance, her cat jumped up in front of the camera, and Emma resisted the temptation to edit interruptions to the dialogue. Students valued a human approach over a tightly-controlled formal delivery of feedback.

Challenges

Not all students show a preference for video feedback; some of Emma's students asked for written feedback in addition to video feedback so that they could more easily refer back to it. Emma suggested that they could transcribe her words if they felt that this was important, because that process in itself supports greater engagement with the content of the feedback. Emma's response is important in emphasising that it is students, rather than their teachers, who should do more to realise the impact of feedback.

There are also challenges relating to the process of video feedback. Emma cautions that, given the natural dialogic feel of audio feedback, it is easy to give too much! Indeed, less can be more in feedback (Boud & Molloy, 2013), and students might be more likely to revisit shorter, more focused comments. The moderation process also requires internal and external moderators to watch the feedback videos, so mechanisms need to be in place to ensure that they have access to the video files.

Relationship to the literature

Emma's experience, and that of her students, aligns with other reports in the literature whereby students experience video feedback to be more personalised than written feedback (see Chapter 4). There is also evidence that being able to demonstrate empathy and respect through facial expressions and tone of voice can help mitigate the common power imbalance inherent to the marker–student relationship (Ryan & Henderson, 2018). Reducing the perceived distance between assessor and student is likely to facilitate more meaningful communication in feedback exchanges, because assessment relationships are commonly characterised by power asymmetry (Värlander, 2008). The nuanced communication afforded by facial expressions and hand gestures, for example, may convey to students their teachers' investment in their learning (Mahoney, Macfarlane, & Ajjawi, 2019). One of the aspects that students really liked was Emma's use of nonverbal communication and the way she used her voice to soften criticism. Video feedback affords this kind of communication in ways that text-based feedback does not.

However, while we can speculate that such affordances of video feedback might positively influence students' uptake of feedback, it is uncommon for this practice to actually go one step further and build into the process opportunities for students to respond to the feedback (Mahoney, Macfarlane, & Ajjawi, 2019). This


leads us to question the extent to which audiovisual approaches to feedback embody new paradigm principles, or whether they are mainly rooted in old paradigm approaches, due to the emphasis on the delivery of feedback (see Pitt & Winstone, 2019). As discussed in Chapter 4, if audiovisual approaches to feedback merely replicate the transmission of comments through a different medium, then they remain aligned with old paradigm principles. While students often perceive audiovisual feedback to feel more dialogic than written feedback, this dialogue is somewhat illusory (Mahoney, Macfarlane, & Ajjawi, 2019), unless students have the opportunity to respond to the feedback. It is relatively easy to facilitate students' response to the feedback; Henderson and Phillips (2015) conclude each video feedback recording with a direct invitation to continue the feedback exchange with the marker, and, as discussed in Chapter 4, screencast technology could be utilised for students to submit a recording of their response to the feedback (Fernández-Toro & Furnborough, 2014).

Despite the presence of some old paradigm features, this does not mean that audiovisual feedback is not effective, nor that it cannot concurrently embody new paradigm principles. By adopting a relational approach, Emma is able to minimise the power asymmetry between herself and her students (Värlander, 2008), and by pinpointing within students' work where and how they can improve, this approach is likely to facilitate student uptake. This is where Emma's approach illustrates core features of new paradigm approaches. Managing affect in the feedback process is an important component of student feedback literacy (Carless & Boud, 2018), and feedback literate teachers will be aware of the potential impact of their students' emotional responses to feedback. By recognising and seeking to overcome these challenges, Emma is demonstrating her own feedback literacy as a teacher.

Significance of this practice

This case is a good example of how individual students can experience a stronger connection to the marker through quite a simple change to the feedback process. The response of Emma's students demonstrates that as well as preferring video feedback, they reported that this practice would facilitate greater uptake of feedback in comparison to their experience of written feedback. One potential reason for this belief is that the video format leads students to perceive markers to have a genuine interest in supporting their improvement. This creates the conditions for the development of a strong 'educational alliance', which further facilitates a belief that they are more likely to engage and to partake in further dialogue (Telio, Regehr, & Ajjawi, 2016). A strong and authentic educational alliance is indicative of a feedback culture characterised by trust and mutual value placed on students' development. Emma is also frank in sharing that this approach did not necessarily save her any time in the assessment process. This is an important reminder that innovation in feedback processes is not always about saving time, but about seeking to repurpose and reinvest time to make the process more meaningful and impactful for teachers and students.


Emma's approach has opened up a critical discussion about the value of old and new paradigm approaches to feedback. While the delivery of comments through audiovisual media could be seen as embodying old paradigm features, this does not mean that the practice does not have promise. We have seen how students value this approach and perceive that their uptake of feedback will be enhanced, and with minimal adjustment, student response could be built into the process. By gaining a more nuanced understanding of teachers' comments on their work, students are likely to develop their feedback literacy. Furthermore, by removing many of the barriers to the use of feedback that stem from power differentials, this practice is also likely to enable students to better manage affect in the feedback process, also a key dimension of student feedback literacy (Carless & Boud, 2018).

Box 9.3 Implications for practice

• Supporting students' uptake of feedback is likely to benefit from building strong relationships between students and their teachers, so that students feel comfortable engaging in further dialogue about their work.

• In the early stages of their programmes, low-stakes tasks with feedback are likely to give students more confidence in their ability to meet degree-level standards in their work.

• Students and teachers are likely to benefit from activities that prepare students to give and receive feedback, and to manage emotional reactions to the feedback process.

• Improving the level of personalisation in the feedback process does not necessarily require feedback to be given face-to-face. Using technology whereby students can see the marker's face, or hear their voice, can lead to a greater sense of personalisation.

• Markers can raise students' self-efficacy to improve by framing comments in a way that communicates their belief that improvement is something the student can achieve.

• Teachers can model to students how to handle critical feedback and emotional responses to feedback, as part of supporting students to manage affect in the feedback process.

Conclusion

Very few of us would claim that we enjoy receiving critical feedback. We can often find ourselves feeling defensive in response to others' critique, and often try to protect our own self-esteem as a result. Our students are also at the mercy of their emotions during feedback processes. In most cases, they will have submitted work that they are proud of, and as a result, the critical judgements passed by their teachers can often be unexpected and can lead to students feeling anxious,


frustrated, despondent, and even angry. In this chapter, we have discussed how the relationship between emotion and response to feedback is a complex one; it is not simply the case that negative emotions lead to resistance to taking action in response to feedback. In fact, it may well be that in some cases, experiencing negative affect in response to feedback leads to stronger motivation to act than experiencing positive affect (Pekrun, 2006). We have also seen that feedback can have a real impact upon students' sense of competence and identity as a learner.

If, as part of a new paradigm feedback approach, we wish to provide an environment that facilitates students' uptake of feedback, then we cannot ignore the motivational, emotional, and interpersonal dimensions of the feedback culture. Teachers occupy a position of power over students, holding students' academic performance and progress in their hands. Perhaps more importantly, students can internalise the judgements of their teachers, and can often see comments on their work as comments on themselves as learners. However, central to a new paradigm approach to feedback is facilitating student advancement; thus, sometimes feedback needs to be frank and critical, as well as sensitive to students' likely emotional responses. Indeed, attempts to avoid being too critical in feedback exchanges through the use of language, sometimes referred to as 'hedging' (Ginsburg et al., 2016), can obscure the message of the feedback, thus impeding student uptake.

A new paradigm feedback culture requires us to invest effort in ensuring that students and teachers develop relationships characterised by a willingness to engage in dialogue. Students are often portrayed as being primarily interested in grades rather than feedback. While this may be true, it is often the words of their teachers that have a stronger and more lasting effect on students' self-esteem than a numerical or alphabetic grade. If we can motivate students by conveying our interest and belief in their improvement through ongoing dialogue, they may well engage more meaningfully with feedback processes.

Box 9.4 Key resources

• A summary of the common emotional and defensive reactions to feedback: http://www.bbc.com/future/story/20170308-why-even-the-best-feedback-can-bring-out-the-worst-in-us

• How staff-student relationships might influence engagement with critical feedback: http://www.learningscientists.org/blog/2016/11/1-1

• How neuroscience can help us give and receive critical feedback – a post from Monash University: https://www2.monash.edu/impact/articles/how-neuroscience-can-help-us-give-and-receive-critical-feedback/

• Developing assessment and feedback processes in partnership with students: a case study from University College London: https://www.ucl.ac.uk/teaching-learning/case-studies/2018/feb/how-ucl-department-improved-assessment-and-feedback-partnership-students


Box 9.5 Questions for reflection and debate

• Reflect on a recent experience of receiving feedback, perhaps as part of the peer review process, or from a teaching evaluation. What emotions did you experience, and in this situation, were they 'activating' or 'deactivating' in their effect?

• Take a piece of feedback you have received and look at the language used by the feedback-giver. What elements of their feedback make you feel uncomfortable? How could the comments be reframed?

• How can you empower your students to use emotions to support positive engagement with feedback?

• How could you reduce the power asymmetry in your feedback exchanges with students?

• How can teachers provide honest, constructive feedback without risking upsetting students? Is there a risk that supportive feedback becomes anodyne? How are these tensions managed? Is greater partnership between teachers and students a possible way forward?
