
This is an Accepted Manuscript of an article published by Palgrave in Higher Education Policy, 2014, 27, pp. 323-340, first published online on 27 August 2013, available at: http://www.palgrave-journals.com/hep/journal/vaop/ncurrent/full/hep201327a.html

Student assessment in Portugal: Academic practice and Bologna policy

Cristina Sin and Maria Manatos, CIPES (Centre for Research in Higher Education Policies), Rua 1º Dezembro 399, 4450-227 Matosinhos, Portugal

Correspondence: [email protected]

Abstract

This paper investigates institutional policies and academic practices of student assessment in four Portuguese higher education institutions (HEIs) in the wake of European policy developments driven by the Bologna Process. Specifically, it examines the correspondence between European policy recommendations related to student assessment (the promotion of student-centred learning by the Bologna Process and the European quality assurance standard and guidelines on student assessment) and actual assessment procedures in the investigated Portuguese HEIs. It concludes that although student-centred methodologies have started to make inroads, it does not emerge clearly how far changes (and practices) have been driven by recent European policy. Another notable aspect is that despite apparent institutional compliance with national or European orientations meant to improve the student experience of assessment (as reflected in policy documents), academic practices and students’ experiences sometimes tell a story of resistance and enduring academic beliefs and traditions.

Introduction

The Bologna Declaration was signed in 1999 by the higher education ministers of 29 European countries, officially launching the Bologna Process, whose ultimate aim was to establish the European Higher Education Area (EHEA) by 2010. Bologna’s endeavours to generate more synergy between European higher education systems through the pursuit of a wide range of objectives have resulted in transformations such as the reorganisation of degree structures, new qualifications, quality assurance reforms, and an increased emphasis on lifelong learning. In later years, a concern with the substance (rather than the structure) of education has become visible in Bologna policy, through a growing emphasis on a new pedagogic approach and the concept of student-centred learning. Whereas the Trends III report (Reichert and Tauch, 2003) referred only once to student-centred learning, the 2010 report discussed the concept extensively and described the shift to student-centred learning as the ultimate measure of the success of the Bologna reforms (Sursock and Smidt, 2010, 32). The latest Bologna Communiqué defined as a priority for national systems, in cooperation with HEIs, to ‘establish conditions that foster student-centred learning, innovative teaching methods and a supportive and inspiring working and learning environment’ (Bucharest Communiqué, 2012, 5). Sursock and Smidt (2010) list a set of characteristics of student-centred learning, and practices occurring in European HEIs, which they claim to have been driven by the Bologna Process. Some of these impinge on assessment: a shift in focus from the teacher and what is taught to the learner and what is learnt, or on learning outcomes; concern with critical thinking and deeper understanding rather than with knowledge transfer; and formative assessment and continuous feedback rather than summative assessment (Sursock and Smidt, 2010, 31-32).

In addition, the European Standards and Guidelines (ESG) for Quality Assurance, also developed in the context of the Bologna Process, include one standard on student assessment (Standard 1.3). This states that ‘students should be assessed using published criteria, regulations and procedures which are applied consistently.’ Assessment is described as ‘one of the most important elements of higher education’, since its outcomes ‘have a profound effect on students’ future careers’. Among other things, the guidelines recommend that student assessment procedures should: be designed to measure the achievement of learning outcomes; be appropriate for their objectives; have clear and published criteria for marking; not rely on the judgement of a single examiner, where possible; and be accurately defined, regulated and communicated to students (ENQA, 2005, 16-17).

However, Bologna policy travels a tortuous implementation path. Three particular features of the Bologna Process combine to make it unlikely for intentions at European level to be faithfully translated into enacted policy, that is, into ground-floor practice in the institutions and countries affected. First, the non-statutory character of the Bologna Process gives countries and institutions generous leeway regarding the depth and breadth of uptake and embedding. Second, the diversity of actors involved in its enactment – the academic community playing a key role – renders the process ‘bottom-heavy’ (Cerych and Sabatier, 1986) and creates an effect of diffusion of authority throughout the system. Third, Bologna policy rarely goes beyond statements of intent, giving little indication of concrete means for the operationalisation of the proffered intentions. The vagueness of policy texts leaves it up to individual actors to decide on practical actions, largely determined by the characteristics of the local context. To cite Stephen Ball, ‘the more ideologically abstract any policy is, the more distant in conception from practice, the less likely it is to be accommodated in unmediated form into the context of practice’ (Ball, 1994, 19). This resonates greatly with Bologna policy-making, which pursues broad general objectives meant to establish the EHEA; however, countries choose how, when, and in which order they address the objectives depending on national circumstances and priorities. Furthermore, when moving from national to institutional level, a variation in responses to policy is encountered, too. How individual institutions and departments address Bologna objectives related to student-centred learning and assessment, and the practices they develop, are likely to vary from setting to setting.

In Portugal, a new student-centred pedagogic paradigm permeated the political discourse during the implementation of the Bologna Process. A new policy concern with the outcomes of student learning was evident, with student competences receiving extensive attention in official documents. For instance, Decree-Law 74/2006, which aligned Portuguese degrees with the Bologna cycles, repeatedly highlighted the centrality of competences in the new study architecture and stated that ‘a core issue in the Bologna Process is the transition from a passive education paradigm based on the acquisition of knowledge to a model based on the development of competences’ (MCTES, 2006, 6). However, how this was to manifest itself in the pedagogical process was not spelled out, nor was training offered in this area. This led to the emergence of uncoordinated, scattered practices which academics judged to be student-centred (Sin, 2012).

Specifically on assessment, Decree-Law 42/2005, passed in the aftermath of the Bologna Process, contains some provisions on assessment procedures in higher education. Alongside establishing the adoption of the European credit system (ECTS), it laid down some generic aspects: e.g. the definition of the marking scale, the pass mark of 10, and the comparability between the Portuguese quantitative scale and the European scale (A, B, C, D, E). Beyond this, the principle of pedagogical autonomy enshrined in Law 38/2007 granted HEIs decision-making power regarding their student assessment procedures and methods. However, according to regulations issued by the Agency for Assessment and Accreditation of Higher Education, legally established in 2007, the accreditation of study programmes requires these to demonstrate coherence between learning outcomes and the methodologies of teaching, learning and assessment. This requirement thus echoes Biggs’ concept of constructive alignment, according to which all the pedagogic elements – curriculum and intended outcomes, teaching methods, assessment tasks – are in tune with each other, so that learners construct their own learning through relevant activities driven by such a pedagogic approach (Biggs, 2003).

Against this backdrop, the paper investigates institutional and academic practices of student assessment in four Portuguese HEIs in the wake of European policy developments driven by the Bologna Process. Specifically, it examines the correspondence between Bologna policy with a bearing on student assessment (the promotion of student-centred learning in general and, particularly, the European standard on student assessment) and the assessment procedures observed in the investigated Portuguese HEIs. It aims to contribute to the still sparse research into ground-floor academic practices and the student experience following recent Bologna policy, thus enriching the thin evidence base on institutional and academic enactment.

Approach

Theoretical considerations

The research question this paper pursues is how Bologna policy advocating student-centred learning and assessment has trickled down to national, institutional and practitioner levels in Portugal, impacting on student assessment. It therefore appeared meaningful to approach the study through the lens of policy theories (Ball, 1994; Cerych and Sabatier, 1986; Ozga, 2000; Trowler, 2002). A focus on actors’ interpretation and enactment of policy – as opposed to a ‘rational-purposive’ model (Trowler, 2002) which assumes linear, unproblematic implementation – emerged as particularly relevant, in light of the bottom-heavy nature of the field of higher education referred to earlier. The analysis of institutional uptake of policy, academic practices and student experience required the consideration of actors’ responses to priorities defined at higher policy levels, as explained by Gornitzka, Kyvik and Stensaker (2005) and Trowler, Saunders and Knight (2004).

The ‘implementation staircase’ proposed by Reynolds and Saunders (1987) was chosen as an analytical lens because it conveys the idea of the situated nature of the different actors’ experience and enactment of policy, which, in the case of the Bologna Process, could look as below.

Bologna Process → National policies → Institutional policies → Academic practice → Student experience

Figure 1: The implementation staircase in Bologna policy enactment

The value of the implementation staircase resides in its metaphorical illustration of the evolution of policy as it travels from setting to setting and from one set of actors to a different one. Actors’ location, roles and interests shape reception and interpretation. The implementation staircase makes agency and bottom-up influences explicit, by portraying all actors involved in enactment as active agents who shape policy according to their own interpretations, interests and resources, determined by their specific contextual circumstances. Reynolds and Saunders’ implementation staircase therefore shows how policy undergoes a process of adaptation and translation so that it becomes meaningful to the people who have to deal with it in their day-to-day jobs. The following quote highlights the variation between the planned, enacted and constructed reforms at the level of top policy-makers, academics and, respectively, students (although the term ‘implementation gap’ would fit better in an engineering, top-down implementation model than in an understanding of implementation as an actor-driven process):

Any idea that (...) policy will look the same at the bottom of the staircase as it looked at the top would be naïve. Instead we find implementation gaps between the changes that are planned in policy, the changes that are enacted in practice, and the changes as they are constructed in the understandings of the students whose learning they were intended to affect. There are differences, invariably, between planned, enacted and constructed changes. (Bamber et al., 2009, 12-13)

The staircase metaphor, therefore, highlights the likely variation in actors’ responses and experiences. The task of analysis becomes to uncover these differences, shaped by the unique and situated experience of each actor group. Additionally, the staircase metaphor suggests that policy messages travel up and down the staircase, leading to an ongoing re-creation of policy – hence each group acts as both a policy receiver and a policy agent, remaking policy in the process of enactment. It is therefore suggested that the elaboration, or uptake, of national or European policy is not necessarily synonymous with intended ground-floor change. The transposition of national policy into institutional regulations is likely to differ from HEI to HEI; likewise, the formulation of institutional policies provides no guarantee of embedding and change in academic practice. As it is the latter that impacts directly on the student experience, the concept of ‘teaching and learning regimes’ has also emerged as an appropriate theoretical lens to understand ground-floor practice. It is defined as a ‘shorthand term for a constellation of rules, assumptions, practices and relationships related to teaching and learning issues’ (Trowler and Cooper, 2002, 222). Although expressed in individual behaviour and assumptions, teaching and learning regimes are ‘primarily socially constructed and located and so are relatively enduring.’ Departments are, according to Trowler and Cooper, the primary location where regimes develop and are transmitted. Here, ‘academics engage together on tasks over the long term’ and ‘in interaction both construct and enact culture’ (Trowler and Cooper, 2002, 222).

Data collection and analysis

The sample for this study comprised four Portuguese HEIs illustrative of the diversity of the country’s public higher education landscape (university/polytechnic sector, geography and size). Out of concern with potential disciplinary differences, two programmes per institution were chosen, as shown in Table 1, belonging to two major disciplinary areas, in order to represent the soft and hard categories outlined by Becher and Trowler (2001). For simplicity, these are generically referred to as Engineering and Arts throughout the paper. The choice of disciplines was limited by the necessity of their presence in all the selected HEIs. The research does not lay claim to the representativeness of the selected institutions or programmes, and the findings are limited to this sample.

HEI A – Small university (interior)
    School of Sciences and Technology: Civil Engineering
    School of Arts (Visual Arts and Design Department): Design
HEI B – Big university (littoral)
    Faculty of Engineering: Civil Engineering
    Faculty of Fine Arts: Communication Design
HEI C – Big polytechnic (interior)
    School of Technology and Management: Civil Engineering
    School of Education: Arts and Design
HEI D – Small polytechnic (interior)
    School of Technology: Civil Engineering; Plastic Arts – Painting and Inter-Media

Table 1: Sample of HEIs, faculty/schools and study programmes

The data was mined from two sources. First, an analysis was undertaken of official institutional documents (i.e. pedagogic, academic and assessment rules and regulations) publicly available online, including both institution-wide and faculty/school-level regulations. The analysis focused on the following topics: main policy provisions; criteria for marking, absence and class attendance; stipulations regarding the communication of assessment types, methods, criteria etc. to students; and the number of examiners (as a guarantee of impartiality). Second, a total of 21 semi-structured interviews were conducted, targeting two groups in each of the four HEIs: first, institutional and faculty/school top representatives (Rector/President or Vice-Rector/Vice-President; Dean or equivalent; and representatives of quality units at institutional and faculty level) and, second, study programme directors. These participants were chosen because of their role in policy elaboration at institutional, departmental and programme level and, especially in the case of the first group, because of their position of mediation between external and internal demands and priorities. Additionally, 16 focus groups – two for each study programme, one with academics and one with students – with around six participants each were conducted. The research participants in these target groups were recruited by the study programme directors themselves, following our expression of interest in interviewing teaching staff and students in the study programme they were leading. Interviews and focus groups explored themes centred on assessment practices, drawn mainly from the guidelines of ESG Standard 1.3. Specifically, the following items were discussed: participants’ awareness of the existence of an institutional assessment policy and knowledge of its provisions; if and how this information was shared with students; the development of assessment methods and their fitness-for-purpose (formative, summative etc.); the extent to which assessment evaluates learning outcomes; assessment methods; compliance with official rules and regulations; and recent changes in assessment methods and the drivers for these. Interviews and focus group discussions were fully transcribed and subjected to content analysis. An analytical framework integrating the above-mentioned themes (i.e. the topics covered in interviews and focus groups) guided the coding of the interview and focus group data. Throughout the analysis of both policy texts and interview data, their resonance with European policy recommendations on the student experience was an underlying concern.

Institutional policies, academic practice and student experience

This section is divided into two parts. The first presents the findings on institutional policies for student assessment, while the second focuses on the assessment practices of the interviewed academics and the students’ experience of these.

Institutional policies on student assessment

Despite sparse national regulations, as mentioned earlier, a certain degree of policy convergence across institutions has been noted. All four HEIs have institutional policies either specifically on assessment or on academic procedures which include assessment, thus reflecting the Standard 1.3 recommendation of ‘published criteria, regulations and procedures’ for assessment. At the same time, some faculties/schools have their own academic/assessment policies which bring an additional layer of detail to institution-wide regulations, tailoring these to the disciplines in question. Sometimes departments, too, have their own regulations, as in the case of the Department of Visual Arts and Design in HEI A, meant to give coherence to assessment across courses.

As a general rule, policies contain broad regulations on possible assessment types and methods, assessment procedures, examination periods, attendance requirements, marking, and student rights (access to exam papers, complaints and appeals). Identical exam periods, regulated in detail in all institutional policies, apply in the four HEIs: the normal period (for final exams), the appeal period (recurso) for the re-examination of students who have failed the course or who wish to improve their marks, and the special period for the assessment of special-regime students and of students who lack only a limited number of credits to complete the degree. Institutional policies unanimously acknowledge the circumstances of special-regime students (i.e. working students, students in the army, student representatives on institutional governing bodies, etc.) and provide exceptions for them. This consideration appears to have emerged following the legislative changes which widened higher education participation to non-traditional student populations. Decree-Law 64/2006 approved a new path to higher education for students older than 23 who do not hold the standard entry requirements, which generated a significant increase in their numbers. Especially with respect to working students – the group most commonly referred to in policies – special conditions apply to attendance and assessment regimes. However, some interviewees, especially students, claimed that academics sometimes appeared reluctant to acknowledge special-regime students’ needs.

Policies either give clear information on attendance requirements or ask programmes to establish their own criteria. All the programmes considered here have clear attendance requirements, with compulsory attendance ranging between two thirds and eighty per cent of classes. A minimum required attendance functions as a prerequisite for assessment, since students can otherwise be denied the right to be assessed. Attendance can also influence the final mark for a course, as in the Arts programme in HEI A, where students receive a mark for their attendance record. As noted above, the circumstances of special-regime students are taken into account, as they are exempt from such attendance requirements.

Concerning the communication of assessment information to students, explicitly recommended by Standard 1.3, all academic or assessment policies clearly stipulate teaching staff’s obligation to inform students of assessment procedures, methods, criteria, marking etc. Regulations usually require the inclusion of this information in course descriptions, which all policies ask to be made available at the beginning of the semester or of the course. Some call explicitly for online publication, whereas others state only that the information must reach students at the outset.

The number of examiners, also mentioned in Standard 1.3, could represent a factor of impartiality in assessment. In this respect, only one institutional policy (HEI A) and one faculty policy (Engineering in HEI B) require the constitution of course assessment panels, consisting of a minimum of three and two teaching staff, respectively. Both these institutions belong to the university sector. In the other two HEIs, institutional regulations make no reference to the number of examiners. However, regulations for Arts in HEI C stipulate that although the course lecturer is responsible for assessment, marks must be certified by the head of department and another member of the teaching staff. In the other instances, interviews confirmed that responsibility for assessment lies solely with the course lecturer. When a course is jointly taught by several academics, assessment will usually be shared, too.

Another noticeable tendency is the growing emphasis on continuous assessment: in three of the four HEIs, policies explicitly favour this form of assessment, described as student-centred in European documents. Finally, an aspect observed across the board is the absence of explicit qualitative assessment criteria corresponding to the different levels of the Portuguese 0 to 20 marking scale (i.e. general descriptors of the quality levels of student work corresponding to specific marks on the scale), thus going against the ESG recommendation of ‘clear and published criteria for marking’. The only indication given in policies is the pass mark of 10, as stipulated in Decree-Law 42/2005.

Academic practices related to assessment

This section draws primarily, but not exclusively, on the interviews and the focus group discussions. It presents the academics’ and students’ perceptions regarding the assessment methodologies employed in the selected HEIs and degrees. It also addresses recent changes in assessment procedures.

In all four institutions, pedagogic autonomy grants academics the freedom to design assessment methods as they consider appropriate. According to both Engineering and Arts academics across the four surveyed HEIs, when they choose assessment methods (exams, various tests or student tasks throughout the semester, projects etc.) they bear in mind course objectives and typology (i.e. theoretical, practical, project-based etc.), thus suggesting that alignment is sought between objectives and methods, as recommended in Standard 1.3. However, in the Engineering programmes of the HEIs from the university sector, academics felt that their pedagogic autonomy was overridden by institutional regulations which favoured continuous assessment over final exams – which might suggest a preference among academics for final exams. For example, in HEI A, for a course to have final assessment only, the academic in charge must justify this choice. In HEI B, regulations prescribe that at least one course per semester must have continuous assessment.

Continuous assessment, whose benefits are normally associated with formative feedback, was sometimes summative. For instance, in HEI B, Engineering academics’ understanding of continuous assessment reflected mainly a preoccupation with inducing a regular pattern of student learning along the semester rather than with its potentially formative function. In practice, this often translated into summative mini-tests during the teaching period, with little feedback provided to students. Given their perceived urgency, academics considered such summative mini-tests disruptive for student attendance and learning in other courses. Continuous assessment, a practice supposed to benefit student learning, thus appeared counterproductive, tiring and stressful because of the way it was implemented. In one academic’s perception:

Continuous assessment and the existence of different evaluations make sense if we are talking about annual courses, to avoid the accumulation of subject matter. However, it does not make sense in semi-annual courses and I believe it can be counterproductive to the learning process. Several evaluations during a semester can impact negatively the normal functioning of the courses and cause a lot of stress in the students.

This provides a clear example of how policy intentions can be re-interpreted in ground-floor enactment: a different understanding of continuous assessment by academics, combined with a preoccupation with compliance, could end up being more detrimental than beneficial for student learning.

By contrast, in the Arts programmes in all four HEIs academics appeared primarily focused on the formative purpose of continuous assessment, which was also the most widespread assessment method. Academics and students alike discussed the practical nature of arts courses, in which students performed practical tasks and developed projects. The function of continuous assessment during the course was to follow students’ performance and progress, to give feedback and to guide further work. The final assessment was then summative: the evaluation of the final piece of work. In HEI C, for example, practical tasks were the most important part of student assessment. Academics also emphasised the importance of students’ self-assessment and self-reflection on objectives and progress:

We want students to be aware of the difficulties of the course, the work they have to do and the main goals they have to achieve. And since the practical component is very strong in the programme, practical tasks are the most important part of student assessment.

Students also seemed to appreciate the value of the practical component, stating that, except for some courses, they learnt more through practice. In one student’s words:

We prefer practical works to tests or exams, because we learn more with those works and with the practice.

As opposed to Arts, where continuous assessment appeared as the main assessment method, in the analysed Engineering programmes continuous assessment and final assessment (as an exam) seemed to coexist to a greater extent. The degree of reliance on final exams varied as follows. In HEI A, both students and academics spoke of a variety of assessment methods (exams, various tests throughout the semester, projects), including the possibility for students sometimes to choose among these. Continuous assessment was described as predominant. It was perceived by academics and students alike as added value, since the courses were mainly practical and formative assessment seemed most fit-for-purpose. Contrary to the frequency of continuous assessment in HEI A, Engineering lecturers and students in HEI B reported that final exams were the most commonly employed assessment method, with the exception of one course per semester which had to be assessed continuously, as per regulations. However, as discussed earlier, in this case continuous assessment remained summative. A senior academic criticised the tendency to overvalue continuous assessment and advocated a hybrid methodology, defending the final exam and its specific purpose of preparing students to act under more stressful or tense circumstances. In HEI C, students emphasised two assessment methods: tasks during the semester and the final exam. They complained about the undervaluation of the tasks, which they believed were more beneficial to learning than exams. In HEI D, practical tasks were generally the assessment method in practical courses, while final exams were employed in foundation courses. The close relationship and informal dialogue between academics and students stood out as a positive characteristic in participants’ accounts, since they enabled flexibility and adjustment of assessment methodologies. As a student declared:

We talk a lot with them [the professors] and ... if we have lot of works or tests they try to adjust

the assessment methods and try to help us.

The use of learning outcomes was another issue under consideration, given the significance Bologna policy attaches to them (Bucharest Communiqué, 2012; ENQA, 2005; Leuven Communiqué, 2009; London Communiqué, 2007), deeming them to be central drivers of

student-centred teaching, learning and assessment methods. The first observation emerging

from this study, both in official institutional documents and during the interviews, was the lack

of a consistent Portuguese terminology for learning outcomes. Several terms which appeared

to refer to learning outcomes were used interchangeably: competences (competências),

learning outcomes (resultados de aprendizagem) or objectives (objectivos).

Thus the academic regulations in HEI A state that assessment aims to quantify the mastery of

student competences. The assessment regulations in HEI B require that course descriptions

should state objectives and learning outcomes. In HEI C, Engineering pedagogic regulations

require that the envisaged learning outcomes and student competences should be stated in

course descriptions, while assessment is described as a process which determines the extent

to which these have been reached. The Arts policy highlights assessment as having a central

role in the promotion of competences and the development of appropriate methodologies. HEI

D regulations state that assessment tests acquired knowledge, taking into account the

objectives defined in course descriptions. However, it is unclear whether these terms are

consistently employed to mean learning outcomes or, indeed, what they refer to.

A second observation is related to the dichotomy between official institutional policies and the

actual understanding and practice related to learning outcomes as revealed by focus groups.

As seen above, in all the analysed HEIs assessment or academic regulations make reference to

competences, learning outcomes or objectives. Institutional representatives at high and

middle levels also demonstrated understanding of learning outcomes and their pedagogic

function. However, a majority of the lay academics in focus groups – especially in the area of

Engineering – did not seem to have grasped the concept. In answering the question about the

link between assessment and learning outcomes, they led the discussion elsewhere. This

occurred despite the fact that they had drafted course descriptions where they identified

learning outcomes. Indeed, some central faculty representatives in HEI B confirmed that

despite institutional efforts to introduce a learning outcomes approach, these were still little understood or implemented and had not yet triggered changes in pedagogic methods. One

academic’s statement reinforced this perception:

I think that sometimes we seem amateurs on student assessment. I believe that assessment

should be designed before the course begins, because the course objectives and the

competences that students are expected to achieve should be defined a priori.

In all institutions but HEI D (where the concept was totally obscure across the board), learning

outcomes appeared more accessible to Arts academics as opposed to Engineering ones. Some

illustrative examples follow. In HEI A, Arts academics defined learning outcomes as students’

understanding of a designer’s responsibility and role, teamwork, basic working tools, various

areas of knowledge etc. and stated that these were taken into account in course design. They

also believed that continuous and project-based assessment facilitated the evaluation of

intended learning outcomes, which also attempted to build bridges with the world of work.

The close teacher-student relationship allowed dedicated attention to each student’s progress

and their attainment of intended outcomes. In HEI C, too, an academic declared:

Students assessment methods are made to measure the learning outcomes of students. In our

courses we try to build assessment procedures (tests, individual assignments, group

assignments) to evaluate if students achieved the goals of the course.

Arts students in HEIs A, B and C also appeared more familiar than Engineering students with the learning outcomes defined in course descriptions, and seemed to grasp the concept better. They were also more satisfied with how assessment and teaching methods

reflected learning outcomes. According to one student in HEI C:

We know which are the goals that we have to achieve and the things that we have to learn,

because everything is written and explained in the programme specification. And we know what

we have to learn in our courses, because they explained that to us.

Engineering students, in contrast, struggled with understanding the learning outcomes for

their programme or courses. In HEI A, they claimed to receive no information on learning

outcomes. In HEI C, students were unsure about what these were. They did not understand

the reasons why they did certain tasks or answered certain questions in exams. Some students declared that only later did they comprehend the purpose of their previous years' learning and knowledge and intuitively understand the associated learning outcomes. In one student's words:

Most of the time we do not understand why we are studying a subject or how it will be useful

for us in the future. Mainly in the first years, where there are a lot of things that we have to

learn, but we do not understand if we will apply the things in the future. Sometimes we

understand it later, and we realize why we studied those things.

The greater understanding and recognition of learning outcomes among Arts academics and

students appears facilitated by the fact that Arts courses involve the acquisition of practical

skills through hands-on product development and creation of artwork. Nonetheless,

Engineering is arguably also a practical, applied discipline, given its laboratory component, but this seems hardly to influence Engineering academics' and students' perception of learning outcomes. One may speculate that this is because the first years of Engineering are dedicated to more theoretical foundation courses, and because the courses are perhaps less practical overall than expected.

Changes in academic practices of assessment were a consideration of this research since they

could potentially result, at least partly, from new policy, i.e. Bologna’s promotion of student-

centred learning and the ESG. It was primarily academics’ accounts that shed light onto this

issue, since students’ time-span in higher education is limited and their perspectives are

unlikely to include pre- and post-Bologna experiences. However, we did get some insight into

their opinions. In the Engineering programmes, the most noteworthy change research

participants reported was a diversification of assessment methods and the increasing weight

of continuous assessment and ongoing student work at the expense of the final exam. This was

noted in three of the four institutions and was driven partly by regulations in HEI B and HEI A.

In HEI B, for instance, Engineering academics resented the increasingly prescriptive character

of assessment regulations (i.e. the rule of one course per semester with continuous

assessment only, or an earlier regulation which only allowed continuous assessment), which

they felt impinged on their pedagogic autonomy:

I am strongly against ‘pedagogic dirigisme’ and against the establishment of general rules and

impositions to such different courses and teaching methodologies. Some years ago teachers

could define the best assessment method and there were no impositions to follow a continuous

assessment.

In HEI A, academics are now required to write an end-of-course report and suggest

improvements. Additionally, if failure rates go over 25%, they must provide justification for this

and propose measures to tackle the problem. One academic reported having increased the number of tests in response to student feedback. In HEI C, academics highlighted the increase in

assessed student assignments and tests as the main change in assessment procedures, and framed it as a pre- versus post-Bologna difference. Students were strong defenders of these assessment methods, believing them to be better for their learning. They disagreed with the assessment of some courses by one exam only, as they felt they learnt less than with assignments. They also

complained about the concentration of content in one exam. Mainly after Bologna, students

perceived this as problematic since the subject matter of previously long degrees was now

delivered in less time. This criticism suggests a superficial alignment of degrees with Bologna principles, with the subject matter of previously longer degrees compressed into the new shorter ones instead of a deep rethinking of the curriculum.

Arts academics reported changes of a more varied nature. For instance, in HEI A the School

developed some general assessment rules to do away with the inconsistencies students

experienced between different courses. Students seemed satisfied with the functioning of

assessment, mainly because they felt that their views on how courses could be improved

mattered. As one student put it:

They always ask us about how we believe the course can be changed and improved; our

opinion counts.

In HEI B, academics noted a diversification of assessment methods as well as more systematic

course descriptions focused on the expected student achievement, thus claiming assessment

had become more reliable and appropriate for its purpose. In HEI C, changes were reported in

procedures related to the communication of assessment-related information to students

through new online platforms. In HEI D, academics said that the appropriateness of

assessment procedures for their purpose received more attention – hence their continuous

adjustment.

These changes suggest increased alignment with the guidelines of Standard 1.3. Some of them

(more continuous assessment, course descriptions focused on learning outcomes to inform

teaching and assessment, etc.) also imply an increased attention to assessment practices

aimed at enhancing student learning. Furthermore, academics described other initiatives

illustrative of increased engagement with reflective activities around teaching and assessment.

Thus, in two HEIs (A and B) lecturers write a report at the end of the course to evaluate the

fulfilment of its objectives and make recommendations on how it can be improved.

Initiatives which foster reflection and discussion around pedagogic issues, including

assessment (and ways to improve these), also occur at faculty/school and programme level:

e.g. a relatively new teaching and learning lab in the Faculty of Engineering in HEI B which has

resulted in increased pedagogic awareness; an annual workshop launched recently by the

Engineering programme, also in HEI B, meant as an ongoing revision of the programme; or

regular meetings among the Arts academic staff in HEI A with the same reflection and

enhancement purpose. Academics felt that the above activities all resulted in ongoing

adjustments and improvements in pedagogic practices, including assessment. In HEI C, the

Bologna Process was claimed to have strengthened this continuous adjustment process.

Conclusion

This paper has aimed to investigate institutional policies and academic practices of

assessment, as well as whether these have been influenced by Bologna policy advocating

student-centred learning and assessment. A general finding is that assessment practices in the

four investigated Portuguese HEIs appear to increasingly embrace student-centred

methodologies and to some degree reflect European orientations regarding student

assessment, seeming to indicate some effects of policy on academic practice. This would be

consistent with previous research which reported that Portuguese academics perceived

Bologna as a window of opportunity to change the pedagogic paradigm towards student-

centred pedagogies (Sin, 2012; Veiga and Amaral, 2009). Nonetheless, it did not emerge clearly

from this research how far changes in practice have, indeed, been driven by recent European

policy. Only in some cases is direct reference made to Bologna. While we could infer from

some observed consistencies that Bologna might have driven these developments, especially

as regards institutional policies, the existence of a clear cause-effect relationship is hard to

uphold.

The implementation staircase metaphor (Reynolds and Saunders, 1987) used as a lens in this

study has helped illustrate the situatedness of the policy experience and the phases of policy

transformation and interpretation, which result in variable ground-floor enactment (Ball, 1994;

Gornitzka, Kogan, and Amaral, 2005; Ozga, 2000; Trowler, 2002). The unique set of

characteristics of the Bologna Process – namely its non-binding nature, the multiple actors and

levels involved in enactment, and the vagueness of its objectives together with the lack of

operationalisation cues – have favoured localised implementation and policy expressions in

such a way as to make sense to ground-floor actors and their contexts. To cite Ball again:

Solutions to the problems posed by policy texts will be localized and should be

expected to display ad-hocery and messiness (...) Given constraints, circumstances and

practicalities, the translation of the crude, abstract simplicity of policy texts into

interactive and sustainable practices of some sort involves productive thought,

invention and adaptation. (Ball, 1994, 18-19)

Moving along the implementation staircase, we have identified the following situated

responses to Bologna policy, determined by actors’ location, roles and circumstances. First, as

we have noted, national policy to regulate assessment is rather vague and scarce in Portugal,

although it did adopt some Bologna principles. At institutional level, variation in

assessment/academic regulations has been observed, despite several commonalities probably

explained by the idiosyncrasies of the national higher education system. At the same time,

their observed alignment, although partial, with the guidelines of Standard 1.3 (definition of

procedures, informing students, etc.) might suggest that these do respond to European policy,

possibly through top institutional actors’ awareness of the European Standards and Guidelines

for Quality Assurance. Next, differences have been observed in academic practices especially

between the two disciplines (Arts and Engineering). Variation concerns mainly assessment

methods (final/continuous) and understanding of learning outcomes, two of the key areas

where changes can lead to student-centred learning. The observed variation in academic

practice suggests the strength of learning and teaching regimes (Trowler and Cooper, 2002)

and of their recurrent practices in the face of policy trends, despite the latter’s potential to act

as powerful catalysts of change in academic behaviour (Trowler, Saunders, and Bamber, 2012).

The analysis has revealed some examples of student-centred pedagogy: the very existence of

institutional student assessment or academic policies which give transparency to procedures;

communication of relevant information to students early on, creating additional transparency;

the inclusive character of assessment procedures (at least officially), as institutional

regulations acknowledge the needs of special-regime students by stipulating exceptions from

general attendance and assessment requirements; a growing presence of continuous

assessment alongside final assessment; increasing reflection around pedagogic practices.

However, despite apparent institutional alignment with European orientations (as reflected in

rules and regulations), and despite academics’ apparent compliance with institutional

regulations as inferred from the interviews, academic practices and students’ experiences

occasionally tell a different story. Changes such as the ones highlighted above sometimes appear to be pro forma: the inclusive character of policies is not always reflected in practice, as

academics sometimes appear unwilling to acknowledge the needs of working students;

continuous assessment can be summative, therefore not necessarily fulfilling a formative

function meant to help students along their learning journey; learning outcomes, despite their

presence in one form or another in institutional documents, are variably understood by

academics and students and especially problematic in Engineering, hence their dissociation

from assessment; and students sometimes complain that assessment procedures lack

transparency.

This research has, therefore, revealed some areas of student assessment which could

potentially benefit the student experience if they were to receive further consideration at the

various policy levels. Thus, the poor understanding of learning outcomes might be addressed

at institutional, but also national, level through initiatives to raise awareness of the potential of

learning outcomes as a pedagogic tool in teaching and assessment design. Pedagogic training

could also tackle some of the shortcomings identified in assessment procedures, such as

academics’ reluctance to acknowledge the regime of working students or the preference for

summative assessment in the Engineering degrees. At the same time, institutional regulations

could address factors which affect the transparency and objectivity of assessment, such as the

absence of clear tiered criteria to inform marking and the reliance on single examiners.

Acknowledgements

This research was undertaken in the context of the project IBAR funded by the European

Commission, entitled 'Identifying barriers in promoting the European Standards and Guidelines

for Quality Assurance at institutional level', reference 511491-LLP-1-2010-1-CZ-KA1-KA1SCR.

References

Ball, S. J. (1994) Education reform: a critical and post-structural approach, Buckingham: Open

University Press.

Bamber, V., Trowler, P., Saunders, M. and Knight, P. (2009) Enhancing learning, teaching, assessment and curriculum in higher education: theory, cases, practices, Maidenhead: Society for Research into Higher Education & Open University Press.

Becher, T. and Trowler, P. (2001) Academic tribes and territories: intellectual enquiry and the

culture of disciplines, Buckingham: Open University Press.

Biggs, J. B. (2003) Teaching for quality learning at university: what the student does,

Buckingham: SRHE & Open University Press.

Bucharest Communiqué (2012) ‘Making the Most of Our Potential: Consolidating the European

Higher Education Area’, Bucharest.

Cerych, L. and Sabatier, P. A. (1986) Great expectations and mixed performance: the

implementation of higher education reforms in Europe, Stoke on Trent: Trentham

Books.

ENQA (2005) Standards and Guidelines for Quality Assurance in the European Higher Education

Area, Helsinki: European Association for Quality Assurance in Higher Education.

Gornitzka, Å., Kogan, M. and Amaral, A. (2005) Reform and Change in Higher Education.

Analysing Policy Implementation, Dordrecht: Springer.

Gornitzka, Å., Kyvik, S. and Stensaker, B. (2005) ‘Implementation Analysis in Higher Education’,

in Å. Gornitzka, M. Kogan and A. Amaral (eds). Reform and Change in Higher

Education. Analysing Policy Implementation, Dordrecht: Springer, pp. 35-57.

Leuven Communiqué (2009) ‘The Bologna Process 2020 – The European Higher Education Area

in the new decade. Communiqué of the Conference of European Ministers Responsible

for Higher Education’, Leuven.

London Communiqué (2007) ‘Towards the European Higher Education Area: responding to

challenges in a globalised world’, London.

MCTES (2006) ‘Decree-Law No. 74/2006, of 24 March’, Lisboa: Ministério da Ciência,

Tecnologia e Ensino Superior.

Ozga, J. (2000) Policy research in educational settings: contested terrain, Buckingham: Open

University Press.

Reichert, S. and Tauch, C. (2003) Trends 2003: Progress towards the European Higher Education

Area. Bologna four years after: steps toward sustainable reform of higher education in

Europe, Geneva/Brussels: European University Association.

Reynolds, J. and Saunders, M. (1987) ‘Teacher Responses to Curriculum Policy: Beyond the

‘Delivery’ Metaphor’, in J. Calderhead (ed). Exploring Teachers’ Thinking, London:

Cassell Educational Limited, pp. 195-214.

Sin, C. (2012) ‘Academic understandings and responses to Bologna: a three-country

perspective’, European Journal of Education 47(3): 392-404.

Sursock, A. and Smidt, H. (2010) Trends 2010: A decade of change in European Higher

Education, Brussels: European University Association.

Trowler, P. (2002) Higher education policy and institutional change: intentions and outcomes

in turbulent environments, Buckingham: Society for Research into Higher Education &

Open University Press.

Trowler, P. and Cooper, A. (2002) ‘Teaching and Learning Regimes: Implicit theories and

recurrent practices in the enhancement of teaching and learning through educational

development programmes’, Higher Education Research & Development 21: 221-240.

Trowler, P., Saunders, M. and Bamber, V. (2012) Tribes and Territories in the 21st Century:

Rethinking the significance of disciplines in higher education, London: Routledge.

Trowler, P., Saunders, M. and Knight, P. (2004) ‘Change thinking, change practices. A guide to

change for heads of department, subject centres and others who work middle-out’,

Paper written with support from: The LTSN Generic Centre, The UK Evaluation of the

LTSN and The HEFCE Innovations project ‘Skills plus’.

Veiga, A. and Amaral, A. (2009) ‘Survey on the implementation of the Bologna process in Portugal’, Higher Education 57(1): 57-69.