Scope, Transparency and Style: System-Level Quality Strategies and the Student Experience in Australia

15


Chandos Publishing
Hexagon House, Avenue 4, Station Lane
Witney, Oxford OX28 4BN, UK
Tel: +44 (0) 1993 848726
Email: info@chandospublishing.com
www.chandospublishing.com
www.chandospublishingonline.com

Chandos Publishing is an imprint of Woodhead Publishing Limited

Woodhead Publishing Limited
80 High Street, Sawston, Cambridge CB22 3HJ, UK
Tel: +44 (0) 1223 499140 Fax: +44 (0) 1223 832819
www.woodheadpublishing.com

First published in 2013

ISBN: 978-1-84334-676-0 (print) ISBN: 978-1-78063-316-9 (online)

Chandos Learning and Teaching Series ISSN: 2052-2088 (print) and ISSN: 2052-2096 (online)

© The editors and contributors, 2013

British Library Cataloguing-in-Publication Data. A catalogue record for this book is available from the British Library.

All rights reserved. No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the Publishers. This publication may not be lent, resold, hired out or otherwise disposed of by way of trade in any form of binding or cover other than that in which it is published without the prior consent of the Publishers. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

The Publishers make no representation, express or implied, with regard to the accuracy of the information contained in this publication and cannot accept any legal responsibility or liability for any errors or omissions. The material contained in this publication constitutes general guidelines only and does not represent to be advice on any particular matter. No reader or purchaser should act on the basis of material contained in this publication without first taking professional advice appropriate to their particular circumstances. All screenshots in this publication are the copyright of the website owner(s), unless indicated otherwise.

Typeset by Domex e-Data Pvt. Ltd., India
Printed in the UK and USA.
Printed in the UK by 4edge Ltd, Hockley, Essex.

External Quality Audit: Has it improved quality assurance in universities?

EDITED BY MAHSOOD SHAH AND CHENICHERI SID NAIR

Scope, transparency and style: system-level quality strategies and the student experience in Australia

Nigel Palmer

Abstract: As Australia moves to an approach to quality assurance that is framed around regulation and risk, it is timely to reflect on the merits of external quality audit supported by a fitness-for-purpose approach. This chapter explores the proposition that external review has made a difference in enhancing the higher education student experience in Australia, in terms of the scope, transparency and style of the quality-enhancement activities adopted by higher education providers.

Key words: quality assurance, external review, student experience, system-level quality strategies, reporting strategies, performance funding, programme incentive funding, continuous improvement, critical self-review.

Introduction

Rather than seek to demonstrate improvement through comparing performance on indicators for the quality of the student experience, this chapter identifies three important dimensions where quality audit may have had an impact on the quality of the student experience through supporting improvements to the continuous improvement efforts of institutions. Benefits of external review include expanding the scope of activities worthy of consideration for continuous improvement efforts, improved transparency in the activities and outcomes supported by institutions and qualitative improvement in the approach taken to continuous improvement within institutions. In short, it is proposed here that external quality audit has supported improvements in the quality of the student experience through improvements in the scope, transparency and style of the quality-enhancement activities adopted by higher education providers in Australia. This chapter addresses these in the context of historical developments in system-level quality assurance strategies in Australian higher education, and is presented in two parts: the first section provides an overview of the development and implementation of successive system-level quality initiatives in Australian higher education, while the second compares external review with comparable system-level quality strategies. The chapter concludes by addressing the merits of external review and future prospects for system-level quality strategies in Australia.

External quality audit and the student experience

System-level quality initiatives in Australian higher education

External quality audits conducted by the Australian Universities Quality Agency (AUQA) were the principal means of monitoring and reporting on the activities of Australian higher education providers during the decade 2001-11. Reports of audit findings lent transparency to the quality improvement activities of institutions, and external review appears to have encouraged an expansion in the range and scope of the quality-enhancement activities of providers. While it would be difficult to establish conclusively that improvements in the quality of the student experience have been as a direct result of external audit, it is easier to see how external quality audit may have had a significant influence on assurance, enhancement and reporting activities relevant to the student experience during this time. In evaluating the contribution of external quality audit during the 2000s, it is worth reviewing developments in system-level higher education quality initiatives in Australia in the lead­up to AUQA's establishment, before addressing prospects for future development.

AUQA was established in March 2000 as an independent agency jointly supported by Australian Federal, State and Territory Governments. Over the decade 2001-11 AUQA conducted roughly 150 external quality audits of universities and other higher education providers.¹

Table 15.1 National quality, funding and review initiatives in Australian higher education, 1954-2012

[Rows covering initiatives from 1954 to 1985 are not legible in the source transcript.]

Years | Agency | Report, program or initiative | Author/chair | Comments
1986 | CTEC | Quality Measures in Universities (Bourke, 1986); Discipline Reviews (1986-91) | Paul Bourke | Recommendations led to Discipline Reviews in higher education.
1987-88 | Australian Government | Higher Education: A Policy Discussion Paper (Green Paper, Dawkins, 1987); Policy Statement (White Paper, 1988); Higher Education Funding Act (1988) and the establishment of the National Board for Employment, Education and Training (NBEET) (1988-96) | The Hon. John Dawkins | Led to the establishment of the Higher Education Contribution Scheme (HECS) and the 'national unified system' reintegrating colleges and universities. Also led to the establishment of the Educational Profiles process. NBEET was also established, effectively replacing CTEC.
1991 | Australian Government | Higher education quality and diversity in the 1990s (1991) | Hon. Peter Baldwin | Recommended additional funding to encourage improvement in institutional quality assurance practices.
1991 | Higher Education Council; NBEET | The Quality of Higher Education | Ian Chubb | An influential compilation of discussion papers on quality in higher education.
1992-95 | Higher Education Council; NBEET | Higher education: achieving quality (1992); Committee for Quality Assurance in Higher Education (CQAHE) (1992-95) | Brian Wilson | Recommendations led to the establishment of the CQAHE.
1995 | Higher Education Council; NBEET | Higher Education Management Review (Hoare, 1995) | David Hoare | Addressed accountability, governance, financial and reporting arrangements.
1998 | Australian Government | Review of Higher Education Financing and Policy (1998); Learning for Life (West, 1998) | Roderick West | Led to the implementation of a lifetime learning entitlement for government-supported study.
1999-2003 | Australian Government | Knowledge and Innovation White Paper (Kemp, 1999), followed by Backing Australia's Ability (Australian Government, 2001); Higher Education at the Crossroads (Nelson, 2002) and Backing Australia's Future (Nelson, 2003) | The Hon. Brendan Nelson | Led to the Commonwealth Grant Scheme (CGS) (replacing block funding grants) through the Higher Education Support Act, the Learning and Teaching Performance Fund and the Carrick Institute (later renamed the ALTC) (2003-11).
2000-11 | Ministerial Council on Education, Employment, Training and Youth Affairs | AUQA | David Woodhouse | AUQA was an independent quality agency jointly established by Federal, State and Territory Governments.
2008 | Australian Government | Review of Australian Higher Education (2008) | Denise Bradley | Recommendations led to the establishment of the Tertiary Education Quality and Standards Agency.
2011- | Australian Government | The Tertiary Education Quality and Standards Agency (TEQSA) | Carol Nicoll | Assuming many of the functions of AUQA, TEQSA was established by legislation in 2011.


AUQA's establishment came at what was recognised as a time of increased diversity in the scale, organisation and mode of delivery in higher education, following significant expansion through the 1980s and 1990s (DETYA, 2000). AUQA's approach was based on encouraging critical self-review on the part of providers, supported by regular external quality audits. While on the one hand AUQA's establishment signalled a new approach to system-level quality assurance in Australia, on the other it also represented the culmination of efforts to encourage critical self-assessment on the part of higher education providers extending back to at least the late 1970s. The major system-level programs and strategies relevant to this development are outlined below.

CTEC, the EIP and Discipline Reviews

The Commonwealth Tertiary Education Commission (CTEC) was established in 1977, effectively replacing the Australian Universities Commission. CTEC established the Evaluations and Investigations Programme (EIP) in 1979. The EIP became renowned for publishing high-quality research papers on specific aspects of Australia's educational systems, with a view to informing development and improvement in both policy and practice. After CTEC was replaced by NBEET in 1988 the EIP continued as a branch within the Department of Education, Employment and Training (DEET) and subsequent government departments, publishing targeted policy and research papers through to 2005.

Aims of the EIP initiative included the evaluation of courses, organisational units and resource usage in tertiary institutions. They also included promoting a climate of critical self-assessment within higher education providers, supported by external review. To this end, CTEC commenced a series of major discipline reviews in 1986 to determine standards and examine the quality of university teaching and research in Australia by major field of study. Discipline reviews were given renewed impetus in 1988, when the framework for the 'national unified system' was announced (see below). Particular issues noted at the time were the future needs of disciplines, whether teaching and research activities were of an appropriate standard and issues around duplication and institutional efficiency (Dawkins, 1988, p. 86). By the time discipline reviews were discontinued in 1991 these had covered law (Pearce et al., 1987), engineering (Williams, 1988), teacher education in mathematics and science (Speedy et al., 1989), accounting (Mathews et al., 1990), agriculture and related education (Chudleigh et al., 1991) and computing studies and information sciences education (Hudson, 1992).


Follow-up reports found that discipline reviews were overall viewed as an effective stimulus for the adoption of self-review and self-appraisal strategies, and were seen to have some success in this (for examples see Whitehead, 1993; Caldwell et al., 1994). In many cases they served as an effective impetus for broadening the scope of activities worthy of consideration for quality assurance purposes. They also had the effect of raising the profile of quality assurance activities within institutions and lending some transparency to their activities. Despite these advantages, Discipline Reviews were nevertheless perceived as slow, expensive and lacking mechanisms to promote follow-up activity. They were seen as lacking the means for ensuring that review recommendations were subsequently acted upon by institutions or that quality-enhancement activities were supported and encouraged on an on-going basis (Higher Education Council, 1992; DEST, 2003).

NBEET, educational profiles and the CQAHE

The year 1988 saw significant changes to the higher education landscape in Australia, including significant changes to the way institutions were funded, the establishment of a framework for a national unified system of universities and a national government-supported loan scheme for student fees through the HECS. The Australian Government also restructured its advisory arrangements at this time, effectively replacing CTEC with the NBEET. Unlike CTEC, NBEET was an advisory body, leaving the responsibility for program delivery to the relevant government department. Reporting directly to the Minister, the Board also convened separate advisory Councils for Schools, Skills, Research and Higher Education. Significant attention also turned to the efficiency and effectiveness of higher education provision in Australia during the late 1980s and 1990s, and to the means of measuring quality and performance in particular.

Through the 1990s the Australian Government published The Characteristics and Performance of Higher Education Institutions, providing a range of indicators with the aim of highlighting quality and diversity in Australian higher education (DETYA, 2001, pp. 11-12). Indicators published in these reports summarised staff and student characteristics, research performance and other available data on provider activities. These data were also used by institutions to review their own performance (benchmarking both within and across institutions) and by government to monitor quality across the Australian higher education sector. In 1989 the Australian Government convened a group of experts to develop and report on a trial study of indicators useful in evaluating the performance of institutions at the department and faculty level, and of students at the level of academic award, discipline group and field of study (Linke, 1991, p. xi). Their final report, informed by the work of Cave et al. (1991) and Ramsden (1991b), classified a range of performance indicators reflecting dimensions of institutional performance.

The Linke report (Linke, 1991) suggested that performance indicators were of most use as part of a university's self-assessment process rather than in direct application to funding at the national level. This point was taken up in a 1991 policy statement on higher education in which the Minister for Higher Education, the Hon. Peter Baldwin, MP, outlined that while the government had no intention of prescribing a common set of performance indicators to be used by every university, it was interested in assisting universities to develop quantitative and qualitative indicators of performance in teaching and assessment, as well as for organisational areas such as governance, finance and staff development (Baldwin, 1991, pp. 31-32; DEST, 2003, p. 259). Among the most prominent developments to follow from this has been the use of data from the Course Experience Questionnaire (CEQ) both within institutions and in national benchmarking and funding decisions (Ramsden, 1991a, 1991b; Ramsden and Entwistle, 1981). Financial and staffing data, enrolment trends and data from student surveys have since formed the basis of the means for evaluating and comparing institutional performance in Australian higher education, and since that time have often been taken as institution-wide proxies for the quality of the student experience (Palmer, 2011).

The Higher Education Funding Act (1988) represented a fundamental change in the way higher education was funded in Australia. The government had previously determined operating grants for universities based on provisions in the States Grants (Tertiary Education Assistance) Act (1987). The new Act provided for institutional funding subject to ministerial determination informed by the Education Profile of the institution. Educational Profiles therefore formed the basis of agreements between the Australian Government and each higher education provider participating in what was to become the national unified system. Beyond informing funding determinations, Profiles also became the principal means for each institution to define their mission, their goals, their role in the sector and their particular areas of activity.

From 1996 institutional Quality Assurance and Improvement Plans would be integrated into the Educational Profiles process. This reflected a renewed emphasis at the time on the idea that maintaining and enhancing quality in higher education could best be achieved if universities were able to operate in a framework of government encouragement without unnecessary intervention. In 1995 the Higher Education Council had reported that many institutions shared the view that national quality strategies should have a less direct relationship to the allocation of funding, and that a greater emphasis should be placed on institutional outcomes. This was accompanied by the view that greater account needed to be taken of the diversity within the system, and of the strengths and needs of newer universities in particular (Higher Education Council, 1995, p. 20). The framework subsequently recommended by the Higher Education Council addressed systems and processes in place as well as outcomes reflected in measures like the CEQ, the Graduate Destination Survey (GDS), student entrance scores, student attrition and progression rates (DEST, 2003). Through this process universities were assessed on a range of qualitative and quantitative indicators of quality and performance, and this information was also used by government in monitoring the viability and sustainability of institutions in the system. All publicly funded Australian universities were required to include Quality Assurance and Improvement Plans as part of their Educational Profiles, a requirement that was later to form part of the Institutional Assessment Framework (IAF) from 2004. By the end of this period the IAF was frequently referred to as a key element in the 'strategic bilateral engagement' between government and higher education providers, with the purpose of encouraging institutional quality, accountability and sustainability while at the same time minimising government intervention and reporting requirements.

In 1991 the Australian Government released the Ministerial Statement Higher Education: Quality and Diversity in the 1990s (Baldwin, 1991). The paper addressed weaknesses of previous approaches to quality review at the discipline level, including their effectiveness as a quality assurance strategy, deficiencies in the comparability of findings across institutions and the need for improved quality assurance processes at both the system and institutional levels. The paper also suggested that universities should receive differential funding more closely tied to their performance on an agreed set of indicators of quality and performance. Outcomes from the discussion paper included recommendations for the establishment of what was to become the Committee for Quality Assurance in Higher Education (CQAHE) (established in 1992) and for the allocation of funds over and above base operating grant amounts determined by the assessed performance of each institution (Higher Education Council, 1992).

Convened by the Higher Education Council of NBEET and chaired by University of Queensland Vice Chancellor Brian Wilson, the CQAHE (or Wilson Committee) conducted independent audits of institutions and advised the Australian Government on funding allocations through associated incentive programs between 1992 and 1995. Self-review formed the basis of evaluation, with review at the whole-of-institution rather than discipline level. Reviews addressed quality assurance processes within institutions, evidence of self-evaluation and the quality of outcomes as reflected in the indicators available. They included a site visit from the review team to supplement information presented in quality portfolios prepared by each institution. In evaluation, equal emphasis was placed on evidence of quality processes, self-review and outcomes that could be demonstrated in available indicators. Various incentive funding initiatives were employed as part of this strategy, including the National Priority Reserve Fund Grants, schemes funded through the Committee for the Advancement of University Teaching and through the EIP. Three rounds of independent whole-of-institution audits were conducted between 1993 and 1995. Each round had a specific focus: an overview of teaching, learning, research and community service in 1993; teaching and learning in 1994; and research and community service in 1995 (DEST, 2004, p. 9). Institutions received differential funding based on their performance in these reviews, through the Quality Assurance and Enhancement component of universities' operating grant. While the relationship between the CQAHE and individual institutions was on a confidential basis, publication of the review reports themselves in part or in full was left as a matter for each institution (Anderson et al., 2000).

Evaluation of the CQAHE program suggested that it had been successful in raising the profile of quality assurance practices within institutions and of the need to monitor, review and make adjustments to institutional processes where gaps were identified. While there were mixed opinions as to how helpful these reviews had been, particularly in light of the consternation about the ranking of institutions in published results, it was perceived that external review had in fact supported considerable advances in establishing effective institutional practices. The whole-of-institution approach had the advantage of involving a much broader share of the university community in self-review activity. Reviews appeared to trigger considerable change in institutional quality assurance practices, including adoption and greater acceptance of continuous improvement and self-evaluation practices. They also appeared to encourage the increased dissemination and use of data from performance measures within each institution (Vidovich and Porter, 1999; Chalmers, 2007). CQAHE reviews have since been offered as a good example of the effective use of external review informed by a fitness-for-purpose approach in evaluating the performance of institutions relative to their mission and aims (Gallagher, 2010, p. 92). CQAHE reviews have also been held as an example 'par excellence' of government achieving system-level quality-improvement policy aims through 'steering at a distance', as opposed to adopting a more direct approach through legislation or the use of direct funding incentives (DEST, 2003, pp. 257-8).

The Australian Universities Quality Agency and the Tertiary Education Quality and Standards Agency

The Australian Universities Quality Agency (AUQA) was established in March 2000 as an independent agency jointly supported by Australian Federal, State and Territory Governments. The agency's brief was to monitor, audit and report on quality assurance in Australian higher education. AUQA's responsibilities included the publication of reports on the outcomes of audit visits and also of the quality assurance processes, international standing and relative standards in the Australian higher education system (DETYA, 2001, p. 12). Over the decade 2001-11 AUQA conducted roughly 150 external quality audits of universities and other higher education providers. AUQA's brief followed in many respects from that of the CQAHE in conducting independent audits of institutions at the whole-of-institution level. AUQA's approach was based on encouraging critical self-review on the part of providers supported by regular external quality audits. The Agency was also responsive to the need for balancing the comprehensive assessment of institutions with the need to keep bureaucratic requirements, costs and time involved in the review process to a minimum (DEST, 2003, pp. 271-2; Adams et al., 2008).

An important contribution of these reviews was an expansion in scope of the kinds of activity subject to review, and therefore in turn worthy of evaluation and improvement efforts. While AUQA did not employ an externally prescribed standards framework, the scope of issues given consideration as part of the audit process was informed by the kind of criteria typical of organisational self-assessment. A good example is the set of criteria adapted by Woodhouse (2006), covering organisational leadership, teaching and learning, research, staffing, facilities, enabling services and community engagement. These criteria were applied in support of improving educational value to students, effective use of resources and capabilities and the contribution made to student development and overall well-being (Woodhouse, 2006, pp. 14-15). The scope of the student experience reflected in audit reports suggests a significant broadening of the matters considered worthy of consideration for continuous improvement purposes under this influence when compared with previous approaches. A 2009 report by Alcock et al. reviewed commendations, affirmations and recommendations included in AUQA Audit Reports between 2002 and 2007. They identified learning and student support, transition, student conduct, equity and campus life as key areas, along with targeted student support for particular groups (Alcock et al., 2009, pp. 3-4).

AUQA's approach to external quality audit was premised on the idea of managing for continuous improvement, auditing for fitness-for-purpose and focussing on institutional efforts in meeting their stated mission and goals. Here each institution was conceived of as an integrated system, with attention during audit being devoted to the nature and effectiveness of systems and strategies for monitoring, evaluating and improving performance relative to each institution's objectives (Baird, 2007, p. 3). Regular audit visits were supplemented by the self-review activities of providers, including the use of trial audits. Among the benefits of external quality audit that have been identified are increased awareness of quality systems, improved internal communication, improved transparency, increased responsibility and ownership for improvements in quality and improved cooperation within and across academic units (addressed in detail in the following section). While often the source of some anxiety on the part of university staff and management, external audit provided opportunities to focus the attention of the university community on 'quality', and appeared to be an effective impetus in bringing management, staff and students together to identify strengths, opportunities and risks among their activities and the outcomes they support, and for broadening participation in planning and review activities.

In 2009 the Australian Government announced the establishment of the Tertiary Education Quality and Standards Agency (TEQSA). This new agency has since assumed the majority of functions previously undertaken by AUQA and State accreditation agencies. Announcement of the new agency has heralded a move to a more standards-based approach. Precisely what this will mean for quality and the student experience will be largely borne out in the development and implementation of the proposed standards framework, and through the definition and measurement of risk. While it remains to be seen just how the marriage of regulatory and audit functions within a single agency will work in practice, the formation of TEQSA represents an opportunity to build on some of the activities developed by AUQA and its predecessors, and to reflect on the merits of the various system-level quality strategies employed and the broader developments in higher education quality governance that have led to its establishment. Aspects of these are compared in detail in the following section.

System-level quality strategies and the impact of external review

External review has featured prominently among the main policy levers available to government in promoting and assuring institutional and system-level quality in higher education. However, these have typically been accompanied by additional system-level quality strategies. In understanding the practical impact of external review on the quality of the student experience, it is important to compare the impact of external review in policy and practice with other system-level quality strategies. Four broad strategies for system-level quality initiatives are identified here, namely reporting strategies, performance funding, program incentives, and external review (as outlined in Table 15.2). Each of these is compared below.

Reporting strategies

Basic reporting requirements have often featured as an adjunct to other quality strategies. These have featured as part of Australian higher education quality initiatives such as the Educational Profiles process, Quality Assurance and Improvement Plans, the Institutional Assessment Framework and even Institutional Performance Portfolios. Supported by varying degrees of transparency, strategies like these have been an effective means of driving institutional improvement efforts, particularly where reporting includes clearly defined indicators for quality and performance. Reporting requirements certainly typify improvement strategies at the 'action-at-a-distance' end of the scale, in contrast to strategies involving more direct intervention or review on the part of government. Reporting initiatives like these also have a significant influence on the scope of activities worthy of consideration for continuous improvement purposes.

Table 15.2 System-level quality strategies employed in Australian higher education

Strategy | Examples
Reporting strategies | Educational Profiles and Quality Improvement Plans
Performance funding | Learning and Teaching Performance Fund
Program incentive funding | Incentive funding associated with CQAHE review
External review | CQAHE and AUQA audit

Financial performance and enrolments have for some time featured prominently among metrics for system-level evaluation and comparison of institutional performance among higher education providers. Following the Linke report (Linke, 1991), the Australian Government has employed a range of competitive, conditional and performance-based funding mechanisms to support system incentives for improvement in key areas of higher education. These include competitive research grants and performance funding initiatives designed to influence institutional behaviour, and indicators adopted to reflect learning and teaching quality. Over time, the emphasis of higher education performance measures in Australia has shifted from relying on a relatively narrow set of institutional performance indicators to encompass a much broader view of the means by which institutional performance may be reflected.

The more detailed the reporting requirements of institutions are, the more reporting requirements begin to look like a system-level 'performance reporting' quality strategy. As noted above, transparent measures of institutional performance were an increasingly prominent feature of the reporting requirements of Australian higher education providers through the 1990s. This gave increasing prominence to measures such as enrolment metrics and student surveys (Palmer, 2011). Powerful system incentives may be supported through reporting requirements on specific measures, without those measures being employed as criteria for the allocation of funding. This is perhaps best illustrated in the recent publication of the first full round of results from the Excellence in Research for Australia Initiative (ERA) (ARC, 2011). Among the aims of the Australian Government's current system-level quality arrangements is to ensure that students have better information about how institutions are performing, and to demonstrate to the community that value for money is being delivered and the national interest is being served (Australian Government, 2009, p. 31). Improved transparency and accountability also featured prominently among justifications for the recent move toward a more standards-based approach. The incentives created through the reporting of performance measures via the proposed My University website may in themselves prove to be an important part of the Australian Government's quality assurance activities. To this end, the proposed My University website may serve to support a range of performance reporting objectives, and may assist in achieving the right balance between transparency measures, system incentives and performance funding arrangements.

Performance funding

An example of the use of indicators for performance funding purposes as a quality strategy can be found in the use of student satisfaction survey data in the Learning and Teaching Performance Fund introduced by the Australian Government in 2003 (DEST, 2004). While concerns were raised regarding the transparency, appropriateness and rigour associated with the development and use of indicators in the scheme (Access Economics, 2005), the Fund nevertheless had the effect of successfully encouraging a greater focus on the teaching and learning activities of universities. Despite their limitations, the development and publication of institutional indicators for teaching and learning performance drew attention to the relevant activities of providers, recognising the development and use of targeted initiatives in support of on-going improvements in this area. More recently, the Final Report of the Review of Australian Higher Education reached the conclusion that transparent, public reporting of such data on an annual basis would be an effective means of providing a focus for further improvements in this area, and that measures relating to both the quality of teaching and the extent of students' engagement in their education should be included in any framework for assessing institutional performance (Bradley et al., 2008, p. 78).

There are contrasting perspectives on the role of performance funding in system improvement. On the one hand, performance funding can support system improvements through directing a broad range of activities toward a common goal, without being prescriptive on how that should be done. This creates positive incentives in the area being evaluated as the basis for funding (as, for example, with increased attention to the participation of students from low socio-economic backgrounds encouraged by reward funding based on enrolment metrics for that group [see Palmer et al., 2011, p. 4]). Such strategies also serve to reflect recognition by government of the importance of the area being evaluated, contributing to parity in approach (if not investment) across different activity areas (as in the case, for example, of higher education teaching and research), and facilitating institutional comparisons on the basis of performance funding measures using equivalent indicators. On this view, in order to sustain incentives for institutional performance, quantitative measures for key attributes, at different levels of aggregation, should cover as many of the key functions of providers as possible and should be associated either directly or indirectly with funding incentives.

However, while certainly transparent, in terms of their 'style', performance funding incentives risk focussing the attention of institutions too narrowly on the means of evaluation, potentially at the expense of a broader range of activities in support of that which the indicator was originally employed to reflect. The use of quantitative indicators can both contribute to and detract from judgements in managing the characteristics being evaluated, and either way cannot provide a comprehensive measure of educational quality overall. As Linke (1991) put it: 'to the extent that such indicators may influence institutional practice ... they will generate a pressure on institutions to direct their performance to the indicators themselves, regardless of what they reflect, rather than to the underlying issues of educational and research excellence, or indeed to any specific institutional goals' (Linke, 1991, p. 131). Also often overlooked is the capacity for indicator frameworks to describe the scope for innovation in meeting institutional goals. This can have both a positive and a negative impact as performance indicators work to either stimulate or stultify innovative approaches to achieving their aims. In some cases performance frameworks may simply motivate institutions to manipulate their performance data so as to perform well against the indicators being employed (Chalmers, 2008). Careful judgement is therefore required in employing indicator frameworks, whether they be for funding purposes or not, as there is an inherent risk that either way their use may compromise their original aims.

A further criticism of the use of performance funding entails the assumption that resource allocation is instrumental in improving quality and standards, but where inadequate resources are available, overall performance will remain low. From this point of view, performance funding can be held to perpetuate the status quo, rather than promote innovation and improvement. Those institutions scoring well on funding indicators will continue to do well, even if only in part due to the resources secured with the help of the performance measure. Those performing poorly may continue to perform poorly, and struggle to improve relative to their competitors, given their relatively smaller share of resources in support of those activities. This challenge again emphasises the importance of judicious employment of performance indicators where they also influence program funding.

Given the importance of institutional prestige to higher education providers, the impact of performance reporting strategies can be comparable in effect to that of performance funding (but arguably much more economical from the perspective of government). Both are comparable in the way they influence the scope of activities worthy of consideration for continuous improvement efforts. Both lend comparable levels of transparency to the improvement efforts of providers (though of course the indicators used for performance funding tend to be a lot more refined and subject to far greater scrutiny). They may also be seen to be comparable in 'style', particularly in the way in which they focus the efforts of institutions on those aspects defined, measured and, potentially, reported as part of each quality strategy.

Program incentives

While program incentive funding strategies may appear in many respects comparable to performance funding, their influence is potentially quite different in scope, transparency and style. Program incentive funding is typically contingent on program participation and compliance on the part of institutions, with their influence potentially broader in scope than performance funding programs tied to particular indicators. Program incentive funding initiatives may be effective in lending transparency to the improvement efforts of providers, but only where reporting requirements feature as part of the program. Shortcomings of program incentive strategies include the potential lack of comparability in demonstrable program outcomes between providers, particularly where the measures of performance are not clearly defined. Finally, and perhaps most importantly, program incentive funding may be seen as among the least effective of the main system-level quality improvement strategies in terms of its 'style' of influencing institutions. While on the one hand program incentive funding can allow institutions a fair degree of scope in determining the activities worthy of continuous improvement efforts, it risks being so broad as to not only detract from the comparability of outcomes between institutions, but also weaken the incentive for institutions to improve their practices in the first place, along with those which might support improvement on an on-going basis. Discipline reviews perhaps represent the best example of this, in being criticised for generating a lot of activity and a lot of evidence around the activities of institutions while lacking comparability in terms of outcomes, and without having a lasting impact on institutional practice.

External review

External review informed by a fitness-for-purpose approach is typically understood as a systematic examination of an organisation's activities in line with its objectives. Under this approach each institution is expected to have in place appropriate strategies and objectives in line with its aims, and appropriate processes for monitoring and evaluating aspects of its 'quality cycle'. External audits conducted during the 1990s were supported by voluntary self-assessment on the part of providers. Quality audits of this style served as an effective mechanism for change. It was noted at the time that this holistic approach to self-review had the advantage of being able to involve much of the university in self-review activities, yielding a range of practical and strategic benefits (DETYA, 2000, p. 2). Self-review continued to comprise an important part of quality assurance activities during the 2000s in featuring as a central aspect to the quality framework supported by AUQA. Self-review not only enabled institutions to develop the means to report the kind of information required by an external quality agency, but also had the potential to support improvements independent of the direct intervention of government or an external agency.

External review has been held to stimulate debate on issues related to quality, to contribute to development of a more professional approach to administration and education support structures and to create new routines and systems for handling data and information on educational performance and quality (Stensaker et al., 2011, p. 465). Shah and Nair point to the way that external quality reviews have supported institutions in examining and monitoring processes in ways that they had not before. They point to the impact of external quality audit on the way in which problem areas have been identified and addressed (Shah and Nair, 2011). Perhaps the single biggest contribution made by external audits has been where they have encouraged the development of a sustained culture of continuous improvement, and where they were an effective catalyst for the development of robust quality systems (Anderson et al., 2000; Shah et al., 2007; Adams et al., 2008, pp. 26-7; Shah and Nair, 2011). According to Shah et al. (2010), external quality audits in Australia have been particularly effective in improving internal quality management systems and processes in universities, embedding quality cycles in strategic planning frameworks and in informing the core activities of higher education providers (Shah and Nair, 2011, p. 143).

Stensaker et al. point out that on some views external quality assurance will only ever be related to structure and process, with little impact filtering through to the actual practices of institutions. They point to a need for a more refined understanding of the dynamics of organisational change in order for external review to be used to best effect (Stensaker et al., 2011). Further to this, Harvey (2002) is often quoted for pointing out that if quality monitoring is seen as an 'event' rather than a 'process', there is little likelihood of the event's making much long-term impact. Rather, it is likely to lead to short-term strategies to improve performance on the measures used, and other strategies for 'gaming the system'. The more quality assurance priorities become focussed on external requirements, the less lasting the benefits are likely to be (Harvey, 2002). Shah and Nair also found that external audits with an improvement-led culture reflected more positive results in terms of self-assessment, external peer review, improvements and follow-up, while audits with a compliance-driven culture were much less successful in engaging institutional stakeholders in quality and improvement (Shah and Nair, 2011, p. 141).

Reports of the relative success of external review in Australia contrast with some of the views reported about the UK experience of quality audit, where external reviews by the Quality Assurance Agency were seen by some stakeholders as a costly and unduly bureaucratic exercise. External review in this case was held to promote a culture of compliance, discouraging the engagement of ideas around quality improvements (Shah and Nair, 2011, p. 142). This was attributed in part to overlapping and burdensome processes, competing notions of quality, a failure to engage learning and transformation and a focus on accountability and compliance rather than on continuous improvement and self-review (Shah and Nair, 2011, p. 142). Further comparisons would be useful in establishing if there were in fact marked differences between Australian and UK experiences of external quality review, why this might be the case and the extent to which there were comparable factors in play.

Finally, external review strategies are sometimes mistakenly held to carry the weight of their influence in their own right, exerting their influence through audit events alone. Alongside the other strategies outlined above, it is easier to see how external review may work in concert with other strategies to support and encourage the continuous improvement efforts of providers to best effect. The success of AUQA audits and other external reviews was, arguably, supported by a clear set of reporting protocols, be that in the public domain or in the quality portfolios used as the basis of review. Transparency was lent to the activities of providers through the publication of audit reports for each institution and the selective publication of evidence from educational profiles and institutional portfolios. In effect, therefore, the scope of external review is potentially very broad, limited largely by the kind of reporting requirements that typically form the back-drop for each review. The measure of success for external review may be found at least to some degree in the quality of publicly available information on the activities and performance of providers as much as in the 'style' of monitoring, improvement and enhancement activity they have been found to promote within the institution before, during and after each audit event.

So has external review actually led to improvements in the quality of the student experience? It would be difficult to establish conclusively that any demonstrable improvements in the quality of the student experience could have been as a direct result of external audit. In other respects, however, the student experience of quality assurance represents a noteworthy mirror to quality assurance of the student experience, one that in Australia at least reflects mixed results, at best (Patil, 2007; Palmer, 2008; Gvaramadze, 2011). Based on the European experience, students as a group seem less convinced about the positive effects of the now seemingly endless evaluation activities they are asked to participate in (Stensaker et al., 2011, p. 476). Further to this, the somewhat artificial construction of 'the student voice' and 'the student experience' in higher education policy and quality assurance circles at the moment suggests the risk of these succumbing to the marketing and myth-making activities of universities rather than their being legitimate matters for inclusion in the scope of continuous improvement efforts. A more optimistic take on this may be that featuring as part of the 'gloss' of what universities promote is precisely the path to being 'in scope' for continuous improvement purposes.

External quality audit has proved to be an effective means for bringing a broad range of activities and outcomes into the scope of review where supported by quality strategies comparable with the aims of review. Despite the limitations and shortcomings noted above, external quality audits can provide an effective vehicle for engagement on quality issues. They also provide a stimulus for innovation in quality assurance and quality enhancement activities. Finally, they offer an opportunity for institutions to demonstrate that the activities and outcomes they support are at or above a reasonable standard, and that they have identified and prioritised resources to address areas where they may be underperforming.

Conclusion: the merits of external review and prospects for future development

It has been proposed here that external review has made a difference in enhancing the higher education student experience in Australia. Compared with other system-level quality strategies, improvements yielded through external review have included expanding the scope of activities worthy of consideration for continuous improvement efforts, improved transparency in the activities and outcomes supported by institutions and qualitative improvement in the approach taken to continuous improvement within institutions. There are clearly strengths to the fitness-for-purpose approach supported by external audit that are worthy of consideration under a regulatory framework, compared with other system-level quality strategies, and it is not necessarily the case that one need come at the expense of the other. Merits of external review include effectively encouraging an expansion in the scope of activities worthy of consideration for quality assurance purposes and promoting greater transparency in quality assurance activity and in the broader activities and outcomes supported by institutions. Finally, and perhaps most importantly, external review appears to have been effective in many cases in promoting a culture of self-review on the part of higher education providers in Australia. Overall, external review quality strategies have yielded the greatest improvement in quality in Australian higher education in cases where they have served to promote a culture of innovation and improvement in quality-enhancement activities. They also appear effective in supporting the constructive engagement of stakeholders on quality issues. They have also led to improved transparency in demonstrable evidence not just of outcomes, but also of the improvement and enhancement activities of institutions, supporting an environment where good practice is not only acknowledged but shared.

Is it possible to develop a system-level quality strategy that effectively integrates regulation and standards with a fitness-for-purpose approach? The answer perhaps lies in removing some of the mis-characterisation of each of the approaches in terms that imply that each is opposed to the others. This question remains largely untested in the context of Australian higher education. Instrumental in supporting positive outcomes in a regulatory environment will be recognition that each new approach exists against a background of former system-level quality initiatives. While each may be found to have its own strengths and weaknesses, each iteration has more or less sought to build on the strengths of previous approaches, in addition to seeking to address their shortcomings. We should hope that the current iteration is no different.

Note

1. Reports of these audits are archived at http://pandora.nla.gov.au/pan/127066/20110826-0004/www.auqa.edu.au/qualityaudit/index.html (AUQA, 2011).

References

Access Economics (2005) Review of Higher Education Outcome Performance Indicators. Canberra, Australia: Department of Education, Science and Training, Commonwealth of Australia.

Adams, R., Strong, J., Mattick, L.E., McManus, M.L., Matthews, K.E. and Foster, J. (2008) Self-Review for Higher Education Institutions. Melbourne, Australia: Australian Universities Quality Agency.

Alcock, C.A., Cooper, J., Kirk, J. and Oyler, K. (2009) The Tertiary Student Experience: A Review of Approaches Used on the First Cycle of AUQA Audits 2002-2007 (No. 20). Melbourne, Australia: Australian Universities Quality Agency.

Anderson, D., Johnson, R. and Milligan, B. (2000) Quality Assurance and Accreditation in Australian Higher Education: An Assessment of Australian and International Practice. Canberra, Australia: Evaluations and Investigations Programme, Higher Education Division.

ARC (2011) Excellence in Research for Australia 2010: National Report. Canberra, Australia: Australian Research Council (ARC), Commonwealth of Australia.

AUQA (2011) Australian Universities Quality Agency (Archive). Available from: http://pandora.nla.gov.au/pan/127066/20110826-0004/www.auqa.edu.au/qualityaudit/ [Accessed February 2012].

Australian Government (2009) Transforming Australia's Higher Education System. Canberra, Australia: Department of Education, Employment and Workplace Relations, Commonwealth of Australia.

Baird, J. (2007) Quality in and around universities. Paper presented at Regional Conference on Quality in Higher Education, 10-11 December, Kuala Lumpur. Available from: http://www.auqa.edu.au/files/presentations/quality_in_and_around_universities.pdf [Accessed February 2012].

Baldwin, P. (1991) Higher Education: Quality and Diversity in the 1990s: Policy Statement by the Hon. Peter Baldwin, MP, Minister for Higher Education and Employment Services. Canberra, Australia: Australian Government Publishing Service.

Bourke, P. (1986) Quality Measures in Universities. Canberra, Australia: Commonwealth Tertiary Education Commission.

Bradley, D., Noonan, P., Nugent, H. and Scales, B. (2008) Review of Australian Higher Education: Final Report. Canberra, Australia: Department of Education, Employment and Workplace Relations, Commonwealth of Australia.

Caldwell, G., Johnson, R. and Anderson, D.S. (1994) Report on the Impact of the Discipline Review of Engineering. Canberra, Australia: Evaluations and Investigations Program.

Cave, M., Hanney, S., Kogan, M. and Trevett, G. (1991) The Use of Performance Indicators in Higher Education: A Critical Analysis of Developing Practice, 2nd edn. London, UK: Jessica Kingsley Publishers.

Chalmers, D. (2007) A Review of Australian and International Quality Systems and Indicators of Learning and Teaching. Sydney, Australia: Carrick Institute for Learning and Teaching in Higher Education.

Chalmers, D. (2008) Indicators of University Teaching and Learning Quality. Sydney, Australia: Australian Learning and Teaching Council.

Chudleigh, J.W., McColl, J.C. and Robson, A.D. (1991) Report of the Review of Agricultural and Related Education. Canberra, Australia: Evaluations and Investigations Programme, Commonwealth of Australia.

Committee on Australian Universities (1957) Report of the Committee on Australian Universities. Canberra, Australia: Commonwealth Government Printer.

Committee on the Future of Tertiary Education in Australia (1964) Tertiary Education in Australia: Report of the Committee on the Future of Tertiary Education in Australia to the Australian Universities Commission. Canberra, Australia: AGPS.

CTEC (1986) Review of Efficiency and Effectiveness in Higher Education: Report of the Committee of Enquiry. Canberra, Australia: Commonwealth Tertiary Education Commission.

Dawkins, J. (1987) Higher Education: A Policy Discussion Paper. Canberra: Australian Government Publishing Service.

Dawkins, J. (1988) Higher Education: A Policy Statement (No. 064408300X). Canberra, Australia: Australian Government Publishing Service.

DEST (2003) National Report on Higher Education in Australia 2001. Canberra, Australia: Department of Education, Science and Training.

DEST (2004) Learning and Teaching Performance Fund (Issues Paper). Canberra, ACT: Department of Education Science and Training.

DETYA (2000) The Australian Higher Education Quality Assurance Framework. Canberra, Australia: Higher Education Division, Department of Education, Training and Youth Affairs.

DETYA (2001) Quality of Australian Higher Education: Institutional Quality Assurance and Improvement Plans for the 2001-2003 Triennium (DETYA No. 6666.HERC01A). Canberra, ACT: Department of Education, Training and Youth Affairs, Commonwealth of Australia.

Employment, Education and Training Act 1988, (cwth) 80 (1988).

Gallagher, M. (2010) The Accountability for Quality Agenda in Higher Education. Canberra, Australia: The Group of Eight.

Gvaramadze, I. (2011) Student engagement in the Scottish Quality Enhancement Framework. Quality in Higher Education, 17 (1), 19-36.

Harvey, L. (2002) The end of quality? Quality in Higher Education, 8 (1), 5-22.

Higher Education Council (1992) Higher Education: Achieving Quality. Canberra, Australia: Australian Government Publishing Service.

Higher Education Council (1995) The Promotion of Quality and Innovation in Higher Education: Advice of the Higher Education Council on the Use of Discretionary Funds. Canberra, Australia: Australian Government Publishing Service.

Higher Education Funding Act 1988, (cwth) 2 (1989).

Hoare, D. (1995) Higher Education Management Review: Report of the Committee of Inquiry. Canberra, Australia: Department of Employment, Education and Training.

Hudson, H. (1992) Report of the Discipline Review of Computing Studies and Information Sciences Education (No. 92/190). Canberra, Australia: Information Industries Education and Training Foundation; Evaluation and Investigations Program.

Kemp, The Hon. Dr D.A. MP (1999) Knowledge and Innovation: A Policy Statement on Research and Research Training. Canberra, ACT: Commonwealth of Australia.

Linke, R.D. (ed.) (1991) Performance Indicators in Higher Education: Report of a Trial Evaluation Study Commissioned by the Commonwealth Department of Employment, Education and Training (Vol. 1: Report and Recommendations). Canberra, ACT: Performance Indicators Research Group; Department of Employment, Education and Training.

Mathews, R., Jackson, M. and Brown, P. (1990) Accounting in Higher Education: Report of the Review of the Accounting Discipline in Higher Education. Canberra, Australia: Evaluations and Investigations Programme, Commonwealth of Australia.

Nelson, B. (2003) Our Universities: Backing Australia's Future. Canberra, Australia: Department of Education Science and Training, Commonwealth of Australia.

Nelson, T.H.B. (2002) Higher Education at the Crossroads: An Overview Paper. Canberra, Australia: Department of Education, Science and Training.

Palmer, N. (2008) The Impact of VSU on Services, Amenities and Representation for Australian Students (Response to Discussion Paper). Carlton, VIC: Council of Australian Postgraduate Associations.

Palmer, N. (2011) Development of the University Experience Survey: report on findings from secondary sources of information. In A. Radloff, H. Coates, R. James and K.-L. Krause (eds), Report on the Development of the University Experience Survey. Canberra, Australia: Department of Education, Employment and Workplace Relations.


Palmer, N., Bexley, E. and James, R. (2011) Selection and Participation in Higher Education. Melbourne, Australia: Centre for the Study of Higher Education for The Group of Eight.

Patil, J. (2007) Student Participation in Quality Assurance. Melbourne, Australia: Asia-Pacific Quality Network.

Pearce, D., Campbell, E. and Harding, D. (1987) Australian Law Schools: A Discipline Assessment for the Commonwealth Tertiary Education Commission. Canberra, Australia: Evaluations and Investigations Programme, Commonwealth Tertiary Education Commission.

Ramsden, P. (1991a) A performance indicator of teaching quality in higher education: the Course Experience Questionnaire. Studies in Higher Education, 16 (2), 129-50.

Ramsden, P. (1991b) Report on the Course Experience Questionnaire trial. In R.D. Linke (ed.), Performance Indicators in Higher Education: Report of a Trial Evaluation Study Commissioned by the Commonwealth Department of Employment, Education and Training (Vol. 2: Supplementary Papers). Canberra: Australian Government Publishing Service.

Ramsden, P. and Entwistle, N.J. (1981) Effects of academic departments on students' approaches to studying. British Journal of Educational Psychology, 51 (3), 368-83.

Shah, M. and Nair, S. (2011) The influence of strategy and external quality audit on university performance: an Australian perspective. Tertiary Education and Management, 17, pp. 139-50.

Shah, M., Roth, K. and Nair, S. (2010) Improving the quality of offshore student experience: findings of a decade in three Australian universities. Proceedings of the Australian International Education (AIEC) Conference. Sydney, Australia, 12-15 October. Available from: http://www.aiec.idp.com/pdf/Improving%20the%20Quality%20of%20Offshore%20Student%20Experience_PeerReviewed.pdf [Accessed 29 October 2012].

Shah, M., Skaines, I. and Miller, J. (2007) Measuring the impact of external quality audit on universities: can external quality audit be credited for improvements and enhancement in student learning? How can we measure? Proceedings of AUQF 2007: Evolution and Renewal in Quality Assurance, pp. 136-42.

Speedy, G.W., Annice, C. and Fensham, P.J. (1989) Discipline Review of Teacher Education in Mathematics and Science. Canberra, Australia: Australian Government Publishing Service.

States Grants (Tertiary Education Assistance) Act 1987, (cwth) 123 (1987).

Stensaker, B., Langfeldt, L., Harvey, L., Huisman, J. and Westerheijden, D. (2011) An in-depth study on the impact of external quality assurance. Assessment & Evaluation in Higher Education, 36 (4), pp. 465-78.

Tertiary Education Commission Act 1977, (cwth) 25 (1977).

The Australian Government (2001) Backing Australia's Ability: An Innovation Action Plan for the Future. Canberra, Australia: Commonwealth Publishing Service.

Vidovich, L. and Porter, P. (1999) Quality policy in Australian higher education of the 1990s: university perspectives. Journal of Education Policy, 14, pp. 567-86.


West, R. (1998) Learning for Life: Final report of the Review of Higher Education Financing and Policy. Canberra, Australia: Department of Employment, Education, Training and Youth Affairs.

Whitehead, G. (1993) The Influence of Discipline Reviews on Higher Education: Review of Teacher Education in Mathematics and Science. Canberra, Australia: Australian Government Publishing Service.

Williams, B. (1988) Review of the Discipline of Engineering. Canberra, Australia: Evaluations and Investigations Programme, Commonwealth of Australia.

Woodhouse, D. (2006) Quality frameworks for institutions. In J. Baird (ed.), Quality Frameworks: Reflections from Australian Universities. Melbourne, Australia: Australian Universities Quality Agency.


Accreditation and institutional learning: the impact interactions based on a minimaxing strategy have on the benefits from external reviews

Fernando Padro

Abstract: This chapter is a personal epistemology of the potential limitations on the organizational learning that universities can actually achieve through external review processes. The goal is to engender a discussion of institutional learning from the perspective of risk, and of how an understanding of the potential limitations can enhance the capacity for learning. The chapter begins with a description of the perceptual and structural challenges to institutional learning emanating from external review processes, which can help to explain the hurdles that have to be overcome in designing and implementing internal and external quality assurance processes. This is followed by a discussion of the nature of learning, of when learning happens, of the limitations on organizational learning, and of the institutional mechanisms and values that affect the potential for learning vis-à-vis acceptance of the appropriateness of, and basis for, external demands for accountability.

Key words: capacity for change, capacity for risk, higher education, minimaxing behaviour, limitations to organizational learning, organizational buy-in, organizational learning, paradox of learning, university organizational climate.

Introduction

Because change is a more arresting phenomenon than persistence, we are apt to overlook the fact that in an institution or culture
