Function allocation in complex systems: reframing an old problem

This article was downloaded by: [University of Leeds] on 06 January 2015, at 02:17.
Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Ergonomics publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/terg20

Published online: 08 May 2013.

To cite this article: Rose Challenger, Chris W. Clegg & Craig Shepherd (2013) Function allocation in complex systems: reframing an old problem, Ergonomics, 56:7, 1051-1069, DOI: 10.1080/00140139.2013.790482

To link to this article: http://dx.doi.org/10.1080/00140139.2013.790482

Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions


Function allocation in complex systems: reframing an old problem

Rose Challengera*, Chris W. Cleggb and Craig Shepherdc

aInstitute of Work Psychology, Management School, The University of Sheffield, Sheffield S10 2TN, UK; bSocio-Technical Centre, Leeds University Business School, University of Leeds, Leeds LS2 9JT, UK; cNottingham University Business School, Jubilee Campus, Nottingham NG8 1BB, UK

(Received 6 February 2012; final version received 24 March 2013)

In this article, we offer a new, macroergonomics perspective on the long-debated issue of function allocation. We believe thinking in this domain needs to be realigned, moving away from the traditional microergonomics conceptualisation, concerned predominantly with task-based decisions, and towards a macroergonomics approach, viewing function allocation choices as central to effective systems design. We frame our arguments within a systems perspective, advocating that function allocation issues need to be on the agenda of all individuals with a wider interest in the human and organisational aspects of complex work systems, including people who commission, sponsor, design, implement and use such systems. We also argue that allocation decisions should form a transparent, explicit stage early in the systems design and development process, involve multiple stakeholders (including end-users), be evidence-based, framed within the language of risk and utilise iterative methods (e.g. scenario planning techniques).

Practitioner Summary: This article presents a macroergonomics approach to function allocation, advocating its importance in effective systems design. Adopting a systems mindset, we argue function allocation should form an explicit stage early in the design process, involve multiple stakeholders, be evidence-based, framed within the language of risk and utilise iterative methods.

Keywords: allocation of function; complex systems; socio-technical systems; human–machine systems

1. Introduction

Our aim in this article is to reframe a problem known in the human factors and ergonomics community as ‘allocation of function’. This concerns the decisions that system designers make on whether specific functions (or tasks) are ‘allocated’ to humans or to some form of automation, which increasingly, in the modern era, involves some form of information and communications technology. Although this has been a topic of enquiry for more than half a century, we believe there are several problems relating to both theory and practice which need to be addressed. Historically, the area has focused predominantly on the allocation of physical tasks and on whether a human or a machine is better suited to undertake such tasks. The orientation has been individualistic and the topic has been viewed as a microergonomics problem, in large part related to longstanding concerns with operator workload, fatigue and vigilance, i.e. with variables that have historically fallen within the domain of the microergonomics specialist (e.g. Chapanis 1970; Wickens and Hollands 2000; Kirwan 2001; Wilson 2005). A further problem has been that, in practice, such allocations often happen by default, typically emerging during conceptual or detailed design. On the rare occasions when these decisions are made explicitly, they typically happen relatively late in the process, during detailed design.

One unfortunate consequence of this orientation is that these issues are not usually discussed by people with a wider macroergonomics viewpoint. Our argument is that such decisions over the roles of humans and computers in complex systems need to be addressed explicitly and strategically at an early stage within the design process, and to be on the agenda of people who commission, sponsor, design, implement, use and evaluate these systems. Therefore, this article aims to bring function allocation to the attention of those with a wider interest in the human and organisational aspects of complex work systems, and to offer a way forward for the treatment of such issues. In essence, we wish to offer a new perspective on an old problem.

To be clear, we are not arguing for the abandonment of the traditional human factors perspective and the importance of specifically assigning tasks to humans or to machines. Nor are we disputing that advances have been made. Indeed, researchers are increasingly considering function allocation as a more dynamic, adaptive process, to accommodate the changing nature of modern working practices (e.g. Hancock and Scallen 1996; Lagu and Landry 2011), and are exploring different techniques to guide allocation decisions (e.g. Dearden, Harrison, and Wright 2000; Grote et al. 2000; Salmon et al. 2010). Rather, we simply wish to extend current knowledge and practice in the field, by offering a complementary perspective.

*Corresponding author. Email: [email protected]

© 2013 Taylor & Francis

1.1 A systems approach

Theoretically, our ideas are framed within the tradition of systems thinking, drawing on the well-established ideas of socio-technical systems theory (e.g. Trist and Bamforth 1951; Cherns 1976, 1987; Clegg 2000; Mumford 2006) and systems engineering (e.g. Goode and Machol 1957; Sage 1977, 1992; Blanchard and Fabrycky 1981). Such theories are founded on the premise that design is systemic and inherently complex (e.g. Hendrick 1995; Clegg 2000). Thus, when designing and implementing any new organisational or engineering system, a holistic approach is required, jointly optimising both human and technological subsystems as early in the process as possible (e.g. Bostrom and Heinen 1977a, 1977b; Hendrick 1995, 1997, 2002; Checkland 1999; Bostrom, Gupta, and Thomas 2009). Technology alone should not drive organisational changes; new systems should be designed to accommodate the needs of the organisation and its employees, rather than to simply keep pace with technology-driven modernisation (Clegg 2000).

In this perspective, it is inevitable that changes to one part of the system will necessitate changes to other parts; thereby, the whole system should be considered as holistic and highly interrelated if its success is to be optimised (e.g. Hendrick 1997). Consequently, a wide range of factors, both internal and external to the organisation, need to be taken into account when designing and implementing any new system, including individual employees, work processes and procedures, technologies and tasks, job designs and work organisation, buildings and infrastructure, organisational goals and cultures, political influences, and regulatory requirements and standards (e.g. Hall 1962; Jenkins and Youle 1971; Rasmussen 1997; Clegg 2000; Bostrom et al. 2009). For instance, Rasmussen’s (1997) socio-technical risk management framework advocates the importance of considering lower level factors (e.g. work, staff and management) through to higher level factors (e.g. company, regulatory and government) in order to gain a picture of the system in its wider context.

Furthermore, it is unlikely that any one individual or group will understand all the component parts when considering the system as a whole. As such, it is vital that the design of any new system involves multiple stakeholders with a wide and complementary range of knowledge, skills and expertise, such as end-users, managers, human resource experts, designers, strategists and clients (e.g. Sage 1977, 1992; Clegg, Older Gray, and Waterson 2000; Kappelman, McKeeman, and Zhang 2006). In particular, systems thinking emphasises the importance of end-user participation and ownership over the design and implementation of organisational systems (e.g. Blanchard and Fabrycky 1981; Clegg and Walsh 2004; Mumford 2006). All too often, designers create new systems and then hand them over to end-users; yet it is much more effective if end-users take ownership of the process, from the initial conceptualisation and design through to implementation and beyond, adapting it to better meet their needs and controlling problems at source (e.g. Bostrom and Heinen 1977a, 1977b; Eason 2007).

In the following section, we present a literature review that discusses what function allocation is traditionally considered to be, why it is an important issue to study and where key function allocation problems typically arise. Thereafter, we introduce two case studies involving electronic care records to highlight the shortcomings typically experienced in practice when reallocating functions from humans to technology without adopting a systemic approach to design. The first case study concerns the National Health Service (NHS) Summary Care Record (SCR), part of the electronic Care Records Service (CRS) within the UK National Programme for Information Technology (NPfIT). It is intended to hold electronically every patient’s medical records, enabling information to be safely and securely accessible to healthcare professionals and patients, whenever and wherever needed (Connecting for Health 2005). The second case concerns the Integrated Children’s System (ICS), a government-led IT initiative within child welfare services intended to improve child safety, increase standardisation of care and enhance process efficiency (Department of Health 2000, 2002).

Next, building on the analysis of the case studies, we introduce our own macroergonomics approach towards function allocation, describing what we believe it to be and explaining when it should be undertaken, who should be involved and how it should be carried out. Specifically, we argue function allocation should be an explicit phase early in the design process, involving a multidisciplinary team so as to incorporate the differing perspectives, knowledge and expertise of a range of organisational actors, including end-users. We argue this process should be evidence-based, framed in the language of risk and supported by ‘quick and dirty’ iterative techniques, such as rapid application development (RAD), scenario planning and storyboarding. Finally, we conclude the article by summarising our main contributions and implications for future research and practice.

2. Literature review

Searching ISI Web of Knowledge for peer-reviewed journal articles is helpful in drawing authoritative conclusions about the status of the literature on a particular topic (e.g. Tatli and Ozbilgin 2012). Consequently, we used ISI Web of Knowledge to search for English language articles with the terms ‘allocation of function*’ or ‘function allocation*’ in their title, abstract or keywords. The search criteria were deliberately kept as inclusive as possible, to ensure that relevant articles from multiple disciplines were captured, which could help enhance our understanding of function allocation in organisations. This yielded a total of 122 peer-reviewed articles. Although these dated back to 1932, the search confirmed that the topic of function allocation remains of contemporary interest, with 21% (26 out of 122) of the articles published in the past 5 years (since the beginning of 2008) alone.
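The proportion quoted above can be reproduced with simple arithmetic; a minimal sketch (the counts come from the search described above, and the code itself is purely illustrative):

```python
# Share of the 122 retrieved articles published in the past 5 years
# (counts taken from the ISI Web of Knowledge search described above).
total_articles = 122
recent_articles = 26  # published since the beginning of 2008

share = recent_articles / total_articles
print(f"{share:.0%} of the articles are from the past 5 years")  # 21%
```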

To establish whether these articles were directly related to function allocation in an organisational context, the abstracts were read and, where the contribution was unclear, the full article. Disciplines such as zoology, plant biology, oceanography, surgery and cell biology were filtered out at this stage, as they were not directly relevant. The reference lists within the remaining articles were then cross-checked to ensure relevant ones were not missed by the database search. This also enabled the identification of scholarly books and book chapters that were cited consistently (e.g. Fitts 1951; Fallon, Bannon, and McCarthy 1997).
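The screening steps just described amount to a simple filter-then-check workflow; the sketch below is a hypothetical illustration of that logic only (the article records and field names are invented, not taken from the study):

```python
# Hypothetical sketch of the screening logic described above: exclude
# articles from disciplines not directly relevant to function allocation
# in an organisational context.
EXCLUDED_DISCIPLINES = {"zoology", "plant biology", "oceanography",
                        "surgery", "cell biology"}

def is_relevant(article: dict) -> bool:
    """Keep an article unless it belongs to an excluded discipline."""
    return article["discipline"] not in EXCLUDED_DISCIPLINES

# Invented example records for illustration.
articles = [
    {"title": "Allocation of function in control rooms", "discipline": "ergonomics"},
    {"title": "Carbon allocation and plant function", "discipline": "plant biology"},
]
relevant = [a for a in articles if is_relevant(a)]
print(len(relevant))  # 1
```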

In total, the literature search identified 104 articles/chapters relevant to understanding function allocation within an organisational context. These contributions originated across a diverse range of disciplines. For instance, the majority of articles have been published in the domain of human factors and ergonomics (e.g. Edwards and Lees 1974; Price 1985; Clegg 1988; Jones, Hapeshi, and Frankish 1989; Mital et al. 1994a, 1994b; Grote et al. 1995; Scallen, Hancock, and Duley 1995; Kaber, Onal, and Endsley 2000; Kirwan 2001; Kleiner and Shewchuk 2001; Waterson, Older Gray, and Clegg 2002; Jiang et al. 2003, 2004; Master et al. 2005; Boff 2006; Holness, Drury, and Batta 2006; Stanton 2006; Roth 2008; Lagu and Landry 2011; Zhang, Tang, and Zhang 2011; Landry 2012; Millot and Boy 2012; Sauer, Nickel, and Wastell 2013). Indeed, 15 articles were published in the journal Ergonomics alone (Whitfield 1967; Noro and Okada 1983; Lee and Moray 1992; Muir 1994; Page 1995; Drury and Forsman 1996; Muir and Moray 1996; Older, Waterson, and Clegg 1997; Endsley and Kaber 1999; Moscoso, Wafler, and Windischer 1999; Hoc 2000; Casler and Cook 2003; Gregoriades and Sutcliffe 2006; Jenkins et al. 2008; Sauer et al. 2011).

However, function allocation has also been of considerable interest in the disciplines of psychology (e.g. Jordan 1963; Chapanis 1965; Moray, Inagaki, and Itoh 2000; Hoc and Debernard 2002; Blomberg 2011; Nickerson 2011; Lintern 2012), systems engineering (e.g. Stanton and Marsden 1997; Bye, Hollnagel, and Brendeford 1999; Moray and Inagaki 1999; Jiang et al. 2002; Bate and Kelly 2003; Jung, Seong, and Kim 2004; Lin, Yenn, and Yang 2010; Sutcliffe, Thew, and Jarvis 2011; Hardman and Colombi 2012) and computer science (e.g. O’Neal, Lively, and Sheppard 1989; Garg and Salvendy 1991; Rouse, Cody, and Frey 1992; Agou, Raskin, and Salvendy 1993; Chi and Drury 1998; Siemieniuch, Sinclair, and Vaughan 1999; Clegg et al. 2000; Dearden et al. 2000; Fuld 2000; Grote et al. 2000; Hollnagel and Bye 2000; Ivergard 2000; McCarthy, Fallon, and Bannon 2000; Parasuraman, Sheridan, and Wickens 2000; Rognin, Salembier, and Zouinar 2000; Sheridan 2000; Strain and Eason 2000; Wright, Dearden, and Fields 2000; Wang et al. 2010; Prevot et al. 2012).

Having outlined the process through which relevant literature was identified, the following sections analyse these articles to address what function allocation is traditionally considered to be, why it is important and where key problems typically arise.

2.1 What is function allocation traditionally?

The concept of function allocation – ‘the allocation of functions or tasks between the humans and machines in a system’ (Clegg et al. 2000, 238) – has been widely debated in the human factors and engineering literature since the publication of Paul Fitts’ list of the contrasting abilities of humans and machines, borne out of research into human engineering for air traffic control (Fitts 1951). Yet, whilst its theoretical importance for systems design is generally acknowledged, function allocation is rarely discussed explicitly during the design process, but is typically sidelined or disregarded (e.g. Older et al. 1997; Fuld 2000). In practice, function allocation does not exist as a discrete, identifiable phase but, rather, is often implicit in design activities and assumptions, or emerges as the system evolves (e.g. Kidd 1992). Indeed, Fuld (1993) goes as far as to argue that there is no such process in the development of a new system.

On the occasions when function allocations are considered, they are predominantly task focused, wherein specific tasks are allocated either to humans or to machines, and fail to take into account wider social and organisational issues (e.g. Meister 1987; Clegg et al. 1989; Siemieniuch et al. 1999; Grote et al. 2000; Hollnagel and Bye 2000; Rognin et al. 2000; Waterson et al. 2002; Gregoriades and Sutcliffe 2006; Jenkins et al. 2008). Traditionally, allocation decisions are based upon some form of HABA–MABA list (‘humans are better at – machines are better at’, e.g. Fitts 1951; Chapanis 1960, 1965; Dekker and Woods 2002). Tasks are assigned to either humans or technology, depending on which is more ‘capable’ of performing the task effectively, often referred to as ‘function allocation by substitution’ or the ‘compensatory’ approach (e.g. Parasuraman et al. 2000). However, this relies on the assumption that the relative strengths and weaknesses of humans and machines are static and, therefore, that new technologies can simply be implemented as substitutes for individuals, without consequence (e.g. Hoc 2000; Dekker and Woods 2002; Lagu and Landry 2011).

Moreover, given that humans are often considered to be sources of error and unpredictability (e.g. Clegg et al. 1989), there is often an explicit or implicit preference for automating as many functions as possible and allocating only those that remain to humans, namely ones which cannot be automated due to either technical feasibility or cost; the so-called ‘left-over’ approach (e.g. Hendrick 1995; McCarthy et al. 2000). However, Jordan (1963) argued that humans and machines should be considered as congruent system components, each having particular skills and abilities at which they are best. Thus, according to the ‘complementary’ approach, tasks should be assigned so as to exploit the capabilities of both humans and machines to the full (e.g. Moscoso et al. 1999; Sheridan 2000).
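The three traditional heuristics discussed above (allocation by substitution using HABA–MABA comparisons, the ‘left-over’ approach, and the ‘complementary’ approach) can be contrasted in a minimal sketch; the task attributes, capability scores and thresholds below are hypothetical illustrations of the underlying decision logic, not drawn from the literature reviewed here:

```python
# Illustrative contrast of three traditional allocation heuristics.
# All task attributes and scores are hypothetical.

def allocate_by_substitution(task):
    """HABA-MABA: assign the task to whichever agent is 'better at' it."""
    return "machine" if task["machine_score"] > task["human_score"] else "human"

def allocate_left_over(task):
    """Left-over: automate everything technically feasible and affordable;
    humans get only what remains."""
    if task["automatable"] and task["automation_cost"] <= task["budget"]:
        return "machine"
    return "human"

def allocate_complementary(task):
    """Complementary: treat humans and machines as congruent components and
    share the task whenever both can contribute (here, a joint allocation
    when both agents score above an arbitrary threshold)."""
    if task["human_score"] >= 0.5 and task["machine_score"] >= 0.5:
        return "human+machine"
    return allocate_by_substitution(task)

task = {"machine_score": 0.9, "human_score": 0.7,
        "automatable": True, "automation_cost": 10, "budget": 50}
print(allocate_by_substitution(task))  # machine
print(allocate_left_over(task))        # machine
print(allocate_complementary(task))    # human+machine
```

Note how the first two heuristics exclude the human wherever the machine scores well, whereas the complementary rule keeps both agents engaged; this is the contrast Jordan (1963) draws.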

We acknowledge that there are a few exceptions to the situation as described above, the main one being found within military and defence-related domains, where function allocation is considered to be an important part of systems design and development (e.g. McDaniel 1996; Corbridge and Cook 1997; Jenkins et al. 2008). Indeed, UK Ministry of Defence (MOD 1989) standards specify function allocation as an explicit stage in the design process, occurring early on, before equipment is designed in detail or prototypes are constructed (Dearden et al. 2000). The same broad approaches are utilised in US military standards, where function allocation is again regarded as a distinct phase in system design (e.g. MIL-STD-46855 1994; McDaniel 1996). Similarly, the international standards for the design of control centres denote function allocation as a specific stage occurring early in the design process, prior to conceptual and detailed design (e.g. International Organization for Standardization 2000). However, so far as we can ascertain, this recognition of the importance of function allocation within defence-related domains remains predominantly task-based in focus and has not been successfully transferred into wider civil applications. We believe there is still a need for a reorientation in thinking and mindsets in relation to function allocation issues, and to get this topic onto the agenda of a much wider community.

Interestingly, some of the leading journals and major textbooks in the area of information systems development (e.g. Information Systems Journal, Information Systems Research, Information Systems Development: Methodologies, Techniques and Tools) do not address the issue of function allocation, lending support to the proposition that the issue is not regarded as a significant mainstream problem. However, in our view, function allocation is concerned with the roles of humans and machines in the operation and management of all complex systems. In any system, there are tasks to be undertaken and interdependent design decisions to be made based upon the division of these tasks, both between and amongst humans and technologies. The former addresses the issue of allocation of function, whilst the latter addresses the topic of job design. We believe explicit consideration of function allocation (and job design) ensures that these issues get on the agenda for systems design. Indeed, we would go as far as to say that the respective roles of humans and technologies are one of the critical issues in the design of any complex system; they are absolutely central to how the overall system will operate and be managed in practice. If, for whatever reason, these design decisions are not well made, then the system is likely to be sub-optimal.

Incidentally, we are not arguing here for some inherent technological determinism whereby design decisions entirely prescribe how the system will be operated and managed. We acknowledge that choices remain, for example, in job design and work organisation, and that the way in which systems operate can, in practice, evolve over time. But, as we will explore below, systems design, including the allocation of functions, influences and constrains the way technologies are used and supported and, therefore, deserves our attention.

2.2 Why is function allocation important?

It is well documented that complex work systems often fail to deliver their anticipated benefits, with evidence arising from surveys, case studies, public enquiries, expert panels, economic analyses and longitudinal studies of productivity (e.g. Landauer 1995; Clegg et al. 1997; Brynjolfsson and Hitt 1998; Lippert and Davis 2006). The most recent example of this is in the UK NHS, where NPfIT – considered to be the most extensive IT healthcare development of its kind in the world – is currently being reviewed and in large part dismantled. NPfIT was a 10-year change programme within the NHS, established in October 2002 and predicted to cost around £12.4 billion (Brennan 2005; House of Commons Committee of Public Accounts 2007). Its overarching vision was ‘delivering better care’ to patients, by equipping staff with a host of new information technologies designed to provide them with ‘the latest information at their fingertips’ (Connecting for Health 2005, 2). However, as a review by the Cabinet Office Major Projects Authority reports, ‘there can be no confidence that the programme has delivered or can be delivered as originally conceived’ (2011, 4). We will return to this example of NPfIT later in the article.

One of many reasons for these failings concerns the choices made for the design of systems, including the allocation of functions to humans or technology (e.g. Brennan 1984; Older et al. 1997; Waterson et al. 2002). Issues such as how the overall system will operate, how the various processes and tasks will be organised and managed, and what technologies and other organisational systems will be required to provide support, are all influential (Clegg 2000). Indeed, one of the key principles of design offered by socio-technical systems theory advocates that the allocation of functions amongst and between humans and machines is central to the process of systems design (e.g. Clegg 2000). There is plentiful evidence that change driven by technology (so-called ‘technology-push’) without due consideration of the social and organisational factors, i.e. without taking a systems approach, can result in failure (e.g. Symon and Clegg 1991; Kappelman et al. 2006; Clegg and Shepherd 2007; Eason 2007; Baxter and Sommerville 2011). Yet currently, as technology supply industries advance at a rapid pace and vast amounts of money are invested, there is increasing pressure to automate processes (e.g. The Guardian 2011). As a result, individuals find many of their work roles being reallocated to technology, the consequences of which are discussed in more detail below. In our view, a systems approach to design, including rigorous consideration of the roles of both humans and machines, should be adopted in order to enhance system success.

2.3 Where do key problems typically arise?

We believe that allocating functions in the traditional ways outlined above, so as to capitalise on the strengths of technology whilst compensating for (or eliminating) the weaknesses of humans, merely serves to create unanticipated problems (e.g. Bainbridge 1983). Techno-centric mental models, and accompanying overconfidence in the capabilities of new technologies to solve complex problems, can lead system designers to become ‘spectacularly ill calibrated’ (Dekker and Woods 2002, 242) in relation to the real consequences of automating processes. Hence, they envisage only positive consequences, such as reduced workload and increased accuracy (e.g. Feltovich et al. 2004). However, when employees’ roles are altered from performing tasks themselves to monitoring the technology, the cognitive demands placed upon them are often increased, thereby indirectly increasing employee workloads (e.g. Bainbridge 1983; Sarter and Woods 1995; Parasuraman and Byrne 2003). Similarly, reallocating functions from humans to technology can lead to reduced situational awareness (e.g. Endsley and Kaber 1999; Kaber et al. 2000) and problems with vigilance (e.g. Broadbent 1958), along with losses of knowledge and expertise, as employees are required less and less to use their manual skills or recall information from their long-term memory (Bainbridge 1983). Subsequently, this results in an inability to cope when called upon in unexpected circumstances (e.g. Hollnagel and Bye 2000; Kirwan 2001).

Thus, through automating processes, a system’s capacity for adaptation may actually be reduced, increasing the inherent risks which automation was intended to remove (Bainbridge 1983; Strain and Eason 2000). As Bainbridge notes, there is an irony in ‘that the more advanced a control system is, so the more crucial may be the contribution of the human operator’ (1983, 775). This is a particular concern within the air traffic management industry, where automation is pervasive and the consequences of human errors and systems failures can be fatal (e.g. Wickens et al. 1998; Kirwan 2001). The amount of air traffic is also predicted to increase drastically over the next 20 years, thereby placing increasing demands on the systems and those who operate and monitor them (e.g. Sheridan, Corker, and Nadler 2006; Hollnagel 2007; Langan-Fox, Canty, and Sankey 2009; Prevot et al. 2012). Should parts of these automated systems fail, controllers may not be able to recall their obsolete skills in order to take over manually (Wickens 1999). Moreover, with new controllers being trained primarily in automated systems, their knowledge of manual systems may be only theoretical and, therefore, the necessary procedural skills to be able to take control will be lacking (Kirwan 2001). Fundamentally, the danger is that ‘when the computer system fails, the whole system fails due to the human being unable to recover it’ (Kirwan 2001, 168).

Furthermore, a key disadvantage of automating tasks previously allocated to humans is the inability of technology to take into consideration the multiple, and often conflicting, goals which frequently arise in real work settings (e.g. Wright et al. 2000). Consequently, employees often find themselves having to adapt their working practices in order to work around the technology and compensate for its constraints (e.g. Cook and Woods 1996; Berg 1997). Consider Choose and Book, for example, the electronic computer-based booking service within NPfIT. This seeks to enable patients to select and book their own specialist consultancy appointments, thereby increasing patient choice and speeding up the referral and booking processes (Connecting for Health 2005; Department of Health 2006a). However, doctors described the Choose and Book system as ‘a non-clinical and irrelevant task’ (Rabiei et al. 2009, 175) which is ‘completely unworkable’ (The British Medical Association 2009, 5). Many view the electronic process as simply increasing consultation time and workload, and providing inappropriate information and referrals (e.g. Bell 2007; Eason 2007; Currie 2008; Pisipati, Chow, and Payne 2009).

Automating work processes and practices can also result in a loss of all but their primary function, which can have

knock-on effects on issues such as staff well-being, engagement, motivation and job satisfaction. For example, the primary

function of the shift handover process amongst nurses on a hospital ward, which usually takes the form of an informal

meeting and briefing, is informational to ensure continuity of care (e.g. Kerr 2002). However, there is also an educational

component, involving experiential learning and teaching by example, in addition to initiating newcomers into the

organisational culture and upholding nursing norms (e.g. Ekman and Segesten 1995; Strange 1996). In addition, shift


handover has an organisational element, concerned with formulating immediate plans for the forthcoming shift (Kerr 2002).

It also has a social function, to provide staff with any support they need (Strange 1996), whilst serving as a period for

catching-up with colleagues and friends (Kerr 2002).

Consequently, if the shift handover process were to be reallocated to an electronic system, whilst the informational

function of the process could potentially be maintained, other functions – i.e. organisational, social and educational –

would either be lost or more likely would need to be provided elsewhere in the system. The apparent cost savings made by

reducing a social handover would either result in those functions not being undertaken (at a long-term cost to the organisation) or in their being undertaken by staff in some other way as a 'workaround'.

Similarly, studies on large private sector technological change projects, such as the implementation of enterprise

resource planning (ERP) systems, also highlight the limitations of a technology-led approach and the importance of

understanding how wider organisational factors shape how users interact with technology (e.g. Dery, Hall, and Wailes

2006; Shepherd 2006). For example, Shepherd (2006) studied the implementation of the Human Resources module of a

leading ERP system within a multinational manufacturing company. He describes how all employees in the UK were

expected to use it to maintain their own personal data (e.g. address, bank account details and telephone numbers), book their

own travel and accommodation, and enter their holidays, sickness, overtime and expense claims. However, the study

revealed some of these tasks were demanding and, at times, incongruent with employees’ professional identities. For

example, engineers described booking travel and accommodation and inputting expenses as particularly onerous and an

inefficient use of their time, distracting them from what they saw as their core work responsibilities. Consequently, there

was evidence of some of these users working around the system, by delegating these tasks to administrative colleagues.

In summary, research indicates multiple problems with the ways in which allocation of function decisions are

traditionally undertaken. These findings are reinforced by the case studies discussed below, which illustrate the problems

experienced when automating functions previously afforded to professionals without adopting a systemic approach to

design.

3. Case studies

In this section, we analyse two electronic systems designed to record and store care records within UK healthcare and child

welfare services, namely the NHS Summary Care Record (SCR) and the Integrated Children’s System (ICS).

3.1 NHS Summary Care Record (SCR)

As mentioned above, the electronic NHS Care Records Service (CRS) was implemented as part of NPfIT. Described as the

‘lynchpin’ (Connecting for Health 2005, 8) of the new systems, the aim of the CRS was to store medical records and health-

related information for all patients, enabling information to be shared safely and securely across the NHS and accessible

whenever and wherever needed. The service comprised two parts: the SCR and the detailed care record (DCR) (e.g.

Greenhalgh et al. 2008b; Eason et al. 2012). The SCR is a centrally held summary of an individual’s demographic

information and key medical details, created from the records held by his or her general practitioner (GP). This is accessible

to NHS staff in any location across the country in the event of an emergency or unscheduled care situation (e.g. walk-in

centres). The DCR is a more localised, more detailed medical record accessible to an individual’s GP and in community or

local hospital care settings. Anticipated benefits of the SCR by policy makers included: better care, as a result of improved

clinical decision making; safer care, by reducing the risk of medication errors; and more efficient care, by making

consultations quicker (e.g. Department of Health 2006b, 2006c).

However, despite best intentions, studies report multiple problems with the automated CRS, particularly with the SCR,

and evidence of the anticipated benefits is limited (e.g. Greenhalgh et al. 2008a, 2008b, 2010a, 2010b, 2011; Sheikh et al.

2011; Cresswell, Worth, and Sheikh 2012; Eason et al. 2012; Takian et al. 2012). For instance, there have been problems

with non or partial adoption of the system in many locations; reasons given include lack of compatibility with clinical

practice, competing priorities, concerns about data quality and the costs of replacing current systems (e.g. Greenhalgh et al.

2010a, 2010b; Sheikh et al. 2011; Eason et al. 2012; Takian et al. 2012). Many frontline staff have been reluctant to ask for

consent to see the SCR at the point of care, deeming it to be ‘unworkable, inappropriate or stressful’ (Greenhalgh et al.

2010b, 9). Where the SCR has been used, data were often found to be incomplete or inaccurate, for example, in relation to a

patient’s current medication or allergies (Greenhalgh et al. 2010b), whilst data entry, involving making selections from pre-

defined lists, was reported to be too restrictive (Walsh 2004).

Unanticipated additional workload was also reported. Doctors perceived electronic notes to create more cognitive work

compared with handwritten patient notes (Walsh 2004). Meanwhile, staff mentioned that having to investigate unauthorised SCR access, respond to queries from patients, and support secure access all created more task-related load (e.g. Greenhalgh


et al. 2010b). Furthermore, many clinicians reported the SCR as eroding their sense of professional identity (Greenhalgh

et al. 2008a, 2008b; Cresswell et al. 2012), spending more time in front of the computer than with their patients and feeling

more ‘like IT people’ (Sheikh et al. 2011, 8) than clinicians. They also felt that this compromised the quality of their

interactions with patients (Cresswell et al. 2012).

In addition, staff reported having to adapt their working processes in order to work around the restrictions imposed by

the SCR (e.g. Collins 2007; Eason 2007; Randell 2007; Sheikh et al. 2011; Cresswell et al. 2012). For instance, staff

mentioned reverting to paper systems, cross-referencing to avoid having to enter information multiple times, not recording

information considered to be unimportant and entering less descriptive information, in order to cut corners and save time

(e.g. Cresswell et al. 2012). Indeed, many staff deemed the processes associated with the SCR service ‘bureaucratic,

intrusive and unworkable’ (Greenhalgh et al. 2010a, 5). They also reported frequently sharing smartcards used to control

access to the SCR and, thereby, protect the security and confidentiality of patients’ electronic records. This overcame the

time-consuming process of each individual entering his or her unique smartcard details and password each time and the

restrictions on sharing patients’ information as a result of differential levels of smartcard access (e.g. a doctor has more

access than a nurse who has more access than a clerk, etc.) (Collins 2007; Eason 2007; Caldwell 2011). Therefore, despite it

being a disciplinary offence, one smartcard would be left logged into the computer for the duration of the shift, highlighting

how impractical staff found the security system for delivering patient care (Collins 2007; Eason 2007; Peltu, Eason, and

Clegg 2008; Barr 2011).
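The tiered access model that staff worked around can be sketched as a minimal role-based check. This is a purely hypothetical illustration of differential access (the role names, levels and record sections below are assumptions, not the actual SCR smartcard scheme, which is not documented here):

```python
# Hypothetical sketch of tiered record access, as described for SCR smartcards:
# each role may view only the record sections at or below its clearance level.
# Roles, levels and section names are illustrative assumptions.

ACCESS_LEVELS = {"clerk": 1, "nurse": 2, "doctor": 3}

RECORD_SECTIONS = {
    "demographics": 1,   # visible to all staff
    "medications": 2,    # nurses and doctors
    "full_history": 3,   # doctors only
}

def visible_sections(role: str) -> set[str]:
    """Return the record sections a given role may view."""
    level = ACCESS_LEVELS[role]
    return {section for section, required in RECORD_SECTIONS.items()
            if required <= level}
```

Leaving one 'doctor' card logged in for a whole shift, as reported, collapses these tiers entirely: every user inherits the highest clearance, which is precisely the workaround the studies describe.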

According to Eason (2006, 2007), workarounds become the norm when systems are designed which do not meet end-

users’ needs. We believe the SCR is one such system. We believe the primary reason for the problems observed with the

SCR is the failure to adopt a systems approach. Rather than being designed to accommodate both social and technical

considerations, the SCR was a top-down ‘technology push’ project (e.g. Clegg and Shepherd 2007; Mark 2007; Greenhalgh

et al. 2008a) ‘governed by software contracts and formal processes of requirement specification’ (Sheikh et al. 2011, 4).

Indeed, this techno-centric mindset is evident in Connecting for Health’s vision about NPfIT more generally and its primary

concern with ‘bringing in the IT’ (Connecting for Health 2005, 12). It also appears to be the mindset for the majority of

organisational change programmes within healthcare, where ‘the culture is still to train people to adapt to poorly designed

technology, rather than to design technology to fit people’s characteristics’ (Zhang 2005, 1).

Taking a higher level perspective, we also believe this techno-centric mindset and need for technologically driven

change within the NHS was further compounded by the political agenda at the time, with NPfIT, including the SCR,

forming an important part of the ‘New Labour’ Government health agenda (e.g. Mark 2007; Currie 2008; Greenhalgh et al.

2010a; Sheikh et al. 2011; Takian et al. 2012). ‘NPfIT was the most ‘political’ of politically-driven projects – allegedly the

result of a ten-minute chat on the sofa in No. 10 Downing Street – and the impetus and continuing drive certainly emanated

from Cabinet ministers with the Department of Health being instructed to get on with it.’ (Maughan 2010). Eager to gain

support from the electorate, the Government drove forward their political agenda of using information and communications

technology, i.e. NPfIT, to revolutionise healthcare and bring the NHS into the twenty-first century (e.g. Callaghan and

Wistow 2006; Currie and Suhomlinova 2006; Currie 2008).

Fundamentally, we believe that the SCR was imposed on clinicians rather than being owned by them and designed and

implemented to best meet their needs for effective performance (e.g. Clegg and Walsh 2004; Baxter and Sommerville

2011). There was a distinct lack of involvement and engagement with frontline clinical staff, particularly in relation to

decision making, for instance, about which processes could successfully be automated and which would need to still be

performed by the staff themselves (Sheikh et al. 2011). Many staff reported being unaware of the processes involved with

the SCR and the changes going on in their healthcare practices (e.g. Greenhalgh et al. 2010a, 2010b, 2011). As Sheikh et al.

suggest, the SCR project appears to be a:

complex and constantly shifting national set-up characterised by complex politically shaped contractual relationships that largely excluded NHS healthcare providers. Many stakeholders had a lack of appreciation of the social and organisational consequences of implementing technology and unrealistic expectations about the capabilities of the software systems. (2011, 9–10).

Consequently, this lack of end-user ownership meant that the resulting ‘one-size-fits-all’ system failed to capture the

complexities of clinical practice and the clinical environment (Sheikh et al. 2011; Eason et al. 2012; Waterson, Glenn, and

Eason 2012). One clinician believed the software developer:

had very little idea of how we do things in the NHS and was just interested in introducing technology to fulfil its contractual obligations. When I asked questions about how the new software would work, I failed to get any detailed answers. I felt I was just expected to adapt to the new system. (Currie 2008, 211).

As Greenhalgh et al. observe, assumptions about the ways in which end-users will use and interact with the technology were

‘Inbuilt in the SCR’ (2010a, 9). This mirrors the findings of other research into technological change, including structuration

perspectives on technology (e.g. DeSanctis and Scott Poole 1994; Orlikowski 2000). However, as evidenced by the poor


uptake and workarounds employed, these implicit assumptions were not compatible with the realities of clinical practice

(e.g. Greenhalgh et al. 2010a). There was a distinct lack of understanding of clinical practice on the part of the developers,

with no explicit consideration given as to how best to allocate the various tasks and functions between staff and computers

(Sheikh et al. 2011).

Thus, in line with the research evidence presented, we believe that the difficulties experienced with the SCR were

mainly a result of the failure to adopt a systemic approach to its design and implementation. Consistent with the literature at

the beginning of the article, outlining where key problems typically arise when functions are allocated primarily according

to technological capabilities (i.e. without due consideration of the social and organisational factors involved), problems

occurred particularly regarding workarounds, conflicting goals and unanticipated workload. Higher level factors, such as

the political agenda at the time and the techno-centric culture of the NHS, appear to have driven the SCR programme, whilst

lower level factors, such as the need for end-user involvement and ownership over design decisions and the importance of

considering explicitly allocation of functions, appear to have been largely overlooked.

We will now look at our second example, exploring how automating a care records system within child welfare services

without adopting a systemic approach to design caused problems similar to those outlined above.

3.2 The Integrated Children’s System (ICS)

In recent years, following a series of high-profile inquiries into non-accidental child deaths (e.g. Lord Laming 2003), the UK

Government has implemented a wide-reaching, ‘modernisation’ programme across practices at the ‘front-door’ of statutory

children’s social care services across the country. The ultimate aims of this programme – primarily a new IT system known

as the ICS – are to improve child safety, increase standardisation of care and enhance process efficiency, whilst also

emphasising the importance of early initial response and assessment (e.g. Department of Health 2000, 2002). The key

elements of the ICS are: to consider social work in a more systematic manner, comprising four stages of assessment,

planning, intervention and review; to provide a series of exemplar formats as a basis from which to develop a standardised,

electronic social care record for each case; and to provide a universal database allowing information to be shared between

different authorities (Department of Health 2002).

However, as was the case with the SCR programme, research reports multiple problems with the ICS, impacting on staff

well-being, the quality of service provided and, ultimately, children’s safety (e.g. Shaw et al. 2009; White et al. 2009a;

White, Hall, and Peckover 2009b; Munro 2010; Broadhurst et al. 2010a, 2010b; Wastell et al. 2010; White et al. 2010;

Munro 2011; Featherstone, Broadhurst, and Holt 2012). Indeed, the tragic death of ‘Baby P’ in the London Borough of

Haringey in 2007, at the hands of his mother, her boyfriend and his brother, brought to the fore the serious deficiencies with

the ICS (Lord Laming 2009; White et al. 2010).

For instance, social workers reported feeling increasingly constrained and frustrated by the difficulties and time-

consuming nature of trying to ‘fit’ their work to the numerous, standardised requirements of the ‘one-size-fits-all’

assessment forms (e.g. White et al. 2009a; Wastell et al. 2010). They found the exemplar formats and ‘tick box’ style

questions to be too prescriptive and often irrelevant, lacking the flexibility needed to accommodate contextual differences in

specific cases (e.g. Shaw et al. 2009; Broadhurst et al. 2010b; Featherstone et al. 2012). Furthermore, in similar vein to the

NHS staff, social workers reported increasingly spending their time at the computer, ‘mechanistically following

bureaucratic procedures’ (White et al. 2009a, 409) rather than engaging with families face-to-face and ascertaining

individual family circumstances. Thus, the risk of making inappropriate decisions has been increased, potentially putting

children’s welfare at risk (e.g. White et al. 2009b; Broadhurst et al. 2010b).

In addition, the statutory, immutable targets and timescales dictated by the ICS, as a means of regulating performance,

created immense time pressures and stress for employees (e.g. Broadhurst et al. 2010a). Consequently, as was observed with

the SCR system, employees developed workarounds in order to meet these targets (Wastell et al. 2010). For instance, they

were forced to skip questions, or to make categorisations quickly and with insufficient information, running the risk of

filtering out cases which may, in reality, require intervention (e.g. Broadhurst et al. 2010a). Decision-making heuristics

were also frequently employed, and became habitual, in order to manage the pressing timescales and demanding targets,

introducing a whole array of new risks (e.g. Shaw et al. 2009). As a result, employees reported that the fundamental purpose

of safeguarding children and supporting families often came secondary to meeting targets and deadlines (Broadhurst et al.

2010a). Therefore, the inherent risks which automation was intended to remove from social work practice have actually

increased (Bainbridge 1983; Strain and Eason 2000; White et al. 2009a, 2009b).

On reviewing the information available in these academic articles and official Government documents, we can see that,

as with the SCR programme outlined above, a systems approach to the design and implementation of the ICS was lacking.

Indeed, Professor Munro’s (2010, 2011) recent independent review of child protection, commissioned by the Secretary of

State for Education (the Right Honourable Michael Gove MP), has advocated the need to think systemically about reform in


social care. Fundamentally, we believe the human elements of social work practice were separated from technological ones

and vice versa: ‘its design was divorced from practice’ (White et al. 2009a, 407). Rather than seeking to jointly optimise

both human and technical concerns as systems thinking advocates (e.g. Hendrick 1995, 1997, 2002; Clegg 2000), the ICS

was seen primarily as a top-down ‘IT project’ rather than a more holistic, user-led ‘social work’ project.

Akin to the incompatibility observed between the SCR and clinical practice, such a techno-centric approach is at odds

with the complexities inherent within child welfare services (e.g. Erikson 2004; Local Government Association 2008;

White et al. 2009a; Broadhurst et al. 2010b; Munro 2010; Wastell et al. 2010). As Wastell et al. observe, ‘is the world of

social work really so tidily organised?’ (2010, 312). We argue that it is not appropriate, or acceptable, to condense the

professional judgements of those experienced and competent in the realities of social care to protocols and standardised

procedures; ‘the “smell” of practice is not captured by file or format’ (Broadhurst et al. 2010b, 1060). This opinion is echoed

in Lord Laming’s (2009) report on child protection in the UK, commissioned by the Secretary of State for Children, Schools

and Families (the Right Honourable Ed Balls MP), following the death of ‘Baby P’:

Professional practice and judgment, as said by many who contributed evidence to this report, are being compromised by an over-complicated, lengthy and tick-box assessment and recording system. The direct interaction and engagement with children and their families, which is at the core of social work, is said to be at risk as the needs of a work management tool overtake those of evidence-based assessment, sound analysis and professional judgment about risk of harm. (Lord Laming 2009, 33)

Furthermore, this top-down imposition meant end-users were not involved in the decisions taken about the design and

implementation of the ICS, in particular whether it was appropriate to automate child welfare records; ‘the design has been

highly centralized and only weakly connected with frontline, professional practice’ (White et al. 2010, 417). Of the 63

named members of the steering and advisory groups for the ICS project (Department of Health 2002), it is significant that

not one was a frontline social worker. The only ‘direct’ link to social work practice appears to be via eight group members

who were directors or senior managers within the social services profession; the vast majority of members were employed

by the Government, in the Department of Health or the Department for Education and Skills (now known as the Department

for Children, Schools and Families) (Department of Health 2002). Thus, the design and implementation of the ICS appears

to have been championed by senior civil servants and individuals within social services departments, working closely with a

handful of senior academics (Calder 2004; White et al. 2009a, 2010).

Indeed, as observed with the SCR programme, the ICS formed a key part of the ‘New Labour’ Government’s agenda to

modernise social care and improve child welfare services, through centralised control, increasingly standardised practice

and associated performance targets and audits (e.g. White et al. 2010; Featherstone et al. 2012). There was an ‘apparent faith

by policy makers in “top-down” enforcement’ (Broadhurst et al. 2010b, 1047). Accordingly, the design was driven by the

need to create rigorous, standardised, auditable procedures for assessment, planning, intervention and review, where

information could be used systematically and efficiently, and individuals’ performance against statutory targets and

timescales could be monitored (e.g. Calder 2004; Garrett 2009; Shaw et al. 2009; Broadhurst et al. 2010a; Wastell et al.

2010).

The lack of involvement and ownership also meant there was very little scope for end-users to draw on their expertise

and exercise professional discretion; they were not afforded the opportunities to shape the system so as to better meet their

needs as frontline social workers (e.g. White et al. 2009a, 2010). Concerns raised during piloting of the system were simply

dismissed as ‘teething problems’ which could be resolved through staff training (Cleaver and Walker 2004; Cleaver et al.

2008). Even those who believed in the ICS voiced concerns about the lack of frontline ownership and empowerment; one

ICS project manager stated ‘it wasn’t clear that there was to be no negotiation . . . staff thought they were shaping things’

(White et al. 2010, 418).

In summary, and again as argued previously in relation to the SCR, we believe the difficulties experienced with the ICS

were primarily a result of the failure to adopt a systemic approach to its design and implementation. We believe the

problems reported relating to workarounds, staff dissatisfaction and incompatibility with practice were, once again, the

result of multiple, interrelated lower and higher level factors. The techno-centric mindset and political modernisation

agenda appear to have interacted with the overall failure to take social aspects into account, along with the lack of end-user

involvement, ownership and explicit consideration of allocation of function decisions.

3.3 Appropriate allocations

In both of these case studies, we can see that it is technically feasible to automate care records within UK healthcare and

child welfare services and reallocate functions from frontline personnel to technology. Thus, it is technically possible to

automate the tasks involved with assessing a patient’s medical history and current condition, and to store centrally all

patients’ medical records and health-related information. It is also technically possible to automate the tasks involved with

the assessment, planning, intervention and review of social work cases, and to develop electronic versions of social care


records. However, as the case studies illustrate, the fundamental issues here are not whether the SCR and ICS are

technically feasible at the task level, but whether their conceptual designs are optimal for medical and social care,

respectively. Do the new systems support the care of patients in the NHS and the welfare of children within the care system?

Do the new systems support clinicians and social workers in performing their job roles effectively? Are the systems

compatible with medical and social care environments? Based on the research outlined above, we believe the users are, in a

variety of ways, saying ‘No’. Moreover, we could find no evidence that the allocation decisions were made explicitly, early

and from a systems viewpoint.

Building on the analysis of the case studies, we now move on to introduce our own macroergonomics approach towards

function allocation, describing what we believe it to be and explaining when it should be undertaken, who should be

involved and how it should be carried out.

4. A macroergonomics approach to function allocation

We began our article with a literature review of perspectives that have sought to understand the process of function

allocation in organisations. This revealed that despite the longstanding tradition of research in this area, it remains of

considerable contemporary interest, with articles published recently across a variety of disciplines, including psychology,

ergonomics and computer science. Thereafter, we presented two case studies, the SCR and the ICS. The literature review

and case studies illustrate that with the exception of some military systems and control centre design, function allocation

typically does not exist as a discrete, identifiable phase during systems development but, rather, is simply implicit in design

activities and assumptions, emerging as the system evolves (e.g. Kidd 1992; Older et al. 1997; Clegg et al. 2000; Fuld

2000). Moreover, the need for a systems (rather than technology-led) approach and the importance of end-user involvement

and ownership are largely overlooked (e.g. Bostrom and Heinen 1977a, 1977b; Kappelman et al. 2006; Eason 2007;

Greenhalgh et al. 2008a, 2010a; White et al. 2009a, 2010; Broadhurst et al. 2010b).

However, given its importance in systems development and the potential consequences of ‘getting it wrong’, we suggest

there is a real need to realign our thinking. We need to move away from a microergonomics focus on tasks and establish a

new, complementary macroergonomics mindset for allocation choices, i.e. adopt a systems approach, to take into account

the multiple, interrelated factors which comprise the system (e.g. social, technical, organisational, political and regulatory

issues) and that will be affected by automation decisions.

4.1 What do we see function allocation as being?

We view function allocation as the process of allocating functions or tasks between and amongst humans and technologies

in a system, which forms an explicit stage early in systems design and development, involves a multi-disciplinary team

(including end-users), is evidence-based, framed within the language of risk, and utilises iterative methods (e.g. scenarios

planning techniques). We discuss these ideas in more detail below.
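As a loose illustration of what making such choices explicit might look like, each allocation could be recorded as a structured, evidence-based, risk-framed entry. This is a sketch only; the field names below are entirely hypothetical, and the article prescribes no particular notation:

```python
from dataclasses import dataclass, field

# Hypothetical record of one function-allocation decision, capturing the
# elements the approach calls for: an explicit choice, the evidence behind it,
# the stakeholders consulted, and the risks attached to each option.
@dataclass
class AllocationDecision:
    function: str                # e.g. "record patient demographics"
    allocated_to: str            # "human", "technology" or "shared"
    evidence: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)
    risks_if_automated: list[str] = field(default_factory=list)
    risks_if_manual: list[str] = field(default_factory=list)

decision = AllocationDecision(
    function="record patient demographics",
    allocated_to="technology",
    stakeholders=["clinicians", "designers", "management", "end-users"],
    risks_if_automated=["data-entry workload shifts onto clinicians"],
    risks_if_manual=["transcription errors", "records not shareable"],
)
```

The point of such a record is not the notation but the discipline: the allocation, its evidence base, the people consulted and the risks of each option are all written down and open to challenge, rather than left implicit in design assumptions.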

4.2 When should function allocation be undertaken?

In line with systems thinking, we argue that function allocation decisions should form an early, explicit and distinct part of

the design and development process, in order to maximise chances of success (e.g. Blanchard and Fabrycky 1981; Clegg

2000). Early consideration of differing allocation options, in conjunction with detailed reflection about the accompanying

implications of each different decision for the system as a whole (e.g. Dekker and Wright 1997; Rognin et al. 2000), should

enable the optimal system to be designed from the beginning, rather than the system needing to adapt in order to

accommodate problems (e.g. Older et al. 1997; Strain and Eason 2000; Eason 2007). This is borne out by experience across

system and product design, whereby many claim that up to 80% of design characteristics and system costs are locked

in during conceptual design, i.e. early in the process (e.g. Cooper and Chew 1996; Anderson 2010). It is widely held in the

design community that system design becomes increasingly constrained as the process progresses. The logical corollary,

and indeed experience in practice, is that changing designs becomes increasingly expensive as the process progresses

(Anderson 2010).

For instance, consider the waterfall model often used to characterise the sequential, downward flowing nature of the

software development process, from the development of the initial strategy and feasibility assessment, through conceptual

and detailed design, programming and implementation, to system use and maintenance (e.g. Royce 1970; Clegg, Waterson,

and Axtell 1997). Currently, if allocation decisions are considered, they are considered post hoc, at the detailed design stage

when it is too late to instigate significant changes to the system (e.g. Fuld 2000). This is not sufficient. We believe such

decisions need to be considered as part of the feasibility and conceptual design phases, when there is still sufficient time

prior to the system being built to evaluate different decisions and choose the optimum.
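The timing argument can be made concrete by listing the waterfall phases alongside an illustrative relative cost of revisiting a design decision at each phase. The roughly order-of-magnitude escalation below is an assumption standing in for the 'locked in early' claim, not measured data:

```python
# Waterfall phases in sequence, each paired with an illustrative relative cost
# of changing a design decision once that phase has been reached. The figures
# are hypothetical, chosen only to reflect the claimed escalation.
PHASES = [
    ("feasibility", 1),
    ("conceptual design", 2),
    ("detailed design", 10),
    ("programming", 50),
    ("implementation", 200),
    ("use and maintenance", 1000),
]

def change_cost(phase: str) -> int:
    """Relative cost of revisiting an allocation decision at a given phase."""
    return dict(PHASES)[phase]

# The article's argument in miniature: allocation decisions evaluated at
# feasibility or conceptual design are cheap to revise; the same decisions
# surfacing post hoc at detailed design or later are far more expensive.
early_cost = change_cost("conceptual design")
late_cost = change_cost("detailed design")
```

Under these (assumed) figures, deferring allocation choices from conceptual to detailed design already multiplies the cost of revising them, and deferring them into use and maintenance makes significant change effectively prohibitive.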


4.3 Who should be involved in allocation decisions?

As outlined earlier, both socio-technical systems thinking and systems engineering advocate that multiple stakeholders

should be involved in decisions about systems design and development, including those related specifically to function

allocations (e.g. Hall 1962; Clegg 2000). Such a multidisciplinary approach, incorporating the differing perspectives,

knowledge and expertise of a range of organisational actors, such as end-users, management, designers, strategists and

clients, should ensure a more complete, less fragmented, understanding of the system overall (e.g. Jordan 1963; Sage 1977,

1992; Clegg et al. 2000; Kappelman et al. 2006). The involvement of end-users in particular is key to system success (e.g.

Bostrom and Heinen 1977a, 1977b; Blanchard and Fabrycky 1981; Clegg and Walsh 2004; Mumford 2006; Eason 2007).

Ultimately, a multidisciplinary approach should influence and ideally enhance design choices for the system, including

function allocation decisions.

Within the wider literature on organisational decision making, many argue that a group comprising multiple disciplines,

contributing differing capabilities and expertise, is better able to make effective decisions than a less diverse group (e.g.

Steiner 1972; Hollenbeck et al. 1995). Not only does such a composition broaden the spectrum of knowledge and expertise

available, thereby enabling more informed decisions, but it also helps mitigate decision-making biases, such as groupthink

(Janis 1972), which are more likely to arise in highly homogeneous groups. Furthermore, the distinct viewpoints brought by

differing disciplines should create an environment ripe for discussion, debate and conflict which, when constructive and

functional, is known to be beneficial to group effectiveness (e.g. Amason 1996; De Dreu and Van de Vliert 1997; Simons

and Peterson 2000). For instance, when focused on the problem at hand and how best to resolve it, rather than on more

emotionally driven interpersonal incompatibilities, conflict can facilitate decision making (e.g. Amason 1996). It

encourages group members to debate opposed perspectives, identifying and evaluating their relative strengths and

weaknesses and underlying assumptions, ultimately resulting in a synthesis superior to either perspective alone (e.g. De

Dreu and Van de Vliert 1997). Hence, decision quality is enhanced. In addition, such multidisciplinary debate can serve to

make allocation of function decisions explicit and visible across the organisation and its employees.

4.4 How should function allocation be undertaken?

In this section, we make three key points in relation to how allocation decisions should be made. We argue that if these three

points had been adhered to in either of our two earlier case studies, then early warnings about potential drawbacks and risks

of design decisions would have been given and the opportunity for improvements would have been possible.

First, we believe decisions related to function allocations should be supported by ‘quick and dirty’ techniques, such as

‘Rapid Application Development’ (RAD; e.g. Martin 1992; Beynon-Davies et al. 1999), storyboarding (e.g. Madsen and

Aiken 1991, 1993; Jones 2008) and scenario planning (e.g. Schoemaker 1991, 1992, 1995; Hodgkinson and Healey 2008).

These are well-established methods of dynamically evaluating potential system designs and strategic management

decisions, with minimal time and costs. Moreover, in line with systems thinking, they all stress the need for a collaborative,

multidisciplinary approach with active end-user involvement throughout (e.g. Madsen and Aiken 1991, 1993; Beynon-

Davies et al. 1999). Crucially, and in keeping with a systems mindset, the fundamental values and assumptions

underpinning the system and the associated human behaviours should be made explicit in addition to physical system

attributes.

Second, we believe such decisions should be grounded in evidence-based practice, an approach now well-established in

fields including medicine and nursing that integrates practitioner expertise with research evidence in order to enhance

working practices (e.g. Briner 1998; Sackett et al. 2000; Briner and Rousseau 2011). This may involve observations of

existing practice, systematic literature reviews, simulations of different scenarios and the use of the techniques outlined

above. For instance, particular processes within the system could be electronically simulated to explore the potential

benefits and drawbacks of humans or technology performing particular functions and the findings used as evidence to direct

subsequent allocation choices.
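The kind of simulation evidence described above can be approximated even with a very simple Monte Carlo model. The sketch below compares expected consultation length under two hypothetical allocations of the record-entry function; the task structure and timing distributions are illustrative assumptions of ours, not data from the SCR programme.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_consultation(entry_by_clinician, n_runs=10_000):
    """Monte Carlo sketch: mean consultation minutes when record entry
    is allocated to the clinician versus to administrative staff.
    All distributions are hypothetical placeholders."""
    totals = []
    for _ in range(n_runs):
        minutes = random.gauss(10.0, 2.0)          # core clinical consultation
        if entry_by_clinician:
            minutes += random.gauss(3.0, 1.0)      # clinician types during the slot
        else:
            minutes += random.gauss(0.5, 0.2)      # brief handover note only
        totals.append(max(minutes, 0.0))
    return sum(totals) / n_runs

clinician = simulate_consultation(True)
admin = simulate_consultation(False)
print(f"Mean consultation, clinician entry: {clinician:.1f} min")
print(f"Mean consultation, admin entry:     {admin:.1f} min")
```

Even a crude model of this kind makes the trade-off explicit and debatable: the assumed distributions themselves become evidence claims that stakeholders can challenge and refine.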

Third, we believe function allocation decisions should be framed in the language of risk, a language with which

organisations are increasingly familiar and for which identification, management and mitigation procedures and standards

already exist (e.g. Pidgeon 1991; Institute of Risk Management 2002; McNeil, Frey, and Embrechts 2005; International

Organization for Standardization 2009). Just as organisations recognise the importance of conducting risk analyses for

potential decisions in other domains, so too should they undertake analyses of the likely risks involved in different function

allocation choices.
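As a minimal illustration of framing allocation options in the language of risk, the sketch below applies the common likelihood-times-impact scoring heuristic to two hypothetical allocation options; the options, risks, likelihoods and impacts are invented for illustration and would in practice be elicited in a multidisciplinary risk workshop.

```python
# Hypothetical risk register for two function allocation options.
# Each entry is (risk description, likelihood 0-1, impact 1-5).
risks = {
    "clinicians enter records": [
        ("data entry errors under time pressure", 0.4, 4),
        ("reduced face-to-face consultation time", 0.6, 3),
    ],
    "administrative staff enter records": [
        ("transcription delays", 0.5, 2),
        ("clinical detail lost in handover", 0.3, 4),
    ],
}

def exposure(register):
    """Sum of likelihood x impact: a simple, widely used risk-scoring heuristic."""
    return sum(likelihood * impact for _, likelihood, impact in register)

for option, register in risks.items():
    print(f"{option}: exposure {exposure(register):.1f}")
```

The point is not the arithmetic but that each allocation option acquires an explicit, comparable risk profile that can sit alongside the organisation's existing risk registers and mitigation procedures.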

To illustrate how the above could have helped in the case of the SCR, multidisciplinary teams, comprising designers,

doctors, nurses, administrators and patients, could develop different scenarios concerned with the processes of recording patients' medical details electronically and openly debate the pros and cons of each. These scenarios could include

variations on who will be responsible for different parts of the process, how end-users will most likely interact with the


system, which tasks should be automated and how much autonomy and professional discretion will be afforded to medical

practitioners. For example, should consultants be entering data into the SCR or is it a more appropriate use of time for

administrative staff to perform this task? To facilitate these debates, stimulate constructive conflict and enhance the quality

of decisions, a devil’s advocate can be introduced to the group (e.g. Schwenk 1984).

In addition to these scenarios, experimental trials could be conducted and simulations performed to test how long the

SCR process takes and how well it would fit into a normal clinician–patient consultation, in order to gather evidence to inform

practice. Moreover, the risks and benefits associated with each different option could then be debated and the knock-on

implications discussed, from a multidisciplinary perspective, thereby enabling a more holistic evaluation. Key questions to

ask of the design could include: How will this work on the ground? What are the benefits of automating the care records

processes for employees and for patients? What are the risks associated with entering and storing patients’ medical details

electronically? Will the new system meet the needs of the employees and patients? What impact will an automated system

have on employees’ current roles and responsibilities? What impact will the system have on the well-being of employees

and patients? How will an automated care records process fit within the overall NHS culture?

Such a process would also have allowed wider discussion about alternative conceptual designs for the new system.

Thus, for the SCR, it could have encouraged consideration of a patient-owned system for securely storing individual

medical records, perhaps in the form of a personal, user-carried smart card, as has been deployed in parts of Germany and

Canada (e.g. Aubert and Hamel 2001). It may also have encouraged the design and implementation of smaller, locally

owned and led SCR systems (e.g. at the Primary Care Trust level) rather than a single, NHS-wide programme (e.g. Eason

et al. 2012; Waterson et al. 2012). Nevertheless, this article is not the place to argue whether alternatives such as these represent better design solutions, but rather to highlight that making function allocation decisions in an explicit, iterative

way opens up the possibility for such reflection.

5. Conclusions

Our core argument is that we need to find new, macroergonomic ways of thinking about a long-debated problem. We need

a reorientation of our thinking, to acknowledge function allocation not only as a traditional human factors and

microergonomics problem but also as an issue requiring attention from those with a wider interest in the human and

organisational aspects of complex systems. Fundamentally, we need to persuade all those concerned with the performance

and effectiveness of complex work systems, including people who commission, sponsor, design, implement and use such

systems, that this problem domain is worth serious attention. We need to use case study examples, such as the SCR

programme and ICS, to convince them that the issues surrounding function allocation are not solely (or even

predominantly) task-based decisions focused on detailed design but are also macroergonomic issues crucial to effective

systems design and, ultimately, to systems success. Furthermore, if we are going to succeed in influencing practice, we need

to get function allocation back onto the agenda for researchers, particularly in the fields of information systems and

organisational behaviour.

We are not arguing that these ideas will ‘solve the problem’ of function allocation. It may well still be the case that

higher political processes dominate certain system design and allocation decisions. But this state of affairs is more likely to

continue if function allocation continues to arise by default relatively late in the design process and, where it is discussed, is

considered at the level of individual tasks. Our approach, as described above, at the very least, increases the chances of more

effective allocation decisions and system designs. We believe our perspective will serve to heighten interest in this

important topic, will provide a platform from which to reframe this longstanding problem, will stimulate future research

and, ultimately, will enhance practice.

Acknowledgements

The authors would like to thank the editors and anonymous reviewers for their very helpful suggestions for improvements to the paper during the review process. They would also like to thank Professor David Wastell and Professor Tricia Greenhalgh for their assistance in helping to develop the article. Finally, Rose would like to dedicate the paper to Linda and Jim O'Neill for their continued love and support.

References

Agou, S., V. Raskin, and G. Salvendy. 1993. “Combining Natural Language with Direct Manipulation: The Conceptual Framework for a Hybrid Human–Computer Interface.” Behaviour & Information Technology 12 (1): 48–53.

Amason, A. C. 1996. “Distinguishing the Effects of Functional and Dysfunctional Conflict on Strategic Decision Making: Resolving a Paradox for Top Management Teams.” Academy of Management Journal 39 (1): 123–148.

Anderson, D. M. 2010. Design for Manufacturability and Concurrent Engineering. Cambria, CA: CIM Press.


Aubert, B. A., and G. Hamel. 2001. “Adoption of Smart Cards in the Medical Sector: The Canadian Experience.” Social Science & Medicine 53 (7): 879–894.
Bainbridge, L. 1983. “Ironies of Automation.” Automatica 19: 775–779.
Barr, F. 2011. “Smartcard Sharing ‘Should be Offence’.” EHI eHealth Insider. Accessed March 15. http://www.ehi.co.uk/news/ehi/6703/smartcard-sharing-‘should-be-offence’
Bate, I., and T. Kelly. 2003. “Architectural Considerations in the Certification of Modular Systems.” Reliability Engineering & System Safety 81 (3): 303–324.
Baxter, G., and I. Sommerville. 2011. “Socio-Technical Systems: From Design Methods to Systems Engineering.” Interacting with Computers 23 (1): 4–17.
Bell, J. 2007. “PCTs Still Only Halfway to Choose and Book Target.” Health Service Journal 117 (6050): 9.
Berg, M. 1997. Rationalizing Medical Work: Decision-Support Techniques and Medical Practices. Cambridge, MA: MIT Press.
Beynon-Davies, P., C. Carne, H. Mackay, and D. Tudhope. 1999. “Rapid Application Development (RAD): An Empirical Review.” European Journal of Information Systems 8: 211–223.
Blanchard, B., and W. Fabrycky. 1981. Systems Engineering and Analysis. Englewood Cliffs, NJ: Prentice Hall.
Blomberg, O. 2011. “Conceptions of Cognition for Cognitive Engineering.” International Journal of Aviation Psychology 21 (1): 85–104.
Boff, K. R. 2006. “Revolutions and Shifting Paradigms in Human Factors and Ergonomics.” Applied Ergonomics 37 (4): 391–399.
Bostrom, R. P., S. Gupta, and D. Thomas. 2009. “A Meta-theory for Understanding Information Systems Within Sociotechnical Systems.” Journal of Information Systems 26 (1): 17–47.
Bostrom, R. P., and J. S. Heinen. 1977a. “MIS Problems and Failures: A Socio-Technical Perspective. Part 1: The Causes.” MIS Quarterly 1 (3): 17–32.
Bostrom, R. P., and J. S. Heinen. 1977b. “MIS Problems and Failures: A Socio-Technical Perspective. Part 2: The Application of Socio-technical Theory.” MIS Quarterly 1 (4): 11–28.
Brennan, L. 1984. “The Influence of New Technology on the Allocation of Functions Decision.” In Proceedings of the 1st International Conference on Human Factors in Manufacturing, edited by T. Lupton, 79–86. Bedford and Amsterdam: IFS Publications Ltd.
Brennan, S. 2005. The NHS IT Project. London: Radcliffe Publishing Ltd.
Briner, R. B. 1998. “What is An Evidence-based Approach to Practice and Why do we Need One in Occupational Psychology?” Proceedings of the 1998 British Psychological Society Occupational Psychology Conference, 39–44. Leicester, UK: The British Psychological Society.
Briner, R. B., and D. M. Rousseau. 2011. “Evidence-based I–O Psychology: Not There Yet.” Industrial and Organizational Psychology 4 (1): 3–22.
British Medical Association (BMA). 2009. “Choose and Book: Learning Lessons from Local Experience.” Accessed June 14 2010. http://www.bma.org.uk/ethics/health_records/connecting_for_health/chooseandbook.jsp?page=1
Broadbent, D. E. 1958. Perception and Communication. London: Pergamon.
Broadhurst, K., C. Hall, D. Wastell, S. White, and A. Pithouse. 2010b. “Risk, Instrumentalism and the Humane Project in Social Work: Identifying the Informal Logics of Risk Management in Children’s Statutory Services.” British Journal of Social Work 40: 1046–1064.
Broadhurst, K., D. Wastell, S. White, C. Hall, S. Peckover, K. Thompson, A. Pithouse, and D. Davey. 2010a. “Performing ‘Initial Assessment’: Identifying the Latent Conditions for Error at the Front-door of Local Authority Children’s Services.” British Journal of Social Work 40: 352–370.
Brynjolfsson, E., and L. M. Hitt. 1998. “Beyond the Productivity Paradox.” Communications of the ACM 41: 49–55.
Bye, A., E. Hollnagel, and T. S. Brendeford. 1999. “Human–machine Function Allocation: A Functional Modelling Approach.” Reliability Engineering & System Safety 64 (2): 291–300.
Cabinet Office Major Projects Authority. 2011. Programme Assessment Review of the National Project for IT. London: Cabinet Office.
Calder, M. 2004. “Out of the Frying Pan into the Fire? A Critical Analysis of the Integrated Children’s System.” Child Care in Practice 10: 225–240.
Caldwell, G. 2011. “Logging in and Logging Out: Patient Safety on Ward Rounds.” British Journal of Healthcare Management 17 (11): 547–553.
Callaghan, G. D., and G. Wistow. 2006. “Publics, Patients, Citizens, Consumers? Power and Decision Making in Primary Health Care.” Public Administration 84 (3): 583–601.
Casler, J. G., and J. R. Cook. 2003. “Work Design and Analysis for Space-based Manufacturing: A Case Analysis of Initial Design Issues.” Ergonomics 46 (1–3): 141–152.
Chapanis, A. 1960. “Human Engineering.” In Operations Research and Systems Engineering, edited by C. Flagle, W. Huggins, and R. Roy, 534–582. Baltimore, MD: John Hopkins Press.
Chapanis, A. 1965. “On the Allocation of Functions Between Men and Machines.” Occupational Psychology 39: 1–11.
Chapanis, A. 1970. “Human Factors in Systems Engineering.” In Systems Psychology, edited by K. B. de Greene, 51–78. New York: McGraw Hill.
Checkland, P. 1999. Systems Thinking, Systems Practice. Chichester: John Wiley & Sons.
Cherns, A. B. 1976. “The Principles of Sociotechnical Design.” Human Relations 29: 783–792.
Cherns, A. B. 1987. “Principles of Sociotechnical Design Revisited.” Human Relations 40: 153–162.
Chi, C. F., and C. G. Drury. 1998. “Do People Choose an Optimal Response Criterion in an Inspection Task?” IIE Transactions 30 (3): 257–266.
Cleaver, H., and S. Walker. 2004. “From Policy to Practice: The Implementation of a New Framework for Social Work Assessments of Children and Families.” Child and Family Social Work 9: 81–90.


Cleaver, H., S. Walker, S. Scott, D. Cleaver, W. Rose, H. Ward, and A. Pithouse. 2008. The Integrated Children’s System: Enhancing Social Work and Inter-agency Practice. London: Jessica Kingsley Publishers.
Clegg, C. 1988. “Appropriate Technology for Manufacturing: Some Management Issues.” Applied Ergonomics 19 (1): 25–34.
Clegg, C. W. 2000. “Sociotechnical Principles for System Design.” Applied Ergonomics 31: 463–477.
Clegg, C. W., C. Axtell, L. Damodaran, B. Farbey, R. Hull, R. Lloyd-Jones, J. Nicholls, R. Sell, and C. Tomlinson. 1997. “Information Technology: A Study of Performance and the Role of Human and Organizational Factors.” Ergonomics 40: 851–871.
Clegg, C. W., M. T. Older Gray, and P. E. Waterson. 2000. “The Charge of the ‘Byte Brigade’ and a Sociotechnical Response.” International Journal of Human Computer Studies 52 (2): 235–251.
Clegg, C. W., S. J. Ravden, J. M. Corbett, and G. I. Johnson. 1989. “Allocating Functions in Computer Integrated Manufacturing: A Review and a New Method.” Behaviour and Information Technology 8: 175–190.
Clegg, C. W., and C. Shepherd. 2007. “The Biggest Computer Programme in the World Ever!: Time for a Change in Mindset?” Journal of Information Technology 22: 212–221.
Clegg, C. W., and S. Walsh. 2004. “Change Management: Time for a Change!” European Journal of Work and Organizational Psychology 13: 217–239.
Clegg, C. W., P. E. Waterson, and C. M. Axtell. 1997. “Software Development: Some Critical Views.” Behaviour and Information Technology 16: 359–362.
Collins, T. 2007. “NHS Security Dilemma as Smartcards Shared.” Computer Weekly, 30 January. Accessed June 14 2010. http://www.computerweekly.com/Articles/2007/01/30/221461/nhs-security-dilemma-as-smartcards-shared.htm
Connecting for Health. 2005. “A Guide to the National Programme for Information Technology.” Accessed February 7 2011. http://www.connectingforhealth.nhs.uk/resources/archive/npfit_brochure_apr_05_final.pdf
Cook, R. I., and D. D. Woods. 1996. “Adapting to New Technology in the Operating Room.” Human Factors 38: 593–613.
Cooper, R., and W. B. Chew. 1996. “Control Tomorrow’s Costs through Today’s Designs.” Harvard Business Review 88–97, January.
Corbridge, C., and C. A. Cook. 1997. “The Role of Function Allocation in the Design of Future Naval Systems.” In ALLFN’97 Revisiting the Allocation of Function Issue: Proceedings of the 1st International Conference on Allocation of Functions, edited by E. Fallon, L. Bannon, and J. McCarthy, Vol. 1, 73–88. Louisville, KY: IEA Press.
Cresswell, K. M., A. Worth, and A. Sheikh. 2012. “Integration of a Nationally Procured Electronic Health Record System into User Work Practices.” BMC Medical Informatics and Decision Making 12: 15–27.
Currie, W. L. 2008. “Evaluating the Governance Structure for Public Sector IT: The UK National Programme in the Health Service.” In Evaluating Information Systems: Public and Private Sector, edited by Z. Irani, and P. Love, 199–217. Oxford: Elsevier.
Currie, G., and O. Suhomlinova. 2006. “The Impact of Institutional Forces Upon Knowledge Sharing in the UK NHS: The Triumph of Professional Power and the Inconsistency of Policy.” Public Administration 84 (1): 1–30.
De Dreu, C. K. W., and E. Van de Vliert, eds. 1997. Using Conflict in Organizations. London: Sage.
Dearden, A., M. Harrison, and P. Wright. 2000. “Allocation of Function: Scenarios, Context and the Economics of Effort.” International Journal of Human Computer Studies 52 (2): 289–318.
Dekker, S. W. A., and D. D. Woods. 2002. “MABA-MABA or Abracadabra? Progress on Human-Automation Co-ordination.” Cognition, Technology and Work 4: 240–244.
Dekker, S. W., and P. C. Wright. 1997. “Function Allocation: A Question of Task Transformation Not Allocation.” In ALLFN’97: Revisiting the Allocation of Function Issue: Proceedings of the 1st International Conference on Allocation of Functions, edited by E. F. Fallon, L. Bannon, and J. C. McCarthy, Vol. 1, 31–40. Louisville, KY: IEA Press.
Department of Health. 2000. Learning the Lessons: The Governments’ Response to Lost in Care. London: The Stationery Office.
Department of Health. 2002. Integrated Children’s System: Working with Children in Need and their Families. Consultation Document. London: Department of Health.
Department of Health. 2006a. Report on the National Patient Choice Survey – July 2006 England. London: Department of Health.
Department of Health. 2006b. Our Health, Our Care, Our Say: A New Direction for Community Services. London: The Stationery Office.
Department of Health. 2006c. Direction of Travel for Urgent Care: A Discussion Document. London: The Stationery Office.
Dery, K., R. Hall, and N. Wailes. 2006. “ERPs as ‘Technologies-in-Practice’: Social Construction, Materiality and the Role of Organisational Factors.” New Technology Work and Employment 21 (3): 229–241.
DeSanctis, G., and M. Scott Poole. 1994. “Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory.” Organization Science 5 (2): 121–147.
Drury, C. G., and D. R. Forsman. 1996. “Measurement of the Speed Accuracy Operating Characteristic for Visual Search.” Ergonomics 39 (1): 41–45.
Eason, K. 2006. A Local Sociotechnical Design Approach to Exploiting the Potential of The National Healthcare IT Programme (NPfIT). London: The Bayeswater Institute.
Eason, K. 2007. “Local Sociotechnical System Development in the NHS National Programme for Information Technology.” Journal of Information Technology 22: 257–264.
Eason, K., M. Dent, P. Waterson, D. Tutt, P. Hurd, and A. Thornett. 2012. Getting the Benefit from Electronic Patient Information that Crosses Organisational Boundaries. Final Report. London: National Institute for Health Research Service Delivery and Organisation Programme.
Edwards, E., and F. P. Lees. 1974. “Influence of Process Characteristics on Role of Human Operator in Process Control.” Applied Ergonomics 5 (1): 21–25.
Ekman, I., and K. Segesten. 1995. “Deputed Power of Medical Control: The Hidden Message in the Ritual of Oral Shift Reports.” Journal of Advanced Nursing 22 (5): 1006–1011.
Endsley, M. R., and D. B. Kaber. 1999. “Level of Automation Effects on Performance, Situation Awareness and Workload in a Dynamic Control Task.” Ergonomics 42 (3): 462–492.
Erikson, F. 2004. Talk and Social Theory. Cambridge: Polity Press.


Fallon, E. F., L. Bannon, and J. C. McCarthy, eds. 1997. ALLFN’97: Revisiting the Allocation of Function Issue: Proceedings of the 1st International Conference on Allocation of Functions. Vol. 1. Louisville, KY: IEA Press.
Featherstone, B., K. Broadhurst, and K. Holt. 2012. “Thinking Systemically – Thinking Politically: Building Strong Partnerships with Children and Families in the Context of Rising Inequality.” British Journal of Social Work 42: 618–633.
Feltovich, P. J., R. R. Hoffman, D. Woods, and A. Roesler. 2004. “Keeping it Too Simple: How the Reductive Tendency Affects Cognitive Engineering.” IEEE Intelligent Systems 19: 90–94.
Fitts, P. M. 1951. Human Engineering for an Effective Air Navigation and Traffic Control System. Washington, DC: National Research Council.
Fuld, R. B. 1993. “The Fiction of Function Allocation.” Ergonomics in Design 1: 20–24.
Fuld, R. B. 2000. “The Fiction of Function Allocation, Revisited.” International Journal of Human Computer Studies 52 (2): 217–233.
Garg, C., and G. Salvendy. 1991. “Development of a Structured Methodology for Knowledge Elicitation: Applications to Flexible Manufacturing Systems.” IIE Transactions 23 (4): 355–370.
Garrett, P. M. 2009. Transforming Children’s Services? Social Work, Neoliberalism and the Modern World. Maidenhead: McGraw Hill.
Goode, H. H., and R. E. Machol. 1957. System Engineering: An Introduction to the Design of Large-scale Systems. New York: McGraw Hill.
Greenhalgh, T., J. Russell, R. E. Ashcroft, and W. Parsons. 2011. “Why National Ehealth Programs Need Dead Philosophers: Wittgensteinian Reflections on Policymakers’ Reluctance to Learn from History.” The Milbank Quarterly 89: 533–563.
Greenhalgh, T., K. Stramer, T. Bratan, E. Byrne, J. Russell, and H. W. W. Potts. 2010a. “Adoption and Non-adoption of a Shared Electronic Summary Record in England: A Mixed-method Case Study.” British Medical Journal 340: c3111.
Greenhalgh, T., K. Stramer, T. Bratan, E. Byrne, J. Russell, S. Hinder, and H. W. W. Potts. 2010b. The Devil’s in the Detail. Final Report of the Independent Evaluation of the Summary Care Record and HealthSpace Programmes. London: University College London.
Greenhalgh, T., K. Stramer, T. Bratan, E. Byrne, Y. Mohammad, and J. Russell. 2008a. “Introduction of Shared Electronic Records: Multi-site Case Study Using Diffusion of Innovation Theory.” British Medical Journal 337: a1786.
Greenhalgh, T., K. Stramer, T. Bratan, E. Byrne, J. Russell, Y. Mohammad, G. Wood, and S. Hinder. 2008b. Summary Care Record Early Adopter Programme. An Independent Evaluation by University College London. London: University College London.
Gregoriades, A., and A. G. Sutcliffe. 2006. “Automated Assistance for Human Factors Analysis in Complex Systems.” Ergonomics 49 (12–13): 1265–1287.
Grote, G., S. Weik, T. Wafler, and M. Zolch. 1995. “Criteria for the Complementary Allocation of Functions in Automated Work Systems and their Use in Simultaneous Engineering Projects.” International Journal of Industrial Ergonomics 16 (4–6): 367–382.
Grote, G., C. Ryser, T. Wafler, A. Windischer, and S. Weik. 2000. “KOMPASS: A Method for Complementary Function Allocation in Automated Work Systems.” International Journal of Human–Computer Studies 52 (2): 267–287.
Hall, A. D. 1962. A Methodology for Systems Engineering. Princeton, NJ: Van Nostrand.
Hancock, P. A., and S. F. Scallen. 1996. “The Future of Function Allocation.” Ergonomics in Design 4: 24–29.
Hardman, N., and J. Colombi. 2012. “An Empirical Methodology for Human Integration in the SE Technical Processes.” Systems Engineering 15 (2): 172–190.
Hendrick, H. 1997. “Organizational Design and Macroergonomics.” In Handbook of Human Factors and Ergonomics, edited by G. Salvendy, 2nd ed., 594–637. New York: John Wiley and Sons.
Hendrick, H. W. 1995. “Future Directions in Macroergonomics.” Ergonomics 38 (8): 1617–1624.
Hendrick, H. W. 2002. “An Overview of Macroergonomics.” In Macroergonomics: Theory, Methods, and Applications, edited by H. W. Hendrick, and B. M. Kleiner, 1–24. Mahwah, NJ: Lawrence Erlbaum Associates.
Hoc, J. M. 2000. “From Human–Machine Interaction to Human–Machine Cooperation.” Ergonomics 43 (7): 833–843.
Hoc, J. M., and S. Debernard. 2002. “Respective Demands of Task and Function Allocation On Human–machine Co-operation Design: A Psychological Approach.” Connection Science 14 (4): 283–295.
Hodgkinson, G. P., and M. P. Healey. 2008. “Toward A (Pragmatic) Science of Strategic Intervention: Design Propositions for Scenario Planning.” Organization Studies 29 (3): 435–457.
Hollenbeck, J. R., D. R. Ilgen, D. J. Sego, J. Hedlund, D. A. Major, and J. Phillips. 1995. “Multilevel Theory of Team Decision Making: Decision Performance in Teams Incorporating Distributed Expertise.” Journal of Applied Psychology 80 (2): 292–316.
Hollnagel, E. 2007. “Flight Decks and Free Flight: Where are the System Boundaries?” Applied Ergonomics 38: 409–416.
Hollnagel, E., and A. Bye. 2000. “Principles for Modelling Function Allocation.” International Journal of Human Computer Studies 52 (2): 253–265.
Holness, K. S., C. G. Drury, and R. Batta. 2006. “A Systems View of Personnel Assignment Problems.” Human Factors and Ergonomics in Manufacturing 16 (3): 285–307.
House of Commons Committee of Public Accounts. 2007. Department of Health: The National Programme for IT in the NHS. London: The Stationery Office Ltd.
Institute of Risk Management. 2002. A Risk Management Standard. London: The Institute of Risk Management.
International Organization for Standardization. 2000. Ergonomic Design of Control Centres (ISO 11064-1). Geneva: ISO.
International Organization for Standardization. 2009. Risk Management: Principles and Guidelines (ISO 31000). Geneva: ISO.
Ivergard, T. 2000. “Information Technology and Knowledge-based Business Development.” Behaviour & Information Technology 19 (3): 163–169.
Janis, I. 1972. Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Boston: Houghton, Mifflin.
Jenkins, D. P., N. A. Stanton, P. M. Salmon, G. H. Walker, and M. S. Young. 2008. “Using Cognitive Work Analysis to Explore Activity Allocation Within Military Domains.” Ergonomics 51 (6): 798–815.
Jenkins, G. M., and P. V. Youle. 1971. Systems Engineering: A Unifying Approach in Industry and Society. London: Watts.
Jiang, X. C., J. Bingham, R. Master, A. K. Gramopadhye, and B. J. Melloy. 2002. “Visual Inspection Simulator for Hybrid Environments.” International Journal of Industrial Engineering – Theory Applications and Practice 9 (2): 162–173.


Jiang, X. C., A. K. Gramopadhye, B. J. Melloy, and L. W. Grimes. 2003. “Evaluation of Best System Performance: Human, Automated,and Hybrid Inspection Systems.” Human Factors and Ergonomics in Manufacturing 13 (2): 137–152.

Jiang, X. C., M. T. Khasawneh, R. Master, S. R. Bowling, A. K. Gramopadhye, B. J. Melloy, and L. Grimes. 2004. “Measurement ofHuman Trust in a Hybrid Inspection System Based on Signal Detection Theory Measures.” International Journal of IndustrialErgonomics 34 (5): 407–419.

Jones, I. 2008. “Storyboarding: A Method for Bootstrapping the Design of Computer-Based Educational Tasks.” Computers andEducation 51: 1353–1364.

Jones, D., K. Hapeshi, and C. Frankish. 1989. “Design Guidelines for Speech Recognition Interfaces.” Applied Ergonomics 20 (1):47–52.

Jordan, N. 1963. “Allocation of Functions Between Man and Machines in Automated Systems.” Journal of Applied Psychology 47 (3):161–165.

Jung, Y. S., P. Seong, and M. Kim. 2004. “A Model for Computerized Procedures Based On Flowcharts and Success Logic Trees.”Reliability Engineering & System Safety 83 (3): 351–362.

Kaber, D. B., E. Onal, and M. R. Endsley. 2000. “Design of Automation for Telerobots and The Effect on Performance, OperatorSituation Awareness, and Subjective Workload.” Human Factors and Ergonomics in Manufacturing 10 (4): 409–430.

Kappelman, L. A., R. McKeeman, and L. Zhang. 2006. “Early Warning Signs of IT Project Failure: The Dominant Dozen.” InformationSystems Management 23 (4): 31–36.

Kerr, M. P. 2002. “A Qualitative Study of Shift Handover Practice and Function From a Socio-Technical Perspective.” Journal ofAdvanced Nursing 37 (2): 125–134.

Kidd, P. T. 1992. “Interdisciplinary Design of Skill-based Computer-aided Technologies: Interfacing in Depth.” International Journal ofHuman Factors in Manufacturing 2: 209–228.

Kirwan, B. 2001. “The Role of the Controller in the Accelerating Industry of Air Traffic Management.” Safety Science 37 (2–3):151–185.

Kleiner, B. M., and J. P. Shewchuk. 2001. “Participatory Function Allocation in Manufacturing.” Human Factors and Ergonomics inManufacturing 11 (3): 195–212.

Lagu, A. V., and S. J. Landry. 2011. “Roadmap for the Next Generation of Dynamic Function Allocation Theories and Strategies.”Human Factors and Ergonomics in Manufacturing and Service Industries 21 (1): 14–28.

Landauer, T. K. 1995. The Trouble with Computers. Cambridge, MA: MIT Press.

Landry, S. J. 2012. “The Next Generation Air Transportation System: An Approach to Function Allocation.” Human Factors and Ergonomics in Manufacturing and Service Industries 22 (4): 351–361.

Langan-Fox, J., J. Canty, and M. Sankey. 2009. “Human-automation Teams and Adaptable Control for Future Air Traffic Management.” International Journal of Industrial Ergonomics 39: 894–903.

Lee, J., and N. Moray. 1992. “Trust, Control Strategies and Allocation of Function in Human Machine Systems.” Ergonomics 35 (10): 1243–1270.

Lin, C. J., T.-C. Yenn, and C.-W. Yang. 2010. “Optimizing Human–System Interface Automation Design Based on a Skill-rule-knowledge Framework.” Nuclear Engineering and Design 240 (7): 1897–1905.

Lintern, G. 2012. “Work-focused Analysis and Design.” Cognition Technology & Work 14 (1): 71–81.

Lippert, S. K., and M. Davis. 2006. “A Concept Model Integrating Trust into Planned Change Activities to Enhance Technology Adoption Behavior.” Journal of Information Science 32: 434–448.

Local Government Association. 2008. “Lifting the Burdens Task Force: Review of the Department for Schools, Children and Families.” London: LGA Publications. Accessed December 2, 2012. http://webarchive.nationalarchives.gov.uk/20120919132719/http://www.communities.gov.uk/liftingburdens/publications/finalreports/childrenschoolsfamilies/

Lord Laming. 2003. The Victoria Climbie Inquiry: Report of an Inquiry by Lord Laming. Norwich: The Stationery Office.

Lord Laming. 2009. The Protection of Children in England: A Progress Report. London: The Stationery Office.

Madsen, K., and P. Aiken. 1991. “Cooperative Interactive Storyboard Prototyping.” In Storyboard Prototyping: A New Approach to User Requirements Analysis, edited by S. J. Andriole, 2nd ed. Wellesley, MA: QED Information Sciences.

Madsen, K., and P. Aiken. 1993. “Experiences Using Cooperative Interactive Storyboard Prototyping.” Communications of the ACM 36: 57–64.

Mark, A. 2007. “Modernising Healthcare – Is the NPfIT for Purpose.” Journal of Information Technology 22: 248–256.

Martin, J. 1992. Rapid Application Development. Englewood Cliffs, NJ: Prentice-Hall.

Master, R., X. C. Jiang, M. T. Khasawneh, S. R. Bowling, L. Grimes, A. K. Gramopadhye, and B. J. Melloy. 2005. “Measurement of Trust Over Time in Hybrid Inspection Systems.” Human Factors and Ergonomics in Manufacturing 15 (2): 177–196.

Maughan, A. 2010. “The End of NPfIT.” Outsource Magazine. Accessed December 2, 2012. http://www.outsourcemagazine.co.uk/articles/item/3541-the-end-of-npfit

McCarthy, J. C., E. Fallon, and L. Bannon. 2000. “Dialogues on Allocation of Function.” International Journal of Human Computer Studies 52 (2): 191–201.

McDaniel, J. W. 1996. “The Demise of Military Standards may Affect Ergonomics.” International Journal of Industrial Ergonomics 18: 339–348.

McNeil, A. J., R. Frey, and P. Embrechts. 2005. Quantitative Risk Management: Concepts, Techniques and Tools. Princeton, NJ: Princeton University Press.

Meister, D. 1987. “Systems Design, Development and Testing.” In Handbook of Human Factors, edited by G. Salvendy, 17–42. New York: Wiley.

Millot, P., and G. A. Boy. 2012. “Human–Machine Cooperation: A Solution for Life-Critical Systems?” Work: A Journal of Prevention, Assessment & Rehabilitation 41: 4552–4559.



MIL-STD-46855. 1994. Human Engineering Requirements for Military Systems, Equipment and Facilities. Washington, DC: Department of Defense.

Ministry of Defence (MOD). 1989. Defence Standard 00-25. Human Factors for Designing of Equipment. Part 12: Systems. UK: Ministry of Defence.

Mital, A., A. Motorwala, M. Kulkarni, M. Sinclair, and C. Siemieniuch. 1994a. “Allocation of Functions to Humans and Machines in a Manufacturing Environment. 1. Guidelines for the Practitioner.” International Journal of Industrial Ergonomics 14 (1–2): 3–31.

Mital, A., A. Motorwala, M. Kulkarni, M. Sinclair, and C. Siemieniuch. 1994b. “Allocation of Functions to Humans and Machines in a Manufacturing Environment. 2. The Scientific Basis (Knowledge-base) for the Guide.” International Journal of Industrial Ergonomics 14 (1–2): 33–49.

Moray, N., and T. Inagaki. 1999. “Laboratory Studies of Trust Between Humans and Machines in Automated Systems.” Transactions of the Institute of Measurement and Control 21 (4–5): 203–211.

Moray, N., T. Inagaki, and M. Itoh. 2000. “Adaptive Automation, Trust, and Self-confidence in Fault Management of Time-critical Tasks.” Journal of Experimental Psychology – Applied 6 (1): 44–58.

Moscoso, P. G., T. Wafler, and A. Windischer. 1999. “A Modelling Framework for Complementary Design of Production Planning and Control Systems.” Ergonomics 42 (11): 1492–1506.

Muir, B. M. 1994. “Trust in Automation. 1. Theoretical Issues in the Study of Trust and Human Intervention in Automated Systems.” Ergonomics 37 (11): 1905–1922.

Muir, B. M., and N. Moray. 1996. “Trust in Automation. 2. Experimental Studies of Trust and Human Intervention in a Process Control Simulation.” Ergonomics 39 (3): 429–460.

Mumford, E. 2006. “The Story of Socio-technical Design: Reflections on its Successes, Failures and Potential.” Information Systems Journal 16: 317–342.

Munro, E. 2010. “The Munro Review of Child Protection: Part One: A Systems Analysis.” Accessed December 2, 2012. http://www.education.gov.uk/munroreview/downloads/TheMunroReviewofChildProtection-Part%20one.pdf

Munro, E. 2011. “The Munro Review of Child Protection: Interim Report: The Child’s Journey.” Accessed December 2, 2012. http://www.education.gov.uk/munroreview/downloads/Munrointerimreport.pdf

Nickerson, R. S. 2011. “Roles of Human Factors and Ergonomics in Meeting the Challenge of Terrorism.” American Psychologist 66 (6): 555–566.

Noro, K., and Y. Okada. 1983. “Robotization and Human-factors.” Ergonomics 26 (10): 985–1000.

O’Neal, M. R., W. Lively, and S. Sheppard. 1989. “Software Function Allocation Methodology.” Software-Practice & Experience 19 (8): 775–786.

Older, M. T., P. E. Waterson, and C. W. Clegg. 1997. “A Critical Assessment of Task Allocation Methods and their Applicability.” Ergonomics 40 (2): 151–171.

Orlikowski, W. J. 2000. “Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations.” Organization Science 11 (4): 404–428.

Page, M. R. H. 1995. “Human-factors: A Users View of Ergonomics.” Ergonomics 38 (3): 539–545.

Parasuraman, R., and E. A. Byrne. 2003. “Automation and Human Performance in Aviation.” In Principles of Aviation Psychology, edited by P. Tsang, and M. Vidulich, 311–356. Mahwah, NJ: Erlbaum.

Parasuraman, R., T. B. Sheridan, and C. D. Wickens. 2000. “A Model for Types and Levels of Human Interaction with Automation.” IEEE Transactions on Systems, Man and Cybernetics – Part A: Systems and Humans 30 (3): 286–297.

Peltu, M., K. Eason, and C. W. Clegg. 2008. “How a Socio-Technical Approach Can Help NPfIT Deliver Better NHS Patient Care.” British Computer Society. Accessed February 7, 2011. http://www.bcs.org/upload/pdf/sociotechnical-approach-npfit.pdf

Pidgeon, N. F. 1991. “Safety Culture and Risk Management in Organizations.” Journal of Cross-Cultural Psychology 22: 129–140.

Pisipati, S., K. Chow, and S. R. Payne. 2009. “Choose and Book: An Audit of the Appropriateness of Referrals and their Effect on Patients’ Attendance to an Inner City Hospital Secondary Care Provider.” British Journal of Medical and Surgical Urology 2: 111–116.

Prevot, T., J. R. Homola, L. H. Martin, J. S. Mercer, and C. D. Cabrall. 2012. “Toward Automated Air Traffic Control – Investigating a Fundamental Paradigm Shift in Human/Systems Interaction.” International Journal of Human–Computer Interaction 28 (2): 77–98.

Price, H. E. 1985. “The Allocation of Functions in Systems.” Human Factors 27 (1): 33–45.

Rabiei, R., P. A. Bath, A. Hutchinson, and D. Burke. 2009. “The National Programme for IT in England: Clinicians’ Views on the Impact of the Choose and Book Service.” Health Informatics 15: 167–178.

Randell, B. 2007. “A Computer Scientist’s Reactions to NPfIT.” Journal of Information Technology 22: 222–234.

Rasmussen, J. 1997. “Risk Management in a Dynamic Society: A Modelling Problem.” Safety Science 27 (2/3): 183–213.

Rognin, L., P. Salembier, and M. Zouinar. 2000. “Cooperation, Reliability of Socio-technical Systems and Allocation of Function.” International Journal of Human Computer Studies 52 (2): 357–379.

Roth, E. M. 2008. “Uncovering the Requirements of Cognitive Work.” Human Factors 50 (3): 475–480.

Rouse, W. B., W. J. Cody, and P. R. Frey. 1992. “Lessons Learned in Developing Human Machine System-design Tools.” Information and Decision Technologies 18 (5): 301–308.

Royce, W. 1970. “Managing the Development of Large Software Systems.” In Proceedings of IEEE Wescon, 328–338.

Sackett, D. L., W. S. Richardson, W. Rosenberg, and R. B. Haynes. 2000. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Edinburgh: Churchill Livingstone.

Sage, A. P. 1977. Systems Engineering: Methodology and Applications. New York: IEEE Press.

Sage, A. P. 1992. Systems Engineering. New York: Wiley.

Salmon, P., D. Jenkins, N. Stanton, and G. Walker. 2010. “Hierarchical Task Analysis vs. Cognitive Work Analysis: Comparison of Theory, Methodology and Contribution to System Design.” Theoretical Issues in Ergonomics Science 11 (6): 504–531.



Sarter, N. B., and D. D. Woods. 1995. “How in the World Did We Ever Get into that Mode? Mode Error and Awareness in Supervisory Control.” Human Factors 37 (1): 5–19.

Sauer, J., C.-S. Kao, D. Wastell, and P. Nickel. 2011. “Explicit Control of Adaptive Automation Under Different Levels of Environmental Stress.” Ergonomics 54 (8): 755–766.

Sauer, J., P. Nickel, and D. Wastell. 2013. “Designing Automation for Complex Work Environments Under Different Levels of Stress.” Applied Ergonomics 44 (1): 119–127.

Scallen, S. F., P. A. Hancock, and J. A. Duley. 1995. “Pilot Performance and Preference for Short Cycles of Automation in Adaptive Function Allocation.” Applied Ergonomics 26 (6): 397–403.

Schoemaker, P. J. H. 1991. “When and How to Use Scenario Planning: A Heuristic Approach with Illustration.” Journal of Forecasting 10: 549–564.

Schoemaker, P. J. H. 1992. “How to Link Strategic Vision to Core Capabilities.” Sloan Management Review 34: 67–81.

Schoemaker, P. J. H. 1995. “Scenario Planning: A Tool for Strategic Thinking.” Sloan Management Review 36: 25–40.

Schwenk, C. R. 1984. “Devil’s Advocacy in Managerial Decision-making.” Journal of Management Studies 21: 153–168.

Shaw, I., M. Bell, I. Sinclair, P. Sloper, W. Mitchell, P. Dyson, J. Clayden, and J. Rafferty. 2009. “An Exemplary Scheme? An Evaluation of the Integrated Children’s System.” British Journal of Social Work 39: 613–626.

Sheikh, A., T. Cornford, N. Barber, A. Avery, A. Takian, V. Lichtner, D. Petrakaki, S. Crowe, K. Marsden, A. Robertson, Z. Morrison, E. Klecun, R. Prescott, C. Quinn, Y. Jani, M. Ficociello, K. Voutsina, J. Paton, B. Fernando, A. Jacklin, and K. Cresswell. 2011. “Implementation and Adoption of Nationwide Electronic Health Records in Secondary Care in England: Final Qualitative Results from Prospective National Evaluation in ‘Early Adopter’ Hospitals.” British Medical Journal 343: d6054.

Shepherd, C. 2006. “Constructing Enterprise Resource Planning: A Thoroughgoing Interpretivist Perspective on Technological Change.” Journal of Occupational and Organizational Psychology 79: 357–376.

Sheridan, T. B. 2000. “Function Allocation: Algorithm, Alchemy or Apostasy?” International Journal of Human Computer Studies 52 (2): 203–216.

Sheridan, T. B., K. M. Corker, and E. D. Nadler. 2006. Final Report and Recommendations for Research on Human-Automation Interaction in the Next Generation Air Transportation System. Cambridge, MA: VOLPE.

Siemieniuch, C. E., M. A. Sinclair, and G. M. C. Vaughan. 1999. “A Method for Decision Support for the Allocation of Functions and the Design of Jobs in Manufacturing, Based on Knowledge Requirements.” International Journal of Computer Integrated Manufacturing 12 (4): 311–324.

Simons, T. L., and R. S. Peterson. 2000. “Task Conflict and Relationship Conflict in Top Management Teams: The Pivotal Role of Intragroup Trust.” Journal of Applied Psychology 85: 102–111.

Stanton, N. A. 2006. “Hierarchical Task Analysis: Developments, Applications, and Extensions.” Applied Ergonomics 37 (1): 55–79.

Stanton, N., and P. Marsden. 1997. “Drive-by-wire Systems: Some Reflections on the Trend to Automate the Driver Role.” Proceedings of the Institution of Mechanical Engineers Part D – Journal of Automobile Engineering 211 (4): 267–276.

Steiner, I. D. 1972. Group Process and Productivity. New York: Academic Press.

Strain, J., and K. Eason. 2000. “Exploring the Implications of Allocation of Function for Human Resource Management in the Royal Navy.” International Journal of Human Computer Studies 52 (2): 319–334.

Strange, F. 1996. “Handover: An Ethnographic Study of Ritual in Nursing Practice.” Intensive and Critical Care Nursing 12: 106–112.

Sutcliffe, A., S. Thew, and P. Jarvis. 2011. “Experience with User-centred Requirements Engineering.” Requirements Engineering 16 (4): 267–280.

Symon, G. J., and C. W. Clegg. 1991. “Technology-led Change: A Study of the Implementation of CADCAM.” Journal of Occupational Psychology 64: 273–290.

Takian, A., D. Petrakaki, T. Cornford, A. Sheikh, and N. Barber. 2012. “Building a House on Shifting Sand: Methodological Considerations when Evaluating the Implementation and Adoption of National Electronic Health Record Systems.” BMC Health Services Research 12: 105.

Tatli, A., and M. F. Ozbilgin. 2012. “An EMIC Approach to Intersectional Study of Diversity at Work: A Bourdieuan Framing.” International Journal of Management Reviews 14 (2): 180–200.

The Guardian. 2011. “BIS Provides £145m for e-Infrastructure.” The Guardian, 3 October. Accessed January 9, 2012. http://www.guardian.co.uk/government-computing-network/2011/oct/03/bis-funds-einfrastructure-universities-david-willetts

Trist, E., and K. Bamforth. 1951. “Some Social and Psychological Consequences of the Longwall Method of Coal Getting.” Human Relations 4: 3–38.

Walsh, S. H. 2004. “The Clinician’s Perspective on Electronic Health Records and How They Can Affect Patient Care.” British Medical Journal 328: 1184–1187.

Wang, W., F. Hou, H. Tan, and H. Bubb. 2010. “A Framework for Function Allocation in Intelligent Driver Interface Design for Comfort and Safety.” International Journal of Computational Intelligence Systems 3 (5): 531–541.

Wastell, D., S. White, K. Broadhurst, S. Peckover, and A. Pithouse. 2010. “Children’s Services in the Iron Cage of Performance Management: Street-level Bureaucracy and the Spectre of Svejkism.” International Journal of Social Welfare 19: 310–320.

Waterson, P., Y. Glenn, and K. Eason. 2012. “Preparing the Ground for the ‘Paperless Hospital’: A Case Study of Medical Records Management in a UK Outpatient Services Department.” International Journal of Medical Informatics 81 (2): 114–129.

Waterson, P. E., M. T. Older Gray, and C. W. Clegg. 2002. “A Sociotechnical Method for Designing Work Systems.” Human Factors 44 (3): 376–391.

White, S., K. Broadhurst, D. Wastell, S. Peckover, C. Hall, and A. Pithouse. 2009a. “Whither Practice-Near Research in the Modernization Programme? Policy Blunders in Children’s Services.” Journal of Social Work Practice 23: 401–411.

White, S., C. Hall, and S. Peckover. 2009b. “The Descriptive Tyranny of the Common Assessment Framework: Technologies of Categorisation and Professional Practice in Child Welfare.” British Journal of Social Work 39: 1197–1217.



White, S., D. Wastell, K. Broadhurst, and C. Hall. 2010. “When Policy o’erleaps Itself: The ‘Tragic Tale’ of the Integrated Children’s System.” Critical Social Policy 30: 405–429.

Whitfield, D. 1967. “Human Skill as a Determinate of Allocation of Function.” Ergonomics 10 (2): 154–160.

Wickens, C. D. 1999. “Automation in Air Traffic Control: The Human Performance Issues.” In Automation Technology and Human Performance, edited by M. W. Scerbo, and M. Mouloua, 2–10. Hillsdale, NJ: Lawrence Erlbaum Associates.

Wickens, C. D., and J. G. Hollands. 2000. Engineering Psychology and Human Performance. London: Prentice Hall.

Wickens, C. D., A. S. Mavor, R. Parasuraman, and J. P. McGee. 1998. The Future of Air-Traffic Control: Human Operators and Automation. Washington: National Academy Press.

Wilson, J. R. 2005. “Methods in the Understanding of Human Factors.” In Evaluation of Human Work, edited by J. R. Wilson, and N. Corlett, 3rd ed., 1–31. London: CRC Press.

Wright, P., A. Dearden, and B. Fields. 2000. “Function Allocation: A Perspective from Studies of Work Practice.” International Journal of Human Computer Studies 52 (2): 335–355.

Zhang, A., Z. Tang, and C. Zhang. 2011. “Man–machine Function Allocation Based on Uncertain Linguistic Multiple Attribute Decision Making.” Chinese Journal of Aeronautics 24 (6): 816–822.

Zhang, J. 2005. “Human-centred Computing in Health Information Systems Part 1: Analysis and Design.” Journal of Biomedical Informatics 38: 1–3.

