
Tony Wall

THE MEASUREMENT AND MANAGEMENT OF INTELLECTUAL CAPITAL IN THE PUBLIC SECTOR: TAKING THE LEAD OR WAITING FOR DIRECTION?

Abstract

This paper compares and contrasts the stage of development reached by the public and private sectors with regard to intellectual capital. Whereas the private sector in many parts of the developed world has still not fully embraced the importance of measuring intangible assets, the public sector, with its different objectives, has always had to focus on non-financial results. This has become more critical in recent years due to successive government initiatives that have required the use and disclosure of a number of prescribed performance indicators. Having briefly outlined the history of both intellectual capital and the culture of performance measurement, this paper analyses the results of a survey of public sector organisations in Northern Ireland to assess how they are dealing with both the measurement and management of intellectual capital assets.

Keywords

Intellectual capital, government initiatives, performance measurement, performance management, intellectual capital management


INTRODUCTION

Since the 1970s there has been a major change in attitude amongst private sector organisations as to what constitutes a revenue-generating asset. As the world has moved from an industrial age into an information age there has been an increased emphasis on the measurement and management of more intangible assets, such as knowledge, skill and customer relationships. Whilst there has been a widespread recognition of this phenomenon, accountants have struggled to find an acceptable framework by which to both measure and manage these intangible assets, collectively known as intellectual capital (IC). As a consequence of a lack of both direction from national regulatory bodies and a universally acceptable framework, the approach to IC in the private sector has been fairly piecemeal. The only widely adopted practice has been the use of the Balanced Scorecard (BSC), which, although now generally listed under the banner of IC measurement frameworks, was not originally created for this purpose.

The picture in the public sector is very different and it could be claimed that to organisations operating in this sector the measurement of intangibles is nothing new. The major reforms of the public sector initiated by successive Conservative governments between 1979 and 1997, and continued under Labour since they took power, have placed a major emphasis on performance measurement and the achievement of predetermined targets. Therefore the annual reports of most public sector organisations will disclose a number of measures of performance within what are regarded as the three main categories of IC: human, customer and organisational. However, although public sector organisations might be more familiar with measuring IC assets, as is the case with their private sector counterparts there is little evidence of an overall strategy to manage all aspects of IC. Moreover, although there is a similar widespread use of BSC, the practices of pioneering IC organisations, for example Skandia, Celemi, Ericsson and Ramboll (all Scandinavian companies), are not generally followed. Therefore, although the public sector would appear to be better when it comes to measurement and disclosure, it is seemingly no further forward when it comes to the overall management of IC assets.

This paper investigates the phenomenon of IC in the public sector by using a survey instrument to gather information about the IC practices of leading Northern Ireland (NI) public sector organisations. The paper will seek to ascertain whether the widespread disclosure of IC assets (see Wall and Martin 2003) is backed up by their measurement, and also whether this is mainly a result of adhering to a prescribed regime. It will also investigate whether there is an overall management strategy that seeks to co-ordinate these assets and use them to their optimum value.

INTELLECTUAL CAPITAL IN THE PRIVATE SECTOR

Since the beginning of the 1990s there has been an increasing emphasis on the importance of intangible assets, often referred to as IC, by organisations operating in the private sector. This has come about due to changes in the global economy such as easier access to the capital markets, improved telecommunications and rapid advances in technology. Although it is difficult to pinpoint exactly when the new economy or information age began, since the 1970s there has been a definite shift into a post-industrial era. Economies are no longer dominated by industrial machinery and are more concerned with exploiting the information technology (IT) available to ensure that business entities retain a competitive edge over their rivals. This has been particularly evident in the United Kingdom (UK), where there has been a steady decline in the number of workers engaged in traditional industries such as shipbuilding and mining and a huge increase in those working in service industries such as banking and telecommunications. The modern organisation operates using three types of capital: physical (plant and equipment), financial (cash and investments) and intellectual, with the latter continuing to grow rapidly in importance.

According to May (1997: 91) ‘the old world was obsessed with counting stuff – nuclear warheads, battleships, armoured divisions, military personnel, durable goods, purchases and machine tool orders. The networked economy is driven not by what you can count, but on what you know. The traditional reliance on hard assets has been replaced with a new premium on soft ones’. Therefore the hard assets, i.e. those that tend to appear on balance sheets, such as plant and machinery, are now deemed to be less important than assets such as employee and customer satisfaction, distribution networks, ideas generated by employees and corporate culture. Decker, Herz and Keegan (2002: 36) state that ‘the historical balance sheet only captures, on average, about 20 per cent of the market value of companies today’ with the remaining 80 per cent made up of ‘intangible assets, non financial value drivers’ and the difference between the ‘historical cost and market value of the assets recorded’.
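The scale of this gap can be illustrated with a simple calculation. The sketch below estimates the unrecorded intangible element as the difference between market value and balance sheet value, the crude market-to-book proxy implied by Decker, Herz and Keegan's 20/80 split; the company figures are invented purely for demonstration.

```python
# Illustrative only: estimating the unrecorded intangible element of a
# company's value as the gap between market value and book value.
# The figures below are hypothetical.

def intangible_gap(market_value: float, book_value: float) -> tuple[float, float]:
    """Return the implied intangible value and its share of market value."""
    gap = market_value - book_value
    return gap, gap / market_value

market_value = 10_000  # hypothetical market capitalisation (millions)
book_value = 2_000     # hypothetical balance sheet net assets (millions)

gap, share = intangible_gap(market_value, book_value)
print(f"Implied intangibles: {gap} ({share:.0%} of market value)")
# With book value at 20 per cent of market value, the implied intangible
# element is 80 per cent, matching the quoted average.
```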

IC is generally identified (Dzinkowski 2000, Atrill 1998, Lynn 1998) as being made up of elements that fall into three categories: human capital, customer (or relational) capital and organisational (or structural) capital. The elements of all three categories interact and thereby add value to the organisation. Examples of these elements are: human – skills, abilities, knowledge; customer – customer satisfaction, customer loyalty, supplier relationships; and organisational – culture, intellectual property, manufacturing processes. Human capital is the primary source of organisational innovation and renewal (Agor 1997: 175) and is thus the category that will distinguish one organisation from another, with key elements such as customer perception or corporate culture all being dependent on the qualities of the workforce. One of the earliest writers on this subject was Stewart (1994: 68), who highlighted the importance of these assets and also acknowledged ‘the difficulty of measuring and managing the chief ingredient of the new economy: IC, the intangible assets of skill, knowledge and information’.
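As a concrete illustration of this three-way classification, a minimal sketch is given below; the element names are taken from the examples just listed, and any real organisation's entries would of course differ.

```python
# A minimal illustration of the three-category classification of IC,
# using the example elements listed above.
from typing import Optional

INTELLECTUAL_CAPITAL = {
    "human": ["skills", "abilities", "knowledge"],
    "customer": ["customer satisfaction", "customer loyalty",
                 "supplier relationships"],
    "organisational": ["culture", "intellectual property",
                       "manufacturing processes"],
}

def category_of(element: str) -> Optional[str]:
    """Return the IC category an element belongs to, if any."""
    for category, elements in INTELLECTUAL_CAPITAL.items():
        if element in elements:
            return category
    return None

print(category_of("customer loyalty"))  # -> "customer"
```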

With regard to accounting there are two major issues surrounding IC. The first is the external issue, the reporting of an organisation’s IC assets. Although failure to report such assets can mean that shareholders are not getting the full picture and are not aware of all the elements contributing to the overall market value of the company, the main argument against including them is that, as no universally acceptable method of measuring these assets has been agreed upon, they could be appearing on balance sheets at randomly selected valuations, thereby distorting the picture presented to the shareholder. There is also the competitive factor, whereby organisations do not want to disclose their IC in annual reports, ‘fearing they would give away competitive secrets or attract litigation from disgruntled investors’ (Stewart 1994: 74). The recently launched operating and financial review (OFR), whereby companies will have to include an account in their annual reports of how intangible assets contribute to overall value (Starovic and Marr 2003: 24), will only partly solve this problem as the guidelines are fairly loose and thus subject to individual interpretation. The second is the internal issue, the measurement and management of IC assets, which involves the classification and identification of these softer assets as well as the creation or adoption of some method to measure them so that they are used to their optimum levels of efficiency, thereby adding value to the organisation.

Although several attempts have been made to create a universally acceptable framework by which to manage and measure IC assets, one proposal in particular has gained broad approval: BSC, which was created by Kaplan and Norton in 1992 and ‘provides for the control of intangibles while simultaneously monitoring financial results’ (Brennan and Connell 2000: 220). Since that time BSC has evolved from focusing primarily on strategy implementation and being a communication, informing and learning system to being recognised as a model for many of the IC reporting systems. Although BSC is not without its critics (Adams 2001, Bontis, Dragonetti, Jacobsen and Roos 1999, Nørreklit 2000), it has been widely accepted throughout the world and is used by both public and private sector organisations. Despite this extensive usage of BSC and the general recognition that intangible assets will continue to have an increasing importance in the gaining of competitive advantage, work in the area of IC has been fairly limited. There has been a lot of pioneering work conducted in Sweden and Denmark, and to a lesser degree in the United States of America (USA), Canada, Australia and Japan, but elsewhere organisations have adopted a ‘wait and see’ approach, and the end of the economic boom of the 1990s has merely refocused attention on financial as opposed to intangible outcomes. Although organisations do tend to measure some elements connected with IC, such as staff and customer satisfaction, surveys have shown that this tends to take place in isolation and there is generally no overall strategy. Surveys of note have been conducted in the UK (see Chong, Holden, Wilhelmij and Schmidt 2000), the USA (see Bassi and Van Buren 1999 and Usoff, Thibodeau and Burnaby 2000), Australia (see Petty and Guthrie 1999 and Zhou and Fink 2003) and Austria, The Netherlands and Spain (see Brennan and Connell 2000); their findings can be summarised as follows (CIMA 2002):

• Companies have experienced difficulties in implementing non-financial measurement frameworks and have worried about measuring too much;

• Although the use of non-accounting measures has been widely accepted in practice, their actual implementation has been more informal and ad hoc, and they have almost been superimposed on the formal accounting-based systems;

• Common problems include the difficulty of measuring key drivers of success, employee behaviour not being in line with strategic objectives, the non-financial measurement system conflicting with the culture of the organisation and the development process being too time-consuming or difficult.

Therefore not only is the measurement of IC assets far from comprehensive, but there is even less progress in the area of IC management (ICM). The distinction between measurement and management will be expanded upon later in the paper, but first the situation in the public sector will be investigated.

PERFORMANCE MEASUREMENT AND MANAGEMENT IN THE PUBLIC SECTOR

When the Conservative party returned to power in 1979 under the leadership of Margaret Thatcher, they were determined to drastically reform what they saw as an overstaffed and wasteful government. Against the backdrop of the worldwide New Public Management movement, a series of sweeping initiatives began in 1982 with the introduction of the Financial Management Initiative, which aimed to encourage the development of corporate planning and the delegation of budgets and accountability to managers responsible for service delivery (Byrne 1997: 10). However, it was the next initiative, Next Steps, which was to totally restructure the Civil Service by devolving executive functions to semi-autonomous executive agencies (agencies), which were to deliver services more efficiently and effectively, within available resources, for the benefit of customers, taxpayers and staff (Next Steps Briefing Note 1996). The service delivery aspect was separated from the policy-making aspect of public service and from then onwards a considerable emphasis was placed on improving and reporting performance. The agencies were expected to publish commercial-style accounts and be in a position to have them formally audited, by the National Audit Office (NAO), within two years of launch (Byrne 1997: 10). These accounts were to go beyond the traditional private sector model and were to contain details of how the agency had performed against pre-set targets. According to the Next Steps Report 1997:

The setting of clear, stretching, performance targets is a key part of the philosophy underpinning Next Steps. It is these targets that draw agencies towards the provision of better quality services and better value for the taxpayer. Open and accessible reporting of performance against the targets, which the Government has set, enables agencies to demonstrate how effectively they have discharged the task, which they have been set. (1998, p. 1)

Other government organisations were also affected by these reforms; for example, non-departmental public bodies (NDPBs) were to have similar reporting and accountability practices to agencies (NAO 2000: 11), and performance measurement within government departments was to be driven by Public Service Agreements and Service Delivery Agreements.

Another initiative, the Citizen’s Charter (Cabinet Office 1991), introduced by Mrs Thatcher’s successor, John Major, required public sector organisations to publish their aims and objectives as well as the performance measurements that would be used to assess whether the objectives had been achieved. This had an effect on both local authorities, which from 1992 have had to both use and disclose a number of performance indicators prescribed by the Audit Commission, and the National Health Service (NHS), which had further standards imposed by the Department of Health’s (DOH) Patient’s Charter (1991). As far as higher education was concerned, the Higher Education Statistics Agency (HESA) was set up in 1993 following a government White Paper, ‘Higher Education: A New Framework’, which called for more coherence in higher education statistics. From that time on, universities from all over the UK were expected to send monitoring and performance statistics to HESA on an annual basis. When Labour came into power in 1997 they did not reverse any of the changes introduced by the Conservatives in the area of performance measurement, and actually increased the number of performance indicators used and disclosed by local authorities through the introduction of Best Value via the Local Government Act of 1999 (see also DETR 1998). Furthermore, there has been a new plan for the NHS (DOH 2000) and a recent White Paper for higher education (Department for Education and Skills 2003), both of which have an emphasis on performance measurement. In NI, colleges of further and higher education, which also took part in this survey, work towards targets set by the Education Training Inspectorate and the Department of Employment and Learning.

If some of the areas measured by the various indicators used by either central or local government organisations are examined, it can be seen that the three categories of intellectual capital (human, customer and organisational) are all covered. For example, the measurements of the Water Service (2002), an agency, include the development of an environmental management system (organisational), response to customer correspondence (customer) and the continued accreditation of Investors in People (human); likewise, North Down Borough Council (2002) measures items such as the percentage of total waste received which is recycled (organisational), the percentage of citizens satisfied with council services (customer) and the proportion of working days lost to sickness or unauthorised absence (human). As all public sector organisations will make measurements that are similar in nature, though not identical, this sector is probably at a more advanced stage than the private sector in the UK when it comes to measuring intangible assets. However, leaving aside the notion that this is mainly due to an adherence to government initiatives, the two sectors have very different operational environments. The public sector does not have the same objectives as the private sector, in which the dominant goal in most cases is to earn a satisfactory profit, an outcome that is easily measured. Public sector organisations, however, have multiple goals, and thus ‘effectiveness in attaining these goals rarely can be measured by quantitative amounts’ (Anthony and Govindarajan 1998: 681). Furthermore, public sector organisations have always used resources to achieve mainly intangible outcomes and have traditionally been, although the gap is closing, more human capital intensive. Cinca, Molinero and Queiroz (2003: 253) felt that there was less urgency for public sector managers to quantify their intangible assets, whereas in the private sector it is paramount to value them in order to value the firm, particularly if the firm is sold.

Before turning to the survey it is worth highlighting that, although the terms measurement and management, when it comes to performance, are often used interchangeably, they are not the same thing. According to Amaratunga and Baldry (2002: 217), measurement provides the basis for an organisation to assess how well it is progressing towards its predetermined objectives, helps to identify areas of strength and weakness, and decides on future initiatives, with the goal of improving organisational performance. Measurement is not an end in itself, but a tool for more effective management. Results of performance measurement indicate what happened, not why it happened, or what to do about it. In order for an organisation to make effective use of its performance measurement outcomes it must be able to make the transition from measurement to management. One can also distinguish between performance management as practised by public sector organisations, which is mainly reactionary in nature, and ICM, which is more proactive and involves the identification and auditing of an inventory of IC assets, as well as a continuous evaluation of the value added by IC (Lynn 1998: 13). It should be noted that the use of BSC in itself does not mean that an organisation is practising ICM. Roos (Chatzkel 2002: 110) feels that whilst BSC allows you to identify effectively the resources you need to employ, therefore acting as an identifier and method of auditing IC assets, it may not be so effective in the continuous evaluation of the value added. Indeed, Veen-Dirks and Wijn (2002: 408-409) found that as a diagnostic tool BSC provided great opportunities, but was less effective as a means of strategic control. Roos (Chatzkel 2002: 112) also believes that there are three generations of IC practices, with tools such as BSC belonging to the first. The third, and current, generation of IC practices requires what he calls a Holistic Value Added (HVA) approach, whereby the desired outcome will be measured from the point of view of several different stakeholders. In this respect, local authorities, which have been urged to consult with as wide a range of stakeholders as possible when it comes to planning and decision-making, could be ahead of the majority of private sector organisations that still tend to view the shareholder as the primary stakeholder, with all other groups subservient to their needs (ASB 1991).
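Lynn's description of ICM as maintaining an inventory of IC assets, auditing it regularly and continuously evaluating the value added can be made concrete with a small data-structure sketch. The field names and the staleness rule below are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch of Lynn-style ICM: an inventory of IC assets, a simple
# audit-due check and an aggregate value-added evaluation. Field names,
# sample records and the 365-day review cycle are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ICAsset:
    name: str
    category: str          # "human", "customer" or "organisational"
    last_audited: date
    value_added: float     # latest estimate of value added (e.g. GBP)

def audit_due(asset: ICAsset, today: date, max_age_days: int = 365) -> bool:
    """Flag assets whose last audit is older than the review cycle."""
    return (today - asset.last_audited).days > max_age_days

inventory = [
    ICAsset("staff expertise", "human", date(2003, 6, 1), 120_000.0),
    ICAsset("customer database", "customer", date(2002, 3, 1), 45_000.0),
]

today = date(2004, 1, 1)
overdue = [a.name for a in inventory if audit_due(a, today)]
total_value_added = sum(a.value_added for a in inventory)
print(overdue, total_value_added)  # ['customer database'] 165000.0
```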

METHODOLOGY

In order to test some of the theories proposed concerning the stage reached with regard to IC, a survey of 100 leading NI public sector organisations was conducted. See Table 1 for a sector breakdown of the organisations approached and of those that actually took part in the survey. The method used was a postal questionnaire that consisted of one open and 17 closed questions, although for seven of these questions organisations were encouraged to offer responses different to those listed. As Denscombe (1998) notes, questionnaires supply standardised answers, given that respondents are all posed the same question. The great strength of this characteristic is the ability to collect data that are unlikely to be contaminated by variations in the wording of the questions or the manner in which the question is asked. Postal questionnaires are useful in that they permit wide geographical contact; however, they are also problematic. They have a generally low response rate and, as Denscombe states, they offer little opportunity for the researcher to check the truthfulness of the answers given by the respondents. The first of these problems was overcome firstly by following up the original survey with a second, using an identical questionnaire, and secondly by the use of both e-mail and the telephone to contact those organisations that had not responded. The eventual number of usable responses was 57, giving a response rate of 57 per cent. This was deemed to be a suitable response, particularly as there was a minimum response rate of 55 per cent in each of the five sectors. The survey investigated the following areas: recognition of the term IC; whether any staff worked on IC; use and measurement of databases; most important elements of IC; measurements made within the three categories; use of IC frameworks; attitude towards employee innovation; strategy formulation; and ICM practices.

Table 1: Sector Breakdown

Sector                                             Number Approached   Number of Usable Responses (%)
Government Department                              11                  6 (55%)
Executive Agency & Non-Departmental Public Body    27                  15 (56%)
Local Authority                                    26                  15 (58%)
Education (institutions and boards)                24                  14 (58%)
Health (institutions and boards)                   12                  7 (58%)
Totals                                             100                 57 (57%)
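As a quick check of how the figures in Table 1 combine, the short sketch below recomputes the sectoral and overall response rates; it assumes nothing beyond the counts in the table.

```python
# Recomputing the response rates reported in Table 1.
approached = {
    "Government Department": 11,
    "Executive Agency & NDPB": 27,
    "Local Authority": 26,
    "Education": 24,
    "Health": 12,
}
responded = {
    "Government Department": 6,
    "Executive Agency & NDPB": 15,
    "Local Authority": 15,
    "Education": 14,
    "Health": 7,
}

for sector, n in approached.items():
    print(f"{sector}: {responded[sector]}/{n} = {responded[sector] / n:.0%}")

overall = sum(responded.values()) / sum(approached.values())
print(f"Overall: {sum(responded.values())}/{sum(approached.values())} = {overall:.0%}")
# Reproduces the 55-58 per cent sector rates and the 57 per cent overall rate.
```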

THE STATE OF INTELLECTUAL CAPITAL IN THE PUBLIC SECTOR

The first factors to be established were whether the term ‘intellectual capital’ was recognised by the respondent and whether the organisation concerned had any person or team working with IC. Although 49 of the respondents (86 per cent) were familiar with the term, only eight (14 per cent) had someone working with IC, with half of these working in teams. When asked the function of this person or team, the five that responded had different members of staff working in this area, the various roles being: head of human resources and business support, business analyst, accountant, and head of development, with the one identified team working in risk management. A sector analysis showed that half of the organisations that had people or teams working with IC were either agencies or NDPBs.

Organisations were then asked about databases; specifically, which ones were currently in operation and what measurements were made against these databases. Arora (2002: 246) feels that databases are one of the best examples of codification, that is, when human capital becomes part of the organisational capital and knowledge moves from being tacit to explicit, or from individual to organisational knowledge. He further states that it is essential that companies check whether such repositories of knowledge are actually being used, since if they are not they are of little value. Surprisingly, two organisations were not maintaining any databases; however, the remaining 55 (96 per cent) had at least one, with 48 (84 per cent) maintaining more than one. Fifty-two organisations (91 per cent) maintained a customer database, with 45 (79 per cent) maintaining a mailing list database and only 11 (19 per cent) having an ideas database. This latter point would seem to suggest that there is still a lack of innovation in the public sector; however, this point is returned to later. Twenty-four organisations (42 per cent) maintained databases other than those listed in the questionnaire and, although eleven of these were only relevant to the activities of the particular organisation, there were some areas of commonality. Eight organisations maintained human resource databases, two of which specified that they kept records of skills, competences and training, with other databases of interest, each maintained by two organisations, being ones containing information on stakeholders, performance management and shared knowledge. When it came to the actual measurement of databases there was generally a low level of activity going on, with just under half of the organisations not making any measurements at all. The highest level of measurement pertained to upgrades (22, i.e. 39 per cent), with 17 (30 per cent) measuring links, 15 (26 per cent) measuring the number of times the database was being used and only eight (14 per cent) measuring contributions.
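Arora's point that repositories are only valuable if actually used suggests a very simple usage check. The sketch below counts the kinds of measurement the survey asked about (upgrades, links, uses and contributions); the event-log format is an invented assumption, not a format the survey investigated.

```python
# A minimal sketch of checking whether a knowledge database is actually
# used, in the spirit of Arora (2002). The event log is hypothetical.
from collections import Counter

# Hypothetical log of events recorded against a database.
events = ["use", "use", "contribution", "upgrade", "use", "link"]

counts = Counter(events)
print(counts)  # Counter({'use': 3, 'contribution': 1, 'upgrade': 1, 'link': 1})

if counts["use"] == 0 and counts["contribution"] == 0:
    # An unused, uncontributed-to repository is of little value.
    print("Repository appears unused")
```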

Organisations were then asked about the level of importance they attached to certain elements of intellectual capital, ranging from ‘of vital importance’ to ‘not important at all’. The responses can be seen in Table 2. Most elements were seen to be at least important by over 85 per cent of the responding organisations, with, perhaps unsurprisingly, the elements deemed to be of vital importance to most organisations being their reputation and customer satisfaction. Of greater surprise was the fact that one local government organisation thought that employee innovation was not important at all and one educational institution thought the same about best practice; however, neither of these ratings was reflective of their sectors. The survey then turned to the three recognised categories of IC (human, customer and organisational) to see what individual measurements organisations were making in each category. The results can be seen in Tables 3, 4 and 5.

From Table 3 it can be seen that there is a particular emphasis on training in public sector organisations, with high levels of measurement of both spending on training and post-training evaluation, an often overlooked practice. Staff satisfaction surveys are commonplace and, along with easy-to-measure elements such as length of service and professional qualifications, it could reasonably be expected that there would be high levels of measurement, as was the case. For the same reason it was therefore surprising that staff turnover was not measured by more organisations. Value added per employee, although an essential measurement, is extremely difficult to gauge with any accuracy in practice, and thus such a small figure was to be expected.

Table 2: The Importance of the Various Elements of Intellectual Capital

Element                       Of vital importance   Very important   Important   Not very important   Not important at all
Information                   35 (62%)              15 (26%)         7 (12%)     0                    0
Customer satisfaction         46 (81%)              10 (17%)         1 (2%)      0                    0
Reputation of organisation    48 (84%)              8 (14%)          1 (2%)      0                    0
Stakeholder relationships     36 (63%)              19 (33%)         2 (4%)      0                    0
Public/Private partnerships   14 (24%)              25 (44%)         12 (21%)    5 (9%)               1 (2%)
Social commitment             21 (37%)              23 (40%)         12 (21%)    1 (2%)               0
Environmental commitment      15 (26%)              23 (40%)         14 (25%)    5 (9%)               0
Employee innovation           23 (40%)              25 (44%)         8 (14%)     0                    1 (2%)
Workforce expertise           38 (67%)              17 (30%)         2 (3%)      0                    0
Workforce knowledge           38 (67%)              17 (30%)         2 (3%)      0                    0
Intellectual property         16 (28%)              22 (39%)         13 (23%)    4 (7%)               2 (3%)
Organisational processes      19 (33%)              31 (54%)         6 (11%)     1 (2%)               0
Mailing/telephone lists       9 (16%)               21 (37%)         21 (37%)    5 (9%)               1 (1%)
Consultancy/advice            4 (7%)                23 (40%)         22 (39%)    8 (14%)              0
Software                      17 (30%)              26 (45%)         12 (21%)    1 (2%)               1 (2%)
Best practice                 30 (52%)              20 (35%)         5 (9%)      1 (2%)               1 (2%)

Table 3: Elements of Human Capital Being Measured

Element of human capital                            Number of organisations measuring element   Percentage
Employee satisfaction                               46                                          81%
Number of years’ service per employee               46                                          81%
Number of senior positions filled by junior staff   7                                           12%
Development and training spend per employee         46                                          81%
Post-training evaluation exercise                   49                                          86%
Staff with professional qualifications              44                                          77%
Staff turnover at all levels                        37                                          65%
Aptitude of staff                                   22                                          39%
New ideas generated by members of staff             26                                          46%
Value added per employee                            5                                           9%

With regard to customers in the public sector, comparisons cannot be made with private sector organisations, as it would not be a like-for-like comparison. Customers in the public sector have a lack of choice when it comes to service provision and, despite the various attempts of successive governments, there is little or no competition. However, over the past two decades there has been a recognition that customers in the public sector have become more vociferous and more likely to complain about poor service (see for example Bolton 2002 and Brennan and Douglas 2002). Furthermore, according to Osbourne (2000: 37) ‘the Citizen's Charter created a paradigm shift among public employees. It moved the customer from the periphery of organisations' concerns to the centre’; therefore customer issues are just as important, if not as vital for survival, in the public sector. It was slightly surprising therefore that not all organisations were measuring either customer satisfaction or customer complaints (see Table 4), particularly when 98 per cent stated that customer satisfaction was at least very important (see Table 2). A sector analysis did not reveal any trends apart from the fact that all organisations associated with health measured complaints. It would be expected that image would be measured as part of customer satisfaction surveys and that levels of measurement would therefore be higher, particularly as reputation was again deemed to be at least very important by 98 per cent of respondents. Conversely, transparency, though seen to be an essential quality of any public sector organisation, and the effectiveness of advertising campaigns are harder to evaluate.

Table 4: Elements of Customer Capital Being Measured

Element of customer capital             Number of organisations measuring element   Percentage
Number of customers                     41                                          72%
Customer satisfaction                   51                                          89.5%
Telephone response times                25                                          44%
Image of organisation                   31                                          54%
Transparency of organisation            17                                          30%
Customer complaints                     49                                          86%
Effectiveness of advertising campaign   26                                          46%

Out of the three categories, organisational capital was the one that organisations measured the least, with only best practice being measured by more than half of those responding (see Table 5). Many of the elements were connected to innovation, which will be dealt with later in the paper, yet others would seemingly be captured fairly easily. IT expenditure as a percentage of administration spending could surely be gleaned from budgetary statements, and as changes, however minor, will usually result from the findings of either employee or customer satisfaction surveys, it is surprising that these are not monitored by more organisations.

Table 5: Elements of Organisational Capital Being Measured

Element of organisational capital                                      Number of organisations measuring element   Percentage
Ratio of new ideas generated to new ideas implemented                  7                                           12%
Value of new ideas (e.g. money saved)                                  25                                          44%
Number of new services introduced                                      24                                          42%
Changes implemented due to employee or customer satisfaction surveys   27                                          47%
Expenditure on rewards for innovation                                  15                                          26%
IT expenditure as a percentage of administration spend                 25                                          44%
Best practice                                                          31                                          54%
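Comparing the stated importance of an element (Table 2) against the proportion of organisations actually measuring it (Tables 3 to 5) gives a simple mismatch check of the kind returned to in the conclusions. The percentages below are taken from those tables, but the pairing of importance ratings with measurement items is this sketch's own illustrative assumption.

```python
# Illustrative cross-check of stated importance (Table 2) against actual
# measurement (Tables 3-5). The element pairings are assumptions.
importance = {   # per cent rating the element at least "important" (Table 2)
    "customer satisfaction": 100,
    "reputation of organisation": 100,
    "employee innovation": 98,
}
measured = {     # per cent of organisations measuring the element
    "customer satisfaction": 89.5,      # Table 4
    "reputation of organisation": 54,   # Table 4, "image of organisation"
    "employee innovation": 46,          # Table 3, "new ideas generated"
}

for element in importance:
    gap = importance[element] - measured[element]
    print(f"{element}: importance {importance[element]}%, "
          f"measured {measured[element]}%, gap {gap:.1f} points")
```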

When it comes to the overall evaluation of all the elements making up IC, there are a number of scorecard-type frameworks, apart from BSC, which are equally applicable to the public sector, yet take-up has been fairly limited. Examples would be the business navigator (Edvinsson and Malone 1997), the intangible assets monitor (Sveiby 1997) and the IC-Index (see Atrill 1998: 58). However, no public sector organisation that took part in this survey was using any of these, with the most popular frameworks being Investors in People (IIP), used by 47 organisations (82.5 per cent), followed by BSC and the European Foundation for Quality Management’s Business Excellence Model (EFQM BEM), both used by 29 organisations (51 per cent). Only six organisations were not using any framework at all and 40 (70 per cent) were using more than one. All government departments and agencies/NDPBs were using IIP.

A major focus of the survey was what public sector organisations were doing to encourage innovation. Innovation is one of the key elements of IC; however, the ability of employees to be innovative is an attribute traditionally associated with private sector workers, who receive better rewards for their new ideas and are not so berated by the media and opposition parties when things go wrong. However, Borrins (2001: 311) found some good examples of public sector innovation in the USA, particularly at middle-manager and front-line level, where staff were responsible for 50 per cent of new ideas. Although this survey did not investigate whether certain employees were more innovative than others, it did ask questions about the environment organisations were creating to promote innovation. The responses were as follows: 24 (42 per cent) are offering financial incentives; 25 (44 per cent) are offering other rewards, such as the names of those coming up with good ideas being published in in-house magazines; 24 do not level any criticism at ideas that turn out to be unfeasible; and 15 (26 per cent) claim that risk taking is part of the organisational culture. Yahya and Goh (2002: 466) state that companies should encourage risk taking when it comes to problem solving, and thus an atmosphere where there is no fear of failure needs to be created. Whilst there was some evidence of a supportive culture, overall the survey produced a rather mixed set of responses. Only 11 (19 per cent) organisations maintained an ideas database, yet 48 (84 per cent) stated that employee innovation was at least very important (see Table 2). Other relevant findings (see Tables 3 and 5) were: new ideas generated by members of staff, measured by 26 (46 per cent) organisations; the ratio of new ideas generated to new ideas implemented, measured by seven (12 per cent); the value of new ideas, measured by 25 (44 per cent); and expenditure on rewards for innovation, measured by 15 organisations (26 per cent), a smaller number than those actually financially rewarding innovative staff. Whilst it could be argued that the fact that only seven organisations measure the ratio of new ideas generated to those implemented indicates that there is a lack of censure for ideas that do not work, there was very little evidence of an overall strategy towards innovation, with none of the responses to the related questions forming any clear pattern.

The organisations were then asked for their attitude towards strategy formulation. The choices and responses can be seen in Table 6; it should be noted that respondents could make more than one selection.

Table 6: Attitude Towards Strategy Formulation

Factor                                          Number of organisations   Percentage
Carried out by senior management only           21                        37%
All staff involved in decision making process   27                        47%
Consultation with stakeholders                  45                        79%
Reaction to targets not achieved                32                        56%
Use of BSC or similar framework                 16                        28%
Prescribed by central government                24                        42%
More than one                                   44                        77%
All                                             1                         2%
None                                            1                         2%

It was very encouraging to see such a high level of consultation with stakeholders, indicating that the public sector is taking the HVA approach that Roos (Chatzkel 2002: 112) linked with the current, or third, generation of IC practices. However, over a third of organisations still rely on a top-down approach with only senior management involved. Just under half include staff, although it should be noted that several organisations qualified ‘all staff’ to ‘some staff’ when responding. One question this paper was seeking to answer was whether, if public sector organisations were measuring IC, this was mainly due to prescribed government regimes. The first indication that this may be the case is that over half of the responding organisations formulate their strategy as a reaction to targets that have not been achieved. Apart from a small number of targets set internally by Chief Executives, all such targets are set by the government or government bodies. A further indication was that 24 organisations (42 per cent) felt that the majority of their activities were prescribed by central government, and although this does not prove beyond doubt that these public sector organisations only engage in IC practices due to government-driven initiatives, it does suggest a lack of independence. Kennerley (2004) makes the distinction between the private sector, where organisations only measure performance that reflects their improvement priorities, and the public sector, where organisations measure performance in order to report externally. He sees several problems in this: for one thing, the targets set are not necessarily relevant to the ‘individual NHS Trust or Local Authority’, and there are already too many targets to achieve for each single organisation to begin creating extra ones that are more applicable to them. With regard to the large number of existing targets, he believes that this lack of focus makes it ‘impossible to understand what the priorities are’. What he feels is needed is a focus of attention on the ultimate objective of the organisation, from which more appropriate measures can be drawn up; otherwise, ‘with the emphasis placed on the specific targets prescribed by central government, attention will only be paid to achieving these targets’.

It has been suggested by writers such as Lynn (1998: 12) that in order to practise effective ICM, organisations should draw up and maintain an inventory of IC assets, audit these assets on a regular basis and, from this audit, try to establish the value added by them. The responses to the final three questions showed that there were generally very low levels of compliance with such practices, as only five organisations (9 per cent) were keeping an inventory, seven (12 per cent) were auditing IC assets and five were attempting to establish value added.

CONCLUSIONS

From the findings outlined above it is clear that, whilst public sector organisations are performing well when it comes to the measurement of IC assets and are making great attempts to manage these softer assets by the use of frameworks such as IIP, there would appear to be a lack of an overall ICM strategy. Not everything that was deemed to be important was being measured, and in areas such as innovation the approach, though commendable in many cases, appeared to be a little chaotic. If the responses in Table 2 are followed through to their related measurements there was often a mismatch; for example, the reputation of the organisation was deemed to be at least important by all respondents, yet was being measured by only just over half. Moreover, despite the large number of respondents measuring both staff and customer satisfaction (81 per cent and 89.5 per cent respectively), only 47 per cent were actually measuring changes brought about by satisfaction surveys. When it came to direct ICM practices, such as the creation and auditing of an inventory of all IC assets and the consequent evaluation of these assets to see if they were gaining, or otherwise, in value, there was very little work going on; furthermore, very few organisations have staff dedicated to IC, as is common practice in Swedish companies such as Skandia and Celemi. Moreover, although the overall use of frameworks was greater than that of Irish private sector organisations (see Wall, Kirk and Martin 2004), the widespread use of IIP by the public sector really only focuses on one aspect of IC, human capital, and there was a lesser use of more comprehensive frameworks, such as BSC or EFQM BEM.

When one considers the aforementioned surveys carried out on private sector organisations in the area of knowledge transfer and the measurement of other intangible assets, it would appear from the findings of this paper that the public sector is slightly ahead when it comes to the measurement of IC assets. However, this is to be expected due to a combination of the environments they work in and the various government initiatives that prescribe both the measurement of, and disclosure of the achievement of, predetermined targets. Although the public sector has always been human capital intensive and focussed on intangible outcomes, it is unlikely that such a large number of measurements would have been made had successive governments not imposed them. Several IC measurements can be directly equated with those imposed under initiatives such as Next Steps or Best Value, for example employee satisfaction, customer satisfaction and customer complaints. However, this in itself does not explain the range of elements being measured, not all of which relate to prescribed measurements; furthermore, there was only limited evidence from the survey that organisations formulated their strategy solely due to government initiatives or as a reaction to targets that have not been achieved, therefore environmental factors also play a major role. With regard to ICM, there was some evidence of work being conducted in order to try to assess holistically the impact the IC assets were having on overall efficiency, but generally public sector organisations would appear to be waiting for direction on how best to channel their resources into this area: not necessarily from the private sector, but from either a regulatory body or an innovator who can create a readily adaptable and universal framework for the current, or third, generation of IC practices.


REFERENCES

Accounting Standards Board (ASB) (1991) Exposure Draft: Objective of Financial Statements and the Qualitative Characteristics of Financial Information, London: ASB.
Adams, C. (2001) ‘The Performance Prism in Practice’. Measuring Business Excellence, 5:2 pp6-12.
Agor, W. H. (1997) ‘The Measurement, Use, and Development of Intellectual Capital to Increase Public Sector Productivity’. Public Personnel Management, 26:2 pp175-86.
Amaratunga, D. and Baldry, D. (2002) ‘Moving from Performance Measurement to Performance Management’. Facilities, 20:5/6 pp217-23.
Anthony, R. N. and Govindarajan, V. (1998) Management Control Systems, 9th edn, London: Irwin McGraw Hill.
Arora, R. (2002) ‘Implementing KM: A Balanced Score Card Approach’. Journal of Knowledge Management, 6:3 pp240-9.
Atrill, P. (1998) ‘Intellectual Assets: The New Frontier’. ACCA Students’ Newsletter, December pp56-9.
Bassi, L. J. and Van Buren, M. E. (1999) ‘Valuing Investments in Intellectual Capital’. International Journal of Technology Management, 18:5-8 pp414-32.
Bolton, S. C. (2002) ‘Consumer as King in the NHS’. The International Journal of Public Sector Management, 15:2 pp129-39.
Bontis, N., Dragonetti, N. C., Jacobsen, K. and Roos, G. (1999) ‘The Knowledge Toolbox: A Review of the Tools Available to Measure and Manage Intangible Resources’. European Management Journal, 17:4 pp391-402.
Borrins, S. (2001) ‘Encouraging Innovation in the Public Sector’. Journal of Intellectual Capital, 2:3 pp310-19.
Brennan, N. and Connell, B. (2000) ‘Intellectual Capital: Current Issues and Policy Implications’. Journal of Intellectual Capital, 1:3 pp206-40.
Brennan, C. and Douglas, A. (2002) ‘Complaints Procedures in Local Government: Informing your Customers’. The International Journal of Public Sector Management, 15:3 pp219-36.
Byrne, L. (1997) Information Age Government: Delivering the Blair Revolution, London: Fabian Society.
Cabinet Office (1991) The Citizen's Charter, London: HMSO.
Chartered Institute of Management Accountants (CIMA) (2002) ‘Latest Trends in Corporate Performance Measurement’. Technical Briefing, London: CIMA.
Chatzkel, J. (2002) ‘An Interview with Goran Roos’. Journal of Intellectual Capital, 3:2 pp96-117.
Chong, C. W., Holden, T., Wilhelmij, P. and Schmidt, R. A. (2000) ‘Where Does Knowledge Management Add Value?’. Journal of Intellectual Capital, 1:4 pp366-83.
Cinca, C. S., Molinero, C. M. and Queiroz, A. B. (2003) ‘The Measurement of Intangible Assets in the Public Sector Using Scaling Techniques’. Journal of Intellectual Capital, 4:2 pp249-75.
Decker, W. E., Herz, R. H. and Keegan, E. M. (2002) ‘Accounting Standards’, pp. 33-55 in DiPiazza, S. A. and Eccles, R. G. (eds.) Building Public Trust: The Future of Corporate Reporting, New York: John Wiley & Sons.
Denscombe, M. (1998) The Good Research Guide, Buckingham: Open University Press.
Department of Education (1993) White Paper - Higher Education: A New Framework, London: HMSO.
Department for Education and Skills (2003) White Paper - The Future of Higher Education, Norwich: The Stationery Office.
Department of the Environment, Transport and the Regions (DETR) (1998) Best Value and Audit Commission Performance Indicators for 2000/2001, London: HMSO.
Department of Health (1991) The Patient's Charter, London: HMSO.
Department of Health (2000) The NHS Plan, London: HMSO.
Dzinkowski, R. (2000) ‘The Measurement and Management of Intellectual Capital: An Introduction’. Management Accounting, 78:2 pp32-6.
Edvinsson, L. and Malone, M. (1997) Intellectual Capital: Realizing your Company’s True Value by Finding its Hidden Brainpower, New York: Harper.
Kaplan, R. S. and Norton, D. P. (1992) ‘The Balanced Scorecard: Measures That Drive Performance’. Harvard Business Review, January/February pp71-9.
Kennerley, M. (2004) Measuring Performance in the Public Sector - Learning the Lessons, Cranfield: Centre for Business Performance, Cranfield School of Management.
Local Government Act (1999) ‘Part 1 - Best Value’. DETR Circular 10/99, London: DETR.
Lynn, B. (1998) ‘Intellectual Capital’. CMA Management, 72:1 pp10-15.
May, T. A. (1997) ‘The Death of ROI: Re-Thinking IT Value Measurement’. Information Management and Computer Security, 5:3 pp90-2.
National Audit Office (2000) Good Practice in Performance Reporting in Executive Agencies and Non-Departmental Public Bodies, London: Stationery Office.
Next Steps Briefing Note (1996), London: Office of Public Service.
Next Steps Report 1997 (1998), London: Office of Public Service.
Nørreklit, H. (2000) ‘The Balance on the Balanced Scorecard: A Critical Analysis of Some of its Assumptions’. Management Accounting Research, 11:1 pp65-88.
North Down Borough Council (2002) Corporate Plan 2002–2006, Bangor: North Down Borough Council.
Osbourne, D. (2000) ‘Lessons from Abroad’. Government Executive, 32:1 pp35-7.
Petty, R. and Guthrie, J. (1999) ‘Managing Intellectual Capital: From Theory to Practice’. Australian CPA, 69:7 pp18-21.
Starovic, D. and Marr, B. (2003) Understanding Corporate Value: Managing and Reporting Intellectual Capital, London: Chartered Institute of Management Accountants.
Stewart, T. A. (1994) ‘Your Company’s Most Valuable Asset: Intellectual Capital’. Fortune, 130:7 pp68-74.
Sveiby, K. (1997) The Invisible Balance Sheet: Key Indicators for Accounting, Control and Evaluation of Know-How Companies, Stockholm: The Konrad Group.
Usoff, C. A., Thibodeau, J. C. and Burnaby, P. (2000) ‘The Importance of Intellectual Capital and Its Effect on Performance Measurement Systems’. Managerial Auditing Journal, 17:1-2 pp9-15.
Veen-Dirks, P. v. and Wijn, M. (2002) ‘Strategic Control: Meshing Critical Success Factors with the Balanced Scorecard’. Long Range Planning, 35:4 pp407-27.
Wall, A., Kirk, R. and Martin, G. (2004) Intellectual Capital: Measuring the Immeasurable, Oxford: Elsevier.
Wall, A. and Martin, G. (2003) ‘The Disclosure of KPIs: How Irish Organisations are Performing’. Public Management Review, 5:4 pp491-509.
Water Service (2002) Annual Report and Accounts 2001–2002, Belfast: Water Service.
Yahya, S. and Goh, W-K. (2002) ‘Managing Human Resources Towards Achieving Knowledge Management’. Journal of Knowledge Management, 6:5 pp457-68.
Zhou, A. Z. and Fink, D. (2003) ‘Knowledge Management and Intellectual Capital: An Empirical Examination of Current Practice in Australia’. Knowledge Management Research & Practice, 1:2 pp86-94.