
ARTICLE IN PRESS

Omega 38 (2010) 359–370


0305-0483/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.

doi:10.1016/j.omega.2009.10.001

⁎ Corresponding author at: Ghent University, Tweekerkenstraat 2, 9000 Gent, Belgium. Tel.: +32 92643569.

E-mail address: mario.vanhoucke@ugent.be

journal homepage: www.elsevier.com/locate/omega

Using activity sensitivity and network topology information to monitor project time performance

Mario Vanhoucke a,b,⁎

a Ghent University, Tweekerkenstraat 2, 9000 Gent, Belgium
b Vlerick Leuven Gent Management School, Reep 1, 9000 Gent, Belgium

a r t i c l e i n f o

Article history:

Received 27 February 2008

Accepted 6 October 2009

Processed by B. Lev

Available online 13 October 2009

Keywords:

Project management

Simulation

a b s t r a c t

The interest in activity sensitivity from both academics and practitioners lies in the need to focus a project manager's attention on those activities that influence the performance of the project. When management has a feel for the relative sensitivity of the various parts (activities) of the project to the project objective, a sharper management focus and a more accurate response during project tracking should positively contribute to the overall performance of the project.

In the current research manuscript, a simulation study is performed to measure the ability of four basic sensitivity metrics to dynamically improve the time performance during project execution. We measure the use of sensitivity information to guide the corrective action decision-making process to improve a project's time performance, while varying the degree of management's attention. A large number of simulation runs are performed on a large set of fictitious project networks generated under a controlled design.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Since the introduction of the well-known PERT in the late 50s in project scheduling, research on measuring a project's sensitivity has received increasing attention from both practitioners and academics. Motivated by the common knowledge that the traditional critical path analysis gives an optimistic project duration estimate (see, e.g. Klingel [1], Schonberger [2], Gutierrez and Kouvelis [3] and many others), measuring the project sensitivity and the ability to forecast the final duration during its execution have become key parameters for project managers. In the remainder of this paper, measuring the duration sensitivity of project activities is referred to as Schedule Risk Analysis (SRA, Hulett [4]). In the literature, many often diverse research outputs have been presented on duration sensitivity measures for project activities, all highlighting advantages and/or disadvantages and illustrating shortcomings of the various measures.

Next to these SRA studies, earned value management (EVM) is another area where project duration forecasting has received a lot of attention, both from an academic and a more business-oriented point of view. Although the vast majority of EVM research has been focused on cost performance, a renewed attention to time


performance was recently started with the paper written by Lipke [5], who launched the earned schedule method as an alternative and improved method to measure the time performance of a project. The overview article of Vandevoorde and Vanhoucke [6] gives a summary of time measurement and forecasting methods using earned value management.

The motivation of the current paper lies in a research paper of Vanhoucke and Vandevoorde [7], who investigated the reliability of three EVM methods to forecast a project's final duration. The fundamental assumption of this EVM study is that EVM provides project performance indicators that act as early warning signals to detect problems and/or opportunities in an easy and efficient way. To that purpose, performance measurement is done at the cost account level, or even at higher work breakdown structure (WBS) levels, rather than at the individual activity level.1 A project manager can decide to take corrective actions when a general project performance status drops below a critical threshold (e.g. the schedule performance index, SPI < 0.75). If corrective actions are necessary, the project manager needs to drill down into lower WBS levels (up to the individual activity level) and take the appropriate corrective actions on those activities which are in trouble (especially those tasks which are on the critical path). We refer to this approach as the project based tracking process to express that general project

1 It is said that EVM is no simple replacement of the activity-based critical-path based scheduling tools (CPM) [8].


based performance measures are used to trigger the activity-based corrective action decision-making process.

Obviously, the reliability of the duration performance measures and forecasts at the project level is crucial during project tracking and affects the adequacy of the corrective action decision-making process. Reliable forecasts allow the project manager to restrict the focus to simple project-based sanity checks to trigger the often time-consuming critical-path based scheduling and tracking process. However, it is observed by Vanhoucke and Vandevoorde [7] that all EVM time prediction methods only provide reliable estimates when the project contains a lot of activities in series. Hence, when project-based EVM forecasts are unreliable (in case the project contains a lot of activities in parallel), a more activity-based project tracking approach (i.e. at lower WBS levels than usually done for the earned value based performance measures) is required, which demands a stronger control of a larger subset of activities and a continuous critical-path based tracking decision process. We refer to this approach as the activity based tracking process to express that individual activity information is required to effectively manage the project tracking and corrective action decision-making process.

The purpose of this paper is to perform an SRA reliability study, similar to the EVM reliability study of Vanhoucke and Vandevoorde [7], for project sensitivity measures in order to investigate whether activity sensitivity information can be used as a reliable tool to provide time estimates that efficiently support corrective actions. The basic question is whether an SRA offers an alternative tool to EVM performance measurement when the latter fails to provide reliable estimates. More precisely, the paper will investigate whether the activity sensitivity measures can be used to measure the duration sensitivity of a project in order to set up an activity-based dynamic project control tool that triggers corrective actions without having to monitor each individual activity throughout the project's progress.

The aim and contribution of this paper are twofold. First, the research briefly reviews basic as well as more advanced sensitivity measures used throughout the literature. Second, the relation between project duration sensitivity and the ability and accuracy of forecasting a project's final duration is investigated, taking the topological project network structure into account. To that purpose, a simulation study is performed to measure the usefulness of sensitivity measures during project tracking. The results are obtained by simulation runs on a wide variety of artificial projects generated under various settings. In doing so, we aim at providing insight to project managers into where and when activity-based sensitivity measures are useful as a dynamic tool to support the corrective action decision-making process during project tracking, as an alternative to the project based approach taken by EVM.

The outline of this paper is as follows. Section 2 reviews the most important research efforts on duration sensitivity measures in project scheduling and gives a basic overview of their advantages and disadvantages. In Section 3, the setting of the simulation study is presented, in which the relation between sensitivity measures and project network topology is outlined. It presents the setting of the test design and the fictitious project data set, and illustrates the research topic with a project example and a small empirical study. Section 4 presents results for simulation runs in a dynamic corrective action project control environment. Section 5 gives overall conclusions.

2. Literature overview

This section provides a general summary of the research on activity and project sensitivity in project scheduling. It gives an overview of various activity-based sensitivity measures and their corresponding advantages and disadvantages.

2.1. Activity-based sensitivity measures

The literature on project sensitivity measures is wide and diverse and focuses on the measurement of the relative activity sensitivity in relation to the project duration. Typically, many papers and handbooks mention the idea of using Monte Carlo simulations as the most accessible technique to estimate a project's completion time distribution. These research papers often present simple metrics to measure a project's sensitivity under various settings. Williams [9] reviews three important sensitivity measures to measure the criticality and/or sensitivity of project activities. Elmaghraby [10] critically reviews these sensitivity measures and extends the domain to more fundamental sensitivity measures. In Section 2.1.1 of the current manuscript, four basic sensitivity measures are briefly reviewed; they will be used in the simulation study of this research paper. In Section 2.1.2, the more advanced studies discussed by Elmaghraby [10] are briefly reviewed, but they will not be used further in this study. Motivated by the heavy computational burden of simulation techniques, various researchers have published analytical and/or approximation methods as a worthy alternative. An overview can be found in the study of Yao and Chu [11]; these methods will not be discussed in the current research paper.

2.1.1. Basic sensitivity measures

In this section, the three activity-based sensitivity measures discussed in Williams [9] are briefly reviewed. A fourth sensitivity measure, based on sensitivity issues published in the PMBOK [12], is added in the simulation experiment of this paper. The following notation will be used throughout this paper:

nrs: number of Monte Carlo simulation runs (index k)
d_i: duration of activity i (superscript k refers to the d_i of simulation run k)
tf_i: total float of activity i (superscript k refers to the tf_i of simulation run k)
Cmax: total project duration, often referred to as the project makespan (superscript k refers to the Cmax of simulation run k)

2.1.1.1. Criticality index CI. The criticality index measures the probability that an activity lies on the critical path. It is a simple measure obtained by Monte Carlo simulations and is expressed as a percentage denoting the likelihood of being critical. The concept was introduced by Martin [13] and further extended by various authors (see, e.g. Van Slyke [14], Dodin and Elmaghraby [15], and Fatemi Ghomi and Teimouri [16], amongst others). The CI of activity i can be given as follows:

CI = Pr(tf_i = 0).  (1)

Although the criticality index has been used throughout various studies and implemented in many software packages, the CI often fails to adequately measure the project risk. The main drawback of the CI is that its focus is restricted to measuring probability, which does not necessarily mean that high-CI activities have a high impact on the total project duration (e.g. think of a very short activity that always lies on the critical path, but with a low impact on the total project duration due to its negligible duration).

A simulation-based estimator of CI is given by

^CI = (1/nrs) Σ_{k=1}^{nrs} 1(tf_i^k = 0),  (2)

where, in general, the indicator function 1(·) is defined by

1(G) = 1 if G is true, and 0 if G is false.  (3)
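As an illustration, the estimator of Eq. (2) can be sketched in a few lines of Python. This is not the author's code: it assumes a purely parallel network, in which the makespan is the maximum of all activity durations and each total float is simply Cmax minus the activity's own duration; the sampling functions are hypothetical inputs.

```python
import random

def estimate_ci(sample_funcs, nrs=20000, seed=42):
    """Monte Carlo estimator of Eq. (2) for activities in parallel:
    Cmax = max of all durations and tf_i = Cmax - d_i, so an activity
    is critical (tf_i = 0) exactly when it realizes the makespan."""
    rng = random.Random(seed)
    n = len(sample_funcs)
    critical = [0] * n
    for _ in range(nrs):
        d = [f(rng) for f in sample_funcs]
        cmax = max(d)                # project makespan for this run
        for i in range(n):
            if cmax - d[i] == 0:     # indicator 1(tf_i^k = 0)
                critical[i] += 1
    return [c / nrs for c in critical]

# Hypothetical input: activity 1 lasts 0 (99%) or 100 (1%);
# activity 2 lasts 10 or 20, each with probability 50%.
ci = estimate_ci([
    lambda rng: 0 if rng.random() < 0.99 else 100,
    lambda rng: 10 if rng.random() < 0.5 else 20,
])
```

With these inputs the estimator reproduces the probability-only behaviour criticized above: the high-impact activity 1 gets CI close to 1%, while activity 2 gets CI close to 99%.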

2.1.1.2. Significance index SI [9]. In order to better reflect the relative importance between project activities, the sensitivity index of activity i has been formulated as follows:

SI = E[ (d_i / (d_i + tf_i)) · (Cmax / E(Cmax)) ],  (4)

where E(x) denotes the expected value of x. The SI has been defined as a partial answer to the criticism on the CI. Rather than expressing an activity's criticality by the probability concept, the SI aims at exposing the significance of individual activities on the total project duration. In some examples, the SI seems to provide more acceptable information on the relative importance of activities. Despite this, there are still examples where counter-intuitive results are reported.

A simulation-based estimator of SI is given by

^SI = (1/nrs) Σ_{k=1}^{nrs} [ (d_i^k / (d_i^k + tf_i^k)) · (Cmax^k / C̄max) ],  (5)

where C̄max denotes the sample mean of Cmax over the nrs simulation runs.
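A minimal sketch of the estimator in Eq. (5), under the assumption of a fully serial two-activity project (so every total float tf_i is zero); the durations are hypothetical (activity 1 fixed at 100, activity 2 at 10 or 20 with equal probability):

```python
import random

def estimate_si(nrs=10000, seed=7):
    """Eq. (5) for a serial network: activity 1 always lasts 100,
    activity 2 lasts 10 or 20 (50/50), and Cmax = d1 + d2."""
    rng = random.Random(seed)
    runs = []
    for _ in range(nrs):
        d1, d2 = 100, (10 if rng.random() < 0.5 else 20)
        runs.append((d1, d2, d1 + d2))     # (d1^k, d2^k, Cmax^k)
    cbar = sum(c for _, _, c in runs) / nrs    # sample mean of Cmax
    # tf_i = 0 for every activity in a serial network
    si1 = sum((d1 / (d1 + 0)) * (c / cbar) for d1, _, c in runs) / nrs
    si2 = sum((d2 / (d2 + 0)) * (c / cbar) for _, d2, c in runs) / nrs
    return si1, si2

si1, si2 = estimate_si()
```

Both estimates come out equal to 1, which is exactly the SI anomaly discussed later: in a serial network the index cannot distinguish a dominant activity from a minor one.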

2.1.1.3. Cruciality index CRI [9]. A third measure to indicate the duration sensitivity of individual activities on the total project duration is given by the correlation between the activity duration and the total project duration. This measure reflects the relative importance of an activity in a more intuitive way and measures the portion of total project duration uncertainty that can be explained by the uncertainty of an activity.

This measure can be calculated using Pearson's product-moment:

(a) The cruciality index based on the Pearson product-moment correlation coefficient between the duration d_i of activity i and the overall project completion time Cmax is given by

CRI(r) = Corr(d_i, Cmax) = Cov(d_i, Cmax) / √(Var(d_i) Var(Cmax)).  (6)

A simulation-based estimator of Pearson's product-moment for activity i can be calculated as follows:

^CRI(r) = [ Σ_{k=1}^{nrs} (d_i^k − d̄_i)(Cmax^k − C̄max) ] / [ (nrs − 1) s_{d_i} s_{Cmax} ],  (7)

with s_{d_i} and s_{Cmax} the sample standard deviations of the variables d_i and Cmax, given by

s_{d_i} = √( Σ_{k=1}^{nrs} (d_i^k − d̄_i)² / (nrs − 1) )  and  s_{Cmax} = √( Σ_{k=1}^{nrs} (Cmax^k − C̄max)² / (nrs − 1) ).  (8)
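The estimator of Eqs. (7) and (8) is a plain sample Pearson correlation; a stdlib-only sketch follows, with hypothetical run data for a serial project in which the makespan is the activity's duration plus an independent remainder:

```python
import random
from math import sqrt

def cri_pearson(d_runs, cmax_runs):
    """Eqs. (7)-(8): sample correlation between an activity's duration
    d_i^k and the makespan Cmax^k over nrs simulation runs."""
    nrs = len(d_runs)
    dbar = sum(d_runs) / nrs
    cbar = sum(cmax_runs) / nrs
    s_d = sqrt(sum((d - dbar) ** 2 for d in d_runs) / (nrs - 1))
    s_c = sqrt(sum((c - cbar) ** 2 for c in cmax_runs) / (nrs - 1))
    num = sum((d - dbar) * (c - cbar) for d, c in zip(d_runs, cmax_runs))
    return num / ((nrs - 1) * s_d * s_c)

rng = random.Random(1)
# hypothetical serial project: Cmax = d_i plus an independent remainder
d_runs = [rng.uniform(8, 12) for _ in range(5000)]
cmax_runs = [d + rng.uniform(40, 60) for d in d_runs]
cri_r = cri_pearson(d_runs, cmax_runs)
```

Even though this activity is critical in every run, its CRI(r) stays well below 1, illustrating the variance-share interpretation of the index: most of the makespan uncertainty comes from the remainder of the project.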

This correlation metric is a measure of the degree of linear relationship between two variables. However, the relation between an activity duration and the total project duration often follows a non-linear relation. Therefore, Cho and Yum [17] propose to use non-linear correlation measures such as the Spearman rank correlation coefficient or Kendall's tau measure. These non-linear measures can be calculated as follows:

(b) The Spearman rank correlation assumes that the values of the variables are converted to ranks, after which the differences between the ranks of each observation on the two variables are calculated. This cruciality index can be given by

CRI(ρ) = E[^CRI(ρ)],  (9)

where ^CRI(ρ) is a simulation-based estimator of CRI(ρ) given by

^CRI(ρ) = 1 − 6 Σ_{k=1}^{nrs} δ_k² / (nrs(nrs² − 1)),  (10)

where δ_k is the difference between the ranking values of d_i and Cmax during simulation run k, i.e. δ_k = rank(d_i^k) − rank(Cmax^k) for k = 1, …, nrs.

(c) The cruciality index based on Kendall's tau rank correlation index measures the degree of correspondence between two rankings and can be given by

CRI(τ) = Pr{(d_i^ℓ − d_i^k)(Cmax^ℓ − Cmax^k) > 0} − Pr{(d_i^ℓ − d_i^k)(Cmax^ℓ − Cmax^k) < 0}.  (11)

A simulation-based estimator is given as follows:

^CRI(τ) = [ 4/(nrs(nrs − 1)) Σ_{k=1}^{nrs−1} Σ_{ℓ=k+1}^{nrs} 1{(d_i^ℓ − d_i^k)(Cmax^ℓ − Cmax^k) > 0} ] − 1.  (12)

2.1.1.4. Schedule sensitivity index SSI [12]. The project management body of knowledge (PMBOK) mentions quantitative risk analysis as one of many risk assessment methods, and proposes to combine the activity duration and project duration sample standard deviations (s_{d_i} and s_{Cmax}) with the criticality index. In this paper, this measure will be referred to as the schedule sensitivity index. The measure is equal to

SSI = √( Var(d_i) / Var(Cmax) ) · CI,  (13)

and its corresponding simulation-based estimator is given by

^SSI = (s_{d_i} · ^CI) / s_{Cmax}.  (14)
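A sketch of the estimator in Eq. (14), with hypothetical run data for an activity that is critical in every run and drives the makespan one-for-one:

```python
from math import sqrt

def sample_std(xs):
    """Sample standard deviation with the (nrs - 1) denominator of Eq. (8)."""
    n = len(xs)
    m = sum(xs) / n
    return sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

def ssi_hat(d_runs, cmax_runs, tf_runs):
    """Eq. (14): (s_di * CI_hat) / s_Cmax, where CI_hat is the fraction
    of runs in which the activity's total float is zero."""
    ci_hat = sum(1 for tf in tf_runs if tf == 0) / len(tf_runs)
    return sample_std(d_runs) * ci_hat / sample_std(cmax_runs)

# toy data: the activity is always critical and Cmax = d_i + 50,
# so s_di = s_Cmax and the SSI reduces to CI_hat = 1
d_runs = [10, 12, 14, 16]
ssi = ssi_hat(d_runs, [d + 50 for d in d_runs], [0, 0, 0, 0])
```

Because the shift by 50 leaves the standard deviation unchanged, the variance ratio is 1 and the SSI collapses to the criticality index, which is the intended combination of probability and impact.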

2.1.2. Advanced sensitivity research

Although the simulation study of this paper is restricted to the use of the four sensitivity measures presented in the previous subsection, a short critical review is given here as a summary of various sources in the literature. A detailed study of these sensitivity extensions is outside the scope of this manuscript, and the reader is referred to the different sources mentioned in this section.

Williams [9] shows illustrative examples for three sensitivity measures, CI, SI, and CRI, and mentions weaknesses for each metric. For each sensitivity metric, anomalies can occur which might lead to counter-intuitive results. Numerous extensions have been presented in the literature that (partly) address these shortcomings and/or anomalies. Tavares et al. [18] present a surrogate indicator of criticality using a regression model, in order to offer a better alternative to the poor performance of the criticality index in predicting the impact of an activity delay on the total project duration. Kuchta [19] presents an alternative criticality index based on network information; however, no computational experiments have been performed to show the improvement of the new measure. In Elmaghraby [10], a short overview is given of the advantages and disadvantages of the three sensitivity measures discussed in Williams [9]. He conjectures that the relative importance of project activities should

Fig. 2. A serial two non-dummy activity example network (Source: Williams [9]). [Figure shows activity 1 with duration 100 (probability 100%) followed by activity 2 with duration 10 (50%) or 20 (50%).]

be given by considering a combined version of these three sensitivity measures, and reviews the more advanced studies that give partial answers to the mentioned shortcomings. More precisely, the paper reviews the research efforts related to the sensitivity of the mean and variance of a project's total duration due to changes in the mean and variance of individual activities. Cho and Yum [17] propose an uncertainty importance measure to quantify the effect of the variability in an activity duration on the variability of the overall project duration. Elmaghraby et al. [20] investigate the impact of changing the mean duration of an activity on the variability of the project duration. Finally, Gutierrez and Paul [21] present an analytical treatment of the effect of activity variance on the expected project duration.

The use of the criticality index CI has been criticized throughout the literature, since it is based on probabilistic considerations which are very far from management's view of the project. Moreover, the metric only considers probabilities, while it is generally known that the risk of an activity depends on a combination of probability and impact. The latter is completely ignored in the CI value, as illustrated in Fig. 1. The figure shows a parallel project network (the non-numbered nodes are used to denote the start and end dummy activities) with the possible durations and the corresponding probabilities denoted above each node. Obviously, activity 1 has the highest potential impact on the project duration, since it might lead to a project with a total duration of 100 time units. However, the CI of activity 1 is equal to 1%, which is much lower than the CI = 99% of activity 2. Consequently, the values of the sensitivity measures are not always intuitively clear, and they might lead to strange and counter-intuitive conclusions.

Although the SI and CRI measures have been proposed to reflect the relative importance of an activity in a better way than the CI, they can both produce counter-intuitive results, as illustrated by means of the example network of Fig. 2. Clearly, activity 1 has the largest impact on the project duration and E(RD) = 115, with RD the real project duration. However, the SI values are equal for both activities, and hence no distinction is made between the sensitivity of the two activities. Indeed, the SI is equal to

100% · (100/100) · (115/115) = 1

for activity 1, and to

50% · (10/10) · (110/115) + 50% · (20/20) · (120/115) = 1

for activity 2. Even worse, the CRI values show an opposite risk profile for the two activities. The CRI measure shows only the effect on the risk of the total project; consequently, if the duration of an activity is deterministic (or stochastic but with very low variance), then its CRI is zero (or close to zero) even if the activity is always on the critical path. The CRI value for activity 1 is equal to 0% (no variation), while it is equal to

[(10 − 15) · (110 − 115) + (20 − 15) · (120 − 115)] / (2 · 5 · 5) = 1

for activity 2.

Fig. 1. A parallel two non-dummy activity example network (Source: Williams [9]). [Figure shows activity 1 with duration 0 (probability 99%) or 100 (1%), in parallel with activity 2 with duration 10 (50%) or 20 (50%).]

3. A simulation study

This section presents the test design for the simulation study to measure the adequacy of the four sensitivity measures of Section 2.1.1 during the project tracking process. Section 3.1 discusses the generation of the project networks used in the study and presents the various simulation scenarios. In Section 3.2, an artificial example illustrates how these sensitivity measures are dynamically used in a project control environment.

3.1. Test design

The project test data set consists of 4100 fictitious project networks that have been generated by the project network generator RanGen [22,23], which has been used in other project scheduling and control studies (see, e.g. the study of Vanhoucke and Vandevoorde [7] mentioned in the introduction of this paper).

The fictitious project networks have been generated under a controlled design by carefully varying the serial/parallel (SP) indicator.2 This indicator offers an alternative to the well-known order strength (OS) indicator [27] and measures how close a project network lies to a completely serial or parallel network. The indicator is based on the maximal progressive level concept of Elmaghraby [28], which defines the maximum number of activities lying on a single path of the project. Consequently, the indicator SP is defined as (m − 1)/(n − 1), with m the maximal progressive level of the project network and n the number of activities in the project. Obviously, for a network with all activities in series, m = n and SP = 1. When all activities are in parallel, m = 1 and, consequently, SP = 0.
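The definition SP = (m − 1)/(n − 1) can be sketched as follows, with the maximal progressive level m computed as the longest activity chain in the precedence graph. The predecessor-dictionary representation and the helper name are assumptions of this sketch, not the paper's code:

```python
def sp_indicator(n, preds):
    """SP = (m - 1)/(n - 1), with m the maximal progressive level:
    the maximum number of activities lying on a single path.
    `preds` maps each activity (numbered 1..n) to its predecessor list;
    dummy start/end activities are assumed to be excluded."""
    memo = {}

    def level(i):
        # progressive level of i = 1 + longest chain among its predecessors
        if i not in memo:
            memo[i] = 1 + max((level(p) for p in preds.get(i, [])), default=0)
        return memo[i]

    m = max(level(i) for i in range(1, n + 1))
    return (m - 1) / (n - 1)
```

A fully serial four-activity network gives SP = 1, a fully parallel one gives SP = 0, and mixed topologies fall in between, matching the definition above.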

The reason why this indicator has been chosen to control the topological structure of a project network lies in the aforementioned relation between the EVM time performance reliability and this SP indicator. Indeed, projects with high SP values (i.e. more serial networks) provide reliable time estimates when using EVM, while projects with lower SP values (i.e. projects with more parallel activities) lead to poor time-related EVM early warning signals. The simulation runs of Section 4 will test whether the activity-based sensitivity measures are a useful alternative to the poor project-based forecasting quality for projects with low SP values, and are an adequate tool for an activity based project tracking approach. All simulations are extended to a dynamic corrective action decision making process which uses the status of highly sensitive activities as a trigger to take adequate and effective corrective actions.

Project execution is simulated for each project network using Monte Carlo simulation runs under various scenarios. The use of Monte Carlo has been discussed in various sources from the literature

2 Note that the SP indicator is called the I2 indicator in the theoretical paper of Vanhoucke et al. [24]. However, in most popular oriented project simulation studies, the indicator is known as the SP indicator (see Vanhoucke and Vandevoorde [7,25,26]).


Table 1. Five simulation scenarios to perform a schedule risk analysis. (Columns 2–11 give the simulated duration of each activity; RD is the real project duration.)

Scenario    2     3     4     5     6     7     8     9     10    11    RD
1           6     11    1     4     3     2     8     8     5     4     18
2           7     14    1     4     5     1     8     11    5     5     22
3           6     16    1     7     8     1     11    14    6     5     27
4           4     14    1     7     5     1     9     9     5     3     20
5           6     17    2     6     9     2     13    12    4     4     26
Average     5.8   14.4  1.2   5.6   6     1.4   9.8   10.8  5     4.2   22.6
St. dev.    0.98  2.06  0.40  1.36  2.19  0.49  1.94  2.14  0.63  0.75  3.44

Fig. 3. A fictitious example project network. [Figure shows 12 activities; dummy start and end activities 1 and 12 have zero duration, and activities 2–11 have baseline durations 4, 9, 1, 4, 5, 1, 7, 8, 3 and 3, respectively.]

Table 2. The sensitivity measures for all activities obtained through a schedule risk analysis.

Activity   2     3     4     5     6     7     8     9     10    11
CI         0.80  0.00  0.20  0.80  0.20  0.20  0.20  0.80  0.00  0.00
SI         0.94  0.82  0.36  0.94  0.61  0.38  0.72  0.97  0.62  0.30
CRI(r)     0.27  0.93  0.49  0.52  0.96  0.14  0.83  0.97  0.09  0.50
CRI(ρ)     0.30  0.88  0.50  0.50  0.88  0.13  0.73  1.00  0.30  0.60
CRI(τ)     0.20  0.60  0.40  0.20  0.60  0.60  0.40  1.00  0.20  0.20
SSI        0.23  0.00  0.02  0.32  0.13  0.03  0.11  0.50  0.00  0.00


(see Williams [29] for an overview of network simulation studies). In the current study, we make use of the earned value simulation model of Vanhoucke and Vandevoorde [7]. We simulate random variation in activity durations using the generalized beta density, which is widely used to model activity durations in construction project simulations (see, e.g. AbouRizk et al. [30]). The generalized beta distribution is a continuous probability distribution f(x) with a lower limit a, an upper limit b, and shape parameters θ1 and θ2, with Γ(·) referring to the gamma function:

f(x) = [ Γ(θ1 + θ2) / ( Γ(θ1) Γ(θ2) (b − a)^{θ1+θ2−1} ) ] (x − a)^{θ1−1} (b − x)^{θ2−1} for x ∈ [a, b].  (15)

Since the beta distribution is not always easily understood, or since its parameters are not easily estimated, variation in activity durations is often simulated using the much simpler triangular distribution [9], where practitioners often base the initial input model on subjective estimates for the minimum (a), the mode (c) (most likely value) and the maximum (b) of the distribution of the activity duration. Although it has been mentioned in the literature that the triangular distribution can be used as a proxy for the beta distribution in risk analysis (see, e.g. Johnson [31]), its arbitrary use when no empirical data are available should be treated with care (see, e.g. Kuhl et al. [32]). The pragmatic choice of triangular distributions might lead to biased results and should therefore be investigated within a sensitivity analysis. Studies have revealed that the choice of distributions to model empirical data should reflect the properties of the data. As an example, AbouRizk et al. [30] defend the importance of appropriate input models and state that their inappropriate use is suspect and should be dealt with carefully.

In this simulation study, the choice of the generalized beta distribution has been based on the comments made in Kuhl et al. [32]. These authors argue that the generalized beta distribution is generally a better choice than the triangular distribution in cases c − a ≪ b − c or c − a ≫ b − c, that is, in situations in which there is a pronounced left- or right-hand tail on the distribution of the stochastic variable (the activity duration).

In the computational experiment of Section 4, it is assumed that estimates for the lower limit a, the upper limit b and the mode c are given. While the lower and upper limits a and b are parameters of the generalized beta density, we are also given the mode c of the distribution of X. More precisely, in terms of the auxiliary quantity

d = (b − c)/(c − a),  (16)

the beta density with shape parameters θ1 and θ2 given by

θ1 = (d² + 3d + 4)/(d² + 1)  and  θ2 = (4d² + 3d + 1)/(d² + 1)  (17)

will have its mode at the value x = c to an excellent degree of approximation [32]. Moreover, this can easily be verified by noting that for any beta density with θ1 > 1 and θ2 > 1, the unique mode m of the density is given exactly by

m = ((θ1 − 1)b + (θ2 − 1)a)/(θ1 + θ2 − 2).  (18)

The input parameters a, b, and c for the generalized beta density have been carefully selected, resulting in three test case scenarios described as follows (the activity duration d_i refers to the original baseline duration estimate for each activity i of the project):

• Scenario 1: Random activity time deviations: (a, c, b) = (0.5d_i, d_i, 1.5d_i), which might result in activity durations that are ahead of, on, or behind schedule.
• Scenario 2: Activity delays: (a, c, b) = (0.8d_i, d_i, 1.9d_i), resulting in total project duration delays.
• Scenario 3: Severe activity delays: (a, c, b) = (0.9d_i, d_i, 10d_i), resulting in huge total project duration delays.

In summary, this approach avoids generating activity durations that substantially exceed the mode estimate c with a substantial probability, as would be the case when using triangular distributions (which would lead to unrealistically high activity durations, particularly in the case of scenario 3).
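The mapping from the (a, c, b) estimates above to beta shape parameters, Eqs. (16)–(18), can be sketched as follows (function names are this sketch's own):

```python
def beta_shape(a, c, b):
    """Eqs. (16)-(17): shape parameters (theta1, theta2) for a generalized
    beta density with minimum a, mode c and maximum b (Kuhl et al. [32])."""
    d = (b - c) / (c - a)          # auxiliary quantity of Eq. (16)
    theta1 = (d * d + 3 * d + 4) / (d * d + 1)
    theta2 = (4 * d * d + 3 * d + 1) / (d * d + 1)
    return theta1, theta2

def beta_mode(a, b, theta1, theta2):
    """Eq. (18): the unique mode of a beta density with theta1, theta2 > 1."""
    return ((theta1 - 1) * b + (theta2 - 1) * a) / (theta1 + theta2 - 2)

# Scenario 2 for a baseline duration d_i = 10: (a, c, b) = (8, 10, 19)
t1, t2 = beta_shape(8, 10, 19)
```

Plugging the resulting shape parameters into Eq. (18) recovers the mode estimate c = 10, which is exactly the verification suggested in the text.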

It should be noted that the uncertainty in activity durations modeled here is a simple yet intuitive way to use the subjective judgments of the original activity duration estimates. Recently, more advanced and extended methods have been presented in the literature, such as, amongst others, the models by Hans et al. [33] for hierarchical multi-project planning with uncertainty and Huang et al. [34] for a fuzzy analytical hierarchy process method to evaluate subjective expert judgments used for project selection. Another interesting source is the paper by Durbach and Stewart [35], who have shown that simplified models (the paper has no specific focus on project management models) can often provide acceptable and fairly robust performance in decision-making models under uncertainty.

Fig. 4. Action threshold as a percentile between full and no control. [Figure shows the CRI(r) values of activities 2–11 on a 0.00–1.00 axis, with the action threshold drawn as a vertical line between full control (0th percentile) and no control (100th percentile).]

3.2. Fictitious example and empirical evidence

In order to illustrate the purpose and features of the simulation study presented in Section 4, Table 1 shows five fictitious simulated scenarios for the example project network of Fig. 3. Each scenario is characterized by a set of activity durations and a total real project duration RD. Table 2 displays the values of all sensitivity measures based on these five scenarios.

Fig. 4 illustrates how the CRI(r) sensitivity information of project activities can be used and how an action threshold can be set as a minimal threshold value of the sensitivity measure. This action threshold defines the degree of control, which can vary between no control and full control, and is shown by the vertical dotted line in the figure. All activities with a CRI(r) value higher than or equal to this line are said to be highly sensitive activities, which require attention during the tracking process and deserve corrective actions in case of delays. In the example of the figure, the action threshold has been set to 50%, such that only the most sensitive activities 3, 5, 6, 8, and 9, with a CRI(r) value higher than 0.5, need to be considered during the tracking process. In the remainder of this paper, this will be referred to as a % Control value equal to 50%. The simulation study of Section 4 investigates whether SRA can serve as a tool to steer the project tracking process using low action thresholds (i.e. low % Control) while still having positive effects of possible corrective actions on the project objective.
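The percentile-based selection of highly sensitive activities can be sketched as follows. The nearest-rank percentile convention and the helper name are assumptions of this sketch; the CRI(r) values are those of Table 2:

```python
def select_sensitive(sensitivity, pct):
    """Return the activities whose sensitivity is at or above the pct-th
    percentile threshold: pct = 0 is full control (all activities tracked),
    pct = 100 is no control (none tracked)."""
    if pct >= 100:
        return set()
    values = sorted(sensitivity.values())
    threshold = values[int(len(values) * pct / 100)]  # nearest-rank percentile
    return {i for i, v in sensitivity.items() if v >= threshold}

# CRI(r) values of activities 2-11 from Table 2
cri_r = {2: 0.27, 3: 0.93, 4: 0.49, 5: 0.52, 6: 0.96,
         7: 0.14, 8: 0.83, 9: 0.97, 10: 0.09, 11: 0.50}
tracked = select_sensitive(cri_r, 50)
```

At a % Control of 50 this selects activities 3, 5, 6, 8 and 9, matching the action-threshold example of Fig. 4.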

In order to test the validity of the fictitious project data and the setting of the simulation approach, a research project has been conducted based on empirical data from various consultancy projects gathered during the period November 2007–January 2009, collecting and analyzing data of more than 30 short-term projects.3 Each project network has been analyzed in the software tool ProTrack,4 which allows the calculation of the SP indicator and the replication of its real project progress in combination with EVM project tracking and SRA. Despite the limited empirical data, the analysis showed that the SP indicator range between 0 and 1 occurs in practice, since the SP values of these projects vary between 0.12 and 0.78. Due to the limited empirical data and the often subjective approach taken to estimate the activity duration

3 Parts of these empirical data have been used in the schedule adherence

study of Vanhoucke [36] (seven projects) or are updates of projects investigated

prior to this research period used during the forecast accuracy summary study of

Vandevoorde and Vanhoucke [6] (three projects).4 ProTrack is a software tool that is able to automatically calculate the network

topology (SP indicator) and links it to EVM forecasting accuracy and SRA

calculations, see www.protrack.be.

distributions, a more profound experiment is performed in thenext section based on the 4100 fictitious project networks under astrict controlled design.

4. Simulation results

4.1. Corrective action simulation approach

The design of the simulation study is outlined in Fig. 5, split up into input parameters, simulation details, and output measures. The first three steps (project data, run simulation, and measure activity sensitivity) are the obvious steps used in a traditional schedule risk analysis study and consist of various simulation runs for all project networks in order to obtain a value for all sensitivity measures as discussed in Section 2.1.1.

The values of the sensitivity measures for all activities are used as action thresholds in the next simulation run (run simulation with corrective actions). More precisely, a subset of highly sensitive activities will be selected during project progress to measure their current time performance from the moment an action threshold has been exceeded. The action threshold is set between the minimal and maximal sensitivity value (e.g. the average sensitivity) of all activities obtained from the first simulation run. In this paper, we report results for an action threshold equal to predefined percentiles of all sensitivity values of the project activities, but robustness checks have been done for higher (closer to the maximum) and lower (closer to the minimum) action threshold values, showing no significant or relevant differences in conclusions. The level of the action threshold defines the effort a project manager puts into the project tracking phase, as measured by the % Control output measure discussed later. The specific corrective action is taken when a selected activity shows a delay, and consists of a reduction of the activity delay to half of its original value. Other similar actions have been tested as robustness checks (see Section 4.2.3), and have shown no relevant or significant differences.
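The corrective action rule described above can be sketched as follows; the function names and the simple percentile computation are simplifications introduced here for illustration, not the paper's exact implementation.

```python
def run_with_corrective_actions(baseline, sample_duration, sensitivity, percentile):
    """One simulation run with corrective actions: when a monitored (highly
    sensitive) activity is delayed, its delay is cut in half.
    `percentile` (0-100) of the sensitivity values acts as the action threshold."""
    ordered = sorted(sensitivity.values())
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    threshold = ordered[idx]
    real_durations = {}
    for act, base in baseline.items():
        duration = sample_duration(act)
        delay = duration - base
        if delay > 0 and sensitivity[act] >= threshold:
            duration = base + delay / 2.0   # the corrective action: halve the delay
        real_durations[act] = duration
    return real_durations

# Toy example: every activity comes in 2 days late; only the activities at or
# above the 50th-percentile sensitivity value are monitored and corrected.
baseline = {"A": 10.0, "B": 10.0, "C": 10.0}
sens = {"A": 0.9, "B": 0.2, "C": 0.6}
real = run_with_corrective_actions(baseline, lambda a: 12.0, sens, 50)
```

In this toy run, activities A and C are corrected (real duration 11.0 instead of 12.0), while the insensitive activity B keeps its full delay.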

The three output measures are as follows (each variable has an extra superscript yes/no to refer to the simulation run with or without the corrective action decision making):

[Fig. 5. The simulation approach with corrective actions: input (project data, action threshold), simulation run (run simulation, measure activity sensitivity, run simulation with corrective actions, measure improvement), and output (% Control, unit contribution, total contribution).]

• % Control (%C): percentage of activities in the project network that have been controlled during project tracking, from 0% (no control) to 100% (full control). Consequently, the measure defines the degree of management's attention as the effort the project manager puts into controlling and measuring the performance of the project progress. The % Control and the corresponding corrective actions are triggered by the action threshold, which is set at a value between the minimal and maximal sensitivity measure value over all activities obtained through the first Monte Carlo simulation runs mentioned earlier.

• Unit contribution (UC): number of time units (e.g. days) decrease in project duration divided by the total number of time units decrease of all controlled activities as a result of the corrective actions. Hence, this measure calculates the average return of all actions taken on the activities on the total project duration as follows:

\[ \mathrm{UC} = \frac{1}{nrs}\sum_{k=1}^{nrs}\left(\frac{C_{\max}^{k,\mathrm{no}} - C_{\max}^{k,\mathrm{yes}}}{\sum_i \left(d_i^{k,\mathrm{no}} - d_i^{k,\mathrm{yes}}\right)}\right)\times 100. \]

• Total contribution (TC): percentage decrease in project duration. This measure calculates the relative contribution of the corrective actions on the total project duration as follows:

\[ \mathrm{TC} = \frac{1}{nrs}\sum_{k=1}^{nrs}\left(\frac{C_{\max}^{k,\mathrm{no}} - C_{\max}^{k,\mathrm{yes}}}{C_{\max}^{k,\mathrm{no}}}\right)\times 100. \]

5 As an example, the 83rd percentile of all sensitivity measure values is equal to that value such that approximately 17% of the project activities have a value equal to or higher than this value. For a 30-activity project network, this corresponds to an absolute action threshold of 5 activities.
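For clarity, the two contribution measures can be computed from paired runs (with and without actions) as in the sketch below; the data structures are hypothetical, and the sketch assumes at least one controlled activity was actually shortened in each run (otherwise the UC denominator would be zero).

```python
def contributions(runs):
    """Compute (UC, TC) from a list of runs, each run a tuple
    (C_no, C_yes, d_no, d_yes): project durations and per-activity duration
    dicts without/with corrective actions."""
    nrs = len(runs)
    uc = sum((c_no - c_yes) / sum(d_no[i] - d_yes[i] for i in d_no)
             for c_no, c_yes, d_no, d_yes in runs) / nrs * 100
    tc = sum((c_no - c_yes) / c_no for c_no, c_yes, _, _ in runs) / nrs * 100
    return uc, tc

# One illustrative run: actions shorten activity 1 by 2 days and the
# project by 2 days, so UC = 100% and TC = 10%.
uc, tc = contributions([(20.0, 18.0, {1: 10.0, 2: 10.0}, {1: 8.0, 2: 10.0})])
```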

In order to obtain meaningful comparisons of the effectiveness of the sensitivity measures CI, SI, CRI(r), CRI(ρ), CRI(τ), and SSI for controlling project duration, we report both average values as well as error bars corresponding to 95% confidence intervals for the three output measures. To that purpose, a meta-experiment is set up consisting of 30 independent experiments, where each experiment consists of nrs = 100 independent simulation runs on each project of the data set.
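The error bars can be obtained from the 30 per-experiment averages; the sketch below uses a normal approximation (z = 1.96) rather than the paper's exact (unstated) procedure, so it is illustrative only.

```python
import math
import statistics

def ci95(experiment_means, z=1.96):
    """95% confidence interval (normal approximation) for the grand mean
    of the per-experiment averages."""
    n = len(experiment_means)
    mean = statistics.mean(experiment_means)
    half_width = z * statistics.stdev(experiment_means) / math.sqrt(n)
    return mean - half_width, mean + half_width

# 30 hypothetical experiment averages for one output measure.
low, high = ci95([9.0, 11.0] * 15)
```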

4.2. Computational results

In order to investigate the effect of the % Control (i.e. the effort) on the quality of the tracking process for each sensitivity measure, Table 3 has been constructed for three fixed values of the % Control variable. The % Control is kept constant, to the best possible extent, by using percentiles of the sensitivity measures as action thresholds. More precisely, the 83rd, 73rd, and 63rd percentiles are used as action thresholds, such that the % Control variable is set such that approximately 17%, 27%, and 37% of all activities are said to be highly sensitive, respectively.5 In doing so, the contribution of the different sensitivity measures can be unambiguously compared under a fixed project tracking effort.

The table shows results under three different action threshold values for low, medium, and high SP value networks (top, middle, and bottom rows, respectively). Each output measure (%C, UC, and TC) contains three rows, corresponding to the % Control input values representing action thresholds equal to the 63rd, 73rd, and 83rd percentile. Note that the simulated %C values (body of the table, rows %C) are not always exactly equal to their corresponding input values (37%, 27%, and 17% for the first, second, and third row, respectively) since it is not always possible to find a unique percentile such that exactly 37%, 27%, and 17% are above this percentile. This is the case when multiple activities have a sensitivity value that is equal to the selected percentile. In this case, all activities with a sensitivity measure value higher than the selected percentile are subject to the activity based tracking and possible corrective actions (i.e. in case of delay) and hence, the real %C value is lower than the corresponding input value. Consequently, the %C variable is controlled in the best possible way, and deviations occur more and more when the sensitivity measures lie close to each other. In the extreme, when the table reports "n.a." for a sensitivity measure, the test was not able to select different values for the percentiles (this is the case when all sensitivity measure values are equal or very close to each other).
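The tie effect described here can be seen in a small numeric sketch (hypothetical sensitivity values, and a simplified percentile rule): selecting only activities strictly above the percentile value drives the realized %C below its target.

```python
# Ten hypothetical sensitivity values with ties at the percentile cut.
values = [0.1, 0.2, 0.3, 0.4, 0.6, 0.6, 0.6, 0.6, 0.9, 1.0]
cut = sorted(values)[int(0.63 * len(values))]      # ~63rd percentile value
selected = [v for v in values if v > cut]          # strictly above the cut
realized_pct_control = 100.0 * len(selected) / len(values)
```

Here the target was 37% control, but because four activities tie at the cut value of 0.6, only 20% of the activities end up being controlled.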


Table 3. The output measures for three fixed action thresholds using percentiles (95% confidence intervals).

            CI                SI                CRI(r)            CRI(ρ)            CRI(τ)            SSI

SP = 0.2
%C   [31.9%; 32.1%]    [37.0%; 37.0%]    [37.0%; 37.0%]    [37.0%; 37.0%]    [37.0%; 37.0%]    [31.9%; 32.1%]    *
     [24.0%; 24.0%]    [27.0%; 27.0%]    [27.0%; 27.0%]    [27.0%; 27.0%]    [27.0%; 27.0%]    [26.0%; 26.0%]
     [14.0%; 14.0%]    [16.0%; 16.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]    **
UC   [68.5%; 68.9%]    [52.1%; 52.5%]    [51.2%; 51.6%]    [57.4%; 57.6%]    [54.5%; 54.9%]    [66.6%; 66.8%]
     [77.3%; 77.9%]    [68.2%; 68.6%]    [63.3%; 63.7%]    [65.7%; 66.1%]    [62.0%; 62.6%]    [73.2%; 73.6%]
     [87.2%; 88.0%]    [85.0%; 85.6%]    [79.5%; 80.1%]    [79.9%; 80.3%]    [58.3%; 59.1%]    [85.9%; 86.3%]
TC   [11.9%; 12.1%]    [12.9%; 13.1%]    [10.9%; 11.1%]    [10.9%; 11.1%]    [6.0%; 6.0%]      [12.9%; 13.1%]    *
     [10.9%; 11.1%]    [11.9%; 12.1%]    [9.9%; 10.1%]     [9.9%; 10.1%]     [5.0%; 5.0%]      [12.0%; 12.0%]
     [6.0%; 6.0%]      [7.9%; 8.1%]      [9.0%; 9.0%]      [8.0%; 8.0%]      [2.0%; 2.0%]      [10.0%; 10.0%]    **

SP = 0.5
%C   [35.4%; 35.4%]    [32.3%; 32.5%]    [37.0%; 37.0%]    [37.0%; 37.0%]    [37.0%; 37.0%]    [37.0%; 37.0%]
     [27.6%; 27.6%]    [20.8%; 21.0%]    [27.0%; 27.0%]    [27.0%; 27.0%]    [26.0%; 26.0%]    [27.0%; 27.0%]
     [15.6%; 15.6%]    [10.9%; 11.1%]    [17.0%; 17.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]
UC   [98.0%; 98.0%]    [98.0%; 98.0%]    [85.2%; 85.6%]    [87.4%; 87.8%]    [63.2%; 63.6%]    [96.6%; 97.4%]
     [100.0%; 100.0%]  [100.0%; 100.0%]  [89.9%; 90.5%]    [92.0%; 92.6%]    [58.4%; 59.0%]    [97.6%; 98.4%]
     [100.0%; 100.0%]  [100.0%; 100.0%]  [96.4%; 97.0%]    [94.0%; 94.6%]    [42.6%; 43.4%]    [98.5%; 99.5%]
TC   [3.0%; 3.0%]      [4.0%; 4.0%]      [7.9%; 8.1%]      [7.9%; 8.1%]      [3.0%; 3.0%]      [9.9%; 10.1%]
     [1.0%; 1.0%]      [2.0%; 2.0%]      [6.9%; 7.1%]      [6.9%; 7.1%]      [2.0%; 2.0%]      [8.9%; 9.1%]
     [0.0%; 0.0%]      [1.0%; 1.0%]      [4.9%; 5.1%]      [4.9%; 5.1%]      [1.0%; 1.0%]      [5.9%; 6.1%]

SP = 0.8
%C   n.a.              [17.9%; 18.1%]    [37.0%; 37.0%]    [37.0%; 37.0%]    [36.9%; 36.9%]    [37.0%; 37.0%]    ***
     n.a.              [14.0%; 14.0%]    [27.0%; 27.0%]    [27.0%; 27.0%]    [26.8%; 26.8%]    [27.0%; 27.0%]
     n.a.              [7.9%; 8.1%]      [17.0%; 17.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]    [17.0%; 17.0%]    ****
UC   n.a.              [100.0%; 100.0%]  [95.8%; 96.2%]    [96.8%; 97.2%]    [77.6%; 78.0%]    [100.0%; 100.0%]
     n.a.              [100.0%; 100.0%]  [97.8%; 98.2%]    [96.8%; 97.2%]    [76.3%; 76.7%]    [100.0%; 100.0%]
     n.a.              [100.0%; 100.0%]  [97.9%; 98.1%]    [97.9%; 98.1%]    [69.9%; 70.1%]    [100.0%; 100.0%]
TC   n.a.              [1.0%; 1.0%]      [6.9%; 7.1%]      [6.0%; 6.0%]      [2.0%; 2.0%]      [8.0%; 8.0%]      ***
     n.a.              [1.0%; 1.0%]      [4.9%; 5.1%]      [5.0%; 5.0%]      [1.0%; 1.0%]      [6.0%; 6.0%]
     n.a.              [0.0%; 0.0%]      [3.0%; 3.0%]      [3.0%; 3.0%]      [0.0%; 0.0%]      [4.0%; 4.0%]      ****

Table 4. Average and standard deviations for the three output variables.

      CI    SI (%)  CRI(r) (%)  CRI(ρ) (%)  CRI(τ) (%)  SSI (%)  RAN50 (%)
%C    n.a.  20.5    27.0        27.0        27.0        26.3     50
      n.a.  3.9     0.0         0.0         0.0         0.9      0.0
UC    n.a.  89.3    84.3        85.5        62.8        91.1     63.7
      n.a.  6.4     18.7        18.1        20.9        16.8     21.5
TC    n.a.  4.7     7.2         7.0         2.4         8.7      2.3
      n.a.  2.9     5.4         5.0         0.7         4.1      0.5


In the three remaining subsections, the results of Table 3 will be discussed along the settings of the input variables. In Section 4.2.1, the average performance of the sensitivity measures is compared with each other, and with a project tracking process based on a random selection of project activities. In Section 4.2.2, the impact of two important input variables will be discussed. The section shows the influence of the network topology on the three output variables as well as the relevance of using high threshold values during project tracking. In Section 4.2.3, results of alternative simulation runs for extended simulation settings are presented and briefly discussed to guarantee the generality of the simulation results of Table 3.

4.2.1. Comparison of sensitivity measures

Table 4 displays summary results for the three output measures for all activity networks (with low, in-between, and high SP values) and all action thresholds (63rd, 73rd, and 83rd percentiles). Since the action thresholds are set to select 17%, 27%, and 37% of the activities, the average %C is approximately equal to 27%. The output measures are compared with a random % Control approach where 50% (i.e. much more than the average 27%) of the activities have been randomly selected as control activities for which a corrective action is taken in case of delay. Each cell in the table displays the average value (top) and standard deviation (bottom). The results of the table can be summarized as follows:

First, the table clearly shows that a corrective action approach based on activity based sensitivity measures is relevant, since the results outperform the random approach. Both the unit and total contributions are significantly higher for the sensitivity measures, despite the much lower effort (on average only 27% of the activities are selected for control purposes, compared to 50% in the random approach). Consequently, the use of these sensitivity measures reduces a project manager's effort while obtaining better results compared to a random control approach.

Second, the table shows that both the unit and total contributions are relatively high for the CRI(r), CRI(ρ), and SSI measures, while they are lower for the CRI(τ) (cf. its low total contribution) and SI measures. Moreover, no results could be reported for the CI measure (n.a.). Consequently, the CRI(r), CRI(ρ), and SSI measures perform best for selecting a small subset of project activities (the size of the subset is defined by the action threshold) as highly sensitive activities for control purposes and corrective action decision making in case of problems. Table 3 shows that the average results of Table 4 can be further refined to measure the impact of the network structure and action threshold, which will be discussed in Section 4.2.2.

Finally, the results of Table 4 show that the standard deviations are relatively small. This is also confirmed by the 95% confidence intervals displayed in Table 3. For this reason, the next section will show the impact of the input variables on the average value of the three output variables %C, UC, and TC.

[Fig. 6. Partial graphical results of Table 3: four panels (low SP/high SP combined with low/high action threshold) plotting the average %C and TC values (0 to 0.4 scale) for the CI, SI, CRI, and SSI measures.]

4.2.2. Impact of the network topology and action threshold

Fig. 6 graphically displays partial results (only the average values for the %C and TC output variables) of Table 3 as follows:

• Low SP and low action threshold: SP = 0.2 and %C = 37% (rows * of Table 3).
• Low SP and high action threshold: SP = 0.2 and %C = 17% (rows ** of Table 3).
• High SP and low action threshold: SP = 0.8 and %C = 37% (rows *** of Table 3).
• High SP and high action threshold: SP = 0.8 and %C = 17% (rows **** of Table 3).

The figure is constructed to evaluate the ability of the four sensitivity measures to take timely and effective decisions (i.e. with the lowest effort possible and the highest total contribution) based on accurate information reported by the sensitivity measures, and should therefore be considered as a graphical summary of parts of Table 3. The CRI measure represents the average performance of the CRI(r) and CRI(ρ) measures, as they have been shown to perform better than the CRI(τ) measure in Section 4.2.1.

The figure clearly shows the impact of the network topology (SP) on the performance of the sensitivity measures in a project tracking environment. The results show that the total contribution is lower for projects with more serial activities (i.e. higher SP values), regardless of the action threshold. This can be explained by the observation that the sensitivity measure values are relatively high for more serial project networks while their standard deviation is relatively low. Consequently, it is hard to distinguish between insensitive and sensitive activities and hence hard to select the right subset of project activities for control purposes. Hence, it becomes more difficult to control the time and effort (% Control) a project manager puts into the tracking process leading to positive contributions when taking corrective actions. In the extreme, for a 100% serial network, all CI values are equal to one, and the standard deviation equals 0. This is the reason why no results could be reported for the CI measure for high SP values (see Tables 3 and 4).
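The degenerate behaviour of the CI can be reproduced with a tiny Monte Carlo sketch (an illustrative two-activity network, not one of the paper's instances): in a serial network every activity lies on the single path, so its CI is 1 by construction, whereas two parallel activities split the criticality between them.

```python
import random

def criticality_index_two_parallel(n_runs=2000, seed=1):
    """Monte Carlo CI estimate for two parallel activities with i.i.d.
    uniform durations: in each run, the longer activity is the critical one."""
    rng = random.Random(seed)
    critical_count = {"A": 0, "B": 0}
    for _ in range(n_runs):
        d_a, d_b = rng.uniform(5, 15), rng.uniform(5, 15)
        critical_count["A" if d_a >= d_b else "B"] += 1
    return {act: count / n_runs for act, count in critical_count.items()}

ci = criticality_index_two_parallel()   # both CI values hover around 0.5
```

For the symmetric parallel pair, both CI values are informative (about 0.5 each); in the fully serial case the same estimator would return 1 for every activity, which is exactly why the CI cannot discriminate there.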

The action threshold is another relevant input parameter in a project control environment. Obviously, high threshold values (bottom rows for each output measure) are particularly interesting, since they should lead to a small selection (i.e. low % Control) of highly sensitive activities that strongly affect the total project duration. The SSI performs best in that respect, followed by the CRI(r) and CRI(ρ) measures, when SP values are low. Even with high action threshold values the total contribution remains relatively high, denoting that a small subset of activities (i.e. leading to a less time consuming tracking approach) is responsible for a high project duration variance. When the SP values of projects increase, the CI, SI, and CRI(r) measures perform rather poorly, as they are not able to select a small subset of activities for which significant corrective actions can be taken. Consequently, high action threshold values obviously result in a small selection of activities subject to corrective actions, but this leads to only very small project duration improvements (total contributions drop to 0% or 1% for SP = 0.5 for the CI and SI). As mentioned earlier, for SP values of 0.8, the CI measure was not able to report different values for the percentiles, as all CI values were high and close to one. The SSI measure, however, shows that even with high action thresholds and serial project networks, there is still room to select a small subset of highly sensitive activities, leading to significant contributions when taking the appropriate corrective actions.

4.2.3. Robustness

Simulation experiments are often subject to subjective estimates for input variables, and results often need to be interpreted in the light of these input choices. While some of the input variables have been carefully controlled under various input settings, other parameters might seem to be arbitrarily chosen.

Both the topological structure of each project network (low, medium, and high SP values) and the percentage control (%C is set at relatively high percentile values, i.e. the 63rd, 73rd, and 83rd percentile, to assure that only a relatively small subset of the project activities needs to be controlled) have been controlled in the best possible way, and results are discussed in the previous subsections.

Table 5. Average and standard deviations for the three output variables using triangular distributions for scenario 3.

      CI    SI (%)  CRI(r) (%)  CRI(ρ) (%)  CRI(τ) (%)  SSI (%)
%C    n.a.  17.4    21.3        21.1        22.4        20.6
      n.a.  4.5     3.1         2.3         2.7         1.8
TC    n.a.  1.7     3.6         3.8         0.8         4.1
      n.a.  3.1     4.3         4.1         0.3         3.0

Three other variables of the simulation experiment, i.e. the parameters of the stochastic activity durations, the choice of the distribution of the stochastic variables, and the reduction of activity durations, have been fixed to predetermined values. For these input variable choices, alternative robustness simulation runs have been performed to test the validity of the reported results. More precisely, all simulation runs have been tested against other scenarios, as follows:

• Parameter robustness: The three scenarios have been run with other a, b, and c values for the generalized beta density function of the stochastic activity durations to model other settings of increases and decreases in the activity baseline durations and their distribution skewness. No relevant deviations compared to the results of the previous sections could be reported.
• Distribution robustness: The three scenarios with pre-defined (a, c, b) values have been run using both generalized beta distributions to model activity duration uncertainty as well as triangular distributions. Results revealed that differences between the two test runs were negligible for scenarios 1 and 2, but not for scenario 3. In scenario 3, the total contributions of corrective actions were significantly lower than for scenarios 1 and 2 when using triangular distribution functions, leading to less promising results when focusing on only a subset of project activities (see Table 5 with averages (top) and standard deviations (bottom) for the %C and TC variables). However, in this third scenario, it can be argued that the use of the triangular distribution, with its corresponding huge deviations between the real (i.e. simulated) activity durations and the baseline durations, leads to unrealistic baseline schedules and hence to relatively high values for all sensitivity measures. Consequently, it is harder to distinguish between sensitive and insensitive project activities and hence to detect which activity subset (%C) needs to be controlled during the monitoring and control phase. This is the reason why the variable %C is lower than the average of 27%.
• Throughout all experiments, corrective actions are modeled as a 50% reduction in the activity duration in case an action threshold is exceeded and the activity currently has a significant delay. Alternative simulation tests have been run with other corrective actions within reasonable ranges, such as lower or higher percentage reductions in the activity duration, reductions in proportion to the activity delay, priced time reductions within a limited total budget (using randomly generated activity costs as done in Vanhoucke and Vandevoorde [7]), etc. While each of these test runs obviously resulted in other values for the final project duration, and consequently for the values of the three output variables, the relation between the %C and TC variables, as reported in Fig. 3, remained relatively stable.
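The triangular-distribution robustness run can be sketched with Python's standard library; the low/high fractions below are illustrative placeholders, not the scenario parameters actually used in the paper.

```python
import random

def triangular_duration(baseline, low_frac=0.8, high_frac=2.0, rng=None):
    """Sample a real activity duration from a triangular distribution whose
    mode is the baseline duration (the fractions are hypothetical)."""
    rng = rng or random.Random()
    return rng.triangular(low_frac * baseline, high_frac * baseline, baseline)

d = triangular_duration(10.0, rng=random.Random(42))  # always within [8, 20]
```

The long right tail (here up to twice the baseline) is what produces the large deviations between simulated and baseline durations discussed above.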

Note that all simulation graphs show results for the individual sensitivity measures (the criticality index, the sensitivity index, the cruciality index, and the schedule sensitivity index) and not for their combined versions. In the literature, it has been proposed that combinations of sensitivity measures would lead to better results (see, e.g. the cruciality measure used in the software tool PertMaster as a combination of sensitivity and criticality, or the suggestion made by Williams [9] to use both the CI and the CRI in combination). However, the focus of the current research is on a general study to measure the ability of sensitivity measures to act as a filter to distinguish between highly sensitive and insensitive activities to control the project manager's effort, and not on the use of individual sensitivity indicators to measure risk in activity durations. In that respect, it is conjectured that effects observed for individual sensitivity measures also hold for combinations of these indicators. These general results can be interesting for project managers and can be used as general rules of thumb, which obviously need to be refined and further investigated for specific projects in practice.

5. Conclusions

This paper presents a simulation study to measure the potential of dynamically using activity based sensitivity information to improve the schedule performance of a project. A dynamic corrective action decision making simulation model has been run on a large set of fictitious project networks, varying the degree of a project manager's attention on the project during tracking. The simulation study measures the relevance of four activity based sensitivity measures (the criticality index, the sensitivity index, the cruciality index, and the schedule sensitivity index) during project tracking in relation to the topological network structure, the effort a project manager should put into the tracking process, and the overall schedule performance improvement when taking corrective actions. The study aims at investigating whether the activity sensitivity measures are able to distinguish between highly sensitive and insensitive project activities in order to steer the focus of the project tracking and control phase to those activities that are likely to have the most beneficial effect on the project outcome.

The study should be relevant to practitioners since it provides insights into the project tracking and control phase, and gives a network topology indicator that prescribes where the focus of a project manager should lie during the project tracking phase.

Project tracking is often done based on earned value management (EVM) systems that provide indicators on a project's general time and cost performance, which can be used as triggers to drill down to individual activities to take corrective actions. This approach, referred to as the project based tracking approach, has been shown to provide reliable results when projects contain a lot of serial activities, but often fails for more parallel structured activity networks.

An alternative approach, referred to as the activity based approach, is to control a (preferably small) subset of the project activities, in order to take timely corrective actions in case the timing of one of these activities is in danger. Obviously, the right selection of a subset of project activities is necessary: the smaller the subset, the lower the control effort for the project manager, but obviously those activities with the highest impact on the project duration should be selected. Consequently, in order to select the right subset of project activities, a feeling about the most important activities that are likely the cause of a project delay is necessary. The sensitivity measures can provide that information, and can be used to select the smallest possible activity subset, leading to the best possible results when taking corrective actions. The simulation study of the current paper investigates the potential of various sensitivity measures, and can be considered as an activity based project tracking study comparable to the project based tracking approach.

The results of the current paper show that both a project based and an activity based project tracking approach can lead to reliable results, and their use depends on the topological structure of the underlying project network. More precisely, while previous research results have shown that project based schedule performance information is particularly useful for serial project networks, an activity based tracking approach is much more reliable for networks with a completely different topological structure. The results show that for projects that contain a lot of activities in parallel (where the project based approach leads to poor results), more detailed activity sensitivity information is required by drilling down to lower WBS levels during project tracking. In these cases, management needs a certain feeling of the relative sensitivity of the individual activities on the project objective, in order to restrict its focus to only a subpart of the project while still being able to provide an accurate response during project tracking and to control the overall performance of the project. It has been shown that sensitivity measures are very well suited for that purpose, and certainly the schedule sensitivity index (SSI) and some versions of the cruciality index (CRI) provide relatively better results than the criticality index (CI) and the sensitivity index (SI) when evaluating the contribution of the corrective actions taken during project tracking.

Obviously, the results obtained by the study should be put into the right perspective. First, the results reported in this paper are retrieved from simulation experiments on mainly fictitious data and hence, no project or sector specific conclusions can be drawn. However, these research results could be used as a trigger to apply these concepts to empirical project data in order to make the general conclusions more specific and flavor them with case specific extensions. Moreover, all project simulation runs are only evaluated based on a comparison between the effort of project tracking (percentage control) and the contribution of corrective actions made by the project manager (unit and total contribution). Obviously, real projects should be evaluated on different levels in different stages of their life cycle in order to decide upon corrective actions during their progress and to create a knowledge base of lessons learned at their closing stages. An interesting research direction has recently been discussed by Eilat et al. [37], who present an integrated data envelopment analysis and balanced scorecard approach for R&D project evaluations.

Acknowledgments

We acknowledge the support by the research collaboration fund of PMI Belgium received in Brussels in 2007 at the Belgian Chapter meeting, the support for the research project funding by the Flemish Government (2008), Belgium, as well as the support given by the Fonds voor Wetenschappelijk Onderzoek (FWO), Vlaanderen, Belgium under contract number G.0463.04. This research is part of the IPMA Research Award 2008 project by Mario Vanhoucke, who was awarded at the 22nd World Congress in Rome (Italy) for his study "Measuring Time—An Earned Value Simulation Study".

References

[1] Klingel A. Bias in PERT project completion time calculations for a realnetwork. Management Science 1966;13:B194–201.

[2] Schonberger R. Why projects are ‘‘always’’ late: a rationale based on manualsimulation of a PERT/CPM network. Interfaces 1981;11:65–70.

[3] Gutierrez G, Kouvelis P. Parkinson’s law and its implications for projectmanagement. Management Science 1991;37:990–1001.

[4] Hulett D. Schedule risk analysis simplified. Project Management Network,July 1996.

[5] Lipke W. Schedule is different. The Measurable News 2003;Summer:31–34.[6] Vandevoorde S, Vanhoucke M. A comparison of different project duration

forecasting methods using earned value metrics. International Journal ofProject Management 2006;24:289–302.

[7] Vanhoucke M, Vandevoorde S. A simulation and evaluation of earned valuemetrics to forecast the project duration. Journal of the Operational ResearchSociety 2007;58:1361–74.

[8] Lipke W, Zwikael O, Henderson K, Anbari F. Prediction of project outcome:The application of statistical methods to earned value management andearned schedule performance indexes. International Journal of ProjectManagement 2009;27:400–7.

[9] Williams T. Criticality in stochastic networks. Journal of the OperationalResearch Society 1992;43:353–7.

[10] Elmaghraby S. On criticality and sensitivity in activity networks. EuropeanJournal of Operational Research 2000;127:220–38.

[11] Yao M-J, Chu W-M. A new approximation algorithm for obtaining theprobability distribution function for project completion time. Computers &Mathematics with Applications 2007;54:282–95.

[12] PMBOK. A guide to the project management body of knowledge, 3rd ed.Newtown Square, PA: Project Management Institute, Inc.; 2004.

[13] Martin J. Distribution of the time through a directed, acyclic network.Operations Research 1965;13:46–66.

[14] Van Slyke R. Monte Carlo methods and the PERT problem. OperationsResearch 1963;11:839–60.

[15] Dodin B, Elmaghraby S. Approximating the criticality indices of the activitiesin PERT networks. Management Science 1985;31:207–23.

[16] Fatemi Ghomi S, Teimouri E. Path critical index and activity critical indexin PERT networks. European Journal of Operational Research 2002;141:147–152.

[17] Cho J, Yum B. An uncertainty importance measure of activities in PERTnetworks. International Journal of Production Research 1997;35:2737–58.

[18] Tavares L, Ferreira J, Coelho J. A surrogate indicator of criticality for stochasticnetworks. International Transactions on Operational Research 2004;11:193–202.

[19] Kuchta D. Use of fuzzy numbers in project risk (criticality) assessment.International Journal of Project Management 2001;19:305–10.

[20] Elmaghraby S, Fathi Y, Taner M. On the sensitivity of project variability toactivity mean duration. International Journal of Production Economics1999;62:219–32.

[21] Gutierrez G, Paul A. Analysis of the effects of uncertainty, risk-pooling, andsubcontracting mechanisms on project performance. Operations Research2000;48:927–38.

[22] Demeulemeester E, Vanhoucke M, Herroelen W. A random network generatorfor activity-on-the-node networks. Journal of Scheduling 2003;6:13–34.

[23] Vanhoucke M, Coelho JS, Debels D, Maenhout B, Tavares LV. An evaluation ofthe adequacy of project network generators with systematically samplednetworks. European Journal of Operational Research 2008;187:511–24.

[25] Vanhoucke M, Vandevoorde S. Measuring the accuracy of earned value/earned schedule forecasting predictors. The Measurable News 2007;Winter:26–30.

[26] Vanhoucke M, Vandevoorde S. Earned value forecast accuracy and activity criticality. The Measurable News 2008;Summer:13–16.

[27] Mastor A. An experimental and comparative evaluation of production line balancing techniques. Management Science 1970;16:728–46.

[28] Elmaghraby S. Activity networks: project planning and control by network models. New York: Wiley; 1977.

[29] Williams T. Towards realism in network simulation. Omega—International Journal of Management Science 1999;27:305–14.

[30] AbouRizk S, Halpin D, Wilson J. Fitting beta distributions based on sample data. Journal of Construction Engineering and Management 1994;120:288–305.

[31] Johnson D. The triangular distribution as a proxy for the beta distribution in risk analysis. Statistician 1997;46:387–98.

[32] Kuhl ME, Lada EK, Steiger NM, Wagner MA, Wilson JR. Introduction to modeling and generating probabilistic input processes for simulation. In: Henderson S, Biller B, Hsieh M, Shortle J, Tew J, Barton R, editors. Proceedings of the 2007 winter simulation conference. New Jersey: Institute of Electrical and Electronics Engineers; 2007. p. 63–76.

[33] Hans E, Herroelen W, Leus R, Wullink G. A hierarchical approach to multi-project planning under uncertainty. Omega—International Journal of Management Science 2007;35:563–77.

[34] Huang C-C, Chu P-Y, Chiang J-H. A fuzzy AHP application in government-sponsored R&D project selection. Omega—International Journal of Management Science 2008;36:1038–52.

[35] Durbach I, Stewart T. Using expected values to simplify decision making under uncertainty. Omega—International Journal of Management Science 2009;37:312–30.

[36] Vanhoucke M. The effect of project schedule adherence and rework on the duration forecast accuracy of earned value metrics. Technical Report, Ghent University, Ghent, Belgium; 2008.

[37] Eilat H, Golany B, Shtub A. R&D project evaluation: an integrated DEA and balanced scorecard approach. Omega—International Journal of Management Science 2008;36:895–912.