Single-Loop Project Controls: Reigning Paradigms or Straitjackets?

Tarek K. Abdel-Hamid, Naval Postgraduate School, Department of Information Sciences, Monterey, CA, USA

ABSTRACT

This article reports on the results from an ongoing research program to study the role mental models play in project decision making. Project management belongs to the class of multiloop nonlinear feedback systems, but most managers do not see it that way. Our experimental results suggest that managers adopt simplistic single-loop views of causality, ignore multiple feedback interactions, and are insensitive to nonlinearities. Specifically, the article examines single-loop models of project planning and control, discusses their limitations, and proposes tools to address them.

KEYWORDS: feedback; mental models; multiloop nonlinear systems; planning and control; simulation; system dynamics

Project Management Journal, Vol. 42, No. 1, 17–30
©2010 by the Project Management Institute
Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/pmj.20176



All Project Decisions Are Based on Models—Usually Mental Models

Every week—on projects large and small—project managers analyze many situations and make tens, even hundreds, of decisions. Rarely, however, do they stop to think about how they think.

No manager’s head contains a business, a project, people, resources, software, or hardware. All human decisions are based on models, usually mental models—of project roles and relationships, cost-schedule trade-offs, organizational structures, and so on—created out of each person’s prior experiences, training, and instruction (Hunt, 1989). These are the deeply ingrained assumptions, generalizations, and even pictures and images we form of ourselves, others, the environment, and the things with which we interact. People base their models on whatever knowledge they have, real or imaginary, naïve or sophisticated (Norman, 1988, p. 38). Once formed, these cognitive constructs not only provide a basis for interpreting what is currently happening, but they also strongly influence how we act in response (Chapman & Ferfolja, 2001).

We like to think (and most of us believe) that well-adjusted individuals possess relatively accurate mental models of themselves, their jobs, and their environment. Unfortunately, this isn’t the case. A great deal of research in cognitive psychology has revealed that mental models are only simplified abstractions of the experienced world and are often incomplete, reflecting a world that is only partially understood (Chapman & Ferfolja, 2001; Peterson & Stunkard, 1989; Taylor & Brown, 1988).

Everyone, from the greatest genius to the most ordinary clerk, has to adopt mental frameworks that simplify and structure the information encountered in the world. . . . [mental models] keep complexity within the dimensions our minds can manage. . . . But beware: Any [model] leaves us with only a partial view of the problem. Often people simplify in ways that actually force them to choose the wrong alternatives. (Russo & Shoemaker, 1989, p. 15)

Project managers are no exception.

This article is part of an ongoing program of research to study the role mental models play in project decision making. Specifically, this article examines models of project planning and control, discusses some of their limitations, and proposes tools to address the associated deficiencies.

System Dynamics Microworld for the Study of Project Decision Making

To date, much of the research conducted to examine the role mental models play in human judgment and choice has been limited to the study of static-type decision tasks (Gonzalez, Vanyukov, & Martin, 2005). The task of managing a project—a dynamic decision-making task—differs from the static variety customarily studied (e.g., in cognitive psychology) in at least three ways: (1) it involves a series of decisions rather than a single decision; (2) the decisions are interdependent; and (3) the environment changes, both autonomously and as a consequence of the subjects’ decisions (Brehmer, 1990).



Such dynamic-type decision tasks have been likened to the pursuit of a target that not only moves, but also reacts to the actions of the pursuer. This not only complicates the task for the decision maker, it has also (until relatively recently) made such tasks exceedingly difficult to study.

. . . the study of real-time, dynamic decision making requires new forms of research technology. One cannot study dynamic tasks using the ordinary paper-and-pencil approach of psychological research. Instead, interactive computer simulations of dynamic tasks are required. The technology for this has only recently become available in psychological laboratories.

Most later experiments on dynamic decision making have used computer simulations of dynamic tasks. (Brehmer, 1990)

Experimental simulation “laboratories”—also called microworlds—enable the replication of complex dynamic environments—with moving targets that react to the decision maker—and provide a degree of control not easily obtained in field settings (Sterman, 2000). In a microworld-type experimental environment—unlike in real life—the effect of changing one factor can be observed while all other factors are held unchanged.

For our research program, we developed and used such an experimental microworld. Our project management simulator—a system dynamics model of software project management—was developed as part of an empirical case study to study and model the software development process at one of NASA’s flight centers. The model captures the richness and complexity of the NASA software development environment in great detail, and uniquely integrates the engineering-type functions (designing, coding, and quality assurance) together with the management-type functions (planning, controlling, and staffing). (The model’s structure and its validation are described in detail in Abdel-Hamid and Madnick [1991].)

Analogous to the flight simulators that pilots use to practice on and learn about the complexities of flying an aircraft, a project management microworld provides a virtual practice field for managers to “fly” a project and experience the long-term consequences of their decisions (Sterman, 1992). In a typical experimental scenario, our subjects “play” the role of the project’s manager—making project cost and schedule estimates, monitoring progress, and making staffing and other resource decisions over the life of the software project.

Over the last two decades, close to a thousand experimental subjects participated in our experiments. Many were graduate students (master’s students in a computer systems management curriculum who had an average of 10 years of work experience). In addition, several hundred practicing managers (executive-education participants) also participated. Many in that latter group were senior managers who had spent most of their careers overseeing complex projects for commercial enterprises and government agencies (Sengupta, Abdel-Hamid, & Van Wassenhove, 2008).

Seeing Two Loops . . . But Only One at a Time

Human decision behavior, empirical studies demonstrate, is highly adaptive (Payne, Johnson, Bettman, & Coupey, 1990). When tackling complex decision tasks, for example, people draw upon a repertoire of heuristics (mental models) and adapt their decision-making strategies to the perceived demands of the task (Payne, Bettman, & Johnson, 1993). Project management proved to be a good case in point.

Among the most striking examples of contingent judgment in the project management domain are the adaptations that managers make in their staffing strategies as a project progresses through the life cycle. Staffing decisions are doubly interesting to study because they are among the most consequential decisions a project manager makes—with significant impacts on project cost, schedule, and quality. To illustrate, consider the typical staffing pattern of Figure 1 (curve 1).

As mentioned earlier, the software project used in our experiments is a simulation of a real NASA project that was conducted to develop software for processing satellite telemetry data. At the start of the project, the system’s size was estimated to be 400 tasks,1 and the project’s cost and duration were estimated to be 1,100 person-days and 320 days, respectively. As often happens on software projects, the system’s size grew over time—to 600 tasks—because of added system requirements (see curve 3).

1 A task is a unit of work to build (design, code, and test) a software module of average size—say, 50 lines of code.

The plots of Figure 1 are not the results actually observed on the real NASA project (we’ll see those later); rather, they portray typical results obtained in our simulation-based experiments (using the simulated NASA project). The subject’s task—as project manager—was to track the project’s progress using status reports generated at different stages of the project (by the “simulated” project team), and decide whether to update cost and schedule estimates, increase or decrease staff level, and reallocate staff among the various project activities (such as among development and quality assurance). The experimentation environment automatically tracked not only the decisions the subjects made (e.g., how much staff they hired/fired), but also what status reports they used and how much time they spent on the different tasks. To gain deeper insight into our managers’ mental models, we also conducted postgame debriefings where we asked the subjects to verbalize the assumptions they made while making the various project decisions.

The staffing pattern of Figure 1—selected because it was typical—mirrors the profile one commonly observes in practice, with the project starting with a small core team, gradually building up staff size through the detailed design and coding phases, and ultimately with the staff level tapering off as the project enters the final testing phase. Note also how in the early stages of the project, as the project’s size was growing, the manager held the completion date steady. According to DeMarco (1982), the inclination not to adjust a project’s schedule early in the life cycle is quite common. It arises, he argued, because of political pressures. For example, a manager may resist adjusting the schedule completion date early in the project because he/she might fear that it’s too risky to show an early slip to the customer (or the boss) or that if he/she re-estimates early, they risk having to do it again later (and looking bad twice).

To system dynamicists—who are “conditioned” to spot feedback structures in systems—the staffing profile of Figure 1 itself suggests that the staffing decision is driven by two distinct mental models: a negative feedback model early in the life cycle and a positive feedback model in the later stages. Our postproject debriefings would indeed confirm that and help reveal the loops’ causal structures.

Early in the life cycle, the subjects’ mental model of the planning and control task is shown in Figure 2. Project resources (such as manpower, development tools, equipment, etc.) are acquired and applied to accomplish project work. As project work is accomplished, the stock of project tasks perceived remaining declines. By tracking the rate at which this happens (vis-à-vis the planned rate of progress), the manager can determine if the project’s forecast completion time needs to be updated. If the forecast completion time (what the manager believes is likely to happen) and the scheduled completion date (what’s promised to the customer) start to diverge, the manager can try to adjust the size or allocation of the project’s resources in order to close the gap and bring the project back on track. This planning and control loop is not a one-time affair but rather is a continuous process that goes on throughout the life cycle.

Figure 2: Planning and control (negative) feedback loop. (The loop links scheduled completion date, forecast completion date, the gap between them, resources, work rate, and a delay.)

The loop of Figure 2 encompasses the archetypical goal-seeking feedback strategy we rely on—both consciously and subconsciously—to control many processes in daily life: where the state of some system we aim to control is compared to our goal for the system, and if a discrepancy is detected, corrective action is taken to close the gap and bring the system back in line with the goal. Indeed, such a negative feedback process underlies all goal-oriented behavior. Nature evolves such goal-seeking feedback mechanisms, and humans invent them as controls to keep system states within desired bounds (Meadows, 1999). For example, the homeostatic process built into our physiology to maintain body core temperature is such a process, as is the human-built thermostat that keeps a room’s temperature at a desired level.
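To make the loop concrete, here is a minimal sketch of one pass around the planning-and-control loop. It is an illustration only; the staffing rule, productivity figure, and task counts are illustrative assumptions, not values from the NASA model.

```python
# One pass around the goal-seeking (negative feedback) loop of Figure 2.
# All numbers below are illustrative assumptions, not values from the NASA model.

def forecast_completion(day, tasks_remaining, staff, productivity):
    """Forecast the completion day from the current work rate (tasks/day)."""
    work_rate = staff * productivity
    return day + tasks_remaining / work_rate

def control_step(day, tasks_remaining, staff, productivity, scheduled_completion):
    """Compare forecast with schedule; if a gap opens, adjust resources to close it."""
    forecast = forecast_completion(day, tasks_remaining, staff, productivity)
    gap = forecast - scheduled_completion            # positive gap => running late
    if gap > 0:
        time_left = scheduled_completion - day
        staff = tasks_remaining / (productivity * time_left)   # staff needed to hold the date
    return staff

# Example: day 100, 300 tasks left, 2 people at 0.4 tasks/person-day, schedule = day 320.
print(control_step(100, 300, 2, 0.4, 320))   # -> about 3.4 full-time people needed
```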

Notice, however, that this initial strategy—opting to maintain the completion date steady while willing to adjust the staff resource to avert potential project delays—is reversed late in the life cycle. In the later stages of the project (beyond day 300 in Figure 1), the staffing level is held steady and project delays are handled by extending the project’s completion date instead. In terms of the goal-seeking structure of Figure 2, this means that at the later stages, the project manager sought to close the perceived gap between scheduled and forecast completion dates by adjusting the goal instead of adjusting the resources.

In our postexperimental debriefings, we asked the participants to explain why they refrained from hiring more staff late in the project and preferred instead to extend the completion date. Their answers revealed that their mental models adapted—just as the contingent theory of judgment predicts—as the project progressed through the life cycle. More specifically, their responses indicated two interesting things: (1) the goal-seeking feedback model (Figure 2), which drove their decisions early on, no longer did in the later stages; and (2) there was remarkable consistency with regard to the rationale that drove the shift.

Figure 1: Typical project behavior. (Simulation plots of full-time equivalent workforce, scheduled completion date, and perceived job size over roughly 500 days.)

Recall that in the mental model of Figure 2 there is an implicit direct relationship between project resources and work rate—that is, the “expectation” that an increase in project resources boosts the work rate. While this may be “approximately” true early in a project’s life cycle, most participants understood that it is almost never true in the later stages. A simplistic linear relationship between project resources and work rate ignores the fact that adding more people (especially late in the project) often leads to higher communication and training overheads, which tend to dilute the team’s overall productivity. These effects create the phenomenon referred to as Brooks’s Law: “adding more people to a late software project makes it later.” (Brooks’s Law was first publicized in The Mythical Man-Month: Essays on Software Engineering [Brooks, 1975], which remains on the must-read list of most project managers.)

Assimilation delays are a big part of the Brooks’s Law phenomenon. These are the delays incurred when assimilating new staff into the project team—that is, bringing them up to speed on the details of the project and providing them with the necessary training on the project’s hardware platform, development tools, and methodologies. This assimilation process is often time-consuming—generally ranging from 2 to 6 months—and imposes a significant drag on productivity. During assimilation, not only is the new employee not fully productive, but because the “hand-holding” is typically performed by the veterans on the project, the productivity of the veterans also suffers.2

2 For reasons we will understand shortly, the average assimilation period on the NASA project was unusually short—only 4 weeks long. Because of that, it was among the project characteristics explicitly communicated to the participants in the pre-experimental orientation.

While the productivity “hit” associated with the hiring and assimilation of new staff may be absorbed and, therefore, “safely” discounted early in the life cycle, the impact is more problematic when staff are added late (as Brooks argued convincingly). This was understood by our experimental subjects—many, as mentioned, were experienced managers. Hence, the mental model that drove their staffing decisions late in the life cycle was not the goal-seeking structure of Figure 2 but rather the so-called “Brooks’s Law feedback loop” of Figure 3.

Figure 3: The Brooks’s Law feedback loop that dominates late in the life cycle. (Nodes: resources, communication and training overheads, productivity, the gap between forecast and scheduled completion dates, pressure to adjust the schedule, and a delay.)

This second loop is different from the negative feedback loop of Figure 2 in a very important way—it is a positive loop. Whereas negative loops counteract and oppose change, positive loops by contrast tend to reinforce or amplify it (Sterman, 2000, p. 12). You can follow this self-reinforcing dynamic by walking yourself around the loop: increasing staff resources through hiring leads to higher communication and training overheads, which lower productivity and slow the work rate, thereby increasing (rather than decreasing) the gap between project status and plan, which induces further staff additions.

To avoid the vicious trap of Brooks’s Law, most project managers in our experiments refrained from adding staff later in the life cycle and opted to close the gap between perceived project status and plan the other way—by extending the schedule. (That’s the goal adjustment path shown as the upper loop in Figure 3.) Essentially, then, what they were doing was seeking to close the gap between project status (the state of the system they were striving to control) and the plan (the system’s goal) by lowering the goal rather than by taking corrective action(s) to bring the project’s state into line with the plan. (In the systems thinking literature, this is referred to as the “goal erosion” dynamic.)
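The self-reinforcing character of the loop can be seen in a few lines of code. The sketch below charges each new hire a training drag against the veterans and a lower personal output; the specific fractions are illustrative assumptions, not values from the NASA study.

```python
# Walking once around the Brooks's Law loop of Figure 3.
# The overhead fractions below are illustrative assumptions.

def effective_work_rate(veterans, new_hires, base_prod=1.0,
                        training_drag=0.10, newcomer_factor=0.4):
    """Team work rate after communication/training overheads are charged:
    each newcomer produces only a fraction of a veteran's output and also
    drains veteran time through hand-holding."""
    veteran_output = veterans * base_prod * (1 - training_drag * new_hires)
    newcomer_output = new_hires * base_prod * newcomer_factor
    return veteran_output + newcomer_output

before = effective_work_rate(veterans=5, new_hires=0)
after = effective_work_rate(veterans=5, new_hires=1)
print(before, after)   # 5.0 vs 4.9: the late addition *lowers* the work rate,
                       # widening the status-vs-plan gap and inviting further hiring
```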

Binary Thinking: How “One Plus One Equals One”

In reality, it is important to realize, both loop effects—the negative loop of Figure 2 and the positive loop of Figure 3—are present and operating in the project from beginning to end. Project management, in other words, is a multiloop nonlinear feedback system—not a “one-pony show” (Figure 4). But rather than seeing this multiloop (more complex) reality, our findings suggest that most project managers view the world through a simpler single-loop lens. The “single-loop illusion” arises because the nonlinearities in multiloop systems cause the relative strengths (hence, visibility) of the loops to shift over time (Forrester, 1987). As a feedback loop gains strength (relative to other loops in the system), it dominates and, hence, becomes more salient.

Figure 4: Multiloop reality of project management. (The planning-and-control loop and the Brooks’s Law loop share the same nodes: scheduled completion date, forecast completion date, gap, pressure to adjust schedule, resources, work rate, productivity, communication and training overheads, and a delay.)

Reducing a complex phenomenon or choice to a binary set—negative or positive feedback in this case—is no aberration. It is a convenient (occasionally sufficient) mental shortcut we routinely rely on to simplify our world. And not just in thinking about project management, but also in many judgmental tasks we face. Indeed, it almost seems to be part of human nature (Wood & Petriglieri, 2005).

As Stephen Breyer, the U.S. Supreme Court Associate Justice, observes in his book Breaking the Vicious Circle:

We simplify radically; we reason with the help of a few readily understandable examples; we categorize (events and other people) in simple ways that tend to create binary choices—yes/no, friend/foe, eat/abstain, safe/dangerous, act/don’t act. The resulting categorizations do not always accurately describe another person or circumstance, but they help us make quick decisions, most of which prove helpful. (Breyer, 1993, p. 35)

Most of which!

While binary thinking may help us minimize cognitive effort and make quick decisions, it dramatically oversimplifies things. And this, as Justice Breyer cautions later in his book, can seriously inhibit our understanding of a complex problem or situation. In the case of managing the staffing level on a software project, it can seriously undermine a manager’s capacity to determine the optimal staffing level.

Feedback-Loop Arithmetic: One Positive Loop + One Negative Loop Equals . . .

As mentioned, project management belongs to the class of multiloop nonlinear feedback systems. That’s the same class that defines some of our most complex technological systems, including chemical refineries, autopilots, and communication networks. In such multiloop systems, discerning the dynamic behavior of any one of the individual loops in isolation (the loops of Figures 2 or 3) may be reasonably obvious, but figuring out the behavior of multiple interconnected feedback loops (some positive, some negative) can be tricky. The complexity of the bookkeeping task is further compounded when there are significant nonlinearities and delays in the system that alter the relative strengths of the loops over time. This is precisely what happens in a software project: as a project progresses through the life cycle, nonlinear interactions and delays dynamically alter the relative strength of the Brooks’s Law loop.

To illustrate the dynamic complexities, consider the following hypothetical project situation:

A medium-sized software project that is currently at the midpoint in its life cycle is falling slightly behind schedule. At that point, the project team is composed of five team members and the team’s average productivity is clocked at 100 lines of code (LOC) per person-month. With the project falling behind schedule, the project’s manager is considering adding one additional person.

As already discussed, newly hired staff often require considerable hand-holding to get up to speed. And because the training of the newcomers—both technical and social—is usually carried out by the old-timers, adding staff to a late project can significantly dilute the team’s average productivity. In this hypothetical project situation, the hire/no-hire decision will rest on the manager’s answer to the following question: Will the temporary drain on productivity be shallow and/or brief enough that it is more than compensated for by the gains in productivity achieved later when assimilation is complete?

The “unequivocal answer” to that question is: It depends. That’s because the magnitude of the initial “hit” to team productivity and the length of the assimilation delay are both organization- and project-specific. They depend on the quality of the people hired and on whether the project is simple or complex, familiar or one of a kind.

To demonstrate the effects, consider the two scenarios depicted in Figure 5.

Figure 5: Two project scenarios with different impacts on productivity. (Current status: close to the midpoint of the project; 5 people working on the project; average productivity of 100 LOC/person-month; because the project is late, one additional person is to be hired. Scenario 1: add 1 person, ProdNew = 80 LOC/PM, ProdOld = 90% of 100, average productivity = 88 LOC/PM, output = 530 LOC/month. Scenario 2: add 1 person, ProdNew = 40 LOC/PM, ProdOld = 90% of 100, average productivity = 82 LOC/PM, output = 490 LOC/month.)

Figure 5 depicts—for two different scenarios—the productivity values during assimilation for the newly hired person (ProdNew) and the five veterans (ProdOld). In both cases, I am assuming that the productivity of the five veterans on the project drops by 10% (that is, to 90 LOC/person-month [PM]) during the assimilation period. In scenario 1—a run-of-the-mill project—the productivity of the new hire is not much lower than that of the veterans—at 80 LOC/person-month. Thus, in scenario 1, average productivity for the expanded six-person team drops to 88 LOC/PM, while the team’s output increases from 500 to 530 LOC/month. In scenario 2—a more complex project—the newcomer induces a bigger “productivity hit,” with average team productivity dropping to a lower 82 LOC/PM and the team’s output decreasing to 490 LOC/month. This means that in scenario 2 (but not scenario 1), the addition of a new person to the team induces a negative net contribution to the team’s output (of 490 – 500 = –10 LOC/month).
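The Figure 5 arithmetic can be reproduced in a few lines; the values below are the ones given in the two scenarios.

```python
# Reproducing the Figure 5 arithmetic for the two scenarios.

def team_stats(veterans, vet_prod, new_hires, new_prod):
    """Return (average productivity in LOC/person-month, total output in LOC/month)."""
    output = veterans * vet_prod + new_hires * new_prod
    average = output / (veterans + new_hires)
    return round(average), output

print(team_stats(5, 90, 1, 80))   # scenario 1: (88, 530)
print(team_stats(5, 90, 1, 40))   # scenario 2: (82, 490)
print(490 - 5 * 100)              # scenario 2 net contribution vs. the original
                                  # five-person team: -10 LOC/month
```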

In both cases, the increased training and communication overheads during assimilation cause average productivity to drop (to 88 LOC/PM in scenario 1 and to an even lower 82 LOC/PM in scenario 2). The drop in average team productivity, in turn, means that project costs will rise (since a project’s cost in person-days is equal to project size [in LOC] divided by average productivity). An increase in project cost (in person-months) does not, however, necessarily translate into an increase in project duration. Total team output in LOC/month (not average team productivity) is what would determine that. More precisely, for the project’s schedule to also suffer (together with project cost), the drop in productivities must be large enough to render the additional person’s net cumulative contribution to the team’s output negative. We need to calculate the net contribution because an additional person’s contribution to useful project work (e.g., 40 LOC/month in scenario 2) must be balanced against the losses incurred by the veterans (the 10% productivity drop experienced by the five existing team members). And we need to calculate the cumulative contribution because while a new hire’s net contribution might be negative initially, as training takes place and the new hire’s productivity increases (see Figure 6), the net contribution becomes less and less negative, and eventually (given enough time on the project) the new person starts contributing positively to the project. (For example, at the point in Figure 6 where the new hire’s productivity grows to 80 LOC/PM, his/her net contribution would be the same as in scenario 1 [i.e., a positive 30 LOC/month].)

Figure 6: Productivity of a new hire picks up over time. (New-hire productivity, in LOC/PM, starts at 40 on the day hired and rises over time toward the 80–90 LOC/PM range.)
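The distinction between cost (driven by average productivity) and schedule (driven by total output) can be made concrete with a small calculation. The 3,000 LOC of remaining work below is an illustrative assumption used only to show the direction of the effects.

```python
# Cost responds to average productivity; duration responds to total output.
# The 3,000 LOC of remaining work is an illustrative assumption.

remaining_loc = 3000
cases = {
    "original 5-person team": (100, 500),   # (avg LOC/PM, output LOC/month)
    "scenario 1 (6 people)": (88, 530),
    "scenario 2 (6 people)": (82, 490),
}
for name, (avg_prod, output) in cases.items():
    cost_pm = remaining_loc / avg_prod      # effort in person-months
    months = remaining_loc / output         # remaining calendar months
    print(f"{name}: cost {cost_pm:.1f} person-months, duration {months:.2f} months")

# Scenario 1: cost rises but the schedule improves.
# Scenario 2: cost rises and the schedule slips (while productivity stays depressed).
```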

Only if net cumulative contribution is negative will the addition of the new staff member translate into a longer project-completion time. Whether this happens or not will be a function of the complexity of the project, the quality and experience of the added staff, and the stage in the life cycle when they are added. The earlier in the life cycle people are added and/or the shorter the training period needed (e.g., due to the high quality of new hires or the low complexity/novelty of the project), the more likely it is that the net cumulative contribution will turn positive. Conversely, the later in the life cycle that people are added and/or the costlier the assimilation process, the stronger the “Brooks’s feedback loop” of Figure 4, and the more likely it is that the net cumulative contribution will remain negative.

In scenario 2, for example, whether or not the net cumulative contribution turns positive by project’s end will depend on the rate at which productivity improves and on the remaining time to complete the project. Doing the necessary “bookkeeping” to figure that out is no trivial matter, however (Forrester, 1964). (Essentially, it involves solving a high-order nonlinear differential equation—a difficult task for all but the simplest systems.) On a “live” dynamic project (as opposed to the snapshot of Figure 5), the calculus is further complicated by the fact that not one but several different types of individuals may be added, and not necessarily at once but at different times during the project.
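A toy version of that bookkeeping is sketched below for scenario 2. The linear productivity ramp and the three-month assimilation period are illustrative assumptions (the article notes assimilation typically runs 2 to 6 months, and only 4 weeks at NASA); the point is simply that the sign of the cumulative net contribution depends on the ramp and on the time remaining.

```python
# Scenario-2 bookkeeping sketch: cumulative net contribution of one new hire,
# with the new hire's productivity ramping up (Figure 6) while five veterans
# each lose 10 LOC/PM to hand-holding. Ramp shape and assimilation length are
# illustrative assumptions, not calibrated NASA values.

def cumulative_net_contribution(months_remaining, assimilation_months=3,
                                start_prod=40, full_prod=90,
                                veterans=5, veteran_loss=10):
    cumulative = 0.0
    for m in range(months_remaining):
        if m < assimilation_months:
            new_prod = start_prod + (full_prod - start_prod) * m / assimilation_months
            drag = veterans * veteran_loss      # veterans' lost output this month
        else:
            new_prod, drag = full_prod, 0.0     # fully assimilated
        cumulative += new_prod - drag           # net LOC added this month
    return cumulative

print(cumulative_net_contribution(months_remaining=2))   # negative: hired too late to pay off
print(cumulative_net_contribution(months_remaining=6))   # positive: enough time to recoup
```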

Our own experimental results do indeed suggest that, for most managers, the bookkeeping task is far too complex to accomplish by inspection and intuition. Recall that in our experiments, the commonly adopted staffing strategy was to refrain from hiring staff a little after the project’s midpoint (around day 300 in Figure 1). (This suggests an implicit assumption that the net cumulative contribution turns negative beyond that point.) That strategy led to a project duration of 440 days (as seen in Figure 1)—a duration, it turns out, that is far from optimal!

Before discussing what is optimal, let us first see what actually transpired on the real NASA project.

Figure 7 depicts the model’s simulation of the real NASA project (the simulation run used during model validation) together with the project’s actual results. As can be seen, the model’s output closely matched the project’s actual behavior (represented by the solid circles/triangles/squares in the figure).

Figure 7: Actual behavior on the NASA project. (Actual workforce in full-time equivalent people, rising from 2 to 13 across the design, coding, and testing phases; actual “estimated cost” in person-days, growing from 1,100 to 2,200; and actual “estimated schedule” in days, starting at 320; T1 marks the point late in the life cycle after which the workforce rises sharply.)

Notice that the scale on the horizontal (time) axis of Figure 7 is missing. This is purposefully done so we may undertake a simple thought experiment—one that we often conduct in conjunction with our laboratory experiments. To do that, first compare NASA’s workforce pattern to that of Figure 1. A simple comparison should convince you that the staffing strategy at NASA was a lot more “aggressive”—with management willing to add significantly to the workforce until fairly late into the life cycle. (Note, in particular, the dramatic increase in workforce after time T1.) This raises the following legitimate question: How much did such an aggressive (reckless?) hiring policy—one that blatantly ignores the “lesson” of Brooks’s Law—hurt the NASA project?

Contemplate that question for a minute and, before reading further, provide your best guess as to how much longer you think the actual project took as a result—that is, beyond the 440 days obtained with the workforce policy of Figure 1.

• Contemplate for a few minutes the implications of NASA’s “aggressive” staffing policy.
• And provide your best guess: Project duration = ___________ days.

The “Un-Wisdom” of “Conventional Wisdom”

Typical answers we get range from 500 to 650 days. That’s a 15 to 50% “penalty” our experimental managers slap onto NASA’s management for forsaking the lesson of Brooks’s Law.

On the real project, with its “anti-Brooks” staff policy, project duration was 380 days! That’s approximately three calendar months earlier than the “by-the-book” workforce policy of Figure 1.

This result is often an absolute “shock” to most participants—many, as mentioned, were seasoned managers who had spent most of their careers running software projects. And this invariably triggers questions like: How can such a policy work for NASA when it was so dysfunctional at IBM? And does this mean we should “repeal” Brooks’s Law?

To answer the first question, we need to recall that the net cumulative contribution is a dynamic variable whose ultimate value is a function of both the characteristics of the system being developed and the people hired to develop it. Our empirical results from NASA do suggest that, in practice, it is possible to compress communication and assimilation overheads to the point where the net cumulative contribution remains positive even when staff are added late—very late. The project’s statistics do not, however, explain the how.

To understand the cause behind the causes, we need to dig a bit deeper into NASA’s system/people characteristics.

Let’s start with system characteristics. The satellite software that was being developed on the project, while new and unique, was not fundamentally different from satellite software developed on earlier projects. (This meant that, similar to the run-of-the-mill scenario 1 of Figure 5, ProdNew would be only moderately lower than ProdOld.) As on earlier satellite projects, the software for this project was being developed in parallel with the design of the satellite’s hardware (its processors and sensors). Over the years, NASA managers learned the hard way that such two-track projects are particularly prone to late “surprises” when the software and hardware subsystems are first brought together. Inevitably, some software/hardware components will fail to meet specified functionality or performance targets, and when that happens, software is almost always where management turns—because of economics—to engineer a “detour solution.”

To manage in such an environment, NASA’s software managers figured they not only needed the capacity to add staff on short notice, but also access to a reliable pool of experienced software designers and programmers who can be counted on to contribute to a project immediately when hired. In the particular NASA flight center we studied, management sought to achieve that by instituting a long-term contractual arrangement with a single contractor—in this particular case with the Computer Sciences Corporation (CSC). Over the years, as a result of the steady relationship, the pool of CSC software professionals became intimately familiar with the NASA environment and the satellite software, and when hired into a project they were indeed able to contribute to project work relatively quickly and without incurring a great deal of communication or training overheads.

The policy helped NASA compress both the hiring and assimilation delays significantly (to six and four weeks, respectively) and caused the loss to productivity during the relatively shallow assimilation period to be minimal. On our case-study project, the project’s ultimate outcome suggests that—as a consequence of these system/people characteristics—adding manpower very late into the project did not cause the net cumulative contribution to be negative. (In the next section, I present a more quantitative analysis of the impact.)

Which brings us to the second question we posed: Does this mean we must now repeal Brooks’s Law? To do that on the basis of the above results would in fact be inappropriate. And that’s simply because the positive result of NASA’s (aggressive) staffing policy is an entirely company-specific result. Thus, the answer to our second question must be no.

What the results do underscore, however, are the perils of blind adherence to conventional wisdom and simplistic one-size-fits-all prescriptions (e.g., that “adding more people to a late software project makes it later”). It is not the first time (nor will it be the last) that conventional wisdom has been proven wrong. John Kenneth Galbraith, the man who coined the phrase “conventional wisdom,” did not consider it a compliment. Conventional wisdom, Galbraith often lamented, reflects our tendency to associate truth with convenience. Because comprehending the true character of a complex system or problem can be “mentally tiring,” he argued, people all too often adhere to simplified conceptualizations, as though to a raft, because they are easier to understand. In Galbraith’s view, conventional wisdom must be simple, convenient, comfortable, and comforting—though not necessarily true (Levitt & Dubner, 2005, pp. 89–90).

Combining the Strengths of Mental Models and Computer Models

While adding staff to a late project was counterproductive at IBM (Brooks, 1987), the same policy worked well for NASA. For project managers elsewhere, the $64,000 question becomes: What would work in my organization?

Figuring that out requires two essential cognitive skills. First, the manager needs to develop an adequate causal model of his/her project environment—what’s referred to in control theory as the “operator’s” model. By that is meant acquiring structural knowledge of the project environment—that is, understanding how system variables, such as people and system characteristics, hiring and assimilation delays, staff experience/productivity, and the like, are related and how they influence one another. Second, to infer how the system behaves in response to some intervention, the manager must be able to “run” that model (Brehmer, 1990; Conant & Ashby, 1970; Kleinmuntz & Thomas, 1987). A perfect operator model without a capability to “run” it is of little practical utility (Sterman, 1994). The ability to infer system behavior is essential if the project manager is to know how actions taken (such as adding staff) will influence the system and, thus, is essential in devising appropriate interventions for change. The two skills—understanding and prediction—are needed together. And herein lies a problem!

Experience from working with managers in many environments indicates that while they are generally capable of grasping the unique characteristics of their environments (acquiring structural knowledge), they are usually unable to accurately determine the dynamic behavior implied by these relationships (running their operator models) (Sterman, 2000). The human mind, experiments consistently show, is an excellent recorder of decisions, reasons, motivations, and structural relationships, but it is not that good (nor reliable) at inferring the behavioral implications of interactions over time (Forrester, 1979). Being able to “run” our mental model of some system or situation, in other words, is a much more difficult task for us.

Luckily, that’s precisely where computer modeling can help (Forrester, 1979). Unlike a mental model, a computer simulator can reliably and efficiently trace through time the implications of a messy maze of interactions. And it can do so without stumbling over phraseology, cognitive bias, or gaps in intuition (Richardson & Pugh, 1981). Computer simulation is thus well suited to fill the gap where human judgment is most suspect. Furthermore, by tailoring model parameters, computer-based tools can be easily customized to fit the precise specifications of different project/organizational environments.

To answer our $64,000 question, we thus need to combine the strengths of the manager with the strengths of the computer. The manager aids by specifying relationships within his/her software project environment (e.g., people and system characteristics, hiring and assimilation delays, staff experience/productivity, etc.), and the computer then calculates the dynamic consequences of these relationships (e.g., on cost and duration). To demonstrate how this can be accomplished in practice, I discuss next how it was done as part of the NASA case study.

The obvious place to start—since we’re seeking to combine the strengths of mental models and computer models—is to elicit the managers’ mental models (in this case relating to project staffing). To do that, we conducted one-on-one structured interviews where we asked the managers about the information they used and how they used it in formulating staffing decisions. This information was then cross-checked with reviews of historical project records. From this we were able to map out a set of (rather “nuanced”) heuristics that governed NASA’s staffing policy.

Not unlike managers elsewhere, NASA’s managers had to juggle a number of conflicting objectives when determining the workforce level. One obvious objective was to maintain the workforce at the level they believed was necessary to complete the project on its current schedule. This workforce level was referred to as the “indicated workforce level” and was determined by dividing the amount of effort perceived remaining (in person-days) by the time remaining (in days). In addition to this all-important scheduling goal, consideration was also given to the stability of the workforce. What was interesting—and significant—here was that the relative weighting between the desire to maintain workforce stability on the one hand and the desire to complete the project on time on the other was not static but changed dynamically throughout the life of a project. To do that, they conjured a mental heuristic—which we dubbed the “Willingness to Change the Workforce” (WCWF) heuristic—that worked as follows:

Workforce Level Needed = (Indicated Workforce Level) × WCWF + (Current Workforce) × (1 – WCWF)   (1)

The WCWF is a weighting factor that assumes values between 0 and 1, inclusive. WCWF is itself composed of two components—namely, WCWF_1 and WCWF_2 (the two parts depicted in Figure 8).3 To understand how it works, assume for the moment that the WCWF is only composed of, and is therefore equal to, WCWF_1. In the early stages of the project, when “time remaining” is generally much larger than the sum of the “hiring delay” and the “average assimilation delay” (which at NASA were 30 and 20 working days, respectively), WCWF_1 would be equal to 1. When WCWF = 1, the “workforce level needed” in equation (1) would simply be equal to the “indicated workforce level”—that is, management would be adjusting its workforce size to the level it feels is needed to finish on schedule.

Late in the project, when the “time remaining” drops below some threshold—0.4 times the “time parameter,” or 20 days in this case—the particular policy curve of Figure 8a suggests that no more additions would be made to the project’s workforce. At that stage, WCWF_1 equals exactly 0. The “workforce level needed” would thus be equal to the “current workforce”—that is, management maintains the project’s workforce at its current level. Schedule slippages at this late stage would, thus, be handled by adjusting the schedule completion date, and not through adjustments to the workforce level.

As seen in Figure 8a, the transition from “hiring whatever is needed” to “freezing all hiring” is not abrupt (binary). In the middle of the project—when “time remaining” is between 0.4 and 1.5 times the sum of hiring and assimilation delays—the WCWF_1 variable assumes values between 0 and 1. This represents situations where management responds to schedule slippages by partially increasing the workforce level and partially extending the current schedule to a new date.

Figure 8: Willingness to change the workforce policy curves. (Panel a: WCWF_1 plotted against time remaining divided by the time parameter, on a scale of 0 to 1.5; panel b: WCWF_2 plotted against the ratio of scheduled completion date to maximum tolerable completion date, on a scale of 0.7 to 1.0. The nominal value of the “time parameter” was 50 days—the sum of the hiring and assimilation delays.)

3 Notice that the time axis in Figure 8 is a normalized measure of time—time remaining as a multiplier of the sum of hiring + assimilation delays.

As mentioned, WCWF_1 is only one of two components of WCWF. To understand the rationale behind the WCWF_2 formulation, we need to understand one important aspect of the NASA software development environment—namely, that serious schedule slippages could not be tolerated. That’s primarily because of the ironclad satellite launch windows they had to contend with. A satellite’s launch window constituted a “maximum tolerable completion date” (that’s how they referred to it) that could not be breached. Managers typically started with that “maximum tolerable completion date” as an anchor, and using their estimate of the project’s duration—with some safety factor mixed in for good measure—would work backwards in time to derive a start date for the project. For example, if the estimated project duration is 10 months, and a 20% safety factor is used, the project would be started 12 months before the “maximum tolerable completion date” (at the latest). If such a project starts to fall behind schedule, management’s reaction will depend on how close they are to breaching the “maximum tolerable completion date.” As long as the “scheduled completion date” is comfortably below the “maximum tolerable completion date,” decisions to adjust the schedule, add more people, or do a combination of both are based on the balancing of scheduling and workforce stability considerations as captured by WCWF_1. However, if the “scheduled completion date” starts approaching the “maximum tolerable completion date,” pressures develop that override the workforce stability considerations. That is, management becomes increasingly willing to pay any price necessary to avoid overshooting the “maximum tolerable completion date.” And this often translated into a management that was increasingly willing to add new people (plucked from CSC) to the project.4

4 Tight time commitments are, of course, not unique to NASA. Many other organizations we studied that were involved in developing embedded software systems (e.g., MITRE) experienced similar pressures. When developing embedded software systems (e.g., a new weapon system), serious schedule slippages are not tolerated because the software is often on the critical path of the larger system development effort, and hence a schedule slippage can magnify into a very costly overrun.

The development of such overriding pressures is captured through the following formulation of the WCWF:

WCWF = MAXIMUM (WCWF_1, WCWF_2)   (2)

As long as the “scheduled completion date” is comfortably below the “maximum tolerable completion date,” the value of WCWF_2 would be zero (see Figure 8b)—that is, it would have no bearing on the determination of WCWF, and consequently on the hiring decisions. When the “scheduled completion date” starts approaching the “maximum tolerable completion date,” the value of WCWF_2 starts to gradually rise. Because such a situation typically develops toward the end of the project, it would be at a point where the value of WCWF_1 is close to zero and decreasing. If the value of WCWF_2 does surpass that of WCWF_1, the “willingness to change the workforce” will be dominated by WCWF_2 and, thus, by the pressures not to overshoot the “maximum tolerable completion date.”
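The heuristic can be written out compactly. The sketch below implements equations (1) and (2); the piecewise-linear ramps are an assumption standing in for the policy curves of Figure 8 (whose exact shapes are not given in the text), while the 0.4/1.5 breakpoints, the 0.7–1.0 range, and the 50-day time parameter come from the article.

```python
# A sketch of the WCWF staffing heuristic (equations 1 and 2). Linear ramps are an
# assumption standing in for the exact curve shapes of Figure 8.

def ramp(x, x0, x1):
    """Linear ramp: 0 at or below x0, 1 at or above x1."""
    if x <= x0:
        return 0.0
    if x >= x1:
        return 1.0
    return (x - x0) / (x1 - x0)

def wcwf(time_remaining, scheduled_completion, max_tolerable_completion,
         time_parameter=50):
    wcwf_1 = ramp(time_remaining / time_parameter, 0.4, 1.5)                  # Figure 8a
    wcwf_2 = ramp(scheduled_completion / max_tolerable_completion, 0.7, 1.0)  # Figure 8b
    return max(wcwf_1, wcwf_2)                                                # equation (2)

def workforce_level_needed(effort_remaining, time_remaining, current_workforce,
                           scheduled_completion, max_tolerable_completion):
    indicated = effort_remaining / time_remaining        # person-days / days remaining
    w = wcwf(time_remaining, scheduled_completion, max_tolerable_completion)
    return indicated * w + current_workforce * (1 - w)   # equation (1)

# Early in the project (ample time, schedule far from the launch window):
print(workforce_level_needed(2000, 250, 6, 320, 500))    # -> 8.0, the indicated level
# Very late (WCWF_1 = 0) with the schedule still clear of the launch window:
print(workforce_level_needed(120, 15, 10, 320, 500))     # -> 10.0, hold the current staff
```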

This WCWF heuristic is in essence how NASA’s management intuitively juggled the simultaneous effects of the three interacting loops of Figure 4. It is clever and it is compact . . . but is it optimal? (Hint: No manager—even a mathematician at heart—can be expected to accurately and reliably optimize that on the basis of bare intuition [Forrester, 1979; Sterman, 2000].)

Among the important virtues of simulation-type models is the capacity to conduct perfectly controlled experimentation where the effect of changing one factor (e.g., staffing/WCWF policy) can be observed while all other factors are held unchanged. In real life, by contrast, many variables change simultaneously, confounding the interpretation of managerial actions/decisions. Using our microworld, we conducted a series of controlled experiments to assess the schedule and cost consequences of a wide range of WCWF policies (while holding other project parameters constant).

Assessing first what’s optimal for WCWF_2 turned out to be relatively straightforward: NASA managers needed to scrap it altogether. Re-simulations of the project demonstrated that the policy of unbridled late hiring (to desperately avoid overshooting the “maximum tolerable completion date”) is not cost-effective—even with NASA’s relatively compressed hiring and assimilation delays. This can be seen in Figure 9, where the project’s base case performance (with WCWF_2 intact) is compared to a re-simulation in which WCWF_2 is eliminated—that is, where WCWF = WCWF_1. In the base case, as WCWF_2 kicks in late in the life cycle, the staff level rises sharply. But this hire-until-we-drop mentality, our results clearly indicate, buys them very little savings. Relative to the no-WCWF_2 case, the project saves only a few days in total duration (less than 1%), while the project’s cost (in person-days) increases by a whopping 11%.5

5 These results suggest that late in the life cycle, the project’s “vital statistics” fall in between scenarios 1 and 2 of Figure 5, with a net cumulative contribution of approximately zero.

Given these results, we dropped WCWF_2 in our subsequent analyses and reformulated the WCWF to be solely a function of WCWF_1. Besides the obvious simplification, the reformulation offers an added bonus: it extends the generalizability of the results to the larger universe of organizations where time constraints are not as stringent as those at NASA (i.e., where they do not have to contend with a “maximum tolerable completion date”).

To assess what’s optimal for WCWF_1, we had the option of assessing its shape (how steep or flat) or its “time parameter”—which regulates where WCWF_1 is laterally positioned on the time axis—or both. In this article, I discuss how we optimized the latter—the WCWF’s time parameter, which was the issue of most practical concern to the NASA managers. Determining where WCWF_1 should optimally sit along the time axis is key to determining when the policy shifts between its three regimes: hiring whomever is needed to maintain the schedule (early in the life cycle, when WCWF = 1); handling potential delays by partially increasing the workforce level and partially extending the schedule (when 0 < WCWF < 1); and freezing all hiring (when WCWF = 0).

Shifting the WCWF curve—and, hence, these transition points—to the right or left along the x-axis is easily accomplished by simply recalibrating the value of the “time parameter” (see Figure 8a). For example, lowering the time parameter from its base case value of 50 days shifts the WCWF to the left and would mean that hiring continues later into the life cycle. Conversely, increasing this time parameter to, say, 100 working days would mean that the freeze on hiring occurs much earlier in the project (at 0.4 × 100 = 40 days from completion, instead of the current 20 days).

We have simulated the project using different time parameter values and, in Figure 10, plot the consequences on the project’s schedule and cost.
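The experiment behind Figure 10 is, in essence, a parameter sweep over the microworld. The harness below shows the pattern; simulate_project() is a stand-in for the full system dynamics model (which is not reproduced here), and the numbers inside it are arbitrary placeholders, not NASA results.

```python
# Sweep the WCWF_1 "time parameter" and record schedule and cost, as in Figure 10.
# simulate_project() is a placeholder for the system dynamics microworld; the toy
# relationships inside it are arbitrary and only illustrate the shape of the experiment.

def simulate_project(time_parameter):
    """Stand-in for the full model: returns (duration in days, cost in person-days)."""
    duration = 400 + max(0, time_parameter - 50) * 0.5   # earlier hiring freezes stretch the schedule
    cost = 2000 + max(0, 50 - time_parameter) * 10       # very late hiring inflates cost
    return duration, cost

for tp in (20, 35, 50, 75, 100):
    duration, cost = simulate_project(tp)
    print(f"time parameter {tp:>3} days -> {duration:.0f} days, {cost:.0f} person-days")
```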

The results indicate that—in the NASA environment—the net cumulative contribution of new hires remains positive as long as the “time parameter” remains ≥ 35. While this means that late hiring (up until approximately three calendar weeks from completion) does not cause delays, notice that it can induce a sharp rise in project cost. Our results suggest that the more prudent strategy would be to keep the time parameter value at ≥ 50 days. At that level, late staff additions would save time without excessively increasing cost. A “time parameter” of 50 means that NASA’s management could continue to add (a few) people up until the point where the time remaining to complete the project is equal to 0.4 × 50 = 20 working days—that is, approximately one calendar month from the project’s completion.6 By contrast, in our experimental studies (Figure 1), the participants typically froze their hiring a lot earlier than that (at approximately 100 days, or 5 months before completion).

The resulting optimal WCWF policy is plotted in Figure 11 together with NASA’s intuitive policy and the binary strategy commonly observed in our experiments.

The significance of this result, it is important to emphasize, lies not in its particular value—the specific number of months at which to stop hiring—since this cannot be generalized beyond the NASA project, but rather in the process of deriving it: using microworld-type models for controlled experimentation. Such models, it is encouraging to note, can be easily customized to fit different software development environments and so derive environment-specific optimality conditions.

Figure 9: Simulation with and without WCWF_2. [Plot of the Full-Time-Equivalent Workforce over the project's life (0 to 400 days) for the base case with WCWF_2 and for the run without WCWF_2. Inset table: with WCWF_2, project duration is 394 days at a cost of 2,310 person-days; without WCWF_2, 396 days at 2,050 person-days.]

Figure 10: Impacts of shifting WCWF_1 along the time axis by changing the "time parameter." [Project completion time (days) and project cost (person-days) plotted against time parameter values from 0 to 100.]

6 As shown in Figure 8a, WCWF_1 = 0 at 0.4 × time parameter.


Concluding Remarks

We draw three key insights from the results here. First, tapping into an organization's "mental database" can be an invaluable source of organization-specific knowledge and wisdom. In this study, it was key not only to understanding the history of the organization's staffing decisions but, more importantly, to gaining insight into why project managers acted as they did, the rationale governing their decisions, and what information was or was not available at various decision-making points. Indeed, as Forrester (1987) argues, an organization's behavior cannot be adequately understood without understanding its mental database:

Human affairs are conducted primarily from the mental database. Anyone who doubts the dominance of [the mental database] should imagine what would happen to an industrial society if it were deprived of all knowledge in people's heads and if action could be guided only by written policies and numerical information. There is no written description adequate for building an automobile, or managing a family, or governing a country. . . . If an organization could not function without its mental database, then I believe its behavior cannot be understood except through that mental database. (Forrester, 1987)

Second, when it comes to managing complex systems, mental models—even if "perfect"—are not enough. A key lesson that I hope project managers will take away from this article is that we should not—cannot—rely on intuition alone in managing our projects. With its many interrelated feedback processes (some counteracting, some reinforcing), project management is simply too dynamically complex to manage effectively by human intuition alone. The long time delays and the many nonlinear interactions mean that interventions can have a multitude of consequences, some immediate and others distant in time and space.

Third, I sought to demonstrate the feasibility and utility of combining the strengths of the manager with the strengths of computer modeling. This was done not only to provide us with reliable and efficient tools to do the necessary bookkeeping, but also to create customized solutions that fit the unique characteristics of our organizations. The traditional reliance on simplistic, one-size-fits-all models of project management is truly a legacy of times when we were computationally poor. It is a bankrupt strategy that we need to abandon. ■

References

Abdel-Hamid, T. K., & Madnick, S. E. (1991). Software project dynamics: An integrated approach. Englewood Cliffs, NJ: Prentice-Hall.

Brehmer, B. (1990). Strategies in real-time, dynamic decision making. In R. Hogarth (Ed.), Insights in decision making: A tribute to Hillel J. Einhorn (pp. 262–279). Chicago: University of Chicago Press.

Breyer, S. (1993). Breaking the vicious circle: Toward effective risk regulation. Cambridge, MA: Harvard University Press.

Brooks, F. P., Jr. (1975). The mythical man-month. Reading, MA: Addison-Wesley.

Brooks, F. P., Jr. (1987). No silver bullet: Essence and accidents of software engineering. Computer, 20(4), 10–19.

Chapman, J., & Ferfolja, T. (2001). Fatal flaws: The acquisition of imperfect mental models and their use in hazardous situations. Journal of Intellectual Capital, 2, 398–409.

Conant, R., & Ashby, W. (1970). Every good regulator of a system must be a model of the system. International Journal of Systems Science, 1, 89–97.

DeMarco, T. (1982). Controlling software projects. New York: Yourdon Press.

Forrester, J. W. (1964). Common foundations underlying engineering and management. IEEE Spectrum, 1(9), 66–77.

Forrester, J. W. (1979). System dynamics: Future opportunities (Working paper number D-3108-1). Cambridge, MA: The System Dynamics Group, Sloan School of Management, Massachusetts Institute of Technology.

Forrester, J. W. (1987). Nonlinearity in high-order models of social systems. European Journal of Operational Research, 30, 104–109.

Gonzalez, C., Vanyukov, P., & Martin, M. (2005). The use of microworlds to study dynamic decision making. Computers in Human Behavior, 21, 273–286.

Hunt, E. (1989). Cognitive science: Definition, status, and questions. Annual Review of Psychology, 40, 603–629.

Kleinmuntz, D., & Thomas, J. (1987). The value of action and inference in dynamic decision making. Organizational Behavior and Human Decision Processes, 39, 341–364.

Figure 11: Optimal willingness to change the workforce policy. [WCWF plotted against Time Remaining/(Hire + Assim Delays), 0 to 3, comparing the binary policy observed in the experiments (conventional wisdom), NASA's intuitive policy, and the optimal policy.]


Levitt, S. D., & Dubner, S. J. (2005). Freakonomics: A rogue economist explores the hidden side of everything. New York: William Morrow.

Meadows, D. (1999). Leverage points: Places to intervene in a system. Hartland, VT: The Sustainability Institute.

Norman, D. A. (1988). The design of everyday things. New York: Doubleday Currency.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The use of multiple strategies in judgment and choice. In N. J. Castellan, Jr. (Ed.), Individual and group decision making (pp. 19–39). Philadelphia, PA: Lawrence Erlbaum.

Payne, J. W., Johnson, E. J., Bettman, J. R., & Coupey, E. (1990). Understanding contingent choice: A computer simulation approach. IEEE Transactions on Systems, Man, and Cybernetics, 20, 296–309.

Peterson, C., & Stunkard, A. J. (1989). Personal control and health promotion. Social Science & Medicine, 28, 819–828.

Richardson, G. P., & Pugh, G. L. (1981). Introduction to system dynamics modeling with DYNAMO. Cambridge, MA: The MIT Press.

Russo, J. E., & Shoemaker, P. J. H. (1989). Decision traps: The ten barriers to decision-making and how to overcome them. New York: Fireside.

Sengupta, K., Abdel-Hamid, T. K., & Van Wassenhove, L. N. (2008). The experience trap. Harvard Business Review, 86(2), 94–101.

Sterman, J. D. (1992, October). Teaching takes off: Flight simulators for management education. OR/MS Today, pp. 40–44.

Sterman, J. D. (1994). Learning in and about complex systems. System Dynamics Review, 10, 291–330.

Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. Boston: Irwin McGraw-Hill.

Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.

Wood, D. J., & Petriglieri, G. (2005). Transcending polarization: Beyond binary thinking. Transactional Analysis Journal, 35(1), 31–39.

Tarek K. Abdel-Hamid has been a professor of information sciences and system dynamics at the Naval Postgraduate School since 1986. He received his PhD in management information systems and system dynamics from MIT and his master's in engineering economic systems from Stanford. Prior to joining NPS, he spent 2½ years at the Stanford Research Institute as a senior IT consultant. He is the coauthor of Software Project Dynamics: An Integrated Approach (Prentice-Hall, 1991), for which he was awarded the 1994 Jay Wright Forrester Award. In addition, he has authored or coauthored more than 50 papers on software project management and other applications of system dynamics.