Survival of the Fastest: Improving Service Velocity



Companies must improve their technology and organization if they are to contend with the domination of time in the Information Age. The author suggests improvements on both fronts, focusing on the goal of getting products to market quickly.

NEIL C. OLSEN, EBS

Software is increasingly used to develop Information Age products. Major features of such products are quick customization and delivery. The first company to get a product on the market can charge a premium price and capture a larger market share. Being first also brings greater opportunities for future sales and more money from support and replacement activities. Products that rapidly implement customized demands will be preferred over fixed or embedded products.

The problem is that although software is quick to change, it is slow to develop. It is a truism that "software is always late." This is not only the rueful admission of poor planning, bad process, or poor engineering discipline, but a profound statement of market dynamics in the Information Age: products should be released to market the second they are conceived.

Service velocity - the rate at which software can be deployed to market or customized - has a more significant influence on project decisions than quality, predictability, risk, cost, or productivity. Even in noncommercial software, where profit considerations may not dominate, the rate at which work can be serviced affects all other measurable elements more profoundly than any other factor.

In practice, software engineers are often given a fixed deadline and expected to develop a schedule that meets that goal. This fixation on time is not an aberration or the result of misguided management, but the foremost customer requirement and the primary force behind profit. As such, time dominates all factors of the software-engineering process.

I have been a software engineer for more than 20 years, and I cannot overemphasize the dominance of schedule, market windows, and release dates on my work process, my technical designs, and my life. This is generally true of my colleagues as well. As an engineer, I also feel the need to codify my observations and subjective experience with mathematical models so that I can understand my predicament, not merely testify to it.

The literature of software engineering rarely focuses on time as the dominant factor in software projects. In this article, I make a case for improving the rate of development and deployment of software applications from the perspective of a business sponsor. My goals are to increase profit, beat the competition, establish a reputation, build market share, and increase shareholder value. My challenge is to select the best software technologies, staff, information-engineering processes, and organizational structures to achieve rapid deployment and satisfy customer demands. I prefer the term information engineering over software engineering because in actual practice much of the work we do involves processing information (documentation, inspection, filling out forms, and so on) rather than programming code - and also because information engineering includes the efforts of support, publication, training, and management.

VALUE-ADDED FRAMEWORK

You can view the information market as a value-added framework, as Figure 1 shows. To provide service to the user you must integrate hardware and operating systems with domain-specific middleware, applications, and services for the customer. You should consider each part of this value-adding "food chain" when you explore techniques for optimizing time-to-market.

Figure 1. The information market can be described as a value-added market framework that integrates application, middleware, operating system, and hardware to provide service to the customer.

The key to meeting customer needs is the application. The system-integration effort pulls hardware, operating system, middleware, and applications together to support the vendor in introducing the product, making the sale, and offering follow-up support. The service level covers support, training, documentation, billing, release control, sales, and marketing that directly support the customer, and is in turn supported by applications to ensure the rapid introduction of such services.

Information-engineering model. When analyzing software activities, Frederick Brooks1 (following Aristotle) divided software difficulties into accidents and essences - that is, he distinguished between induced and inherent difficulties. However, software does not process difficulties to avoid problems: it processes changes to deliver products and services. A change includes anything from initial requirements (a change from zero), to defects, enhancements, ports, process modifications, and training recommendations. A change is anything that creates work for the information engineer. Thus, for purposes of measurement and modeling, it is more useful to separate work demand from work service, and look at ways of optimizing the process.

Figure 2 proposes a model for information-engineering activities as a dynamically overloaded queue with feedback.2 In the model, work demand enters at a varying rate of λ changes per month. The information-engineering-server process (staff, tools, and organizations) processes the changes and generates deliverables at a rate of μ changes per month. Demand λ is a weighted summation function of change factors, including initial product complexity, training, number of internally generated defects, and new requirements. The service rate or throughput μ is a weighted product of staff skill, domain experience, reuse degree, language level, tools, and office facilities. Such factors have been studied by Capers Jones,3 Barry Boehm,4 and others, with various weights given to different factors.

As the model shows, information engineering is a process that takes in a dynamic demand of work (identified by changes), and produces deliverables, with more changes coming out of the information-engineering-server process. In many cases, the process servicing the changes generates more work (such as defects and newly discovered requirements) than the initial demand. The sudden burst of requirements-generated work at the start of the process results in a heavily overloaded queue. Because the model is dynamic, it is more accurate to think of work volume (the work over time) rather than size (initial task-effort based on requirements estimation).
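To make the queue model concrete, here is a minimal month-by-month sketch of a dynamically overloaded queue with feedback. Every number and the fixed feedback fraction are illustrative assumptions of mine, not parameters from the article.

```python
# Minimal sketch of the dynamically overloaded queue with feedback.
# All parameter values are illustrative assumptions, not data from the article.

def months_to_release(initial_changes=400.0,  # initial requirements, in changes
                      service_rate=60.0,      # changes the server process can close per month
                      feedback=0.30,          # fraction of serviced work that spawns new changes
                      max_months=120):
    """Return the month in which the change queue is finally drained."""
    backlog = initial_changes
    for month in range(1, max_months + 1):
        serviced = min(backlog, service_rate)        # capacity limits throughput
        backlog += feedback * serviced - serviced    # defects and new requirements feed back
        if backlog < 1.0:
            return month
    return None                                      # still overloaded after max_months

print(months_to_release(feedback=0.0))   # ~7 months if servicing created no new work
print(months_to_release(feedback=0.3))   # ~12 months with moderate feedback
print(months_to_release(feedback=0.6))   # ~23 months when most output feeds back as new work
```

Even with a constant service rate, the effective release date is dominated by how much new work the process itself generates, which is the point the Figure 2 model makes.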

It is ironic that feedback-queuing models have been used to analyze scheduling problems in operating systems. The next time you engineer software, think of yourself as a process in a heavily overloaded system, subject to thrashing, queue congestion, and context-switching overheads from excessive interrupts. Perhaps we have become what we program.

WHY IS TIME-TO-MARKET IMPORTANT?

Software production may be the first business activity of the Information Age. The factors driving profit, revenue, and costs are quite different from those of the Industrial Age. For example, the replication cost of word-processing software (copying and mailing a diskette and manual) is insignificant compared to the replication of a car on an assembly line.

To illustrate, let's focus on a software project from start time to the time the first release is discontinued. Figure 3 shows a rough approximation of a business cycle. The important dates are: start (t0), the time the first release is deployed to market (tr1), the time the first competing product is deployed to market (tc), and the time the first release is replaced by the second (tr2). The product price on the market is a perceived value p(t), the number of units sold is n(t), the development staff required is s(t), and the sales/support staff is q(t). The profit from a product's first release can be estimated by the simplified equation,

profit = (unit price - unit cost) * number sold - start-up cost - development cost

Competition can be studied by looking at the first competitor to market. At release time (tr1), the product can command a premium price P1 with margin M1. When a competitor enters the marketplace (tc), the price drops to P2 with margin M2.

Figure 3. A business cycle for the first release includes start time (t0), time deployed to market (tr1), time a competitor's product is deployed (tc), and time the first release is replaced by the second release (tr2).


Also, after acquiring a market share, sales level off as the competitor's product takes away potential sales. In a competitive market, prices are set by perceived value and market forces and are not under a business sponsor's control. Nor can I, as a business sponsor, control a competitor's release date (tc). What is under my control are the start time (t0) and release times (tr1 and tr2).

To study the effect of time on profit, assume a linear model, with development staff going from S people to zero after release 1, the cost of burdened staff (staff plus overhead, including office space, benefits, and so on) at R, the number of units sold at N, and a start-up cost of C0. The result is Equation 1:

I can also control the margins and start-up costs to some degree, strive to increase margins and number sold, and reduce start-up costs, but clearly profit is highest when the release times tr1 and tr2 are lowest. Thus the first observation is:

Overall profit is highest when time-to-market is lowest.
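As a rough numerical illustration of the linear model sketched above: the functional form and every number below are my own assumptions for the sketch, not the article's Equation 1.

```python
# Illustrative sketch of the linear profit model: earlier release means more months
# at the premium margin and less development spending. All values are assumptions.

def release_profit(t_r1,                    # release month of the first release
                   t_0=0, t_c=18, t_r2=30,  # start, competitor entry, second release (months)
                   margin_premium=400.0,    # per-unit margin before a competitor appears
                   margin_contested=150.0,  # per-unit margin after the competitor appears
                   units_per_month=500,     # sales rate while release 1 is on the market
                   staff=10, burdened_cost=12_000.0,  # developers and monthly cost each
                   startup_cost=50_000.0):
    premium_months = max(t_c - t_r1, 0)               # months sold at the premium margin
    contested_months = max(t_r2 - max(t_c, t_r1), 0)  # months sold against the competitor
    revenue = units_per_month * (margin_premium * premium_months
                                 + margin_contested * contested_months)
    development = staff * burdened_cost * (t_r1 - t_0)  # burdened-staff cost until release
    return revenue - development - startup_cost

for month in (6, 12, 18):
    print(month, release_profit(month))   # later release: fewer premium months, higher cost
```

Under these assumed numbers, a release that slips from month 6 to month 18 turns a profitable product into a loss, which is the sense in which time dominates the other cost factors.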

Although this may seem obvious even without the math, it is remarkable how quickly it disappears from our consciousness. Not only does engineering literature rarely address time-to-market as the central goal, but management often embraces the short-sighted view that the most important goal is to control software labor and capital budgets, without considering the effect on time-to-market. This is often because information engineers cannot persuasively show how cost factors affect time-to-market; they typically have few tools and case histories to back up their recommendations - whereas budget costs are all too clear.

Because development cost is a product of burdened-staffing cost and the relative release time, the business sponsor must ask, Will increasing staff minimize release time? If we do increase staff, will staffing costs grow faster or slower than profit as a result of the reduced time-to-market?

Table 1. Costs of improvements relative to labor cost, normalized to the midrange (Low / Mid / High). For example, capital as a percentage of labor cost: 0.05 / 0.10 / 0.20.

Staffing considerations. The controllable factors of the business model indicate that project planners have some tough choices. Should they invest in using new technologies for an ultimate increase in productivity? Should engineering hire numerous expensive programmers, few programmers, or low-cost programmers? Which of these will lead to a lower time-to-market?

Capers Jones, in an analysis of staffing experience, structured methods, CASE tools, and language levels, showed that productivity for programming and documentation varies from a "worst-case" average of 2.5 function points per staff-month for a poorly optimized software process to a high of 40 (a 1:16 ratio).3 Boehm provided a similar productivity range of about 1:19 for staff experience and capability, tools, language experience, and modern methods, and noted that other researchers have presented similar results.4 Such studies show that a low-end productivity is .5 of the midrange, and a high-end is 8 times the midrange - a .5:1:8 productivity ratio.

When you are developing a budget for a project, you must ponder the cost-benefit trade-off of such productivity improvements. The cost depends somewhat on time and place. Based on my experience in the US over the last 10 years, including five years estimating budgets for programming environments and staff, I have tried to come up with the costs of implementing improvements relative to labor costs. (These results may not hold true in other countries.) I chose costs and productivity for three ranges: low, mid, and high, and then normalized the data as a ratio to the midrange, as Table 1 shows.

+ For staff wages, there was a .5:1 cost ratio between inexperienced programmers (just out of college) and average programmers and a 1.5:1 ratio between domain experts and average programmers. This results in a .5:1:1.5 cost ratio.

+ Spending on capital (such as tools and workstations) averaged out at 10 percent of labor costs, varying from a low of 5 percent per year for a personal computer to a high of 20 percent for a tool-rich, networked workstation. This results in a .05:.1:.2 cost ratio.

+ Training varied from a low of none for many companies to a high of 10 days per year mandated by a few large corporations. The midrange was five days. If training costs and travel are included, this results in a 0:.06:.08 cost ratio.

+ Office facilities cost as little as 6 percent of burdened-staffing cost for developers jammed into less than 32 square feet of inexpensive space, or as much as 48 percent with 90 square feet per programmer in expensive offices.

If you add these costs and normalize them to the midrange, staffing costs based on salary, tools, and training vary from .46 to 1 to 1.7. Figure 4 maps the cost ratio against the productivity ratio.

Figure 4. The effect of spending on labor and capital is most dramatic at the high end of the scale, where hiring experienced engineers and providing up-front training produces exponential growth in productivity.
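The arithmetic behind Figure 4 can be written out directly; the normalized ratios below are the ones quoted in the text, and the script merely divides one by the other.

```python
# Normalized ranges quoted above (Low / Mid / High), midrange = 1.0.
cost = {"low": 0.46, "mid": 1.0, "high": 1.7}          # salary + capital + training
productivity = {"low": 0.5, "mid": 1.0, "high": 8.0}   # function points per staff-month

for level in ("low", "mid", "high"):
    per_unit_cost = productivity[level] / cost[level]  # output per unit of spending
    print(f"{level:>4}: cost {cost[level]:.2f}, productivity {productivity[level]:.1f}, "
          f"productivity per cost unit {per_unit_cost:.2f}")

# The high end costs about 1.7 times the midrange but returns roughly 4.7 times
# the output per unit of spending, which is the "exponential" high-end effect.
```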

My analysis does not guarantee a cost-productivity relationship. Without good recruitment and interviewing skills, for example, you could pay a lot for a poor engineer, or your unmotivated staff may sleep through the most costly training. However, the marketplace tends to swiftly correct such management mistakes. Practitioner experience, as well as my analysis, supports the second observation:

Spending for labor and capital increases linearly, while productivity grows exponentially at the high end of staff capability.

My analysis illustrates one phenomenon of Information Age business: information engineers are a resource that appreciates with training, and appreciates faster than the investment cost. This means you get more bang for the buck by hiring a top-notch programming staff or spending more time on up-front training. To look for lower-cost programmers at the expense of time-to-market may decrease your expenses linearly, but decrease your overall profit exponentially. This situation can be called "the bean-counter effect" and results when activity-based - rather than time-based - accounting practices are followed.

My recommendation is to look at your software budget in light of the overall profit model and minimize time-to-market first - even at the expense of a higher burdened-staff cost and the increased initial cost and risk of using new technologies. The next question is: If time is more important than money, shouldn't you just hire a larger staff of any capability to reduce time-to-market?

Determining release time. I have used the dynamic-queuing model to analyze the effect of various controllable factors on software-engineering time and effort.2 The model acknowledges that information engineering requires the analysis of complex factors (such as size, change, complexity, effort, and time) and has nonlinear outcomes. The model also presents a straightforward way to handle the problems of change overload in the information-engineering process.

In this model, a program is released at time tr when the total software demand λ of incoming changes per month is handled by the service rate μ. Let F0 be the initial functionality desired (measured in function points). This total demand size must be handled by the server process. However, the process of serving the demand generates more demand at the λ rate; that is, the server must handle extra work induced by the processes, which includes initial tool evaluation, training, and prototyping, as well as the continuous handling of defects and new requirements discovered after the project is launched. Thus, time-to-release occurs when the total volume of demand is serviced:
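Written out with the symbols already defined (my reconstruction of the balance the text describes, not necessarily the article's original equation), release occurs at the first time tr at which accumulated service equals accumulated demand:

$$F_0 + \int_0^{t_r} \lambda(t)\,dt \;=\; \int_0^{t_r} \mu(t)\,dt$$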

To make a business case for your project before it starts (as opposed to trying to control a product during its life cycle), you should look at three elements: fixed initial-demand size, variable-demand volume induced by process execution, and service throughput relating to staffing and process factors. I will thus recast the above dynamic-queuing equations in terms of these management-oriented parameters.

Fixed initial-demand size, D0, includes the initial requirement size measured in function points for the product, F0, as well as the work required to train staff in new languages, methods, reusable components, and tools, and the work required to integrate the system or prototype new technologies. Experienced managers plan up-front training and prototyping for the domain as they define the product requirements. Initially, this adds an additional load to the work queue and increases start-up costs, but improves productivity in the long run.

Variable feedback-process work, as a function of time t, is W(t), and includes the work generated by requirements growth, staffing overheads (such as reorganization and paperwork), and defects discovered in the repair process.

Service throughput relates to the number of staff as a function of time, N(t), and average productivity rate R, weighted by productivity factors P. Productivity factors include staff skill, domain expertise, reuse degree, language level, development-environment efficiency, and office facilities.

Assume that the variable-demand factors can be represented by an average work-demand volume W̄. Assume also that the number of staff, represented by N, is fixed for the interval between start and release, and drops dramatically (to 0) afterward. With these simplifying assumptions, the queuing equation becomes:

$$D_0 + \overline{W}\, t_r = N R P\, t_r$$

Solving for time-to-release, tr, you get Equation 2:

$$t_r = \frac{D_0}{N R P - \overline{W}}$$

This equation results in a third observation:

Relative time-to-release is not simply dependent on the number of staff and initial demand, but on productivity factors and the volume of process-generated change.

Both productivity and process-generated change volume may be adversely affected by adding more staff. Given that productivity ratios are exponential while staffing number is linear, time-to-market improvements must consider more factors than just number of staff or burdened-staff cost.
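A small sketch of Equation 2 shows why simply adding staff can backfire; the specific way productivity P erodes and induced work W̄ grows with team size N is an illustrative assumption of mine, not a result from the article.

```python
# Sketch of Equation 2: t_r = D0 / (N*R*P - W_bar).
# How P and W_bar respond to team size N is an illustrative assumption.

def time_to_release(N, D0=1000.0, R=10.0):
    """Months to release for N staff, with assumed crowding and coordination effects."""
    P = max(1.3 - 0.06 * N, 0.3)        # productivity weight erodes as the team grows
    W_bar = 0.4 * N ** 2                # coordination overhead and rework grow with headcount
    throughput = N * R * P - W_bar      # net demand serviced per month
    if throughput <= 0:
        return float("inf")             # the queue grows faster than it drains
    return D0 / throughput

for staff in (4, 8, 12, 16):
    print(staff, round(time_to_release(staff), 1))   # a mid-sized team wins under these assumptions
```

With these made-up coefficients, eight people release sooner than four, but sixteen never drain the queue at all; the closed form makes the trade-off easy to explore before staffing up.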

THREE IMPROVEMENT STRATEGIES

Time-to-market is decreased by more staff (N) and higher productivity weights (P), and increased by larger initial fixed-demand factors (D0) and process-demand volume (W̄). These factors are somewhat controllable. Equations 1 and 2 suggest three ways to increase service velocity and thus improve profit:

+ start sooner,
+ reduce work volume, and
+ work more quickly.

Start sooner. If you start the project (t0) as soon as you perceive the product need, you increase the early profit interval and reduce the time-to-release (tr1). Starting sooner may sound obvious, but it is the most difficult strategy to apply because it requires an overhaul of the entire corporate ethos. This is beyond the capabilities of engineering management and sometimes those of business sponsors. Any project under time pressure should ask "Why didn't we know to start sooner?" and figure out how to do better next time. Although starting sooner may be the most important strategy, it receives the least attention in engineering literature because it is mainly addressed through organizational techniques.

Reduce work volume. To reduce work volume, you must address the initial demand size (D0) created by the initial task size, training, and prototyping, as well as the work created by process-discovered changes (W̄). You can reduce initial work size by finding ways to minimize the task scope or maximize the reusable components so you have less to develop. You can minimize process-induced changes by engineering the server process to prevent nonessential changes from entering the queue, by implementing defect-reduction strategies (such as inspections and reworking defect-prone modules), and by performing continuous risk analysis and requirements validation to make sure you are focused on real problems and requirements and are not "mirage engineering."

Work more quickly. You can work more quickly by working more effectively, using better tools, hiring more staff, improving staff capability, and working longer hours. It is not an accident that the average US programmer works 50 hours per week, but rather an economic necessity as predicted by the profit and queuing models. However, program managers who attempt to improve service velocity only through overtime will remain forever in the "invisible jail" of market economics, condemned to service continuously overloaded queues and still punished by missing fixed deadlines. Overtime is easy, it doesn't require thought, and if you go to 50- to 60-hour work weeks labor productivity can improve up to 50 percent. However, this improvement only lasts until staff burnout occurs. Remember, your competitors are also trying to solve scheduling problems with overtime. We need to look at other solutions.

When using these strategies, you must address both technical- and organizational-improvement techniques to improve service velocity. Although there are many techniques that can be applied within the context of these three strategies, I will present actual practices that have proven successful in case histories.

TECHNICAL-IMPROVEMENT TECHNIQUES

Among the most successful techniques for improving service velocity are:

+ higher levels of system integration,
+ higher level programming,
+ change-tolerant design,
+ better tools, and
+ reusable domain components.

To achieve technical results, you must improve your process and often make changes in your organization as a whole.

System integration. As the value-added framework in Figure 1 shows, to integrate your system you must select components (including hardware, operating systems, middleware, and applications), configure them into a complete system, and then test the system before handing it over to the customer. Higher levels of system integration reduce work volume by starting with reusable, pretested components that already implement a substantial percentage of the system capability. Figure 5 shows the effect of system integration level on time-to-market.

Figure 5. A higher level of system integration reduces work volume by starting with pretested components that already implement system capabilities.

Although system integration is an important part of any deployment - and is becoming a large business in itself - the technology behind it receives remarkably little attention outside the area of standards. To promote higher levels of system integration you should:

+ use open architectures and familiar components, and
+ use flexible (programmable) components.

To support successful system integration,

+ periodically assess your vendors,
+ use continuous risk analysis in planning,
+ outsource areas outside your core competency,
+ buy rather than build, and
+ integrate in small increments and build toward a final system.

Higher level programming. Using higher level programming improves service velocity by helping programmers work more quickly. Generic procedure- and application-oriented languages have been used since the 1950s. If one high-level statement is equivalent to 10 low-level statements, you can code 10 times faster, creating one-tenth as much code. The problem is that coding is only about 15 percent of development, and thus a high-level language has a relatively small effect on release time. Also, the higher the language level, the more constrained the solution space. The same semantically enforced paradigms that increase coding productivity may complicate program design, limit application capabilities, and extend the release time. Ironically, using a high-level language may improve coding time, but increase time-to-market.

For a good compromise between high-level power and lower level flexibility, you can use application-programming interfaces, which create an extensible high-level library of macros, functions, or objects in an appropriate lower level language. APIs also promote reuse and modularity, providing information-hiding interface specifications to the independent library modules.

However, APIs lack the semantic rigor of languages, and require more training, support, and documentation. It is often difficult for producers to supply in-house consumers with the marketing, sales, support, training, and documentation necessary for a successful API. Also, in-house producers are under strong pressure to modify API designs in an application-specific manner, which makes them less reusable. The best strategy is to either rigorously promote and reward a reuse philosophy or outsource APIs from commercial vendors.

Change-tolerant design. In classic papers from the 1970s, David Parnas presented his criteria for good design:5,6

+ confine or group likely sources of change to specific modules;
+ identify program-package subsets, particularly the minimal subset and minimal increments; and
+ promote information hiding and separation of concerns using encapsulation and modular and layered designs.

Designs following Parnas' heuristics tolerate change by quickly identifying the component to be changed, reacting quickly to that change, and testing the small subset the change affects.

To these criteria, I would add that good design requires that you

+ maximize dynamic binding.

In a general sense, dynamic binding (also called late binding) is the process of associating a name, declaration, design abstraction, or function to an implementation at execution time. By deferring these decisions until execution, dynamic binding lets you alter functionality without affecting other areas of the system. You also reduce work volume by avoiding the massive restructuring of software that occurs when changes ripple through a static or embedded architecture.

Dynamic binding lets you make many changes without affecting earlier design decisions, and this reduces your analysis, development, and testing efforts. Also, you can defer design decisions until you know more about the real problems, further reducing work volume. In some sense, dynamic binding is the information-engineering equivalent of just-in-time manufacturing.
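As a tiny concrete illustration of dynamic binding in the sense used here, the sketch below resolves the choice of billing behavior at execution time through a registry, so a new variant is a confined, local addition that never touches the calling code. The example and all of its names are mine, not the article's.

```python
# Minimal sketch of change tolerance through dynamic (late) binding.
# The billing-plan domain and all names are illustrative only.

BILLING_PLANS = {}                 # plan name -> implementation, resolved at run time

def register(name):
    """Bind an implementation to a name without callers knowing the concrete function."""
    def wrap(func):
        BILLING_PLANS[name] = func
        return func
    return wrap

@register("flat")
def flat_rate(minutes):
    return 20.0

@register("metered")
def metered(minutes):
    return 0.10 * minutes

def bill(plan_name, minutes):
    """Caller code stays unchanged when new plans are registered."""
    return BILLING_PLANS[plan_name](minutes)

# A promotional plan added later is a local change; bill() and its callers are
# untouched, and only the new function needs testing.
@register("promo")
def promo(minutes):
    return max(0.05 * minutes - 1.0, 0.0)

print(bill("metered", 120))   # 12.0
print(bill("promo", 120))     # 5.0
```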

Better tools. Tools are automated ways of handling product, process, and organization changes. You can work more quickly in development and testing environments with efficient and user-friendly tools. The traditional focus in this area has been on programming tools, but it should include the entire information-engineering process.

Service velocity is improved when you use interactive, graphical, windowed, networked, high-performance environments. In addition, better communication tools (such as e-mail) and integrated textual and graphical documentation editors should be shared seamlessly with nonengineering areas. The idea should be to promote virtual colocation within your enterprise - that is, to encourage workers to share information with others outside their immediate area. Using planning and estimating tools can help you prevent and control time-to-market risks and help you start sooner by predicting resource needs before you slip the schedule.

A warning: When approaching a new tool, use caution. There are few activities likely to bedazzle information engineers more than the search for the magic tool.

Reusable domain components. In general, reuse is effective whenever you achieve it, but the biggest gain comes when you use a set of value-added domain-specific technologies, processes, and organizations that support applications. A domain defines a common problem space whose solutions share design decisions. It often represents an industrial area, such as telecommunications, medicine, defense, process control, and so on. Domain reuse reduces the initial size of your application-development task (F0). It also reduces the process-created work volume (W̄) by using previously tested components, and, because your development team is smaller, you introduce fewer process changes into the work queue. Reuse also offers proactive functionality, with functions that may not seem to be needed now, but are in reserve should a change occur that requires it.

The goal of higher system integration can also be supported by reuse. You can do this by building the application on an additional layer, isolating the service layer from the operating-system and hardware layers. Because the reused components of the domain layer implement relatively static domain behavior, their use supports the design enforcement of Parnas' dictum to "export likely sources of change" by pushing variant behavior to the application. In effect, reusing domain-specific components forces you toward a more change-tolerant design.

Reuse is promoted using the following techniques, now accepted in the industry:

+ discover reusable components through domain analysis,
+ reduce the scope of the application task by maximizing domain-specific lower layers,
+ buy off-the-shelf, generic operating systems and domain-specific components from external producers, and
+ support development processes that promote reuse and outsourcing.

MIDDLEWARE

Middleware combines the above techniques into a toolkit for application development. Figure 6 shows the effect of middleware on application time-to-market. Good middleware includes

+ reusable binary components,
+ example or template source code,
+ domain-specific tools,
+ training,
+ documentation, and
+ support.

Figure 6. Middleware combines technical-improvement techniques into a toolkit supplied to the application developer.

Middleware toolkits have long been established for databases and user interfaces, and it is likely that they will achieve similar success in other problem domains such as telecommunications, defense, and process control. Indeed, given the economic advantages of rapid deployment, it is likely that fast application development will be the competitive norm in the future. Providing an application developer with a domain-specific head start is like being allowed to start a race three-quarters of the way to the finish line: all things being equal, you are bound to win.

ORGANIZATION-IMPROVEMENT TECHNIQUES

Technical improvement addresses only part of the problem of improving service velocity. Information engineering occurs in the larger context of business - an organization of people driven by mostly economic (and some social) incentives. We must reengineer business organizations and corporate processes as well as information engineering.

Organizational tactics to promote fast cycle time have been suggested, such as Christopher Meyer's recommendations for aligning purpose, strategy, and structure for speed.7 However, such recommendations present bulleted lists of good advice and approaches to organizational learning within the existing corporate structure, an entity that is notoriously resistant to change and reeducation.

Other improvement solutions promote fixed goals backed by incentives, such as Boehm's "management by objectives" using predicted "should-cost" targets.4 I find such objectives in information engineering to be too static; "should-cost" goals are fixed long before enough is known about the problem at hand, and, if followed, would lead to project errors. In my experience, the only objective that really mattered was deadline, and inventing other objectives only clouded the real issue of service velocity.

Gerald Weinberg looked at the management of successful and unsuccessful software projects and concluded that:8

In the unsuccessful systems, I found things that the managers couldn't keep under mental control. Most of these things had to do with the way people in a project behave, and these managers were trying to control by using overstructured models of human behavior.

The same conclusion is supported by my investigation of the dynamically overloaded queuing model. Overstructured processes create a lot of overhead work, which increases the process work volume, delays the project, and leads to missed deadlines.

I've observed that in many cases a system's software architecture recapitulates organization structure. If this is more than a coincidence, then the architecture should generate the organization - and more flexible software requires more flexible organizations. Organization-improvement techniques must radically and flexibly address the three fundamental strategies of starting sooner, reducing unnecessary work volume, and working more quickly. This requires going outside a single corporate entity.

Figure 7. In a virtual corporation, all members of the development team have access to the customer.

Virtual corporation. The virtual corporation is gaining acceptance in high-technology industries and is particularly suited to improving service velocity in the software industry.

As somewhat broadly described by William Davidow and Michael Malone, a virtual corporation is the "cost-effective instantaneous production of mass-customized goods and services."9 A more focused definition would be the dynamic precontractual organization of two or more companies in a flexible partnership to rapidly create and deploy customized products and services.

In a virtual corporation:

+ Partnerships form dynamically, and projects often start before formal legal contracts are signed and may terminate shortly after project release. This is an advantage compared to "strategic business alliances," which take a long time to form, may fail because they are imposed from management (rather than by customer demand), and may alienate segments of the customer base.

+ Each partner supplies a core competency, or area of expertise, toward the total solution (usually under the direction of a lead partner).

+ Members are directly accountable to each other, as if they were organized as groups within a single real corporation.

+ As Figure 7 shows, all members share the same customer, and members are not isolated from the customer as they are in traditional value-added models.

+ There is organizational diversity, so one member of the partnership can work around the overstructured bureaucracy of another. Management processes are flexible and adapted to particular job and partner combinations.

+ The customer often participates in the corporation. There is a shift from "the customer is always right" to "the customer is always your boss." This allows the virtual corporation to react quickly to customer needs and improves customer satisfaction.

The virtual corporation addresses many of the strategies for improving service velocity. You start sooner, without going through the delays inherent in a traditional hierarchy, without waiting for formal contracts to be completed, and without start-up costs. You work more quickly, as each partner does what it does best, thus improving productivity. Unnecessary work volume is reduced, because all parties communicate directly with the customer, and thus focus immediately on real requirements. Because multiple partners must accept each others' differences, there is less of a tendency to overstructure management, and, indeed, even an opportunity to work around your own embedded bureaucracy with the help of your partners.

CASE HISTORIES: WHAT WORKS?

Although other engineering professions teach neophytes using case histories, software engineering has only recently begun to focus on supplying examples of industry activity. The criteria established by Norman Fenton, Shari Lawrence Pfleeger, and Robert Glass for substantiating claims on software-engineering research10 may be good criteria for industry case histories as well as research claims. Case histories should also be subjected to similar questions (with some constraints relaxed because there is less control in commercial environments). In addition, I would add the requirement that both research and case-history claims should be compared with some baseline control or average to note the effect of the changed variable. My suggested questions are listed in Table 2.

Table 2. Suggested questions for research claims and industry case histories:
+ Is it based on empirical evaluation and data?
+ Is it based on a toy or real situation?
+ Was the experiment designed correctly?
+ Were the measurements used appropriate to the goals of the experiment?
+ Was the experiment run for a long enough time?
+ Was the measurement collected compared with control data?
+ Was the product intended for field release?

Actual case histories. The case histories in Table 3 show complex telecom applications developed at EBS using methods aimed at increasing service velocity. They are compared here with US averages;3 however, because telecom projects with strict reliability and real-time requirements usually take longer than average software projects, the gains for a typical project may be even more dramatic than those shown.

The telecom applications were built on Unix and AccessManager,11 a middleware product for the telecom domain. The middleware supports the technical solutions of higher levels of system integration; domain-specific API and value-added components; change-tolerant design using dynamic binding with layered, encapsulated designs; and built-in tools. The products were developed using virtual corporations, in which the middleware provider, system integrator, and application developer together delivered the system to the customer quickly.

Checking the case histories against the suggested questions, I found that in all cases:

+ Data was gathered by historical analysis after field release.

+ The goals of releasing the project by a deadline date dominated all other project goals, and the use of middleware and virtual corporations was the major feature that distinguished these projects from traditional projects.

+ Both product size in function points and time to first release were measured.

+ The products have undergone one or more major releases subsequent to field release.

+ The time-to-market improvement was significant in each case and should be of interest to any practitioner working on a short deadline.

+ The projects were measured against US averages for products of equivalent size.

Although improvement techniques have reduced release time by factors of 1:5 to 1:9 over average projects, you can only achieve this by using engineering techniques in the larger context of business and organizational restructuring.


Table 3. Case histories of telecom applications developed at EBS, compared with US averages for products of equivalent size.

Application | Function Points | Average Time-to-Release* | Actual Time-to-Release | Ratio | Release Experience
Customer-premise 800-number service platform | 120 | 27 months | 3 months | 9.0:1 | Successful release in US carrier network
Cellular voice-mail network call server | 260 | 29 months | 5 months | 5.8:1 | Successful release in Japanese cellular network
Public network gateway for private branch exchange | 500 | 31 months | 6 months | 5.2:1 | Successful field trial in Chinese network
Call/billing feature adjunct to cellular switch | 2090 | 36 months | 7 months | 5.1:1 | Successful release in US network

* Compared with US averages.

Labor productivity, risk, capital cost, and even quality are less important than time-to-market. In commercial applications, quality is either a way to reduce costs or improve customer satisfaction. If quality is a process of continuous improvement, then the faster the improvement cycle time, the faster you achieve ultimate customer satisfaction. Even in mission-critical applications - in which defect prevention is required at each stage of the process - it may be better to test and repair more quickly in each stepwise development phase than to hold products from the next stage of release until all defects are prevented.

Perhaps our "software crisis" is indeed not a crisis at all, as argued by Robert Glass,12 but the surface symptoms of success. Companies pile on overtime, rush to market, run over budget, deemphasize quality, and yet are successful because they instinctively or objectively (in the bottom line of profit) recognize the imperative of speed. Perhaps we are rather like evolving mammals who don't understand Darwin's theories but know how to survive by tooth, claw, and overtime. Improved service velocity may be the "survival of the fastest" theory for the Information Age.

To fully accept Information Age business dynamics, software engineers must see themselves as information service providers, not product manufacturers. And given the competitive nature of the marketplace, the increasing globalization of information, and the easily changeable nature of software, the time to serve customers software will continue to decrease until it approaches zero.

REFERENCES

1. F.P. Brooks, "No Silver Bullet: Essence and Accidents of Software Engineering," Computer, Apr. 1987, pp. 10-19.
2. N.C. Olsen, "The Software Rush Hour," IEEE Software, Sept. 1993, pp. 29-37.
3. C. Jones, Applied Software Measurement: Assuring Productivity and Quality, McGraw-Hill, New York, 1991.
4. B.W. Boehm and P.N. Papaccio, "Understanding and Controlling Software Costs," IEEE Trans. Software Eng., Oct. 1988, pp. 1462-1477.
5. D. Parnas, "Designing Software for Ease of Extension and Contraction," IEEE Trans. Software Eng., Mar. 1979, pp. 128-138.
6. D. Parnas, "On the Criteria to Be Used in Decomposing Systems into Modules," Comm. ACM, Dec. 1972, pp. 1053-1058.
7. C. Meyer, Fast Cycle Time, Macmillan, New York, 1993.
8. G.M. Weinberg, "Overstructured Management of Software Engineering," Proc. Sixth Int'l Conf. Software Eng., IEEE Computer Society Press, Los Alamitos, Calif., 1982.
9. W.H. Davidow and M.S. Malone, The Virtual Corporation, Harper-Collins, New York, 1992.
10. N. Fenton, S.L. Pfleeger, and R.L. Glass, "Science and Substance: A Challenge to Software Engineers," IEEE Software, July 1994, pp. 88-95.
11. N.C. Olsen, "Designing a Real-Time Platform for Rapid Application Development," Proc. Second IEEE Workshop Real-Time Applications, IEEE Computer Society Press, Los Alamitos, Calif., 1994.
12. R.L. Glass, "The Software-Research Crisis," IEEE Software, Nov. 1994, pp. 42-47.

Neil C. Olsen is director of technology management at EBS. Previously, he practiced software engineering at IPC, GTE, Contel, Alcatel, and General Dynamics, and was a principal engineer at ITT's Advanced Technology Center research laboratory. His current interests include real-time fault-tolerant distributed communication systems, object-oriented programming, and software process management.

Olsen received a BS and an MS in electrical engineering from Rensselaer Polytechnic Institute. He is a member of the IEEE.

Address questions about this article to Olsen at EBS, 2 Enterprise Dr., Shelton, CT 06484; [email protected].
