An approach for evaluating the market effects of energy efficiency programs


Edward Vine & Ralph Prahl & Steve Meyers & Isaac Turiel

Received: 8 May 2009 / Accepted: 30 November 2009 / Published online: 15 December 2009
© The Author(s) 2009. This article is published with open access at Springerlink.com

Abstract This paper presents work currently being carried out in California on evaluating market effects. We first outline an approach for conducting market effect studies that includes the six key steps that were developed in study plans: (1) a scoping study that characterizes a particular market, reviews relevant market effects studies, develops integrated market and program theories, and identifies market indicators; (2) analysis of market evolution, using existing data sources; (3) analysis of market effects, based on sales data and interviews with key market actors; (4) analysis of attribution; (5) estimation of energy savings; and (6) assessment of sustainability (i.e., the extent to which any observed market effects are likely to persist in the absence or reduction of public intervention, and thus have helped to transform the market). We describe the challenges in conducting this type of analysis: (1) selecting a comparison state(s) to California for a baseline, (2) availability and quality of data (limiting analyses), (3) inconsistent patterns of results, and (4) conducting market effects evaluations at one point in time, without the benefit of years of accumulated research findings, and then provide some suggestions for future research on the evaluation of market effects. With the promulgation of market transformation programs, the evaluation of market effects will be critical. We envision that these market effects studies will help lay the foundation for the refinement of techniques for measuring the impacts of programs that seek to transform markets for energy efficiency products and practices.

Keywords Energy efficiency · Market effects · Market transformation · Evaluation · Attribution

Introduction

This paper presents work currently being carried out in California on evaluating the market effects of investor-owned utility (IOU)-sponsored energy efficiency programs. In this context, and as stated in California’s energy efficiency evaluation protocols ([CPUC] California Public Utilities Commission 2006), a market effect is a change in the structure of a market or the behavior of participants in a market that is reflective of a change in the adoption of energy-efficient products, services, or practices, and is causally related to the programs being examined. The market effects reflect the actions taken by participants (direct impacts and participant spillover) and by non-participants (non-participant spillover). Currently, many evaluations of energy efficiency programs focus on the direct impacts of programs; as resource acquisition programs, the “spillover” of these programs is often not evaluated. In contrast, the evaluation of market transformation programs looks at all market effects and does not distinguish direct effects from indirect effects, such as spillover. In California, the evaluation of market effects received some attention in the late 1990s; however, the energy crisis of 2000/2001 forced utilities and state government to re-focus on resource acquisition programs, which focus more on direct and near-term impacts. In the last year, however, there has been a resurgence of interest in market effects and market transformation in California (e.g., [CPUC] California Public Utilities Commission 2008), and the California Public Utilities Commission (CPUC) has invested significant resources in measuring market effects, which may lead to significant advances in the field.

Energy Efficiency (2010) 3:257–266. DOI 10.1007/s12053-009-9070-x

E. Vine, California Institute for Energy and the Environment, Berkeley, CA, USA

E. Vine, S. Meyers, Lawrence Berkeley National Laboratory, Berkeley, CA, USA

R. Prahl, Panoptic Energy Consulting, Madison, WI, USA

I. Turiel, Northgate Energy Consulting, Berkeley, CA, USA

E. Vine (*), Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Building 90-4000, Berkeley, CA 94720, USA. e-mail: [email protected]

The CPUC recently prepared a long-term energy efficiency strategic plan that reflects the efforts of many stakeholders over the past year to collaboratively develop short-, mid-, and long-term strategic initiatives with a focus on market transformation ([CPUC] California Public Utilities Commission 2008). In parallel with this process, the CPUC (in CPUC Decision 07-10-032, October 18, 2007 ([CPUC] California Public Utilities Commission 2007)) directed its staff and consultants to examine the market effects from California’s energy efficiency programs, as a result of (1) direct effects from participants installing measures, (2) participant spillover, and (3) non-participant spillover. To examine non-participant spillover, the CPUC commissioned three market effects studies on programs that promote: (1) compact fluorescent lamps (CFLs), because of the large amount of resources devoted to promoting CFLs; (2) residential new construction (RNC), because of the long-standing programs that have tried to transform the market; and (3) high-bay lighting (HBL), because it is a relatively new target area in California.1 It is important to note that these programs were not designed as market transformation programs. Also, the objective of these studies was not to have the CPUC attribute any savings from market effects to the utilities at this point in time, but rather to examine the feasibility of measuring any market effects that may exist. Working with the CPUC, which is administering these studies, the California Institute for Energy and the Environment (CIEE) developed study plans for each of these studies to examine the market effects of the IOU programs that were implemented during 2006–2008.

In this paper, we first outline an approach for conducting market effect studies that includes the six key steps that were developed in the study plans: (1) a scoping study that characterizes a particular market, reviews relevant market effects studies, develops integrated market and program theories, and identifies market indicators; (2) analysis of market evolution, using existing data sources; (3) analysis of market effects, based on sales data and interviews with key market actors; (4) analysis of attribution; (5) estimation of energy savings; and (6) assessment of sustainability (i.e., the extent to which any observed market effects are likely to persist in the absence or reduction of public intervention, and thus have helped to transform the market). While the market effects studies have not been completed, we believe that this framework for evaluating market effects will be of value for those interested in using this approach for evaluating the impact of energy efficiency programs on the marketplace. Based on results to date on the three market effects studies described above, we describe the challenges in conducting this type of analysis and provide some suggestions for future research on the evaluation of market effects.

1 In a Decision in October 2007 (D.07-10-032), the CPUC directed its staff to explore during 2008–2009 the ability to credibly quantify and credit “non-participant spillover” market effects and to propose possible revisions to market effects protocols, utility savings goals, and/or performance incentive mechanisms for subsequent action by the CPUC. In addition, the CPUC was interested in the market effects that occurred in the years 2006–2008, the first 3-year cycle of energy efficiency programs implemented by the investor-owned utilities.


Approach for evaluating market effects

CIEE developed study plans for each of the market effects studies (e.g., Prahl 2008). Each of the studies then developed a Work Plan that was reviewed by CIEE, CPUC staff, and CPUC consultants. After the Work Plan was approved, the contractors developed a Scoping Study and Work Plan that was reviewed by the public and then implemented. All three studies are pursuing the following steps in evaluating market effects, though the emphasis varies among the studies (see below). As noted below, as of June 2009, the three market effects studies are underway: the CFL market effects study is farthest along, followed by the RNC market effects study and the HBL market effects study. Accordingly, most of the examples in this paper are taken from the CFL study.

Step 1. Prepare a scoping study

California’s EM&V protocols ([CPUC] California Public Utilities Commission 2008) for market effects evaluations emphasize the importance of performing a scoping study before actually embarking on a market effects study. In the words of the protocols:

“The appropriate approach for a market effects study cannot be readily determined without a scoping study to define the market to be studied, develop a market theory to test in the analysis, assess data availability for the market effects study, specify a model of market change, develop a methodology for data collection and recommend an analysis approach.” (p. 149)

A later passage in the market effects protocol succinctly summarizes the required components of a scoping study when performed at an enhanced level of rigor, as follows:

“Define the market by its location, the utilities involved, the equipment, behaviors, sector and the program years of interest. Develop market theory and logic model. Detail indicators. Identify available secondary data and primary data that can be used to track changes in indicators. Outline data collection approach. Recommend hypotheses to test in the market effects study. Recommend the analysis approach most likely to be effective.” (p. 150)

CIEE’s study plan for each of the market effects studies was the first step in developing the scoping plan, which was modified in the contractors’ work plan and which included all of the components summarized above. For example, the CFL scoping study (The Cadmus Group, Inc et al. 2008) included the following elements: (1) characterization of the CFL market using existing data sources, (2) review of CFL market effects studies from other states, (3) development of market and program theories2 that are integrated with one another, (4) the study approach, and (5) a list of detailed market indicators that were to be studied.

The scoping studies led to an increased understanding of the markets for each of these technologies and led to some changes in the conduct of the studies. For example, the CFL scoping study led to the following changes in the larger study: (1) a sharpened focus on the effects of the Upstream Lighting Program (ULP) rather than on CFL programs more broadly, as the ULP was found to account for the vast majority of all in-program sales in California; (2) a reduction in the weight attached to the regression analysis vis-à-vis the comparison state approach, since the data available for conducting regression analysis were only available for 1 year and covered only 75% of all CFL sales nationally (and likely fewer in California); and (3) the rejection of a comparison store approach, since California’s programs were focused on promoting CFLs in non-traditional channels (e.g., small grocery stores), in contrast to states without mature programs where CFLs were more commonly sold in traditional channels (e.g., mass merchandising stores).

2 A market theory describes how a particular market operates and articulates the market assumptions and associated research questions. Similarly, a program theory describes how a particular program operates within a market and articulates the program assumptions and associated research questions. The theories are developed by the evaluators and are based on the analysis of available data and interviews with key program and market stakeholders. The initial theories represent models of reality and may be incomplete; however, they are “tested” over time by additional data analyses and interviews and can be revised or discarded. Even with these limitations, these theories are regarded by evaluators and program managers as very useful tools and are certainly better than having no theory at all.

The RNC scoping study led to a clearer articulation of the possible ways for the RNC programs to achieve the long-term goals of reduced energy use, demand, and emissions: (1) by improving compliance with existing code, (2) by facilitating construction that is more efficient than required by the current code, and (3) by contributing to code upgrades. By identifying the linkages between program activities and these expected long-term outcomes, the logic modeling process laid out indicators of short- and medium-term outcomes that could be measured, and pointed the way towards how to attribute savings to the programs. Furthermore, the scoping study helped to identify the relative importance of key actors involved in the RNC market (e.g., Title-24 consultants) and a target for additional research.

The HBL scoping study also led to a clearer articulation of the possible ways for the HBL programs to achieve the long-term goals of reduced energy use, demand, and emissions. For example, incentives for HBL technologies are often leveraged as an “ice-breaker” with both end-users and installation contractors, opening a dialog with the program and creating opportunities to implement other higher-impact measures. The scoping study helped to identify the relative importance of key actors involved in the HBL market (e.g., contractors were confirmed as the critical decision-makers, but big-box do-it-yourself stores, such as Home Depot, had a larger role than previously theorized) and topics for additional research.

Step 2. Analyze market evolution

Because market effects generally occur slowly over time, understanding the long-term evolution of the market is critical to any market effects evaluation. Ideally, this is achieved through ongoing evaluation efforts over the course of many years. However, there are times when we do not have this luxury. Instead, we can only conduct a one-shot effort to develop the best understanding of the long-term market effects of the considered energy efficiency programs. As a result, it is necessary to resort to a wide range of existing data sources to reconstruct the evolution of a particular energy efficiency market, both within and beyond California. A central focus of this step is to reconstruct historic trends in actual sales of a particular energy efficiency technology (e.g., CFLs) or practice(s) in California. Trends in other key variables, such as consumer awareness and attitudes, technology costs, and retailer stocking behavior, are also included, where appropriate. For example, the CFL Interim Report (The Cadmus Group, Inc et al. 2009) included the following sources of data in examining the market evolution of CFLs: (1) data on consumer purchases and awareness and CFL retail prices, (2) a qualitative assessment of cumulative historic market effects based on interviews with program managers and stakeholders, and (3) a review of prior California IOU CFL program evaluations.

As another example, the RNC Phase I report (Nexus Market Research, Inc et al. 2009) examined historic trends in: (1) RNC efficiency practices; (2) builder awareness, attitudes, and practices; (3) other market actors’ awareness, attitudes, and practices; (4) home buyers’ awareness and attitudes; and (5) incremental costs of efficiency measures. The HBL scoping study revealed step-wise changes in the market for HBL technologies, in which new technologies surpassed and mostly supplanted the previous technologies over a 15-year period, such as mercury vapor giving way to pressurized sodium products, then to metal halides, then to fluorescents. In each case, the progression in the market represented not only broader penetration in terms of energy efficiency but also reflected improvements in other features such as reduced installation costs, better understanding of the lighting applications, enhanced color rendering, and ease of maintenance.
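As a toy illustration of this kind of trend reconstruction (all sales figures below are invented, not data from the studies), an assembled annual series can be scanned for the step changes and surges that this step looks for:

```python
# Hypothetical sketch only: annual CFL sales (millions of lamps, invented)
# pieced together from several data sources, as in Step 2, then scanned
# for large year-over-year jumps such as the national surge beginning in 2006.

sales_by_year = {2003: 5.0, 2004: 6.0, 2005: 8.0, 2006: 14.0, 2007: 22.0}

def yoy_growth(series):
    """Year-over-year growth rates, keyed by the later year."""
    years = sorted(series)
    return {
        y: (series[y] - series[prev]) / series[prev]
        for prev, y in zip(years, years[1:])
    }

growth = yoy_growth(sales_by_year)
surge_years = [y for y, g in growth.items() if g > 0.5]  # years with >50% growth
```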

Step 3. Analyze market effects

While the analysis of market evolution described above is expected to contribute to the assessment of market effects from California’s energy efficiency programs, the centerpiece of these studies is a quasi-experimental comparison of current actual and baseline sales patterns of particular technologies (e.g., residential new construction), buttressed by interviews with key stakeholders (e.g., manufacturers, retailers, builders) and consumers regarding the market effects of the programs. By “baseline” we mean a hypothetical projection of what sales patterns would have looked like in the complete absence of the specific program(s) promoting that specific technology in California, either now or at any time in the past.3

As with the analysis of market evolution, a wide range of market indicators other than sales are of potential interest, such as measures of consumer awareness, attitudes, and behavior; retailer stocking, promotional, and pricing practices; and manufacturers’ business strategies. These market indicators are identified during the development of the program and market theories. The primary objective in developing and measuring these non-sales market indicators is to build a convincing case regarding market effects by assessing whether or not the indicators have changed in a manner consistent with what would be predicted by the program theory.

The core of the effort to analyze market effects consists of a quasi-experimental comparison of current actual and baseline sales patterns in California (where possible), with the baseline deriving from current sales patterns in a number of alternative comparison areas. Underlying this approach is the assumption that one or more comparison areas can be found that are reasonably representative of what would be happening in California in the absence of public purpose energy efficiency programs. In an ideal world, one would use a more powerful quasi-experimental design, such as a pre-test–post-test comparison design, under which we would compare the change in sales between two periods for the test versus the comparison area. However, because these studies are primarily retrospective studies, for the most part we do not have the luxury of collecting detailed pre-program data. Key to the effort to strengthen validity will be the use of multiple methods, both to analyze actual sales patterns or technology practices in California and to develop comparison areas.
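The comparison-area logic can be sketched numerically. Everything below is hypothetical (the state sales and household counts are invented placeholders, not study data); it simply shows the arithmetic of scaling comparison-area per-household sales to California and treating the gap from actual sales as the candidate market effect:

```python
# Hypothetical sketch of the comparison-area baseline arithmetic.
# All numbers are invented for illustration; the real studies use
# measured sales data from California and the comparison states.

def baseline_sales(comparison, ca_households):
    """Scale per-household sales in comparison areas to California.

    comparison: dict of area -> (annual sales, number of households).
    Applies the average per-household rate to California's household count.
    """
    rates = [sales / households for sales, households in comparison.values()]
    avg_rate = sum(rates) / len(rates)
    return avg_rate * ca_households

def market_effect(actual_ca_sales, baseline):
    """Candidate market effect: actual sales minus the no-program baseline."""
    return actual_ca_sales - baseline

# Invented figures (millions of lamps / millions of households):
comparison_states = {
    "Georgia": (4.0, 3.6),
    "Kansas": (1.2, 1.1),
    "Pennsylvania": (5.5, 5.0),
}
base = baseline_sales(comparison_states, ca_households=12.5)
effect = market_effect(actual_ca_sales=30.0, baseline=base)
```

The validity of the result rests entirely on the assumption stated above: that the comparison areas represent what California would look like without its programs.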

Step 4. Assess attribution

This step involves sifting through all the evidence developed by the evaluation to make a case regarding the nature of the market effects produced by California’s energy efficiency programs, if any, and, where possible, the total number of sales of a particular technology induced by these market effects that occurred in the years 2006–2008. Based on study plans and work plans, conclusions regarding these issues are to be based on assessments such as the following (this is not an exhaustive list, just illustrative):

• Whether comparisons between estimates of actual and baseline sales of a particular technology in California for a particular year (or group of years) consistently show significant differences.

• Whether supply-side informants attribute market effects to the programs, and if so, what kind and how much.

• Whether the results of the attempt to reconstruct the evolution of the particular energy efficiency technology market within and beyond California are suggestive of long-term market effects.

• Whether differences in the specific pattern of sales of the particular energy efficiency technology over time under the actual and baseline scenarios show differences that are suggestive of market effects.

• In the case of CFLs, whether the regression analysis4 shows that the programs have long-term pricing effects, and that consumers have responded to these effects.

• In the case of CFLs, whether the analysis of CFL marketing efforts conducted as part of the marketing and outreach (M&O)5 evaluation shows significant marketing impacts on sales, above and beyond the effects of specific CFL programs.

Determining which market changes can be attributed to California’s energy efficiency programs (and, hence, are market effects) is based on the extent to which all of the above findings are consistent with one another and with the program theory developed as part of previous steps. In the end, attribution in the studies will be based on a preponderance-of-evidence approach, under which the researcher attempts to construct an argument as to just what has transpired, based on the convergence of evidence from a wide range of sources and the consistency of this evidence with the program theory.

3 It is important to keep in mind that the word “baseline” has various other meanings in energy efficiency evaluation, none of which is intended here. One alternative meaning is the market conditions in force at the beginning of a period of public intervention. Another meaning, used in the context of M&V, refers to the most likely alternative equipment or practice to the one that was actually adopted. Despite these alternative meanings of the word, we use the term “baseline” for the no-program scenario because we believe this has become a convention in the field of market effects research.

4 The regression analysis was based on the concept that the sales of CFLs could be predicted as a function of a comprehensive list of explanatory variables, including program activity levels, socio-economic characteristics, energy prices, population distribution (urban/suburban/rural), and other variables.

5 There is a separate evaluation effort of IOU-sponsored marketing and outreach activities.
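The regression mentioned above can be sketched as an ordinary least-squares fit of sales on a few explanatory variables. The data and the reduced variable list below are invented for illustration; the actual study’s specification, estimator, and data differ:

```python
import numpy as np

# Hypothetical OLS sketch of a CFL sales regression.
# Rows are (say) state-year observations; all values are invented.
# Columns: intercept, program activity level, electricity price ($/kWh).
X = np.array([
    [1.0, 0.0, 0.12],
    [1.0, 0.5, 0.10],
    [1.0, 1.0, 0.13],
    [1.0, 2.0, 0.11],
    [1.0, 3.0, 0.12],
])
y = np.array([1.0, 1.6, 2.1, 3.2, 4.3])  # CFL sales per household (invented)

# Least-squares coefficients: sales = b0 + b1*activity + b2*price
coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
b0, b_activity, b_price = coef
```

A positive, stable coefficient on program activity would be the kind of evidence the attribution step weighs alongside the qualitative findings; the real analysis would need far more observations and the full variable list from footnote 4.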

Step 5. Estimate energy savings

The quantitative results of the analysis of market effects discussed in the previous step are converted into a stream of estimated energy savings. Initial estimates of savings from market effects may be based upon the difference between total actual and baseline sales of a particular technology (e.g., CFLs), with triangulation among the alternative estimates of these two quantities, and adjustments as appropriate based on other evaluation findings. In the case of CFLs, because we are comparing actual CFL sales with a hypothetical estimate of the level of sales that would exist in the historical absence of any CFL programs, the difference between actual and baseline CFL sales represents the current, total, cumulative effects on sales of all programs that have ever been run in California. As such, it does not differentiate between impacts induced recently versus in the past, between different categories of impacts such as direct program effects or spillover, or between impacts induced by one program versus another. As noted earlier, the ultimate objective for the CPUC in this step is to estimate savings from program effects in years 2006–2008 which are not covered in the direct and participant spillover effects being measured by the impact evaluation studies (i.e., non-participant spillover).6 In order to reach this objective, one must subtract from the initial savings estimate all savings estimates produced in the 2006–2008 impact evaluations7 that: (1) are clearly associated with retail sales of a particular technology; (2) are program induced; and (3) are not counted in other impact evaluation results. In mathematical terms:

Non-participant spillover = total program-induced savings − direct savings − participant spillover

If one does not care about distinguishing non-participant spillover from other types of spillover, then the equation simplifies to:

Spillover = total program-induced savings − direct savings8
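The two identities above can be written out directly; the savings figures here are invented placeholders, not study results:

```python
# Hypothetical sketch of the Step 5 savings accounting.
# All savings figures (GWh) are invented for illustration.

def non_participant_spillover(total_program_induced, direct, participant_spillover):
    """Non-participant spillover = total program-induced savings
    minus direct savings minus participant spillover."""
    return total_program_induced - direct - participant_spillover

def total_spillover(total_program_induced, direct):
    """Spillover = total program-induced savings minus direct savings
    (direct savings net of free ridership, per footnote 8)."""
    return total_program_induced - direct

total = 1000.0   # total cumulative program-induced savings (invented)
direct = 600.0   # direct savings, net of free ridership (invented)
part_so = 150.0  # participant spillover (invented)

nps = non_participant_spillover(total, direct, part_so)  # 250.0
spill = total_spillover(total, direct)                   # 400.0
```

Note that total spillover equals non-participant spillover plus participant spillover, consistent with the two equations.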

Step 6. Assess sustainability

As defined by California’s EM&V Protocols, sustainability refers to the extent to which the observed market effects can be expected to last into the future. Gaining an understanding of the sustainability of any observed market effects is very helpful in shaping the direction of future programming efforts in any energy efficiency market. For example, in the case of CFLs, a particularly critical question is whether the huge surge in CFL sales nationally beginning in 2006 would be likely to sustain itself if support programs (in California or elsewhere) were withdrawn or scaled back.

Sustainability analysis is discussed in California’s EM&V protocols:

Identifying changes in market structure and operations, and how the changed market contains mechanisms to sustain them. This could include examining profitability analyses for important support businesses or business operations and how these are maintained without continued program intervention.

The following questions could be asked to help assess the extent to which a market has been transformed (e.g., see Hewitt 2000):

• Is someone making money by offering it?
• Has a private market developed to continue the facilitation?
• Has the profession or trade adopted it as a standard practice?
• Would it be difficult or costly to revert to earlier equipment or practices?
• Are end-users requesting or demanding it?

6 The CPUC and its contractors are already evaluating the direct effects of the IOUs’ energy efficiency programs and, where possible, participant spillover. Non-participant spillover is not being examined in these evaluations.

7 For example, in the case of CFLs, an impact evaluation of the Residential Upstream Lighting Program is being conducted in a separate study by the same team of contractors conducting the market effects study.

8 These are direct savings net of free ridership.


• Have the risks to private market actors been reduced or removed?

In summary, how will we know when a particular market is self-sustaining? What other indicators should one look for?
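One illustrative way to operationalize these questions (our own sketch, not a method from the protocols or the studies) is a simple yes/no tally of the indicators listed above:

```python
# Illustrative sketch only: a yes/no tally of the market-transformation
# indicators listed above. The studies weigh qualitative evidence;
# this simple count is not an official scoring method.

INDICATORS = [
    "someone_making_money",
    "private_market_continues_facilitation",
    "adopted_as_standard_practice",
    "costly_to_revert",
    "end_users_demanding",
    "private_actor_risks_reduced",
]

def sustainability_tally(answers):
    """Count how many indicators hold; answers maps indicator -> bool."""
    return sum(1 for name in INDICATORS if answers.get(name, False))

# Hypothetical assessment of one market:
example = {
    "someone_making_money": True,
    "adopted_as_standard_practice": True,
    "end_users_demanding": False,
}
score = sustainability_tally(example)  # 2 of 6 indicators observed
```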

Issues faced in the California evaluations

The three market effects studies are underway. The CFL market effects team is farthest along and has produced an Interim Report on its initial findings (The Cadmus Group, Inc et al. 2009); the RNC market effects team has finalized its Interim Report (Nexus Market Research, Inc et al. 2009); and the HBL market effects team has prepared a scoping study and is collecting and analyzing its data. As the studies have been implemented, they have been able to successfully follow most of the framework described above. However, they have also diverged somewhat because of the availability of data: e.g., the RNC market effects study has primarily relied on interviews with key market actors coupled with on-site assessment of actual efficiency levels to assess market effects, while the CFL market effects study is examining market effects via interviews and through a comparison of sales data in comparison states (see below).

The work thus far has confronted and addressed serious challenges to the evaluation of market effects, as discussed in more detail below:

• Selecting a comparison state(s) to California for a baseline
• Availability and quality of data (limiting analyses)
• Inconsistent patterns of results
• Conducting market effects evaluations at one point in time, without the benefit of years of accumulated research findings

Some of these issues have also occurred in other states, as discussed in Rosenberg and Hoefgen’s (2009) review of the evaluation of market effects. Furthermore, it is important to note that the market effects studies are ongoing, and some of the methodologies are still being tested. As a result, our analysis below should be treated as “preliminary”; once the studies are completed, we will have better information on the challenges ahead and the needed methodological guidance for evaluating market effects.

Selecting a comparison state(s) to California for a baseline

Quasi-experimental comparisons with non-program states have historically been a mainstay of market effects evaluations. However, choosing a state or a group of states as a baseline to compare with California is subject to many caveats. The baseline comparison approach assumes a non-program area that is the theoretical equivalent of California in the absence of a (CFL, RNC, etc.) program. In the case of CFLs, the CFL Market Effects team selected respondents in three comparison states (Georgia, Kansas, and Pennsylvania) through random-digit dialing and on-site visits to homes and retailers. They assumed that CFL sales and usage patterns in these states approximated baseline market conditions for California (i.e., sales that would have occurred in California in the absence of IOU program intervention). The comparison states were chosen because they did not have long-term or significant histories of utility- or regional government-sponsored programs to promote CFLs, and because they shared various socio-economic indicators with California.

The primary shortcoming of this methodology is that no single state is really comparable to California, which is why a group of states was used as a comparison rather than a single state. Another possible shortcoming of this approach is that manufacturer and retailer sales strategies in program and non-program states may be interdependent: it is possible that some manufacturers and retailers may make decisions about how to sell CFLs in one state or region based on what they are doing in another state (e.g., California). Similarly, California programs may have impacted the national market and, therefore, may have influenced the baseline comparison states.

Finally, another issue related to this methodology has to do with time. If a market effects study had been conducted 5–10 years ago, the differences between California and the comparison states might have been significant with respect to the purchase of CFLs. However, in recent years, other states are becoming more similar to California with respect to CFL sales (e.g., as reported in the CFL market effects interim report, significantly more households in the comparison states purchased light bulbs in the past 3 months (57%) than in California (47%)). This may reflect more activity in those areas as well as less activity in California because of past CFL programs that have increased the saturation of CFLs, therefore leading to less demand for CFLs (and sales) in California. If increases in CFL saturation driven by California’s many years of programs are indeed part of the explanation for the relatively low self-reported current purchase rates in California, this would suggest that the comparison states may not provide a valid baseline for California. In the language of quasi-experimentation, the test and comparison groups would be fundamentally nonequivalent (at least in the absence of some approach to controlling statistically for the differences in saturation).
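Differences in self-reported purchase rates such as the 57% versus 47% figure above can at least be screened for statistical significance with a standard pooled two-proportion z-test. A minimal sketch; the sample sizes below are hypothetical placeholders, not the study’s actual survey counts:

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-statistic for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported recent-purchase rates: 57% in the comparison states vs. 47%
# in California; n1 and n2 are illustrative, not the study's sample sizes.
z = two_proportion_z(0.57, 1000, 0.47, 1000)
print(round(z, 2))  # |z| > 1.96 would indicate significance at the 5% level
```

Note that a significant z-statistic only establishes that the groups differ; as the paragraph above argues, if the groups are fundamentally nonequivalent, the difference cannot be attributed to the absence of program effects without controlling for saturation.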

In the case of RNC programs, selecting a suitable comparison state is fraught with difficulty. For another state to function as a model for a hypothetical California baseline, it needs to have similar conditions for RNC except that it has not had utility efficiency programs (or has had very few) in the recent past. But California’s size, climate, and policy environment (especially with regard to the efficiency aspects of building codes) are at least somewhat unique. It seems unlikely that this method would work in evaluating the market effects of the utility RNC programs in California.

It is as yet unclear whether an alternative approach will prove viable for the RNC market effects study. So far, this study has relied mainly on interviews with market actors to develop a qualitative understanding of the market effects of a set of utility programs focused on the RNC market, and the evaluators will use those findings to gauge how the IOUs’ RNC programs have affected the efficiency levels of newly constructed homes as observed in field visits. In addition, the study is using a Delphi approach that relies on responses from a number of experts in the field to develop a baseline of how the California market would have evolved without the programs under consideration (the slump in the housing economy will also be discussed). Whether these two approaches will prove sufficient for quantitative estimates of market effects is as yet uncertain.
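The Delphi element can be pictured as iterated aggregation of expert estimates with feedback. The sketch below is a generic illustration of that convergence mechanism, not the RNC study’s actual protocol: the round-1 estimates and the assumption that each expert moves halfway toward the group median after seeing it are invented for illustration.

```python
from statistics import median, pstdev

# Hypothetical round-1 expert estimates of the baseline efficient-home
# market share (%) absent the utility programs -- illustration only.
estimates = [10.0, 14.0, 22.0, 8.0, 18.0]
initial_spread = pstdev(estimates)

for round_no in range(3):  # a few feedback rounds
    center = median(estimates)
    # After seeing the group median, each expert revises halfway toward it.
    estimates = [e + 0.5 * (center - e) for e in estimates]

# The group consensus stabilizes while the spread of opinions shrinks.
print(median(estimates), pstdev(estimates))
```

The attraction of the approach for a baseline question like this is precisely that no observable non-program California exists: structured expert judgment substitutes for a missing comparison group, at the cost of depending on the experts’ assumptions.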

In the case of the HBL market effects study, the contractors are starting to determine which comparison states to select (including a comparison with all non-program areas outside of California, in order to avoid potential biases associated with any regional or statewide networks or sales/distribution channels). This is particularly challenging because this type of technology and its application is specialized, changing, and regionally differentiated, making it difficult to come up with a good match. For example, the building stock in the comparison states which is amenable to the use of HBL may be different from that in California (large commercial, big box, and industrial facilities). Moreover, the scale of the sum of individual operations in the comparison states will not likely be as large as in California.

Availability and quality of data (limiting analyses)

There is always some bias in the collection of qualitative and quantitative data. For example, some of the qualitative interview results were from interviews with people who participated in, evaluated, designed, or implemented a program. In order to ensure more objectivity, interviews with non-participating stakeholders (e.g., manufacturers, retailers, builders, and trade allies) were also conducted. And there is always the potential for self-selection bias in finding respondents willing to answer a survey; complicating this situation is the possibility of respondents providing inaccurate responses to advance their own self-interest. There are ways of “validating” some responses from respondents. For example, to validate reported purchases of CFLs, the CFL Market Effects Team conducted in-home lighting audits to see and verify the number of CFLs each respondent reported through the CFL user survey. The analysis (results are still being analyzed) will provide estimates of recent CFL purchases, CFLs currently installed, and CFLs in storage, and compare these findings with the survey results.

Finding the “perfect” data source is very difficult, if not impossible. For example, in the regression analysis used in the CFL market effects study for assessing the influence of program activity on CFL sales, the dependent variable representing CFL sales data excluded major retail channels (e.g., groceries and small hardware stores that are often targeted by mature CFL programs), through which sales varied widely across the major groups of interest (The Cadmus Group, Inc et al. 2009).⁹ Similarly, the binary term in the regression equation that indicated program versus non-program area was simplistic in that there were, in fact, various levels of programs.¹⁰ In addition, the lack of reliable, cross-sectional time-series data on CFL sales prevented the team from looking at trends over time and possible lag effects. Finally, in the case of HBL, determining the total installed market share of efficient HBL technologies is not possible.
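One way to see why a binary program/non-program term is simplistic: with a single 0/1 regressor, the OLS coefficient reduces to a bare difference in group means, so the model can capture no gradation in program intensity. A sketch with synthetic (hypothetical) sales figures, not data from the study:

```python
from statistics import mean

# Synthetic CFL sales per household: flag 1 = program area, 0 = non-program.
data = [(1, 2.4), (1, 2.9), (1, 2.6), (0, 1.8), (0, 2.1), (0, 1.7)]
x = [flag for flag, _ in data]
y = [sales for _, sales in data]

# Simple OLS slope: cov(x, y) / var(x).
mx, my = mean(x), mean(y)
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))

# With a single binary regressor, the slope equals the difference in
# group means -- all program areas are treated as identical.
diff = (mean(s for f, s in data if f == 1)
        - mean(s for f, s in data if f == 0))
print(beta, diff)
```

Replacing the binary flag with a graded program-intensity variable (as footnote 10 notes the team plans to explore) would let the regression distinguish strong from weak program areas rather than lumping them together.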

Inconsistent patterns of results

The interim findings from the CFL market effects study did not provide evidence that additional market effects in the form of energy/demand savings (non-participant spillover) could be unequivocally claimed for the California IOU programs for the 2006–2008 time period. Instead, different conclusions were derived from different components of the study. For example, while the CFL user survey results indicated that there was little or no difference between California and comparison areas in the 2006–2008 time period (implying no market effects), the interviews with upstream actors did provide some evidence that market effects occurred in prior years. The two analyses are intertwined. For example, it is possible that the baseline sales of CFLs in the other states may have been overestimated, because they would have been lower if no program activity had taken place in California and other states with long-standing programs. Alternatively, the increasing saturation of CFLs in California has led to fewer CFL sales per household compared to households in other states. The CFL Market Effects Team has developed a number of other hypotheses to explain these inconsistent findings, and they will attempt to assess the validity of these hypotheses during the remainder of the evaluation.

Conducting market effects evaluations at one point in time, without the benefit of years of accumulated research findings

Based on work to date, the studies discussed above have revealed that the evaluation of market effects should be conducted throughout a program’s life cycle, rather than at just one point in time. There are risks in making policy decisions from a single snapshot study, and the findings from such studies should be used with caution until a long-term market effects evaluation can be implemented (as noted below). However, we argue that having some information (imperfect as it is) is better than having no information at all when making policy decisions, so these snapshot studies are of value. Ideally, we encourage time-series research to begin early in a program’s history rather than later.

Conducting market effects evaluations throughout a program’s history allows the researchers both to study different market indicators as needed at different phases in the program’s life cycle and to collect time-series data on key variables such as efficient market share. For example, as noted above, a rigorous assessment of program versus estimated baseline CFL sales conducted earlier in the life cycle of the California IOUs’ CFL programs might have quantitatively identified potential market effects, if they exist. Furthermore, the total market effects in a given period of time reflect the cumulative effects of many years of program efforts (e.g., not just the efforts in 2006–2008). Conducting periodic market effects evaluations may help to disentangle the market effects from previous years. Of course, these types of studies are laborious and costly, and the implementation of multiple market effects studies creates even more strain on the system. However, several states (e.g., Massachusetts, New York, and Vermont) have been measuring markets over time. On the other hand, planning a long-term market effects evaluation allows evaluation expenses to be spread out over many years of program activity, so that the “annual budget” might not be too burdensome. In addition, the interest in similar data among multiple jurisdictions might provide opportunities for collaborative studies that can reduce research costs.

⁹ The CFL Market Effects Team considered estimating CFL sales for as many states as possible, but concluded that the cost of collecting primary data on CFL sales for all states was prohibitive.

¹⁰ The CFL Market Effects Team will be pursuing a number of modifications to the regression model to see if more can be learned, including the use of more complex (non-binary) variables.

Energy Efficiency (2010) 3:257–266 265

Conclusions

The study of market effects has significant policy implications. State regulatory agencies and utilities are facing aggressive energy savings goals and greenhouse gas emissions reduction targets (especially in California). In order to reach these goals, the energy efficiency industry must pursue a more comprehensive strategy that includes more market stakeholders (besides program administrators), more funding, and more information, education, and training, in order to transform the market. As a result, a different type of evaluation will have to be conducted—one with a focus on markets, rather than programs. With the promulgation of market transformation programs (as California is attempting), the evaluation of market effects will be critical. We envision that these market effects studies will help lay the foundation for the refinement of techniques for measuring the impacts of programs that seek to transform markets for energy efficiency products and practices. It is possible that future market effects studies will not have to distinguish between the different types of spillover (e.g., non-participant versus participant) and direct effects; their focus will depend on what policy framework is in place. Finally, we have provided a framework for evaluating market effects that we hope others will use for evaluating the market effects of their energy efficiency programs. Once more studies have been completed, we hope that the methods and findings from these studies can be compiled in a centralized database that is accessible to all, so that we can learn from each other.

Acknowledgements We want to thank the CPUC for providing support for our work on market effects, including the following CPUC staff: Tim Drew, Kay Hardy, Mikhail Haramati, and Ayat Osman. This paper does not necessarily represent the views of the CPUC or any of its employees. We also want to acknowledge the following people for their review of an earlier draft: Harley Barnes, Ryan Barry, Scott Dimetrosky, Kay Hardy, Lynn Hoefgen, Ken Keating, Lori Medgal, Tim Pettit, Mitch Rosenberg, Ellen Rubinstein, Bill Steigelmann, and John Stoops. Finally, we thank the anonymous reviewers for their thoughtful and constructive comments on an earlier version of this paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

[CPUC] California Public Utilities Commission. (2006). California energy efficiency evaluation protocols: Technical, methodological and reporting requirements for evaluation professionals. San Francisco: California Public Utilities Commission.

[CPUC] California Public Utilities Commission. (2007). Decision 07-10-032: Interim opinion on issues relating to future savings goals and program planning for 2009–2011 energy efficiency and beyond, October 18, 2007. San Francisco: California Public Utilities Commission.

[CPUC] California Public Utilities Commission. (2008). California long-term energy efficiency strategic plan. San Francisco: California Public Utilities Commission.

Hewitt, D. C. (2000). The elements of sustainability. In Efficiency & sustainability, proceedings of the 2000 summer study on energy efficiency in buildings, 6.179–6.190. Washington, DC: American Council for an Energy-Efficient Economy.

Nexus Market Research, Inc, KEMA, Summit Blue Consulting, Itron, Inc, & The Cadmus Group, Inc. (2009). 2009 Residential New Construction Market Effects Study—Phase I Draft Report. Oakland: California Institute for Energy and Environment. Available at: http://ciee.ucop.edu/energyeff/market.html.

Prahl, R. (2008). CFL market effects study: Final study plan. Oakland: California Institute for Energy and Environment. Available at: http://ciee.ucop.edu/energyeff/market.html.

Rosenberg, M., & Hoefgen, L. (2009). Market effects: Their role in program design and evaluation. Oakland: California Institute for Energy and Environment. Available at: http://ciee.ucop.edu/energyeff/market.html.

The Cadmus Group, Inc, KEMA, Itron, Inc, Nexus Market Research, & A. Goett Consulting. (2008). Compact fluorescent lamps market effects scoping study findings and work plan. Portland: The Cadmus Group. Available at: http://ciee.ucop.edu/energyeff/market.html.

The Cadmus Group, Inc, KEMA, Itron, Inc, Nexus Market Research, & A. Goett Consulting. (2009). Compact fluorescent lamps market effects: Interim report. Portland: The Cadmus Group. Available at: http://ciee.ucop.edu/energyeff/market.html.