Reductions in Seasonal Climate Forecast Dependability as a Result of Downscaling



Transactions of the ASABE

Vol. 51(3): 915-925 © 2008 American Society of Agricultural and Biological Engineers ISSN 0001-2351

REDUCTIONS IN SEASONAL CLIMATE FORECAST DEPENDABILITY AS A RESULT OF DOWNSCALING

J. M. Schneider, J. D. Garbrecht

ABSTRACT. This investigation addresses a practical question from an agricultural planning and management perspective: are the NOAA/CPC seasonal climate forecasts skillful enough to retain utility after they have been downscaled to field and daily scales for use in crop models to predict impacts on crop production? Utility is defined herein as net forecast dependabilities of at least 50%, where net dependability is the product of the large-scale 3-month forecast dependability with a factor accounting for losses in dependability due to the higher spatiotemporal variability of 1-month station data. This loss factor is estimated from station data by computing the frequency of matching sign (FOMS) between the direction of departures from average of 3-month forecast division values and 1-month station values, for average temperature and precipitation, over a 10-year study period and 96 stations in six regions of the U.S. The resulting FOMS does not display any consistent differences across regions, locations, or months, so is averaged across all months and stations. Average FOMS calculated in this manner are 76% for average temperature and 66% for total precipitation. The decimal FOMS are then used as the multiplicative loss factor on previously reported 3-month forecast division reliability values to produce estimates of the net reliability for downscaled forecasts at locations within each forecast division. The resulting guidance is dependent on region and forecast variable, with the forecasts for above-average temperature emerging as worthy of consideration for use in agricultural applications over the majority of the contiguous U.S. The Northeast, the Great Lakes, parts of the Northern Great Plains, interior California, and northwest Nevada are the only regions where net dependability is insufficient, precluding immediate consideration. Conversely, forecasts for cooler than average temperature do not retain sufficient net dependability after downscaling to be an attractive option in any part of the contiguous U.S. at this time. Forecasts for wetter or drier than average conditions retained sufficient net dependability to encourage further development over only about 10% of the contiguous U.S., in regions well-known to experience the strongest ENSO impacts on precipitation. The forecast divisions where agricultural decision support might benefit from NOAA/CPC seasonal precipitation forecasts are located in Florida, south Texas, southwest New Mexico, Arizona, central and southern California, and parts of Oregon, Washington, Idaho, and Montana.

Keywords. Agricultural management, Average air temperature, Climate, Climatology, Decision support, Downscaling, Forecast, Precipitation, Seasonal.

Submitted for review in July 2007 as manuscript number SW 7072; approved for publication by the Soil & Water Division of ASABE in May 2008.

The authors are Jeanne M. Schneider, Research Meteorologist, and Jurgen D. Garbrecht, Research Hydraulic Engineer; USDA-ARS Grazinglands Research Laboratory, El Reno, Oklahoma. Corresponding author: Jeanne M. Schneider, USDA-ARS Grazinglands Research Laboratory, 7207 West Cheyenne St., El Reno, OK 73036; phone: 405-262-5291, ext. 251; fax: 405-262-0133; e-mail: [email protected].

Decision support in crop and forage agriculture is based largely on field studies, with some support from crop modeling. The effects of climate on agricultural yields and profitability are usually represented by a local climatology derived from nearby weather station data, either as the values for weather during field studies or expressed as station statistics (e.g., mean, standard deviation, skewness of precipitation; frequency of wet days; growing degree days or similar thermal units) used to drive a daily weather generator for crop modeling. Given the variations in weather from year to year, seasonal climate forecasts appear to offer an opportunity to reduce risks and maximize profits under varying climate. Official seasonal climate forecasts for average temperature and total precipitation have been offered by the National Oceanic and Atmospheric Administration's Climate Prediction Center (NOAA/CPC) for the contiguous U.S. since December 1994 (Barnston et al., 2000). Unfortunately, any attempt to incorporate the NOAA/CPC seasonal climate forecasts into agricultural decision support is faced with several immediate obstacles: the probabilistic nature of the forecasts; the question of the skill or dependability of the forecasts; the infrequency of forecasts significantly different from climatology; and most relevant to this analysis, the physical and temporal scale of the forecasts. Crops and forages grow in individual fields, but seasonal climate forecasts are offered for large areas (each approximately 9 × 10^4 km^2, termed "forecast divisions" herein) and 3-month periods, so some type of downscaling in both space and time is required to use them at the field scale. There are statistical reasons why forecasts are generated for regional and seasonal scales, in particular the higher variability of weather (especially precipitation) at a location compared to an area average; i.e., it is more difficult to discern a robust seasonal forecast signal in "noisy" station data (e.g., Gong et al., 2003). However, the potential payoff for individual operators across the U.S. is large enough to justify developing and testing a methodology for incorporating any



useful climate forecast signal, derived from official forecasts that are freely available (in this case, the NOAA/CPC forecasts), into risk-based decision support systems. The essential elements of an application methodology have been created and are outlined below.

First, we assessed the practical utility of the NOAA/CPC seasonal climate forecasts for agricultural applications over each forecast division, since the information on forecast performance previously available considered only national summaries. Several measures were created to assess forecast utility (Schneider and Garbrecht, 2003, 2006) to address the following two questions:

(1) Are the forecasts for departures large enough to justify their use? The NOAA/CPC seasonal climate forecasts are statements of shifts in odds relative to conditions during a 30-year reference period, termed a climatology. The forecasts may indicate a shift in odds toward either end of the climatological distribution (e.g., wetter or drier, warmer or cooler) or may be for "equal chances," which means a forecast equal to the climatology. To be useful in agricultural management, the forecasts need to be significantly different from climatology in order to offer new information beyond the climate information already accounted for in current management practices. Further, are the forecasts for large departures offered often enough to bother? If non-climatology forecasts are offered rarely (e.g., once every second year on average), they may not offer sufficient return on the investment required to modify management practices to include them. This forecast characteristic was addressed with a measure called "usefulness" (Schneider and Garbrecht, 2003).

(2) Are the forecasts skillful enough to justify their use? In other words, do these probabilistic forecasts for shifts in odds get the odds right? This forecast characteristic is termed "reliability" in the climate forecast community. Schneider and Garbrecht (2006) combined a threshold requirement determined by the definition of "usefulness" with the concept of "reliability" to produce a measure called "dependability." This measure computes the success rate of forecasts in predicting climate variations numerically distinct from climatology, where success is defined as correctly predicting the direction of the variation from the mean (warmer/cooler, wetter/drier). Summarizing the results from Schneider and Garbrecht (2003, 2006), forecast usefulness and dependability for the 3-month forecasts vary significantly across the U.S., as shown in table 1.
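As a minimal sketch (not the authors' implementation), the two measures above can be computed from a series of forecast shifts in odds and observed departures. The 0.05 "useful" threshold below is an assumed placeholder, not the value used by Schneider and Garbrecht.

```python
# Sketch of "usefulness" and "dependability" for one forecast division.
# Assumptions (not from the paper): forecasts are represented as signed
# shifts in odds toward "above" (positive) or "below" (negative), and a
# forecast counts as "useful" if its shift exceeds a placeholder threshold.

USEFUL_SHIFT = 0.05  # assumed minimum shift in odds away from climatology


def usefulness(shifts):
    """Fraction of forecasts that depart enough from climatology to act on."""
    useful = [s for s in shifts if abs(s) >= USEFUL_SHIFT]
    return len(useful) / len(shifts)


def dependability(shifts, outcomes):
    """Among useful forecasts, fraction whose predicted direction (sign of
    the shift) matches the sign of the observed departure from average."""
    pairs = [(s, o) for s, o in zip(shifts, outcomes) if abs(s) >= USEFUL_SHIFT]
    if not pairs:
        return None  # no useful forecasts to score
    hits = sum(1 for s, o in pairs if (s > 0) == (o > 0))
    return hits / len(pairs)


# Example: eight seasonal forecasts (shift in odds) and observed departures.
shifts = [0.10, -0.08, 0.02, 0.15, -0.06, 0.00, 0.07, -0.12]
outcomes = [1.3, -0.4, 0.9, 2.1, 0.5, -0.2, 1.0, -1.1]

print(usefulness(shifts))               # 6 of 8 forecasts are "useful"
print(dependability(shifts, outcomes))  # 5 of the 6 useful forecasts hit
```

The same success-rate logic underlies the ratios in table 1, where each cell is hits over useful forecasts.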

These results are consistent with a recent analysis of NOAA/CPC forecast skill reported by Livezey and Timofeyeva (2008), which employed traditional meteorological measures for probabilistic forecasts. The point here, however, is that both sets of analyses addressed forecast performance over 3-month periods and the relatively large-area forecast divisions. As such, they do not address questions relative to dependability or skill if the forecasts are applied at smaller space or time scales.

The primary components of our forecast application methodology are a spatial downscaling methodology (Schneider and Garbrecht, 2002; Garbrecht et al., 2004) and a temporal disaggregation approach (Schneider et al., 2005) for the NOAA/CPC seasonal climate forecasts. Our spatial downscaling approach is different from the technique currently employed by NOAA/National Weather Service (NOAA/NWS) for their city-specific climate forecasts. We assume that the predicted shift in probability for the large spatial scale applies to all points within that forecast area. This approach deliberately sidesteps the challenges associated with deriving relationships between statistics for large areas and an embedded station (the approach taken by NOAA/NWS, which is unfortunately problematic for precipitation; e.g., Meyers et al., 2008) and defers the related problem to the analysis reported here. Our temporal disaggregation approach consists of two parts: first, transforming the overlapping 3-month forecasts to a sequence of 1-month forecasts (Schneider et al., 2005); second, employing a tailored weather generator that properly reflects the 1-month forecasts in the production of ensembles of daily "weather" (Garbrecht et al., 2004). Together, the spatial downscaling and temporal disaggregation techniques provide the means to apply the seasonal climate forecasts at the field spatial scale and daily time step. Hereafter, we refer to this collection of techniques as "spatiotemporal downscaling."
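The spatial rule just described (apply the division-scale shift to every embedded point) is simple enough to state in code. The temporal step shown here, averaging the three overlapping 3-month forecasts that contain a month, is only an assumed stand-in for the actual transform of Schneider et al. (2005), which is more involved.

```python
# Sketch of the spatial step, plus a simplified stand-in for the temporal
# step. The spatial rule follows the text: the division-scale shift in odds
# is assumed to apply unchanged at every station in the division. Averaging
# the overlapping 3-month forecasts is a placeholder, NOT the published
# disaggregation method.


def downscale_spatially(division_shift, stations):
    """Assign the division-scale probability shift to every station."""
    return {name: division_shift for name in stations}


def monthly_shift(three_month_shifts, month_index):
    """Placeholder disaggregation for precipitation: average the trailing,
    centered, and leading 3-month forecasts that include this month.
    three_month_shifts[i] is the forecast for months i..i+2."""
    windows = [three_month_shifts[i]
               for i in (month_index - 2, month_index - 1, month_index)
               if 0 <= i < len(three_month_shifts)]
    return sum(windows) / len(windows)


# Example: four overlapping 3-month forecasts (e.g., AMJ, MJJ, JJA, JAS),
# disaggregated for the third month and broadcast to two stations.
shifts_3mo = [0.06, 0.09, 0.12, 0.03]
stations = ["Abilene", "El Reno"]
per_station = downscale_spatially(monthly_shift(shifts_3mo, 2), stations)
print(per_station)
```

The design choice in the paper is to accept this uniform spatial broadcast and then quantify, after the fact, how much dependability is lost because stations do not always track their division.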

This brought us to the crux of this analysis: is there any forecast signal left after the spatiotemporal downscaling? What is the "net" outcome? Are the differences between monthly precipitation totals or average temperature at a station, and 3-month totals or averages at forecast division scales, so large that the NOAA/CPC forecasts might completely lose their already limited utility when we try to use them in agricultural management? We expect a negative impact on dependability, specifically, and the open question is the degree of reduction, or loss in probabilistic skill. If a sufficient degree of dependability survives the spatiotemporal downscaling, there is reason to proceed with modeling and development of climate forecast-dependent decision support using the methodologies in hand. If not, it might be prudent to defer until the forecasts improve enough to overcome the losses in dependability due to downscaling.

Note that all possible downscaling methodologies will face similar challenges, perhaps varying in degree and with region. The analysis results presented here are specific to our spatiotemporal downscaling methodology. In addition, these analysis results are intended to be indicative, rather than definitive or exhaustive.

METHODS

The variable we are downscaling is the forecast division 3-month departure from the climatological average for precipitation or average temperature. The utility parameter we are most concerned with is the dependability of the seasonal forecasts (Schneider and Garbrecht, 2006, hereafter SG06). By definition, dependability will decrease whenever the sign of the actual departure (positive or negative) is different at smaller space or shorter time scales than at forecast scales. An estimate of the expected reduction in dependability can be developed by doing a simple count of historical cases where the sign of departures at the different scales match (i.e., are the same sign). In other words, if the forecast division was wetter than average, was the station also wetter? Or if the 3-month total was drier than average, was the 1-month total also drier than average? Note that this analysis will not use actual seasonal forecasts; instead, we use daily station data


Table 1. Selected results for dependability from Schneider and Garbrecht (2006), tabulated by forecast direction and lead time. The study period covered 1997 through the first three months of 2005, a total of 97 forecasts at the shortest lead time. The ratios are the dependability for each forecast division, in fractional form. Dependability is defined as the number of matching (same direction) outcomes divided by the number of "useful" forecasts in that direction. A forecast was deemed useful if it satisfied a minimum departure from climatology. If the forecasts are reliable in the sense of "getting the odds right," then these ratios should be approximately equal to 0.5. Note that small samples (arbitrarily defined as fewer than six useful forecasts) may not be good indicators of future performance. Accordingly, all cases where the dependability was less than 0.5 or where there were fewer than six useful forecasts have been shaded in the table. The unshaded cases are deemed "dependable" at the 3-month forecast division scale.

Forecast Division | Warm 0.5-mo | Warm 3.5-mo | Warm 6.5-mo | Cool 0.5-mo | Wet 0.5-mo | Wet 3.5-mo | Dry 0.5-mo | Dry 3.5-mo
N New England | 1/1 | 2/4 | 2/3 | 1/4 | 0/1 | 0/0 | 0/0 | 0/0
NE New England | 1/1 | 2/4 | 1/2 | 0/3 | 0/1 | 0/0 | 0/0 | 0/1
N New York | 2/3 | 2/2 | 2/4 | 0/2 | 2/2 | 1/1 | 0/1 | 0/0
S New England | 1/1 | 4/5 | 3/3 | 0/1 | 1/1 | 0/0 | 0/0 | 0/0
E Great Lakes | 2/3 | 2/2 | 3/7 | 0/1 | 2/4 | 2/4 | 1/3 | 0/1
Ohio | 4/6 | 1/2 | 3/5 | 0/0 | 4/4 | 0/0 | 2/6 | 2/5
Mid-Atlantic Coast | 3/5 | 6/9 | 2/5 | 0/0 | 1/1 | 0/0 | 0/2 | 0/0
N Appalachians | 4/4 | 3/3 | 5/7 | 0/0 | 1/2 | 0/0 | 1/3 | 1/1
Central Appalachians | 7/12 | 4/6 | 4/7 | 0/0 | 1/1 | 1/1 | 0/3 | 1/3
Coastal Virginia | 6/9 | 5/8 | 6/10 | 0/0 | 1/1 | 0/0 | 1/2 | 0/0
S Appalachians | 8/10 | 9/11 | 7/11 | 0/1 | 0/2 | 0/2 | 1/3 | 0/0
Coastal Carolinas | 7/10 | 11/12 | 9/11 | 0/4 | 4/4 | 0/0 | 5/7 | 2/4
Interior Carolinas | 10/13 | 9/11 | 6/11 | 0/3 | 2/4 | 2/3 | 5/5 | 1/1
Upper Michigan | 6/10 | 6/10 | 6/9 | 0/3 | 4/5 | 1/1 | 3/5 | 0/1
N Minnesota | 8/13 | 5/9 | 5/8 | 0/6 | 4/5 | 1/2 | 0/2 | 1/2
E North Dakota | 9/12 | 4/6 | 5/6 | 1/6 | 1/3 | 0/2 | 0/1 | 0/1
W North Dakota | 8/11 | 4/6 | 3/5 | 1/6 | 1/3 | 0/1 | 1/3 | 0/1
E Montana | 6/9 | 4/5 | 2/4 | 0/5 | 1/2 | 1/2 | 2/4 | 1/4
N-Central Montana | 14/15 | 6/7 | 1/2 | 0/1 | 1/3 | 2/3 | 7/8 | 8/9
S-Central Montana | 12/14 | 4/6 | 2/3 | 0/1 | 2/5 | 2/3 | 5/7 | 6/8
W Montana | 14/15 | 5/9 | 4/7 | 0/1 | 3/6 | 2/4 | 7/7 | 7/8
N-Central Michigan | 4/8 | 3/8 | 4/8 | 0/2 | 2/6 | 1/2 | 3/9 | 0/3
S Michigan | 3/7 | 0/3 | 4/8 | 0/1 | 2/6 | 3/6 | 2/6 | 1/4
E-Central Illinois | 4/9 | 1/3 | 5/6 | 1/2 | 1/2 | 0/1 | 1/6 | 1/6
N Illinois | 4/9 | 3/6 | 5/6 | 1/3 | 1/1 | 0/0 | 1/3 | 1/2
N Wisconsin | 5/9 | 6/9 | 5/9 | 1/6 | 2/3 | 0/0 | 1/1 | 1/2
SE Minnesota | 7/9 | 6/9 | 7/9 | 2/7 | 0/2 | 0/0 | 0/1 | 2/3
E South Dakota | 6/10 | 4/6 | 3/4 | 1/6 | 1/3 | 0/0 | 0/1 | 3/4
Central South Dakota | 5/9 | 1/4 | 3/3 | 2/5 | 2/2 | 0/0 | 0/2 | 0/3
W South Dakota | 6/8 | 2/2 | 1/1 | 1/2 | 0/0 | 0/1 | 2/3 | 0/1
NE Wyoming | 7/7 | 0/1 | 0/1 | 0/1 | 1/2 | 1/2 | 2/3 | 0/0
NW Wyoming | 8/9 | 0/1 | 0/2 | 1/2 | 1/2 | 2/3 | 1/4 | 1/1
E Iowa | 4/7 | 4/6 | 4/5 | 5/8 | 1/2 | 0/0 | 0/2 | 2/3
NW Iowa | 6/9 | 3/5 | 3/3 | 3/7 | 1/3 | 0/0 | 1/2 | 2/5
Central Nebraska | 7/7 | 1/2 | 0/1 | 1/2 | 0/4 | 1/1 | 0/1 | 3/6
S Nebraska | 7/7 | 0/2 | 0/0 | 2/5 | 2/4 | 0/0 | 3/5 | 1/4
W NE Cheyenne | 6/6 | 1/1 | 0/0 | 1/2 | 1/1 | 0/0 | 0/0 | 0/3
E Kentucky | 6/9 | 3/5 | 5/6 | 0/0 | 2/3 | 1/1 | 1/5 | 1/4
W Kentucky | 6/8 | 3/5 | 3/4 | 0/2 | 2/2 | 2/3 | 2/6 | 3/5
SE Missouri | 4/7 | 5/7 | 2/3 | 0/2 | 1/1 | 1/1 | 2/4 | 2/3
NE Missouri | 3/7 | 1/2 | 4/4 | 2/3 | 0/0 | 0/0 | 0/3 | 1/2
NW Missouri | 6/7 | 1/2 | 2/2 | 4/5 | 2/3 | 0/0 | 1/3 | 2/4
E Kansas | 5/7 | 4/6 | 0/2 | 1/2 | 2/3 | 0/0 | 0/4 | 2/5
Central Kansas | 7/7 | 3/4 | 0/0 | 2/3 | 3/4 | 1/1 | 1/5 | 0/4
W Kansas | 7/9 | 4/5 | 0/1 | 2/4 | 4/5 | 1/1 | 1/6 | 0/4
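The shading rule stated in the caption of table 1 (shade any cell with dependability below 0.5 or with fewer than six useful forecasts) can be applied mechanically to the tabulated hits/useful ratios. This sketch assumes the cells are given as "hits/useful" strings.

```python
# Apply Table 1's criterion to a few tabulated "hits/useful" ratios:
# a cell is deemed "dependable" only if it has at least six useful
# forecasts AND a dependability (hits divided by useful) of at least 0.5.


def is_dependable(cell, min_useful=6, min_ratio=0.5):
    hits, useful = (int(x) for x in cell.split("/"))
    if useful < min_useful:
        return False  # small sample: not a good indicator of performance
    return hits / useful >= min_ratio


# A few warm-forecast entries at 0.5-month lead, copied from Table 1.
cells = {
    "S Florida": "41/47",          # dependable: large sample, high ratio
    "Ohio": "4/6",                 # dependable: exactly six useful forecasts
    "S New England": "1/1",        # not dependable: sample too small
    "Sacramento, Calif.": "8/20",  # not dependable: ratio below 0.5
}
for division, cell in cells.items():
    print(division, is_dependable(cell))
```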

averaged over single months and 3-month periods in comparison to encompassing forecast division data to develop our estimates of the frequency with which the signs of departures match. If the signs always match perfectly (frequency of 100%), there would be no loss in dependability expected due to the spatiotemporal downscaling. If the frequency is less than 100%, then we can expect the dependability score to be decreased by that factor.
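The sign-matching count just described, and its use as a multiplicative loss factor, can be sketched as follows. The departure values and the 0.80 large-scale dependability below are illustrative numbers, not results from the paper.

```python
# Sketch of the FOMS computation and its use as a loss factor.
# FOMS = fraction of historical cases where the sign of the departure at the
# large scale matches the sign at the smaller/shorter scale. Net
# dependability is then approximated as the product of the 3-month
# forecast-division dependability and the (decimal) FOMS.


def foms(large_scale, small_scale):
    """Frequency of matching signs between paired departures from average.
    Zero departures are treated as non-positive; none occur in this example."""
    pairs = list(zip(large_scale, small_scale))
    matches = sum(1 for a, b in pairs if (a > 0) == (b > 0))
    return matches / len(pairs)


# Illustrative departures (inches) for a division and an embedded station.
division_dep = [1.2, -0.8, 0.3, -1.5, 0.9, -0.2, 2.0, -1.1]
station_dep = [0.9, -0.5, -0.4, -1.8, 1.1, 0.6, 1.4, -0.7]

loss_factor = foms(division_dep, station_dep)  # 6 of 8 pairs match
net_dependability = 0.80 * loss_factor  # assumed 80% division dependability
print(loss_factor, net_dependability)
```

With the paper's averaged values, the same arithmetic gives, for example, 0.76 times a division's temperature dependability as the net downscaled estimate.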

Our analysis examines ten years (1991-2000) of actual precipitation and average temperature data at the spatiotemporal scales in question. The study duration was chosen as a compromise between considerations related to the probabilistic forecasts and the variable nature of the observations, versus the application decision-making framework. Probabilistic forecasts require multi-year applications to realize the forecast signal and any associated practical values, or for


Table 1 (continued).

Forecast Division | Warm 0.5-mo | Warm 3.5-mo | Warm 6.5-mo | Cool 0.5-mo | Wet 0.5-mo | Wet 3.5-mo | Dry 0.5-mo | Dry 3.5-mo
NE Colorado | 7/9 | 5/5 | 2/2 | 1/2 | 2/2 | 1/1 | 0/2 | 1/4
SE Colorado | 10/12 | 6/6 | 1/1 | 1/3 | 5/5 | 1/1 | 1/5 | 1/5
W Colorado | 12/14 | 12/14 | 5/7 | 0/2 | 0/1 | 0/0 | 0/4 | 0/4
SW Wyoming | 6/8 | 0/2 | 1/2 | 1/2 | 0/0 | 0/1 | 0/0 | 0/0
Central Tennessee | 7/8 | 7/9 | 6/7 | 0/0 | 2/4 | 0/1 | 0/2 | 0/0
W Tennessee | 6/7 | 6/9 | 5/6 | 0/2 | 2/3 | 0/0 | 0/2 | 0/0
Ozark Mountains | 6/8 | 4/6 | 3/5 | 0/0 | 2/2 | 0/0 | 0/3 | 1/3
Central Oklahoma | 7/10 | 5/7 | 0/1 | 0/1 | 3/8 | 0/3 | 0/3 | 1/4
Abilene, Texas | 11/14 | 8/10 | 1/2 | 1/4 | 4/9 | 2/5 | 1/6 | 0/5
N High Plains Texas | 13/16 | 8/10 | 2/3 | 0/1 | 4/10 | 3/7 | 0/9 | 0/6
N Georgia | 7/9 | 9/14 | 7/14 | 1/4 | 2/3 | 1/3 | 4/4 | 2/2
N Alabama | 7/9 | 7/10 | 8/13 | 1/4 | 0/0 | 0/0 | 2/4 | 0/1
Central Mississippi | 9/10 | 6/9 | 5/10 | 1/3 | 2/2 | 0/0 | 2/3 | 1/2
S Arkansas | 8/9 | 5/8 | 5/7 | 0/1 | 3/3 | 1/1 | 1/4 | 1/3
E Texas | 9/11 | 7/11 | 3/5 | 1/2 | 12/17 | 9/13 | 1/5 | 2/3
Dallas, Texas | 13/14 | 7/11 | 3/5 | 2/6 | 11/15 | 8/12 | 3/7 | 3/4
San Antonio, Texas | 19/25 | 12/21 | 7/10 | 3/5 | 9/14 | 6/9 | 3/7 | 3/6
Far S Texas | 25/33 | 15/22 | 9/13 | 0/5 | 10/11 | 7/9 | 7/9 | 4/6
W Central Texas | 25/30 | 10/18 | 5/8 | 0/5 | 8/14 | 4/9 | 4/8 | 4/8
W Texas Panhandle | 42/46 | 19/23 | 12/13 | 1/4 | 6/10 | 3/7 | 9/13 | 6/8
Jacksonville, Fla. | 12/15 | 11/16 | 11/16 | 4/10 | 8/11 | 5/6 | 11/12 | 6/6
Central Florida | 18/26 | 17/25 | 19/27 | 2/3 | 11/12 | 7/11 | 19/23 | 7/10
S Florida | 41/47 | 47/52 | 50/53 | 0/2 | 14/16 | 13/17 | 15/20 | 8/12
Florida Panhandle | 13/16 | 7/12 | 10/15 | 3/5 | 4/6 | 1/3 | 6/7 | 4/4
Coastal Louisiana | 13/14 | 8/11 | 8/11 | 1/4 | 3/5 | 1/2 | 4/4 | 2/2
Coastal Texas, Houston | 14/16 | 10/14 | 9/10 | 1/4 | 9/13 | 5/9 | 2/5 | 0/2
NE Washington | 13/17 | 6/12 | 5/11 | 0/2 | 9/10 | 6/7 | 1/3 | 2/2
Pendleton, Oregon | 12/16 | 7/11 | 6/9 | 0/1 | 2/4 | 2/3 | 2/2 | 0/0
Central Washington | 18/20 | 9/13 | 7/11 | 0/1 | 5/9 | 4/6 | 3/3 | 2/2
Seattle, Wash. | 19/24 | 17/24 | 15/21 | 0/0 | 9/15 | 11/14 | 4/4 | 6/7
Coastal Washington | 23/26 | 14/26 | 18/30 | 0/0 | 11/18 | 11/13 | 4/6 | 5/8
E Idaho | 12/14 | 3/6 | 4/7 | 1/3 | 0/0 | 0/1 | 1/1 | 1/1
Idaho Central Mountains | 16/18 | 7/10 | 5/8 | 0/1 | 4/4 | 3/3 | 2/2 | 1/1
SW Idaho | 14/16 | 6/10 | 5/9 | 0/2 | 1/3 | 0/1 | 2/3 | 3/4
E Oregon | 13/15 | 6/11 | 5/9 | 0/0 | 2/4 | 1/2 | 1/2 | 2/2
Oregon Coastal Valley | 14/17 | 13/18 | 9/15 | 0/0 | 6/7 | 5/7 | 1/1 | 2/2
Oregon Coast | 18/19 | 13/24 | 8/20 | 0/0 | 9/14 | 4/10 | 4/4 | 4/4
NE Utah | 13/17 | 18/20 | 4/6 | 0/1 | 0/0 | 0/0 | 1/1 | 0/0
SE Utah | 22/30 | 25/28 | 12/17 | 0/0 | 3/4 | 0/0 | 0/6 | 0/4
W Utah | 22/31 | 18/25 | 11/18 | 0/0 | 2/2 | 0/0 | 1/2 | 1/3
NE Nevada | 19/27 | 12/22 | 9/15 | 0/0 | 1/2 | 0/0 | 4/6 | 3/4
NW Nevada | 13/21 | 19/24 | 11/14 | 0/0 | 4/5 | 0/0 | 5/7 | 4/5
Sacramento, Calif. | 8/20 | 11/25 | 4/14 | 0/0 | 4/7 | 1/1 | 3/3 | 2/2
N Calif. Coast | 15/20 | 13/21 | 9/17 | 0/0 | 5/8 | 0/2 | 2/2 | 1/1
Central Nevada | 35/43 | 33/41 | 23/30 | 0/0 | 5/7 | 0/0 | 1/3 | 3/5
Fresno, Calif. | 16/25 | 12/25 | 11/21 | 0/0 | 5/9 | 1/1 | 5/6 | 3/4
Central Calif. Coast | 14/19 | 9/20 | 6/17 | 0/0 | 6/7 | 2/2 | 2/3 | 0/2
S Calif. Coast | 19/23 | 15/30 | 9/21 | 0/3 | 7/9 | 2/2 | 5/5 | 1/2
SE California | 36/57 | 36/58 | 28/49 | 0/0 | 6/10 | 2/2 | 8/8 | 2/4
Las Vegas, Nevada | 56/62 | 57/63 | 56/61 | 0/0 | 8/10 | 3/4 | 9/12 | 6/7
SW Arizona | 66/75 | 60/70 | 60/68 | 0/0 | 7/10 | 3/7 | 16/18 | 11/12
NE Arizona | 43/51 | 37/47 | 32/39 | 0/0 | 5/8 | 3/6 | 11/17 | 7/9
SE Arizona | 67/73 | 63/70 | 58/64 | 0/0 | 8/12 | 3/6 | 19/20 | 13/14
N New Mexico | 15/20 | 13/16 | 3/4 | 0/0 | 6/10 | 1/4 | 2/10 | 1/8
E New Mexico | 20/24 | 11/14 | 4/5 | 1/2 | 6/11 | 2/8 | 3/11 | 1/9
C New Mexico | 30/38 | 17/24 | 12/16 | 0/0 | 8/11 | 3/6 | 8/14 | 4/9
S New Mexico | 44/52 | 25/31 | 18/22 | 1/1 | 7/13 | 2/7 | 11/13 | 8/10

any assessment of forecast performance, so ten or more years of analysis would be preferred. Further, ten years is a short period from a climatology viewpoint (especially for precipitation); i.e., 10-year statistical descriptions only capture part of the variability. But the practical reality is that any longer period loses relevance for possible agricultural applications by


Figure 1. Locations of stations used in correlation analysis, in six regions: the Great Lakes, Southeast, Southern Great Plains (TX through KS), Northern Great Plains, Pacific Northwest, and Southwest. Squares indicate sites used for both average temperature and precipitation, circles indicate sites used just for precipitation, and triangles indicate sites used just for average temperature.

individual operators. Even five years is a long time from a rancher or farmer's point of view, especially relative to typical short-term agricultural operating loans. Ten years serves as a workable compromise.

We chose two locations in each of eight forecast divisions in six regions of the contiguous U.S., for a total of 96 locations for each variable (fig. 1). While we considered conducting this analysis just for regions where the forecasts have demonstrated dependability, we decided such an approach would be less than satisfactory. Climate forecast techniques continue to evolve, and it is impossible to anticipate where performance improvements might manifest first. We believe it is preferable to have an answer that will continue to be valid as the forecasts improve, and since our analysis approach does not depend on the current forecast techniques per se, this should be possible.

We used actual average temperature and precipitation data over a 10-year period at locations (NOAA/NCDC, 2005), and averaged over forecast divisions (called climate divisions by NOAA/CPC, e.g., NOAA/CPC, 2006a) on 1-month and 3-month scales, including all 12 overlapping 3-month forecast periods. Station data sites were chosen on the basis of continuity of monthly data in the record, with the majority of sites requiring no interpolation from nearby stations to fill data gaps. Only 0.1% of the precipitation data and 0.8% of the average temperature data required "filling" to provide continuous monthly data during the study period, with negligible impact on the resulting analysis.

The number of cases (points of comparison) in the analysis depends on which aspect of the downscaling is under consideration (space, time, or both) and the variable (precipitation or average temperature). The time disaggregation technique developed for precipitation (Schneider et al., 2005) uses all three of the 3-month periods that include the month in question; e.g., June depends on April-May-June, May-June-July, and June-July-August forecasts and averages. The time disaggregation technique developed for average temperature uses only the centered 3-month period for the month in question; e.g., June depends only on the May-June-July forecast and averages. The differences in the techniques reflect the differences in the characteristics of precipitation (highly variable in space and time) and average temperature (less variable). Table 2 summarizes the number of cases by variable and by stage of downscaling.
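The bookkeeping behind these case counts can be sketched as follows; the year-wrapping of 3-month periods across December/January is an assumption for illustration, consistent with the paper's use of all 12 overlapping periods.

```python
# Sketch of which 3-month periods contribute to each month, by variable:
# for precipitation, each month draws on all three overlapping 3-month
# periods that contain it (trailing, centered, leading); for average
# temperature, only the centered period is used.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]


def contributing_periods(month, variable):
    """Return the 3-month periods (as month triples, wrapping the year)
    used to disaggregate the given month."""
    m = MONTHS.index(month)
    starts = [m - 2, m - 1, m] if variable == "precipitation" else [m - 1]
    return [tuple(MONTHS[(s + k) % 12] for k in range(3)) for s in starts]


print(contributing_periods("Jun", "precipitation"))
# [('Apr', 'May', 'Jun'), ('May', 'Jun', 'Jul'), ('Jun', 'Jul', 'Aug')]
print(contributing_periods("Jun", "temperature"))
# [('May', 'Jun', 'Jul')]

# Cases per station over the 10-year study period, as in Table 2:
print(10 * 12 * 3)  # precipitation, time disaggregation: 360 cases
print(10 * 12)      # average temperature: 120 cases
```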

Since "frequency of matching signs of departures from average" is a long and clumsy phrase, we will use FOMS hereafter, usually with units of percentage. A visual example of the FOMS between 3-month precipitation totals for a forecast division, and totals for a station within that forecast division, is presented in figure 2.

An example of the FOMS between 3‐month and 1‐monthstation precipitation is presented in figure 3. Figure 3 illus-

Table 2. Number of cases in each step of the frequency analysis.Number of cases in 10‐year study period

PrecipitationSpatial downscaling: forecast division to station, both are 3‐month periods 120 per station: 10 years, each with 12 3‐month periodsTime disaggregation: 3‐month to 1‐month at individual stations 360 per station: 10 years, each with 12 3‐month periods, times 3Both space and time: forecast division at 3‐months to stations at 1‐month 360 per station: 10 years, each with 12 3‐month periods, times 3

Average temperatureSpatial downscaling: forecast division to station, both are 3‐month periods 120 per station: 10 years, each with 12 3‐month periodsTime disaggregation: 3‐month to 1‐month at individual stations 120 per station: 10 years, each with 12 3‐month periodsBoth space and time: forecast division at 3‐months to stations at 1‐month 120 per station: 10 years, each with 12 3‐month periods

920 TRANSACTIONS OF THE ASABE

3-m

onth

Dep

artu

res

from

Ave

rage

(in

ches

)fo

r Fo

reca

st D

ivis

ion

(54)

incl

udin

g A

bile

ne, T

X

3-month Departures from Average (inches) at Abilene, TX

-/+4/120

+/+49/120

-/-51/120

+/-16/120

ÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔ-10

-5

0

5

10

-10 -5 0 5 10

Departures From Average Precipitation

Slope of linear fit: 0.76R = 0.78Percent Matching Sign = 83.3

Figure 2. The top left quadrant encompasses cases where the large‐areaprecipitation departure from average was positive (wetter than average),but the station departure was negative (drier than average). Similarly, thetop right quadrant holds cases where both departures were positive (wet‐ter than average). The lower right quadrant holds cases with negativelarge‐area departures but positive station departures, and the lower leftquadrant holds cases where both departures were negative. For this fore‐cast division and station, 100 of 120 cases had matching signs, or afrequency‐of‐matching‐signs (FOMS) of 83% (100/120 × 100).

ÓÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓÓ

ÓÓÓÓÓÓÓÓÓÓÓÓÓÓÔÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔÔ

ÔÔÔÔÔÔÔÔÔÔÔÔÔ

-10

-5

0

5

10

-10 -5 0 5 10

Departures From Average Precipitation

centeredleadingtrailing

3-m

onth

Dep

artu

res

from

Ave

rage

(in

ches

) at

Abi

lene

, TX

1-month Departures from Average (inches) at Abilene, TX

Slope of linear fit: 0.97R = 0.57Percent Matching Sign = 70.3

-/+64/360

+/+95/360

-/-158/360

+/-43/360

Figure 3. An example showing the FOMS for 3‐month versus 1‐monthprecipitation totals for a station. All months are combined on this plot,showing all contributing 3‐month periods for each month, producing a to‐tal of 360 points. The quadrants have similar meanings as in figure 2; forexample, the top right quadrant encompasses cases where the 3‐monthand 1‐month precipitation departures from average were both positive(wetter than average). The different symbols represent the 3‐month peri‐ods contributing to the 1‐month disaggregated value. For example, forJune, “centered” indicates comparison between the month in question(June) and the 3‐month period May‐June‐July, while “trailing” indicatesthat the month in question (June) is at the end of the comparison 3‐monthperiod April‐May‐June. For this station, the matching sign cases total 253of 360, or FOMS of 70%.

trates the most complex of the comparisons, with the different 3-month periods contributing to the estimation of each 1-month precipitation departure indicated in order to illustrate the general insensitivity of FOMS to the contributing 3-month period. Since the FOMS did not depend in any systematic fashion on the relative positioning of the 1-month period, all results reported below are for the combined 360 cases. An example for both the spatial downscaling and temporal disaggregation is not shown. Generally, the FOMS in time are lower than the FOMS in space, for both variables.
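As a concrete illustration (not the authors' code), the sign-matching statistic and quadrant counts shown in figures 2 and 3 can be computed from paired departure series as sketched below; the function names and the short data series are hypothetical, and the handling of exactly-zero departures is an assumption the paper does not specify:

```python
def foms(large_scale, station):
    """Frequency of matching signs (%) between paired departure series.

    A departure of exactly zero is counted as positive here; the paper
    does not state how zero departures were handled.
    """
    matches = sum((x >= 0) == (y >= 0) for x, y in zip(large_scale, station))
    return 100.0 * matches / len(large_scale)

def quadrant_counts(large_scale, station):
    """Count cases per sign quadrant, keyed (large-scale sign, station sign)."""
    counts = {}
    for x, y in zip(large_scale, station):
        key = ("+" if x >= 0 else "-", "+" if y >= 0 else "-")
        counts[key] = counts.get(key, 0) + 1
    return counts

# Hypothetical 3-month forecast division vs. 1-month station
# precipitation departures from average (inches)
division = [1.2, -0.5, 0.8, -2.1, 0.3, -0.7]
station = [0.4, -0.2, -0.6, -1.0, 0.1, -0.3]
print(foms(division, station))            # 5 of 6 signs match -> 83.33...
print(quadrant_counts(division, station))
```

With longer series (120 points per month pair in figure 2, 360 points per station in figure 3), the same tally reproduces the FOMS percentages quoted in the captions.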

RESULTS

Recall that our goal is to estimate the degree of impact on

forecast dependability due to our spatiotemporal downscaling methodology, and that we are using the "frequency of matching signs of departures from average" (FOMS) in actual data at the relevant spatiotemporal scales to estimate the magnitude of the loss in dependability. FOMS will necessarily depend to some degree on the number and location of stations selected for analysis, and the length and period of record (for example, 1961-1990 versus 1971-2000). However, we expect our selected cases to be sufficient to provide guidance in this application.

The spatial downscaling (forecast division to station for 3-month values), temporal disaggregation (3-month to 1-month values at stations), and the combined results (3-month forecast division vs. 1-month station values) were examined separately to provide some insight as to which aspect of the process produced the largest impact. We also searched for any indication of significant differences in FOMS with region or season, coastal vs. inland sites, and arid vs. humid environments. Differences in FOMS for a given month between sites can be large, but there were no consistent patterns to support separation by any of the suspected seasonal or geographic factors (e.g., coastal versus inland). As a result, we present the FOMS results for all months as a single number for each station.

PRECIPITATION

The FOMS for precipitation, for each station, grouped by

region, for downscaling in space (forecast division average total to station total), disaggregation in time (station 3-month to 1-month totals), and the full spatiotemporal downscaling (forecast division 3-month total to station 1-month total) are presented in figure 4.

The station-to-station variation in FOMS due to spatial downscaling is large (a range of 25% in the Great Lakes and Pacific Northwest regions), but the average FOMS value for all stations is better than might have been expected, about 80%. The spread in FOMS between stations due to temporal disaggregation is smaller (13% in the Southwest), but the magnitude is also smaller, averaging a bit over 70%. For the complete spatiotemporal downscaling, the average FOMS for precipitation is only 66%, with a spread of 14%. Stated another way, the direction of the departure from average precipitation (wetter or drier) is different for 1-month station data from that of the 3-month forecast division data roughly 1 in 3 times. This means that the dependability of the large-scale seasonal precipitation forecasts will be decreased accordingly when the forecasts are applied at 1-month and local scales.


[Figure 4 panels: "Downscaling Precipitation in Space," "Downscaling Precipitation in Time," and "Downscaling Precipitation in Space and Time"; y-axis FOMS (%) from 50 to 100; x-axis regions GL, NGP, PNW, SW, SGP, SE.]

Figure 4. FOMS for precipitation departures at each station, organized by region and type of downscaling or disaggregation. The abbreviations refer to the regions in figure 1: GL = Great Lakes, NGP = Northern Great Plains, PNW = Pacific Northwest, SW = Southwest, SGP = Southern Great Plains, and SE = Southeast. The mean FOMS for each region is indicated by a horizontal line. The overall mean FOMS in precipitation across all stations after downscaling and disaggregation (average for all values in the last panel) is 66.4%.

[Figure 5 panels: "Downscaling Average Temperature in Space," "Disaggregating Average Temperature in Time," and "Downscaling Avg. Temperature in Space and Time"; y-axis FOMS (%) from 50 to 100; x-axis regions GL, NGP, PNW, SW, SGP, SE.]

Figure 5. FOMS for average temperature departures at each station, organized by region and type of downscaling or disaggregation. The abbreviations are the same as in figure 4, and the mean FOMS for each region is indicated with a horizontal line. The overall mean FOMS in average temperature across all stations after downscaling and disaggregation (average of all values in last panel) is 76.6%.


If the variations in space and time were statistically independent, one would expect the FOMS after downscaling and disaggregation to be the product of the (decimal) FOMS of the two components: 0.8 × 0.7 = 0.56, or 56%. The good news is that they are dependent to a degree, so the net FOMS is higher, but 66% does imply a significant loss (34%) in dependability for spatiotemporally downscaled precipitation forecasts.
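The independence expectation quoted above can be checked with a quick simulation; this is an illustrative sketch (not part of the paper's analysis) that treats the rounded component FOMS values as probabilities and follows the text's product convention, under which a net match requires the sign to survive both steps:

```python
import random

random.seed(1)

p_space, p_time = 0.80, 0.70  # rounded spatial and temporal FOMS (as decimals)
trials = 200_000

# Under independence, the net sign matches only when it survives both the
# spatial downscaling step and the temporal disaggregation step.
net = sum(
    (random.random() < p_space) and (random.random() < p_time)
    for _ in range(trials)
)
print(round(net / trials, 2))  # close to 0.8 * 0.7 = 0.56
```

The observed net FOMS of 66% sits well above this independence baseline, which is the dependence the text refers to.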

AVERAGE TEMPERATURE

The FOMS for average temperature, for each station,

grouped by region, for downscaling in space, disaggregation in time, and the combined procedures are presented in figure 5. The pattern in scatter and magnitude of the FOMS for average temperature departures is very similar to that for precipitation: spatial downscaling produces a larger spread (20%) in FOMS than temporal disaggregation (16%), but has a larger average FOMS. FOMS after spatial downscaling averages about 88%, after temporal disaggregation about 79%, and the average FOMS after both techniques is 76%, with a spread of 18%. In other words, the direction of the departure from average temperature (warmer or cooler) is different for 1-month station data from that of the 3-month forecast division data roughly 1 in 4 times. This implies a reduction in large-scale dependability by 24% for 1-month and location applications of the seasonal average temperature forecasts.

Stations in the Pacific Northwest and Southwest have slightly higher FOMS for precipitation departures than the other regions, but given the high scatter between stations, and the significant terrain influences in those regions, the difference could be an artifact of the particular set of stations chosen for the analysis. The FOMS for average temperature departures also vary slightly by region, but we judge these differences to be possible sampling artifacts as well. The primary difference between the FOMS for average temperature departures versus precipitation departures is in the magnitude of the overall average FOMS values, which are higher for average temperature. This 10% difference in average FOMS (66% versus 76%) after spatiotemporal downscaling for the two variables is a direct reflection of the more variable nature of precipitation compared to average temperature on these scales.

POST‐DOWNSCALING NET DEPENDABILITY

Since our goal is an estimate of the possible impact due to spatiotemporal downscaling of future forecasts, and given the lack of strong indications of variation in FOMS with location and season, we simply averaged the FOMS across all seasons and locations to produce a number that represents our expected impact on forecast dependability for each variable. It has long been understood that the spatiotemporal correlation in temperature is higher than that for precipitation, so we expected the losses in dependability due to spatiotemporal downscaling to be less with average temperature and greater for precipitation, and that is what we found. The multiplicative FOMS factors are 0.67 for precipitation dependability and 0.76 for average temperature dependability. The next step is to apply these factors to the large-scale forecast dependabilities to determine where the forecasts retain sufficient dependability (in the sense of correctly predicting the odds for useful departures from climatology) to justify downscaling and examining the forecasts for possible incorporation into agricultural decision support systems.

To compute the net dependability that we expect for downscaled forecasts, we multiplied the FOMS factor with the 3-month/forecast division dependability for each forecast division (as decimals), as reported in SG06. The SG06 dependability results were developed from forecasts issued from Jan-Feb-Mar 1997 through Jan-Feb-Mar 2005 (97 forecast cycles over a bit more than eight years), which is not a direct match to the 10-year analysis period used to develop the multiplicative FOMS factors (1991-2000). We do not expect the mismatch in analysis period to be important in this analysis. We also eliminated all forecast divisions with fewer than six useful forecasts (less than 6% of the 97 forecasts issued during that period), expecting such low frequencies of useful forecasts (Schneider and Garbrecht, 2003) to be of little utility in agricultural applications.
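A minimal sketch of this net-dependability screening; the division identifiers, large-scale dependability values, and useful-forecast counts below are hypothetical stand-ins for the SG06 tables, while the FOMS factors and thresholds are those stated in the text:

```python
# Multiplicative FOMS loss factors from this study (as decimals)
FOMS_FACTOR = {"precipitation": 0.67, "temperature": 0.76}

def net_dependability(large_scale_pct, variable):
    """Net dependability (%) expected after spatiotemporal downscaling."""
    return large_scale_pct * FOMS_FACTOR[variable]

# Hypothetical forecast divisions: (large-scale dependability %, useful forecasts)
divisions = {
    "FD-087": (82.0, 14),
    "FD-094": (70.0, 9),
    "FD-042": (88.0, 4),  # fewer than six useful forecasts -> eliminated
}

# Keep divisions with at least six useful forecasts and net dependability > 50%
promising = {
    name: round(net_dependability(dep, "precipitation"), 1)
    for name, (dep, n_useful) in divisions.items()
    if n_useful >= 6 and net_dependability(dep, "precipitation") > 50.0
}
print(promising)  # only FD-087 clears both screens
```

Note how a respectable 70% large-scale dependability falls below the 50% floor once multiplied by the 0.67 precipitation factor, which is why so few precipitation divisions survive in figure 6.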

Figure 6. Maps of the net dependability of spatiotemporally downscaled forecasts for the shortest lead time; the numbers are the percentages of forecasts expected to have correctly predicted the direction of 1-month station precipitation or average temperature departures. Forecast divisions with low usefulness (fewer than six forecasts satisfying the 8% departure threshold in SG06) are left blank. Regions with net dependabilities of 50% or larger are emphasized with shading (cont'd).




The resulting net dependability for each forecast division at the shortest lead time forecasts (0.5 months), separated by variable and direction, is shown in figure 6. Net dependability results for longer lead times can be derived in the same manner from the dependabilities reported in figures 3 through 6 in SG06.

The numbers on the maps in figure 6 can be interpreted as the percentage of forecasts expected to have correctly predicted the direction of 1-month station precipitation or average temperature departures when the forecasts were for conditions at least 8% wetter, drier, warmer, or cooler than the 30-year average (our threshold for "usefulness"), per the definition of dependability in SG06. Since the SG06 analysis used the average (mean) of the 3-month, large-scale climatology as the dividing point, by definition a dependable forecast (one that correctly predicts the odds) will have a dependability of 50%. (The skewness of precipitation does produce a difference between the mean and median (50% probability), but we judge this difference to be small enough to ignore in this context.) From a practical viewpoint, a net dependability greater than 50% is obviously preferred. Despite the variability in our estimate of losses in dependability due to downscaling (the FOMS multiplicative factors), we choose to continue to use values of 50% net dependability as the required minimum value for designating a forecast division to be promising for further investigation for seasonal forecast applications at field scale.

Note that these are expected values, in the sense of "average over 10 years." As shown in the FOMS results, individual stations can experience significantly better or worse results over a 10-year period.

The biggest losses in net dependability from the spatiotemporal downscaling are for the precipitation forecasts. The impact of the 66% FOMS for precipitation departures is significant, reducing the number of forecast divisions with at least six useful forecasts and dependabilities >50% from 33 to 8 for wetter than average forecast departures, and from 22 to 13 for drier than average forecast departures. (There are 102 forecast divisions covering the contiguous U.S.)

With the larger FOMS, the average temperature forecasts fare much better. The number of forecast divisions with at least six useful forecasts for warmer than average conditions and dependabilities >50% dropped from 91 to 78. Unfortunately, for the cooler than average forecasts, the single forecast division that satisfied the dependability criteria at the 3-month/forecast division scale dropped below 50% in net dependability.

CONCLUSION

The point of this analysis was to determine if a sufficient

degree of dependability survives our spatiotemporal downscaling methodology to justify possible modeling and development of climate forecast-dependent decision support using the current NOAA/CPC climate forecasts and methodologies in hand. The resulting guidance is mixed, dependent on region and forecast variable, with the forecasts for above-average temperature emerging as worthy of consideration in 78 of 102 forecast divisions, covering most of the contiguous U.S. The Northeast, the Great Lakes, parts of the Northern Great Plains, interior California, and northwest Nevada are the only regions where net dependability is insufficient, precluding immediate consideration of use of the warmer than average forecasts. Forecasts for wetter than average conditions retained sufficient net dependability to encourage further development in only 8 of 102 forecast divisions, and in 13 of 102 forecast divisions for drier than average forecasts, all in regions well-known to experience the strongest ENSO impacts on precipitation. These forecast divisions are located in Florida, south Texas, southwest New Mexico, Arizona, central and southern California, and parts of Oregon, Washington, Idaho, and Montana. Conversely, forecasts for cooler than average temperature do not retain sufficient net dependability after downscaling to be an attractive option in any part of the contiguous U.S. at this point in time.

For anyone considering the use of downscaled NOAA/CPC forecasts in agricultural decision support, we suggest a few checks before proceeding, due to the large station-to-station variability in correlations, and aspects of the seasonal timing of the forecasts. Net dependability is necessary, but not sufficient, to guarantee utility of seasonal forecasts for a particular application. For example, useful forecasts might not be offered during the months most critical to a particular crop, or the magnitude of the forecast departures from average might not be large enough to induce a discernible impact on productivity or financial outcome. Those located in a region with net downscaled dependability >50% should examine the climate forecast time series for the encompassing forecast division (digitally available at NOAA/CPC, 2006b) to determine if the timing and frequency of forecasts appear promising for the crop or forage of interest. If potentially useful forecasts are offered during the months of interest, then calculate the FOMS between the 3-month forecast division (NOAA/CPC, 2006c) and the closest 1-month station values for precipitation or average temperature. If the FOMS are close to (or better than) the average numbers reported herein, consider downscaling the NOAA/CPC climate forecasts to pursue the development of climate forecast-based decision support.

An alternative to the use of downscaled NOAA/CPC forecasts is the development of custom climate forecast tools for a particular crop and location. This approach is being used successfully for a number of crops in Florida (e.g., Southeast Climate Consortium, 2008), but it can require a significant development effort in comparison to that required to downscale the NOAA/CPC forecasts.

The failures in net dependability reported here are primarily the result of the limits in our collective knowledge of the sources of climate variability for individual locations and are only weakly related to the choice of downscaling methodology. All general climate forecasts, regardless of the agency producing them, suffer limitations similar to those of the NOAA/CPC forecasts examined here (e.g., Goddard et al., 2003), and lose skill when downscaled (e.g., Gong et al., 2003). The skill of the NOAA/CPC seasonal climate forecasts can be expected to improve on large spatiotemporal scales as innovations continue to be tested, demonstrated, and implemented. However, the fundamental mismatch between forecast scales and application scales will limit the utility of any such improvements for agricultural applications. Currently, the seasonal forecasts are based on averages over NOAA/NCDC climate divisions, huge areas that encompass significant variability in seasonal precipitation. This variability is the basis of the 80% FOMS in spatial downscaling of precipitation. Forecasts developed for smaller areas, and in particular the experimental climate divisions based on precipitation variability (Wolter and Allured, 2007), have the potential to avoid most of the spatial downscaling issue, assuming that they display dependability comparable to or slightly less than the current forecasts. Noting that the loss in net dependability due to temporal disaggregation is distinctly larger than that due to spatial downscaling, it would appear that the bigger improvement could be achieved if the climate forecasts were offered for single months, rather than 3-month periods. Again, this would depend on the skill of the 1-month forecasts, but the dependability would not need to be as high since one could avoid the 3-month to 1-month disaggregation completely. Regardless, the availability of smaller spatial-scale, 1-month forecasts would significantly simplify the evaluation and implementation of the NOAA/CPC forecasts in agricultural applications. On this basis alone, such a line of development deserves consideration as a possible improvement to the current NOAA/CPC forecasts.

REFERENCES

Barnston, A. G., Y. He, and D. A. Unger. 2000. A forecast product that maximizes utility for state-of-the-art seasonal climate prediction. Bull. American Meteor. Soc. 81(6): 1271-1279.

Garbrecht, J. D., J. M. Schneider, and X. J. Zhang. 2004. Downscaling NOAA's seasonal precipitation forecasts to predict hydrologic response. In Proc. 18th Conference on Hydrology. Paper 6.8. Boston, Mass.: American Meteorological Society.

Goddard, L., A. G. Barnston, and S. J. Mason. 2003. Evaluation of the IRI's "net assessment" seasonal climate forecasts: 1997-2001. Bull. American Meteor. Soc. 84(12): 1761-1781.

Gong, X., A. G. Barnston, and M. N. Ward. 2003. The effect of spatial aggregation on the skill of seasonal precipitation forecasts. J. Climate 16(18): 3059-3071.

Livezey, R. E., and M. M. Timofeyeva. 2008. Insights from a skill analysis of the first decade of long-lead U.S. three-month temperature and precipitation forecasts. Bull. American Meteor. Soc. 89 (in press).

Meyers, J. C., M. Timofeyeva, and A. C. Comrie. 2008. Developing the local 3-month precipitation outlook (abstract). In Proc. 19th Conference on Probability and Statistics. Paper P1.4. Boston, Mass.: American Meteorological Society. Available at: www.confex.com/ams/htsearch.cgi. Accessed 2 May 2008.

NOAA/CPC. 2006a. Seasonal outlooks: Probability of exceedance (POE) maps. Silver Spring, Md.: NOAA Climate Prediction Center. Available at: www.cpc.ncep.noaa.gov/products/predictions/long_range/poe_index.php?lead=1&var=p. Accessed 12 December 2006.

NOAA/CPC. 2006b. Probability of exceedance (POEs) of CPC's long-lead seasonal forecasts for temperature and precipitation, since December 1994. Silver Spring, Md.: NOAA Climate Prediction Center. Available at: www.cpc.ncep.noaa.gov/pacdir/NFORdir/HUGEdir2/hut.html. Accessed 12 December 2006.

NOAA/CPC. 2006c. CPC outlook archive: Observation data. Silver Spring, Md.: NOAA Climate Prediction Center. Available at: www.cpc.ncep.noaa.gov/pacdir/NFORdir/HUGEdir2/huo.html. Accessed 12 December 2006.

NOAA/NCDC. 2005. Surface data: Daily cooperative station data. Asheville, N.C.: NOAA National Climatic Data Center. Available at: www.ncdc.noaa.gov/oa/climate/climatedata.html#daily. Accessed 20 June 2005.

Schneider, J. M., and J. D. Garbrecht. 2002. A blueprint for the use of NOAA/CPC precipitation climate forecasts in agricultural applications. In Proc. 3rd Symposium on Environmental Applications. Paper J9.12. Boston, Mass.: American Meteorological Society.

Schneider, J. M., and J. D. Garbrecht. 2003. A measure of the usefulness of seasonal precipitation forecasts for agricultural applications. Trans. ASAE 46(2): 257-267.

Schneider, J. M., and J. D. Garbrecht. 2006. Dependability and effectiveness of seasonal forecasts for agricultural applications. Trans. ASABE 49(6): 1737-1753.

Schneider, J. M., J. D. Garbrecht, and D. A. Unger. 2005. A heuristic method for time disaggregation of seasonal climate forecasts. Weather and Forecasting 20(2): 212-221.

Southeast Climate Consortium. 2008. AgClimate: A Service of the Climate Consortium. Tallahassee, Fla.: Florida State University. Available at: www.agclimate.org/Development/apps/agClimate/controller/perl/agClimate.pl. Accessed 5 May 2008.

Wolter, K., and D. Allured. 2007. New climate divisions for monitoring and predicting climate in the U.S. Intermountain West Climate Summary 3(5): 2-6.
