
WORKING WITH STATE AND LOCAL SERVICE DELIVERY SYSTEMS

The Politics of Evaluating Educational Opportunity Programs at the Community College Level

ANNA F. LOBOSCO
JUDITH S. KAUFMAN

University at Albany
State University of New York

The conflict between the political agenda and practical implementation is not unique to the evaluation of Educational Opportunity Programs at community colleges; it is a problem that plagues the development, initiation, and maintenance of all innovative services and service delivery systems. It is a particular feature of most remedial education programs due to their regulated (legislated and funded) nature. This article documents the interaction between state and local service delivery systems and discusses the politics of conducting a multisite evaluation that straddled state and local funding mechanisms and service delivery systems, in terms of the problems encountered and appropriate evaluation practice.

Conflict between the legislative agenda at the state level and practical implementation of any program or innovation at the local level is not unique to the establishment of academic support services at the community college level in New York State; it is a problem that plagues the development, initiation, and maintenance of all innovative services and service delivery systems. One of the major trends in contemporary human services planning is the development of a coordinated system of services delivered at the local level (Healy, 1983).

AUTHORS' NOTE: This article is based on a multisite program evaluation conducted by the Evaluation Consortium at Albany (School of Education, the University at Albany, State University of New York). The evaluation was contracted by the State University of

EVALUATION REVIEW, Vol. 13, No. 2, April 1989, 141-156
© 1989 Sage Publications, Inc.


Therefore, evaluators increasingly will be required to conduct evaluations that straddle service delivery systems and assess relations between policy and practice (McLaughlin, 1987). The political considerations noted in this retrospective analysis of both the conduct and outcome of a multisite evaluation can assist evaluators in the focus, design, choice of methods, and management of future evaluations.

BACKGROUND

For 1986-87, the governor and the legislature of New York State approved $400,000 to strengthen and/or establish counseling and tutoring support services at three community colleges for EOP-qualified students on an experimental basis. The funds also included moneys to evaluate each of the approaches developed by the colleges in order to facilitate the development of a model program that could be replicated in other community colleges within the SUNY system.

The concept and practice of funding counseling and tutoring services for Educational Opportunity Program (EOP) students in state-operated institutions of higher education is well established and has been shown to be effective. The recently released Legislative Commission on Expenditure Review (LCER) report indicated that tutoring significantly increased EOP student persistence and success in completing college programs at four-year state-operated campuses. Previously, funding for counseling and tutoring had not been made available to community colleges in the State University of New York (SUNY) system. Although the majority of the SUNY community colleges do have a history of supporting Educational Opportunity Programs, this support has been limited to direct student aid.

New York Central Administration's Office of Special Programs. The assistance and close cooperation of personnel from the Office of Special Programs, and the college administration, EOP directors, and program staff at Erie, Monroe, and Suffolk Community Colleges allowed the evaluators to conduct a fruitful assessment of the funded programs. However, this article does not represent the position of the contractor or the participating institutions; nor does it necessarily imply their agreement with the presented information. Portions of this article have been taken from a broader analysis of this evaluation, "Issues of Implementation with Regard to Educational Opportunity Programs at the Community College Level," by Judith S. Kaufman and Anna F. Lobosco, which has been submitted for publication in S. S. Goldberg (ed.), Readings in Equal Education (10th ed.). The authors are indebted to Laurie Wellman (1988) and Judith Wooster (1987) for their in-depth reviews of implementation theory as it pertains to state-mandated implementation of innovative programs.


During the summer of 1986, the Office of Special Programs of SUNY/Central Administration sought proposals from established community college Educational Opportunity Programs. Of particular interest were courses of study and programs that emphasized tutoring and counseling services for at-risk students, independent of specific course demands. Proposals were asked to identify those courses in which EOP students fared least well or that were avoided. Once these courses were identified, the proposals were to show the relationship between student needs and suggested remedies. Minimum criteria used in the selection process were (1) an established Educational Opportunity Program with a full-time director; (2) development of a counseling and tutoring model that would promote self-confidence, time management, study skills, and academic and personal growth among EOP students; (3) structured counseling and tutoring programs that are both diagnostic and prescriptive in nature; (4) determination of the ideal interaction that should occur among faculty, counselors, tutors, and students, and exploration of effective ways to train and orient faculty and program staff to new methods and strategies for effective service delivery; (5) development of methods for evaluating program effectiveness; (6) development of methods that would encourage student participation in counseling and tutoring; and (7) development of a model program with potential applicability to other community colleges in New York State.

Three campuses were selected to participate in the program and receive additional funding to enhance existing program efforts: Erie Community College, serving 545 EOP-funded students and another 500-550 EOP-designated students (those who are eligible for support services but receive no funding); Monroe Community College, serving 245 students; and Suffolk Community College, serving 195 EOP students. Each of these programs had designed a unique service-delivery program to meet the needs of its local EOP student population. The programs included varying degrees of supplemental instruction, academic tutoring by professional and peer tutors, student and faculty-staff orientations and workshops, and eclectic diagnostic and counseling support that covered the financial, academic, emotional, social, and career-planning needs of students.

During the summer of 1987, a formative assessment was conducted by the Evaluation Consortium at Albany to evaluate the different approaches employed across the three EOP sites. McLaughlin (1987) determined that implementation of programs that strive to stretch social policies across levels of government will, inevitably, create complex and intractable implementation issues. One major issue encountered in this evaluation concerned the conflict between practical concerns at the local level and statewide expectations.

Because of the impact of the interaction between service delivery systems, a retrospective analysis of the evaluation findings and the evaluation process was conducted. Evaluation findings that documented interaction between state and local service delivery systems were analyzed within the framework of implementation theory (Rogers and Shoemaker, 1971; Fullan and Estabrook, 1973; Giacquinta, 1973; Fullan, 1983; Miles, 1983; Huberman and Miles, 1984). In addition, the politics of conducting a multisite evaluation that straddled state and local funding mechanisms and service delivery systems was considered in terms of the problems encountered and appropriate evaluation practice.

EVALUATION FINDINGS AND IMPLEMENTATION THEORY

The evaluation documented several administrative problems during the first year of funding for enhanced EOP support services. These problems stemmed largely from the intersystemic nature of this funded program. McLaughlin (1987: 172) states that

an obvious conclusion running through empirical research on policy implementation is that it is incredibly hard to make something happen, most especially across layers of government and institutions... policymakers can't mandate what matters... policy success depends critically upon two broad factors: local capacity and will.

As would be expected, the state level took on an administrative, executive policy-formulation function, while local service delivery was concerned with the programmatic implementation of the policy. Major concerns related to urgent and unrealistic time frames for implementation of the funded services; gaps in mutual understanding of the local and the larger frames of reference, and noncomplementary views of program ownership; and conflict between local interpretation of full opportunity policies and stipulated constraints on the availability of EOP-funded services. Each of these concerns posed formidable problems that affected program implementation and desirable outcomes for the target population. The following sections elaborate on these concerns.


TIME FRAMES

Proposals were sought and approved during a three-month span between the completion of the 1985-86 academic year and the beginning of the 1986-87 academic year; funded programs were expected to be in place for the 1986-87 academic year. The late notice of grant approval and lack of adequate planning time posed severe problems for each of the three programs. The most widely cited problems included hiring staff, obtaining adequate space, purchasing necessary materials and equipment for the tutoring component, and building relationships with faculty. As a result of these difficulties, the funded services were not widely available until after the beginning of the spring 1987 semester.

Although the community colleges in New York State are part of the SUNY system, they are funded through the counties being served. The county hiring process is extensive and requires that job vacancies be posted for a substantial period of time. Thus the hiring process for county civil service positions cannot be hurried. Additionally, it was noted on all campuses that classroom and office space was at a premium and that increased planning time would have assisted program personnel in acquiring needed space. Although the provision of counseling services was delayed or disrupted for only a short time on each campus, the tutoring and supplemental instruction services were compromised by frequent changes in location.

Constraints of time and money, as well as the temporary nature of the grant funding, prevented investment in the textbooks and supplementary resources needed for adequate provision of tutoring services. At one location, extensive resources were made available to tutors by the academic departments they worked with. At other sites, tutors requested that similar resources be purchased for them or procured through a loan arrangement. Finally, additional time was needed for EOP personnel to work and plan with faculty in order to develop the collaboration needed to provide supplemental instruction.

Funding for the increased/enhanced support services was, by nature, temporary. Therefore, individual institutions did not readily provide additional support that might have eased the implementation process. Aside from resources for the tutoring component, office equipment (telephones, computer terminals, and so on), permanent space, and clerical assistance were also in short supply. Conceivably, had more planning time been allotted and a longer-term funding stream been established, institutional support for such needs might have been more available. As Giacquinta (1973) notes, a successful attempt to change a school organizationally generally proceeds in three basic stages: initiation of the innovation, implementation, and incorporation as a stable part of the organizational structure. Without a clear commitment to long-term state funding of an expensive system of services that would be available to only a portion of the student body, the community colleges were cautious with regard to their degree of commitment to the program. Miles (1983) extends this analysis and notes a need for institutionalization studies that go beyond implementation. He cites a commonly held notion that a "good" innovation endorsed by its users will "somehow just stay around." Certainly, the reluctance of the individual community colleges to increase their commitment to an experimental program with an uncertain funding stream argues against that notion. The assistance of the central administration in developing skill with and commitment to an innovation is seen as essential in the institutionalization of the innovation (Miles, 1983).

Planning time for future improvement in program services is essential. Those institutions that already had some institutionally supported tutoring and counseling services built into their programs had a different view of the implementation process. Largely, these programs experienced difficulties in accommodating a larger staff and the accountability constraints of cumbersome amounts of paperwork. Conversely, the programs that did not already have some institutionally supported services found the implementation process considerably more formidable and noted broader difficulties involving establishment within the larger college community and a place within the institutional prioritization schedule. Thus those programs with some existing support services already had the benefit of time in the establishment process and were battling the problems of expansion; infant programs, on the other hand, did not have that advantage. The time frame for use of legislated funding did not allow lead time for planning and, ultimately, made both the establishment and expansion processes more difficult. Huberman and Miles (1984) note that ongoing assistance and in-service training were beneficial in reversing a rough start for innovative programs, but found that in the initial stages of implementation, training is less crucial than prior experience. Thus the expanding programs had less difficulty with implementation of the innovative programs because they had prior experience with the provision of support services on a more limited basis.


Fullan (1983), in an implementation study of "follow through," reminds the reader that implementation is multidimensional; it depends on planned (strategic) and unplanned (contextual) factors. The evaluation effort was commissioned and planned with this in mind. Strategic and contextual differences at each site were explored and found to have a significant impact on the implementation process and the texture of the individual programs. In this regard, time was both a strategic and a contextual factor.

GAPS IN UNDERSTANDING AND CONFLICTING VIEWS OF OWNERSHIP

One of the major trends in contemporary human services planning is the development of a coordinated system of services delivered at the local level. It is acknowledged that coordination begins with knowledge of shared concerns, functions, and data regarding the populations to be served; this is accomplished through communication and perseverance (Healy, 1983). Collaborative planning by all involved audiences, active participation in problem identification, and a closer association between and among service providers, governments, agencies, and consumers cannot happen without a communication process based on a mutual desire to benefit the client and to enhance the efficiency of the system (Martinson, 1982; Baxter, 1982; Albright et al., 1981).

A striking feature of this evaluation was the knowledge of and commitment to disadvantaged students and their needs possessed by Office of Special Programs personnel, college administrators, EOP directors, and program staff. Administrators at the state, college, and program levels indicated that a smoother working relationship has evolved over the last few years. In each case, the commitment to the disadvantaged student and active inquiry into the philosophical and practical perspectives at the other levels of administration have enhanced program implementation and functioning. Certainly the input from each of the pilot sites has reduced the threats to implementation posed by a "top-down" approach (Fullan and Estabrook, 1973; Rogers and Shoemaker, 1971). However, a better understanding of one's own concerns and constraints at each level is certain to cause conflict when doing business among the bureaucracies.

Two conflicts were evident in attempts at collaboration to the benefit of the student. The first concerned the Office of Special Programs and the community college, where both claimed ownership of the EOP because both had dedicated substantial support.


The second involved the problems inherent in multisite implementation. Although a certain philosophical flexibility is desired so that each program can be maximally responsive to the local community, a degree of standardization is necessary to maintain the fiscal integrity of funded programs.

In discussing ownership considerations, college administrators referred to the level of institutional support provided to the EOPs prior to receipt of grant funding for enhanced tutoring and counseling services. They noted that community colleges serve a large population of students who would not be eligible for admission to four-year colleges or capable of college-level work without academic assistance. The current state funding formula, based on the number of disadvantaged students served by the community college, is not adequate for meeting the costs of the remedial and developmental studies programs needed by disadvantaged students, let alone for funding more extensive EOP-type programs. Thus a considerable financial commitment to the EOP, and to the general population of disadvantaged students, had been clearly demonstrated prior to receipt of grant funding for enhanced EOP support services. Further, each institution indicated that the EOP and its program staff were considered an internal resource and a focal unit in institutional planning and provision of services for the wider population of disadvantaged students. In light of this, the institutional determination of ownership of a valued resource is understandable.

Conversely, it is clear that the Office of Special Programs had proprietary concerns as well. Although the funding unit tried to allow each institution to administer its own program, the necessity of monitoring the expenditure of grant monies and overseeing provision of targeted services gave state-level administrators a clear leadership role. In this regard, the personnel from the Office of Special Programs maintained an active, but not excessive, presence at each community college and assisted local decision makers in clarifying policy issues to ensure consistency with the legislated intent of the grant funding.

Undoubtedly, this feeling of ownership and a willingness to tender programmatic support at both levels show the program to be consistent with existing values, past experience, and the needs of the students being served. Rogers and Shoemaker (1971) indicate that such perceived programmatic compatibility should allow adoption of the innovative program at a higher rate than would otherwise be expected. This, too, might temper the negative effects of a "top-down" approach to program implementation.


STANDARDIZATION AND THE NEED FOR LOCAL FLEXIBILITY

A further issue concerning problems inherent in multisite implementation, particularly in the field of human services, is the need to be responsive to the clients being served. The experimental nature of the legislative funding allowed each of the three pilot sites to develop a program that would be particularly responsive to local needs and concerns; however, the ultimate intent was the establishment of a standardized model program that could be adopted by community colleges across the state. Throughout the proposal, implementation, and evaluation process, the need for establishment of such a model program was emphasized. Yet each site stressed the importance of tailoring support services to local needs.

This conflict was particularly evident in the institutional response to full opportunity policies. The full opportunity concept guarantees admission to the college for any person within the service area holding a high school or high school equivalency diploma. Further, the community colleges have instituted policies that extend this concept beyond guaranteed admission to include mainstreaming and provision of support services that give all students, regardless of handicap or disadvantage, the opportunity to complete successfully a course of study. At one site, the full opportunity policy precludes distinction of EOP students within the general student body. Therefore, outside of the EOP staff, there was little awareness of which students were being served by the EOP. This lack of distinction was vigorously supported by administration and faculty. Consequently, tutors hired under the grant funding were housed in academic developmental studies laboratories and were considered part of a pool of tutors available to the entire college community, EOP eligibility notwithstanding. Similarly, program staff and students at all three sites noted the need for and applicability of EOP-type services for a broader range of disadvantaged students and a desire to assist these students. Repeatedly, state-level staff reminded the local programs that funded services could be made available only to EOP-qualified students in order to fulfill the programmatic commitment to economically and educationally qualified students. In any case, conflict existed between the need for local responsiveness and the standardization necessary to make similar services more widely available.

The conflict between local responsiveness and the need for standardization illuminates the distinction between the fidelity-of-use view of implementation and the organizational process perspective. Fidelity-of-use studies determine the degree of implementation of an innovation in terms of the extent to which actual use of the innovation corresponds to intended or planned use. A process perspective on implementation differs from a fidelity approach in that it focuses on the organizational changes that occur as an innovation is implemented. In a sense, the need for standardization requires a fidelity-of-use perspective, whereas local responsiveness would demand an organizational process perspective.

POLITICS AND EVALUATION PRACTICE

The same intersystemic contextual climate posed distinct problems for the evaluators in conducting the evaluation. Specific problems included responding to the state-level client while advocating for the discrete local programs, reducing local resistance to a state-mandated evaluation, conducting an evaluation within and between two distinctly different service delivery systems, formulating recommendations that would serve both state and local service delivery systems without threatening a legislated funding stream, and educating the client about the dangers of a premature impact evaluation when a determination of programmatic impact is the desired product.

RESPONSIVENESS VERSUS ADVOCACY

One of McLaughlin's (1987) guidelines for conducting implementation evaluation and policy analysis reminds evaluators that "the relevant frame of reference is the implementing system, not a discrete program or project" (p. 175). She further reminds evaluators to be cognizant that program effects may be interpreted differently within a systems context. It is also necessary to sort out the effect of policy as opposed to the effect of the individual programmatic interpretation of the policy.

This evaluation was conducted by an organization that channeled the talents of eight evaluators into a statewide multisite responsive evaluation. The evaluation design employed the triangulation technique and combined quantitative and qualitative methods and data analysis. Field site visits, semistructured interviews, content analysis, and survey research methods were used to assess convergence and agreement.


Nonetheless, when the data came in for analysis and report writing, the evaluators had to make a concerted effort to remain responsive to the state-level client by keeping the larger system as the frame of reference while suppressing a strong urge to advocate for the individual local programs. The strength of the field site observations and interview data was consistent with naturalistic techniques that urge the researcher to become immersed in the natural setting. Therefore, the evaluators could easily and readily feel the needs and frustrations of the local program staff. The luxury of having eight evaluators allowed (1) several evaluators to collect data at each site, which served to increase the reliability of the information; (2) some evaluators to collect data at multiple sites, which strengthened comparisons; (3) some evaluators to work with both quantitative and qualitative data to better assess convergence and agreement; (4) some evaluators to remain in the office to analyze data from an objective perspective; and (5) two evaluators to partake in every aspect of the evaluation for a summary perspective.

Several checks were in place to guard against bias toward the local programs, but the responsive nature of the evaluation did allow a certain level of advocacy for the local sites. The state-level client, in this instance, wanted a clear picture of the effects of local interpretations of the funded program and of the problems being faced at each site. They also wanted information that would justify requests for additional funds from the state legislature to continue and expand the program.

The conflict between responsiveness to the state-level client and advocacy for the local programs was most evident when recommendations were being formulated. To provide useful recommendations, it was necessary to have a clear view of the two major uses of the evaluation report. The state uses were (1) documentation of successful practice that could be used to develop a model program and (2) justification for requesting additional legislative appropriations. A secondary use concerned dissemination to the funded sites to allow further program development and improvement.

Many recommendations could have been made that addressed discrete needs at individual sites and would have been useful at the local level for program improvement; however, many of these recommendations were neither under the aegis of the state-level client nor the types of things eased by additional funding. Further, as McLaughlin (1987) suggests, program effects are interpreted differently when viewed from a systems context. Ten recommendations were ultimately developed that reflected the two major uses of the evaluation.


Several of the recommendations were also valuable for program improvement. Those recommendations that addressed only discrete needs were included in the local summaries of each field site visit. The formal recommendations were responsive to state-level client needs, and each local program had additional information that would assist in program improvement.

REDUCING LOCAL RESISTANCE TO THE EVALUATION

A major problem faced by the evaluation team was reducing the resistance of local program staff to an evaluation that was imposed upon them by their funding agency. Undoubtedly, the unspoken threat of losing a funding stream, the historic conflict between the two systemic levels (county versus state), and differing perspectives on the program (the administrative versus the actual service delivery perspective) increased rather than decreased local resistance. Although the evaluators clearly and regularly indicated that the evaluation was formative and intended to document local needs and successful practice in order to develop a model program, local personnel remained anxious and guarded.

Although the evaluation design did allow a certain amount of local input into the evaluation process, the evaluation was mandated by the state level and imposed upon the local level. This created problems in data collection and in keeping to the predetermined time lines. In some instances, collecting the needed data was hindered by nonresponse, limited response, guarded response, and late response. In other instances, the presence and/or insistence of state-level personnel effectively cut off the trickle of information that had been established.

"Successful implementation generally requires a combination of pressure and support from policy" (McLaughlin, 1987, p. 173). Much the same can be said for this type of evaluation process. A balance of pressure and support from the state level had to be exacted to reduce local resistance and allow in-depth data collection. When the second-year evaluation was commissioned, the evaluators sought additional input from the local sites. Local program personnel were asked to become involved in the planning and design of the evaluation and in establishing time lines. The local sites readily offered suggestions that were largely accepted by the new evaluation team. In addition, having been involved in the previous year's evaluation and viewing the formative orientation, site personnel appeared more willing to be


involved in the evaluation process. However, the data collection process for the second year encountered similar problems, though local resistance did not occur across all sites as in the previous year.

THE DANGER OF PREMATURE IMPACT EVALUATION

Certainly, any program subject to a funding agency must eventually show impact on the target population in order to be assured of a continued and uninterrupted funding stream or to justify an increase in appropriation. The state-level client, in this instance, wanted an evaluation that would justify an increase in, or at the very least a continuation of, the legislative appropriation. Specifically, they requested an impact evaluation in both the first and second funding years that would serve to justify continuing and expanding the experimental programs at the community colleges.

The problem here is that programs generally do not have an immediate effect (or impact) on the target population in terms of program goals. Hord et al. (1987) indicate that a full three years is needed before a new program will begin to show effects. In the early stages of implementation, summative measures usually are inappropriate (McLaughlin, 1987). Thus, in the first or second year of the program, the effects of the program on EOP students with regard to program goals of increased academic achievement, increased retention of EOP students within programs, and increased graduation from the community college would not have been apparent. An impact evaluation would have failed to justify legislative appropriation of needed funds and would have jeopardized program continuance.

Rebell and Block (1985), however, take a somewhat different stance. They describe impact studies as the forerunners of implementation analysis. From this perspective, an impact study is used to determine whether there is effective compliance with particular laws or policy directives. Theoretically and practically, the evaluators felt this to be the more appropriate study to conduct during the initial stages of program implementation.

Although the state-level client still wanted an indicator of programmatic impact, they were convinced to work toward that goal over several years. The first year provided a descriptive study of what was happening at each site. The second-year study was designed to provide an assessment of compliance with policy and funding directives, design of an impact evaluation, and collection of baseline data for a future impact


analysis. In each case, the collection of data was laying a foundation for an eventual evaluation of program effect on the target population in terms of program goals. This approach allowed the sites to make program improvements that would serve to increase the effectiveness and responsiveness of the program prior to doing an actual impact evaluation.

SUMMARY

In closing, we point out that the findings of this evaluation were used to highlight formidable concerns in compensatory program implementation and evaluation of programs that fall within both state and local service delivery systems. It would be premature to say that there is any right way to tackle the problems or that there are any right answers to the questions that have been raised.

A major point uncovered in the retrospective analysis of this evaluation concerns the responsiveness of the evaluation to the differing needs of decision makers when assessing a program that deals with social policies and spans several levels of a service delivery system:

Macro-level analyses generally provide insufficient guidance to policymakers or practitioners interested in understanding program outcomes (positive or negative), evaluating alternatives, assessing internal work requirements or developing models of how policies operate in practice.... Conversely, micro-level analyses ignore systemic attainments and unanticipated consequences for the institutional setting as a whole so cannot speak to the expected organizational consequences or system-wide effects of a policy. Micro-level analyses thus provide limited guidance to policymakers faced with system-wide decisions ... the conceptual challenge to third generation implementation analysts lies in integrating these two communities of discourse in models that accommodate these multi-level, multi-actor complexities [McLaughlin, 1987, p. 177].

The responsive design of this evaluation, as opposed to a rigid design based on a singular evaluation model, allowed the evaluators greater flexibility in meeting the diverse needs of the client, the audiences, and the stakeholders. As a result, the final evaluation report was responsive to the information needs of personnel at both the local and state levels.

Innovative programs are rarely implemented without "start-up problems," and this program was no exception. However, much of the implementation process was consistent with the framework of implementation theory. Most of the problems were anticipated at the state level, and steps were taken to ease the effects of unavoidable problems. Although it was premature to offer judgments on program components and methods included in a model program, the formative evaluation served to facilitate communication between levels in the service delivery system and functionally to ease the implementation process for similar programs at a slowly growing number of community colleges in New York State. Preliminary results from the second-year evaluation indicate that many of the problems uncovered in the first-year evaluation are endemic in the political process that frames social policy implementation across levels of government. But the evaluators have demonstrated, and will continue to demonstrate, that awareness of these problems can begin to mitigate their effects. The same can be said for the evaluation process: Awareness of the political factors that enter into evaluation of such programs should assist evaluators in the planning, conduct, and management of similar evaluations.

REFERENCES

ALBRIGHT, L., S. HASAZI, L. A. PHELPS, and M. E. HALL (1981) "Interagency collaboration in providing vocational education for handicapped individuals." Exceptional Children 47 (8): 584-589.

BAXTER, J. M. (1982) "Solving problems through cooperation." Exceptional Children (February): 400-407.

Evaluation Consortium at Albany (1987) A Formative Evaluation of Three Pilot Community College Educational Opportunity Programs. Albany: SUNY Central Administration, Office of Special Programs.

FULLAN, M. (1983) "Evaluating program implementation: what can be learned from follow through." Curriculum Inquiry 13 (2): 215-227.

FULLAN, M. and G. ESTABROOK (1973) "The process of educational change at the school level: deriving action implications from questionnaire data." Presented at the annual meeting of the American Educational Research Association.

GIACQUINTA, J. (1973) "The process of organizational change in the schools," in F. N. Kerlinger (ed.) Review of Research in Education. Itasca, IL: F. E. Peacock.

HEALY, A. (1983) "The needs of children with disabilities: a comprehensive view." Iowa City: University of Iowa Press.

HORD, S. M., W. L. RUTHERFORD, L. HULING-AUSTIN, and G. E. HALL (1987) Taking Charge of Change. Alexandria, VA: ASCD.

HUBERMAN, A. M. and M. B. MILES (1984) Innovation Up Close: How School Improvement Works. New York: Plenum.

MARTINSON, M. L. (1982) "Interagency services: a new era for an old idea." Exceptional Children (February): 395-399.

McLAUGHLIN, M. W. (1987) "Learning from experience: lessons from policy implementation." Educ. Evaluation and Policy Analysis 9 (2): 171-178.

MILES, M. B. (1983) "Unraveling the mystery of institutionalization." Educ. Leadership 41 (3): 14-19.

REBELL, M. A. and A. R. BLOCK (1985) Equality and Education: Federal Civil Rights Enforcement in the New York City School System. Princeton, NJ: Princeton University Press.

ROGERS, E. M. and F. F. SHOEMAKER (1971) Communication of Innovations. New York: Free Press.

WELLMAN, L. (1988) "Factors relating to the implementation of the New York State curriculum for English as a second language in secondary schools." Doctoral dissertation, University at Albany, SUNY.

WOOSTER, J. (1987) "The effects of three variables on teacher implementation of a centrally imposed curriculum." Doctoral dissertation, University at Albany, SUNY.

Anna F. Lobosco is Associate Director of the Evaluation Consortium in the School of Education at the University at Albany, State University of New York.

Judith S. Kaufman is currently teaching in the Educational Psychology Program at the College of St. Rose, Albany, New York.
