
FORMALIZATION IN HUMAN SERVICE ORGANIZATIONS: the Role of Evaluation

1980, John Theiss

INTRODUCTION

"Service agencies need to encourage agents (service delivery staff) to develop and use their own good judgment and share service responsibility with clients. If they do, citizens will be more likely to make desired changes in their behavior. . . . We have too often come to expect that agencies can change people and have forgotten that people must change themselves." (Whittaker, 1980, p. 246)

This paper argues that the present foci of effectiveness measures and accounting responsibility in social welfare services limit the service delivery agent's opportunities to exercise judgment. It further argues that this change may be counterproductive because potentially poor quality and incomplete, if not misdirected, data are used to define and direct the role formalization. Though formalized and non-formalized service models contain numerous assumptions, the fundamental ones can be summarized as follows:

Formalized service approaches are based on a deterministic world view in which actions have predictable consequences and the relationships between the actions and consequences are invariant. The resulting assumption is that specific service activities that produced specific service outcomes can be generalized across cases.

Non-formalized service approaches are based on a contextual world view in which, like snowflakes, no two cases, clients or problems are exactly the same. The resulting assumption is that service provision must be customized and that client and worker together produce changes in the client which are desired by the client and consistent with the goals of service delivery.

In some situations it may be argued that formalized service approaches are based on contextually oriented world views. The crux of this argument is that such approaches provide the client with the tools of, and access to, achievement of a desired outcome. The sincerity of those who advance the argument may be unquestionable, but often the methods used to select the services and procedures for service delivery do not adequately consider client abilities or provide appropriate opportunity to achieve the goals of service. The crucial difference is that in non-formalized approaches the service deliverer is responding to individual clients and the goal of service, while in formalized approaches the service deliverer may be responding to procedures and standards of delivery.

These summaries are extreme. Most activities are composed of varying degrees of each type of activity. The real difference is in the assumptions underlying the activities.

The following paper examines some of the issues involved with formalization in human service organizations, the role evaluation may play in increasing formalization and the resulting implications for human service agencies and their evaluators. The paper is divided into four parts. Part I introduces human service organizations and the basic issues concerned with formalization in them. Part II elaborates on these issues, discussing the relationship between human service organizations' functioning and purposes. Part III presents evaluation procedures and issues, including their potential impact on formalization. Part IV describes some recent developments in human services and their implications for the role of evaluation in human services.

Part I - Human Service Organizations

I-A. Human Service Delivery

I-A-1. Definition

According to Demone and Harshbarger, human services are "those public and private programs, profit and non-profit, specifically designed and formally organized to alleviate individual or family problems or to fulfill human needs in the areas of personal growth and development" (1974, p. 14).

I-A-2. The production process in human services


The production process in human services is frequently characterized by the fact that the input and the output are the same unit, the client, and that the client is reactive to the production process, if not an active participant in it. Whittaker (1980) described the process as 'coproduction' and offered the following rationale.

"The agent can supply encouragements, suggest options, illustrate techniques, and provide guidance and advice, but the agent alone cannot bring about the change. Rather than the agent presenting a finished product to the citizen, agent and citizen together produce the desired transformation." (p. 246)

Additionally, the "desired transformation" he refers to is cooperatively identified and, within the limits of organizational prerogative, varies from client to client. Even in instances where the organization offers little or no latitude, the transformation varies in its subjective interpretation. Clearly, a police officer's intervening to resolve a domestic disturbance, or a doctor's surgical procedure, though they fit prescribed production categories, are perceived in a variety of ways by their clients. These examples emphasize the importance of coproduction since, if the arguing couple or the patient fail to share the agent's desire for the goal of service, the effect produced, the output state, is likely to be short lived or inappropriate.

In selecting particular service activities and intended outputs, human service agencies define the aspects of clients with which they will be primarily concerned (Lefton & Rosengren, 1966). Medical facilities heal physical illness, employment services find jobs and schools teach students, yet in every categorical service more than the selected aspects of clients affect the production process and probable outcome state. Depressed medical patients are less likely to recover, clients whose lives are stable are more likely to get and keep jobs, and motivated students learn more. Thus each categorical program needs to either diversify its services, make referrals, or accept less than optimum effectiveness. Further, the agents who deliver the services need to be able to identify the clients' problems and the services to address those problems. As Katz & Kahn (1966) write, to insure cooperation in production:

". . . there must be relative stability of personnel in the staff roles of the organization, the roles charged with responsibility for training and treatment. In addition, there must be considerable area of discretionary power within these roles. The reactive nature of subjects or patients requires reciprocal spontaneity on the part of staff." (p. 116)

I-A-3. Human service delivery agent roles

The extent of discretionary power an agent requires was conceptualized by Lefton & Rosengren (1966) as a response to lateral and longitudinal properties of service delivery activity. Laterality is the extent of the organization's interest in the client's biography, be it a limited slice, as in the case of free flu shots for the elderly, or a broad interest in the client as a person, as in many settlement houses and community mental health programs. Longitudinality is the dimension of time, or span of interest. Longitudinality includes time in two senses: one is the frequency and duration of agent-client contacts, and the other the length of time over which the service is provided. Increased longitudinality increases the extent of the client's biography to which the agent is exposed, some of which may influence the effectiveness of service procedures. Though lateral and longitudinal dimensions of service delivery tend to vary concomitantly, the authors emphasize that such does not have to be the case. A referral service is likely to have high laterality and low longitudinality, while a cancer treatment service may have the opposite emphases. Figure I illustrates each potential combination of laterality and longitudinality.

Figure I
Various Combinations of Laterality and Longitudinality as They Usually Occur in Selected Human Services

                                  Laterality    Longitudinality
Nursing Homes                     high          high
Referral Service                  high          low
Cancer Treatment (outpatient)     low           high
Traffic Law Enforcement           low           low

It is important to recognize that though the nature of a service may dictate the service agent's general range of client interest, it is the agent's flexibility within that range that is important. For example, in the case of a referral service, during a call concerning services for the handicapped or learning disabled, the agent may collect broad based biographical information and be able to later provide detailed documentation of the caller's circumstances and needs. If the call is about help for an accidental poisoning victim, the same agent needs to be free to relay the Poison Center's phone number and hang up without getting substantial documentation. These concepts not only facilitate our understanding of the service delivery activity, but will help to clarify the implications of rigid organizational accounting discussed later.

I-A-4. Summary

Human service delivery roles can vary from highly formalized, wherein the agent's interest in the client is narrowly delineated, to relatively informal problem solving, wherein the agent's interest is restricted to a prescribed class of outcomes but the process of achieving them is largely left to the discretion of the agent and client. As the span of time over which the service is rendered, and/or the variety of services provided, increases, the probability of the agent becoming aware of and having to deal with aspects of the individual not included in the service procedure increases. That is, as laterality and longitudinality increase, formalized service delivery activities are less likely to be effective.
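The summary's rule of thumb can be sketched as a toy heuristic. The numeric scoring below is our own illustration, not something proposed in the text: it simply encodes the claim that as laterality and longitudinality rise, amenability to formalization falls.

```python
# Toy heuristic for the summary above: the more of the client's biography
# a service touches (laterality) and the longer it runs (longitudinality),
# the less amenable the delivery role is to formalization.
# The scoring scheme is purely illustrative, not from the original paper.

LEVELS = {"low": 0, "high": 1}

def formalization_amenability(laterality, longitudinality):
    """Return 'high', 'mixed', or 'low' amenability to formalization."""
    score = LEVELS[laterality] + LEVELS[longitudinality]
    return {0: "high", 1: "mixed", 2: "low"}[score]

# Figure I's extreme cases:
print(formalization_amenability("low", "low"))    # traffic law enforcement: high
print(formalization_amenability("high", "high"))  # nursing homes: low
```

On this sketch, the referral service and outpatient cancer treatment from Figure I both land in the 'mixed' middle, which matches the paper's point that the agent's flexibility within the range matters.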


I-B. Organizational Structure

I-B-1. Background of structural theory

Early conceptualizations of organizational structure relied on the parameters of height and width. Height meant the number of vertical levels in an organization or, in terms of interactions, the distance between the ultimate authority and the production agent. Width referred to the number of persons or functions at any or all vertical levels. Formalized production roles were found to be typical of tall organizations and less formalization typical of 'wider' organizations. This conceptualization is a product of studies done early in the period of industrialization on free market organizations. These studies indicated that structure was dependent upon the technology available to the production process (Woodward, 1965; Thompson, 1967).

More recently, the rise in professionalism, the increased involvement of free market enterprise in research, development and public relations, the great increase in corporate size and product diversification, and the increase in the size and number of human service organizations have necessitated different conceptualizations. Researchers in the late 1950s and early 1960s (March & Simon, 1958; Hall, 1962) reported the existence of structural variance between organizational units throughout organizations. This variance occurred both vertically and horizontally and is related to complexity, formalization and centralization (Hall, 1977).

I-B-2. Concepts related to organizational structure

Complexity refers to horizontal and vertical task differentiation, and to the spatial dispersion of the organization or unit examined. Formalization is the extent to which rules and procedures are used to regulate behavior, and centralization is defined as the distribution of power in an organization. Notice that these variables are likely to interact. For example, rules necessitate centralized decision-making concerning the rules and exceptions, and spatial dispersion limits the appropriate locus of such decision-making and/or the implications of the rules.


I-C. Formalization in Human Services

I-C-1. Definition of formalization

Webster offers "prescribed customs, rules and ceremonies" as the definition of formalization. Organizational theorists have operationalized formalization as written rules, procedures, instructions and communications (Pugh et al., 1968), use and observation of rules (Hage and Aiken, 1977) and observed standardized behavior over time (Hall, 1967). In each case, activity is highly routine and specified by the rules of the organization and/or the practice of the task. "In highly formalized, standardized and specialized situations, the behavior of the role occupant is highly specified, leaving him few options that he can exercise in carrying out his job" (Hall, 1977). Manufacturing and computer operations are examples of organizations whose activity is routine enough to be amenable to high formalization.

Perrow (1967) represented the extreme non-formalized cases as those which call for intuition and possibly inspiration to be performed successfully, and Hall (1977) offered "scientific research" and "organizations dealing with human problems" as examples of such non-formalized circumstances. Hall's example "organizations dealing with human problems" may lead the reader to believe that all human service delivery is non-formalized. While much of it is, human service organizations range from fairly highly formalized to very informal (Hage & Aiken, 1969). Casework, counseling and teaching/learning activities are instances where service agent roles are less amenable to formalization, while benefit determination, behavior modification treatments and rote practice for memorization are examples of service agent roles which can be more formalized.

I-C-2. Concepts related to formalization

There are advantages to varying degrees of formalization dependent upon staff, environment and technology. Hall (1977) summarizes the staff considerations as, "If a set of people are viewed as having excellent judgment and self-control, formalization will be low; if they are viewed as incapable of making their own decisions and requiring a large number of rules to guide their behavior, formalization will be high." Research by Hage and Aiken (1969) and by Blau (1970) supports Hall's summary. Hage and Aiken found that organizations relying on rules and close supervision were characterized by less professionalized staff, while Blau found that the presence of highly qualified personnel was correlated with decentralization of decision-making.

The environment of human service organizations, primarily client and societal expectations and resource availability, affects formalization of human service delivery roles (Khandwalla, 1974; Katz & Kahn, 1966). Client expectations influence formalization in that to effectively deliver the service the agent must present it in a form which is acceptable to the client (Berkanovic and Reeder, 1974). For example, an unemployed handicapped adult might balk at food stamps being represented as a return on taxes paid, particularly if few taxes had ever been paid. Conversely, the same representation is highly effective when dealing with the elderly or temporarily laid off workers. Thus, a heterogeneous client population might require less formalization in the intake interview than a homogeneous one.

This assumes that there are enough resources to make the service broadly available and that the society, organization and agent feel that an objective is broad service dissemination. If resources are limited, or high client load is anticipated, a standardized representation of the services might be employed. The representation would serve to limit the service to those who find the representation acceptable and those who need the services so badly that they demean themselves.

Thus formalization of human service delivery roles responds to environmental as well as organizational variables, and these variables interact (Hasenfeld, 1978).

Technological considerations concern the behaviors, materials and systems applied in production, and the predictability of the output state. Technology is the 'how' of production. When production tasks and effects are viewed as stable and predictable, formalization is likely to be high. Conversely, when the tasks and effects are viewed as unpredictable, formalization is likely to be low. It is important to note the use of 'viewed' in the previous statements. Initially, studies of variation in technology indicated that technology was invariant and played a role in determining what organizational structure would be (Woodward, 1965; Hage and Aiken, 1969). As recently as 1977, Hall stated, "Organizational structures are determined by the technology employed in carrying out the organizational tasks" (p. 13).

Research by Glisson (1978) has developed an interactive model of the structure/technology relationship. He demonstrated that technology is also a dependent variable, bearing out Perrow's (1967) statement indicating that technology is dependent upon the organization's perception of the uniformity of the inputs. Glisson's hypothesis was that technology may not be totally a reflection of the state of knowledge about service delivery but is also a reflection of the organization's 'view' of service delivery as a relatively standard process. He found that hierarchical organizational structures caused service delivery agents to implement a routine service delivery technology. In light of the high lateral and longitudinal qualities of the services Glisson studied, the technology implemented failed to consider the potential variation in the nature of inputs and the reactive nature of the service delivery process.

I-C-3. Summary

The formalization of human service delivery roles is affected by the competence of the role incumbents, the routineness of the activities to be engaged in, and the expectations and support available from clients and other groups and individuals. There is evidence that these variables interact and that in cases where there is little knowledge concerning them, structural forces in the organization may effect routinization of service delivery tasks. That is, when the technology is viewed as routine, staff are not viewed as competent, and constituencies are silent or in favor of routinization, service delivery roles are likely to be formalized.


I-D. Summary

Depending upon the extent of longitudinal and lateral characteristics of the service provided, human service delivery agent roles may be more or less amenable to formalization. As the extent of client biography considered in the service process increases, the agent needs a greater extent of discretion in the selection of services and delivery procedures. It is probable that structural forces in human service organizations militate against this discretion. In instances where services have high longitudinal and lateral characteristics, limited service delivery agent discretion reduces service effectiveness.


II - Human Service Organizations' Effectiveness, Efficiency and Accountability

II-A. Effectiveness

Effectiveness is a poorly defined term in both organizational and human service theory. It is variously interpreted to refer to aspects of survival or goal attainment. In terms of survival, the meanings range from adaptation (Katz & Kahn, 1979) to environmental management (Aldrich, 1979) or resource management (Yuchtman & Seashore, 1967). Goal attainment meanings span from public to private goals and from organizational to individual goals (Perrow, 1970; March & Simon, 1958). But regardless of the definition used, measuring the effectiveness of human services is a normative and imprecise effort. It is normative in that operational definitions of effectiveness are unlikely to address each constituency which has a stake in the service (Scott, 1977). It is imprecise in that the measurements of effectiveness which dominate organizational analysis have been primarily of the variable analysis type (Perrow, 1977) in a context where correlation is often our closest approximation to causality and examination of, much less knowledge of, every relevant variable is improbable (Campbell, 1977; Cronbach, 1975).

The selection of definitions of effectiveness is, in many cases, academic. Often the provider of resources is also the goal setter, particularly in governmentally funded services. Where once prescribed program goals were vague and often inconsistent interpretations of broadly stated and conflicting social policies, recent congressional interest has led to the call for more specifically stated service goals. Frequently, prescribed procedures must be used to derive local goals and objectives, consistent with the legislated service goal (Marvin, 1979). Thus, the goals are public and achievement of them must be documented to insure continued funding and community support. This is not to overlook the presence of other goals and factors in organizational survival, only to point out the overriding nature of the public goals.
In human services, attainment of goals and survival are becoming inextricably linked, or at least the stage is being set for such a linkage (Zweig, 1979). Consistent with these developments, an accounting of the achievement of goals, and of efforts to achieve them, is increasingly important.

II-B. Accountability

Accountability in human service delivery also has two distinct meanings which, though not necessarily incompatible (Scott, 1977), have been debated as polar opposites (Hanlan, 1971; Newman & Turem, 1974). One meaning is administrative accountability, or upward accountability, which is typical of the majority of program monitoring and evaluation activities in human services (Gruber, 1974). This form of accountability focuses on standardized activities and goal attainment. It is administratively defined and thus is responsive to organizational needs. The other meaning is client or downward accountability, which focuses on individualized activities and quality of service. Client accountability assumes that service deliverers are responsible to the client first, not the organization which employs them (Hoshino, 1973). The question of which meaning is appropriate goes back to the normative nature of effectiveness measures. Whoever determines the measures of effectiveness also delineates the region of accounting.

For example, in the Kissena Apartments case cited by Gottesfeld et al. (1973), a primary purpose in building apartments for aged Jewish émigrés was to improve their health status and longevity. As a result of these desired effects, standard selection procedures were developed to insure that the project attained its goals. Screening of applicants for residency included several health related items which had the effect of screening out individuals who were at higher risk of hospitalization and mortality. Though these administrative procedures improved the chances of the project reporting goal attainment, they lost sight of the fact that the goal of the project was to improve health, not select and house those with better health. In contrast, Kissena II, a later project which allowed greater agent discretion in selecting tenants likely to benefit from residence, anticipates higher mortality and hospitalization rates.
Though less than a year old when the case was reported, "there have been some problems" at Kissena II which have contributed to the development of plans for services found to be unnecessary at the 'successful' Kissena I. They include outpatient, physical and mental health services. Despite the disparity between intended and achieved goals, Kissena I was used as a model for other projects.

In Kissena I, administrative accountability was emphasized to the detriment of client accountability. Kissena II emphasized client accountability to a greater degree. Clearly the accounting process is easier at Kissena I: procedures are formalized and the products appear desirable. Kissena II cannot formalize its response to individuals; in fact it has actively sought to diversify the mix and availability of service delivery activities and of services. As a result Kissena II appears better equipped to achieve the desired goals, but staff has limited ability to provide an accounting of either effectiveness or efficiency.

II-C. Efficiency

Efficiency is often thought of in terms of smooth, predictable operations because in non-service organizations such behavior epitomizes efficiency. Actually, efficiency is not an activity measured concept (Etzioni, 1964). Regardless of the internal operations of organizations, the ones that achieve high output with low input are efficient and those that require greater input and/or deliver lower output are less efficient. An efficiency ratio in business is often presented as income/costs. In human services it can be represented as benefits/costs, but identifying the actual extent of benefits is very difficult (Thompson, 1980). Also, errors in computation of the efficiency of human service organizations are not obvious. In business an inefficient organization, even if it represents itself as efficient, will not survive. In human services appearing to be efficient is often sufficient to insure survival of the organization. Since efficiency is commonly construed as smooth, predictable organizational activity, and since effectiveness is so difficult to operationalize in human service areas, it is not surprising that many human service agencies attempt to centralize, formalize and reduce complexity.
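The benefits/costs ratio just described can be sketched as a toy calculation. The dollar figures below are hypothetical; as the text notes via Thompson (1980), estimating the 'benefit' term is precisely what is hard in human services.

```python
# Toy sketch of the efficiency ratio discussed above (benefits/costs).
# All figures are hypothetical; quantifying 'benefit' for a human
# service is the hard part in practice.

def efficiency_ratio(benefit, cost):
    """Efficiency as output achieved per unit of input expended."""
    return benefit / cost

# Two hypothetical programs producing the same benefit at different cost.
program_a = efficiency_ratio(benefit=120_000, cost=60_000)   # 2.0
program_b = efficiency_ratio(benefit=120_000, cost=100_000)  # 1.2

print(program_a > program_b)  # True: A achieves more benefit per dollar
```

Note that the ranking of the two programs flips with any error in the benefit estimate, which is why such errors in human services, unlike in business, are not self-correcting.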


More rigid organizational structure gives the operators of the agency more control over its operations, increases the routinization of its members' activities, allows the implementation of detailed organizational accounting and gives the appearance of efficiency. The Kissena I apartments are an example of this phenomenon. By minimizing the discretion of rental agents, the units were quickly filled and there were few unexpected demands placed on the operators of the apartments. Kissena II allowed its agents to attend to a greater extent of the applicant's biography. The operators quickly ran into a variety of renter demands which increased organizational complexity and required decentralization and an increase in non-routine service delivery activities.

II-D. The Difference Between Effectiveness and Efficiency

According to Pennings and Goodman (1977), effectiveness and efficiency are complementary, with effectiveness referring to the quality and number of outputs while efficiency is a measure of effort or cost expended in the production process. For example, a chain saw and an axe are equally effective in felling a tree. To determine the relative efficiency of either, one would have to specify which input costs were of concern, measure them for each tool and compare the results. If cash outlay is a concern, the less expensive axe might be the tool of choice, while if minimizing physical effort is important, the chain saw is much less strenuous. Regardless of the efficiency criteria, efficiency is only at issue as long as the effect, a felled tree, is achieved.

In human services the effect can easily be oversimplified or overlooked. In fact, organizations can appear efficient without necessarily being more effective (see Perrow, 1972, p. 512; Broskowski and Driscoll in Attkisson et al., 1978). Kissena I is an example of an apparently very efficient human service organization because it is administratively accountable.
On the other hand, Kissena II attempted to be efficient in a real sense and lost much of its administrative accountability.

In cases where accounting procedures become excessively rigid, and thus agents' roles more highly formalized, agent recognition of contextual characteristics of individual cases will be reduced and their ability to respond to observed variation in inputs/clients will be curtailed (Gruber, 1974; Rosengren & Lefton, 1970; Hoshino, 1973). The Kissena Apartments goal of improvement in health and longevity gave the rental agents freedom to attend to, and respond to, individual circumstances. When the effectiveness measure, an increase in health or longevity, was defined as simply tenure as a resident and number and extent of hospitalizations, this laterality was minimized. An individual whose tenancy might increase his longevity from six months to one year had less potential for admission than an individual whose life expectancy was high, regardless of any effect produced by tenancy. Notice that the retention of the word improvement in the effectiveness measure would have reduced the formalization of the rental agent's role and greatly increased both the complexity and the subjectivity of accounting procedures and evaluation.

Research on organizations has demonstrated that formalization and efficiency are highly correlated (Hall, 1977). But as inputs and/or procedures become less predictable, the correlation of efficiency with formalization becomes weaker (Perrow, 1977). There are two probable reasons for this. The first is that as inputs become less standardized, more knowledge and detail is required to achieve effective formalization. This reason can be viewed as requiring a treatment for every contingency, an unlikely possibility in human service delivery, considering the state of our knowledge of clients and our ability to predict or produce desired change in them. The second is that the nature of human interaction is contextual and variable. In this case, correlations are not to be expected, and those that are found can be expected to vary over time and by location (Gergen, 1973). In this case, the goal of evaluation research needs to be the identification of contemporary constructs and concepts which will contribute to the service delivery agent's ability to analyze and respond to each case, and to information in that case, intelligently (Cronbach, 1975; Gowin, 1981).

In the case of many human service roles, the first situation is analogous to Cronbach's (1975) "hall of mirrors" in which researchers looking for causal relationships become ever more involved in chasing interactions to the 'n'th level. The second is analogous to his description of the researcher's role as one of pinning down the contemporary facts from which to develop concepts that will enable us to comprehend and adjust to change constructively.

II-E. Summary

Effectiveness is the extent to which an organization is able to achieve desirable or pre-identified outcomes. Efficiency is the extent to which the costs of achieving the outcomes are minimized. Accounting is the method of documenting the process, and by implication the costs, of achieving the outcomes and the occurrence of the outcomes. As the inputs, outputs and outcomes of organizations' activities become more complex, accounting becomes more difficult to do and efficiency and effectiveness become more difficult to establish. In human service organizations this complexity is often beyond our capacity to set up an accounting system. Additionally, setting up an accounting system when processes and outcomes are not documentable can reduce effectiveness by reducing service agents' opportunity to identify and focus on achieving the most appropriate outcomes.
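The distinction between the three terms can be kept straight with a toy computation. All figures below are invented for illustration and correspond to no real program:

```python
# Toy illustration of effectiveness vs. efficiency.
# All numbers are hypothetical; no real program is represented.

clients_served = 200
outcomes_achieved = 120    # e.g., clients placed in stable employment
total_cost = 600_000.0     # dollars spent delivering the service

# Effectiveness: the extent to which desired outcomes are achieved.
effectiveness = outcomes_achieved / clients_served

# Efficiency: cost per desired outcome (lower cost = more efficient).
cost_per_outcome = total_cost / outcomes_achieved

print(effectiveness)      # 0.6
print(cost_per_outcome)   # 5000.0
```

Note that a program can look efficient on such a measure while missing outcomes the accounting system never captures, which is the point of the paragraph above.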


III - EVALUATING HUMAN SERVICES

III-A. The 'Value' in Evaluation

III-A-1. Evaluating

Human services exist as a result of social values; values like freedom, equality of opportunity and treatment, protection from harm, and others. As a result, human service agencies' and programs' value is based on their effectiveness in helping their clients achieve valued states. In the interest of objectifying services' worth, specific intended service outcomes are selected to represent the value of the service, program or agency. Thus, a service has value to the extent that it achieves selected service outcomes. Notice that Kissena I's effectiveness is an inflated value while Kissena II's effectiveness, though nearly immeasurable, is accepted as more valuable than Kissena I's. The problem of measuring service value is further complicated by changes in the social valuation of intended service outcomes. To demonstrate this variation in social valuation, the following two human services are contrasted.

The effectiveness of a jobs program may be based on the number of clients it places in employment. Its effectiveness may be specified further as measures of job tenure and/or performance, possibly even some measure of per-case difficulty. The service's value would be represented as a product of those measures of effectiveness, i.e., assumed to be a measure of the social value of the service provided. The jobs program may have more or less value than a polio vaccination service, and the respective valuations probably shifted between 1954 and 1974 (Figure II).

Figure II


This change is not a change in the inherent value of the services but a change in their relative desirability. In 1954, polio was crippling thousands of children yearly and a number of public and private agencies were publicizing both the problem and the process of seeking a vaccine. Conversely, unemployment was low, and there were jobs available at various skill levels and in various fields for most interested jobless. In 1974, the situation was reversed: few children contracted polio and there were no agencies publicizing polio as a problem, but unemployment was high and governments and labor organizations were publicizing the plight of the unemployed. In most areas of the country, job availability in many fields was limited and jobs for unskilled and semi-skilled workers had all but disappeared. For a few years after 1974, unemployment declined and health services began to reemphasize the need for polio vaccinations. As a result, the relative values of the services tend to converge. The higher peak valuation of polio vaccination reflects the author's bias; yours may differ, but the direction of change over time should be the same.

III-A-2. Valuing

Evaluators seldom attempt to do comparative evaluations of dissimilar services. The comparative decisions are made by politicians, other decision makers, and, by proxy, the public (Weiss, 1977; Patton, 1978). Thinking about developing an interval scale for the vertical axis of Figure II, the relative valuation axis, demonstrates the complexity and near futility of such an undertaking. As a result, evaluations which represent program worth in terms of outcomes may provide information about the frequency and quality of both intended and unintended service outcomes. They may also present information about the apparent value to the client, and to society in the form of a smaller group, like a community or neighborhood. What they cannot present is a value factor with which to compare dissimilar services. Even evaluations of similar organizations are seldom comparable due to value biases of the researcher, subtle differences in service delivery and interpretation of goals, evaluation focus and evaluation methodology (Weiss, 1972; Attkisson et al., 1978;


Rossi et al., 1978). Evaluation presents evidence; man values the evidence and acts on the combination.

III-B. Evaluation Classifications

Evaluations which report on program value in terms of outcomes are called impact evaluations. IMPACT evaluations measure service effectiveness in terms of achievement of outcome states, states which represent the values upon which the service is based. In the jobs example, the outcome state, employment, could represent the values of equal opportunity and the work ethic. Impact evaluations report on the extent to which operationally specified outcomes have been achieved. The actual valuation of service takes place separate from the evaluation and may or may not appear to consider the evaluation results and conclusions (Weiss, 1977; Patton, 1978; Young & Comtotis, 1979).

Evaluations may also attempt to document service delivery activities for the purpose of identifying correlations and/or causal relationships between the activities and the achievement of outcome units. This type of evaluation is a process evaluation. PROCESS evaluations often lead to agencies placing greater emphasis on the activities which correlated with desired outcomes at the expense of those that didn't. Process evaluations evaluate activities in terms of their apparent contribution to program effectiveness.

A third type of evaluation, MONITORING, is an activity accounting process. This accounting relies on knowledge derived from process evaluations, experience or theory to evaluate program effectiveness. Program monitoring evaluations often become mechanisms of administrative control and can appear to be measuring efficiency even when the connection between service delivery and program outcome is unknown (Hoshino, 1973; Levy et al., 1974). Program monitoring evaluations measure service effectiveness in terms of activities which are known, or assumed, to produce the valued output state. The benefit of this type of evaluation is that it can be done without complex identification of output quality. It appears to document efficiency and, by implication, effectiveness.


Impact, process and monitoring are non-exclusive classifications of evaluation in that they can occur in various combinations and with various degrees of detail. In addition to these functional classifications, there are two evaluation techniques whose popularity is such that they will be discussed also. One is BENEFIT/COST evaluation (Thompson, 1980), which attempts to explicitly contrast input and output values as economic value. Benefit/cost evaluations represent effectiveness as the difference between operationally delineated costs of services delivered and a value of the output state in dollars. The second, UTILIZATION FOCUSED evaluation (Patton, 1978), attempts to present decision makers with the information they are most interested in and/or the information that the evaluators identify as most likely to be used appropriately. Implementation of these evaluation techniques can fall into any one, or combination, of the classifications.

III-C. The Validity of Evaluation Data

III-C-1. Impact Evaluation

There are three major categories of weakness in impact evaluation findings. The first category is that of construct validity (Cook & Campbell, 1979), which includes threats such as inadequate specification of variables, not specifying the appropriate or sufficient number of variables and not recognizing relationships among the variables as they are operationalized. Evaluators are unlikely to specify the range of possible program outcomes as either broad areas of outcome or many narrow and clear variables: the former because measurement would be too difficult, the latter because it would be too tedious. As a result, program effectiveness is likely to be over- or under-represented (Weiss, 1977; Attkisson et al., 1978). For example, attitude changes, or service-initiated changes in the client's family or other relationships, may be missed or not looked for. The biases of the evaluator and/or the groups for whom the evaluation is being done have a similar effect (Weiss, 1972, 1977; Rossi et al., 1979). The groups for whom the evaluation is being done may specify or only respond to certain variables, neglecting others which may be important. Additionally, the evaluator's expectations or values


can cause some variables and relationships to be misrepresented or neglected.

A last but very important threat to the construct validity of impact evaluations is the preponderance of broad program goals. Such goals make specification of all relevant outcomes impossible and are the source of much of the preceding list of construct validity problems.

The second major category relates to internal validity, the validity of evaluation conclusions (Cook & Campbell, 1979). Without control groups constructed randomly it is speculative at best to specify that an outcome is a result of service, or that the outcome is more desirable than what might occur without the service being instituted. When randomization is used in evaluation, problems specifying a causal relationship between service and outcomes are minimized. Other problems could arise despite the presence of a control group. The controls could receive service elsewhere, take direction from individuals who are served or in other ways provide spurious results. Since few program evaluations are able to construct a control group, much less use random assignment, internal validity, the validity of a statement that ascribes causality to the program, service or a service activity, is usually low.

The larger social framework of neighborhood and community also affects programs and their consequences. So, too, do national systems of values, laws and sensitivities (Weiss, 1972). That is, not only is it unlikely that all outcomes are identified; it is unlikely that the service alone caused the outcomes. It may be that the service had no effect. Conversely, many changes are beyond the capacity of the program to produce, often because of temporary circumstances. Therefore, a small amount of information concerning other forces affecting the occurrence or non-occurrence of desired outcomes could prove more useful than large amounts of documentation concerning program failure.

The third major category of validity problems with impact evaluation concerns external validity, or generalizability of findings (Cook & Campbell, 1979). Just as national systems of values, laws and sensitivities may affect program outcomes, local characteristics may be important variables. If bigotry against


hiring minorities made a job training program fail, or vigorous judicial enforcement helped it to succeed, evaluation findings will not be generalizable to other settings unless similar forces were operating there.

In short, it is difficult to specify the potential outcomes of services, the extent to which the services caused the outcomes identified and the usefulness of a similar service under different circumstances.

III-C-2. Process Evaluation

Similar to the construct validity problems of impact evaluations, process evaluations have difficulty specifying service activities as one or more measurable variables. Aspects of service delivery programs and activities are too complex and unstable to be effectively operationalized as discrete variables (Mann, 1965).

Process evaluations' internal validity is also difficult to establish. Not only does a wide range of variables impinge on the process through the program and service agent; the clients themselves and their environment may exert considerable influence (Whittaker, 1980). As a result, activities and forces identified as being correlated with desirable outcomes are artifacts of an imperfect evaluative process and should be regarded with extreme skepticism.

Such skepticism must also be extended to the generalizability, or external validity, of process evaluation findings. The range of variables cited above may well be very different in a different setting. For example, the provision of baby formula to poorly educated South Americans is known to have vastly different effects than those achieved in the United States. It is possible that comparisons of US populations using baby formula, broken down by education, would have indicated the problem. If so, the problem would have its origin in poor internal validity of the original study.

III-C-3. Program monitoring evaluation


Program monitoring, as noted previously, is both an accounting and an evaluative activity. Since it is based on impact and process evaluation data, it is subject to all the previously discussed threats to validity. The activity of monitoring assembles selected data concerning program activities and forces impinging on the organization. For example, a jobs program may record data concerning service activities, community response to the service, and features of clients accepted and rejected for service. Program monitoring evaluations use this data to reach conclusions concerning the service delivery program. The data is a selected sample of potential variables, often selected by program staff who have an investment in achieving a good evaluation, further decreasing the evaluation's internal validity. Variables are also selected for their codability and because process evaluations or intuition have identified them as related to desirable outcomes, a threat to construct validity. The evaluator then is in the position of assessing program effectiveness based on a non-random sample of questionable variables which are assumed to be, or at one time were found to be, related to the occurrence of desirable outcomes. The Kissena Apartments case is an example of how program monitoring data can be related more to the appearance of efficiency than to effectiveness.

III-C-4. Benefit/cost evaluation

Benefit/cost evaluations operationally define their value parameters in terms of dollars and are subject to the same limitations on outcome specification as impact evaluations. The decisions to include or exclude certain aspects of clients' future status, to time-limit the evaluation, and to select a baseline from which to measure economic change are all normative and are subject to problems of identification and specification. More importantly, benefit/cost evaluations are not social welfare evaluations but economic welfare evaluations (Titmuss, 1972). As such, they do not provide a measure of social effectiveness, though they may provide one indicator of such effectiveness. By their nature, measures of cost effectiveness are measures of efficiency based on a limited aspect of the value of some service


delivery outcomes, with dollars as the only input value of concern.
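The arithmetic of a benefit/cost evaluation is simple; the normative choices dominate the result. The sketch below is hypothetical: the program, figures, horizons and discount rate are all invented, purely to show how the evaluator's choice of time horizon alone can flip the verdict:

```python
# Hypothetical benefit/cost sketch for a jobs program.
# Every figure is invented; the point is that normative choices
# (time horizon, discount rate, baseline) drive the conclusion.

def net_benefit(annual_benefit, program_cost, years, discount_rate):
    """Present value of benefits over the horizon, minus program cost."""
    pv = sum(annual_benefit / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - program_cost

# The same program under two defensible evaluation horizons:
short = net_benefit(annual_benefit=3000, program_cost=10000,
                    years=3, discount_rate=0.10)
long_ = net_benefit(annual_benefit=3000, program_cost=10000,
                    years=10, discount_rate=0.10)

print(short < 0)   # True: over 3 years the program "fails"
print(long_ > 0)   # True: over 10 years it "succeeds"
```

Neither verdict is wrong on its own terms; the evaluation result is an artifact of parameters the evaluator chose, which is why such figures cannot stand in for social effectiveness.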

III-C-5. Utilization Focused Evaluation

The problems of utilization focused evaluation, beyond those identified in the discussions of the classifications, are somewhat less obvious. First, evidence concerning program effectiveness is confined to the areas of concern to the contractor or decision maker. In Patton's words (p. 22):

"The evaluation research approach to assessmentof program effectiveness offers an alternative . . . but the acceptance of this research approach will depend at least partly upon its usefulness (utilization focus) to those makers of decisions whose own clarity, politics and research understanding are uncertain from the perspective of social scientists."

Here usefulness equals use, and use is usually predicated on anticipating what data the decision makers will find relevant, potentially at the expense of the worker and/or client and/or real value of the service.

Second, and more important, a utilization focus makes the evaluator the decision maker, or at least the presumptive manipulator of the decision maker. That is, telling decision makers only about the things they have interest in is only slightly different from telling them what they want to hear.

III-C-6. Summary

Weiss (1972) summarizes the limitations of evaluations very effectively when she states that the selection of variables in evaluation studies is usually made ". . . on the basis of scraps of data, the accumulated folk wisdom of practitioners or on the basis of theory" (p. 47). The first two selection criteria are obviously open to questions of precision. The "application of theory" raises the question of whether the research is a test of


theory or an evaluation of a service program. On the face of it, the two activities are exclusive. Since the results are offered as evaluative data, a validity adjustment should be made prior to any use of any findings. Such an adjustment would be in relation to the empirical validity of the theory. Considering the dearth of well grounded and empirically sound theory in the field of evaluation, the validity, and thus application, of findings would be reduced.

If this were the case, Campbell's (1977) call for limited and closely monitored implementation of evaluative findings would be answered. So long as program alterations are made service wide and innovations and changes are only superficially monitored, the field of evaluation will be without the feedback it needs to improve its technology and develop empirically sound theories.

III-D. The Application of Evaluation

The application of evaluation findings falls into four general categories: program termination, program expansion, program alteration and none. The last three categories occur most frequently (Behn, 1977) and, based on the probable quality of the data used to draw conclusions, it should not be surprising that few service programs are terminated (Kaufman, 1976). On one hand, evaluation experts tell us that evaluations which do not consider the norms of decision makers are unlikely to be used (Patton, 1978; Mayntz, 1977), and on the other hand, data quality is questioned (Gergen, 1973; Campbell, 1979), leaving program and service decisions to primarily political processes (McGowan, 1976). In fact, even when presumably accurate data concerning program failure are presented, program redirection or reorganization are the treatments of choice. No change in the program or service is a product of the same circumstances, plus organizational investment in the status quo (Staw, 1977; Warwick, 1975). According to Maynard-Moody (1980):

"Justifying past decisions rather than cutting losses is further encouraged by the ambiguity of the results of most evaluation. Unless convinced that past decisions were and will continue to be mistakes, people are unlikely to


change approaches. Past decisions, even if faulty, are at least familiar . . . ." (p. 7)

Considering the previously cited weakness of evaluation data, program expansion is likely to be a product of political processes or fortuitous circumstances in which evaluation results played a small or no role. For example, the Oakland Model Cities Program (Pressman & Wildavsky, 1973) and the Experimental Education Project (McGowan, 1976) are both products of fortuitous circumstances; both continued despite negative evaluations and are cited as failures. Another project, Head Start, grew from a pilot program to a national program without clear indications of positive effects (Bentler & Woodward, 1978). It is only in the last few years that the Consortium for Longitudinal Studies has assembled empirically sound data which documents program effectiveness. It is important to note that the success is evident five to fifteen years after the intervention. Such a delay often leaves unanswerable questions of causality and questions concerning the appropriateness of such programming for the present clients.

A portion of the remaining category, program alteration, bears most directly on the subject of this paper. Program alteration can take many forms. Redistribution of resources and/or goals, service integration or specification and structure redesign are the major ones. It is the area of service specification, specifically the development and refinement of organizational accountability mechanisms, which provides a strong argument against some applications of evaluation data and conclusions. Given the interdependent nature of structures and technology in social welfare, program monitoring evaluations and techniques and the products of process evaluations can be employed to formalize the service deliverer's role beyond the technology of the 'coproduction' process. From the evidence cited previously, it is clear that the results of effectiveness evaluations can be, and often are, ignored. In fact, as in the Kissena Apartments case, the operational definition of effectiveness can be stated so that it is only indirectly related to program goals. Further, the Kissena Apartments case demonstrates how program goals can be displaced in the process of routinizing service delivery agent


behavior. Thus, inadequate, inappropriate or ignored results of effectiveness evaluations may not indicate the presence of over-formalization, much less of any program problem at all.

III-E. Summary

Evaluation of human service programs is an uncertain art which becomes increasingly complex as the range and type of outcomes valued increases. It is probable that this complexity is, at least for the immediate future, beyond our ability to measure or even understand in a generalizable framework. The uncertain results of evaluations can be, and often are, used to formalize service agent roles without the ability to adequately identify the effects of such formalization.


IV - ORGANIZATIONAL THEORY: IMPLICATIONS FOR EVALUATION

IV-A. Developments in Organizational Theory

Of primary concern here are the factors of bureaucratic resistance to change (Warwick, 1975; Pressman & Wildavsky, 1973), the tendency of organizations to exert control in situations of uncertainty (Khandwalla, 1974), and staff compliance with formalization that doesn't directly threaten them (Mohr, 1971; Kahn et al., 1964). The thesis of this paper is: when the limitations of evaluation results are overlooked or minimized, a combination of these factors contributes to using evaluation findings to justify the development and implementation of formalized service delivery roles which are not consistent with the state of knowledge concerning the effects of specific service delivery activities. In fact, formalization of service delivery agents' roles may change the definition of service effectiveness from achievement of an outcome state to an accounting of standardized units of service distributed.

The growth and popularity of benefits and income redistribution programs like food stamps is an example of this phenomenon. Though the U.S. Department of Agriculture advertises Food Stamps as improving nutrition, the Department counts only distribution of stamps, not what they purchase or the nutritional status of the recipients. Though this may be socially acceptable in the case of nutrition, in other situations the same rationale is not yet acceptable. Medicaid programs cause national debates, demonstrations and lobbying activity when they specify which practitioners and which services clients may use. Similarly, at present, the acceptability of Aid for Dependent Children administered by Social Security style systems of disbursement, without counseling, training or verification of whether the child benefits from the payments, is improbable. But there is evidence that we are headed in that direction (Kahn & Kamerman, 1978).

IV-A-1. Organizational ControlThe first factor, the tendency of organizations to exert control in situations of uncertainty is supported by the work of Freeman


(1973), Pfeffer and Leblebici (1973), Khandwalla (1974) and Dalton (1959), and is discussed in detail in March & Simon (1958). The research of Freeman, Khandwalla, Pfeffer and Leblebici indicates that when established organizations are threatened with environmental uncertainty or vulnerability, they tend to centralize and formalize their operations. Dalton's research reached a similar but less global judgment. He found that as areas or departments with high uncertainty developed or were recognized, strong, upwardly motivated managers were moved in to rationalize the uncertainty. Each of these researchers notes that, as the extent of environmental stress increases, formalization increases, and that this is often a maladaptive syndrome.

Research by Lawrence and Lorsch (1976) would seem to conflict with these findings. Their study examined three organizations operating in unstable markets: plastics, food and container manufacturing. They found that in response to market and technological instability, the plastics firm was the least formalized of the three but was a high performer in the field of plastics. Two factors explain this apparent contradiction. First, variability in the field of plastics predated the development of the bureaucratic structure, thus the structure may have been primarily determined by the technology. Second, only surviving firms were examined. The survivors could easily be the anomalies.

IV-A-2. Staff Compliance

The second factor, staff compliance, leads to an increase in the frequency of occurrences of behaviors desired by the organization, particularly when the behaviors are closely monitored role requirements. Weber reported in 1930 that members of bureaucratic organizations desire the structure found therein and are only likely to resist authority when they are personally threatened. Merton (1957), Thompson (1961, 1967), Argyris (1964) and Crozier (1964) have elaborated on this observation. As a result of their work, Weber's report can be expanded to: members of bureaucratic organizations are likely to have selected their careers for the security they provide, often become preoccupied with rules to the exclusion of the jobs to be done, and are


likely to resist authority only when they perceive a threat to personal value that exceeds the value they place on their security. Aiken and Hage (1966) further supported this final qualification with their study of social welfare professionals. They found that job codification alienated professionals but did not disrupt organizational relations until the rules became excessively rigid and strictly enforced.

These findings indicate that as behaviors of service delivery staff are formalized, to the extent that they can be monitored, agents are likely to conform to role requirements. Within bounds, it is not even important whether the required behaviors are perceived as effective, just that they do not infringe on the agents' personal needs or beliefs too harshly. Additionally, the alienation literature identifies high turnover, increased absenteeism and generally poor work performance among alienated employees. Thus, staff who are dissatisfied may not quit, and the effects of their absenteeism and poor work performance are likely to fall on the client rather than on the organization. If they do quit, individuals who will not resist are likely to be hired in their place (Blau & Scott, 1962). In short, threatened bureaucratic organizations will consolidate their control of staff by formalization, and staff are not likely to strongly resist the process.

IV-A-3. Bureaucratic Resistance

The third factor cited, bureaucratic resistance to change, is thoroughly documented in Warwick's Theory of Public Bureaucracy (1975). In effect, he reports that over a period of time a series of changes in bureaucratic structure and procedures disappeared and the organization returned to its pre-change behaviors, in spite of the fact that the changes had beneficial outcomes. Katz & Kahn (1979) and Aldrich (1979) agree that organizations, though they interact with the environment, exercise control over parts of it, selectively perceive parts of it and, in cases where change is unavoidable, make changes that have the least structural impact. The implication of these findings is that once a bureaucracy is established, it is highly resistant to change. Thus, though the bureaucratic structure which administers the


service delivery system preceded experimental knowledge of service delivery technology, the structure is likely to be highly resistant to change, even if it is inconsistent with the technology available. Therefore, a retreat from an existing level of formalization is unlikely (Patti, 1974).

IV-A-4. Summary

In an uncertain environment organizations are likely to increase formalization. Unless the formalization poses a substantial threat to human service agents' personal values, they are unlikely to effectively resist, though the quality of their work may deteriorate. Existing and newly imposed levels of formalization which are enforced will be extremely difficult to reduce in the future.

IV-B. The Stage for Formalization

Human service programs were once widely supported, particularly by national legislators. Recently, economic problems, the persistence of the problems the services were intended to address (Williams, 1979) and a lack of historical documentation of program effectiveness (Attkisson, 1978; Newman & Turem, 1974) have eroded that support. The extent of that erosion is demonstrated by California's Proposition 13 and by the electorate's high level of interest in, and support for, President Reagan's tax cut/increased military spending proposals. Clearly, the environment of human service organizations has become at least somewhat hostile. Sunset legislation and its concomitant emphasis on definitive evaluations (Hicks, 1979; Behn, 1977), the increasing role of the budget office in program and policy decisions (Weinberg, 1979) and budget set-asides for program evaluation are but a few of the signs of this hostility. If, as Khandwalla (1974) suggests, organizations do tighten up and standardize their operations when threatened, evaluative conclusions could be used as evidence to support formalization of service delivery agent roles. Such formalization would offer the organization the internal control that Pfeffer and Leblebici


(1973) report that organizations seek when competition, in this case for survival, is high.

According to Maynard-Moody (1980), "program evaluation as currently conceived could encourage programs to become rigid, narrow, inefficient and costly." He states that these negative effects may, in part, be due to applying the concepts useful for studying for-profit organizations to evaluating human service organizations. In other words, evaluation findings may facilitate formalization of service delivery roles. The appearance of efficiency resulting from the formalization may be detrimental to service effectiveness, effectiveness which is so difficult to measure with accuracy. The fact that the appearance of efficiency usually improves productivity, which usually improves effectiveness in free-market organizations, is clearly an insufficient cause for wholesale transfer of the formula to human service organizations.

IV-C. Conclusion

Human service agencies and their funders need to recognize that as the individuals served, the service procedures and the desired outcomes of service become less easily specified, accounting (the appearance of efficiency) will become more difficult; that is, accounting will become too complex to implement. In the words of Maynard-Moody (1980), "Human service organizations may appear chaotic and unproductive when compared to a factory but may in fact be constructive responses to ambiguous and variable social problems." For example, the food stamp program, if it intends only to provide increased buying power, can be very routine, have high levels of effectiveness and be very formalized (have the appearance of efficiency). At the other extreme, services which aim to change individual motivation and/or behavior may surpass the optimum mix of effectiveness and organizational accountability merely by having the service delivery agent fill out a formal intake screening form with the client. Continuing the previous example, if food stamps set out to improve nutritional status, basing eligibility solely on income would be inconsistent with the goal of the program.


Proposed solutions to this problem abound in the evaluation literature. The approaches recommended have often focused on ways to make agencies evaluable (Rutman, 1980) and on restructuring the methods of initiating social services so that they can be, at least initially, properly evaluated (Rossi, et al., 1974). Most of these proposals have focused on achieving some measure of standardization across, or control over, organizations. Only recently has the evaluation literature taken heed of Cronbach's 1975 exhortation that social researchers "develop explanatory concepts, concepts that will help people use their heads". One promising way to develop these concepts is proposed by McClintock (1980). He states that rather than focusing on identifying the characteristics of presumed value in social programs, evaluators should perform a "sense making function." Evaluation should help a program to understand the relationship between its behaviors and the quality of the services it provides. In other words, evaluation should educate agencies about their structure and its implications for their performance (Maynard-Moody & McClintock, 1981). Both Maynard-Moody's and McClintock's work emphasizes helping agencies to increase their control over the occurrence of the states valued by decision makers, and therefore to increase the agencies' ability to document those occurrences. It does not focus on defining procedures, identifying correlations or specifying the units of value achieved.

One way to "make sense" is to negotiate the content and emphases of evaluations with the parties who will contribute to and use the evaluation. This process is similar to Patton's Utilization-Focused Evaluation in that the individuals using the evaluation results describe the parameters within which the evaluation is done. The differences are two:

first, sense-making evaluation asks the questions how and why, not what and how much. The answers to what and how much are useful by-products rather than the only products;

second, negotiated evaluations make the agency an equal partner in the evaluation, not just its subject. In other words, negotiated evaluations are limited by the context of the agency, not just the expectations of the decision makers.


BIBLIOGRAPHY

Argyris, C., Integrating the Individual and the Organization, John Wiley & Sons, NY, 1964

Aiken, M. & J. Hage, Organizational alienation: a comparative analysis, American Sociological Review, v31, 1966, pp. 497-507

Aldrich, H., Organizations and Environments, Prentice-Hall, NJ, 1979

Attkisson, C., W. Hargreaves & M. Horowitz, Evaluation of Human Service Programs, Academic, NY, 1978

Behn, R., The false dawn of sunset laws, The Public Interest, #49, 1977, pp. 103-118

Bentler, P. & J. Woodward, A Head Start reevaluation: positive effects are not yet demonstrated, Evaluation Quarterly, v3, 1978, pp. 493-510

Berkanovic, E. & L. Reeder, Can money buy the appropriate use of services, Journal of Health and Social Behavior, v15, 1974, pp. 93-99

Bernard, S., Why service delivery programs fail, Social Work, v24, 1975, pp. 206-211

Blau, P., Decentralization in bureaucracies, in Power in Organizations, M. Zald, ed., Vanderbilt U Press, TN, 1970

Blau, P. & R. Scott, Formal Organizations, Chandler, SF, 1962

Brager, G. & S. Holloway, Changing Human Service Programs, Free Press, NY, 1978

Broskowski, A. & J. Driscoll, The organizational context of program evaluation, in Evaluation of Human Service Programs, Attkisson, et al., Academic, NY, 1978, pp. 43-58

Campbell, D., Assessing the impact of planned social change, Evaluation and Program Planning, v2, 1979, pp. 67-90

Campbell, J., On the nature of organizational effectiveness, in New Perspectives on Organizational Effectiveness, J. Pennings, P. Goodman & Associates, Jossey-Bass, CA, 1977

Cook, T. & D. Campbell, Quasi-Experimentation: Design and Analysis Issues for Field Settings, Rand-McNally, IL, 1979

Cook, T. & C. Reichardt, Qualitative and Quantitative Methods in Evaluation Research, Sage, CA, 1979

Consortium for Longitudinal Studies, Summary report: lasting effects after preschool, DHEW pub. #79-3012, 1979

Cronbach, L., Beyond the two disciplines of scientific psychology, American Psychologist, Feb. 1975, pp. 116-127

Crozier, M., The Bureaucratic Phenomenon, U of Chicago Press, IL, 1964

Dalton, M., Men Who Manage, John Wiley, NY, 1959

Datta, L. & R. Perloff, Improving Evaluations, Sage, CA, 1979

Demone, H. & D. Harshbarger, A Handbook of Human Service Organizations, Behavioral Publications, NY, 1974

Etzioni, A., Modern Organizations, Prentice-Hall, NJ, 1964

Freeman, J., Environment, technology and the administrative intensity of manufacturing organizations, American Sociological Review, v38, 1973, pp. 750-763

Gergen, K., Social science as history, Journal of Social Psychology, v28, 1973, pp. 309-316

Glisson, C., Dependence of technological routinization on structural variables in human service organizations, Administrative Science Quarterly, v23, #3, 1978, pp. 383-395

Goodman, P., J. Pennings & Associates, New Perspectives on Organizational Effectiveness, Jossey-Bass, SF, 1979

Gottesfeld, H., F. Lieberman, S. Roen & S. Gordon, Strategies in innovative human service programs, in Developments in Human Services, v1, H. Schulberg, F. Baker & S. Roen, eds., Behavioral Publications, NY, 1973

Gowin, D., Educating, Cornell U Press, NY, 1981

Gruber, H., Total administration, Social Work, v23, 1974, pp. 625-636

Hage, J. & M. Aiken, Social Change in Complex Organizations, Random House, NY, 1970

Hage, J. & M. Aiken, Routine technology, social structure and organization goals, Administrative Science Quarterly, v14, #3, 1969, pp. 366-376

Hage, J. & M. Aiken, Relationship of centralization to other structural properties, Administrative Science Quarterly, v12, 1967b, pp. 72-92

Hanlan, A., Casework beyond bureaucracy, Social Casework, v52, 1971, pp. 195-199

Hall, R., Organizations: Structure and Process, 2nd ed., Prentice-Hall, NJ, 1977

Hall, R., The concept of bureaucracy: an empirical assessment, American Journal of Sociology, v69, 1963, pp. 32-40

Hall, R., Intraorganizational structural variation: application of the bureaucratic model, Administrative Science Quarterly, v7, 1962, pp. 295-308

Hasenfeld, Y., Client-organization relations: a systems perspective, in The Management of Human Services, R. Sarri & Y. Hasenfeld, eds., Columbia U Press, NY, 1978

Hasenfeld, Y. & R. English, eds., Human Service Organizations, U of Michigan Press, MI, 1974

Hicks, R., Sunset legislation, in Evaluation in Legislation, F. Zweig, ed., Sage, CA, 1979

Hickson, D., C. Hinings, C. McMillan & J. Schwitter, The culture-free context of organizational structure: a tri-national comparison, Sociology, v8, 1974, pp. 59-80

Hoshino, G., Social services: the problem of accountability, Social Service Review, v47, 1973, pp. 373-384

Kamerman, S. & A. Kahn, eds., Family Policy: Government and Families in Fourteen Countries, Columbia U Press, NY, 1978

Katz, D. & R. Kahn, The Social Psychology of Organizations, 2nd ed., John Wiley & Sons, NY, 1979

Kaufman, H., Are Government Organizations Immortal?, The Brookings Inst., Washington, D.C., 1976

Khandwalla, P., Mass output orientation of operations technology and organizational structure, Administrative Science Quarterly, v19, 1974, pp. 74-97

Lawrence, P. & J. Lorsch, Organization and Environment: Managing Differentiation and Integration, Harvard Graduate School of Business Administration, Cambridge, MA, 1967

Lefton, M. & W. Rosengren, Organizations and clients: lateral and longitudinal dimensions, American Sociological Review, v31, 1966, pp. 802-810

Levy, F., A. Meltsner & A. Wildavsky, Urban Outcomes: Schools, Streets and Libraries, U of CA Press, CA, 1974

McClintock, C. & E. Reilinger, New directions for human service planning, Southern Review of Public Administration, v4-5, 1980, pp. 404-425

McClintock, C., Program evaluation as sense-making: an approach for the '80s, unpublished paper, Cornell U, 1980

McGowan, E., Rational fantasies, Policy Sciences, v7, 1976, pp. 439-454

Mann, J., The outcomes of evaluative research, in Changing Human Behavior, Scribner, NY, 1965, pp. 191-214

March, J. & H. Simon, Organizations, John Wiley & Sons, NY, 1958

Marris, P. & M. Rein, Dilemmas of Social Reform: Poverty and Community Action in the United States, Routledge & Kegan Paul, London, 1967

Marvin, K., Evaluation for Congressional committees: the quest for effective procedures, in Evaluation in Legislation, F. Zweig, ed., Sage, CA, 1979

Maynard-Moody, S., Revaluing charity: some negative effects of evaluation research, unpublished paper, Cornell U, 1980

Maynard-Moody, S. & C. McClintock, Square pegs in round holes: program evaluation and organizational uncertainty, Policy Studies Journal, v9, 1981, pp. 644-666

Mayntz, R., Sociology, value freedom, and the problems of political counseling, in Using Social Research in Public Policy Making, C. Weiss, ed., Lexington Books/D.C. Heath, MA, 1977

Merton, R., Social Theory and Social Structure, rev. ed., Free Press, NY, 1957

Mohr, L., Organizational technology and organizational structure, Administrative Science Quarterly, v16, 1971, pp. 444-459

Newman, E. & J. Turem, The crisis of accountability, Social Work, v23, 1974, pp. 5-16

Patti, R., Organizational resistance to change, Social Service Review, v48, 1974, pp. 367-383

Patton, M. Q., Utilization-Focused Evaluation, Sage, CA, 1978

Pennings, J., P. Goodman & Associates, New Perspectives on Organizational Effectiveness, Jossey-Bass, CA, 1977

Perrow, C., Three types of effectiveness studies, in New Perspectives on Organizational Effectiveness, J. Pennings, P. Goodman & Associates, Jossey-Bass, CA, 1977

Perrow, C., Complex Organizations: A Critical Essay, Scott, Foresman & Co., IL, 1972

Perrow, C., The Radical Attack on Business: A Critical Analysis, Harcourt Brace Jovanovich, NY, 1972

Perrow, C., Organizational Analysis: A Sociological View, Wadsworth Pub., CA, 1970

Perrow, C., Departments, power and perspective in industrial firms, in Power in Organizations, M. Zald, ed., Vanderbilt U Press, TN, 1970

Perrow, C., A framework for the comparative analysis of organizations, American Sociological Review, v32, 1967, pp. 194-208

Pfeffer, J. & H. Leblebici, The effect of competition on some dimensions of organizational structure, Social Forces, v52, 1973, pp. 268-279

Pressman, J. & A. Wildavsky, Implementation, U of California Press, CA, 1973

Pugh, D., D. Hickson, C. Hinings & C. Turner, The context of organizational structure, Administrative Science Quarterly, v14, 1969, pp. 91-114

Pugh, D., D. Hickson, C. Hinings & C. Turner, Dimensions of organizational structure, Administrative Science Quarterly, v13, 1968, pp. 65-105

Rosengren, W. & M. Lefton, Organizations and Clients, C. E. Merrill, OH, 1970

Rosengren, W., Structure, policy and style: strategies of organizational control, Administrative Science Quarterly, v12, 1967, pp. 140-164

Rossi, P., Issues in the evaluation of human service delivery, Evaluation Quarterly, v2, 1978, pp. 573-599

Rossi, P., H. Freeman & S. Wright, Evaluation: A Systematic Approach, Sage, Beverly Hills, CA, 1979

Rutman, L., Planning Useful Evaluations, Sage, CA, 1980

Sarri, R. & Y. Hasenfeld, eds., The Management of Human Services, Columbia U Press, NY, 1978

Schlenker, B., Social psychology as science, Journal of Personality and Social Psychology, v29, 1974, pp. 1-15

Schulberg, H., F. Baker & S. Roen, eds., Developments in Human Services, Behavioral Publications, NY, 1973

Scott, W., Effectiveness of organizational effectiveness studies, in New Perspectives on Organizational Effectiveness, J. Pennings, P. Goodman & Associates, Jossey-Bass, CA, 1977

Staw, B., The experimenting organization: problems and prospects, in Psychological Foundations of Organizational Behavior, B. Staw, ed., Goodyear, CA, 1977

Thompson, J., Organizations in Action, McGraw-Hill, NY, 1967

Thompson, J. & W. McEwen, Organizational goals and environment: goal setting as an interactive process, American Sociological Review, v23, 1958, pp. 23-30

Thompson, M., Benefit-Cost Analysis for Program Evaluation, Sage, CA, 1980

Titmuss, R., B. Abel-Smith & K. Titmuss, Social Policy: An Introduction, Routledge, 1974

Warwick, D., A Theory of Public Bureaucracy, Harvard U Press, MA, 1975

Weinberg, H., Using policy analysis in congressional budgeting, in Evaluation in Legislation, F. Zweig, ed., Sage, CA, 1979, pp. 28-44

Weiss, C., Research for policy's sake: the enlightenment function of social research, Policy Analysis, v3, 1977, pp. 531-539

Weiss, C., Evaluation Research, Prentice-Hall, NJ, 1972

Whittaker, G. P., Coproduction: citizen participation in service delivery, Public Administration Review, May/Jun 1980, pp. 240-246

Wildavsky, A., The self-evaluating organization, Public Administration Review, v32, 1972, pp. 509-520

Wilensky, H. & C. Lebeaux, Conceptions of social welfare, in Human Service Organizations, Y. Hasenfeld & R. English, eds., U of Michigan Press, Ann Arbor, MI, 1974

Williams, H., Foreword, in Evaluation in Legislation, F. Zweig, ed., Sage, CA, 1979

Woodward, J., Industrial Organization: Theory and Practice, Oxford U Press, London, 1965

Young, C. & J. Comtois, Increasing congressional utilization of evaluation results, in Evaluation in Legislation, F. Zweig, ed., Sage, CA, 1979

Yuchtman, E. & S. Seashore, A system resource approach to organizational effectiveness, Administrative Science Quarterly, v22, 1977, pp. 97-103

Zald, M., ed., Power in Organizations, Vanderbilt U Press, TN, 1970

Zweig, F., ed., Evaluation in Legislation, Sage, CA, 1979

i. EXPLANATION OF "VARIABLE ANALYSIS TYPE" (Perrow, 1977, p. 97): "Those . . . who are critical of effectiveness studies are referring to the variable analysis type, which is virtually the only type of effectiveness study now practiced by scholars. In its simplest form, it designates Y as a legitimate goal or output of the organization and studies the effect upon Y of changes in X or a number of Xs. Y may be the adaptability or flexibility of the organization, productivity, profitability, amount of job satisfaction, growth, or wealth, to cite the most common dependent variables that Steers (1977) found. X may be training, supervisory style, authority structure, integration, coordination, specialization, or any number of things. A typical study, then, might ask, what is the effect of centralization of decision making on job satisfaction?"

ii. The Consortium coordinated the collaboration of 12 investigators who had independently designed and implemented infant and preschool programs in the 1960s. The effort pooled the investigators' data and followed up on the original subjects, who were aged 9 to 19 at the time of follow-up. (Lasting Effects after Preschool, Consortium for Longitudinal Studies, DHEW pub. OHDS-80-30179, Oct. 1979)