The Ethical Tipping Points of Evaluators in Conflict Zones

American Journal of Evaluation
Colleen Duggan and Kenneth Bush, "The Ethical Tipping Points of Evaluators in Conflict Zones"
Published online 9 June 2014. DOI: 10.1177/1098214014535658
The online version of this article can be found at: http://aje.sagepub.com/content/early/2014/06/09/1098214014535658
Published by SAGE Publications (http://www.sagepublications.com) on behalf of the American Evaluation Association.


Article

The Ethical Tipping Points of Evaluators in Conflict Zones

Colleen Duggan1 and Kenneth Bush2

Abstract

What is different about the conduct of evaluations in conflict zones compared to nonconflict zones—and how do these differences affect (if at all) the ethical calculations and behavior of evaluators? When are ethical issues too risky, or too uncertain, for evaluators to accept—or to continue—an evaluation? These are the core questions guiding this article. The first section considers how the particularities of conflict zones affect our ability to conduct evaluations. The second section undertakes a selective review of the literature to better understand how ethical issues have been addressed both in evaluation research and in evaluation manuals. The third section draws on a series of structured conversations with evaluators to probe more deeply into the ethical challenges they face in conflict zones—with a particular interest in the "ethical tipping points" of evaluators. The fourth section considers ways evaluation actors can manage ethical challenges in conflict zones, concluding with a brief discussion of how these issues might be located more centrally in evaluation research and practice.

Keywords

complexity, conflict evaluation, ethics, politics, risk

"Sometimes you have to break the rules in order to behave ethically."

– Evaluator 3, May 2013

Introduction

The term ethics and its derivatives are used in this article to refer to those "values relating to human conduct, with respect to the rightness and wrongness of certain actions and to the goodness and badness of the motives and ends of such actions."1 With regard to evaluation, we are concerned with the ethical decision making of evaluators as (1) members of a profession, guided by the principles and

1 International Development Research Centre (IDRC), Ottawa, Canada
2 University of York, York, UK

Corresponding Author: Colleen Duggan, International Development Research Centre (IDRC), Ottawa, Ontario, Canada, K1P 0B2. Email: [email protected]. Bush may be contacted via email: [email protected]

American Journal of Evaluation, 1–22. © The Author(s) 2014. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1098214014535658. aje.sagepub.com


codes of behavior expected of that profession; and (2) individual human beings within the context of their particular cultural and psychological makeup. We are interested in how conflict context affects the ethical calculus of evaluators, with a particular focus on "tipping points"—those moments (or the conditions within which) evaluators cross a liminal ethical threshold in their decision making to either terminate a contract ("walk") or refuse a contract ("balk").

Our emphasis on the conflict environment as context positions this article within a research stream that was highlighted in the theme of the 2009 annual conference of the American Evaluation Association (AEA): "Context Matters." We also anchor our thinking in the arguments sketched out recently in a special issue on this theme in New Directions for Evaluation, which pointed out that "the different approaches that evaluators use to make judgments depend on the contexts in which those different approaches are used. [T]he challenge . . . is more explicitly understanding and matching the valuing approach and context" (Patton, 2012, p. 98, in Julnes, 2012). If, however, we are to put context into context, and if we are to understand the difference diverse contexts make, then we need to begin to tease out the thick details from context-specific studies—as we are attempting to do in this article. Such examinations will require the use of methodologies that are attentive to the psychological, sociological, and ethnographic subtleties of each case.

The writing of this article was precipitated by the evaluation research and capacity building work we have jointly undertaken with humanitarian, peacebuilding, and international development actors from the global North and South (evaluators, evaluation managers, and commissioners or clients of evaluation). Most recently, the direct inspiration for the article can be traced to an animated discussion with participants in our workshop at the South Asian Community of Evaluators Conclave in Kathmandu (March 2013).2 Participants were immersed in a series of ethically fraught scenarios in which they were required to identify ethical challenges and then to devise strategies for managing them (or avoiding them in the future). One group grappled with a semihypothetical case involving a hydroelectric dam project in Afghanistan. That group teased out the ethical issues, systematically analyzed them, weighed them up, and hashed out both preventive and responsive options. When reporting back in plenary, the team delivered its verdict: "We would walk. There is no way we could continue the evaluation under those conditions." The group's response initiated a broader discussion among participants about those occasions when they had either turned down, or terminated, a contract. The ensuing conversation was one that neither of us had heard before in quite so much detail—despite being so central to our evaluation work in conflict zones. This article is our attempt to engage and disentangle these issues and to place them more centrally on the professional and research agenda of evaluators.

It should be emphasized that this article is an exploratory study that draws us into not one, but two, gray zones within evaluation research: (1) the question of ethics in evaluation and (2) the question, "what is different about the conduct of evaluations in conflict zones compared to nonconflict zones—and how does conflict context affect (if at all) the ethical calculations and behavior of evaluators?" As discussed subsequently, despite some noteworthy research on evaluation ethics, this particular subfield is relatively neglected. Of the available research on evaluation and ethics, none addresses the particular challenges confronting evaluators in conflict zones. By exploring evaluation ethics within the context of conflict zones, this article seeks to draw attention to an area of theory and practice that is conspicuous by its absence in the field of evaluation. The need for such an exploration is highlighted by the ongoing presence of international aid initiatives in violence-prone settings—and the need to evaluate them in ways that are sound methodologically, politically, and ethically. This applies to private sector investments, no less than not-for-profit humanitarian, international development, or peacebuilding initiatives.

Turning to practice, we note a similar lacuna in the guidelines and textbooks intended for evaluators and current or future evaluation managers and commissioners. And, where discussions on ethical practice are present, they rarely extend beyond hortatory assertions of the importance of ethics, and exhortations that evaluators do the "right things" in their work, while avoiding doing the


"wrong things." In practical terms, there is very little guidance on "how to be an ethical evaluator," let alone how to be an ethical evaluator in a conflict zone. Further, in those places where ethics do enter the ambit of evaluation as a profession, they tend to be conceptualized and presented statically as a list of challenges and dilemmas to be aware of—which is a bit like giving a child a picture of a car before sending him or her out to play on the road. Yet, as evaluators know through experience, ethical challenges are anything but static or one dimensional. Rather, they are dynamic, omnipresent, multidimensional, and abundant. No list could ever be comprehensive, given the variety of permutations that ethical challenges might assume and the variability of the contexts within which they arise.

Evaluation in conflict-affected contexts involves significant risks and high stakes for evaluation actors. Serious consequences—political, security, and otherwise—can, and do, unfold from unethical evaluation practice. Recognizing, planning for, and grappling with ethical challenges in conflict zones is not easy; ethical evaluation practice does not, in our opinion (and experience), lend itself to easy operationalization,3 much less institutionalization. For this reason, the observations, discussions, and suggestions raised in this article draw from professional craft knowledge,4 garnered in the evaluation workshops we have done with evaluators and evaluation managers and through structured conversations with professional evaluators. The systematization and sharing of such knowledge is critical for protecting the rights and interests of all evaluation stakeholders and for continuing the professionalization of evaluation as an area of research and practice.

The article is structured as follows: The first section considers how the particularities of conflict zones affect our ability to conduct evaluations. The second section undertakes a brief review of the literature to better understand how ethical issues have been addressed both in evaluation research and in evaluation manuals. The third section draws on a series of structured conversations with evaluators to probe more deeply into the ethical challenges faced in conflict zones, with a particular interest in the ethical tipping points of evaluators. The fourth section considers ways that evaluation actors can manage ethical challenges in conflict zones and concludes with a brief discussion of how these issues might be located more centrally in evaluation research and practice.

A Definitional Caveat: "Conflict Context"

The term "context" may be used in different ways, by different people, to mean different things. However, in the field of evaluation, Dahler-Larsen and Schwandt offer a helpful understanding of the term: "that which surrounds an object of [evaluative] interest and helps by its relevance to explain it" (Dahler-Larsen & Schwandt, 2012, citing Sharfstein). Fitzpatrick notes that "although different (evaluators) highlight different elements of context that are important to our practice, there continues to be a lack of unified theory or conceptualization of the potential elements in the context that influence our practice" (Greene, 2005, as cited in Fitzpatrick, 2012, p. 8). This is true of the field of evaluation writ large, and it is especially true of the evolving subfield of evaluation in conflict zones.

In this article, conflict context is used to refer to:

- the influence that "conflict" (elaborated subsequently) has on the environmental conditions (physical, historical, social, cultural, political, and organizational) within which an evaluation is undertaken; and

- the impact of conflict—its presence, legacy, or potentiality—on evaluators and evaluation stakeholders,5 on the conduct of an evaluation, and on the communication and actual use of its findings.

Our use of the term conflict includes, but is not limited to, militarized conflicts characterized by clashes between armed groups (governmental, quasi-governmental, antigovernmental, and nongovernmental) as well as the violence and terror inflicted by such groups on unarmed civilians. However, the term includes a broad range of nonmilitarized forms of violence as well.6 For the purposes


of this article, we stretch this term further to include social violence, such as misogynous violence, caste violence, homophobic violence, criminalized violence, and the violence toward indigenous peoples around the world (to name but a few examples). This inclusive definition is particularly important because of the ease with which the most pervasive forms of violence may disappear from public or scholarly attention, whether through arguments of cultural relativity, through the inattentiveness that accompanies privilege, or through discriminatory legislation that creates a veneer of legitimacy for acts of systemic violence. Further, in all cases of violent conflict, there is always a synergistic and coexisting melange of different types of conflict (violent, nonviolent, and proto-violent). Thus, narrow definitions, or one-dimensional analytical lenses, risk obscuring the ways that different forms of conflict may interact (e.g., militarized violence and gendered violence) or the ways in which some forms of violence may mutate (such as the militarization of "postconflict" criminality).

An Analytical Caveat: When Is an Issue an Ethical Issue?

Ethical issues may not even be identified as ethical issues. They are left unnoticed. By the time they fester enough to be noticed, it is too late. All hell has broken loose.

– Evaluator 6, May 2013

The first challenge we confront—both as evaluators and as evaluation researchers—is determining when a particular configuration of conditions, circumstances, and available choices should be framed and addressed as an ethical problem. One evaluator's ethical dilemma may be viewed by another evaluator as a political or methodological constraint, or simply as a logistical "reality" (Morris, 2008, pp. 2–3). The subsequent scenario illustrates how the same issue within an evaluation may be viewed and addressed differently depending on the analytical lens employed.

Scenario. An evaluation team is told by the implementing nongovernmental organization that, due to levels of insecurity, the evaluators will be unable to undertake site visits to certain villages involved in a postpeace agreement-related project that is located in a still-militarized conflict zone.

- To the extent that the facts of this case are taken at face value, then this may be framed as a case where logistical challenges (fluctuating levels of insecurity) have affected the conduct of an evaluation.

- When analyzed through a methodological lens, the evaluation team may view the lack of access as a challenge to the representativeness of the sample and the generalizability of its evaluation findings.

- The application of a political lens highlights issues of competing power and interests. Thus, when the evaluation team leader discovers that foreign researchers and international teams have been driving back and forth to the same areas that they were told were unsafe, questions arise as to whether the lack of access is truly the result of insecurity or the actions of stakeholders with vested interests in hiding something.

- To the extent that the now-inaccessible sites exclude significant war-affected populations from an evaluation (e.g., a minority ethnic group, a particular tribe, or a displaced or resettled population), an evaluator will need to consider whether the findings of the evaluation—even with clear caveats—are ethically defensible or whether they misrepresent the efficacy/impact of the project, the impact of the war, or postconflict reconstruction efforts. The application of this ethical lens should initiate (1) an exploration of the options available for completing the most methodologically and ethically rigorous evaluation within the given circumstances; and (2) a process of calculating the pros and cons of continuing the evaluation depending on the space that can be made available for the evaluation. This would include an assessment of whether the circumstances might be part of a campaign by vested interests to manipulate findings.


Pressure to avoid applying an ethical lens to situations is highest when a premium is placed on program funder visibility and where there is acute pressure to be seen to be "doing good" and not "doing bad" by exacerbating conflicts or by inhibiting societal reconstruction (particularly in peacekeeping and peacebuilding interventions).7 Such pressures are part of the "Real World Evaluation" experience addressed by Bamberger, Rugh, and Mabry (2006). In conflict contexts, they are the central concern of funders. Paradoxically then, the political desire to be seen to be doing good may create disincentives to the application of ethical frames of analysis.8

While every situation is different, and while every evaluator may calculate risks differently, a focus on ethical tipping points helps to refine our thinking in ways that are empirically grounded and practically oriented. In our research, we decided that the driving motivation for a decision to balk or walk had to be defined by the evaluators themselves as being based on ethical considerations at the time that the decision was made. While this might initially appear self-evident, it is not so black and white within the narratives of evaluators in conflict zones. For example, one evaluator interviewed said that he would not accept any contracts in Iraq or Afghanistan (Evaluator 10). But he was quick to explain that his decision was not based on security concerns but, rather, a belief that any evaluation in these countries would inevitably be politicized and "contaminated" by the national interests and military imperatives of the major actors there. Security, for him, was a logistical issue. The ethical issue underpinning his decision to turn down such evaluations was rooted in a desire to avoid misuse, manipulation, or politicization of evaluation findings—not safety or security. Another evaluator, however, working in a militarized conflict zone in South East Asia, framed security explicitly as an ethical issue—more specifically, the moral responsibility of the team leader to ensure the safety and security of all stakeholders "as far as humanly possible" (Evaluator 6). Together, these two examples suggest that more critical reflection is needed in recognizing tendencies to generate lists of issues labeled "ethical." Sometimes these issues are entangled with the ethical; sometimes they are not. But by placing these instances in context, we are better able to understand when, why, and how they are, or are not, rooted in the ethical and how they may be addressed.

These examples, like the previous scenario, also underscore the importance of issue framing by evaluators. While work experience is essential for the development of the analytical capacity to identify potential ethical issues, the fluidity and uncertainty in a conflict context require the evaluator to be constantly monitoring and weighing up the kaleidoscopic combinations of factors affecting ethical calculations.

The Impact of Conflict Context on Evaluation

You can’t be a 9:00 to 5:00 [evaluator] in a war zone.

– International aid worker, Batticaloa, Sri Lanka, 2008.

As the preceding quote suggests, conflicts do not follow office hours. The evaluator working within such environments is subject to related stresses and pressures 24 hr a day, 7 days a week. This reality erases the line between the personal and the professional self, as well as between personal and professional ethics. Looking in from the outside, it is easy to forget the insidious and incessant pressures of working in conflict zones (psychological as well as professional). However, we should also note a significant difference between evaluators and others (humanitarian or development workers) in militarized conflict contexts: the very short amount of time they typically spend "in the field" conducting an evaluation. Consequently, external evaluators find themselves bungee jumping in and out of work environments in which they may be unfamiliar with (1) the thick details of the sociocultural, historical, political, or development context; (2) the subtleties and complexity of conflict


dynamics and relationships; and (3) the fluidity of the conflict environment. The first two sources of unfamiliarity may be reduced through preparation, previous experience, or by ensuring that the appropriate knowledge or skill set is represented by an evaluation team. Addressing the third source of unfamiliarity may require access to trusted, detailed, on-the-ground, local knowledge, which is not easily or quickly available, particularly if an evaluator is not a geographic area expert. While certain sensibilities travel well from one conflict context to another, no two contexts are ever the same, which means, as one evaluator puts it, "you are always learning on the hoof" (Evaluator 2).

The Symbiotic Relationship Between Conflict Context and Evaluation9

To better understand the interactions between conflict, context, and evaluation, we have conceptually disaggregated evaluation into four constituent components or "domains": methods, logistics, politics, and ethics. Figure 1 illustrates the ways in which conflict context impinges on these domains individually and collectively. The domains are nested in a symbiotic relationship with structures and processes of conflict. That is, they will affect, and be affected by, conflict (illustrated here by the two-way arrows between the extreme context of the conflict zone and each domain). For example, conflict context will affect the logistics and methodology of an evaluation; but decisions about methodology and logistical arrangements for an evaluation may also affect conflict dynamics. The figure highlights the intersection and interaction of these domains. By framing the major challenges to evaluation in this way, we may move beyond static, one-dimensional checklists of issues within each domain—the methodological challenges are x, the ethical dilemmas are y, the logistical challenges are z, and so on. While such lists are necessary, they are insufficient for understanding and exploring the dynamic complexities and synergies affecting evaluation within conflict contexts.

The points of domain intersection are labeled in the figure ("A" through "E") to correspond to the subsequent examples. These are intended to help us better delineate and explain the kinds of issues that may arise within these vectors.

Figure 1. The intersection of evaluation domains in conflict zones of moderate intensity.


A: Ethicomethodological issues—For example, reliance on an evaluation methodology that "disappears" key stakeholders and thereby misrepresents the effects of a program and further marginalizes an already marginalized group.

B: Logisticomethodological issues—For example, the lack of access to stakeholders (due to time, insecurity, or geography), which compromises methodological integrity and may feed into a sense of grievance by neglected groups—especially if they have been specifically targeted or abused during a conflict.

C: Politicologistical issues—For example, when an evaluator is only allowed by a program implementer to see "model"/successful sites, thereby inflating the sense of efficacy of the program and unduly promoting the parties responsible for the project—thus feeding directly into the competition of political stakeholders.

D: Ethicopolitical issues—For example, when the client who commissioned the evaluation applies pressure on the evaluator to change the findings or to write a positive evaluation when the data do not warrant it, in order to reap benefits (broadly defined).

E: Omni-domain issues—For example, when a client insists on the exclusive use of a particular methodology, thereby delegitimizing all other (context-appropriate) methods that would allow for a more robust evaluation; the motive, in this example, is the desire to cook the results to justify cutting a program to which the client is ideologically opposed.

By way of explanation, it may be helpful to compare B (logisticomethodological) with C (politicologistical). While both examples illustrate a situation where an evaluator is unable to access a sufficiently representative sample of stakeholders required for a robust evaluation, the reason for the problem is different in each case. In the former case, logistical obstacles (time, geography, or insecurity) affect an evaluator's ability to employ a rigorous-enough methodology. In the latter case, vested political interests actively block access to stakeholders—something the evaluator may or may not recognize. These vested interests could be held by any number of actors in a conflict zone—host or home government officials, program staff of the initiative being evaluated, or the funders and clients of an evaluation.

The relationships between the domains and conflict dynamics are fluid as well as interdependent. They may shift over time if conflict intensifies, further constraining an evaluator's latitude for action. As conflict intensifies (i.e., an increase in the volatility, risk, and levels of potential harm), the four domains of evaluation are forced into each other, so that decisions and actions in one domain inevitably affect all domains (see Figure 2). Thus, for example, it becomes increasingly difficult, if not impossible, for logistical issues to be addressed independently of ethics, politics, and evaluation methods. While this dynamic may also be evident in nonconflict contexts, the difference here is the existence of acute levels of risk and the speed with which relatively minor problems (or miscalculations) in one domain may trigger a chain reaction of serious proportions. As discussed in the fourth section, the same dynamic is evident within the ethics domain, thus affecting the ethical calculations of evaluators. Conflict context pressurizes ethics issues and calculations so that even a relatively minor spark can set off a destructive chain reaction, like a string of firecrackers. This begins to shed light on why evaluation is so much more difficult in contexts affected by violent conflict.10

A Brief Review of Ethics in Evaluation Research and Literature

Looking for Ethical Guidance in Evaluation

While ethical issues are acutely important for the conduct of evaluations in conflict zones, evaluators find themselves with few avenues for practical guidance in the evaluation manuals, handbooks, and guidelines developed by and for organizations working in conflict zones. For example, the most recent OECD Development Assistance Committee guidelines on evaluation in "settings of conflict and fragility" seek "to promote critical reflection [and] to help fill the learning and


accountability gap in settings of conflict and fragility by providing direction to those undertaking or commissioning evaluations and helping them better understand the sensitivities and challenges that apply in such contexts" (Organization for Economic Cooperation and Development [OECD], 2012, p. 20). While these much-anticipated guidelines consolidate and establish the broad parameters of evaluation in conflict zones, there is a conspicuous blind spot when it comes to appreciating the conflict-specific ethics challenges: "ethics" is mentioned only 4 times in the 100-page report (OECD, 2012, pp. 21, 38, 90).

However, the absence of a consideration of evaluation ethics is not unique to the OECD guidelines. Many handbooks are devoid of references to ethics.11 Where ethics do make an appearance, it is typically a hit-and-run reference presented in a wholly hortatory manner. That is, evaluators, evaluation managers, and clients are exhorted to behave ethically with no provision of concrete guidance on how to address ethical challenges. Not surprisingly, there is no discussion of the ethical minefields within which evaluators work in conflict zones.

This is not to say that there are no ethical guidelines for the field of evaluation (AEA, 2004; Australasian Evaluation Society, 1997). The most conspicuous and influential are the AEA Guiding Principles for Evaluators, consisting of systematic inquiry; competence; integrity/honesty; respect for people; and responsibility for general and public welfare.12 The Canadian Evaluation Society advocates a similar set of standards intended to serve as guiding ethical principles: "competence, integrity, and accountability." They exhort evaluators "to act with integrity in their relationship with all stakeholders."13 Such principles are the building blocks for evaluation as a field of ethical practice. In the context of this article, however, it needs to be emphasized that (1) while such guidelines certainly apply generally, none of them is conflict-zone specific and (2) all of them lean heavily toward the hortatory, leaving the evaluator to his own devices as to how exactly the principles or exhortations should be applied. One reason for this may be traced to an inescapable reality: ethics, unlike methodology, is not a technical issue and therefore eludes standardization. Further, no two situations are alike, and we are dealing with the vagaries and variability of human judgment, morality, and behavior within ever-changing work environments.

Figure 2. The amalgamation of evaluation domains as conflict-zone intensity increases.


Approaches to the Study of Ethics in Evaluation Research

The relative absence of ethics considerations in evaluation manuals and handbooks should in no way suggest that ethical issues have been entirely absent from the evaluation research agenda. Indeed, a review of the literature indicates that there have been four primary approaches to exploring ethics in evaluation research.

- Surveys offering a broad overview of ethical issues faced by evaluators. This approach tends to sketch out the broad parameters of the issues, but not the details. Thus, we may better appreciate the extent of a phenomenon, but not the specifics, context, or variability.

- Case-based approaches, best illustrated by Morris's Evaluation Ethics for Best Practice: Cases and Commentaries (2008), which presents evaluators' analyses of hypothetical cases using the template of AEA's Guiding Principles. The pedagogical utility of this approach is high because it allows the reader to better understand how the principles may be used to assess the case and to guide the formulation of possible responses. The focus is largely on professional rather than personal ethics. The limitations of this approach are that it remains an analytical or academic exercise and cannot recreate that sense of immediacy, consequence, and personal culpability that inhabits the "real world."14

- Literature reviews compile and assess the academic writing on evaluation ethics. This is best illustrated by Morris's (2011) comprehensive review of all ethics-related articles published over the past 25 years in the American Journal of Evaluation and Evaluation Practice. This approach gives us a sense of the ethical debates and issues within the field of evaluation studies but speaks more to the nature of the field than to the practical challenges confronting evaluators.

- Deductive approaches begin by positing an organizing principle, concept, or hypothesis that then serves to structure the subsequent collation and analysis of theory and practice. As discussed subsequently, an example that is particularly helpful in the current context is Smith's 1998 article "Professional Reasons for Declining an Evaluation Contract," which is organized around the premise that ethical calculations are shaped according to whether an evaluator is predisposed toward either "guild maintenance" (support of the profession) or societal improvement.15

This article employs an additional approach: structured conversations. These consist of semistructured interviews with 10 evaluators who have worked extensively in conflict contexts. These conversations maintain a particular focus on the ethical challenges, dilemmas, and tipping points encountered in the course of their professional careers. The conversational character of the approach allows for careful, in-depth probing into the circumstances, assessments, and responses of the evaluators in actual (rather than hypothetical) situations. Importantly, this approach helps us to better understand how evaluators "interpret and adapt [professional guidelines] in application, often prioritizing standards according to a situation-specific hierarchy of values" (Mabry, 1999, p. 199). While this approach is strong in the analytical depth it may apply to specific cases, it may lack the breadth of a broad-based survey. Nonetheless, for the purposes of this study, the approach has proved particularly useful for exploring the tensions between personal and professional ethics and the dynamics of ethical decision making in conflict zones.

Despite significant contributions made by evaluation researchers over the years,16 Morris's 2011 literature review leads to the conclusion that there is a "need for increased empirical research on the ethical dimensions of evaluation" (Morris, 2011, p. 134).17 At the time of that review, Morris noted that there was only one textbook devoted to program evaluation ethics.18 One very positive development in efforts to increase ethical sensitivities in the field of evaluation was the establishment of the Ethical Challenges series of the American Journal of Evaluation in 2000, in which commentators are invited to examine the ethical dimensions of an evaluation case study, allowing for the use of contextually rich case studies and scenarios.

Two surveys of the members of two different evaluation associations—on different continents, undertaken 10 years apart—underscore the pervasiveness of the ethical pressures placed on professional evaluators. The first survey, undertaken by Turner (2003), produced a list of ethical challenges confronting members of the Australasian Evaluation Society. These included:

- Evaluation managers or funders trying to influence or control evaluation findings, sometimes including pressure on evaluators for positive results (cited repeatedly), sometimes including pressure to provide "dirt" on a program;
- conflicts between an organization's needs and those of the client of the evaluation;
- political interference;
- disagreement over the dissemination or suppression of reports;
- requests to use information gathered for one purpose (e.g., program improvement) for a different purpose (e.g., accountability);
- unilateral changes to terms of reference midstream or at the time of reporting an evaluation, and dealing with the implications for quality and relevance of data collected;
- discovery of issues of incompetence or poor performance among program staff.

This list invariably evokes nods of recognition among evaluators. The prevalence of such challenges is suggested in the findings of a more recent survey of 2,500 AEA members, focusing more narrowly on pressures to alter the findings of an evaluation—whether through changes in approach or in methodology or through the manipulation or misrepresentation of the content of the findings (Morris & Clark, 2013). Forty-two percent of the respondents reported having been pressured to misrepresent findings, with 70% of this subgroup having experienced this pressure on more than one occasion.

As discussed subsequently, evaluators working within conflict contexts experience much the same challenges, with one critical, game-changing difference: conflict context. It is the difference in context that magnifies the scale and scope of challenges, increases the opacity and complexity of the decision-making environment, and volatilizes the consequences of mistakes and miscalculation.

Evaluation Ethics—When Do We Balk and When Do We Walk in Conflict Contexts?

So let me get this clear: [Project staff] walk through a minefield to get to Burma, work there for a few months, and then walk back through the minefield to get home? And we are trying to schedule an evaluation meeting so that it is not raining when they are going through the minefield?—Evaluator 6, April 2013.

When are ethical issues too risky, or conditions too opaque, to accept or to continue an evaluation? Where are those "ethical tipping points" for evaluators—understood as those moments (or the conditions within which) evaluators cross a liminal ethical threshold in their decision making to either refuse a contract (balk) or terminate a contract (walk)? Many of the evaluators interviewed for this article had not stopped to reflect on the question of ethical tipping points in their work. Thus, in some cases, the conversations raised ethical issues that evaluators themselves had not acknowledged, let alone explored or processed. As one evaluator pointed out, "Ethics are not at the forefront of your mind when you are in a conflict zone. Rather, a more pragmatic and security-oriented logic underpins your decision-making" (Evaluator 2). In his view, it is only afterward that we "put the overlay of ethics on it."


Before seeking, accepting, or rejecting a contract, many evaluators use formal and informal mechanisms to gather information on the track record or integrity of the program or evaluation client. On the basis of the information collected, they make an informed decision.19 While this systematic approach was commonly cited in our interviews, one evaluator explained that ultimately the decision to accept or reject a contract in a conflict zone was based 10% on consciously considered factors and 90% on tacit, unstated factors—many of which were unrelated to ethical considerations (Evaluator 10). These two examples illustrate contradictory (though in practice, not mutually exclusive) approaches to deciding whether or not to accept a contract: the search for sufficient, reliable information versus gut reaction (does it feel right?). The decision to balk or walk is thus a combination of the analytical and the visceral. As we discuss in the Conclusion, the former is amenable to training and data inputs; the latter is more a function of the sensibilities that evolve through experience.

When Do We Balk in Conflict Contexts?

When do we balk in conflict contexts? Our efforts to answer this question are helped considerably by the work of Smith (1998) and Mabry (1997). In Table 1, the first column outlines Smith's criteria for "declining evaluation work." The second column lists the issues identified by Mabry that help to assess the potential ethical implications of the work, so that evaluators may accept or reject a contract on systematic grounds. All of the issues considered by Smith and Mabry are evident in our conversations with conflict zone evaluators, though the evaluators also cited more mundane and quotidian reasons, such as:

- The disinterest factor or lack of novelty in a contract, "when it seems boring."
- Timing or logistics: awkward timing relative to other commitments, or insufficient time frames for completion of the evaluation.
- The "irritation quotient": the degree to which the contract is likely to generate aggravation in excess of satisfaction. All interviewees identified specific evaluation clients with whom they refuse to work, specifically because of the bureaucratic and logistical hassles of doing so.
- Capacity or competence issues: doubts about the ability of the client or manager of an evaluation to provide the necessary inputs to complete an evaluation to a professional standard.

Two characteristics stand out in the above lists: (1) most items are not explicitly ethics-related and could be framed as, and addressed with, methodological, logistical, or political modalities, and (2) none are particular to conflict contexts; that is, they apply to conflict and nonconflict contexts alike.

What we begin to see in the comparison of decision making inside and outside of conflict zones is that it is not the content of the ethical challenges, but the context, that makes the difference—specifically, the way that context "hardwires" the impact of these decisions directly into peace or conflict structures and processes, with individual and societal consequences. Thus, many of the reasons cited for balking or walking in conflict contexts are embedded in, and animated by, the stresses and dynamics of conflict zones (in our sample, primarily militarized conflict zones). In addition to the reasons listed previously, there are unique reasons that unfold from conflict contexts. For example:

- When there are misgivings about the integrity of the evaluation client. One evaluator was asked back by a nongovernmental organization (NGO) to undertake a second evaluation of its project in the Palestinian Territories (Evaluator 7). But when the evaluator reviewed the version of her initial evaluation posted on the NGO's website, she realized that it had been manipulated to include a more politically contentious case study. While the pressure on evaluators to manipulate evaluation findings is ubiquitous, in this case the implications extend beyond the normal parameters of funding politics and institutional self-aggrandizement, into the overtly violent and political arena of one of the most conspicuous and protracted conflicts in international politics.

- Manipulation of power relationships. In another case, an evaluator was told by the country representative of a U.N. agency in an active conflict zone, "If I don't like your evaluation, we simply won't pay you" (Evaluator 1). While this might be framed as an issue of the integrity of the evaluation client, it is very much embedded in the power imbalances of client–evaluator relationships and the cash-transactional character of the profession. In this particular case, the evaluation was inextricably caught up in the competition among international agencies for U.N. member state funding. However, the ethical implications lay in the questions of whether the counterproductive programming of the U.N. agency in question should continue to be funded by bilateral U.N. member development agencies, and whether it should continue to be funded to monopolize programming in conflict-affected parts of the country. These were clearly people consequences, not paper consequences.

- When there is a fear that the evaluation will be used unethically to whitewash harmful policies or programs. As noted previously, three of the evaluators we spoke with stated that they have a personal policy of refusing any evaluation contracts in Afghanistan or Iraq, based on a belief that any work in these regions would inevitably be politicized and swallowed into the military "imperatives" of the major actors there (Evaluators 5, 9, and 10). Another evaluator turned down an evaluation contract from an oil company working in West Africa because of his concern that it would be misused by the company, given its poor track record in community relations, labor practices, and human rights. In this evaluator's assessment, working under contract with the company amounted to complicity in documented unethical behaviors (Evaluator 1).

Table 1. Ethics in Evaluation: Walking Versus Balking.

Smith (1998)

From a guild maintenance orientation(a), an evaluation contract should be turned down if:
1. the work is not going to be profitable to the evaluator;
2. the desired work is not possible at an acceptable level of quality;
3. the work is not evaluation;
4. the work is likely to harm the client or violate ethical principles(b);
5. it may harm fellow evaluators—e.g., by undermining their professional status or livelihood(b);
6. it undermines the interests, reputation, or professionalism of evaluation; or it antagonizes major market clients or could ultimately lead to a loss in evaluation's market share.

From a social betterment orientation, an evaluation contract should be turned down if:
1. the work would violate professionally accepted standards of practice;
2. it is not based on the latest scientific or scholarly knowledge about the evaluation process and the evaluand;
3. the evaluation work does not support the values of society and contribute to the social good.(b)

Mabry (1997), cited in Smith (1998)

Precontract issues to assess the ethical implications of an evaluation:
1. access to crucial or relevant data;
2. client openness to critique and commitment to use of findings;
3. sufficiency of evaluation resources (e.g., ability to complete to a professional standard);
4. assessment of whose interests are served through the evaluation;
5. conflict between program ethos and evaluator's values(b);
6. if an evaluator finds noncompliance with legal or regulatory requirements, will he or she report it and bear potential personal/professional consequences?(b)
7. will the evaluation feed into internal or external political conflicts or counterproductive partisanship?(b)
8. might the evaluation be used to intimidate stakeholders?(b)
9. might the evaluation be used to inflate or distort program quality?(b)
10. conflict of interest(b);
11. might the evaluation support the status quo or promote a particular agenda?(b)
12. will the evaluation divert funds from stakeholders in unjustifiable amounts?

Note. Adapted from Smith (1998) and Mabry (1997).
(a) By "guild maintenance," Smith is referring to decisions based on the maintenance of the "guild interests" and market interests of the profession of evaluation.
(b) Issues more overtly connected to ethical considerations.

- Static or overly rigid terms of reference inhibit the evaluator from asking necessary, but awkward, questions, in particular on justice, equity, or diversity issues and impacts. As one evaluator puts it, "as ethical evaluators, we need to raise uncomfortable questions in a professional way that opens up space for a conversation that motivates people to take action on issues of justice, equity, and diversity" (Evaluator 3). This was argued to apply to all evaluations, not just those with explicit justice, equity, and diversity objectives. The ethical implication that follows from this argument is that evaluators risk tacitly accepting (even endorsing) discriminatory or repressive structures and behaviors, particularly by state actors, if they do not use their privileged access in the service of justice, equity, and diversity issues.

- When the methodology of an evaluation predetermines the outcome/findings or instrumentally reinforces the self-serving objectives or strategies of the evaluation client. An evaluator refused to evaluate a U.N.-managed project on posttraumatic stress disorder (PTSD) because he felt the methodology was designed intentionally to endorse and justify the mainstreaming of the pilot project (Evaluator 5). The project being evaluated involved the exclusive use of anti-PTSD drugs manufactured by the multinational pharmaceutical company cofunding the project. The decision by the evaluator reflected a combination of ethical factors: a perceived conflict of interest within the project, concern that his findings would be manipulated through the imposition of an inappropriate methodology, and a deep concern that the dignity and well-being of the war-affected population would be violated.

The Dynamics of Ethical Tipping

Among the 10 evaluators interviewed for this article, a number of characteristics become clear about the dynamics underpinning decisions to balk or walk. The first is that balking is significantly more prevalent than walking—on a scale of about 20 to 1. Second, and not surprisingly, once a contract is accepted, personal and professional interests incentivize the continuation of the evaluation. Third, within the narratives of evaluators' experiences of working in conflict, it is possible to observe some of the psychological processes guiding ethical calculations, in particular the process of cognitive consistency. Cognitive consistency refers to the attempt to keep beliefs, feelings, actions, and cognitions mutually consistent—a kind of "systematic bias in favour of information consistent with information that we have already assimilated" (Jervis, as cited in Lebow, 1981, p. 104). But the pursuit of consistency becomes irrational when it closes our minds to new information or different points of view. It should be noted, though, that even irrational consistency may be useful in the short run because it facilitates quick decision making when required—a useful skill to have in the compressed decision-making space of conflict contexts. However, persistent denial of new information diminishes the ability to learn from our environment, reassess possibilities, and respond accordingly. In the context of evaluation practice, this refers to an inability to recognize the need to recalculate ethical assessments and make the necessary adjustments within an evaluation. The challenge for evaluators is to "strike a balance between persistence and continuity on the one hand and openness and flexibility on the other" (Lebow, 1981, p. 105).

The ethical tipping points of evaluators in conflict zones are contingent upon the process through which evaluators gather, manage, and assess environmental information, particularly information that challenges what they either expect to see or want to see regarding ethical propriety. The tipping process may be relatively slow and incremental or sharp and precipitous. Figure 3 sketches out the factors we have gleaned from our conversations and work with evaluators regarding the tensions around managing or balancing ethical issues in evaluation, and the considerations involved when an evaluator decides to balk, walk, or stick it out. These include evaluator capacities on one hand and the nature of the ethical issues on the other.

Conclusions: Positioning Evaluation Ethics in Conflict Zones in Research and Practice

What Is It About Ethical Calculations in Conflict Zones That Is Different From Nonconflict Zones?

1. Even small ethical miscalculations can escalate rapidly into large and potentially dangerous problems, whereas nonconflict zones are more forgiving of minor miscalculations.

2. Nonethical issues can quickly become ethical issues. For example, one evaluator interviewed noted that the choice of a translator on the grounds of technical competence may be cast as a preference for one ethnic group or conflict stakeholder over another (Evaluator 1).

3. Ethical dilemmas in nonconflict contexts rarely carry risks to physical security, including death.

4. The increased prevalence, volatility, and potential damage engendered by mishandled ethical issues.

5. You cannot be a 9-to-5 evaluator in conflict zones. The extreme nature of political and operational constraints in conflict zones means that evaluators need to be constantly tuned in to the work they are doing. This reality means that the separation between the personal and the professional becomes wafer thin when it comes to ethics and the consequences of miscalculation.

Figure 3. Balking, walking, or sticking it out? The tipping point.


A central premise underpinning this article is that conflict context affects our ability to plan, conduct, and use evaluations—methodologically, logistically, politically, and of course, ethically.20 The first section of this article suggests why and how this is the case. Our research is comparative in the sense that we are implicitly comparing and contrasting evaluation in conflict and nonconflict contexts. We have argued that because of the fluidity and volatility of conflict zones, ethical challenges are both more pervasive and potentially more dangerous compared to nonconflict environments. Further, the negative consequences of even minor miscalculations are amplified and potentially lethal.

Figures 4 and 5 illustrate an important reason why this may be so. In nonconflict contexts, it would appear that the evaluator is better able to address ethical issues as they arise, one at a time, in a compartmentalized fashion.21 In conflict contexts, however, this luxury disappears. As illustrated in Figure 5, in conflict zones an evaluator loses the scope of action to address issues one at a time. Rather, all ethical issues begin to meld into each other. A decision made to address one challenge—whether ethical, political, methodological, or logistical—has implications for another. The conflict context increases the pressure within which ethical decisions are made, for example, as a result of volatility, politicization, uncertainty, win–lose/zero-sum incentive structures, violently competitive interests, and so on.

Further, there may be a greater ethical fog in conflict zones, whereby decisions or issues are not clearly identifiable as being ethically significant, or decisions about methodology come to be imbued with ethical consequence (e.g., the omission of crucial stakeholders from an evaluation or overrepresentation of a stakeholder group, resulting in claims of exclusion22 or the misrepresentation of program effects).

Evaluation Actor Strategies for Dealing With Ethical Challenges in Conflict Zones

In this study, we have focused on evaluators. However, the ethical calculations for the evaluator on the ground may be markedly different from the ethical calculations of the evaluation manager or clients of the evaluation. One evaluator interviewed described the client of an evaluation in the immediate post-Barre period in Somalia (1991) as being "blissfully ignorant" of the dangers and risks confronting the evaluation team (Evaluator 10). The agency representative stood back and let the evaluation team deal with any political, logistical, or ethical issues as they arose. This raises questions, and perhaps suggests a need for further research, about the ethical responsibilities of clients in such circumstances. It is worth noting that a different interviewee in Northern Ireland explicitly did not want his client intruding at an operational level because it would have inhibited his team from "doing the job" (Evaluator 8). In this case, the evaluator noted that it was "necessary to break the rules in order to behave ethically," a comment that underscored his perception of the challenges evaluators can face when dealing with hyperbureaucratized clients who tend to be overly intrusive, rule-bound, and risk averse.

Figure 4. Ethical issues in a nonconflict zone.

Our experiences in working on evaluation in conflict zones (author number two as an evaluator; author number one as an evaluation client and manager) indicate that although there will always be instances in which evaluators will walk or balk (or clients will invoke contractual provisions to discontinue), there are underexplored opportunities for dealing with ethics challenges. In concluding, we offer three general strategies for assisting evaluators, managers, and clients of evaluation to sharpen their own "ethical compass."

Know your own "lines in the sand." Much of the evaluation literature and praxis discussion focuses on the role that values play in shaping evaluator experiences. In conflict zones, the adaptation of the Greek proverb "evaluator, know thyself" should be expanded to include "and know thy client." Similarly, "client, know thyself—and know thy evaluator." As noted in the comments made to us by evaluators, too often evaluators and their clients underestimate the difficulty and complexity of conflict zone operating environments. Conflict zones are often volatile and unpredictable places. Evaluators and clients need to ask uncomfortable questions such as: How will negative or explosive findings be managed, communicated, and most importantly, used? Are the time frame and budget for this evaluation realistic?23 Most importantly, clients and evaluators need to have a conversation about the potential negative consequences that an evaluation can have for the evaluand (or other stakeholders), and the political stakes involved in any evaluation taking place in a conflict zone. It all sounds like common sense, but it is striking how often these conversations fail to take place.

Figure 5. Ethical issues in a conflict zone.

Build knowledge, experience, and skills to mitigate or manage ethics dilemmas. "Knowing thyself" also surfaces the question of evaluator competencies. Evaluation in conflict zones requires a skill set that goes beyond the usual social science approaches and tools at the disposal of evaluators. In addition to the usual technical competencies of evaluators, they (or their team) need to possess:

- an appreciation of conflict zone–specific ethical challenges, such as the potential for doing harm to historically marginalized groups;
- political sensitivities, diplomacy, and conflict resolution skills;
- peace and conflict research skills (such as knowledge of peace and conflict analysis frameworks);
- anthropological, historical, and political sensibilities;
- in militarized zones, a technical knowledge of the structures, strategies, weapons, and behavior patterns of all armed actors;
- knowledge and appreciation of the intersection of the political and the ethnographic at local levels;
- cultural competence and cultural humility.

A number of evaluators also emphasized the importance of work experience in conflict zones. When they entered a conflict zone early in their careers, they were, as one evaluator puts it, "clueless" (Evaluator 10). Unwittingly, they "almost inevitably aligned with one faction or set of interests rather than others" (Evaluator 10). The danger here is not necessarily the threat to evaluator independence or the fact that evaluators are aligned with factions or that this may be a requirement for access to, and work in, particular areas. Rather, the danger lies in the blindness to the potential security or political repercussions that may follow initial actions, which may be further exacerbated by subsequent decisions or actions undertaken in a similar context-less fog. Most evaluators interviewed argued that there is a positive correlation between the level of experience and the capacity to assess the logistical, political, methodological, and ethical dimensions of an evaluation in a conflict context. This, however, clashes with research findings elsewhere, which found that ethical sensibility was not necessarily correlated with experience.24 One possible explanation is that conflict environments are less forgiving of ethical miscalculations, errors of judgment, or mistakes. The consequent lessons are sharper and more quickly delivered, allowing for lasting, if uncomfortable, learning. This apparent discrepancy in research findings calls for further inquiry.

As noted earlier, while ethical evaluation behavior cannot necessarily be learned, there is a curious lack of formal professional development spaces in which present or future evaluators can sharpen their own ethics compass by studying basic ethics principles and exploring their potential application to evaluation practice. The AEA's online training package for the Guiding Principles for Evaluators25 provides a first essential step in this direction.

Put in place mechanisms and processes to deal with emerging dilemmas. Morris notes that most of the opportunities for influencing ethical practice in evaluation come through prevention, most often at the contracting or entry stage of the evaluation (2008, pp. 195–205). But what other mechanisms and spaces exist for dealing with ethics dilemmas, particularly once the evaluation is fully underway?


For research in conflict zones, projects are largely vetted by ethics review boards26 of one form or another. While this process is not without its problems—lack of quality control and absence of monitoring and enforcement capacity, for example—it nonetheless represents a structural mechanism that formally undertakes an ethical assessment of all research proposals. It is a process undertaken by an entity (often university or government based) that technically possesses the authority to request changes to the project or to reject it altogether on ethical grounds. Evaluation consultation or advisory groups27 generally do not carry the same clout but can be a useful source of advice or "ethics sounding board" for evaluators in conflict zones. In conflict zones, an evaluation advisory group can offer support on myriad dimensions, including probing the appropriateness of the evaluation design and the robustness of ethics provisions; increasing access to diverse stakeholder groups; bolstering the legitimacy of external evaluators (VeLure Roholt & Baizerman, 2012, p. 77); and keeping the evaluators apprised of changing conditions on the ground.

As most evaluators know, troubleshooting ethics dilemmas once an evaluation is underway can be difficult and time consuming. The literature, our own experience, and the evaluators we spoke with suggest the following tactics and strategies for troubleshooting ethics problems as they crop up:

! One evaluator used the organizational ethics standards that appeared in his contract to buttresshis arguments with an evaluation manager.

! Another consulted her network of evaluator colleagues in order to ‘‘crowd source’’ potentialsolutions.

- Consider invoking the conflict resolution provision in a contract and using a trusted third party to negotiate a solution.

- Ensure the incorporation of sufficient evaluator–client "check-ins" in order to discuss emerging findings and unexpected ethical dilemmas. Often, it is the element of (disagreeable) surprise that can generate annoyance and irritation on the part of clients.

- Consider requesting a meta-evaluation by a recognized expert in cases of accusations of ethical malfeasance.

- Consider using an evaluation manager as an ally. Evaluation managers often consider themselves members of the evaluation profession; they sit between evaluators and the senior managers of their own organizations. This is not always a comfortable place to be, but it does equip them with a wide political perspective that evaluators can tap into.

We have endeavored to make the case that doing evaluation in conflict zones presents unique ethical challenges. Extreme context is infused with extreme ethical implications—more risks, greater risks, and greater consequences of all decisions and actions. Much work remains to be done in examining ethics dilemmas and in finding strategies to anticipate, avoid, or deal with them. We hope that this article has made a modest contribution to deepening research and strengthening professional reflection in what we consider to be an underaddressed area of practice.

Authors’ Note

This article draws on a number of sources including (1) the findings of a 4-year research project entitled Evaluating Research In and On Violently Divided Societies; (2) our experiences in coteaching a course on evaluation in conflict-affected settings for evaluators, students, researchers, and mid-career professionals working in development and humanitarian agencies, nongovernmental organizations (NGOs), and community-based organizations; (3) an ethics training workshop for evaluators that we developed and conducted at evaluation association meetings in Africa and South Asia; and (4) a series of structured conversations with 10 evaluators who have worked in conflict zones.


Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Notes

1. A literature search for a clear definition of ethics reveals a disjuncture between (1) academic articles claiming to explore ethical issues and (2) the clear articulation and consistent application of a definition. The definition used in this article derives from what we consider to be the most useful Internet-accessible dictionary. Ethics, Dictionary.com: http://dictionary.reference.com/browse/ethics?s=t (Accessed June 25, 2013).

2. For details on the Conference, see http://evaluationconclave2013.org/ (Accessed April 22, 2013).

3. Defined by Amo and Cousins "as the process of translating an abstract construct into concrete measures for the purpose of observing the construct," as cited in Patton, 2007, p. 100.

4. Understood as the "diverse, complex and dynamic nature of knowledge derived from practice" (Titchen & Ersser, 2001, as cited in VeLure Roholt & Baizerman, 2012, p. 122).

5. The term "stakeholders" includes evaluators, commissioners/clients, managers, and users of evaluations as well as the implementing agency and beneficiaries of the initiative being evaluated, that is, all those individuals and parties with a direct or indirect interest in the evaluation. These roles are not necessarily mutually exclusive.

6. For a challenging discussion of the potential scope of violence, see Scheper-Hughes and Bourgois (2004). Examples include mob violence (by Hindu extremists in India and in sectarian and recreational rioting in Northern Ireland); pogroms (1983 anti-Tamil riots in Sri Lanka); state-sanctioned intimidation (Mugabe's Zimbabwe African National Union—Patriotic Front attacks on white farmers and the Movement for Democratic Change); interparty violence (Kenya in December 2007–March 2008); structurally violent regimes that create and manipulate fear to control the civilian population (archetypically manifest in Apartheid South Africa or the Cono Sur dictatorships spanning the mid-1970s to the late 1980s); and genocidal violence (Cambodia under the Khmer Rouge and Sudan under Omar al-Bashir). The last two categories of examples illustrate the ways in which militarized and nonmilitarized forms of violence may be employed in tandem in the pursuit of absolute political, social, and economic control.

7. A review of internal U.N. reports (readily available on Wikileaks.com) illustrates many instances where large-scale interventions under U.N. auspices have inflicted profound harm on local populations and society through sexual exploitation, black marketeering, corruption, and so on. For a general review, see Polman (2004).

8. For a study of how child rights projects have, in some cases, increased the vulnerability of children in South Asia, see Zaveri (in press).

9. This section draws from earlier work which established the foundation for this article (Bush & Duggan, 2013).

10. A full discussion of each of these domains and their impact on evaluation in conflict zones is presented in the earlier work by the authors (Bush & Duggan, 2013).

11. This would include, for example, a Swedish International Development Cooperation Agency (SIDA) assessment of 34 evaluation reports entitled Are SIDA Evaluations Good Enough? (SIDA, 2008); the EuropeAid Cooperation Office protocol for "Quality Control in the Evaluation Reports" [sic], http://ec.europa.eu/europeaid/evaluation/methodology/guidelines/gui_qal_flr_en.htm#02; and a European Community Humanitarian Office Manual for the Evaluation of Humanitarian Aid (European Community Humanitarian Office, 1999).

12. http://www.eval.org/publications/guidingprinciples.asp

13. For a copy of the Standards, see http://www.evaluationcanada.ca/site.cgi?en:6:10 (Accessed September 2, 2013).

14. The use of role-play (where evaluators or students are required to adopt the personae of the different stakeholders in a given scenario) attempts to instill a sense of immediacy into the exercise. Indeed, it is even possible to throw "real-time" developments into a case in an effort to impose a sense of urgency or consequence into decisions made by participants. Nonetheless, a sense of "gameness" is omnipresent in this approach.

15. Desautels and Jacob (2012) offer an interesting blended approach that employs (1) the same analytical frame as Smith (1998), albeit relabeled as "corporatist" and "altruistic," and (2) a case study approach or what they call a "vignette design."

16. See Morris (2004, 2007, 2008, 2011); Morris and Clark (2013); and Fitzpatrick and Morris (1999). Other researchers contributing to this task include Newman and Brown (1996); Mabry (1997, 1999); Smith (1998); English (1997); Schwandt (1997); Desautels and Jacob (2012); Datta (2000, 2002, 2011); Grasso (2010); Whitmore (2001); Berends (2007); Birman (2007); Department for International Development (2011); Jayawickrama (2013); and Duggan (2012).

17. Morris specifically notes the need for increased research on the ethical perceptions of stakeholders with whom evaluators interact.

18. The textbook cited by Morris is Newman and Brown (1996). Morris's own contribution to the field is evident in his 2008 volume, which, although not a textbook per se, is nonetheless used as one and sets ethical challenges in a practical context through the use of case studies. Also noteworthy in this context is Church and Rogers (2006), especially Chapter 11: Ethics. From the broader perspective of research ethics, see the excellent volume by Mertens and Ginsberg (2009).

19. Evaluators 1, 5, 6, 7, and 10. Mechanisms included Internet searches, informal communication networks, the professional grapevine, and so on.

20. Elsewhere we have addressed the impact of conflict context on these four domains of evaluation in greater detail (Bush & Duggan, 2013).

21. Our (limited) research and practical experience suggest that this is the case. This hypothesis merits further inquiry.

22. Noted as a significant danger by VeLure Roholt in his evaluation of a museum exhibition on the legacy of The Troubles in Northern Ireland (VeLure Roholt & Baizerman, 2012, p. 84).

23. Refer to Hendricks and Bamberger (2010) for a more detailed discussion of the consequences of underfunding evaluations.

24. Research challenging a connection between levels of professional experience and ethical sensibilities includes Desautels and Jacob (2012); Morris and Cohn (1993); Morris and Jacobs (2000); and Newman and Brown (1996). Because the research cited here does not specify whether the evaluators worked inside or outside conflict contexts, it is likely that respondents were mixed, tending predominantly toward nonconflict contexts.

25. See http://www.eval.org/p/cm/ld/fid=105 (Accessed March 27, 2014).

26. Known as Institutional Review Boards in the United States.

27. Defined as " . . . a committee or group without governing authority or responsibility that is put together and managed by an evaluator . . . and is composed of individuals with expert evaluation knowledge and experience, and may also include those with expertise and/or experience in the problem or condition being evaluated, the program, service, or its host organization, and other relevant aspects of a particular evaluation . . . " Baizerman, Fin, and VeLure Roholt (2012, p. 2).

References

American Evaluation Association. (2004). Guiding principles for evaluators (revised). Retrieved from http://www.eval.org/Publications/

Australasian Evaluation Society. (1997). Australian evaluation society guidelines for the ethical conduct of evaluations. Retrieved August 2009 from http://www.aes.asn.au

Baizerman, L., Fin, A., & VeLure Roholt, R. (2012). From consilium to advice: A review of the evaluation and related literature on advisory structures and processes. New Directions for Evaluation, 136, 5–29.

Bamberger, M., Rugh, J., & Mabry, L. (2006). Real world evaluation: Working under budget, time, data, and political constraints. Thousand Oaks, CA: Sage.

Berends, L. (2007). Ethical decision-making in evaluation. Evaluation Journal of Australasia, 7, 40–45.

Birman, D. (2007). Sins of omission and commission—To proceed, decline, or alter? American Journal of Evaluation, 28, 79–85.

Bush, K., & Duggan, C. (2013). Evaluation in conflict zones: Methodological and ethical challenges. Journal of Peacebuilding and Development, 8, 5–25.

Church, C., & Rogers, M. (2006). Ethics. In Designing for results: Integrating monitoring & evaluation in conflict transformation programs (pp. 188–198). Washington, DC: Search for Common Ground. Retrieved from http://www.sfcg.org/Documents/manualpart1.pdf

Dahler-Larsen, P., & Schwandt, T. A. (2012). Political culture as context for evaluation. New Directions for Evaluation, 135, 75–87.

Datta, L. (2000). Seriously seeking fairness: Strategies for crafting non-partisan evaluations in a partisan world. American Journal of Evaluation, 21, 1–14.

Datta, L. (2002). The case of the uncertain bridge. American Journal of Evaluation, 23, 187–196.

Datta, L. (2011). Politics and evaluation: More than methodology. American Journal of Evaluation, 32, 273–294.

Department for International Development. (2011). Ethics principles for research and evaluation. Retrieved April 3, 2013, from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67483/dfid-ethics-prcpls-rsrch-eval.pdf

Desautels, G., & Jacob, S. (2012). The ethical sensitivity of evaluators: A qualitative study using a vignette design. Evaluation, 18, 437–450.

Duggan, C. (2012, October 4). Lines in the sand: Building an ethical approach to evaluation in contexts affected by violence and conflict. Paper presented at the 10th Biennial Conference of the European Evaluation Society, Helsinki, Finland.

English, B. (1997). Conducting ethical evaluations with disadvantaged and minority target groups. Evaluation Practice, 18, 49–54.

European Community Humanitarian Office. (1999). European community humanitarian office manual for the evaluation of humanitarian aid. Retrieved from http://www.cihc.org/members/resource_library_pdfs/8_Programming/8_2_Evaluation/echo-manual_for_he_Evaluation_of_Humanitarian_Aid-1999.pdf

Fitzpatrick, J. L. (2012). Commentary: Collaborative evaluation within the larger evaluation context. Evaluation and Program Planning, 35, 558–563.

Fitzpatrick, J. L., & Morris, M. (Eds.). (1999). Current and emerging ethical challenges in evaluation. New Directions for Evaluation (Vol. 82). San Francisco, CA: Jossey-Bass.

Grasso, P. G. (2010). Ethics and development evaluation: Introduction. American Journal of Evaluation, 31, 533–539.

Hendricks, M., & Bamberger, M. (2010). The ethical implications of under-funding development evaluations. American Journal of Evaluation, 31, 549–556.

Jayawickrama, J. (2013). 'If they can't do any good, they shouldn't come': Northern evaluators in southern realities. Journal of Peacebuilding and Development, 8, 26–41.

Julnes, G. (Ed.). (2012). Promoting valuation in the public interest: Informing policies for judging value in evaluation. New Directions for Evaluation, 133.

Lebow, R. N. (1981). Between peace and war: The nature of international crisis. Baltimore, MD: Johns Hopkins University Press.

Mabry, L. (1997). Ethical landmines in program evaluation. In R. E. Stake (Chair), Grounds for turning down a handsome evaluation contract. Symposium conducted at the meeting of the American Educational Research Association, Chicago, IL.

Mabry, L. (1999). Circumstantial ethics. American Journal of Evaluation, 20, 199–212.

Mertens, D. M., & Ginsberg, P. E. (Eds.). (2009). The handbook of social research ethics. Thousand Oaks, CA: Sage.

Morris, M. (2004). Not drinking the poison you name: Reflections on teaching ethics to evaluators in for-profit settings. Evaluation and Program Planning, 27, 365–369.

Morris, M. (2007). Foundation officers, evaluation, and ethical problems: A pilot investigation. Evaluation and Program Planning, 30, 410–415.

Morris, M. (Ed.). (2008). Evaluation ethics for best practice: Cases and commentaries. New York, NY: Guilford.

Morris, M. (2011). The good, the bad, and the evaluator: 25 years of AJE ethics. American Journal of Evaluation, 32, 134–151.

Morris, M., & Clark, B. (2013). You want me to do WHAT? Evaluators and the pressure to misrepresent findings. American Journal of Evaluation, 34, 57–70.

Morris, M., & Cohn, R. (1993). Program evaluators and ethical challenges: A national survey. Evaluation Review, 17, 621–642.

Morris, M., & Jacobs, L. (2000). You got a problem with that? Exploring evaluators' disagreements about ethics. Evaluation Review, 24, 384–406.

Newman, D. L., & Brown, R. (1996). Applied ethics for program evaluation. Thousand Oaks, CA: Sage.

Organization for Economic Cooperation and Development. (2012). Evaluating peacebuilding activities in settings of conflict and fragility—Improving learning for results. DAC Guidelines and Reference Series. Paris, France: OECD. doi:10.1787/9789264106802-en

Patton, M. Q. (2007). Process use as usefulism. New Directions for Evaluation, 116, 99–112.

Patton, M. Q. (2012). Contextual pragmatics of valuing. New Directions for Evaluation, 133, 97–108.

Polman, L. (2004). We did nothing: Why the truth does not always come out when the UN goes in. London, England: Penguin Books.

Scheper-Hughes, N., & Bourgois, P. (Eds.). (2004). Introduction. In Violence in war and peace: An anthology (pp. 1–32). Oxford, England: Blackwell Publishing.

Schwandt, T. A. (1997). The landscape of values in evaluation: Charted terrain and unexplored territory. In D. J. Rog & D. Fournier (Eds.), Progress and future directions in evaluation: Perspectives on theory, practice, and methods (pp. 25–39). San Francisco, CA: Jossey-Bass.

Smith, N. L. (1998). Professional reasons for declining an evaluation contract. American Journal of Evaluation, 19, 177–190.

Swedish International Development Cooperation Agency. (2008). Are SIDA evaluations good enough? An assessment of 34 evaluation reports. SIDA Studies in Evaluation 2008:1. Retrieved April 2013 from http://www.sida.se/Publications/Import/pdf/sv/Are-Sida-Evaluations-Good-Enough—An-Assessment-of-34-Evaluation-Reports.pdf

Turner, D. (2003). Evaluation ethics and quality: Results of a survey of Australasian Evaluation Society members. AES Ethics Committee. Retrieved from http://www.aes.asn.au/images/stories/files/About/Documents%20-%20ongoing/ethics_survey_summary.pdf

VeLure Roholt, R., & Baizerman, M. (2012). A model for evaluation advisory groups: Ethos, professional craft knowledge, practices and skills. New Directions for Evaluation, 136, 119–127.

Whitmore, E. (2001). To quit or not to quit . . . yet. American Journal of Evaluation, 22, 260–264.

Zaveri, S. (in press). Evaluation and vulnerable groups: Forgotten spaces. In K. Bush & C. Duggan (Eds.), Evaluation in the extreme: Research, impact and politics in violently divided societies. New Delhi: Oxford University Press.
