
“LEGAL AND ETHICAL IMPLICATIONS ARISING FROM THE DEVELOPMENT

AND DEPLOYMENT OF AUTONOMOUS WEAPON SYSTEMS”

Comdt WAYNE TYRRELL

LLB, BL

Submitted in part fulfilment of the requirements for the

MA (LMDS)

National University of Ireland Maynooth

2014

Supervisors: Dr Anne O’Brien, NUI Maynooth

Comdt Niall Verling, Military College


MA (LMDS)

STUDENT DECLARATION

1. I certify that this thesis does not incorporate, without acknowledgement, any

material previously submitted for a degree or diploma in any university; and that

to the best of my knowledge and belief it does not contain any material previously

published or written by another person, except where due reference is made in the

text.

2. Permission is given for the Military College Library and the NUI Library

Maynooth to lend or copy this dissertation upon request.

3. The views and conclusions expressed herein are those of the student author and do

not necessarily represent the views of the Command and Staff School or the

Military College.

SIGNED: ___________________ RANK: Commandant

NAME: Wayne Tyrrell


ACKNOWLEDGEMENTS

Dr Anne O’Brien and Comdt Niall Verling, for providing the road map to this fascinating

journey. My class, for all the laughs. My good pal Margaret Butterly, for pointing out my

errors, be they personal or grammatical. Colonel John Spierin, Lieutenant Colonel Jerry Lane

and Commander Pat Burke, for opening many doors - and guiding me through the right ones.

Commandant Conrad Barber, for his sagacity. Michel Bourbonniere, for the first golden ray of

intellectual sunshine. Mark Gubrud for his candour and fervour. Peter Redmond and Zulu for

recalling me and identifying the wood from the trees. Prof Heintschell Von Heinegg for his

Teutonic legal precision. Col Darren Stewart - I speak his language, just not as eloquently. Prof

Armin Krishnan, for his sincere generosity of mind. Prof Ronald Arkin, for his personal time

and a wealth of publications. Nathalie Wizeman, for the inside view. Prof Michael Lewis - we

have followed the same road, save that his was Route 66 and mine the back-roads of Ireland.

Above all, my wonderful wife, for her unwavering support, always.


ABSTRACT

Opinions surrounding the development and deployment of autonomous weapon systems are

polarised. Some see them as heralding an age of military conflict where machines, not men,

will decide on the taking of human life on the battlefield. Others believe that these

systems may provide a potentially reliable alternative to human blood lust in conflict. This

study evaluates the myriad opinions by traversing the prominent literature and

attempting to parse the sometimes frenetic nature of the discourse. In approaching the

subject from the legal and ethical perspectives, the principles of International Humanitarian

Law are addressed alongside issues such as artificial intelligence, increased propensity for

war and the relinquishment of decisions regarding the taking of human life to machines.

Utilising a modified hermeneutical approach, the research draws upon the thoughts of

some of the most eminent legal, scientific and military experts, with a view to eliciting a

deeper understanding of certain key areas. By revealing novel perspectives, this study

identifies four key findings surrounding supervision of autonomous weapon systems,

universal interpretation of International Humanitarian Law and both individual and state

responsibility. Ultimately, however, the analysis calls upon the reader to take a

conceptually different approach to understanding the future of autonomous weapon

systems, where the incremental control of development becomes the focus, rather than the

end state.

The findings and analysis of this research advance the wider knowledge of the

subject and are immediately relevant to the international discourse surrounding the work of

the United Nations Certain Conventional Weapons Committee. While the research is of

particular interest to legal academics in the area of International Humanitarian Law,

military commanders and scientists working with robotics, it has broad application to the

development of weapons generally and the overlap between law and ethics.


TABLE OF CONTENTS

STUDENT DECLARATION

ACKNOWLEDGEMENTS

ABSTRACT

TABLE OF CONTENTS

CHAPTER ONE - THESIS INTRODUCTION

CHAPTER TWO - LITERATURE REVIEW
    Introduction
    The law
        Military Necessity
        Proportionality
        Distinction
        Humanity
    The ethical argument
        Individual Accountability
        Resort to War
        Relinquishing the decision to kill a human to a non-human
        Developing Artificial Intelligence for Autonomous Weapons
    Conclusion

CHAPTER THREE - METHODOLOGY
    Introduction
    Epistemological Approach
    Methodological Approach
    Methodological Limitations

CHAPTER FOUR - RESEARCH FINDINGS
    Introduction
    Narrowing the Focus of the Research Analysis
    Precautions in Attack
    Proportionality
    Accountability for Autonomous Weapon Systems
        Individual Responsibility
        State Responsibility
    Analysis of the findings
    Conclusion

CHAPTER FIVE - CONCLUSIONS AND RECOMMENDATIONS
    Introduction
    Strengths and weaknesses
    Areas for further study
    Recommendations
    Conclusions

APPENDIX A - ACADEMIC BIOGRAPHIES OF INTERVIEWEES

APPENDIX B - LIST OF QUESTIONS FOR INTERVIEWEES

BIBLIOGRAPHY


“It has become appallingly obvious that our technology has exceeded our humanity. ...”

Albert Einstein1

1 “It has become appallingly obvious that our technology has exceeded our humanity. We scientists, whose tragic destiny it has been to help make the methods of annihilation ever more gruesome and more effective, must consider it our solemn and transcendent duty to do all in our power to prevent these weapons from being used for the brutal purpose for which they were intended”, Albert Einstein, quoted in the New York Times, 29 August 1948.

CHAPTER ONE – THESIS INTRODUCTION

Einstein offered this reflection on the development of nuclear weapons almost seven decades ago.

Now, more than ever, technologies deployed in the modern battlespace are developing at an

exponentially swift rate, civilians seem to die in greater proportion to combatants and the

horrors of warfare seem unchecked by international legal restraint.

For some time now, new weapons technologies have been in discrete development, which

can act autonomously, without direct human control or interface. Referred to by some as

‘Killer Robots’ (Krishnan, 2009), these systems have many names: Autonomous Weapon

Systems (Bernard, 2012: 8), Robotic Weapon Systems (US Department of Defence, 2012) or Lethal Autonomous Robotics (Heyns, 2012). For the purpose of this thesis they shall

be referred to as Autonomous Weapons Systems (AWS), which is arguably the least

divisive and most generic of terms and that favoured by the International Committee of the

Red Cross (Lawand, 2013: 1).

The difficulty in coming to an agreed name for these systems is also reflected in the failure to reach consensus on a definition to cover exactly what these technologies are. In their

broadest sense, these systems are essentially robots, which integrate weapons systems and

can operate, at some level, without human supervision (Human Rights Watch, 2012: 6).

The ICRC has defined these systems as weapons that are programmed to learn or adapt

their function in response to changing circumstances in the environment in which they are

deployed and are capable of searching for, identifying and applying lethal force to a target,

including a human target, without any human intervention or control. This definition

connotes a mobile system with some form of artificial intelligence, capable of operating in

a dynamic environment with no human control (Lawand, 2013: 2). Arguably, in their most

basic sense, such systems have been operational in a defence role for over three decades, an

example of these being the US Navy’s MK-15 Phalanx Close-in Weapons System,

designed to fire at incoming missiles or threatening aircraft. While these forms of

automated defence systems have been broadly accepted, it is the emergence of the

automated offensive systems that has given rise to concerns in many quarters, voiced most notably in the 2012 report by Human Rights Watch entitled ‘Losing Humanity’.

Kellenberger (2011: 26) recognises that while such systems have not yet been weaponised,

there is considerable interest and funding for research in this area. At present the US Navy

has successfully tested its X-47B aircraft, which is designed to take off and land

autonomously on an aircraft carrier (Lee, 2013), while the Army and Marines have

developed K-Max helicopters, which have already flown autonomously in Afghanistan to

deliver cargo between forward operating bases (Fein, 2013). Concurrently, the UK is

preparing to test fly an eight tonne supersonic semi-autonomous unmanned attack

warplane, known as Taranis (Muir, 2013). According to the ICRC, the deployment of AWS


will reflect a paradigm shift and a major qualitative change in the conduct of hostilities,

giving rise to a range of fundamental legal, ethical and societal issues (Kellenberger,

2011:27).

Such a paradigm shift in military technology has far-reaching legal and ethical

ramifications in respect of their development and deployment and the international legal

and defence communities are only now beginning to look seriously at these issues.

Accordingly, this research emerges at an important and useful juncture in terms of

reviewing the legal and ethical implications arising from the development and deployment

of AWS.

As a review of the literature will testify in Chapter II, the concerns expressed in relation to

AWS can be conceptually categorised into a framework of two overlapping areas, those of

a legal and ethical nature. Within each of these categories a number of identifiable concerns

emerge in the literature, broadly based on the principles of International Humanitarian Law

and ethical norms. Consequently, within this conceptual framework, we will see legal

questions arising in relation to Humanity, Military Necessity, Proportionality and

Distinction. This study will also consider the literature on ethical questions relating to

divesting the decision to kill a human to a machine, individual accountability for such

decisions, an increased inclination to wage war in the absence of the need to deploy human

soldiers and whether Artificial Intelligence can reproduce ethical decision making. The

various perspectives evaluated in the review of the literature enabled the distillation of

specific questions for the substantive research, where the views of subject area experts were

elicited to develop a deeper understanding of the subject.


Thus, against this bi-spacium conceptual framework, the methodology for approaching this

study of AWS will be outlined in Chapter III. Here, the derivative questions will be formed

with a view to interviewing leading experts with backgrounds in International

Humanitarian Law, robotics, and the military, before setting out and analysing the findings,

which emerged from the discussions. Supervision and accountability emerge as central

themes in the findings, pointing towards a need to approach the subject in a conceptually

different way to that evident in the existing literature. The conclusion of this thesis will take

the findings emergent in Chapter IV and relate them to the current discussion surrounding

autonomous weapons, thus generating recommendations to advance understanding of this

subject area.


CHAPTER TWO – LITERATURE REVIEW

Introduction

While autonomous weapons are a recent evolution in war, like so many technological

developments, the rate of advancement is such as to entail the likelihood of mass

deployment in the near future (Human Rights Watch 2012: 9). Consequently, any failure to

understand the implications of their emergence and their future use will result in their

deployment with little chance of their removal (Heyns 2013:21). With the underlying

rationale of this study being the enhanced understanding of the legal and ethical issues

surrounding AWS, this chapter aims to set out the various perspectives from all sides of the

debate.

Over the past five years these emerging technologies have prompted an ever increasing

body of literature from academics, governmental and non-governmental agencies,

international bodies and armed forces. This barrage of literature, from a diverse array of

perspectives, seemingly focuses attention on three areas of difficulty for autonomous

weapons, that of law, ethics and technology. The technological impediments are slowly

being resolved with the passage of time and development, but attention has only recently

turned to the ethical and legal considerations, which are often conflated in the discussion

and rarely are they evaluated within their distinct paradigms.


This review of literature seeks to parse this ostensibly frenetic discourse to establish a

distinct and non-conflated legal and ethical framework, against which we can compare

conflicting perspectives on AWS. The challenge, and indeed the significance, of this

approach is its attempt to avoid an inclination towards partisan affiliation by presenting

opposing views in the context of their relevant legal or ethical paradigm.

In establishing breadth for this review, a multitude of themes emerged from the

interdisciplinary literature. The principal arguments surrounding autonomous weapons

systems fall within two overlapping areas, that of law and ethics. As the conceptual

framework for evaluating this study is based on both a legal and ethical paradigm, we will

first examine the specifically legal arguments by initially identifying the area of law from

which they emanate.

The Law

The legality of AWS falls to be considered against the paradigm of International Humanitarian Law.2 The legality of any new weapon is assessed not just against the

established laws in the relevant treaties, but more importantly against the overarching legal

principles (Stewart 2011: 283). Thus, in circumstances where no specific legal provisions

are made for weapons, the ‘Martens Clause’,3 established at the Hague Peace Conferences of 1899, applies.

2 This body of law, alternatively referred to as the Law of Armed Conflict (LOAC) or the Law of War, is chiefly concerned with the public international legal treaties of the Geneva and Hague Conventions (Solis 2010: 22).

3 “Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.”

Having examined the literature in respect of this subject, it readily

emerges that there are no existing express prohibitions under International Humanitarian

Law banning the use of autonomous weapons.4 Accordingly, it is to the four legal

principles of LOAC that we turn in order to evaluate the legality of autonomous weapons:

1. military necessity,

2. proportionality,

3. distinction,

4. humanity.

Military Necessity

Military necessity requires that only objectives which give a distinct military advantage

should be targeted.5 Human Rights Watch (2012: 34), in their report on autonomous

weapons, express the view that military necessity, like proportionality, requires a subjective

analysis of situations. It allows military forces, in planning military actions, to take into

account the practical requirements of a military situation at any given moment and the

imperatives of winning, but those factors are limited by the requirement of humanity. The

report describes military necessity as a context-dependent, value-based judgment of a

commander and gives the example where an enemy soldier has become hors de combat.

The report opines that a fully autonomous robot sentry would find it difficult to determine

whether an intruder it shot once was merely knocked to the ground by the blast, feigning an injury, slightly wounded but able to be detained with quick action, or wounded seriously enough to no longer pose a threat. Human Rights Watch (2012: 34-35) assert that it might therefore unnecessarily shoot the individual a second time.

4 The lack of express prohibition of autonomous weapons systems was confirmed by Prof. Armin Krishnan, et al, during the research.

5 Article 52 of Additional Protocol I to the Geneva Conventions 1977 provides a widely-accepted definition of military objective: “In so far as objects are concerned, military objectives are limited to those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage”.

Schmitt and Thurnher (2013: 258) believe that this argument is a mischaracterisation of

military necessity. They readily accept that the shooting of a soldier who has surrendered or is hors de combat is illegal. However, they differentiate between a combatant and a soldier

who is no longer a combatant, asserting that this is a matter which falls to be determined

under the principle of Distinction. Their assertion is that Military Necessity is determined

by assessing whether the targeting of an objective results in a military advantage and

clearly the shooting of a soldier confers such an advantage (2013: 258-259). A portentous

argument is proffered by Krishnan (2009: 91-92) where he predicts that autonomous

weapons may become superior fighters to humans in conflict and it will therefore become a

military necessity to use them. While it might be argued that this could have a humanising

effect, he argues that the resultant relaxation of the restraints on warfare could have

disastrous consequences.

Proportionality

Proportionality is the requirement to balance military advantage against collateral damage.6

6 Collateral damage is the unintended injury or death to civilian persons or property.

In their recent report, Human Rights Watch (2012: 32) also evaluated autonomous weapons in terms of this principle. They view the requirement that an attack be proportionate as one of the most complex rules of International Humanitarian Law. Determining the

proportionality of a military operation depends heavily on context and accordingly they

believe that the decision requires human judgment that a fully autonomous weapon would

not have. They recognise that a legally compliant response in one situation could change

considerably by slightly altering the facts. Heyns (2013: 13) also recognises that the value

of a target, which determines the level of permissible collateral damage, is constantly

changing and depends on the moment in the conflict.

Echoing the views of Liu (2012: 643), Human Rights Watch (2012: 32) consider it highly

unlikely that a robot could be pre-programmed to handle the infinite number of scenarios it

might face, so it would have to interpret a situation in real time. Even proponents of

autonomous weapons recognise the challenging contextual determinations made by

commanders in this regard (Schmitt and Thurnher, 2013: 255).

However, those favouring the development of autonomous weapons believe that advanced

algorithms, incorporating adjustable values with low base thresholds for collateral damage,

alterable for evolving battlefield situations, could provide a reasonable basis for

autonomous proportionality assessments. Notwithstanding this, Schmitt and Thurnher’s

noteworthy acceptance of the requirement for humans to make the actual proportionality

decision is echoed by the US Department of Defence’s policy (2012: 3) on autonomous

weapons. Other proponents of autonomous weapons argue that computerised weapons

could more quickly and precisely calculate blast and other weapon effects that cause


collateral damage, while such calculations would be far too complex for a soldier to make

in real time (Krishnan 2009: 92; Guetlein 2005: 5).
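To make the shape of this argument concrete, a minimal sketch of the adjustable-threshold comparison that proponents describe is set out below. It is purely illustrative: the function, its inputs and its weighting scheme are hypothetical inventions for the purposes of this discussion, not a description of any existing or proposed system.

```python
# A toy rendering of the proponents' claim that a proportionality check
# could be expressed as an adjustable-threshold rule with a low base
# threshold for collateral damage. All names and values are hypothetical.
def proportionality_check(expected_collateral_harm: float,
                          anticipated_military_advantage: float,
                          base_threshold: float = 0.1,
                          advantage_weight: float = 1.0) -> bool:
    """Return True only if expected incidental harm is not excessive in
    relation to the anticipated concrete and direct military advantage,
    under thresholds a human commander sets for the evolving situation."""
    permissible_harm = base_threshold + advantage_weight * anticipated_military_advantage
    return expected_collateral_harm <= permissible_harm

# The human, not the machine, sets and adjusts the thresholds - echoing
# Schmitt and Thurnher's acceptance that a human makes the actual
# proportionality decision.
if not proportionality_check(expected_collateral_harm=0.4,
                             anticipated_military_advantage=0.2):
    print("Attack must be cancelled or suspended")  # cf. Art. 57, AP I
```

Even in this toy form, the contested question is visible: the numbers fed into such a rule are themselves context-dependent, value-based judgments of the kind Human Rights Watch argues cannot be pre-programmed.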

Distinction

Distinction is the requirement to distinguish between combatants who can be killed and

non-combatants who are protected.7 It is this principle that appears to generate the greatest

degree of discussion, perhaps because it is approaching the most controversial of decisions

in International Humanitarian Law, that of determining who can be killed. Human Rights

Watch (2012: 30) believe that this issue poses one of the greatest obstacles to fully

autonomous weapons complying with International Humanitarian Law, as they would not

have the ability to sense or interpret the difference between soldiers and civilians,

especially in contemporary combat environments. Indeed, the increasing prevalence of

irregular warfare exacerbates the difficulty as the principle of distinction becomes very hard

to observe where an enemy blends in with the civilian population (Krishnan 2009: 94).

Similarly, the ICRC and others like Asaro (2012: 697) accept that a robot could be

programmed to behave more ethically and far more cautiously on the battlefield than a

human being. However, Kellenberger8 (2011: 5) believes that the ability to discriminate

between civilians and combatants will be the central challenge in developing autonomous

weapons. The often quoted example of the shooting down of civilian Iran Air Flight 655 by the USS Vincennes in 1988, resulting in the deaths of all 290 on board, serves as a portentous reminder of the capacity for autonomous systems to make errors of distinction (Singer 2009: 40; Liu 2012: 641).

7 Article 51.3 of Protocol I to the Geneva Conventions explains that “Civilians shall enjoy the protection afforded by this section, unless and for such time as they take a direct part in hostilities.”

8 President of the International Committee of the Red Cross from January 2000 to July 2012.

It has been recognised by many, including Singer (2009: 398) and Heyns (2013: 13), that

humans are not necessarily superior to machines in their ability to distinguish and in some

cases the powerful sensors and processing powers of autonomous weapons could

potentially lift the “fog of war” for human soldiers and prevent the kinds of mistakes that

often lead to atrocities during armed conflict, and thus save lives. Schmitt and Thurnher

(2013: 253) state that these views fail to recognise that the inability to distinguish between

combatants and non-combatants is only a difficulty if autonomous weapons are deployed

into an area where such distinction is necessary. If deployed into an area in which there are

only combatants, the requirement becomes otiose and as such these weapons cannot be

illegal per se, based on this argument. This view was also alluded to by the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions (Heyns 2013: 13).

Humanity

Humanity, as a principle of International Humanitarian Law, prohibits the use of force that

causes unnecessary suffering to those taking part in hostilities and historically has been the

basis for banning certain weapons9 (Gillespie 2011: 24, 50). However, in themselves,

autonomous weapons are not of a nature to cause unnecessary suffering (Liu 2012: 641)

and the argument against the humanity of these weapons tends to focus on the inability of robots to demonstrate the human emotions of compassion and empathy (Human Rights Watch 2012: 4).

9 Previously banned weapons include dum-dum bullets, poisonous gas, undetectable fragments and laser blinding systems.

Schmitt (2013: 13) disagrees and raises the argument that relying on this

factor is empirically suspect, such that while emotions can restrain humans, it is equally

true that emotions can unleash the basest of instincts. He cites atrocities such as Rwanda,

the Balkans, Darfur and Afghanistan, as examples of unchecked emotions leading to

horrendous suffering; and while there were underlying political and ethnic foundations for

these conflicts, he argues that autonomous weapons would not comport themselves in like

manner.

While the legal principle of humanity under International Humanitarian Law is concerned

with prohibiting unnecessary suffering, the preponderance of discourse in relation to

humanity and autonomous weapons centres on ethical issues, rather than strictly legal ones.

Accordingly, it is to the study of ethics that we must turn to adequately assess humanity in

terms of these emerging technologies.

The ethical argument

In a recent interview, the preeminent writer on autonomous weapons, Peter Singer, was

asked whether robots were able to behave ethically. In reply he stated that:

[W]e want an easy yes or no answer, or in robotic terms, a zero or one framing of

the issues. And yet I think that actually illustrates exactly why we will not see the

ethical problems solved by robotics. At the end of the day, neither war nor ethics is

a realm of just zeros and ones, even with the most sophisticated robotics. (Singer,

2012: 478)


The complexity of an ethical evaluation of autonomous weapons is evident from the

literature, which reflects this lack of definition. In his appraisal of autonomous weapons

mentioned above, Thurnher (2012: 81-82) concludes that the law would permit the use of

autonomous weapons systems under most circumstances, but he also recognises many

arguments against their use, albeit ethical in nature rather than legal. He holds that in order

to satisfy accountability, an identifiable individual must be answerable for the taking of a

human life and only a human can, therefore, make life and death decisions, not robots. He

is not alone in identifying ethical concerns with autonomous weapons; Anderson and

Waxman (2013: 14-18) also raise concerns in respect of individual accountability and

furthermore, they assert that in reducing the risk to soldiers and civilians it will be easier to

resort to war. We shall also consider the concept of a computer based ethical governor to

establish a form of algorithmic control of robot morality. This is often cited as a basis to

justify the use of war robots and to counter difficult moral questions such as relinquishing

the decision to kill to a non-human (Matthias 2011:279).

Individual Accountability

Within the military there are many layers of delegated authority, from the commander-in-

chief down to the private, but at each layer there is a responsible human to bear both the

authority and responsibility for the use of force.10 While a commander may delegate this obligation to another responsible human agent, he or she then assumes a duty to oversee the conduct of that subordinate agent (Asaro 2012: 701).

10 The doctrine of command responsibility, as set out in the Rome Statute establishing the International Criminal Court and the jurisprudence of the International Criminal Tribunal for the former Yugoslavia (ICTY), does not permit commanders to abdicate their moral and legal obligations to determine whether the use of force is appropriate in a given situation.

Asaro (2012) raises the concern that autonomous weapons may not have an identifiable

operator, in the sense that no human individual could be held responsible for the weapon’s

actions in a given situation. Consequently, such systems might prevent the possibility of

establishing any individual criminal responsibility that requires moral agency and a finding

of mens rea. He further cites a difficulty with holding a commander responsible for the

atrocities committed by an autonomous weapon, thus shielding the human commander

from what might have otherwise been considered a war crime. The concerns relating to the

inability to hold anyone responsible for the acts committed by autonomous weapons are

echoed by many (Sparrow 2007: 73, Sharkey 2013: 791).

Some more nuanced responsibility issues are further developed by Krishnan (2009: 129),

where he refers to diffusion of responsibility, thereby diminishing individual responsibility.

This relates to circumstances where the decision to kill comprises numerous small decisions taken by many individuals, none of which entails an immoral act in itself, but taken

as a whole they may have an immoral outcome. Each individual in the decision chain may

not be morally or legally culpable for a wrongful death and consequently no human is

ultimately responsible. He refers to a hypothetical example cited by Atkinson (2007), who

is a Senior Research Scientist in the Institute for Human and Machine Cognition, where he

considers the case of a modern cruise missile, involving a person in the chain of command

who makes a decision to launch the weapon at a primary target. The ethical responsibility

lies with that person. Such weapons have the capability to be retargeted automatically by

systems on-board the missile. For example, on arrival in the target area the missile does not

detect the artillery gun that was its primary target. In such circumstances the missile is pre-

programmed to loiter in the area and look for another valid target. If the missile were then


to detect a tank, it would proceed to destroy the tank and not its initial target of an artillery

gun. (This technology, he stated in 2007, exists and may already be deployed.) He goes on

to questions who is ethically responsibility for the decision to kill the people in the tank?

The person who originally launched the missile, but has no idea of what it actually

attacked? The programmers of the “search and destroy” automation on-board the missile?

The military program manager who decided to develop and deploy such systems? He

concludes that it is very easy to see how the responsibility for the decision to kill, in

particular, can be blurred by the use of AWS. Furthermore, he poses the question: by taking away clear responsibility, does it become easier to kill?
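Atkinson’s hypothetical can be restated as a simple decision procedure, which makes visible how each branch of the logic traces back to a different human. The sketch below is a toy reconstruction for illustration only; the function name, target categories and control flow are assumptions of this study, not the control logic of any actual weapon.

```python
# A toy reconstruction of Atkinson's (2007) cruise missile hypothetical,
# annotated with the human who stands behind each decision point.
def select_target(detected_objects):
    primary = "artillery_gun"         # chosen by the person who launched
    valid_alternatives = {"tank"}     # encoded in advance by the programmers
    if primary in detected_objects:
        return primary                # outcome traceable to the launcher
    # Primary target not detected: loiter and search, as pre-programmed
    # (a design decision by the programmers and the program manager).
    for obj in detected_objects:
        if obj in valid_alternatives:
            return obj                # outcome traceable to... whom?
    return None                       # keep loitering; no engagement

# The launcher fired at an artillery gun; finding none, the procedure
# selects a tank instead - the outcome Atkinson uses to ask who is
# ethically responsible for the deaths of the tank crew.
print(select_target(["truck", "tank"]))  # -> "tank"
```

Laid out this way, the diffusion of responsibility is structural: no single line of the procedure corresponds to a single human’s decision to kill the occupants of the tank.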

Anderson and Waxman (2013: 16) also consider the various individuals that might be held

responsible for the actions of autonomous weapons, from the soldier deploying the device,

to the commander ordering its use and ultimately the engineer or designer who

programmed the system in the first place. They point to the recent US Department of

Defense policy directive (2013: 17) stating that it is innovative in its insistence upon

training human soldiers in the proper operation of systems, thus strengthening human

accountability as autonomous weapons are brought online.

Stewart (2011: 291) makes the point that while an individual may not be held responsible,

State responsibility exists concurrently and can be invoked by those seeking redress under

Human Rights Law mechanisms such as the European Court of Human Rights or the

United Nations Human Rights Council. This perspective brings into focus the responsibility

of States, which primarily exercise the decision to engage in war, and an argument arises


whereby States may feel less restrained from engaging in conflict where they have the

ability to deploy autonomous weapons.

Resort to War

Peter Singer (2009: 47) expresses a widely held concern that robotics will lower the

threshold for violence. He believes that these technologies promise less harm to both

service personnel and civilians, and consequently, politicians will be likely to resort to them

sooner.

Others believe that the reduced risk to combatants operating these systems, due to their

distance from the battlefield, will tend to reduce the political costs and the risks of going to

war (Asaro 2012: 692, Alston 2010: 20). Human Rights Watch (2012: 39, 41) have also

registered their concerns about the increased likelihood of war, as a result of autonomous

weapons, and they draw attention to the advent of the drone which has allowed the United

States to conduct military operations in Afghanistan, Pakistan, Yemen, Libya, and

elsewhere, without fear of casualties to its own personnel. These concerns even resonate in

the expressions of the UK Ministry of Defence (2011: 517), where they accept that it would

be unlikely that a similar scale of force would be used in these theatres if an unmanned

capability were not available.

However, proponents balance this concern against the moral responsibility on every

commander to reduce loss of life to both service personnel and civilians, even where this


involves the ethically problematic step of relinquishing the decision to kill to a non-

human.

Relinquishing the decision to kill a human to a non-human

This ethical argument emanates from the view that a machine, no matter how good, cannot

completely replace the presence of a true moral agent in the form of a human possessed of

emotion, a conscience and the faculty of moral judgment.

An often quoted ethical argument against autonomous weapons stems from their lack of

empathy and their inability to show mercy. As autonomous weapons have no ability to

understand human suffering they could inflict the worst suffering on humans without being

emotionally affected (Krishnan 2009: 133). The UK Ministry of Defence policy (2011:

520) reflects these concerns and notes that to a robot, a school bus and a tank are the same –

merely algorithms in a programme – and the engagement of a target is a singular action; it has no sense of ends, ways and means, and no need to know why it is engaging a target.

This has resulted in the UK Ministry of Defence (2011: 521) expressing concern that time

may be running out before adequate consideration has been given to relinquishing the

decision to kill to autonomous systems.


Johnson and Axinn (2013: 135) also raise a very important issue insofar as human soldiers

have an unusual responsibility to disobey illegal orders.11 To do this they must know what illegal orders are, and have the courage to disobey them. They must be able to entertain

inconsistent goals: to follow orders, and yet to disobey when those orders are illegal. This

requires a judgement based on the moral questions of a specific situation. Without their

own values, robots have no basis for making such a judgement.

Anderson and Waxman (2013: 16) recognise that this is a difficult argument to address.

They point to the future advances in technology where humans will turn over even more

functions with life or death implications to machines, such as driverless cars or automatic

robot surgery technologies, not simply because they are more convenient but because they

prove to be safer. They believe that our basic notions about machine and human decision-

making will evolve, bringing about a world which will accept self-driving autonomous cars

and will be likely to expect those technologies to be applied to weapons and the battlefield,

precisely because it regards them as better. Essentially they postulate a gradual evolution of

ethical norms, which will come to accept machine involvement in many life and death

decisions of the future.

11 The requirement to disobey unlawful orders was established in the Nuremberg Trials in the wake of WWII and subsequently codified by the International Law Commission of the United Nations in the Nuremberg Principles, which were a set of guidelines for determining what constitutes a war crime. Nuremberg Principle V provides that it is not an acceptable excuse to say ‘I was just following my superior’s orders’. This requirement is now set out in Article 33 of the Rome Statute establishing the International Criminal Court.

Krishnan (2009: 125) also brings an element of balance to the argument against

autonomous weapons where he notes that human soldiers commit war crimes for various

reasons, such as the tendency to seek revenge for friendly losses, weak leadership,

dehumanisation of the enemy, poorly trained troops, no clearly defined enemy and unclear

orders, all of which are irrelevant for military robots. This presents an interesting point,

whereby AWS, arguably, could be programmed to conduct themselves in a more ethically

reliable manner than humans. Were this possible, many of the objections to AWS would

evaporate (Kellenberger, 2011). Any such ability, however, is contingent on the

technological ability to develop artificial intelligence to enable a computer to act ethically

in complex situations; and there is no general acceptance that artificial intelligence can be

developed to replace human cognitive abilities.

Developing Artificial Intelligence for Autonomous Weapons

Krishnan (2009: 46), amongst others, recognises that without artificial intelligence,

autonomous weapons will remain rather primitive, giving relatively little military utility.

For this reason, the development of artificial intelligence will very much determine the

future of autonomous weapons. Amidst the emerging debate there is a school of thought

which envisages developing artificial intelligence to design effective perceptual algorithms of superior target discrimination capability, constrained by an ‘ethical governor’, in a manner consistent with International Humanitarian Law and Rules of Engagement (Arkin 2010: 339). This

argument is predicated on the basis that although unmanned systems cannot be perfectly

ethical in the battlefield, they can perform more humanely than human soldiers (Stewart

2011: 282).


Arkin’s (2009: 1-2) ethical governor12 is a constraint-driven system that, on the basis of

pre-programmed ethical logic, tries to evaluate an action, which has in a previous step been

proposed by the tactical reasoning subsystems of the machine. In essence, by satisfying

various sets of constraints, such as forbidden actions, behavioural obligations and so on, the

system thus ensures the proportionality of a military response.
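Read as an architecture, this description amounts to a constraint-satisfaction filter applied to actions proposed elsewhere in the system. The following is a minimal sketch of that structure as this study understands it, using invented predicate names; it illustrates the published concept only and is not Arkin’s actual implementation.

```python
# Minimal sketch of a constraint-driven 'ethical governor': a proposed
# lethal action passes only if no forbidden-action constraint fires and
# every behavioural obligation is satisfied. All names are invented.
class ProposedAction:
    def __init__(self, target_is_civilian, target_hors_de_combat,
                 within_roe, expected_collateral, permitted_collateral):
        self.target_is_civilian = target_is_civilian
        self.target_hors_de_combat = target_hors_de_combat
        self.within_roe = within_roe
        self.expected_collateral = expected_collateral
        self.permitted_collateral = permitted_collateral

FORBIDDEN = [                       # constraints whose truth bars the action
    lambda a: a.target_is_civilian,
    lambda a: a.target_hors_de_combat,
]
OBLIGATIONS = [                     # constraints that must all hold
    lambda a: a.within_roe,
    lambda a: a.expected_collateral <= a.permitted_collateral,
]

def ethical_governor(action):
    """Suppress a system-generated lethal action unless it satisfies
    every encoded constraint; only permissible actions pass through."""
    if any(rule(action) for rule in FORBIDDEN):
        return None                 # forbidden action suppressed
    if not all(rule(action) for rule in OBLIGATIONS):
        return None                 # obligation unmet; action suppressed
    return action                   # permissible under the encoded logic
```

The salient design choice is that the governor sits between proposal and execution: the tactical subsystems may generate any action, but nothing lethal proceeds until the constraint check is satisfied.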

Not all writers on this subject are entirely convinced about the ability of artificial

intelligence to replace the human capacity to make ethical decisions. Sparrow (2007: 66)

declares himself as remaining agnostic on the question of the extent to which existing or

future systems can truly be said to be autonomous. He makes the observation that

autonomy and moral responsibility go hand in hand. He notes that if we develop artificial

intelligence to establish an agent as autonomous, we accept that their actions originate in

them and reflect their ends. Furthermore, in a fully autonomous agent, these ends are ends

that they have themselves, in some sense, chosen. Where an agent acts autonomously, it is

not possible to hold anyone else responsible for its actions, insofar as the agent’s actions

were its own and stemmed from its own ends, and therefore others cannot be held

responsible for them. Conversely, if we hold anyone else responsible for the actions of an

agent, we must hold that, in relation to those acts at least, they were not autonomous

(Sparrow 2007: 65).

12 Arkin describes his ‘ethical governor’ as “a transformer/suppressor of system-generated lethal action to ensure that it constitutes an ethically permissible action, either non-lethal or obligated ethical lethal force [based on] ... predicate and deontic logic” (Arkin, et al, 2009).

The UK Ministry of Defence (2011: 623) believes that the development of artificial

intelligence is uncertain and unlikely before the next two epochs, although they accept that

any breakthrough would have significant and immediate implications for autonomous

weapons. Others take a more definitive view in arguing against the use of artificial

intelligence to attain a level equal to or beyond the human capacity to determine ethical

behaviour. Krishnan (2009: 58) evaluates the evolution of artificial intelligence from its

inception in the 1970s and deduces that the modern neural networks and genetic algorithms

have the ability to evolve by themselves and to build up complexity by themselves,

ultimately learning by themselves. He believes that this bottom-up approach to artificial

intelligence applied to robotics would permit machines to advance towards full autonomy,

but this could lead to the creation of machines that can develop behaviours that we did not

anticipate and that we might not fully understand. This conclusion is also adopted by

Human Rights Watch (2012: 29) and contributes to their view that human oversight is

necessary to ensure adequate protection of civilians in armed conflict.

Conclusion

This literature review began by recognising the conflated and frenetic nature of the

discussion on autonomous weapons. Its objective was to set out the arguments as to

whether autonomous weapons are both lawful and ethical against a coherent paradigmatic

framework. This was approached by reviewing the various perspectives against the legal

principles of humanity, military necessity, proportionality and distinction, while the ethical

considerations of individual responsibility, authority to take a life and the replacement of

human ethics with artificial intelligence were addressed in their own right. In


reviewing the relevant literature a number of distinct arguments appear and while there is

overlap, it is possible to disaggregate them into distinctly legal and ethical areas.

Thus, we see opponents of autonomous weapons argue that robots might not be able to

determine whether an injured soldier still poses a threat and might then shoot him a second time without there being any military necessity, while proponents would state that humans are

equally susceptible to such errors. In terms of proportionality, opponents believe it is highly

unlikely that a robot could be pre-programmed to handle the infinite number of scenarios it

might face in real time. Proponents, on the other hand, believe that with advancing

technology, robots could calculate blast and other weapon effects that cause collateral

damage more quickly than human soldiers. Distinguishing between combatants and non-combatants is very challenging for autonomous weapons, but equally so for humans faced with making the same determination in modern conflicts. Unlike certain

prohibited weapons, there seems little doubt that autonomous weapons do not cause

unnecessary suffering in their own right and are unlikely to wantonly commit atrocities and

war crimes; however, critics cite their inability to show empathy and compassion as lacking

in humanity.

The evaluation of humanity led to our ethical considerations discussing the need for

individual accountability, where proponents argue that someone or some State will always

be responsible for the deployment of these weapons. We also noted that some consider the

relinquishment of the decision to kill to a non-human as an ethical step too far in terms of

the ever increasing automation of many life altering events. Finally, we noted conflicting


views as to whether artificial intelligence could be developed to allay the ethical concerns

surrounding autonomous weapons.

The foregoing arguments establish a framework, against which the subject can be analysed

in greater depth with a panel of experts. The methodological approach, set out in Chapter

III, involved the formulation of distinct rhetorical questions from the individual component

arguments, which were submitted to the group of experts as a guide to the interviews.

Ultimately, the objective of this study was to probe more deeply into the arguments

outlined above, with a view to eliciting novel perspectives. As the level of discussion

surrounding these systems has significantly increased in the past year, the international

community, through the United Nations, is now turning to consider the implications of

these weapons. It is felt that this study, concluding at this pivotal time, can enhance

understanding in this area, thus advancing the debate on AWS.


CHAPTER THREE – METHODOLOGY

Introduction

The conceptual framework established in Chapter II sets out two distinct, yet overlapping

areas, that of International Humanitarian Law and Ethics. Within each of these paradigms,

four separate questions emerge. This chapter builds a platform upon which this study will

look more deeply into the areas identified in the literature. The chosen qualitative

methodology explores the thesis question by adopting a modified interpretative

hermeneutical approach as a research method to analyse the conflicting perspectives against

the legal and ethical paradigms. Before exploring the research methodology, however, the

impact of epistemology is considered in order to show how the research philosophy for the

thesis was established.

Epistemological Approach

Late in 2012, while deployed to South Lebanon, I became involved in a discussion about

the Israeli Defence Forces’ surveillance systems operating overhead and reflected on the

impact that drone technology was having on modern conflict. Coming from a background

in International Humanitarian Law, I recognised the portent of using drone technology to

distance the killer from the battlefield. I also recalled this philosophical question from my

time as a military pilot, where the 1999 bombing of the bridge in Grdelica,13 during the Kosovo campaign, was a reminder of the legal and ethical responsibilities resting on those who deploy lethal force.

13 The Grdelica train bombing occurred on 12 April 1999, when two missiles fired by NATO aircraft hit a passenger train while it was passing across a railway bridge over the Južna Morava river at Grdelica gorge, some 300 kilometres south of Belgrade in Serbia. As a result, 14 civilians, including children and a pregnant woman, were killed and another 16 passengers wounded.

Following my return to duty in Ireland I came upon the Human Rights Watch report,

‘Losing Humanity’ (2012), and this brought the discussion from September 2012 to mind.

The debate over autonomous weapons captured my imagination from the outset, not least

because of its contradictory perspectives, but also because it is a real example of an age old

question: what control on behaviour is superior and antecedent to law? To me, this has

always been the realm of ethics. The direct application of this question to AWS struck me

as I considered whether they were legally prohibited, and if they were not, ought they to

be? Generally speaking, the promulgation of law is reactive to evolving circumstances, but

its genesis is reflective of something more transcendent, some foundational principles

embedded in our ethical fabric. Simply put, as I understand it, our ethics should guide the

development of our laws. However, not all things are simple, and developing a

methodology to reflect this epistemological background would necessitate a process that

could reflect on the ethical and legal considerations, both in themselves and against each

other.

Methodological Approach

Settling on a methodological approach to data analysis proved challenging. As AWS were

emergent, there was little by way of direct historical context to provide a basis for a

historiography or a case study evaluation. Similarly, without a life based sociological

platform, neither an ethnographic nor a narrative analysis approach appeared suitable. As a

conceptual picture of the study evolved, it became apparent that the value to be gained

would come from an assessment of the views of those with a deep understanding of AWS. This


approach, it was felt, would complement the qualitative nature of the examination and

could engage in a post-positivist course to extrapolate findings.

As the area of AWS is incipient, niche and complex, I am drawn to the view of Salomon

(1991: 11), where he recognises that discrete and complex environments require different

research approaches. This, he states, has led to the growing acceptance of the qualitative

perspective to research as the better way to handle complex and dynamic environments. On

exploring the wider field of research methods, the possibility of using a hermeneutical

approach, with subtle modifications to suit the nature of the study, offered a useful means

of evaluating the data.

By treating the literature on AWS, evaluated in Chapter II, as the ‘core text’, it was possible to focus on some of the issues probed in the open interviews and feed them back into the ‘core text’, a process which can be seen as a modified interpretative hermeneutic approach. Initially,

this involved coding the noteworthy issues identified in the open interviews, while

incorporating an understanding of the ontological perspective of the respective

interviewees. Each of the issues was then analysed against the ‘core text’, to identify

views and perspectives that had not emerged in the earlier research of the literature. Finally,

a deeper analysis of the literature reviewed earlier and additional publications was

undertaken to identify whether these views had as yet been the subject of wider discourse.

According to Hitchcock & Hughes (1989), qualitative researchers should be deliberately

open minded, ask questions that are open ended and remain prepared to change direction or

take a developmental view since the world is so complex. Consequently, the


methodological tool adopted in this study was to use unstructured interviews to engage with

a number of subject matter experts, by posing some open ended rhetorical questions drawn

from the areas identified in the conceptual framework. These questions, set out in Appendix

B, were intended to serve not as the focus of the interviews, but as a basis to promote

discussion in particular areas of the subject, to elicit novel interpretations and perspectives.

Methodological Limitations

From the outset it was recognised that the scope and quality of the data collected would be

greatly determined by the subject matter experts available for interview and the extent to

which they would engage with the research. Accordingly, it was identified that the research

would benefit from a diverse panel of eminent international experts,14 who could be pre-apprised of the literature reviewed in Chapter II. Thus, aware of the initial research and a

set of research sub-questions, the subject matter experts would be able to focus their

perspectives more acutely on the areas being reviewed. It was recognised that this approach

presented an intellectual risk, insofar as it might not yield any new or enhanced findings. It

was accepted, however, that in the absence of new or enhanced findings, merely the

journey of understanding would endow the researcher with a deeper appreciation of this

emerging and interesting area. The findings and analysis of the data obtained from the

expert respondents during interviews are set out in the following chapter.

14 Academic Biographies of the expert interviewees are set out in Appendix A.

CHAPTER FOUR – RESEARCH FINDINGS

Introduction

This chapter explores the views of those closest to the debate, which were captured during

the research phase. It is a melting pot of ideas and perspectives, where the most prominent

issues arising from the research are discussed. Surprisingly, the seemingly adversarial

nature of the debate becomes less pronounced, as the key areas of concern are

acknowledged by all sides. While the research explored all the issues raised in the

conceptual framework, set out in Chapter II, new and previously untraversed views only

arose in respect of certain areas. Accordingly, this Chapter will primarily concern itself

with those areas that bore some novel perspectives and, as will emerge, the data from the

respondents reveals that the areas of Supervision and Command Responsibility feature as

strong themes for consideration.

While the review of the literature in Chapter II revealed only limited concern regarding

States’ accountability for breaches of law committed by AWS, the research reveals that this

area may be of greater significance for those States that are party to stricter conventions

protecting human rights, particularly the European Convention on Human Rights. The

research also raises new and significant questions regarding the lowering of the threshold in

respect of the decision to resort to war, which will be considered against the incremental

nature of this ever-evolving technology.

Flowing from the evaluation of these areas, the study will present some novel and

important findings, and these will be analysed to ascertain legal and ethical implications


arising from the deployment of AWS. In due course, these findings will form the basis of

the thesis recommendations in Chapter V.

Narrowing the Focus of the Research Analysis

The process of gathering data confirmed one of the basic conclusions arrived at during the

review of the literature, namely, that there are no specific legal provisions prohibiting the

use of AWS. Krishnan (2 April 2014) went so far as to say that he spent a considerable amount of time searching for one while writing his book, yet was unable to identify any.

During the research phase, it also became evident that some of the areas identified in the

review of the literature were less revealing in terms of the thesis question posed,

particularly the principles of Humanity and Distinction. Similarly, Military Necessity did

not arise as a prominent issue in its own right; however, the research will indicate its

relevance in respect of the future development of this incrementally evolving technology.

Of the four International Humanitarian Law principles, it was Proportionality that was of greatest interest in this study, together with the overarching obligation to take precautions in attack, as required by Article 57 of the first Protocol Additional to the Geneva Conventions, 1977, to which we first turn.

Precautions in Attack

In terms of transferring the proportionality assessment to a computer-based system, Stewart (28 March 2014) cautions that “we should be careful for what we wish for”, as this may

result in imposing an unwanted process on commanders. He states that these situations are nuanced and circumstance-dependent and that, if one is to “look through the telescope from the other end”, such a development may prejudicially remove a commander’s freedom of action, perhaps forcing them into an action which they might not have elected to execute.

Gubrud (12 March 2014) also makes reference to the requirement to take precautions in

attack and reminds us of the requirement that an attack shall be cancelled or suspended if it

becomes apparent that the attack may be expected to cause incidental loss of civilian life,

injury to civilians, damage to civilian objects, or a combination thereof, which would be

excessive in relation to the concrete and direct military advantage anticipated. He draws the

seemingly rational conclusion that this cancellation or suspension can only be executed

where there is some form of control or human interface. Indeed, according to the ICRC15, State practice establishes this rule as a norm of customary international law applicable in

both international and non-international armed conflicts. Consequently, Gubrud (12 March

2014) is of the view that there must be a link to AWS, ultimately giving a human control over the system.
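Gubrud’s reading of Article 57 can be expressed as a simple software pattern: whatever else an AWS does, the final engage step must pass through a gate that re-assesses expected harm and remains open to a human abort. The minimal Python sketch below is offered purely as an illustration under assumed names (Assessment, sensor_feed, human_console and weapon are all invented for this example); it does not describe any actual system.

# Hypothetical sketch of an engagement loop honouring the Art. 57
# duty to cancel or suspend an attack. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Assessment:
    expected_civilian_harm: float   # estimated incidental loss and damage
    anticipated_advantage: float    # concrete and direct military advantage

def is_excessive(a: Assessment) -> bool:
    # Stand-in for the legal judgement that harm would be "excessive
    # in relation to" the anticipated advantage; no agreed metric exists.
    return a.expected_civilian_harm > a.anticipated_advantage

def engagement_loop(sensor_feed, human_console, weapon):
    for assessment in sensor_feed:            # re-assess as facts change
        if is_excessive(assessment):
            weapon.suspend()                  # machine-initiated suspension
            continue
        if human_console.abort_requested():   # the human interface Gubrud
            weapon.cancel()                   # insists upon: cancellation
            return                            # requires a live link
        weapon.proceed(assessment)

The point of the sketch is structural rather than computational: the cancel branch is unreachable without a functioning human interface, which is precisely the link Gubrud argues must exist.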

Like Gubrud (12 March 2014), Bourbonniere (06 March 2014) also believes that there is a

requirement for supervision of AWS, which arises as an implied legal responsibility under

International Humanitarian Law as a consequence of the Martens Clause16. Heintschel von

15 Rule 19 of the ICRC’s major international study into current state practice in international humanitarian law, undertaken in order to identify customary law in this area.

16 The preamble to the Convention with respect to the laws of war on land (Hague II), 29 July 1899 states: “Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.”

17 The term black letter law is used to refer to the technical legal rules to be applied in a particular area, which are most often largely well-established and no longer subject to reasonable dispute.

Heinegg (24 March 2014) does not entirely agree with this premise, and believes that many

people rely on the Martens Clause to draw conclusions which are not necessarily reflected

in the law as it stands today. Crucially, he is of the belief that when the current law is

analysed, particularly in terms of the Protocols Additional to the Geneva Conventions of

1977 and the United Nations Convention on Certain Conventional Weapons, there is little

room for the Martens Clause to operate in any meaningful way. Accordingly, he is of the

view that the black letter law17

and the prohibitions therein have progressed to such an

extent that the Martens Clause is no longer relevant.

This is not to say that a duty to supervise under International Humanitarian Law does not

arise pursuant to provisions other than the Martens Clause. Stewart (28 March 2014) holds

the view that there is indeed a duty to supervise, but rather than it emanating from the

Martens Clause, he sees it as a function of the requirement to exercise responsible

command. While the application of command responsibility will be dealt with in greater

detail later in this Chapter, under the requirement for Accountability, these opinions point to a requirement, certainly in the short term, for at least a limited retention of human control

over AWS.

Indeed, this requirement for some form of control over AWS is compatible with the view of

Redmond (13 March 2014), who believes that there will always be some form of link to AWS, purely as a technical requirement. This link, he contends, will always be necessary to

update the system’s telemetry, or to send it more information about changing circumstances. Redmond’s (13 March 2014) opinion is at variance with the view encountered in the literature reviewed in Chapter II, where opponents of AWS believe that these systems will operate in a manner that does not require a real-time connection to a human interface.

Logically, this calls us to consider the level of human interface that AWS might have. While Gubrud (12 March 2014) believes that you can have a human sitting in front of some very complicated interface, this point is disputed. Arkin (11 April 2014) is of the view that the development of advanced systems will entail the functioning of machines in a manner which, in certain circumstances, will be too fast to be controlled by human cognitive responses. Indeed, Lewis (7 May 2014) helpfully points out that this is ultimately what is meant by a truly autonomous weapon: one that makes firing decisions independently of human oversight.

Developing this point, Stewart (28 March 2014) recognises that these systems may be seen as a panacea for expenditure in other areas and, consequently, circumstances may prevail where there are more systems deployed than individuals to supervise them. Krishnan (2 April 2014) also notes that, even at present, individual human supervisors are required to supervise more than single drone systems. It seems intuitive, therefore, that as circumstances evolve, where supervision does exist, it may be so limited as to be insufficient to provide for human oversight at all times.
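This intuition can be given rough shape with numbers. The short Python sketch below is a toy calculation only, with every figure invented for illustration; it simply shows how quickly one operator’s attention is exhausted as platforms and decision tempo multiply.

# Toy arithmetic: fraction of engagement decisions a single operator
# could meaningfully review. All numbers are invented for illustration.

platforms = 8                  # systems assigned to one supervisor
decisions_per_hour = 6         # engagement decisions per platform
review_minutes = 4.0           # minutes of attention each decision needs

demand = platforms * decisions_per_hour * review_minutes   # minutes per hour
coverage = min(1.0, 60.0 / demand)

print(f"attention demanded: {demand:.0f} min per hour")    # 192 min
print(f"decisions receiving full review: {coverage:.0%}")  # ~31%

On these assumed figures the operator could give full consideration to fewer than a third of the decisions nominally under their supervision, which is the sense in which oversight can become nominal rather than effective.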


Lewis (7 May 2014) sees this as having resonance in terms of the existing defensive AWS,

which have been in operation for upwards of three decades. These systems react

automatically to incoming anti-ship missiles at a rate faster than a human’s cognitive

ability, in order to neutralise threats too swift to be perceptible to human reactions18.

Interestingly, many who are averse to AWS, including Gubrud (12 March 2014), would recognise these systems as acceptable, primarily due to their deployment against incoming ballistic weapons, rather than human targets. A related concern is raised by Sharkey (2014), who cautions against the gradual extension of currently supervised defensive systems in a manner that strays beyond the limits of appropriate target supervision. He states that the

accelerating pace of warfare should not be permitted to dictate the use of computerised

weapons that are not meaningfully controlled by humans.

It seems, therefore, that any efforts to incorporate human supervision over AWS should include a requirement for that supervision and control to be effective and meaningful. Weizmann (22 April 2014) and Krishnan (2 April 2014) also raise the related issue of the questionable effectiveness of human supervision where individual operators may be tasked with supervising multiple platforms, thus reinforcing this requirement.

18 These systems are sometimes referred to as Sense and React to Military Objects (SARMO) weapon

systems, and are designed to automatically intercept incoming munitions such as missiles, artillery shells and

rockets. Examples include Phalanx, NBS Mantis, C-RAM and Iron Dome. These systems can detect, evaluate

and respond within seconds thus making it extremely difficult for human operators to exercise effective

supervision.


KEY FINDING # 1: Some form of effective human supervision and control of AWS

will be necessary.

It was noted by Lewis (7 May 2014), however, that while this recommendation is appropriate from a practical standpoint, such supervision renders the systems something less than truly autonomous. While this requirement for supervision emerges as the principal finding from the research, it has further justification in terms of the lack of universal agreement over many central aspects of International Humanitarian Law. This was

evident in the research findings on the principle of Proportionality.

Proportionality

Of the four International Humanitarian Law principles reviewed in Chapter II, it was the

requirement for proportionality that gave rise to interesting insights in the research.

Heintschel von Heinegg (24 March 2014) proffers the view that AWS may be better than humans in a variety of circumstances, particularly in making the proportionality determination in conflict. It is already the case that much of the decision-making process in conventional targeting operations is arrived at through the use of computer programmes.

Weizmann (22 April 2014) notes that the collateral damage estimate methodology (CDEM) used by the U.K. and U.S. militaries employs a software application, referred to as the Collateral Damage Model (CDM), to assess collateral damage. This computer programme assesses factors such as a weapon’s precision, its blast effect, and its effects on humans and structures19. Schmitt (2013: 20) also asserts that there is no question that AWS can be

programmed to perform CDEM-like analyses and would produce results no less reliable

than the CDEM.

Stewart (28 March 2014) does not concur entirely with this view and offers a more nuanced perspective. He points out that the calculation of proportionality is very much predicated on whether the military advantage20 is assessed against the specific objective or in terms of the overall campaign advantage that may accrue from the destruction of the objective. An example of this might arise where an enemy fuel depot is targeted in a built-up area. Its destruction is likely to produce a wide blast area, could kill many civilians and could therefore be disproportionate. However, if it is one of only two fuel depots available to the enemy and the destruction of both will bring about their swift defeat, then it may be proportionate in terms of the military advantage to be gained. Accordingly, Stewart believes that there will continue to be significant military and human judgement in this weighing exercise, which may not be easily reduced to a computer algorithm.
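Stewart’s point can be made concrete with a toy calculation. In the Python sketch below, every figure is invented purely for illustration and the comparison function is a crude stand-in for a legal judgement that resists quantification; the same strike nonetheless passes or fails the proportionality test depending solely on how ‘military advantage’ is scoped.

# Illustrative only: a toy proportionality check showing how the
# outcome flips with the scoping of "military advantage".
# All values are invented; no accepted numeric scale exists in law.

def proportionate(expected_civilian_harm: float, advantage: float) -> bool:
    # Crude stand-in for the legal test that incidental harm must not
    # be "excessive in relation to" the anticipated military advantage.
    return expected_civilian_harm <= advantage

expected_civilian_harm = 40.0   # fuel depot strike in a built-up area

advantage_of_objective = 25.0   # the depot assessed in isolation
advantage_of_campaign = 90.0    # one of two depots whose loss ends the war

print(proportionate(expected_civilian_harm, advantage_of_objective))  # False
print(proportionate(expected_civilian_harm, advantage_of_campaign))   # True

The arithmetic is trivial; what the machine cannot supply is the choice of scope, which is exactly the human judgement Stewart identifies and the interpretive consensus on which the following finding depends.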

KEY FINDING # 2: Any capacity for a computer to replicate the human judgement

required to assess proportionality in respect of precautions to be taken in attack is

contingent on a universally accepted interpretation of ‘concrete and direct military

advantage’.

19 JSP 900, United Kingdom Joint Targeting Policy, 2009, p.13.

20 Art 57(2)(a)(iii) requires that those who plan or decide upon an attack shall refrain from deciding to launch

any attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to

civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct

military advantage anticipated.


While this study only reveals difficulties regarding AWS arising from interpretive disagreement over the International Humanitarian Law principle of Proportionality, there may, similarly, be issues in respect of a lack of universal agreement over other legal principles. By contrast, an area of International Humanitarian Law where there is greater consonance in terms of interpretation and application is Accountability. Notwithstanding this, the review of the literature reveals that the related areas of proportionality and precautions in attack, together with accountability, have been somewhat neglected in the discourse to date.

Accountability for Autonomous Weapon Systems

The link between control and responsibility is also recognised by Stewart (28 March 2014).

Pragmatically, he envisages that “the whole area of responsibility won’t be a big bang

transition overnight, and part of the transition process will be the extent to which we are

comfortable that responsibility can be exercised by the control that we have over the

systems”. Accordingly, as AWS become more reliable, humans will be more comfortable

in ceding greater degrees of responsibility to them. Ultimately, however, where does

responsibility lie when an error occurs and an AWS perpetrates a breach of International

Humanitarian Law, resulting in a war crime?

While this question has been traversed at length and the arguments have been set out in

Chapter II, there has been no conclusive position established as to where culpability will


lie. The research shows a broad consensus that the AWS cannot be held responsible in

itself. There is similar consensus that there must be accountability for war crimes at some

level, there being an almost visceral (Stewart, 28 March 2014) human desire to hold

someone responsible, as opposed to something. The elusive question is who?

Individual Responsibility

Lewis (7 May 2014) believes that an individual operator would only be liable for a breach of International Humanitarian Law committed by an autonomous weapon system in circumstances where they operated the system while either knowing of, or being negligent as to, the system’s likelihood of committing a war crime. Any culpability would therefore require a mental element, or mens rea, in order to result in a conviction.

The situation is somewhat different in respect of a commander’s responsibility. The

generally accepted standard of proof required to convict under the doctrine of command

responsibility has been formed over the past seven decades. Its most widely accepted

formulation finds expression in Art. 28(a) of the Rome Statute, establishing the

International Criminal Court, and is recognised as customary international law under Rule

152 of the ICRC’s study on customary international law21. This provision sets out that a

military commander can be held criminally responsible for crimes committed by forces

under his command. This will arise where he failed to exercise control properly over his

21 Study on customary international humanitarian law, conducted by the International Committee of the Red

Cross (ICRC) and published by Cambridge University Press in 2005.


forces and failed to take all necessary and reasonable measures within his power to prevent

the commission of the crime.

When this standard is applied in the context of AWS, it becomes clear that a commander has a duty to exercise control over their operation and to take all necessary and reasonable measures to prevent the commission of a crime. Accordingly, until such time as AWS have developed to the point where they can reliably operate without committing a war crime, commanders must exercise control over them. Of course, this can only be effectively done in circumstances where the systems are being supervised. This can be seen as the legal rationale for the UK’s position on supervision, expressed in its policy directive and as elucidated by Stewart (28 March 2014). Accordingly, KEY FINDING # 1 also has a legal basis in the necessity to exercise command responsibility under International Humanitarian Law.

Bourbonniere (6 March 2014) goes further than this. He is of the view that responsibility falls on the State concerned and that the deployment of AWS also entails individual responsibility on the part of political masters. He believes that the law, as it currently exists, must change to take account of AWS, insofar as those who decide to go to war without incurring a danger to their soldiers must be responsible for such weapons. Similarly, Krishnan (2 April 2014) asserts that responsibility for AWS should rest at the highest levels of command and political leadership, as lower-level commands should not make decisions about their deployment, owing to their lack of technical knowledge and of the strategic picture.

Lewis (7 May 2014) disagrees with this position and sees it as conflating the concepts of jus ad bellum22 and jus in bello23. Arguably, however, the doctrine of command responsibility already applies to political superiors, albeit with a higher threshold in respect of the knowledge requirement24.

One of the most difficult concerns regarding accountability is the issue of diffusion of responsibility, which occurs where the decision to kill comprises numerous small decisions taken by many individuals, none of which entails an illegal act in itself, but which, taken as a whole, may have an illegal outcome. Heintschel von Heinegg (24 March 2014) observes that, to date, it has mainly been academics involved in the discussion, not governmental lawyers, and consequently States have not been overly concerned about whether these systems are in compliance with the law. While acknowledging that States have undertaken weapon reviews, he notes that where there is a diffusion of responsibility it becomes very difficult to hold any one individual responsible for their action. This, he believes, serves to reduce the concerns of States in developing and deploying AWS.

Notwithstanding this, Heintschel von Heinegg (24 March 2014) points to the rules regarding precautions in attack under the first Protocol Additional to the Geneva Conventions, 1977, noting that they apply to those who plan and execute attacks, and accordingly it is the planners and executors who bear the ultimate responsibility for breaches of International Humanitarian Law. Furthermore, he recognises that the technical nature of these systems will entail the inclusion of data recording devices, akin to an

22 Jus ad bellum (Latin for “law to war”) is a set of criteria that are to be consulted before engaging in war, in order to determine whether entering into war is permissible; that is, whether it is a just war.

23 Jus in bello (Latin for “law in war”) sets out the means and methods to be applied during war and the limits to acceptable wartime conduct.

24 Art 28(b) of the Rome Statute applies to superiors, who are not military commanders, where the knowledge requirement is ‘knew, or consciously disregarded information which clearly indicated, that subordinates were committing or about to commit such crimes’.


aeroplane’s black box, which will preserve a recording of how the system acted and who gave it its riding instructions. This, in a somewhat circular manner, brings accountability back to rest substantially on the shoulders of commanders.

KEY FINDING # 3: Commanders who are involved in the planning and execution of attacks conducted by AWS will bear the greatest degree of responsibility for breaches of international law occasioned by these systems.
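The kind of recording device Heintschel von Heinegg anticipates corresponds to a familiar software pattern: an append-only audit log that ties each system action to the instruction, and the individual, behind it. The Python sketch below is an assumption-laden illustration only (the class, file name and log fields are all invented for this example) and does not describe any fielded system.

# Hypothetical sketch of a "black box" recorder for an AWS: an
# append-only, hash-chained log linking each action to the instruction
# and the human who issued it. Illustrative only.

import json
import time
from hashlib import sha256

class BlackBoxRecorder:
    def __init__(self, path: str):
        self.path = path
        self.prev_hash = "0" * 64            # start of the hash chain

    def record(self, actor: str, instruction: str, action: str) -> None:
        entry = {
            "time": time.time(),
            "actor": actor,                  # who gave the riding instructions
            "instruction": instruction,      # what was ordered
            "action": action,                # what the system actually did
            "prev": self.prev_hash,          # chaining deters later tampering
        }
        line = json.dumps(entry, sort_keys=True)
        self.prev_hash = sha256(line.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(line + "\n")

recorder = BlackBoxRecorder("aws_blackbox.log")
recorder.record("Comd 3 Bde", "engage depot at GRID 1234", "weapon released")

Such a record is precisely what would allow an investigator to trace an unlawful engagement back to the planners and executors identified above, which is why the existence of the device tends to return accountability to commanders.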

The extent of accountability for autonomous weapons stretches beyond individual responsibility, as States can be held responsible, in their own right, for breaches of

International Humanitarian Law and International Human Rights Law.

State Responsibility

It was observed during the course of the review of literature for Chapter II that there has

been relatively little by way of discussion regarding the responsibility of States for breaches

of international law by AWS. Indeed, the only reference cited came from Stewart (2011), in an article on new technologies and International Humanitarian Law.

Heintschel von Heinegg (24 March 2014) agrees that there is a paucity of discussion in this area. Acknowledging that this responsibility emanates from International Human Rights Law, rather than International Humanitarian Law, he believes that the concept of extraterritorial applicability has not yet become firmly established, save for one or two cases in the European Court of Human Rights. Noting the relatively

sophisticated nature of the European Convention on Human Rights, he holds the view that

the jurisprudence of the Court in Strasbourg, while having established the doctrine of

extraterritorial applicability, remains in a state of flux regarding its application.

Notwithstanding this high threshold of liability, Heintschel von Heinegg (24 March 2014)

believes that the International Human Rights Law paradigms applicable in other regions of

the world will not follow the strict course adopted in Europe for the protection of human

rights.

Stewart (28 March 2014) recognises that the use of AWS will immediately trigger a

signatory State’s obligations under the European Convention on Human Rights, subject to

the unresolved position in terms of extra-territorial applicability. He believes that States

would be more uncomfortable with accountability for the actions of AWS than for those of soldiers in the human rights construct. Such a discomfort with the actions of AWS in terms

of International Human Rights Law is also mentioned by Krishnan (2 April 2014), who

raises the possibility that International Human Rights Law may, in time, recognise that

being killed by an AWS offends a right to a dignified death. This prospect seems somewhat

distant at present, but the rate of advancement of International Human Rights Law indicates

that greater scrutiny will be brought to bear in terms of the protection of the right to life

under this legal paradigm.

KEY FINDING # 4: States party to conventions espousing more stringent standards

of International Human Rights Law will be exposed to litigation for breaches of

provisions protecting the right to life.


Aside from International Human Rights Law, Heintschel von Heinegg (24 March 2014) affirms the separate liability of States for war crimes under International Humanitarian Law, as established under Article 3 of the Hague Convention of 190725. While he notes that the threshold for holding a State accountable is lower than

that for an individual, it should be recognised that the laws and principles regarding State

responsibility are not well developed, notwithstanding the advances heralded by the UN

General Assembly’s adoption of the Draft Articles on the Responsibility of States26.

Accordingly, State responsibility is not considered in the following analysis of the findings.

Analysis of the Findings

This study has implications for many sectors involved in the development and deployment

of AWS, most notably, the military. The exposure of commanders to consequences where

AWS have breached International Humanitarian Law will, no doubt, be a cause of great

concern to militaries. While this may appear to be an unfair apportionment of liability, flowing as it does from the doctrine of command responsibility, it seems that this will be the

inevitable position. This exposure of military commanders may be ameliorated by the

extent to which militaries approach the requirement for control and supervision of AWS.

Ironically, therefore, the personal exposure of commanders may result in the strongest call

for supervision coming from the military sphere.

25 This provision is restated in Article 91 of the Protocol Additional to the Geneva Conventions, 1977, such that “A Party to the conflict which violates the provisions of the Conventions or of this Protocol shall, if the case demands, be liable to pay compensation. It shall be responsible for all acts committed by persons forming part of its armed forces”.

26 The final text of the Draft Articles on the Responsibility of States for Internationally Wrongful Acts by the International Law Commission (ILC) was adopted by the United Nations General Assembly under resolution 56/83, in December 2001.


Any such engagement with supervision will manifest itself in the development of

procedures providing for the use of AWS. To date, this task has not been shirked by those militaries involved in the development and use of these technologies. Procedures, however,

are often issued and remain extant, only being reviewed when they have been eclipsed by

circumstances or practice. Inherently, therefore, procedures fail to keep pace with progress.

This will be particularly problematic in terms of AWS unless the military procedures take

account of the rapidly evolving nature of this technology. As the requirement for

supervision in Key Finding 1 demonstrates, any procedures will have to be iterative in

nature, constantly evaluating, amending and re-evaluating the transition towards greater and

perhaps even full autonomy.

Military commanders will also be very sensitive to their State’s exposure to liability under International Human Rights Law constructs, particularly in those States party to the ECHR.

This will require militaries to develop a detailed understanding of the emergent concept of

extra-territorial applicability of human rights obligations for breaches of human rights law

occurring during conflicts. Certainty with respect to this will be difficult to achieve due to

global inconsistencies and the somewhat fluid nature of this developing doctrine. Of course,

while this will be of concern to military commanders, it will be of greater concern to their political masters.

Although the ex officio concerns of politicians will focus on the exposure of States to

litigation, they will, no doubt, have personal regard for the risk of bearing political and

legal consequences as strategic commanders. This may give additional impetus for the


implementation of effective supervision of AWS, and ultimately, it will be a matter for

politicians to decide on the extent to which States will be bound to conventions to regulate

their use. Therefore, the self-interest of politicians may also influence the gathering of inter-

State accord for the supervision of these systems.

This gives rise to considerations of pragmatism and practicality, concepts inherent in political decision making. Those interviewed, and much of the literature, indicate that there is little regard for the practical aspects of dealing with such an incrementally evolving technology. The existence of limited AWS for many decades already, and the increasing rate of technological development in this area, suggest that there is no fixed point against which to assess these technologies, as by their inherent nature they will continue to morph and become ever more sophisticated. Stewart (28 March 2014) argues that the debate ought therefore to have greater focus on the transition process in respect of regulating AWS. He sees the debate as focussing on a broad concept or visualisation of what the end state will be, and as thus failing to recognise the importance of regulating for its progress and

development. Recalling the ‘logic’ of Donald Rumsfeld in terms of the unknown

unknowns27, he believes that the pathways to get there are, invariably, unanticipated.

Therefore, as we are unaware of what may impact on the whole pathway in terms of the

development of AWS, it is important to make provision for the incremental development of

these technologies, rather than solely focussing on what AWS will develop into at some

conceptual point in the future.

27 The unknowns we don’t know we don’t know.


Accordingly, the overarching conclusion arrived at during the course of this research

suggests that considerations in respect of the regulation of AWS should have greater regard

for their evolving nature and not merely their legality or portent for the future. Such an

approach points towards an almost radical shift in the conceptual approach for considering

AWS, which accepts their existence and inevitable progression, but sets out a clear

roadmap for their development and deployment by providing an incremental transitional process.

Such a view may not be agreeable to many, such as Gubrud (12 March 2014) and

campaigners like Human Rights Watch, who seek an outright ban on AWS. However, it is

reflective of the growing acceptance by many, including Arkin (11 April 2014),

Bourbonniere (6 March 2014) and Krishnan (2 April 2014), that, for the time being, there

ought to be a moratorium on permitting AWS to make the decision to kill without human

oversight.

Conclusion

Having interrogated the data gathered to generate findings, this study has developed a

deeper understanding of the legal and ethical issues identified in the review of literature in

Chapter II. Consequently, in terms of the implications arising from the development and

deployment of autonomous weapons systems, this study has identified a duty to supervise,

the need for consensus in terms of International Humanitarian Law, as well as parameters

for individual and State accountability. Most notably, however, the overarching conclusion

of this study calls on the various communities debating this subject to consider the issues


from a different perspective. While it may be reasonable to accept that these weapons are

not illegal per se, they pose significant challenges for regulation and this research suggests

that the manner of approaching the subject requires a radical conceptual re-think.

Consequently, if there is to be a useful discourse in terms of autonomous weapons, the

debate ought to focus on charting a safe course for their development and deployment, one

that is centred on control and supervision.

This study will now turn to focus on the potential impact of these findings on the wider

debate. In so doing, the concluding chapter will also test the findings in terms of their

strengths and limitations before identifying how they might form the basis for further study.


CHAPTER FIVE - CONCLUSIONS AND RECOMMENDATIONS

Introduction

Albert Einstein, remarking on the advent of the atomic bomb in 1948, stated that our

technology had exceeded our humanity. This observation has been echoed recently in the

report by Human Rights Watch (2012) and the concerns of many others, in respect of the

emergence of AWS. In setting out to study the legal and ethical implications arising from

the deployment of AWS, it was immediately evident that the technology has been

developing at an increasingly rapid pace, yet the discussion as to their impact is in a

relatively nascent state. The breadth of their implications casts a wide net, touching on technical, legal and ethical areas, as well as reaching into the military and political arenas.

Against a somewhat frenetic debate, Chapter II set out the various positions and

perspectives in a coherent format. Using this theoretical construct, these positions and

perspectives were explored in the research phase, which led us to the findings. At this

juncture, it is worthwhile assessing some of the strengths and weaknesses of the study to

establish context for the concluding recommendations.

Strengths & Weaknesses

The principal strength of this study was the academic calibre of those interviewed. All who contributed were leading authorities in their fields, and this was enhanced by the broad spectrum of backgrounds, representing legal, scientific and military perspectives. This presented a foundation upon which a balanced perspective could be built. Rigorous cross-questioning of

those interviewed, where opposing positions were presented for comment, also brought a

depth and finesse to the findings. These views found expression through the lens of the author, who came from a technical and command background in the military and latterly served as a military lawyer.

This research comes at a timely juncture, as the issue of AWS has become internationally

prominent over the past number of months. Prompted by much of the literature reviewed in Chapter II, the States Parties to the United Nations Convention on Certain Conventional Weapons held a meeting of experts from 13 to 16 May 2014 at the United Nations in Geneva. Indeed, a number of those interviewed for this study were requested to appear and discuss some of the issues arising here.

From a national perspective, a strength of this research arises from it being the first study of

the subject in Ireland. It may also be the first study internationally to focus on the area of

command responsibility in terms of AWS, from a country subject to the European

Convention on Human Rights. Indeed, it is hoped that the pre-publication draft requested by the Irish Department of Foreign Affairs in advance of the expert meeting in Geneva will assist in shaping Ireland’s submission under the United Nations Convention on Certain Conventional Weapons in respect of these weapons.


However, the study is not without its weaknesses. It must be noted that the data was gathered from a limited number of interviewees. While this was balanced by sourcing some of the most eminent voices in the area, the interviews were in most instances conducted by audio-visual link, owing to the international locations of those involved. Significantly, also,

the scale of the study was circumscribed by the confines of the Magister Artium

programme, and greater scope for developing the arguments could have brought increased

richness to the findings. This limitation revealed numerous areas for further study, set out

below.

Areas for further study

As a consequence of the neoteric nature of the debate, an array of areas presented

themselves as having scope for further study and will appeal to prospective researchers

from a number of disciplines. From a legal viewpoint, the issue of establishing culpability

in terms of the prosecution of offences for acts committed by AWS will be of interest, as

will the specific frameworks that can be applied for the regulation of AWS. Technically,

further study into the functionality of how supervision and control can be maintained,

particularly in the context of information overload from complex and even multiple

devices, will be required. From a military and policy perspective, further study into mechanisms to inform and educate commanders on the use of, and responsibility for, AWS may pre-empt early problems in the deployment of these systems.

Academically, this subject area will continue to evolve and numerous avenues for more in-

depth study are evident. With such an array of perspectives, a cross-sectoral evaluation of legal issues from a purely ethical perspective presents a most interesting opportunity for

developing further learning in this area.

Recommendations

In light of the timing of this study, the findings are of direct and immediate relevance to the considerations of the States Parties to the United Nations Convention on Certain Conventional Weapons, as the focal point for charting the international community’s way forward for AWS. It is to the international community at large, and to the States Parties to that Convention in particular, that the following recommendations are addressed:

1. AWS should not yet be exposed to situations where disagreement exists over the

interpretation of International Humanitarian Law.

2. International consensus should be sought on the retention of effective human

supervision and control of AWS until they can be shown to satisfactorily comply

with all aspects of International Humanitarian Law.

3. The international community should set out an incremental transitional road map for the development and deployment of AWS.

4. Militaries involved in the deployment of AWS should review their policies for the

investigation and prosecution of offences by commanders and operators where

AWS have been involved.

5. States deploying AWS should remain conscious of their potential liability in terms

of the extra-territorial applicability of their International Human Rights Law

obligations.


Conclusions

The wide application and novel perspectives of the findings ensure that they will, in some

part, contribute to the existing knowledge in this area. As with research into any particular subject, the journey to understanding is often as important as the outcome; so it must be said of autonomous weapons. The importance of controlling their ethical development is

more significant than arguing over their legality. The closing thought in this research

asserts that in order for AWS to be applied progressively, we must consider how they will

develop and evolve. Only then can we chart a safe passage for their inevitable arrival, one

that respects law, ethics and humanity.


APPENDIX A

Respondents’ Biographies

Prof Michael Lewis

Prof Ronald Arkin

Prof Armin Krishnan

Mr Michel Bourbonniere

Col Darren Stewart

Peter Redmond

Mark Gubrud

Prof Wolff Heintschel von Heinegg

Ms Nathalie Weizmann

Prof Noel Sharkey

Prof Michael Lewis

Professor Lewis joined the Ohio Northern faculty in August, 2006 and is currently a

Professor of Law in the Law Faculty. Lewis flew F-14s for the United States Navy in

Operation Desert Shield, conducted strike planning for Desert Storm and was deployed to

the Persian Gulf to enforce the no-fly zone over Iraq. He was a Topgun graduate in 1992

and was featured in a NOVA documentary on Topgun and aircraft carriers.

After his naval service, Lewis graduated from Harvard Law School, cum laude, was a

management consultant with McKinsey and Company, and served as a litigation associate

with McGuireWoods, LLP, in Norfolk, Virginia.


Professor Lewis has published more than a dozen articles and essays on various aspects of

the law of war and the conflict between the US and al Qaeda. His work has been cited by

the Seventh, Ninth and Eleventh Circuit Courts of Appeals. He has testified before

Congress on the legality of drone strikes in Pakistan and Yemen and on the civil liberties

tradeoffs associated with trying some Al Qaeda members or terrorist suspects before

military commissions. His op-eds have appeared in numerous media outlets including the

LA Times and the New York Post and he has appeared on Public Radio International to

discuss the increasing use of armed drones in warfare. He has delivered scores of

presentations and panel presentations before military and law school audiences alike

including presentations to the international Military Operations Law conference in

Queensland, Australia, the US Army's JAG School in Charlottesville, VA and law school

events at Stanford, Chicago, Columbia, Penn, Duke, Texas and Northwestern among

others.

Education:

J.D., cum laude, Harvard Law School

B.A., Johns Hopkins University

Prof Ronald Arkin

Regents' Professor and Associate Dean for Research and Space Planning, College of

Computing, Georgia Institute of Technology

Educational Background

•Ph.D. 1987 University of Massachusetts (Amherst)

•M.S. 1977 Stevens Institute of Technology

•B.S. 1971 University of Michigan (Ann Arbor)


Current Fields of Interest:

The thematic umbrella for his research is multiagent control and perception in the context

of robotics and computer vision. A primary thrust is the utilization of models of existing

biological motor and perceptual systems, as developed by neuroscientists and cognitive

scientists, within intelligent robotic systems. He has especially been concerned with the

integration of deliberative reasoning into reactive robotic systems, and mechanisms for

coordination and communication between teams of physical agents.

This research has afforded efficient and robust navigational techniques that are being

explored in a diversity of domains: manufacturing environments, aerospace and undersea

applications, campus settings, military scenarios, nuclear waste management, etc. The

emphasis is on generalizable, flexible methods for intelligent robotic control.

Modularization of behaviors and perceptual strategies affords computationally efficient

solutions to navigation in complex and unpredictable domains. A high-level goal is to

produce survivable robotic systems capable of fitting into a particular ecological niche and

successfully competing and cooperating with other environmental agents.

Some areas of recent research activity include: cooperation, communication, and mission

specification in reactive multiagent robotic systems; ecological robotic systems; unmanned

aerial vehicles; usable autonomous agents; human-robot interaction and robot ethics;

coordinated control of a mobile manipulator using a hybrid reactive/deliberative

architecture; and motor behavior learning using genetic algorithms, case-based reasoning,

and adaptive on-line methods.


Prof Armin Krishnan

Assistant Professor for Security Studies, Department of Political Science, East Carolina

University

Education

University of Salford, UK, European Studies Research Institute

Doctor of Philosophy, November 2006.

Thesis: Military Privatization and the Revolution in Military Affairs.

University of Salford, UK, School of Politics and Contemporary History

Master of Arts in Intelligence and International Relations, July 2004.

Modules Taken: International History and Intelligence, US Intelligence, Middle Eastern

Security, Terrorism: Threat and Response.

Dissertation: Private Military Companies: Looking for a Positive Role in the Post Cold War

Security Environment.

University of Munich, English Language Department (Institut für Anglistik)

Postgraduate Certificate in English-speaking Countries in Conjunction with General &

Business English

Modules Taken: History, Politics, and Culture of Africa, the Asia-Pacific Region, the

Caribbean, Business English

University of Munich, Germany, Geschwister-Scholl Institut für Politische Wissenschaft

Magister Artium in Political Science, Sociology, and Philosophy, July 2001.

Seminars Taken: Political Systems, Political Theory, International Relations, Introduction

to Sociology, Sociological Systems Theory, Introduction to Philosophy, Theory of the


Cinema, the Political Philosophy of Hegel, Plato’s Politeia, Nietzsche’s Genealogy of

Morals, Huntington’s Clash of Civilizations, Rational Choice and Game Theory, Economic

Globalization.

Dissertation: The Concept of the Political in the Political Theories of Carl Schmitt and

Niklas Luhmann, Result: Very Good (1.40).

Michel Bourbonniere

Personal Summary

Legal Counsel, Canadian Space Agency, counsel for major space projects. Contributions

Program (applying various Treasury Board Policies on these issues). Other responsibilities

include: representing Canada at the U.N. COPUOS legal subcommittee meetings and at the

negotiations for the Rome protocol concerning secured transactions, assets based financing

of space assets (Cape Town Convention), CSA assets management, issues concerning

astronauts and other issues concerning the continued operations of the CSA.

Education

2000 D.C.L., (Doctor of Civil law) candidate

McGill University, Montreal, Québec

Thesis research on National Security Law in Outer Space

1996 LL.M. (Air and Space Law)

McGill University, Montreal, Québec

Thesis: ”Commercialization of Remote Sensing U.S. and International Law;

Towards a Liberalization of Economic Regulations”

1985 D.D.N.

Université de Sherbrooke, Sherbrooke, Québec


Graduate degree in contractual business law including drafting and execution of

legal instruments

B.A. Political Science

McGill University, Montreal, Québec

Professional Development

2001 Information & Cyber Operations Law Course

USAF JAG School, Maxwell AFB Alabama

1999 Fellowship, Centre for Hemispheric Defence Studies

National Defence University, Washington, D.C.

Certificate in Defence Planning and Resource Management

1999 Certificate in Law of Armed Conflict

International Institute of Humanitarian Law, San Remo, Italy

1997 Certificate in Space Military Operations

Royal Air Force Strike Command, Air Warfare Centre,

Operational Doctrine and Training, Cranwell, United Kingdom

1996 Certificate of Military Achievement

International Law of Armed Conflict, Canadian Forces

Colonel Darren Stewart

Currently the Chief of Staff, Directorate of Army Legal Services in the British Army

February 2012 – Present (2 years 3 months) Army Headquarters, Andover, United

Kingdom


Director of the Military Department with the International Institute of Humanitarian Law

August 2009 – January 2012 (2 years 6 months), Sanremo, Italy

Assistant Legal Adviser in Supreme Headquarters Allied Powers Europe (SHAPE), NATO

2003 – 2004 (1 year)

Peter Redmond

Peter Redmond is an Adjunct Lecturer on Graphics Vision and Visualisation at the School

of Computer Science and Statistics in Trinity College Dublin. His areas of interest are

Computer Vision, Augmented Reality, Robotics, Artificial Intelligence, Aeronautics,

Simulation and Entrepreneurship. His publications include

Human Computer Action Design (HCI) Methods Supporting the Envisionment, Design and

Evaluation of a Collision Avoidance System, Cahill J, Redmond P, Butler, W, (2010),

Poster Presented at IHCI 2010 Conference, DCU, September 2010.

Identifying the Human Factors requirements for a collision avoidance system, for use by

Pilots on the airport ramp and taxiway areas, Cahill J, Redmond P, Butler, W, (2008), Paper

Presented at IHCI 2008 Conference, NUI Cork, September 2008.


Mark Gubrud

Mark Gubrud is a physicist and an expert on emerging technology and human security, based in Chapel Hill, North Carolina. He has previously worked at Princeton University, the University of North Carolina at Chapel Hill, and the University of Maryland. His principal area of interest is research, writing and speaking on autonomous weapons, space weapons, and

arms control (Program on Science and Global Security). Mark has published widely on this

subject and is a member of the International Committee of Robot Arms Control (ICRAC).

Professor Dr. Wolff Heintschel von Heinegg

Professor Dr. Wolff Heintschel von Heinegg holds the Chair of Public Law, especially

Public International law, European Law and Foreign Constitutional Law at the Europa-

Universität Viadrina in Frankfurt (Oder), Germany. In the academic year 2003/2004 he was

the Charles H. Stockton Professor of International Law at the U.S. Naval War College and

he currently holds that position for the academic year 2012/2013. From October 2004 until

October 2008 he was the Dean of the Law Faculty of the Europa-Universität. From October

2008 until November 2012 he was the Vice-President of that university. Previously, he

served as Professor of Public International Law at the University of Augsburg. He had been

a Visiting Professor at the Universities of Kaliningrad (Russia), Almaty (Kazakhstan),

Santiago de Cuba (Cuba) and Nice (France). He was the Rapporteur of the International

Law Association Committee on Maritime Neutrality and was the Vice-President of the

German Society of Military Law and the Law of War. Since 2007 he has been a member of the Council of the International Institute of Humanitarian Law in San Remo, Italy. Since May 2012 he has been the Vice-President of the International Society for Military Law and the Law of

War. Professor Heintschel von Heinegg was among a group of international lawyers and

naval experts who produced the San Remo Manual on International Law Applicable to

Armed Conflicts at Sea. In 2002 he published the German Navy’s Commander’s Handbook

on the Law of Naval Operations. Professor Heintschel von Heinegg has been a member of

several groups of experts working on the current state and progressive development of

international humanitarian law, including the Manual on Air and Missile Warfare (2010)


and the Tallinn Manual on the International Law Applicable to Cyber Warfare. He is a

widely published author of articles and books on public international law and German

constitutional law.

Nathalie Weizmann

Nathalie Weizmann is a legal advisor at the Arms Unit in the ICRC and was the focal point

for the negotiations that led to the adoption of the Arms Trade Treaty, which was the first

time that a majority of States agreed to establish controls on international transfers of

conventional weapons and ammunition.

Prof Noel Sharkey

Noel Sharkey is a Belfast-born Irish computer scientist. He is best known to the British

public for his appearances on television as an expert on robotics; including the BBC 2

television series Robot Wars and Techno Games, and co-hosting Bright Sparks for BBC

Northern Ireland. He is a professor at the University of Sheffield.

Sharkey chairs the International Committee for Robot Arms Control, an NGO that is seeking an international treaty to prohibit the development and use of autonomous robot

weapons - weapons that once launched can select human targets and kill them without

human intervention.

Sharkey is the founder and editor-in-chief of the academic journal Connection Science, and

an editor for Artificial Intelligence Review and Robotics and Autonomous Systems.

He formerly held the chair in the Department of Computer Science at the University of

Sheffield and was a Professor of Artificial Intelligence and Robotics and a Professor of Public Engagement. He has been supported by an EPSRC Senior Media Fellowship and a Leverhulme Fellowship on the ethics of battlefield robots.

He holds a doctorate in psychology, a doctorate in science, is a chartered electrical

engineer, a chartered information technology professional, a fellow of the Institution of

Engineering and Technology, a fellow of the British Computer Society, and a fellow of the

Royal Institute of Navigation.

In the academic world, Sharkey is best known for his contribution to machine learning and

cognitive science. Sharkey has written and spoken widely concerning the ethical

responsibilities of governments and international organisations in a world where robotics

applications are dramatically increasing, both in the military and policing contexts, and in

the medical care of children, the elderly and the sick.


APPENDIX B

Questions to guide interviews

1. Military necessity

Are there any reasons to believe that the development of AWS leading to wider

proliferation of increasingly cheaper systems, such as swarm technology, will entail

a gradual evolution of defence strategies, such that it will become militarily

necessary for all parties to conflicts to adopt such weapons?

2. Proportionality

As modern targeting practices have become increasingly process based and

empirical in nature, is the element of human judgement in terms of the

proportionality assessment becoming increasingly irrelevant?

3. Distinction

With so much disagreement surrounding the concept of Direct Participation in

Hostilities, is it possible to say that AWS would be any less likely than a human to err?

4. Humanity

Does the International Humanitarian Law principle of Humanity extend beyond the mere prohibition on causing unnecessary suffering, such that it requires the capacity to have a level of empathy beyond the cognitive ability of AWS?


5. Individual accountability

Does the lack of discourse concerning the potential for state liability, under

International Human Rights Law, indicate that this area of redress is insufficient to

supplant individual responsibility, or has its impact yet to be more widely

recognised?

6. Resort to War

In an increasingly asymmetric environment of warfare, will the deployment of AWS

result in drawing the conflict towards the territory of the deploying party?

7. Relinquishing the decision to kill a human to a non-human

Should the defence community expect greater public scrutiny in terms of the

development of AWS, which could result in more powerful calls for their

prohibition?

8. Developing an Artificial Intelligence based ‘ethical governor’ for Autonomous

Weapons

In terms of current developments in artificial intelligence, what are your capability

projections for AWS?


Bibliography

1. Alston, P. (2010), ‘Report of the Special Rapporteur on extrajudicial, summary or

arbitrary executions’, United Nations General Assembly, New York.

2. Anderson, K. and Waxman, M. (2013), ‘Law and Ethics for Autonomous Weapons

Systems, Why a Ban Won’t Work and the Laws of War Can’, Hoover Institute Task

force on National Security and Law, California: Stanford University Press.

3. Arkin, R., Ulam, P., and Duncan, B. (2009), ‘An Ethical Governor for Constraining

Lethal Action in an Autonomous System’, Georgia Institute of Technology: GVU

Centre.

4. Arkin, R. (2010) ‘The Case for Ethical Autonomy in Unmanned Systems’, Journal

of Military Ethics, 9:4, 332-341.

5. Asaro, P. (2012), ‘On banning autonomous weapons systems: human rights,

automation, and the dehumanization of lethal decision-making’, International

Review of the Red Cross, 94 (886), 687-709.

6. Atkinson, D. (2007), ‘The Danger of Robotic Weapons Systems’, [online], available

from: www.weirdfuture.blogspot.com [11 Feb 2014].

7. Bernard, V. (2012), ‘Science cannot be placed above its consequences’,

International Review of the Red Cross, 94 (886), 458.

8. Fein, G. (2013), ‘Lockheed Martin improves K-MAX situational awareness,

increases autonomy’, IHS Jane’s Defence Weekly, 30 July.

9. Gillespie, A. (2011) A History of the Laws of Wars, Oxford: Hart Publishing.

10. Guetlein, M. (2005), ‘Report on Lethal Autonomous Weapons – Legal and Ethical

Implications’, Newport: Naval War College.


11. Heyns, C. (2013), ‘Report of the Special Rapporteur on extrajudicial, summary or

arbitrary executions’, United Nations General Assembly, New York.

12. Hitchcock, G. and Hughes, D. (1989), Research and the Teacher: A Qualitative

introduction to school-based research, London: Routledge.

13. Human Rights Watch (2012), Losing Humanity, International Human Rights Clinic, Harvard Law School.

14. Johnson, A. and Axinn, S. (2013), ‘The Morality of Autonomous Robots’, Journal

of Military Ethics, 12 (2), 129-141.

15. Kellenberger, J., President of the ICRC, (2011) ‘Current Issues of International Humanitarian Law’, Keynote Address to the 34th Round Table Conference of the International Institute of Humanitarian Law, 8-10 September 2011, San Remo, Italy.

16. Krishnan, A. (2013) Killer Robots, London: Ashgate Publishing.

17. Lawand, K. (2013), ‘Seminar on fully autonomous weapon systems’, Paper

presentation by the head of the arms unit of the ICRC to the Mission Permanente de

France, 25 November 2013, ICRC Headquarters, Geneva, Switzerland.

18. Lee, C., (2013), ‘Northrop Grumman confirms talks to extend X-47B tests’, IHS

Jane's Defence Weekly, 11 September.

19. Liu, Hin-Yan. (2012), ‘Categorization and Legality of Autonomous Weapons

Systems’ International Review of the Red Cross, 94 (886), 627-652.

20. Matthias, A. (2011), ‘Algorithmic Moral Control of War Robots: Philosophical

Questions’, Law Innovation and Technology, 3 (2), 279-301.

21. Muir, T. (2013), ‘UK's Taranis UCAV at Woomera for tests’, Australian Defence Magazine, 21 August.


22. Salomon, G. (1991), ‘Transcending the qualitative-quantitative debate: The analytic and systemic approaches to educational research’, Educational Researcher, 20 (6), 10-18.

23. Schmitt, M. (2013), Autonomous Weapon Systems and International Humanitarian

Law: A Reply to the Critics, Presidents and Fellows of Harvard College.

24. Schmitt, M., Thurnher, J. (2013) “Out of the Loop”: Autonomous Weapon Systems

and the Law of Armed conflict, Presidents and Fellows of Harvard College.

25. Sharkey, N. (2012), ‘The evitability of autonomous robot warfare’, International

Review of the Red Cross, 94 (886), 787-799.

26. Sharkey, N. (2014), ‘Towards a principle for the human supervisory control of robot weapons’, Politica & Società, 2014, pre-publication draft, 26 March 2014.

27. Singer, P. (2009) Wired for War, New York: Penguin Books.

28. Singer, P. (2012), Interview with Peter W. Singer, International Review of the Red

Cross, 94 (886), 478.

29. Solis, G. (2010) The Law of Armed Conflict, New York: Cambridge University

Press.

30. Sparrow, R. (2007), ‘Killer Robots’, Journal of Applied Philosophy, 24 (1), 62-77.

31. Stewart, D. (2011), ‘New Technology and the Law of Armed Conflict: Technological Meteorites and Legal Dinosaurs?’, International Law Studies, 87 (10), 271-298.

32. Thurnher, J. (2013), Examining Autonomous Weapons from a Law of Armed

Conflict Perspective, in Nasu, H. and McLaughlin, R. (eds.), New Technologies and

the Law of Armed Conflict, The Hague: T.M.C. Asser Press.


33. Thurnher, J. (2012), No One at the Controls, Legal Implications of Fully

Autonomous Targeting, Joint Force Quarterly, 67 (4), 77-84.

34. UK Ministry of Defence (2011), UK Approach to Unmanned Aircraft Systems, Joint

Doctrine Note 2/11, Shrivenham: The Development, Concepts and Doctrine Centre.

35. US Department of Defense (2012), Autonomy in Weapon Systems, Directive 3000.09, Washington.