Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].
Section 1: Introduction
Across many areas of modern life, “authority is increasingly expressed algorithmically.”[1] War is no exception.
Complex algorithms help determine a person’s creditworthiness.[2] They suggest what movies to watch. They detect healthcare fraud. And they are used to trade stocks at speeds far faster than any human could manage. (Sometimes, algorithms contribute to market crashes[3] or form a basis for antitrust prosecutions.[4])
Warring parties express authority and power through algorithms, too. For decades, algorithms have helped weapons systems—first at sea and later on land—identify and intercept inbound missiles.[5] Today, military systems are increasingly capable of navigating novel environments and surveilling faraway populations, as well as identifying targets, estimating harm, and launching direct attacks—all with fewer humans at the switch.[6] Indeed, in recent years, commercial and military developments in algorithmically-derived autonomy[7] have created diverse benefits for the armed forces in terms of “battlespace awareness,”[8] protection,[9] “force application,”[10] and logistics.[11] And that is by no means an exhaustive list of applications. Meanwhile, other algorithmically-derived war functions may not be far off—and, indeed, might already exist. Consider the provision of medical care to the wounded and sick hors de combat (such as combatants who have been rendered incapable of fighting and who are therefore “outside of the battle”[12]) or the capture, transfer, and detention of enemy fighters.
Much of the underlying technology—often developed initially in commercial or academic contexts—is susceptible to both military and non-military use. It is thus commonly characterized as “dual-use,” a shorthand for technology capable of serving civilian and military functions alike. Costs of the technology are dropping, often precipitously. And, once the technology exists, the assumption is usually that a broad range of actors can utilize it.
Driven in no small part by commercial interests, developers are advancing relevant technologies and technical architectures at a rapid pace. The concern that those advancements—often made in consumer-facing computer science and robotics fields—could cross a moral Rubicon if unscrupulously adapted for belligerent purposes is being raised with growing frequency in international forums and among technical communities, as well as in the popular press.
Some of the most relevant advancements involve constructed systems through which huge amounts of data are quickly gathered and ensuing algorithmically-derived “choices” are effectuated. “Self-driving” or “autonomous” cars are one example. Ford, for instance, mounts four laser-based sensors on the roof of its self-driving research car, and collectively those sensors “can capture 2.5 million 3-D points per second within a 200-foot range.”[13] Legal, ethical, political, and social commentators are casting attention on—and vetting proposed standards and frameworks to govern—the life-and-death “choices” made by autonomous cars.
Among the other relevant advancements is the potential for learning algorithms and architectures to reach human-level performance in a growing number of previously intractable artificial-intelligence (AI) domains. For instance, a computer program recently achieved a feat previously thought to be at least a decade away: defeating a human professional player in a full-sized game of Go.[14] In March 2016, in a five-game match, AlphaGo—a computer program using an AI technique known as “deep learning,” which “allows computers to extract patterns from masses of data with little human hand-holding”—won four games against Go expert Lee Sedol.[15] Google, Amazon, and Baidu use the same AI technique or similar ones for such tasks as facial recognition and serving advertisements on websites. Following AlphaGo’s series of wins, computer programs have now outperformed humans at chess, backgammon, “Jeopardy!”, and Go.[16]
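To give non-specialist readers a concrete, if deliberately simplified, sense of what “extracting patterns from masses of data with little human hand-holding” can look like in code, the following sketch—our own toy illustration, not drawn from AlphaGo or from any military system—trains a tiny neural network to reproduce a simple logical pattern from examples alone. Everything it “learns” ends up stored in numeric weights rather than in human-readable rules.

```python
# A minimal, self-contained sketch (our own toy illustration, not AlphaGo or any
# military system): a tiny two-layer neural network "learns" the XOR pattern from
# examples, with the learned "knowledge" stored only in numeric weights.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four input pairs and the XOR label for each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights; training adjusts these numbers, nothing else.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

for step in range(20000):
    # Forward pass: compute the network's current guesses.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # Backward pass: nudge the weights to reduce the squared error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ grad_out)
    W1 -= 0.5 * (X.T @ grad_hidden)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]: the pattern was extracted from data alone
```

Notably, what such a program “knows” after training is encoded only in arrays of numbers (here, W1 and W2)—a feature we return to below when discussing foreseeability and reconstructability.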
Yet even among leading scientists, uncertainty prevails as to the technological limits. That uncertainty forestalls consensus on current capabilities, to say nothing of predictions about likely developments in the near and long term (with those horizons defined variously).
The stakes are particularly high in the context of political violence that reaches the level of “armed conflict.” That is because international law admits of far more lawful death, destruction, and disruption in war than in peace.[17] Even for responsible parties who are committed to the rule of law, the legal regime contemplates the deployment of lethal and destructive technologies on a wide scale. The use of advanced technologies—to say nothing of the failures, malfunctioning, hacking, or spoofing of those technologies—might therefore entail far more significant consequences in relation to war than to peace.[18] We focus here largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres.
Of course, the development and use of advanced technologies in relation to war have long generated ethical, political, and legal debates. There is nothing new about the general desire and the need to discern whether the use of an emerging technological capability would comport with or violate the law. Today, however, emergent technologies sharpen—and, to a certain extent, recast—that enduring endeavor. A key reason is that those technologies are seen as presenting an inflection point at which human judgment might be “replaced” by algorithmically-derived “choices.” To unpack and understand the implications of that framing requires, among other things, technical comprehension, ethical awareness, and legal knowledge. Understandably if unfortunately, competence across those diverse domains has so far proven difficult to achieve for the vast majority of states, practitioners, and commentators.
Largely, the discourse to date has revolved around a concept that so far lacks a definitional consensus: “autonomous weapon systems” (AWS).[19] Current conceptions of AWS vary enormously. On one end of the spectrum, an AWS is an automated component of an existing weapon. On the other, it is a platform that is itself capable of sensing, learning, and launching resulting attacks. Irrespective of how it is defined in a particular instance, the AWS framing narrows the discourse to weapons, excluding the myriad other functions, however benevolent, that the underlying technologies might be capable of performing.
What autonomous weapons mean for legal responsibility and for broader accountability has generated one of the most heated recent debates about the law of war. A constellation of factors has shaped the discussion.
Perceptions of evolving security threats, geopolitical strategy, and accompanying developments in military doctrine have led governments to prioritize the use of unmanned and increasingly autonomous systems (with “autonomous” defined variously) in order to gain and maintain a qualitative edge. The systems are said to present manifold military advantages—in short, a “seductive combination of distance, accuracy, and lethality.”[20] By 2013, leadership in the U.S. Navy and Department of Defense (DoD) had identified autonomy in unmanned systems as a “high priority.”[21] A few months ago, the Ministries of Foreign Affairs and Defense of the Netherlands affirmed their belief that “if the Dutch armed forces are to remain technologically advanced, autonomous weapons will have a role to play, now and in the future.”[22] A growing number of states hold similar views.
At the same time, human-rights advocates and certain technology experts have catalyzed initiatives to promote a ban on “fully autonomous weapons” (which those advocates and experts also call “killer robots”). The primary concerns are couched in terms of delegating decisions about lethal force away from humans—thereby “dehumanizing” war—and, in the process, of making wars easier to prosecute.[23] Following the release in 2012 of a report by Human Rights Watch and the International Human Rights Clinic at Harvard Law School,[24] the Campaign to Stop Killer Robots was launched in April 2013 with an explicit goal of fostering a “pre-emptive ban on fully autonomous weapons.”[25] The rationale is that such weapons will, pursuant to this view, never be capable of comporting with international humanitarian law (IHL) and are therefore per se illegal. In July 2015, thousands of prominent AI and robotics experts, as well as other scientists, endorsed an “Open Letter” on autonomous weapons, arguing that “[t]he key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”[26] Those endorsing the letter “believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” But, they cautioned, “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”[27]
Meanwhile, a range of commentators has argued in favor of regulating AWS, primarily through existing international law rules and provisions. In general, these voices focus on grounding the discourse in terms of the capability of existing legal norms—especially those laid down in IHL—to regulate the design, development, and use, or to prohibit the use, of emergent technologies. In doing so, these commentators often emphasize that states have already developed a relatively thick set of international law rules that guide decisions about life and death in war. Even if there is no specific treaty addressing a particular weapon, they argue, IHL regulates the use of all weapons through general rules and principles governing the conduct of hostilities that apply irrespective of the weapon used. A number of these voices also aver that—for political, military, commercial, or other reasons—states are unlikely to agree on a preemptive ban on fully autonomous weapons, and therefore a better use of resources would be to focus on regulating the technologies and monitoring their use. In addition, these commentators often emphasize the modularity of the technology and raise concerns about foreclosing possible beneficial applications in the service of an (in their eyes, highly unlikely) prohibition on fully autonomous weapons.
Overall, the lack of consensus on the root classification of AWS and on the scope of the resulting discussion makes it difficult to generalize. But the main contours of the ensuing “debate” often cast a purportedly unitary “ban” side versus a purportedly unitary “regulate” side. As with many shorthand accounts, this formulation is overly simplistic. An assortment of thoughtful contributors does not fit neatly into either general category. And, when scrutinized, those wholesale categories—of “ban” vs. “regulate”—disclose fundamental flaws, not least because of the lack of agreement on what, exactly, is meant to be prohibited or regulated. Be that as it may, a large portion of the resulting discourse has been captured in these “ban”-vs.-“regulate” terms.
Underpinning much of this debate are arguments about decision-making in war and about who is better situated to make life-and-death decisions—humans or machines. There is also disagreement over the benefits and costs of distancing human combatants from the battlefield, and over whether the possible life-saving benefits of AWS are offset by the fact that war also becomes, in certain respects, easier to conduct. There are also different understandings of, and predictions about, what machines are and will be capable of doing.
With the rise of expert and popular interest in AWS, states have been paying more public attention to the issue of regulating autonomy in war. But the primary venue at which they are doing so functionally limits the discussion to weapons.[28] Since 2014, informal expert meetings on “lethal autonomous weapons systems” have been convened on an annual basis at the United Nations Office in Geneva. These meetings take place within the structure of the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (CCW). That treaty is set up as a framework convention: through it, states may adopt additional instruments that pertain to the core concerns of the baseline agreement (five such protocols have been adopted). Alongside the CCW, other arms-control treaties address specific types of weapons, including chemical weapons, biological weapons, anti-personnel landmines, cluster munitions, and others. The CCW is the only existing regime, however, that is ongoing and open-ended and is capable of being used as a framework to address additional types of weapons.
The original motivation to convene states as part of the CCW was to propel a protocol banning fully autonomous weapons. The most recent meeting (which was convened in April 2016) recommended that the Fifth Review Conference of states parties to the CCW (which is scheduled to take place in December 2016) “may decide to establish an open-ended Group of Governmental Experts (GGE)” on AWS. In the past, the establishment of a GGE has led to the adoption of a new CCW protocol (one banning permanently blinding lasers). Whether states parties establish a GGE on AWS—and, if so, what its mandate will be—are open questions. In any event, at the most recent meetings, about two dozen states endorsed the notion—the contours of which remain undefined so far—of “meaningful human control” over autonomous weapon systems.[29]
Zooming out, we see that a pair of interlocking factors has obscured and hindered analysis of whether the relevant technologies can and should be regulated.
One factor is the sheer technical complexity at issue. Lack of knowledge of technical intricacies has hindered efforts by non-experts to grasp how the core technologies may either fit within or frustrate existing legal frameworks.
This is not a challenge particular to AWS, of course. The majority of IHL professionals are not experts in the inner workings of the numerous technologies related to armed conflict. Most IHL lawyers could not detail the technical specifications, for instance, of various armaments, combat vehicles, or intelligence, surveillance, and reconnaissance (ISR) systems. But in general that lack of technical knowledge would not necessarily impede at least a provisional analysis of the lawfulness of the use of such a system. That is because an initial IHL analysis is often an exercise in identifying the relevant rule and beginning to apply it in relation to the applicable context. Yet the widely diverse conceptions of AWS and the varied technologies accompanying those conceptions pose an as-yet-unresolved set of classification challenges. And without a threshold classification, a general legal analysis cannot proceed.
The other, related factor is that states—as well as lawyers, technologists, and other commentators—disagree in key respects on what should be addressed. The headings so far include “lethal autonomous robots,” “lethal autonomous weapons systems,” “autonomous weapons systems” more broadly, and “intelligent partnerships” more broadly still. And the possible standards mentioned include “meaningful human control” (including in the “wider loop” of targeting operations), “meaningful state control,” and “appropriate levels of human judgment.”[30] More basically, there is no consensus on whether to include only weapons or, additionally, systems capable of involvement in other armed conflict-related functions, such as transporting and guarding detainees, providing medical care, and facilitating humanitarian assistance.
Against this backdrop, the AWS framing has largely precluded meaningful analysis of whether it (whatever “it” entails) can be regulated, let alone whether and how it should be regulated.[31] In this briefing report, we recast the discussion by introducing the concept of “war algorithms.”[32] We define “war algorithm” as any algorithm[33] that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. Those algorithms seem to be a—and perhaps the—key ingredient of what most people and states discuss when they address AWS. We expand the purview beyond weapons alone (important as those are) because the technological capabilities are rarely, if ever, limited to use only as weapons and because other war functions involving algorithmically-derived autonomy should be considered for regulation as well. Moreover, given the modular nature of much of the technology, a focus on weapons alone might thwart attempts at regulation.
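To make the definition more concrete, the following deliberately simplified sketch shows one form that “an algorithm expressed in computer code” can take. It is a hypothetical illustration of our own—the function, the sensor fields, and the thresholds are all invented—and the rule merely flags an object for review by a human operator; an explicit, human-written rule of this general kind, if effectuated through a constructed system capable of operating in relation to armed conflict, would fall within our definition.

```python
# Hypothetical illustration only: a hand-written decision rule of the general kind
# our definition of "war algorithm" is meant to capture. Every name, field, and
# threshold below is invented; the rule does nothing more than flag an object
# for review by a human operator.
from dataclasses import dataclass

@dataclass
class SensorReading:
    speed_mps: float              # measured speed of a detected object, meters per second
    heading_toward_asset: bool    # whether the object is tracking toward a protected asset
    transponder_present: bool     # whether a recognized identification signal was detected

def recommend_action(reading: SensorReading) -> str:
    """Return a recommendation for a human operator, based on explicit, pre-written rules."""
    if reading.transponder_present:
        return "ignore"              # identified signal: treat as friendly or civilian
    if reading.speed_mps > 300 and reading.heading_toward_asset:
        return "alert_operator"      # fast inbound object: flag for human review
    return "continue_monitoring"

print(recommend_action(SensorReading(speed_mps=350, heading_toward_asset=True,
                                     transponder_present=False)))   # -> "alert_operator"
```

Because every rule in such a sketch is written out in advance by a programmer, its behavior can in principle be anticipated and reconstructed after the fact—a property that, as discussed below, self-learning algorithms may not share.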
Algorithms are a conceptual and technical building block of many systems. Those systems include self-learning architectures that today present some of the sharpest questions about “replacing” human judgment with algorithmically-derived “choices.” Moreover, algorithms form a foundation of most of the systems and platforms—and even the “systems of systems”—often discussed in relation to AWS. Absent an unforeseen development, algorithms are likely to remain a pillar of the technical architectures.
The constructed systems through which these algorithms are effectuated differ enormously. So do the nature, forms, and tiers of human control and governance over them. Existing constructed systems include, among many others, stationary turrets, missile systems, and manned or unmanned aerial, terrestrial, or marine vehicles.[34]
All of the underlying algorithms are developed by programmers and are expressed in computer code. But some of these algorithms—especially those capable of “self-learning” and whose “choices” might be difficult for humans to anticipate or unpack—seem to challenge fundamental and interrelated concepts that underpin international law pertaining to armed conflict and related accountability frameworks. Those concepts include attribution, control, foreseeability, and reconstructability.
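As a rough illustration of why reconstructability can be harder for self-learning systems, compare the hand-written rule sketched above with the following toy example—again our own, using synthetic data and invented features—in which the decision boundary is learned from examples. After training, the “rule” resides entirely in a vector of numbers.

```python
# A contrasting toy sketch (our own, with synthetic data and invented features):
# here the decision boundary is learned from examples rather than written out by
# hand. After training, the "rule" resides entirely in the numeric weight vector w.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: three made-up sensor features per example; the labels
# are generated here only so that the sketch is self-contained and runnable.
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(float)

# Fit a simple logistic-regression model by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # current predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # adjust weights to reduce prediction error

print("learned weights:", np.round(w, 2))
# Any after-the-fact "explanation" of why a particular input produced a particular
# output ultimately reduces to these numbers—one reason reconstructing the basis
# for a learned system's choice can be harder than reading a hand-written rule.
```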
At their core, the design, development, and use of war algorithms raise profound questions. Most fundamentally, those inquiries concern who, or what, should decide—and what it means to decide—matters of life and death in relation to war. But war algorithms also bring to the fore an array of more quotidian, though also important, questions about the benefits and costs of human judgment and “replacing” it with algorithmically-derived systems, including in such areas as logistics.
We ground our analysis by focusing on war-algorithm accountability. In doing so, we sketch a three-axis accountability approach for those algorithms: state responsibility for a breach of a rule of international law, individual responsibility under international law for international crimes, and a broad notion of scrutiny governance. This is not an exhaustive list of possible types of accountability. But the axes we outline offer a flavor of how accountability, in general, could be conceptualized in the context of war algorithms.
In short, we are primarily interested in the “duty to account … for the exercise of power”[35] over—in other words, holding someone or some entity answerable for—the design, development, or use (or a combination thereof) of a war algorithm.[36] That power may be exercised by a diverse assortment of actors. Some are obvious, especially states and their armed forces. But myriad other individuals and entities may exercise power over war algorithms, too. Consider the broad classes of “developers” and “operators,” both within and outside of government, of such algorithms and their related systems. Also think of lawyers, industry bodies, political authorities, members of organized armed groups—and many, many others. Focusing on war algorithms encompasses them all.
Objective, Approach, and Methodology
In this briefing report, our objective is not to argue whether international law, as it currently exists, sufficiently addresses the plethora of issues raised by autonomous weapon systems. Rather, we aim to shed light on and recast the discussion in terms of a new concept: war algorithms. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad category of war algorithms might be susceptible to regulation (and how those algorithms might already fit within the existing regulatory system established by international law).
We draw on the extensive—and rapidly growing—body of scholarship and other analyses that have addressed related topics.[37] To help illuminate the discussion, we outline what technologies and weapon systems already exist, what fields of international law might be relevant, and what regulatory avenues might be available. As noted above, because international law is the touchstone normative framework for accountability in relation to war, we focus on public international law sources and methodologies. But as we show, other norms and forms of governance might also merit attention.
Accountability is a broad term of art. We adapt—from the work of an International Law Association Committee in a different context (the accountability of international organizations)—a three-part accountability approach.[38] Our framework outlines three axes on which to focus initially on war algorithms.
The first axis is state responsibility. It concerns state responsibility arising out of acts or omissions involving a war algorithm where those acts or omissions constitute a breach of a rule of international law. State responsibility entails discerning the content of the rule, identifying a breach of the rule, assigning attribution for that breach to a state, determining available excuses (if any), and imposing measures of remedy.
The second axis is a form of individual responsibility under international law. In particular, it concerns individual responsibility under international law for international crimes—such as war crimes—involving war algorithms. This form of individual responsibility entails establishing the commission of a crime under the relevant jurisdiction, assessing the existence of a justification or excuse (if any), and, upon conviction, imposing a sentence.
The third and final axis is scrutiny governance. Embracing a wider notion of accountability, it concerns the extent to which a person or entity is and should be subject to, or should exercise, forms of internal or external scrutiny, monitoring, or regulation (or a combination thereof) concerning the design, development, or use of a war algorithm. Scrutiny governance does not hinge on—but might implicate—potential and subsequent liability or responsibility (or both). Forms of scrutiny governance include independent monitoring, norm (such as legal) development, the adoption of non-binding resolutions and codes of conduct, normative design of technical architectures, and community self-regulation.
Outline
In Section 2, we outline pertinent considerations regarding algorithms and constructed systems. We then highlight recent advancements in artificial intelligence related to learning algorithms and architectures. We next examine state approaches to technical autonomy in war, focusing on five such approaches. Finally, to ground the often-theoretical debate pertaining to autonomous weapon systems, we describe existing weapon systems that have been characterized by various commentators as AWS.
In Section 3, we outline the main fields of international law that war algorithms might implicate. There is no single branch of international law dedicated solely to war algorithms. So we canvass how those algorithms might fit within or otherwise implicate various fields of international law. We ground the discussion by outlining the main ingredients of state responsibility: attribution, breach, excuses, and consequences. Then, to help illustrate states’ positions concerning AWS, we examine whether an emerging norm of customary international law specific to AWS may be discerned. We find that one cannot (at least not yet). So we next highlight how the design, development, or use (or a combination thereof) of a war algorithm might implicate more general principles and rules found in various fields of international law. Those fields include the jus ad bellum, IHL, international human rights law, international criminal law, and space law. Because states and commentators have largely focused on AWS to date, much of our discussion here relates to the AWS framing.
In Section 4, we elaborate a (non-exhaustive) war-algorithm accountability approach. That approach focuses on state responsibility for an internationally wrongful act, on individual responsibility under international law for international crimes, and on wider forms of scrutiny, monitoring, and regulation. We highlight existing accountability actors and architectures under international law that might regulate war algorithms. These include war reparations as well as international and domestic tribunals. We then turn to less conventional accountability avenues, such as those rooted in normative design of technical architectures (including maximizing the auditability of algorithms) and community self-regulation.
In the Conclusion, we return to the deficiencies of current discussions of AWS and emphasize the importance of addressing the wide and serious concerns raised by AWS with technical proficiency, legal expertise, and non-ideological commitment to a genuine and inclusive inquiry.
We also attach a Bibliography and Appendices. The Bibliography contains over 400 analytical sources, in various languages, pertaining to technical autonomy in war. The Appendices contain detailed charts listing and categorizing states’ statements at the 2015 and 2016 Informal Meetings of Experts on Lethal Autonomous Weapons Systems convened within the framework of the CCW.
Caveats
The bulk of the secondary-source research was conducted in English. Moreover, none of us is an expert in computer science or robotics. We consulted specialists in these fields, but we alone are responsible for any remaining errors. In any event, given the rapid pace of development, the technologies discussed in this briefing report may soon be eclipsed—if they have not been already.
[1]. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Society 8 (2015), citing Clay Shirky, A Speculative Post on the Idea of Algorithmic Authority, Clay Shirky (November 15, 2009, 4:06 PM), http://www.shirky.com/weblog/2009/11/a-speculative-post-on-the-idea-of-algorithmic-authority (referencing Shirky’s definition of “algorithmic authority” as “the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying ‘Trust this because you trust me.’”).
[2]. On the examples in this paragraph, see generally Pasquale, supra note 1.
[3]. See generally U.S. Commodity Futures Trading Commission & U.S. Securities & Exchange Commission, Findings Regarding the Market Events of May 6, 2010: Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues (2010), https://www.sec.gov/news/studies/2010/marketevents-report.pdf.
[4]. See, e.g., Jill Priluck, When Bots Collude, New Yorker, April 25, 2015, http://www.newyorker.com/business/currency/when-bots-collude.
[5]. The use of artificial intelligence and other forms of algorithmic systems in relation to war is far from new. For examples from nearly three decades ago, see Defense Applications of Artificial Intelligence (Stephen J. Andriole & Gerald W. Hopple eds., 1988).
[6]. See generally, e.g., Paul J. Springer, Military Robots and Drones: A Reference Handbook (2013); see also infra Section 2: Examples of Purported Autonomous Weapon Systems.
[7]. In a recent report, the Defense Science Board uses a definition of autonomy that implies the use of one or more algorithms: “To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.” Defense Science Board, Summer Study on Autonomy 4 (June 2016) (noting that “[d]efinitions for intelligent system, autonomy, automation, robots, and agents can be found in L.G. Shattuck, Transitioning to Autonomy: A human systems integration perspective, p. 5. Presentation at Transitioning to Autonomy: Changes in the role of humans in air transportation [March 11, 2015]. Available at http://human-factors.arc.nasa.gov/workshop/autonomy/download/presentations/Shaddock%20.pdf.”). Id. at n.1.
[8]. E.g., autonomous agents to improve cyber-attack indicators and warnings; onboard autonomy for sensing; and time-critical intelligence from seized media. See Defense Science Board, supra note 7, at 46–53.
[9]. E.g., dynamic spectrum management for protection missions; unmanned underwater vehicles (UUVs) to autonomously conduct sea-mine countermeasures missions; and automated cyber-response. See Defense Science Board, supra note 7, at 53–60.
[10]. E.g., cascaded UUVs for offensive maritime mining, and organic tactical unmanned aircraft to support ground forces. See Defense Science Board, supra note 7, at 60–68. The term “force application” is defined in the report as “the ability to integrate the use of maneuver and engagement in all environments to create the effects necessary to achieve mission objectives.” Id. at 60.
[11]. E.g., predictive logistics and adaptive planning, and adaptive logistics for rapid deployment. See Defense Science Board, supra note 7, at 69–75.
[12]. Under international humanitarian law (IHL), a person is hors de combat if (i) she is in the power of an adverse party, (ii) she clearly expresses an intention to surrender, or (iii) she has been rendered unconscious or is otherwise incapable of defending herself, provided that in any of these cases she abstains from any hostile act and does not attempt to escape; shipwrecked persons cannot be excluded from the construct of hors de combat. This formulation is derived from the Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts art. 41(2), June 8, 1977, 1125 U.N.T.S. 3 [hereinafter AP I]; see also, e.g., Yoram Dinstein, Non-International Armed Conflicts in International Law 164 (2014).
[13]. Ucilia Wang, Driverless Cars Are Data Guzzlers, Wall Street Journal, March 23, 2014, http://www.wsj.com/articles/SB10001424052702304815004579417441475998338.
[14]. David Silver et al., Mastering the Game of Go with Deep Neural Networks and Tree Search, 529 Nature 484, 488 (2016). Go is a board game pitting two players in a contest to surround more territory than one’s opponent; it is played on a grid of black lines, with game pieces played on the lines’ intersections. A full-sized board is a 19-by-19 grid. Part of the reason Go presents such a difficult computational challenge is that its search space is so large. “After the first two moves of a Chess game,” for instance, “there are 400 possible next moves. In Go, there are close to 130,000.” Danielle Muoio, Why Go is So Much Harder for AI to Beat Than Chess, Tech Insider, March 10, 2016, http://www.techinsider.io/why-google-ai-game-go-is-harder-than-chess-2016-3.
[15]. A Game-Changing Result, The Economist, March 19, 2016, http://www.economist.com/news/science-and-technology/21694883-alphagos-masters-taught-it-game-electrifying-match-shows-what.
[16]. Id.
[17]. In this report, while recognizing certain distinctions and overlaps between them, we use the terms “war” and “armed conflict” interchangeably to denote an armed conflict (whether of an international or a non-international character) as defined in international law and a state of war in the legal sense. See, e.g., Jann Kleffner, Scope of Application of International Humanitarian Law, in The Handbook of International Humanitarian Law (Dieter Fleck ed., 3rd ed. 2013).
[18]. See, e.g., Marten Zwanenburg et al., Humans, Agents and International Humanitarian Law: Dilemmas in Target Discrimination, BNAIC 408 (2005) (examining the destruction of a commercial airliner by the USS Vincennes to illustrate legal and ethical dilemmas involving the use of autonomous agents).
[19]. Among states and commentators, there is no agreement on whether to refer to “autonomous weapons,” “autonomous weapon systems,” or “autonomous weapons systems,” among many other formulations. Throughout this report, where referring to the views of a particular state(s) or commentator(s), we adopt that entity’s or person’s framing. Otherwise, for ease of reference, we adopt the “autonomous weapon system(s)” framing.
[20]. Rebecca Crootof, War Torts: Accountability for Autonomous Weapons Systems, 164 U. Pa. L. Rev. (forthcoming June 2016), http://ssrn.com/abstract=2657680 [hereinafter Crootof, War Torts]. In June 2016, the Defense Science Board highlighted six categories of how autonomy can benefit Department of Defense (DoD) missions:
- Required decision speed: more autonomy is valuable when decisions must be made quickly (e.g., cyber operations and missile defense);
- Heterogeneity and volume of data: more autonomy is valuable with high volume data and variety of data types (e.g., imagery; intelligence data analysis; intelligence, surveillance, reconnaissance (ISR) data integration);
- Quality of data links: more autonomy is valuable when communication is intermittent (e.g., times of contested communications, unmanned undersea operations);
- Complexity of action: more autonomy is valuable when activity is multimodal (e.g., an air operations center, multi-mission operations);
- Danger of mission: more autonomy can reduce the number of warfighters in harm’s way (e.g., in contested operations; chemical, biological, radiological, or nuclear attack cleanup); and
- Persistence and endurance: more autonomy can increase mission duration (e.g., enabling unmanned vehicles, persistent surveillance).
See Defense Science Board, supra note 7, at 45 (June 2016).
[21]. U.S. Dep’t of Defense, Unmanned Systems Integrated Roadmap: FY2013–2038, at 67 (2013), http://www.defense.gov/Portals/1/Documents/pubs/DOD-USRM-2013.pdf.
[22]. Gov’t (Neth.), Government Response to AIV/CAVV Advisory Report no. 97, Autonomous Weapon Systems: The Need for Meaningful Human Control (2016), http://aiv-advice.nl/8gr#government-responses [hereinafter Dutch Government, Response to AIV/CAVV Report]. At the same time, however, the Dutch government “reject[ed] outright the possibility of developing and deploying fully autonomous weapons.” Id.
[23]. See, e.g., Mary Ellen O’Connell, Banning Autonomous Killing, in The American Way of Bombing: Changing Ethical and Legal Norms, from Flying Fortresses to Drones (Matthew Evangelista & Henry Shue eds., 1st ed. 2014).
[24]. Human Rights Watch and the Harvard Law School International Human Rights Clinic, Losing Humanity: The Case against Killer Robots (2012), https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.
[25]. See, e.g., Act, Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/act (last visited Aug. 23, 2016).
[26]. Autonomous Weapons: An Open Letter from AI & Robotics Researchers, Future of Life Institute (July 28, 2015), http://futureoflife.org/open-letter-autonomous-weapons.
[27]. Id.
[28]. AWS have also been raised at the U.N. Human Rights Council, though without the thematic focus given to them in the context of the Convention on Certain Conventional Weapons (CCW). See, e.g., Christof Heyns (Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions), Rep. to Human Rights Council, ¶¶ 142–45, UN Doc. A/HRC/26/36 (Apr. 1, 2014).
[29]. See infra Section 3: International Law pertaining to Armed Conflict — Customary International Law concerning AWS.
[30]. See infra Appendices I and II.
[31]. On various formal and informal models of regulating new technologies, see generally Benjamin Wittes & Gabriella Blum, The Future of Violence: Robots and Germs, Hackers and Drones—Confronting A New Age of Threat (2015); with respect to autonomous military robots, see Gary E. Marchant et al., International Governance of Autonomous Military Robots, 12 Colum. Sci. & Tech. L. Rev. 272 (2011).
[32]. Our concept of “war algorithms” should be distinguished from the “WAR algorithm” concept that has been developed in relation to evaluating environmental impacts. See Environmental Protection Agency, Waste Reduction Algorithm: Chemical Process Simulation for Waste Reduction, https://www.epa.gov/chemical-research/waste-reduction-algorithm-chemical-process-simulation-waste-reduction (last visited Aug. 27, 2016) (explaining that “[t]raditionally chemical process designs, focus on minimizing cost, while the environmental impact of a process is often overlooked. This may in many instances lead to the production of large quantities of waste materials. It is possible to reduce the generation of these wastes and their environmental impact by modifying the design of the process. The WAste Reduction (WAR) algorithm was developed so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environmental and related human health impacts at the design stage.”)
[33]. See infra Section 2: Technology Concepts and Developments (on general definitions of “algorithm”).
[34]. See infra Section 2: Examples of Purported Autonomous Weapon Systems.
[35]. Drawn from the discussion of International Law Association, Committee on Accountability of International Organizations, Berlin Conference: Final Report 5 (2004), http://www.ila-hq.org/en/committees/index.cfm/cid/9 in James Crawford, State Responsibility: The General Part 85 (2013).
[36]. In principle, the threat of use of a war algorithm may (also) give rise to legal implications; however, we focus on the design, development, and use of those algorithms.
[37]. See infra Bibliography.
[38]. Our approach is derived in part from International Law Association, supra note 35, at 5.