Executive Summary

Note: A PDF of this Executive Summary is available here [link], and more information about this PILAC Project as well as the full version of the Briefing Report are available here [link].

Across many areas of modern life, “authority is increasingly expressed algorithmically.”[1] War is no exception.

In this briefing report, we introduce a new concept—war algorithms—that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.”

In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation—and how those algorithms might already fit within the existing regulatory system established by international law.
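
To make those two ingredients concrete, the following minimal sketch (in Python) pairs a toy decision rule with a toy “constructed system” that effectuates the rule’s “choice” without further human intervention. Every name, threshold, and scenario in the sketch is invented solely for illustration; it does not describe any actual weapon, platform, or system discussed in this report.

```python
# A deliberately minimal, hypothetical sketch. It pairs the two core
# ingredients named above: an algorithm expressed in computer code, and a
# constructed system that effectuates the algorithm's "choice" without
# further human intervention. All names and thresholds are invented.

from dataclasses import dataclass


@dataclass
class SensorReading:
    object_id: str
    closing_speed_mps: float  # speed at which the tracked object approaches
    distance_m: float         # current distance from the platform


def intercept_decision(reading: SensorReading,
                       speed_threshold: float = 300.0,
                       range_threshold: float = 5000.0) -> bool:
    """The 'algorithm': a fixed rule mapping sensor inputs to a choice."""
    return (reading.closing_speed_mps > speed_threshold
            and reading.distance_m < range_threshold)


class ConstructedSystem:
    """The 'constructed system': the machinery that effectuates the choice."""

    def process(self, reading: SensorReading) -> str:
        # No human reviews this step; the output is acted on directly.
        if intercept_decision(reading):
            return f"intercept {reading.object_id}"
        return "hold"


if __name__ == "__main__":
    system = ConstructedSystem()
    print(system.process(SensorReading("track-07", closing_speed_mps=450.0, distance_m=3200.0)))
```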

*     *     *

Warring parties have long expressed authority and power through algorithms. For decades, algorithms have helped weapons systems—first at sea and later on land—to identify and intercept inbound missiles. Today, military systems are increasingly capable of navigating novel environments and surveilling faraway populations, as well as identifying targets, estimating harm, and launching direct attacks—all with fewer humans at the switch. Indeed, in recent years, commercial and military developments in algorithmically-derived autonomy have created diverse benefits for the armed forces in terms of “battlespace awareness,” protection, “force application,” and logistics. And those are by no means the exhaustive set of applications.

Much of the underlying technology—often developed initially in commercial or academic contexts—is susceptible to both military and non-military use. Most of it is thus characterized as “dual-use,” a shorthand for being capable of serving a wide array of functions. Costs of the technology are dropping, often precipitously. And, once the technology exists, the assumption is usually that it can be utilized by a broad range of actors.

Driven in no small part by commercial interests, developers are advancing relevant technologies and technical architectures at a rapid pace. The potential for those advancements to cross a moral Rubicon is being raised more frequently in international forums and among technical communities, as well as in the popular press.

Some of the most relevant advancements involve constructed systems through which huge amounts of data are quickly gathered and ensuing algorithmically-derived “choices” are effectuated. “Self-driving” or “autonomous” cars are one example. Ford, for instance, mounts four laser-based sensors on the roof of its self-driving research car, and collectively those sensors “can capture 2.5 million 3-D points per second within a 200-foot range.” Legal, ethical, political, and social commentators are casting attention on—and vetting proposed standards and frameworks to govern—the life-and-death “choices” made by autonomous cars.

Among the other relevant advancements is the potential for learning algorithms and architectures to achieve more and more human-level performance in previously-intractable artificial-intelligence (AI) domains. For instance, a computer program recently achieved a feat previously thought to be at least a decade away: defeating a human professional player in a full-sized game of Go. In March 2016, in a five-game match, AlphaGo—a computer program using an AI technique known as “deep learning,” which “allows computers to extract patterns from masses of data with little human hand-holding”—won four games against Go expert Lee Sedol. Google, Amazon, and Baidu use the same AI technique or similar ones for such tasks as facial recognition and serving advertisements on websites. Following AlphaGo’s series of wins, computer programs have now outperformed humans at chess, backgammon, “Jeopardy!”, and Go.

Yet even among leading scientists, uncertainty prevails as to the technological limits. That uncertainty repels a consensus on the current capabilities, to say nothing of predictions of what might be likely developments in the near- and long-term (with those horizons defined variously).

The stakes are particularly high in the context of political violence that reaches the level of “armed conflict.” That is because international law admits of far more lawful death, destruction, and disruption in war than in peace. Even for responsible parties who are committed to the rule of law, the legal regime contemplates the deployment of lethal and destructive technologies on a wide scale. The use of advanced technologies—to say nothing of the failures, malfunctioning, hacking, or spoofing of those technologies—might therefore entail far more significant consequences in relation to war than to peace. We focus here largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres.

Of course, the development and use of advanced technologies in relation to war have long generated ethical, political, and legal debates. There is nothing new about the general desire and the need to discern whether the use of an emerging technological capability would comport with or violate the law. Today, however, emergent technologies sharpen—and, to a certain extent, recast—that enduring endeavor. A key reason is that those technologies are seen as presenting an inflection point at which human judgment might be “replaced” by algorithmically-derived “choices.” To unpack and understand the implications of that framing requires, among other things, technical comprehension, ethical awareness, and legal knowledge. Understandably if unfortunately, competence across those diverse domains has so far proven difficult to achieve for the vast majority of states, practitioners, and commentators.

Largely, the discourse to date has revolved around a concept that so far lacks a definitional consensus: “autonomous weapon systems” (AWS). Current conceptions of AWS range enormously. On one end of the spectrum, an AWS is an automated component of an existing weapon. On the other, it is a platform that is itself capable of sensing, learning, and launching resulting attacks. Irrespective of how it is defined in a particular instance, the AWS framing narrows the discourse to weapons, excluding the myriad other functions, however benevolent, that the underlying technologies might be capable of.

What autonomous weapons mean for legal responsibility and for broader accountability has generated one of the most heated recent debates about the law of war. A constellation of factors has shaped the discussion.

Perceptions of evolving security threats, geopolitical strategy, and accompanying developments in military doctrine have led governments to prioritize the use of unmanned and increasingly autonomous systems (with “autonomous” defined variously) in order to gain and maintain a qualitative edge. By 2013, leadership in the U.S. Navy and Department of Defense (DoD) had identified autonomy in unmanned systems as a “high priority.” In March 2016, the Ministries of Foreign Affairs and Defense of the Netherlands affirmed their belief that “if the Dutch armed forces are to remain technologically advanced, autonomous weapons will have a role to play, now and in the future.” A growing number of states hold similar views.

At the same time, human-rights advocates and certain technology experts have catalyzed initiatives to promote a ban on “fully autonomous weapons” (which those advocates and experts also call “killer robots”). The primary concerns are couched in terms of delegating decisions about lethal force away from humans—thereby “dehumanizing” war—and, in the process, of making wars easier to prosecute. Following the release in 2012 of a report by Human Rights Watch and the International Human Rights Clinic at Harvard Law School, the Campaign to Stop Killer Robots was launched in April 2013 with an explicit goal of fostering a “pre-emptive ban on fully autonomous weapons.” The rationale is that such weapons will, pursuant to this view, never be capable of comporting with international humanitarian law (IHL) and are therefore per se illegal. In July 2015, thousands of prominent AI and robotics experts, as well as other scientists, endorsed an “Open Letter” on autonomous weapons, arguing that “[t]he key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” Those endorsing the letter “believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” But, they cautioned, “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Meanwhile, a range of commentators has argued in favor of regulating autonomous weapon systems, primarily through existing international law rules and provisions. In general, these voices focus on grounding the discourse in terms of the capability of existing legal norms—especially those laid down in IHL—to regulate the design, development, and use, or to prohibit the use, of emergent technologies. In doing so, these commentators often emphasize that states have already developed a relatively thick set of international law rules that guide decisions about life and death in war. Even if there is no specific treaty addressing a particular weapon, they argue, IHL regulates the use of all weapons through general rules and principles governing the conduct of hostilities that apply irrespective of the weapon used. A number of these voices also aver that—for political, military, commercial, or other reasons—states are unlikely to agree on a preemptive ban on fully autonomous weapons, and therefore a better use of resources would be to focus on regulating the technologies and monitoring their use. In addition, these commentators often emphasize the modularity of the technology and raise concerns about foreclosing possible beneficial applications in the service of an (in their eyes, highly unlikely) prohibition on fully autonomous weapons.

Overall, the lack of consensus on the root classification of AWS and on the scope of the resulting discussion makes it difficult to generalize. But the main contours of the ensuing “debate” often cast a purportedly unitary “ban” side versus a purportedly unitary “regulate” side. As with many shorthand accounts, this formulation is overly simplistic. An assortment of thoughtful contributors does not fit neatly into either general category. And, when scrutinized, those wholesale categories—of “ban” vs. “regulate”—disclose fundamental flaws, not least because of the lack of agreement on what, exactly, is meant to be prohibited or regulated. Be that as it may, a large portion of the resulting discourse has been captured in these “ban”-vs.-“regulate” terms.

Underpinning much of this debate are arguments about decision-making in war, and who is better situated to make life-and-death decisions—humans or machines. There is also a disagreement over the benefits and costs of distancing human combatants from the battlefield and whether the possible life-saving benefits of AWS are offset by the fact that war also becomes, in certain respects, easier to conduct. There are also different understandings of and predictions about what machines are and will be capable of doing.

With the rise of expert and popular interest in AWS, states have been paying more public attention to the issue of regulating autonomy in war. But the primary venue at which they are doing so functionally limits the discussion to weapons. Since 2014, informal expert meetings on “lethal autonomous weapons systems” have been convened on an annual basis at the United Nations Office in Geneva. These meetings take place within the structure of the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (CCW). That treaty is set up as a framework convention: through it, states may adopt additional instruments that pertain to the core concerns of the baseline agreement (five such protocols have been adopted). Alongside the CCW, other arms-control treaties address specific types of weapons, including chemical weapons, biological weapons, anti-personnel landmines, cluster munitions, and others. The CCW is the only existing regime, however, that is ongoing and open-ended and is capable of being used as a framework to address additional types of weapons.

The original motivation to convene states as part of the CCW was to propel a protocol banning fully autonomous weapons. The most recent meeting (which was convened in April 2016) recommended that the Fifth Review Conference of states parties to the CCW (which is scheduled to take place in December 2016) “may decide to establish an open-ended Group of Governmental Experts (GGE)” on AWS. In the past, the establishment of a GGE has led to the adoption of a new CCW protocol (one banning permanently-blinding lasers). Whether states parties establish a GGE on AWS—and, if so, what its mandate will be—are open questions. In any event, at the most recent meetings, about two-dozen states endorsed the notion—the contours of which remain undefined so far—of “meaningful human control” over autonomous weapon systems.

Zooming out, we see that a pair of interlocking factors has obscured and hindered analysis of whether the relevant technologies can and should be regulated.

One factor is the sheer technical complexity at issue. Lack of knowledge of technical intricacies has hindered efforts by non-experts to grasp how the core technologies may either fit within or frustrate existing legal frameworks.

This is not a challenge particular to AWS, of course. The majority of IHL professionals are not experts in the inner workings of the numerous technologies related to armed conflict. Most IHL lawyers could not detail the technical specifications, for instance, of various armaments, combat vehicles, or intelligence, surveillance, and reconnaissance (ISR) systems. But in general that lack of technical knowledge would not necessarily impede at least a provisional analysis of the lawfulness of the use of such a system. That is because an initial IHL analysis is often an exercise in identifying the relevant rule and beginning to apply it in relation to the applicable context. Yet the widely diverse conceptions of AWS and the varied technologies accompanying those conceptions pose an as-yet-unresolved set of classification challenges. Without a threshold classification, a general legal analysis cannot proceed.

The other, related factor is that states—as well as lawyers, technologists, and other commentators—disagree in key respects on what should be addressed. The headings so far include “lethal autonomous robots,” “lethal autonomous weapons systems,” “autonomous weapons systems” more broadly, and “intelligent partnerships” more broadly still. And the possible standards mentioned include “meaningful human control” (including in the “wider loop” of targeting operations), “meaningful state control,” and “appropriate levels of human judgment.” More basically, there is no consensus on whether to include only weapons or, additionally, systems capable of involvement in other armed conflict-related functions, such as transporting and guarding detainees, providing medical care, and facilitating humanitarian assistance.

Against this backdrop, the AWS framing has largely precluded meaningful analysis of whether it (whatever “it” entails) can be regulated, let alone whether and how it should be regulated. In this briefing report, we recast the discussion by introducing the concept of “war algorithms.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. Those algorithms seem to be a—and perhaps the—key ingredient of what most people and states discuss when they address AWS. We expand the purview beyond weapons alone (important as those are) because the technological capabilities are rarely, if ever, limited to use only as weapons and because other war functions involving algorithmically-derived autonomy should be considered for regulation as well. Moreover, given the modular nature of much of the technology, a focus on weapons alone might thwart attempts at regulation.

Algorithms are a conceptual and technical building block of many systems. Those systems include self-learning architectures that today present some of the sharpest questions about “replacing” human judgment with algorithmically-derived “choices.” Moreover, algorithms form a foundation of most of the systems and platforms—and even the “systems of systems”—often discussed in relation to AWS. Absent an unforeseen development, algorithms are likely to remain a pillar of the technical architectures.

The constructed systems through which these algorithms are effectuated differ enormously. So do the nature, forms, and tiers of human control and governance over them. Existing constructed systems include, among many others, stationary turrets, missile systems, and manned or unmanned aerial, terrestrial, or marine vehicles.

All of the underlying algorithms are developed by programmers and are expressed in computer code. But some of these algorithms—especially those capable of “self-learning” and whose “choices” might be difficult for humans to anticipate or unpack—seem to challenge fundamental and interrelated concepts that underpin international law pertaining to armed conflict and related accountability frameworks. Those concepts include attribution, control, foreseeability, and reconstructability.

At their core, the design, development, and use of war algorithms raise profound questions. Most fundamentally, those inquiries concern who, or what, should decide—and what it means to decide—matters of life and death in relation to war. But war algorithms also bring to the fore an array of more quotidian, though also important, questions about the benefits and costs of human judgment and “replacing” it with algorithmically-derived systems, including in such areas as logistics.

We ground our analysis by focusing on war-algorithm accountability. In short, we are primarily interested in the “duty to account … for the exercise of power” over—in other words, holding someone or some entity answerable for—the design, development, or use (or a combination thereof) of a war algorithm. That power may be exercised by a diverse assortment of actors. Some are obvious, especially states and their armed forces. But myriad other individuals and entities may exercise power over war algorithms, too. Consider the broad classes of “developers” and “operators,” both within and outside of government, of such algorithms and their related systems. Also think of lawyers, industry bodies, political authorities, members of organized armed groups—and many, many others. Focusing on war algorithms encompasses them all.

We draw on the extensive—and rapidly growing—amount of scholarship and other analytical analyses that have addressed related topics. To help illuminate the discussion, we outline what technologies and weapon systems already exist, what fields of international law might be relevant, and what regulatory avenues might be available. As noted above, because international law is the touchstone normative framework for accountability in relation to war, we focus on public international law sources and methodologies. But as we show, other norms and forms of governance might also merit attention.

Accountability is a broad term of art. We adapt—from the work of an International Law Association Committee in a different context (the accountability of international organizations)—a three-part accountability approach. Our framework outlines three axes on which to focus initially on war algorithms.

The first axis is state responsibility. It concerns state responsibility arising out of acts or omissions involving a war algorithm where those acts or omissions constitute a breach of a rule of international law. State responsibility entails discerning the content of the rule, identifying a breach of the rule, assigning attribution for that breach to a state, determining available excuses (if any), and imposing measures of remedy.

The second axis is a form of individual responsibility under international law. In particular, it concerns individual responsibility under international law for international crimes—such as war crimes—involving war algorithms. This form of individual responsibility entails establishing the commission of a crime under the relevant jurisdiction, assessing the existence of a justification or excuse (if any), and, upon conviction, imposing a sentence.

The third and final axis is scrutiny governance. Embracing a wider notion of accountability, it concerns the extent to which a person or entity is and should be subject to, or should exercise, forms of internal or external scrutiny, monitoring, or regulation (or a combination thereof) concerning the design, development, or use of a war algorithm. Scrutiny governance does not hinge on—but might implicate—potential and subsequent liability or responsibility (or both). Forms of scrutiny governance include independent monitoring, norm (such as legal) development, adopting non-binding resolutions and codes of conduct, normative design of technical architectures, and community self-regulation.

Following an introduction that highlights the stakes, we proceed with a section outlining pertinent considerations regarding algorithms and constructed systems. We highlight recent advancements in artificial intelligence related to learning algorithms and architectures. We also examine state approaches to technical autonomy in war, focusing on five such approaches—those of Switzerland, the Netherlands, France, the United States, and the United Kingdom. Finally, to ground the often-theoretical debate pertaining to autonomous weapon systems, we describe existing weapon systems that have been characterized by various commentators as AWS.

The next section outlines the main fields of international law that war algorithms might implicate. There is no single branch of international law dedicated solely to war algorithms. So we canvass how those algorithms might fit within or otherwise implicate various fields of international law. We ground the discussion by outlining the main ingredients of state responsibility. To help illustrate states’ positions concerning AWS, we examine whether an emerging norm of customary international law specific to AWS may be discerned. We find that one cannot (at least not yet). So we next highlight how the design, development, or use (or a combination thereof) of a war algorithm might implicate more general principles and rules found in various fields of international law. Those fields include the jus ad bellum, IHL, international human rights law, international criminal law (ICL), and space law. Because states and commentators have largely focused on AWS to date, much of our discussion here relates to the AWS framing.

The subsequent section elaborates a (non-exhaustive) war-algorithm accountability approach. That approach focuses on state responsibility for an internationally wrongful act, on individual responsibility under international law for international crimes, and on wider forms of scrutiny, monitoring, and regulation. We highlight existing accountability actors and architectures under international law that might regulate war algorithms. These include war reparations as well as international and domestic tribunals. We then turn to less conventional accountability avenues, such as those rooted in normative design of technical architectures (including maximizing the auditability of algorithms) and community self-regulation.
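
As a rough, hypothetical illustration of what “maximizing the auditability of algorithms” could mean in practice, the sketch below wraps a toy, non-weapon decision function (a made-up supply-routing rule) so that every algorithmically-derived “choice” is recorded together with the inputs that produced it and can be reconstructed and scrutinized afterwards. The function, field names, and logging scheme are assumptions made solely for illustration.

```python
# An illustrative sketch of algorithm auditability: each "choice" is logged
# with its inputs so the decision can be reconstructed later. All names and
# the logging scheme are assumptions, not a description of any real system.

import json
import time
from typing import Callable, Dict, List


def audited(decision_fn: Callable[[Dict], str], audit_log: List[Dict]) -> Callable[[Dict], str]:
    """Wrap a decision function so that every call is appended to an audit log."""
    def wrapper(inputs: Dict) -> str:
        choice = decision_fn(inputs)
        audit_log.append({
            "timestamp": time.time(),            # when the choice was made
            "algorithm": decision_fn.__name__,   # which algorithm made it
            "inputs": inputs,                    # what the algorithm saw
            "choice": choice,                    # what it decided
        })
        return choice
    return wrapper


def route_supply_convoy(inputs: Dict) -> str:
    """A toy, non-weapon 'war algorithm': a logistics routing rule."""
    return "northern_route" if inputs.get("bridge_intact") else "southern_route"


if __name__ == "__main__":
    log: List[Dict] = []
    decide = audited(route_supply_convoy, log)
    decide({"bridge_intact": False})
    print(json.dumps(log, indent=2))  # a reconstructable record of the "choice"
```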

In the conclusion, we return to the deficiencies of current discussions of AWS and emphasize the importance of addressing the wide and serious concerns raised by AWS with technical proficiency, legal expertise, and non-ideological commitment to a genuine and inclusive inquiry. On the horizon, we see that two contradictory trends may be combining into a new global climate that is at once enterprising and anxious. Militaries see myriad technological triumphs that will transform warfighting. Yet the possibility of “replacing” human judgment with algorithmically-derived “decisions”—especially in war—threatens what many consider to define us as humans.

To date, the lack of demonstrated technical knowledge by many states and commentators, the unwillingness of states to share closely-held national-security technologies, and an absence of a definitional consensus on what is meant by autonomous weapon systems have impeded regulatory efforts on AWS. Moreover, uncertainty about which actors would benefit most from advances in AWS and for how long such benefits would yield a meaningful qualitative edge over others seems likely to continue to inhibit efforts at negotiating binding international rules on the development and deployment of AWS. In this sense, efforts at reaching a dedicated international regime to address AWS may follow the same frustrations as analogous efforts to address cyber warfare. True, unlike with the early days of cyber warfare, there has been greater state engagement on regulation of AWS. In particular, the concept of “meaningful human control” over AWS has already been endorsed by over two-dozen states. But much remains up in the air as states decide whether to establish a Group of Governmental Experts on AWS at the upcoming Fifth Review Conference of the CCW.

The current crux, as we see it, is whether advances in technology—especially those capable of “self-learning” and of operating in relation to war and whose “choices” may be difficult for humans to anticipate or unpack or whose “decisions” are seen as “replacing” human judgment—are susceptible to regulation and, if so, whether and how they should be regulated. One way to think about that core concern, and one that vaults over at least some of the impediments to the discussion on AWS, is the new concept we raise: war algorithms. War algorithms include not only those algorithms capable of being used in weapons but also those capable of being used in any other function related to war.

More war algorithms are on the horizon. Two months ago, the Defense Science Board, an advisory body to the U.S. Department of Defense, identified five “stretch problems”—that is, goals that are “hard-but-not-too-hard” and that are meant to accelerate the process of bringing a new algorithmically-derived capability into widespread application:

  • Generating “future loop options” (that is, “using interpretation of massive data including social media and rapidly generated strategic options”);
  • Enabling autonomous swarms (that is, “deny[ing] the enemy’s ability to disrupt through quantity by launching overwhelming numbers of low‐cost assets that cooperate to defeat the threat”);
  • Intrusion detection on the Internet of Things (that is, “defeat[ing] adversary intrusions in the vast network of commercial sensors and devices by autonomously discovering subtle indicators of compromise hidden within a flood of ordinary traffic”);
  • Building autonomous cyber-resilient military vehicle systems (that is, “trust[ing] that … platforms are resilient to cyber‐attack through autonomous system integrity validation and recovery”); and
  • Planning autonomous air operations (that is, “operat[ing] inside adversary timelines by continuously planning and replanning tactical operations using autonomous ISR analysis, interpretation, option generation, and resource allocation”).

What this trajectory toward greater algorithmic autonomy in war—at least among more technologically-sophisticated armed forces and even some non-state armed groups—means for accountability purposes seems likely to stay a contested issue for the foreseeable future.

In the meantime, it remains to be authoritatively determined whether war algorithms will be capable of making the evaluative decisions and value judgments that are incorporated into IHL. It is currently not clear, for instance, whether war algorithms will be capable of formulating and implementing the following IHL-based evaluative decisions and value judgments:

  • The presumption of civilian status in case of “doubt”;
  • The assessment of “excessiveness” of expected incidental harm in relation to anticipated military advantage;
  • The betrayal of “confidence” in IHL in relation to the prohibition of perfidy; and
  • The prohibition of destruction of civilian property except where “imperatively” demanded by the necessities of war.

*     *     *

Two factors may suggest that, at least for now, the most immediate ways to regulate war algorithms specifically and to pursue accountability over them might be to follow not only traditional paths but also less conventional ones. As illustrated above, the latter might include relatively formal avenues—such as states making, applying, and enforcing war-algorithm rules of conduct within and beyond their territories—or less formal avenues—such as coding law into technical architectures and community self-regulation. First, even where the formal law may seem sufficient, concerns about practical enforcement abound. Second, the proliferation of increasingly advanced technical systems based on self-learning and distributed control raises the question of whether the model of individual responsibility found in ICL might pose conceptual challenges to regulating AWS and war algorithms.

In short, individual responsibility for international crimes under international law remains one of the vital accountability avenues in existence today, as do measures of remedy for state responsibility. Yet in practice responsibility along either avenue is unfortunately relatively rare. And thus neither path, on its own or in combination, seems to be sufficient to effectively address the myriad regulatory concerns pertaining to war algorithms—at least not until we better understand what is at issue. These concerns might lead those seeking to strengthen accountability of war algorithms to pursue not only traditional, formal avenues but also less formal, softer mechanisms.

In that connection, it seems likely that attempts to change governments’ approaches to technical autonomy in war through social pressure (at least for those governments that might be responsive to that pressure) will continue to be a vital avenue along which to pursue accountability. But here, too, there are concerns. Numerous initiatives already exist. Some of them are very well informed; others less so. Many of them are motivated by ideological, commercial, or other interests that—depending on one’s viewpoint—might strengthen or thwart accountability efforts. And given the paucity of formal regulatory regimes, some of these initiatives may end up having considerable impact, despite their shortcomings.

Stepping back, we see that technologies of war, as with technologies in so many areas, produce an uneasy blend of promise and threat. With respect to war algorithms, understanding these conflicting pulls requires attention to a century-and-a-half-long history during which war came to be one of the most highly regulated areas of international law. But it also requires technical know-how. Thus those seeking accountability for war algorithms would do well not to forget the essentially political work of IHL’s designers—nor to obscure the fact that today’s technology is, at its core, designed, developed, and deployed by humans. Ultimately, war-algorithm accountability seems unrealizable without sufficient competence in technical architectures and in legal frameworks, coupled with ethical, political, and economic awareness.

Finally, we also include a Bibliography and Appendices. The Bibliography contains over 400 analytical sources, in various languages, pertaining to technical autonomy in war. The Appendices contain detailed charts listing and categorizing states’ statements at the 2015 and 2016 Informal Meetings of Experts on Lethal Autonomous Weapons Systems convened within the framework of the CCW.



[1]. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Society 8 (2015), citing Clay Shirky, A Speculative Post on the Idea of Algorithmic Authority, Clay Shirky (November 15, 2009, 4:06 PM), http://www.shirky.com/weblog/2009/11/a-speculative-post-on-the-idea-of-algorithmic-authority (referencing Shirky’s definition of “algorithmic authority” as “the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying ‘Trust this because you trust me.’”). All further citations for sources underlying this Executive Summary are available in the full-text version of the briefing report.

Credits

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].

About PILAC

The Program on International Law and Armed Conflict (PILAC) is a platform at Harvard Law School that provides a space for research on critical challenges facing the various fields of public international law related to armed conflict, including the jus ad bellum, the jus in bello (international humanitarian law/the law of armed conflict), international human rights law, international criminal law, and the law of state responsibility. Its mode is critical, independent, and rigorous. PILAC’s methodology fuses traditional public international law research with targeted analysis of changing security environments. The Program does not engage in advocacy. While its contributors may express a range of views on contentious legal and policy debates, PILAC does not take institutional positions on these matters.

About the Authors

Dustin A. Lewis is a Senior Researcher at the Harvard Law School Program on International Law and Armed Conflict (PILAC). Gabriella Blum, the Faculty Director of PILAC, is the Rita E. Hauser Professor of Human Rights and Humanitarian Law at Harvard Law School. And Naz K. Modirzadeh, the Director of PILAC, is a Professor of Practice at Harvard Law School.

Acknowledgments and Disclaimers

The authors extend their thanks to: Adam Broza, Jessica Burniske, Molly Doggett, Joshua Kestin, and Katie King for research assistance; Jessica Burniske and Katie King for editorial assistance; Adam Broza, Jiawei He, Katie King, Francesco Romani, Svitlana Starosvit, and Anton Vallélian for translation assistance; Jiawei He and the Chinese Initiative on International Law for logistical and translation support; Jennifer Allison, PILAC Liaison to the Harvard Law School Library (HLSL), and the staff of the HLSL for research support; participants at events featuring early PILAC research at Fudan University, Shanghai (May 2016), at China University of Political Science and Law, Beijing (May 2016), and at the Berkman Klein Center for Internet and Society of Harvard University, Cambridge (July 2016) for critical feedback and comments; and Peter Krafft, Claudia Pérez D’Arpino, and Merel A. C. Ekelhof for technical assistance and critical feedback.

Adam Broza and Molly Doggett produced the Bibliography. Jessica Burniske and Joshua Kestin drafted the sub-section of Section 2 on examples of purported autonomous weapon systems. Katie King and Joshua Kestin produced Appendices I and II and provided extensive research assistance concerning the sub-section in section 3 regarding customary international law and autonomous weapon systems.

This Briefing Report has been produced, in part, with financial assistance from the Pierre and Pamela Omidyar Fund, which is an advised fund of the Silicon Valley Community Foundation (SVCF). PILAC also receives generous support from the Swiss Federal Department of Foreign Affairs (FDFA). The views expressed in this Briefing Report should not be taken, in any way, to reflect the official opinion of the SVCF or of the Swiss FDFA. PILAC is grateful for the support the SVCF and Swiss FDFA provide for independent research and analysis. The research undertaken by the authors of this Briefing Report was completely independent; the views and opinions reflected in this Briefing Report are those solely of the authors; and the authors alone are responsible for any errors in this Briefing Report.

License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0).

Web

This Briefing Report is available free of charge at http://pilac.law.harvard.edu.

Section 1: Introduction

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].

Across many areas of modern life, “authority is increasingly expressed algorithmically.”[1] War is no exception.

Complex algorithms help determine a person’s creditworthiness.[2] They suggest what movies to watch. They detect healthcare fraud. And they are used to trade stocks at speeds far faster than humans are capable of. (Sometimes, algorithms contribute to market crashes[3] or form a basis for anti-trust prosecutions.[4])

Warring parties express authority and power through algorithms, too. For decades, algorithms have helped weapons systems—first at sea and later on land—to identify and intercept inbound missiles.[5] Today, military systems are increasingly capable of navigating novel environments and surveilling faraway populations, as well as identifying targets, estimating harm, and launching direct attacks—all with fewer humans at the switch.[6] Indeed, in recent years, commercial and military developments in algorithmically-derived autonomy[7] have created diverse benefits for the armed forces in terms of “battlespace awareness,”[8] protection,[9] “force application,”[10] and logistics.[11] And those are by no means the exhaustive set of applications. Meanwhile, other algorithmically-derived war functions may not be far off—and, indeed, might already exist. Consider the provision of medical care to the wounded and sick hors de combat (such as certain combatants rendered incapable of fighting and who are therefore “outside of the battle”[12]) or the capture, transfer, and detention of enemy fighters.

Much of the underlying technology—often developed initially in commercial or academic contexts—is susceptible to both military and non-military use. Most of it is thus characterized as “dual-use,” a shorthand for being capable of serving a wide array of functions. Costs of the technology are dropping, often precipitously. And, once the technology exists, the assumption is usually that it can be utilized by a broad range of actors.

Driven in no small part by commercial interests, developers are advancing relevant technologies and technical architectures at a rapid pace. The potential for those advancements—often in consumer-facing computer science and robotics fields—to be used to cross a moral Rubicon if unscrupulously adapted for belligerent purposes is being raised more frequently in international forums and among technical communities, as well as in the popular press.

Some of the most relevant advancements involve constructed systems through which huge amounts of data are quickly gathered and ensuing algorithmically-derived “choices” are effectuated. “Self-driving” or “autonomous” cars are one example. Ford, for instance, mounts four laser-based sensors on the roof of its self-driving research car, and collectively those sensors “can capture 2.5 million 3-D points per second within a 200-foot range.”[13] Legal, ethical, political, and social commentators are casting attention on—and vetting proposed standards and frameworks to govern—the life-and-death “choices” made by autonomous cars.

Among the other relevant advancements is the potential for learning algorithms and architectures to achieve more and more human-level performance in previously-intractable artificial-intelligence (AI) domains. For instance, a computer program recently achieved a feat previously thought to be at least a decade away: defeating a human professional player in a full-sized game of Go.[14] In March 2016, in a five-game match, AlphaGo—a computer program using an AI technique known as “deep learning,” which “allows computers to extract patterns from masses of data with little human hand-holding”—won four games against Go expert Lee Sedol.[15] Google, Amazon, and Baidu use the same AI technique or similar ones for such tasks as facial recognition and serving advertisements on websites. Following AlphaGo’s series of wins, computer programs have now outperformed humans at chess, backgammon, “Jeopardy!”, and Go.[16]
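
For readers unfamiliar with such learning techniques, the following self-contained sketch conveys the basic idea in miniature: rather than a programmer hand-coding the decision rule, the program adjusts its own parameters from labeled examples. It is a single artificial neuron trained by gradient descent, vastly simpler than the deep networks behind AlphaGo, and every detail of it is an illustrative assumption rather than a description of any system cited above.

```python
# A toy illustration of "extracting patterns from data": a single logistic
# neuron learns, from labeled examples, a rule that is never hand-coded.
# Everything here is invented for illustration.

import math
import random

random.seed(0)

# Example data: pairs of numbers labeled 1.0 if their sum exceeds 1.0, else 0.0.
# The pattern itself is never written into the decision rule; it is learned.
data = [((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0)
        for x1, x2 in [(random.random(), random.random()) for _ in range(200)]]

w1, w2, b = 0.0, 0.0, 0.0  # parameters the program adjusts on its own
lr = 0.5                   # learning rate


def predict(x1: float, x2: float) -> float:
    """Logistic neuron: maps two inputs to a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))


for _ in range(2000):  # training loop: nudge the parameters to fit the examples
    (x1, x2), label = random.choice(data)
    error = predict(x1, x2) - label
    w1 -= lr * error * x1
    w2 -= lr * error * x2
    b -= lr * error

print(round(predict(0.9, 0.8)))  # ~1: the learned rule says the sum exceeds 1.0
print(round(predict(0.1, 0.2)))  # ~0: the learned rule says it does not
```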

Yet even among leading scientists, uncertainty prevails as to the technological limits. That uncertainty repels a consensus on the current capabilities, to say nothing of predictions of what might be likely developments in the near- and long-term (with those horizons defined variously).

The stakes are particularly high in the context of political violence that reaches the level of “armed conflict.” That is because international law admits of far more lawful death, destruction, and disruption in war than in peace.[17] Even for responsible parties who are committed to the rule of law, the legal regime contemplates the deployment of lethal and destructive technologies on a wide scale. The use of advanced technologies—to say nothing of the failures, malfunctioning, hacking, or spoofing of those technologies—might therefore entail far more significant consequences in relation to war than to peace.[18] We focus here largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres.

Of course, the development and use of advanced technologies in relation to war have long generated ethical, political, and legal debates. There is nothing new about the general desire and the need to discern whether the use of an emerging technological capability would comport with or violate the law. Today, however, emergent technologies sharpen—and, to a certain extent, recast—that enduring endeavor. A key reason is that those technologies are seen as presenting an inflection point at which human judgment might be “replaced” by algorithmically-derived “choices.” To unpack and understand the implications of that framing requires, among other things, technical comprehension, ethical awareness, and legal knowledge. Understandably if unfortunately, competence across those diverse domains has so far proven difficult to achieve for the vast majority of states, practitioners, and commentators.

Largely, the discourse to date has revolved around a concept that so far lacks a definitional consensus: “autonomous weapon systems” (AWS).[19] Current conceptions of AWS range enormously. On one end of the spectrum, an AWS is an automated component of an existing weapon. On the other, it is a platform that is itself capable of sensing, learning, and launching resulting attacks. Irrespective of how it is defined in a particular instance, the AWS framing narrows the discourse to weapons, excluding the myriad other functions, however benevolent, that the underlying technologies might be capable of.

What autonomous weapons mean for legal responsibility and for broader accountability has generated one of the most heated recent debates about the law of war. A constellation of factors has shaped the discussion.

Perceptions of evolving security threats, geopolitical strategy, and accompanying developments in military doctrine have led governments to prioritize the use of unmanned and increasingly autonomous systems (with “autonomous” defined variously) in order to gain and maintain a qualitative edge. The systems are said to present manifold military advantages—in short, a “seductive combination of distance, accuracy, and lethality.”[20] By 2013, leadership in the U.S. Navy and Department of Defense (DoD) had identified autonomy in unmanned systems as a “high priority.”[21] In March 2016, the Ministries of Foreign Affairs and Defense of the Netherlands affirmed their belief that “if the Dutch armed forces are to remain technologically advanced, autonomous weapons will have a role to play, now and in the future.”[22] A growing number of states hold similar views.

At the same time, human-rights advocates and certain technology experts have catalyzed initiatives to promote a ban on “fully autonomous weapons” (which those advocates and experts also call “killer robots”). The primary concerns are couched in terms of delegating decisions about lethal force away from humans—thereby “dehumanizing” war—and, in the process, of making wars easier to prosecute.[23] Following the release in 2012 of a report by Human Rights Watch and the International Human Rights Clinic at Harvard Law School,[24] the Campaign to Stop Killer Robots was launched in April 2013 with an explicit goal of fostering a “pre-emptive ban on fully autonomous weapons.”[25] The rationale is that such weapons will, pursuant to this view, never be capable of comporting with international humanitarian law (IHL) and are therefore per se illegal. In July 2015, thousands of prominent AI and robotics experts, as well as other scientists, endorsed an “Open Letter” on autonomous weapons, arguing that “[t]he key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”[26] Those endorsing the letter “believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” But, they cautioned, “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”[27]

Meanwhile, a range of commentators has argued in favor of regulating AWS, primarily through existing international law rules and provisions. In general, these voices focus on grounding the discourse in terms of the capability of existing legal norms—especially those laid down in IHL—to regulate the design, development, and use, or to prohibit the use, of emergent technologies. In doing so, these commentators often emphasize that states have already developed a relatively thick set of international law rules that guide decisions about life and death in war. Even if there is no specific treaty addressing a particular weapon, they argue, IHL regulates the use of all weapons through general rules and principles governing the conduct of hostilities that apply irrespective of the weapon used. A number of these voices also aver that—for political, military, commercial, or other reasons—states are unlikely to agree on a preemptive ban on fully autonomous weapons, and therefore a better use of resources would be to focus on regulating the technologies and monitoring their use. In addition, these commentators often emphasize the modularity of the technology and raise concerns about foreclosing possible beneficial applications in the service of an (in their eyes, highly unlikely) prohibition on fully autonomous weapons.

Overall, the lack of consensus on the root classification of AWS and on the scope of the resulting discussion makes it difficult to generalize. But the main contours of the ensuing “debate” often cast a purportedly unitary “ban” side versus a purportedly unitary “regulate” side. As with many shorthand accounts, this formulation is overly simplistic. An assortment of thoughtful contributors does not fit neatly into either general category. And, when scrutinized, those wholesale categories—of “ban” vs. “regulate”—disclose fundamental flaws, not least because of the lack of agreement on what, exactly, is meant to be prohibited or regulated. Be that as it may, a large portion of the resulting discourse has been captured in these “ban”-vs.-“regulate” terms.

Underpinning much of this debate are arguments about decision-making in war, and who is better situated to make life-and-death decisions—humans or machines. There is also a disagreement over the benefits and costs of distancing human combatants from the battlefield and whether the possible life-saving benefits of AWS are offset by the fact that war also becomes, in certain respects, easier to conduct. There are also different understandings of and predictions about what machines are and will be capable of doing.

With the rise of expert and popular interest in AWS, states have been paying more public attention to the issue of regulating autonomy in war. But the primary venue at which they are doing so functionally limits the discussion to weapons.[28] Since 2014, informal expert meetings on “lethal autonomous weapons systems” have been convened on an annual basis at the United Nations Office in Geneva. These meetings take place within the structure of the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (CCW). That treaty is set up as a framework convention: through it, states may adopt additional instruments that pertain to the core concerns of the baseline agreement (five such protocols have been adopted). Alongside the CCW, other arms-control treaties address specific types of weapons, including chemical weapons, biological weapons, anti-personnel landmines, cluster munitions, and others. The CCW is the only existing regime, however, that is ongoing and open-ended and is capable of being used as a framework to address additional types of weapons.

The original motivation to convene states as part of the CCW was to propel a protocol banning fully autonomous weapons. The most recent meeting (which was convened in April 2016) recommended that the Fifth Review Conference of states parties to the CCW (which is scheduled to take place in December 2016) “may decide to establish an open-ended Group of Governmental Experts (GGE)” on AWS. In the past, the establishment of a GGE has led to the adoption of a new CCW protocol (one banning permanently-blinding lasers). Whether states parties establish a GGE on AWS—and, if so, what its mandate will be—are open questions. In any event, at the most recent meetings, about two-dozen states endorsed the notion—the contours of which remain undefined so far—of “meaningful human control” over autonomous weapon systems.[29]

Zooming out, we see that a pair of interlocking factors has obscured and hindered analysis of whether the relevant technologies can and should be regulated.

One factor is the sheer technical complexity at issue. Lack of knowledge of technical intricacies has hindered efforts by non-experts to grasp how the core technologies may either fit within or frustrate existing legal frameworks.

This is not a challenge particular to AWS, of course. The majority of IHL professionals are not experts in the inner workings of the numerous technologies related to armed conflict. Most IHL lawyers could not detail the technical specifications, for instance, of various armaments, combat vehicles, or intelligence, surveillance, and reconnaissance (ISR) systems. But in general that lack of technical knowledge would not necessarily impede at least a provisional analysis of the lawfulness of the use of such a system. That is because an initial IHL analysis is often an exercise in identifying the relevant rule and beginning to apply it in relation to the applicable context. Yet the widely diverse conceptions of AWS and the varied technologies accompanying those conceptions pose an as-yet-unresolved set of classification challenges. And without a threshold classification, a general legal analysis cannot proceed.

The other, related factor is that states—as well as lawyers, technologists, and other commentators—disagree in key respects on what should be addressed. The headings so far include “lethal autonomous robots,” “lethal autonomous weapons systems,” “autonomous weapons systems” more broadly, and “intelligent partnerships” more broadly still. And the possible standards mentioned include “meaningful human control” (including in the “wider loop” of targeting operations), “meaningful state control,” and “appropriate levels of human judgment.”[30] More basically, there is no consensus on whether to include only weapons or, additionally, systems capable of involvement in other armed conflict-related functions, such as transporting and guarding detainees, providing medical care, and facilitating humanitarian assistance.

Against this backdrop, the AWS framing has largely precluded meaningful analysis of whether it (whatever “it” entails) can be regulated, let alone whether and how it should be regulated.[31] In this briefing report, we recast the discussion by introducing the concept of “war algorithms.”[32] We define “war algorithm” as any algorithm[33] that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. Those algorithms seem to be a—and perhaps the—key ingredient of what most people and states discuss when they address AWS. We expand the purview beyond weapons alone (important as those are) because the technological capabilities are rarely, if ever, limited to use only as weapons and because other war functions involving algorithmically-derived autonomy should be considered for regulation as well. Moreover, given the modular nature of much of the technology, a focus on weapons alone might thwart attempts at regulation.

Algorithms are a conceptual and technical building block of many systems. Those systems include self-learning architectures that today present some of the sharpest questions about “replacing” human judgment with algorithmically-derived “choices.” Moreover, algorithms form a foundation of most of the systems and platforms—and even the “systems of systems”—often discussed in relation to AWS. Absent an unforeseen development, algorithms are likely to remain a pillar of the technical architectures.

The constructed systems through which these algorithms are effectuated differ enormously. So do the nature, forms, and tiers of human control and governance over them. Existing constructed systems include, among many others, stationary turrets, missile systems, and manned or unmanned aerial, terrestrial, or marine vehicles.[34]

All of the underlying algorithms are developed by programmers and are expressed in computer code. But some of these algorithms—especially those capable of “self-learning” and whose “choices” might be difficult for humans to anticipate or unpack—seem to challenge fundamental and interrelated concepts that underpin international law pertaining to armed conflict and related accountability frameworks. Those concepts include attribution, control, foreseeability, and reconstructability.

At their core, the design, development, and use of war algorithms raise profound questions. Most fundamentally, those inquiries concern who, or what, should decide—and what it means to decide—matters of life and death in relation to war. But war algorithms also bring to the fore an array of more quotidian, though also important, questions about the benefits and costs of human judgment and “replacing” it with algorithmically-derived systems, including in such areas as logistics.

We ground our analysis by focusing on war-algorithm accountability. In doing so, we sketch a three-axis accountability approach for those algorithms: state responsibility for a breach of a rule of international law, individual responsibility under international law for international crimes, and a broad notion of scrutiny governance. This is not an exhaustive list of possible types of accountability. But the axes we outline offer a flavor of how accountability, in general, could be conceptualized in the context of war algorithms.

In short, we are primarily interested in the “duty to account … for the exercise of power”[35] over—in other words, holding someone or some entity answerable for—the design, development, or use (or a combination thereof) of a war algorithm.[36] That power may be exercised by a diverse assortment of actors. Some are obvious, especially states and their armed forces. But myriad other individuals and entities may exercise power over war algorithms, too. Consider the broad classes of “developers” and “operators,” both within and outside of government, of such algorithms and their related systems. Also think of lawyers, industry bodies, political authorities, members of organized armed groups—and many, many others. Focusing on war algorithms encompasses them all.

Objective, Approach, and Methodology

In this briefing report, our objective is not to argue whether international law, as it currently exists, sufficiently addresses the plethora of issues raised by autonomous weapon systems. Rather, we aim to shed light on and recast the discussion in terms of a new concept: war algorithms. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad category of war algorithms might be susceptible to regulation (and how those algorithms might already fit within the existing regulatory system established by international law).

We draw on the extensive—and rapidly growing—body of scholarship and other analytical work that has addressed related topics.[37] To help illuminate the discussion, we outline what technologies and weapon systems already exist, what fields of international law might be relevant, and what regulatory avenues might be available. As noted above, because international law is the touchstone normative framework for accountability in relation to war, we focus on public international law sources and methodologies. But as we show, other norms and forms of governance might also merit attention.

Accountability is a broad term of art. We adapt—from the work of an International Law Association Committee in a different context (the accountability of international organizations)—a three-part accountability approach.[38] Our framework outlines three axes along which to focus initially in relation to war algorithms.

The first axis is state responsibility. It concerns state responsibility arising out of acts or omissions involving a war algorithm where those acts or omissions constitute a breach of a rule of international law. State responsibility entails discerning the content of the rule, identifying a breach of the rule, assigning attribution for that breach to a state, determining available excuses (if any), and imposing measures of remedy.

The second axis is a form of individual responsibility under international law. In particular, it concerns individual responsibility under international law for international crimes—such as war crimes—involving war algorithms. This form of individual responsibility entails establishing the commission of a crime under the relevant jurisdiction, assessing the existence of a justification or excuse (if any), and, upon conviction, imposing a sentence.

The third and final axis is scrutiny governance. Embracing a wider notion of accountability, it concerns the extent to which a person or entity is and should be subject to, or should exercise, forms of internal or external scrutiny, monitoring, or regulation (or a combination thereof) concerning the design, development, or use of a war algorithm. Scrutiny governance does not hinge on—but might implicate—potential and subsequent liability or responsibility (or both). Forms of scrutiny governance include independent monitoring, norm (such as legal) development, adopting non-binding resolutions and codes of conduct, normative design of technical architectures, and community self-regulation.

Outline

In Section 2, we outline pertinent considerations regarding algorithms and constructed systems. We then highlight recent advancements in artificial intelligence related to learning algorithms and architectures. We next examine state approaches to technical autonomy in war, focusing on five such approaches. Finally, to ground the often-theoretical debate pertaining to autonomous weapon systems, we describe existing weapon systems that have been characterized by various commentators as AWS.

In Section 3, we outline the main fields of international law that war algorithms might implicate. There is no single branch of international law dedicated solely to war algorithms. So we canvass how those algorithms might fit within or otherwise implicate various fields of international law. We ground the discussion by outlining the main ingredients of state responsibility: attribution, breach, excuses, and consequences. Then, to help illustrate states’ positions concerning AWS, we examine whether an emerging norm of customary international law specific to AWS may be discerned. We find that one cannot (at least not yet). So we next highlight how the design, development, or use (or a combination thereof) of a war algorithm might implicate more general principles and rules found in various fields of international law. Those fields include the jus ad bellum, IHL, international human rights law, international criminal law, and space law. Because states and commentators have largely focused on AWS to date, much of our discussion here relates to the AWS framing.

In Section 4, we elaborate a (non-exhaustive) war-algorithm accountability approach. That approach focuses on state responsibility for an internationally wrongful act, on individual responsibility under international law for international crimes, and on wider forms of scrutiny, monitoring, and regulation. We highlight existing accountability actors and architectures under international law that might regulate war algorithms. These include war reparations as well as international and domestic tribunals. We then turn to less conventional accountability avenues, such as those rooted in normative design of technical architectures (including maximizing the auditability of algorithms) and community self-regulation.

In the Conclusion, we return to the deficiencies of current discussions of AWS and emphasize the importance of addressing the wide and serious concerns raised by AWS with technical proficiency, legal expertise, and non-ideological commitment to a genuine and inclusive inquiry.

We also attach a Bibliography and Appendices. The Bibliography contains over 400 analytical sources, in various languages, pertaining to technical autonomy in war. The Appendices contain detailed charts listing and categorizing states’ statements at the 2015 and 2016 Informal Meetings of Experts on Lethal Autonomous Weapons Systems convened within the framework of the CCW.

Caveats

The bulk of the secondary-source research was conducted in English. Moreover, none of us is an expert in computer science or robotics. We consulted specialists in these fields, but we alone are responsible for any remaining errors. In any event, given the rapid pace of development, the technologies discussed in this briefing report may soon be eclipsed—if they have not been already.


[1].  Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Society 8 (2015), citing Clay Shirky, A Speculative Post on the Idea of Algorithmic Authority, Clay Shirky (November 15, 2009, 4:06 PM), http://www.shirky.com/weblog/2009/11/a-speculative-post-on-the-idea-of-algorithmic-authority (referencing Shirky’s definition of “algorithmic authority” as “the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying ‘Trust this because you trust me.’”).

[2].  On the examples in this paragraph, see generally Pasquale, supra note 1.

[3].  See generally U.S. Commodity Futures Trading Commission & U.S. Securities & Exchange Commission, Findings Regarding the Market Events of May 6, 2010: Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues (2010), https://www.sec.gov/news/studies/2010/marketevents-report.pdf.

[4].  See, e.g., Jill Priluck, When Bots Collude, New Yorker, April 25, 2015, http://www.newyorker.com/business/currency/when-bots-collude.

[5].  The use of artificial intelligence and other forms of algorithmic systems in relation to war is far from new. For examples from nearly three decades ago, see Defense Applications of Artificial Intelligence (Stephen J. Andriole & Gerald W. Hopple eds., 1988).

[6].  See generally, e.g., Paul J. Springer, Military Robots and Drones: A Reference Handbook (2013); see also infra Section 2: Examples of Purported Autonomous Weapon Systems.

[7].  In a recent report, the Defense Science Board uses a definition of autonomy that implies the use of one or more algorithms: “To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.” Defense Science Board, Summer Study on Autonomy 4 (June 2016) (noting that “[d]efinitions for intelligent system, autonomy, automation, robots, and agents can be found in L.G. Shattuck, Transitioning to Autonomy: A human systems integration perspective, p. 5. Presentation at Transitioning to Autonomy: Changes in the role of humans in air transportation [March 11, 2015]. Available at http://human-factors.arc.nasa.gov/workshop/autonomy/download/presentations/Shaddock%20.pdf.”). Id. at n.1.

[8].  E.g., autonomous agents to improve cyber-attack indicators and warnings; onboard autonomy for sensing; and time-critical intelligence from seized media. See Defense Science Board, supra note 7, at 46–53.

[9].  E.g., dynamic spectrum management for protection missions; unmanned underwater vehicles (UUVs) to autonomously conduct sea-mine countermeasures missions; and automated cyber-response. See Defense Science Board, supra note 7, at 53–60.

[10].  E.g., cascaded UUVs for offensive maritime mining, and organic tactical unmanned aircraft to support ground forces. See Defense Science Board, supra note 7, at 60–68. The term “force application” is defined in the report as “the ability to integrate the use of maneuver and engagement in all environments to create the effects necessary to achieve mission objectives.” Id. at 60.

[11].  E.g., predictive logistics and adaptive planning, and adaptive logistics for rapid deployment. See Defense Science Board, supra note 7, at 69–75.

[12].  Under international humanitarian law (IHL), a person is hors de combat if (i) she is in the power of an adverse party, (ii) she clearly expresses an intention to surrender, or (iii) she has been rendered unconscious or is otherwise incapable of defending herself, provided that in any of these cases she abstains from any hostile act and does not attempt to escape; shipwrecked persons cannot be excluded from the construct of hors de combat. This formulation is derived from the Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts art. 41(2), June 8, 1977, 1125 U.N.T.S. 3 [hereinafter AP I]; see also, e.g., Yoram Dinstein, Non-International Armed Conflicts in International Law 164 (2014).

[13].  Ucilia Wang, Driverless Cars Are Data Guzzlers, Wall Street Journal, March 23, 2014, http://www.wsj.com/articles/SB10001424052702304815004579417441475998338.

[14].  David Silver et al., Mastering the Game of Go with Deep Neural Networks and Tree Search, 529 Nature 484, 488 (2016). Go is a board game pitting two players against each other in a contest to surround more territory than their opponent; it is played on a grid of black lines, with game pieces placed on the lines’ intersections. A full-sized board is 19 by 19. Part of the reason Go presents such a difficult computational challenge is that its search space is so large. “After the first two moves of a Chess game,” for instance, “there are 400 possible next moves. In Go, there are close to 130,000.” Danielle Muoio, Why Go is So Much Harder for AI to Beat Than Chess, Tech Insider, March 10, 2016, http://www.techinsider.io/why-google-ai-game-go-is-harder-than-chess-2016-3.

[15].  A Game-Changing Result, The Economist, March 19, 2016, http://www.economist.com/news/science-and-technology/21694883-alphagos-masters-taught-it-game-electrifying-match-shows-what.

[16].  Id.

[17].  In this report, while recognizing certain distinctions and overlaps between them, we use the terms “war” and “armed conflict” interchangeably to denote an armed conflict (whether of an international or a non-international character) as defined in international law and a state of war in the legal sense. See, e.g., Jann Kleffner, Scope of Application of International Humanitarian Law, in The Handbook of International Humanitarian Law (Dieter Fleck ed., 3rd ed. 2013).

[18].  See, e.g., Marten Zwanenburg et al., Humans, Agents and International Humanitarian Law: Dilemmas in Target Discrimination, BNAIC 408 (2005) (examining the destruction of a commercial airliner by the USS Vincennes to illustrate legal and ethical dilemmas involving the use of autonomous agents).

[19]. Among states and commentators, there is no agreement on whether to refer to “autonomous weapons,” “autonomous weapon systems,” or “autonomous weapons systems,” among many other formulations. Throughout this report, where referring to the views of a particular state(s) or commentator(s), we adopt that entity’s or person’s framing. Otherwise, for ease of reference, we adopt the “autonomous weapon system(s)” framing.

[20].  Rebecca Crootof, War Torts: Accountability for Autonomous Weapons Systems, 164 U. Penn. L. Rev. (forthcoming June 2016), http://ssrn.com/abstract=2657680 [hereinafter Crootof, War Torts]. In June 2016, the Defense Science Board highlighted six categories of how autonomy can benefit Department of Defense (DoD) missions:

  • Required decision speed: more autonomy is valuable when decisions must be made quickly (e.g., cyber operations and missile defense);
  • Heterogeneity and volume of data: more autonomy is valuable with high volume data and variety of data types (e.g., imagery; intelligence data analysis; intelligence, surveillance, reconnaissance (ISR) data integration);
  • Quality of data links: more autonomy is valuable when communication is intermittent (e.g., times of contested communications, unmanned undersea operations);
  • Complexity of action: more autonomy is valuable when activity is multimodal (e.g., an air operations center, multi-mission operations);
  • Danger of mission: more autonomy can reduce the number of warfighters in harm’s way (e.g., in contested operations; chemical, biological, radiological, or nuclear attack cleanup); and
  • Persistence and endurance: more autonomy can increase mission duration (e.g., enabling unmanned vehicles, persistent surveillance).

See Defense Science Board, supra note 7, at 45 (June 2016).

[21].  U.S. Dep’t of Defense, Unmanned Systems Integrated Roadmap: FY2013–2038, at 67 (2013), http://www.defense.gov/Portals/1/Documents/pubs/DOD-USRM-2013.pdf.

[22].  Gov’t (Neth.), Government Response to AIV/CAVV Advisory Report no. 97, Autonomous Weapon Systems: The Need for Meaningful Human Control (2016), http://aiv-advice.nl/8gr#government-responses [hereinafter Dutch Government, Response to AIV/CAVV Report]. At the same time, however, the Dutch government “reject[ed] outright the possibility of developing and deploying fully autonomous weapons.” Id.

[23].  See, e.g., Mary Ellen O’Connell, Banning Autonomous Killing, in The American Way of Bombing: Changing Ethical and Legal Norms, from Flying Fortresses to Drones (Matthew Evangelista & Henry Shue eds., 1st ed. 2014).

[24].  Human Rights Watch and the Harvard Law School International Human Rights Clinic, Losing Humanity: The Case against Killer Robots (2012), https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.

[25].  See, e.g., Act, Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/act (last visited Aug. 23, 2016).

[26].  Autonomous Weapons: An Open Letter from AI & Robotics Researchers, Future of Life Institute (July 28, 2015), http://futureoflife.org/open-letter-autonomous-weapons.

[27].  Id.

[28].  AWS have also been raised at the U.N. Human Rights Council, though without the thematic focus given to them in the context of the Convention on Certain Conventional Weapons (CCW). See, e.g., Christof Heyns (Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions), Rep. to Human Rights Council, ¶¶ 142–45, UN Doc. A/HRC/26/36 (Apr. 1, 2014).

[29].  See infra Section 3: International Law pertaining to Armed Conflict — Customary International Law concerning AWS.

[30].  See infra Appendices I and II.

[31].  On various formal and informal models of regulating new technologies, see generally Benjamin Wittes & Gabriella Blum, The Future of Violence: Robots and Germs, Hackers and Drones—Confronting A New Age of Threat (2015); with respect to autonomous military robots, see Gary E. Marchant et al., International Governance of Autonomous Military Robots, 12 Colum. Sci. & Tech. L. Rev. 272 (2011).

[32].  Our concept of “war algorithms” should be distinguished from the “WAR algorithm” concept that has been developed in relation to evaluating environmental impacts. See Environmental Protection Agency, Waste Reduction Algorithm: Chemical Process Simulation for Waste Reduction, https://www.epa.gov/chemical-research/waste-reduction-algorithm-chemical-process-simulation-waste-reduction (last visited Aug. 27, 2016) (explaining that “[t]raditionally chemical process designs, focus on minimizing cost, while the environmental impact of a process is often overlooked. This may in many instances lead to the production of large quantities of waste materials. It is possible to reduce the generation of these wastes and their environmental impact by modifying the design of the process. The WAste Reduction (WAR) algorithm was developed so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environmental and related human health impacts at the design stage.”)

[33].  See infra Section 2: Technology Concepts and Developments (on general definitions of “algorithm”).

[34].  See infra Section 2: Examples of Purported Autonomous Weapon Systems.

[35].  Drawn from the discussion of International Law Association, Committee on Accountability of International Organizations, Berlin Conference: Final Report 5 (2004), http://www.ila-hq.org/en/committees/index.cfm/cid/9 in James Crawford, State Responsibility: The General Part 85 (2013).

[36].  In principle, the threat of use of a war algorithm may (also) give rise to legal implications; however, we focus on the design, development, and use of those algorithms.

[37].  See infra Bibliography.

[38].  Our approach is derived in part from International Law Association, supra note 35, at 5.

2 - Technology Concepts and Developments

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].


Section 2: Technology Concepts and Developments

This section sketches key technology concepts and developments, as well as certain states’ understandings of autonomy in relation to war. We set the stage by discussing algorithms and constructed systems. We then outline recent advancements in the AI field of deep learning. Next, we highlight five states’ approaches to technical autonomy in war. In doing so, we also note accompanying standards that states and commentators are actively vetting, such as “meaningful human control” over AWS. Finally, we describe some of the main technologies that various commentators have addressed in relation to autonomous weapon systems.

Two Key Ingredients

In this briefing report, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system.

Algorithm

An algorithm has been defined informally as “any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.”[39] Accordingly, an algorithm is “a sequence of computational steps that transform the input into the output.”[40] Yet “[w]e can also view an algorithm as a tool for solving a well-specified computational problem.”[41] In this second approach, “[t]he statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.”[42] Here, we are most concerned with algorithms that are expressed in computer code and that can be conceptualized as making “decisions” or “choices” along the computational pathway undertaken in light of the input and in accordance with programmed parameters.
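
To make that informal definition concrete, consider the following minimal sketch (our own illustration, written in Python and not drawn from any system discussed in this report): an algorithm, expressed in computer code, that takes a value as input and, in accordance with a programmed parameter, produces an output that can be conceptualized as a “choice.”

# A minimal, hypothetical illustration: an algorithm expressed in computer
# code that transforms an input into an output "choice" in accordance with
# a parameter set in advance by a human programmer.

THRESHOLD = 50.0  # programmed parameter chosen by a human


def classify_reading(sensor_value: float) -> str:
    """Take a value as input and produce a value (a label) as output."""
    if sensor_value >= THRESHOLD:
        return "flag for review"
    return "ignore"


if __name__ == "__main__":
    for reading in [12.3, 49.9, 73.5]:
        print(reading, "->", classify_reading(reading))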

The relevant algorithms may vary enormously in terms of their sophistication and complexity. But, at base, they all are conceived and coded initially by humans to take some input and produce some output or to describe a specific computational procedure for achieving a defined desirable input/output relationship.

By limiting our inquiry to war algorithms, we narrow the types of algorithms at issue to those that fulfill three conditions: algorithms (1) that are expressed in computer code; (2) that are effectuated through a constructed system; and (3) that are capable of operating in relation to armed conflict. Not all weapons or systems that have been characterized as “AWS” meet these criteria. But most do. And, more to the point, we see these algorithms as a key ingredient in what most commentators and states mean when they address notions of autonomy.

We predicate our definition on the algorithm being capable of operating in relation to armed conflict, even if it is not initially designed for such use. We thus do not limit our classification to algorithms that are in fact used in armed conflict (though the broader category of capability would subsume those that are actually used). A critique of this approach might be that it is over-inclusive because it does not distinguish algorithms and constructed systems that are intended for use in relation to war from the vast array of other such algorithms and systems that might be adapted for such use. Yet one reason to focus on capability—instead of intent—is that much of the underlying technology is modular and can therefore be adapted for use in relation to war even if it was not initially designed and developed to do so. Moreover, with respect to accountability, focusing on capability sweeps in not only those who are in a position to choose to deploy or to operate war algorithms but also those involved in the design and development of those algorithms. The emphasis on capability thereby helps account for the diverse assortment of actors—whether in government, commercial, academic, or other contexts—who might exercise power over, and thus who might be held answerable for, the design, development, or use of war algorithms.

Constructed System

“Robot” is not a legal term of art under international law. One oft-cited, decades-old definition comes from the Robot Institute of America, a trade association of robot manufacturers and users: “a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.”[43] Others draw different definitional boundaries. Alan Winfield, for instance, defines a robot as “an artificial device that can sense its environment and purposefully act on or in that environment.”[44] Neil Richards and William Smart argue that a robot is “a constructed system that displays both physical and mental agency but is not alive in the biological sense.”[45] And the Oxford English Dictionary Online defines a robot in the modern sense[46] as “[a]n intelligent artificial being typically made of metal and resembling in some way a human or other animal.”[47]

We sidestep some of the definitional quandaries attending “robot” by focusing instead on constructed systems. For our purposes, a constructed system is a manufactured machine, apparatus, plant, or platform that is capable both of being used to gather information and of effectuating a “choice” or “decision” which is, in whole or in part, derived through an algorithm expressed in computer code but that is not alive in the biological sense. By limiting our inquiry to systems that are not alive in the biological sense, we also circumvent the subject of biologically engineered agents.

The most common sensors used to gather information in “constructed systems” include those that detect how far away objects are by transmitting certain waves and monitoring their reflections, such as radar (radio waves), sonar (sound waves), and lidar (light waves), as well as cameras. The system may be tele-operated (also known as remotely operated)—or not. It may have a manipulator (used loosely here to denote a component providing the capability to interact in the built environment)—or not. However, if it does not have a manipulator, the system needs some other avenue to effectuate the algorithmically-derived “choice” or “decision” in order to meet our definition.
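
As a rough illustration of the principle these range-finding sensors share, the following sketch (our own, using a nominal speed of sound in seawater; real sonar processing is considerably more involved) estimates the distance to an object from the delay between emitting a pulse and receiving its echo.

# Simplified time-of-flight range estimate (illustrative only). A pulse
# travels out to the object and back, so the one-way range is half the
# round-trip distance.

SPEED_OF_SOUND_SEAWATER = 1500.0  # meters per second (approximate)


def estimated_range(echo_delay_seconds: float,
                    wave_speed: float = SPEED_OF_SOUND_SEAWATER) -> float:
    """Return the approximate one-way distance to the reflecting object."""
    return wave_speed * echo_delay_seconds / 2.0


print(estimated_range(0.2))  # a 0.2-second echo delay -> roughly 150.0 meters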

The constructed systems may come in a diverse array of forms,[48] such as marine, terrestrial, aerial, or space vehicles; missile systems; or biped or quadruped robots.[49] They may operate collaboratively—including as so-called “swarms”[50]—or individually. They may use a range of power sources, such as batteries or internal combustion engines to generate electricity or to power hydraulic or pneumatic actuators. And their costs may run the gamut from the budget of a tinkerer to industrial or governmental-scale programs.

A.I. Advancements

Recently published advancements in AI—especially machine learning and a class of techniques called deep learning—underscore the rapid pace of technical development.[51] Those advancements reach into many areas of modern digital life, underlying everything from “web searches to content filtering on social networks to recommendations on e-commerce websites.”[52]

For many years, “[c]onventional machine-learning techniques were limited in their ability to process natural data in their raw form.”[53] For decades, for instance, “constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data … into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input.”[54] An advance came with representational learning, which “is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification.”[55]

Deep learning—including deep neural networks—marked another advance. (A deep neural network can be thought of as “a network of hardware and software that mimics the web of neurons in the human brain.”[56]) Deep-learning methods have been explained as “representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level.”[57] As experts have explained, “[w]ith the composition of enough such transformations, very complex functions can be learned.”[58] The gist is that, “[f]or classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations.”[59]

Consider the example of a digital image. It

comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts.[60]

Through deep-learning techniques, “these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure.”[61]
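
The following is a deliberately simplified sketch of that core idea (our own illustration using generic array operations; it is not drawn from any system discussed in this report, and its randomly initialized weights stand in for parameters that would normally be learned from data): each layer applies a simple non-linear transformation to the previous layer's representation.

import numpy as np

# A heavily simplified sketch of "composing simple but non-linear modules."
# Each layer applies a linear transform followed by a non-linearity, turning
# one level of representation into the next, slightly more abstract one.

rng = np.random.default_rng(0)


def relu(x):
    """A simple non-linearity: negative values are set to zero."""
    return np.maximum(0.0, x)


# Randomly initialized weights stand in for parameters that, in a real deep
# network, would be learned from data by a general-purpose training procedure.
layer_weights = [
    rng.standard_normal((8, 16)),   # raw input -> first representation
    rng.standard_normal((16, 16)),  # first -> second representation
    rng.standard_normal((16, 4)),   # second -> output scores
]


def forward(raw_input):
    representation = raw_input
    for weights in layer_weights:
        representation = relu(representation @ weights)
    return representation


print(forward(rng.standard_normal(8)))  # scores for four hypothetical classes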

Already, “[d]eep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years.”[62] Those include beating records in image recognition and speech recognition, as well as beating other machine-learning techniques at, for example, predicting the activity of drug molecules.[63] Writing in 2015, some experts “think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data.”[64] In line with this view, “[n]ew learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.”[65]

One mark of that progress came late last year when a computer program, AlphaGo, achieved a feat previously thought to be at least a decade away: defeating a human professional player in a full-sized game of Go.[66] (A few months later, AlphaGo won four of five matches against Lee Sedol, who, as one of the top players in the world, had achieved the highest rank of nine dan.[67]) The system designers introduced a new approach based on deep convolutional neural networks that used “value networks” to evaluate board positions and “policy networks” to select moves. (Convolutional neural networks—the typical architecture of which is structured as a series of stages—“are designed to process data that come in the form of multiple arrays.”[68] In other words, these networks “use many layers of neurons, each arranged in overlapping tiles, to construct increasingly abstract, localized representations of an image.”[69]) For AlphaGo, those deep neural networks were “trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play.”[70] AlphaGo developers also introduced a new search algorithm—which was designed in part to encourage exploration on its own—that combines a sophisticated simulation technique (called Monte Carlo tree search) with the value and policy networks.[71]
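
By way of a drastically simplified, hypothetical sketch (our own; the published AlphaGo architecture and search procedure are far more elaborate), the general idea of combining a value estimate with a policy prior in a tree search can be rendered as a scoring rule that favors moves that look good so far, are assigned a high prior probability, and have been explored relatively little.

import math

# Drastically simplified node-selection rule combining a value estimate with
# a policy prior, loosely in the spirit of policy- and value-guided tree
# search. All numbers below are invented for illustration.


def selection_score(mean_value, prior, visits, parent_visits, c=1.0):
    """Score a candidate move: its estimated value plus an exploration bonus
    weighted by the policy prior and discounted by how often it was visited."""
    exploration = c * prior * math.sqrt(parent_visits) / (1 + visits)
    return mean_value + exploration


candidate_moves = {
    "A": {"mean_value": 0.55, "prior": 0.40, "visits": 30},
    "B": {"mean_value": 0.60, "prior": 0.10, "visits": 5},
    "C": {"mean_value": 0.40, "prior": 0.50, "visits": 2},
}
parent_visits = sum(m["visits"] for m in candidate_moves.values())

best = max(candidate_moves,
           key=lambda k: selection_score(**candidate_moves[k],
                                         parent_visits=parent_visits))
print("explore next:", best)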

By grounding our discussion in algorithms expressed in computer code and effectuated through constructed systems, we sidestep some of the doctrinal debates on what constitutes “artificial intelligence” and “artificial general intelligence”—and on whether the latter may be realistically achievable or is more the stuff of science fiction. These questions are outside of the scope of this briefing report, but they are nonetheless vitally important. In any event, it merits emphasis that existing learning algorithms and architectures already have remarkable capabilities that, at least, seem to approach aspects of human “decision-making.”

For their part, creators of AlphaGo have characterized Go as “exemplary in many ways of the difficulties faced by artificial intelligence: a challenging decision-making task, an intractable search space, and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function.”[72] In the eyes of its designers, AlphaGo provides “hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains.”[73]

Approaches to Technical Autonomy in War

As noted above, there is no agreement on what “autonomy” means in the context of the discussion to date on autonomous weapon systems.

Commentators’ views on what constitutes “autonomy” in this context range enormously. Some, for instance, focus on whether the system navigates with a human on board (“manned”) or without one (“unmanned”). Others emphasize geography, such as whether the weapon is operated by a human remotely or proximately. Some hold that the “autonomy” in AWS should be reserved only for “critical functions” in the conduct-of-hostilities targeting cycle. Still others argue that autonomy is the capability of a system, once launched, to sense, think, learn, and act, all without further human intervention. A number of definitions combine various components of these notions. But however one defines and classifies it, it is beyond doubt that some existing military systems contain at least a degree of autonomy. (In the last sub-section of this section, we profile examples of weapons, weapon systems, and weapon platforms that some commentators have characterized as AWS.)

In this sub-section, we focus on the positions of states, because discerning states’ positions and practices is one of the key steps in illuminating the scope of international law as it currently stands (lex lata) and distinguishing that from nascent norms and from the law as it should be (lex ferenda). A handful of states have considered or formally adopted definitions relevant to AWS, whether while focusing on weapon systems or unmanned aerial systems. Below, we summarize five of the most elaborate sets of these considerations and definitions—those by Switzerland, France, the Netherlands, the United States, and the United Kingdom.

Switzerland

In the lead-up to the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Switzerland published an “Informal Working Paper” titled “Towards a ‘compliance-based’ approach to LAWS.” The paper proposes “to initially describe autonomous weapons systems (AWS) simply as” follows:

[W]eapons systems that are capable of carrying out tasks governed by IHL in partial or full replacement of a human in the use of force, notably in the targeting cycle.[74]

According to the paper, “[s]uch a working definition is inclusive, accounts for a wide array of system configurations, and allows for a debate that is differentiated, compliance-based, and without prejudice to the question of appropriate regulatory response.”[75] In the view of Switzerland, “the working definition proposed is not conceived in any way to single out only those systems which could be seen as legally objectionable.”[76] The authors note that “[a]t one end of the spectrum of systems falling within that working definition, States may find some subcategories to be entirely unproblematic, while at the other end of the spectrum, States may find other subcategories unacceptable.”[77] Finally, the paper notes, “[a]s discussions advance, this working definition could and probably should evolve to become more specific and purposeful.”[78]

The Netherlands

On April 7, 2015, the Netherlands Ministries of Foreign Affairs and of Defense requested a report from the Advisory Council on International Affairs (AIV) and the Advisory Committee on Issues of Public International Law (CAVV) addressing five sets of questions concerning autonomous weapon systems:

  1. What role can autonomous weapons systems (and autonomous functions within weapons systems) fulfil in the context of military action now and in the future?
  2. What changes might occur in the accountability mechanism for the use of fully or semi-autonomous weapons systems in the light of associated ethical issues? What role could the concept of ‘meaningful human control’ play in this regard, and what other concepts, if any, might be helpful here?
  3. In its previous advisory report, the CAVV states that the deployment of any weapons system, whether or not it is wholly or partly autonomous, remains subject to the same legal framework. As far as the CAVV is concerned, there is no reason to assume that the existing international legal framework is inadequate to regulate the deployment of armed drones. Does the debate on fully or semi-autonomous weapons systems give cause to augment or amend this position?
  4. How do the AIV and the CAVV view the UN Special Rapporteur’s call for a moratorium on the development of fully autonomous weapons systems?
  5. How can the Netherlands best contribute to the international debate on this issue?

A joint committee of the AIV and the CAVV prepared a report, which the AIV adopted on October 2, 2015 and the CAVV adopted on October 12, 2015.[79] On March 2, 2016, the government responded to the report. (We use the term “government” in this context interchangeably with reference to the Ministries of Foreign Affairs and of Defense of the Netherlands.) The main conclusion of the report, in the words of the government’s response, “is that meaningful human control is required in the deployment of autonomous weapon systems”—a view with which the government concurs.[80]

The government—while noting “[t]here is as yet no internationally agreed definition of an autonomous weapon system”—supports the working definition of AWS which the advisory committee adopted:[81]

A weapon that, without human intervention, selects and engages targets matching certain predetermined criteria, following a human decision to deploy the weapon on the understanding that an attack, once launched, cannot be stopped by human intervention.[82]

Underlying this definition is the notion of the “wider loop” of the decision-making process, which plays a prominent role in the Dutch government’s understanding of accountability concerning AWS. In the view of the Dutch government, with respect to AWS humans are involved in that “wider loop” because humans “play a prominent role in programming the characteristics of the targets that are to be engaged and in the decision to deploy the weapon.”[83] That means, in short, “that humans continue to play a crucial role in the wider targeting process. An autonomous weapon as defined above is therefore only deployed after human consideration of aspects such as target selection, weapon selection and implementation planning, including an assessment of potential collateral damage.”[84] In addition, the government notes, “the autonomous weapon is programmed to perform specific functions within pre-programmed conditions and parameters. Its deployment is followed by a human assessment of the effects. Assessments of potential collateral damage (proportionality) and accountability under international humanitarian law are of key importance in this respect.”[85]

As summarized by the Dutch government, “[t]he advisory committee states that if the deployment of an autonomous weapon system takes place in accordance with the process described above, there is meaningful human control. In such cases, humans make informed, conscious choices regarding the use of weapons, based on adequate information about the target, the weapon in question and the context in which it is to be deployed.”[86] For its part, “[t]he advisory committee sees no immediate reason to draft new or additional legislation for the concept of meaningful human control.”[87] Instead, “[t]he concept should be regarded as a standard deriving from existing legislation and practices (such as the targeting process).”[88] Over all, the government expressly affirms that it “supports the definition given above of an autonomous weapon system, including the concept of meaningful human control, and agrees that no new legislation is required.”[89]

France

In a “non-paper” circulated in the context of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems, France articulated the following considerations with respect to such systems:

France considers that LAWS [Lethal Autonomous Weapons Systems] share the following characteristics:

- Lethal autonomous weapons systems are fully autonomous systems. LAWS are future systems: they do not currently exist.

- Remotely operated weapons systems and supervised weapons systems should not be regarded as LAWS since a human operator remains involved, in particular during the targeting and firing phases. Existing automatic systems are not LAWS either[.]

- LAWS should be understood as implying a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command.

- The delivery platform of a LAWS would be capable of moving, adapting to its land, marine or aerial environments and targeting and firing a lethal effector (bullet, missile, bomb, etc.) without any kind of human intervention or validation.”[90]

Compared to most other states that have put forward working definitions, France articulates a relatively narrow definition of what constitutes a lethal autonomous weapons system in the context of the CCW. Most striking, perhaps, is the condition that there be “a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command.” Moreover, France clarifies that, in its view, the definition of a “lethal autonomous weapons system” includes only a delivery “platform” that “would be capable of moving, adapting to its land, marine or aerial environments and targeting and firing a lethal effector … without any kind of human intervention or validation.” This formulation combines autonomy in navigation and maneuver with autonomy in certain key elements of the targeting cycle.

United States

In a series of directives and other documents, the U.S. Department of Defense (DoD) has elaborated one of the most technically specific state approaches to autonomy in relation to weapon systems.

A central document is DoD Directive 3000.09 (2012). It “[e]stablishes DoD policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms.”[91] The directive is applicable to certain DoD actors and related organizational entities.[92] It concerns “[t]he design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that can independently select and discriminate targets,” as well as “[t]he application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.”[93] However, the directive expressly “does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance.”[94] Among the relevant terms defined in the glossary of Directive 3000.09 are the following:

Autonomous weapon system: “A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”[95]

Human-supervised autonomous weapon system: “An autonomous weapon system that is designed to provide human operators with the ability to intervene and terminate engagements, including in the event of a weapon system failure, before unacceptable levels of damage occur.”[96]

Semi-autonomous weapon system: “A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator. This includes: [s]emi-autonomous weapon systems that employ autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.”[97]

Directive 3000.09 establishes that, as a matter of policy, “[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[98] More specifically, “[s]ystems will go through rigorous hardware and software verification and validation … and realistic system developmental and operational test and evaluation … in accordance with” certain guidelines.[99] In addition, “[t]raining, doctrine, and tactics, techniques, and procedures … will be established.”[100] In particular, those measures will ensure that autonomous and semi-autonomous weapon systems will, first, “[f]unction as anticipated in realistic operational environments against adaptive adversaries.” Second, they will ensure that those systems will “[c]omplete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” And third, they will ensure that those systems “[a]re sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.”[101]

The directive also establishes that “[c]onsistent with the potential consequences of an unintended engagement or loss of control of the system to unauthorized parties, physical hardware and software will be designed with appropriate: … Safeties, anti-tamper mechanisms, and information assurance in accordance with [another relevant DoD directive]. … Human-machine interfaces and controls.”[102] Furthermore, “[i]n order for operators to make informed and appropriate decisions in engaging targets,” the directive establishes that “the interface between people and machines for autonomous and semi-autonomous weapon systems shall” have three characteristics. First, they shall “[b]e readily understandable to trained operators.” Second, they shall “[p]rovide traceable feedback on system status.” And third, they shall “[p]rovide clear procedures for trained operators to activate and deactivate system functions.”[103]

Directive 3000.09 further lays down, also as a matter of policy, that “[p]ersons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).”[104] The directive establishes that autonomous and semi-autonomous weapon systems intended to be used in a manner that falls within three certain sets of policies will be considered for approval in accordance with enumerated approval procedures and other applicable policies and issuances.[105] The first such policy set establishes that “[s]emi-autonomous weapon systems (including manned or unmanned platforms, munitions, or sub-munitions that function as semi-autonomous weapon systems or as subcomponents of semi-autonomous weapon systems) may be used to apply lethal or non-lethal, kinetic or non-kinetic force.” Further pursuant to that policy set, “[s]emi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.” The second policy set lays down that “[h]uman-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks” for static defense of manned installations and for onboard defense of manned platforms. Finally in this connection, the third policy set establishes that autonomous weapon systems “may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets in accordance with” a separate DoD directive.[106]

Directive 3000.09 further provides that “[a]utonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside” those three sets of policies must be approved by the Under Secretary of Defense for Policy, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and the Chairman of the Joint Chiefs of Staff “before formal development and again before fielding in accordance with” enclosed guidelines and other applicable policies and issuances.[107] In addition, Directive 3000.09 lays down, also as a matter of policy, that “[i]nternational sales or transfers of autonomous and semi-autonomous weapon systems will be approved in accordance with existing technology security and foreign disclosure requirements and processes, in accordance with” an enumerated memorandum.[108] Enclosures to the directive further explain certain references; further elaborate verification and validation as well as testing and evaluation of autonomous and semi-autonomous weapon systems; set down guidelines for review of certain such systems; elaborate responsibilities; and provide definitions in a glossary.[109]

For its part, the U.S. DoD Law of War Manual gives examples of two ways that some weapons may have autonomous functions. First, “mines may be regarded as rudimentary autonomous weapons because they are designed to explode by the presence, proximity, or contact of a person or vehicle, rather than by the decision of the operator.”[110] And second, “[o]ther weapons may have more sophisticated autonomous functions and may be designed such that the weapon is able to select targets or to engage targets automatically after being activated by the user.”[111] The Manual authors give the example that “the United States has used weapon systems for local defense with autonomous capabilities designed to counter time-critical or saturation attacks. These weapon systems have included the Aegis ship defense system and the Counter-Rocket, Artillery, and Mortar (C-RAM) system.”[112]

United Kingdom

The United Kingdom Ministry of Defence (MoD) has addressed autonomy primarily in relation to unmanned aircraft systems. The MoD promulgated the key document—Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems (Joint Doctrine Note)—on March 30, 2011.[113] That document’s “purpose is to identify and discuss policy, conceptual, doctrinal and technology issues that will need to be addressed if such systems are to be successfully developed and integrated into future operations.”[114]

In the section on definitions, the authors discuss “automation” and “autonomy,” emphasizing that, confusingly, the two “terms are often used interchangeably even when referring to the same platform; consequently, companies may describe their systems to be autonomous even though they would not be considered as such under the military definition.”[115] Noting that “[i]t would be impossible to produce definitions that every community would agree to,” the Joint Doctrine Note authors chose the following definitions in order to be “as simple as possible, while making clear the essential differences in meaning between them”:[116]

Automated system: “In the unmanned aircraft context, an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a pre-defined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable.”

Autonomous system: “An autonomous system is capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”[117]
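
To illustrate the distinction drawn above, the “automated system” category can be rendered as a minimal, hypothetical sketch (entirely our own, with invented inputs and rules): because the rule set is fixed in advance, the output for any given sensor inputs is predictable to anyone who knows the rules, whereas for an “autonomous system,” as defined, only the overall activity is said to be predictable.

# A hypothetical, minimal "automated system" in the sense defined above:
# in response to sensor inputs, it follows a pre-defined set of rules, so
# its output is predictable to anyone who knows those rules.


def automated_response(altitude_m: float, speed_mps: float) -> str:
    if altitude_m < 100 and speed_mps > 50:
        return "raise alert"
    if altitude_m < 100:
        return "monitor"
    return "no action"


print(automated_response(altitude_m=80, speed_mps=60))   # raise alert
print(automated_response(altitude_m=80, speed_mps=20))   # monitor
print(automated_response(altitude_m=500, speed_mps=60))  # no action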

Based on those definitions, the Joint Doctrine Note authors deduce four sets of points. The basic notion of the first set is that “[a]ny or none of the functions involved in the operation of an unmanned aircraft may be automated.”[118] In a related footnote, it is stated that “[f]or major functions such as target detection, only some of the sub-functions may be automated, requiring human input to deliver the overall function.”[119]

The main idea guiding the second set of points is that “[a]utonomous systems will, in effect, be self-aware and their response to inputs indistinguishable from, or even superior to, that of a manned aircraft.”[120] As such, according to the authors, those autonomous systems “must be capable of achieving the same level of situational understanding as a human.”[121] At the time of publication (2011), the authors stated, “[t]his level of technology is not yet achievable and so, by the definition of autonomy in this JDN, none of the currently fielded or in-development unmanned aircraft platforms can be correctly described as autonomous.”[122]

The third set of points concerns the importance of “[t]he distinction between autonomous and automated ... as there are moral, ethical and legal implications regarding the use of autonomous unmanned aircraft.”[123] Those issues are discussed in another part of the Joint Doctrine Note.[124] The fourth and final set of points deduced by the authors concerns “an over-arching principle that, whatever the degree of automation, an unmanned aircraft should provide at least the same, or better, safety standard as a manned platform carrying out the same task.”[125]

In addressing accountability, the Joint Doctrine Note states that “[l]egal responsibility for any military activity remains with the last person to issue the command authorising a specific activity.”[126] The Joint Doctrine Note authors recognize, however, that “[t]his assumes that a system’s basic principles of operation have, as part of its release to service, already been shown to be lawful, but that the individual giving orders for use will ensure its continued lawful employment throughout any task.”[127] An assumption underlying this process is “that a system will continue to behave in a predictable manner after commands are issued,” yet, the authors note, “clearly this becomes problematical as systems become more complex and operate for extended periods.”[128] Indeed, according to the authors, “[i]n reality, predictability is likely to be inversely proportional to mission and environmental complexity. For long-endurance missions engaged in complex scenarios, the authorised entity that holds legal responsibility will be required to exercise some level of supervision throughout.”[129] If that is the case, in the view of the authors, “this implies that any fielded system employing weapons will have to maintain a 2-way data link between the aircraft and its controlling authority.”[130]

Examples of Purported Autonomous Weapon Systems

This section profiles weapons, weapon systems, and weapon platforms that various commentators have characterized as autonomous weapon systems, for example because they exhibit or reflect varying levels, forms, or notions of autonomy or automation in relation to navigation, maneuvering, or the targeting cycle. The inclusion of a weapon here does not reflect our own evaluation that the weapon, system, or platform has (or lacks) autonomous capabilities or that it fits within a legally relevant definition of autonomy. Most, but not all, of the weapons, systems, and platforms described here operate based, at least in part, on a war algorithm.

Mines

Anti-Personnel Mines

Anti-personnel mines are designed to “reroute or push back foot soldiers from a given geographic area,” and can kill or injure foot soldiers[131] (in contrast to, for example, naval mines, which are designed to destroy ships).[132] They are typically activated “by direct pressure from above, by pressure put on a wire or filament attached to a pull switch, by a radio signal or other remote firing method, or even simply by the proximity of a person within a predetermined distance.”[133] For these reasons, anti-personnel mines do not discriminate among potential targets, as they are not capable of independently tracking different targets and choosing among them.

Underwater Mines

Naval Mines — General

Naval mines are capable of being detonated by either seismic sensors that sense vibrations in the water as a ship approaches[134] or acoustic sensors that detect sounds generated by passing ships.[135] Some modern mines use a combination of seismic, acoustic, electric, and magnetic sensors to detect nearby ships.[136] Naval mines explode when triggered, without a proximate human directing them to detonate. Naval mines do not discriminate among potential targets; if something triggers its detonation, a naval mine explodes without any independent decision-making process in which it might “choose” whether to detonate.
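To make the contrast with algorithmically “choosing” systems concrete, the following minimal Python sketch models the triggering logic just described. The sensor names and the “any sensor trips” rule are our own illustrative assumptions rather than a specification of any actual mine; the point is simply that detonation follows mechanically from a trigger, with no selection among targets.

```python
# Illustrative sketch only: a toy model of multi-sensor triggering.
# Sensor names and the "any sensor trips" rule are hypothetical assumptions.

def mine_detonates(seismic_trip: bool, acoustic_trip: bool,
                   magnetic_trip: bool, electric_trip: bool) -> bool:
    """Detonate whenever any sensor trips; there is no 'choice' among targets."""
    return seismic_trip or acoustic_trip or magnetic_trip or electric_trip

# Example: an acoustic trigger alone is enough to cause detonation.
print(mine_detonates(False, True, False, False))  # -> True
```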

MK-60 CAPTOR (United States)

The MK-60 EnCAPsulated TORpedo (CAPTOR), manufactured by Alliant Techsystems, is a sophisticated anti-submarine weapon. It is a deep-water mine that, when triggered, launches a torpedo at hostile targets. It is anchored to the ocean floor and uses a surveillance system known as Reliable Acoustic Path (RAP) sound propagation to track vessels above it.[137] Vessels traveling on or very close to the surface are labeled as ships and are not attacked. Vessels traveling far enough below the surface are labeled as submarines. When it senses a submarine that does not have a “friendly” acoustic signature, the MK-60 launches a torpedo at the target.[138] It therefore has autonomy in its functions in terms of not requiring human authorization to unleash a specific attack. Yet the MK-60 is not capable of “choosing” whether to attack an enemy submarine; if it detects an enemy submarine, it launches the torpedo with no (further) “decision-making” process involved.
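The depth-based labeling and signature check described above can be rendered, purely for illustration, as a short decision routine. The following Python sketch rests on our own simplifying assumptions (the depth threshold, names, and data structures are invented and are not drawn from any actual CAPTOR specification); it shows only how an attack can be effectuated without any further “decision-making” once the classification conditions are met.

```python
# Illustrative sketch only: hypothetical rendering of a CAPTOR-style decision routine.
# The 50-meter cutoff and all names are placeholders, not system specifications.

SUBMARINE_DEPTH_THRESHOLD_M = 50.0  # hypothetical cutoff separating "ships" from "submarines"

def classify_contact(depth_m: float) -> str:
    """Label a detected vessel based solely on how far below the surface it travels."""
    return "submarine" if depth_m > SUBMARINE_DEPTH_THRESHOLD_M else "ship"

def should_launch_torpedo(depth_m: float, acoustic_signature: str,
                          friendly_signatures: set) -> bool:
    """Launch only at contacts labeled as submarines whose signature is not 'friendly'."""
    if classify_contact(depth_m) != "submarine":
        return False  # surface contacts are never attacked
    return acoustic_signature not in friendly_signatures  # no further "choice" beyond this check

# Example: a deep contact with an unrecognized signature results in a launch.
print(should_launch_torpedo(120.0, "unknown", {"friendly-a", "friendly-b"}))  # -> True
```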

Unmanned Vehicles and Systems

Unmanned Vehicles — General

Unmanned Aerial Vehicles

Unmanned Aerial Vehicles (UAVs), also called drones, comprise a broad category and refer to any aircraft without a human pilot onboard. Their functions can span from surveillance and reconnaissance to military attacks. Unmanned Combat Aerial Vehicles (UCAVs) are a subset of UAVs. Different models operate with varying degrees of autonomy across different functions. Traditionally, pilots have operated drones remotely, but drones are becoming increasingly capable of certain autonomous functions. Models such as the nEUROn (which has been referred to as a UCAV; see below) can in key respects fly autonomously,[139] compensating for unexpected events like changing weather patterns, and the X-47B (see below) can even refuel itself in mid-air.[140] The technological capability of certain UAVs, once launched, to select and attack targets, without further human intervention, seems to exist, but most drones require human authorization or guidance before deploying lethal force. The Harpy (see below)—a “fire and forget, fully autonomous” so-called “loitering munition”—is one notable exception.[141]

Unmanned Surface Vehicles

Unmanned Surface Vehicles (USVs) broadly refer to any watercraft that operates on the surface of the water without an onboard crew. They have a wide range of commercial and military functions. The U.S. Navy often uses them for minesweeping, for surveillance and reconnaissance, and to detect submarines.[142] Like UAVs, USVs might operate with various degrees of autonomy across different functions, spanning a range from remote-controlled operation to autonomy in navigation and maneuver.[143]

Unmanned Maritime Vehicles

Unmanned Maritime Vehicles include both USVs and Autonomous Underwater Vehicles (AUVs). Both USVs and AUVs generally perform similar functions like surveillance and minesweeping.[144] Different models operate with various degrees of autonomy across different functions.[145]

Unmanned Vehicles and Systems — Specific

Dominator (United States)

Currently under development by Boeing, the Dominator is intended to be a “long-endurance, autonomous UAV for intelligence, surveillance, and reconnaissance missions and potentially for strike capability.”[146] According to Boeing, the Dominator will employ “autonomous flight using small-diameter bomb avionics,” and can be deployed from a variety of artillery and vehicles, including unmanned aircraft.[147] Boeing will also examine the potential to incorporate “Textron Defense System’s Common Smart Submunition (CSS)” to differentiate and deploy against both fixed and moving targets.[148]

Guardium (Israel)

The Guardium system, developed by G-NIUS, Israel Aerospace Industries, and Elbit Systems, includes both manned and unmanned ground vehicles (UGVs) and is used by the Israel Defense Forces.[149] According to the chief executive officer of G-NIUS, the latest design of the Guardium displayed at a weapons exhibition in 2015 is capable of serving a variety of purposes, including carrying missiles, loitering munitions, or a UAV for reconnaissance missions.[150] The Guardium vehicles have “varying degrees” of autonomy: for instance, the vehicles are capable of responding to various obstacles, “automatically deploy[ing] subsystems,” and patrolling Israel’s border with Gaza,[151] yet human operators may override or intervene to control the vehicle’s functions.[152]

K-MAX Helicopter (United States)

Lockheed Martin designed the K-MAX helicopter, which is capable of performing a variety of missions, including cargo delivery in combat, firefighting, and humanitarian aid.[153] While the K-MAX helicopter can seat a pilot onboard, it can also be operated remotely, allowing the system to function in a variety of high-risk environments.[154]

Knifefish (United States)

The Knifefish, designed as an unmanned underwater vehicle (UUV), is used to locate mines,[155] including those buried in so-called “high clutter environments.”[156] General Dynamics Mission Systems and Bluefin Robotics have been developing various models to be used by the U.S. Navy, possibly beginning in 2018 or 2019.[157] The Knifefish operates with autonomy in its function to sweep for mines in various underwater environments.[158]

Lijian (China)

China launched a prototype of Lijian, meaning “sharp sword,” on November 20, 2013.[159] Shenyang Aircraft Company and the Hongdu Aircraft Industries Corporation reportedly designed and manufactured the unmanned combat aerial vehicle (UCAV).[160] Other than its similar configuration to the X-47B, little is known about the UCAV or its capabilities.[161] Notably, it did not appear at Airshow China in 2014; however, the China Aerospace Science and Technology Corporation has “insinuated” that the Lijian program is “alive and well.”[162] Because little, if any, information about the Lijian’s capabilities is publicly known, it remains unclear whether the Lijian employs autonomy in its system. More generally, the release of information about China’s air forces indicates that China aims to develop an air force “capable of conducting both offensive and defensive operations,” to include “the enhancement of reconnaissance and strategic projection capabilities.”[163]

nEUROn (France, Greece, Italy, Spain, Sweden, Switzerland)

The nEUROn is an unmanned combat air vehicle (UCAV) being developed by Dassault Aviation and several European nations.[164] The nEUROn is designed to perform reconnaissance and combat missions. The various countries involved in the nEUROn program have been testing its capabilities, assessing, among other things, the “detection, localization, and reconnaissance of ground targets in autonomous modes.”[165] Testing of the nEUROn, which is designed as a demonstrator of current technologies, will also evaluate its capability to “drop…Precision Guided Munitions through the internal weapon bay.”[166]

Platform M (Russia)

According to Russian media, Platform M is a “remote-controlled robotic unit” developed by the Progress Scientific Research Technological Institute of Izhevsk.[167] Reportedly, Platform M has the capability to “destroy targets in automatic or semiautomatic control systems.”[168] Its “targeting mechanism works automatically without human assistance,” according to news reports.[169]

Pluto Plus (Italy)

The Pluto and Pluto Plus remotely operated vehicles (ROVs), also referred to as unmanned underwater vehicles (UUVs),[170] operate underwater to identify mines using features such as “sonar sensors for navigation, search, obstacle avoidance and identification,” as well as the capability to relay information, including video imagery, to the operator.[171] The Italian company Gaymarine developed the Pluto and Pluto Plus models, which are used in conjunction with other mine-countermeasure vehicles (MCMVs) by various navies throughout the world, including those of Italy, Nigeria, Norway, South Korea, Spain, and Thailand.[172] An operator above the water pilots the Pluto Plus, using a “remote control console” to maneuver the vehicle.[173]

Protector USV (Israel)

Developed and manufactured by Rafael Advanced Defense Systems, the 11m version of the Protector USV contains an “enhanced remotely controlled water can[n]on system for non-lethal and firefighting capabilities.”[174] It includes an unmanned boat, a tactical control system, and mission modules.[175] The 11m model includes features that will reportedly enable the USV to engage in “surveillance, reconnaissance, mine warfare, and anti-submarine warfare.”[176] The 11m model, as with earlier models of the Protector, employs two operators who work remotely from a dual-console station, controlling both the boat and the payload.[177]

Sea Hunter (United States)

In 2016, the Defense Advanced Research Projects Agency (DARPA), a U.S. government agency, designed a prototype of an autonomous surface vessel named Sea Hunter, which was manufactured by Leidos.[178] According to DARPA, the vessel can “robustly track quiet diesel electric submarines,”[179] with the ability to travel for up to several months at a time and over considerable distances; developers anticipate that it has the capability to perform other functions as well.[180] Sea Hunter is capable of autonomy in certain functions in two ways. First, it is capable of navigating and maneuvering independently without colliding with other ships.[181] Second, it is capable of locating and tracking diesel electric submarines, which can be extremely quiet and difficult to detect, within a range of two miles.[182] A human can take control of the vessel if necessary, but it is designed to perform its functions without any proximate human direction.[183]

Skat (Russia)

In 2013, the developer MiG reportedly signed an agreement to develop an unmanned combat air vehicle (UCAV) called Skat.[184] According to a Russian news agency, Skat would “carry out strike missions on stationary targets, especially air defense systems in high-threat areas, as well as mobile land and sea targets.”[185] Also according to a Russian news agency, Skat would “navigate in autonomous modes.”[186] More recent reports, however, state that Russia cancelled plans to develop Skat and note that it is “unclear” whether Russia has continued to develop this kind of technology.[187]

Taranis (United Kingdom)

Taranis is an unmanned aerial combat stealth drone being developed by the British company BAE Systems to demonstrate current technologies.[188] It is capable of performing surveillance and reconnaissance and of serving in combat missions. According to BAE Systems, the company is attempting to determine whether the Taranis can “strike targets ‘with real precision at long range, even in another continent.’”[189] Taranis is theoretically capable of flying autonomously (although during test flights, it has always been controlled remotely by a human operator).[190] A remote human operator must give authorization before Taranis is capable of attacking any target, although the drone identifies potential targets and, once an attack has been authorized, aims at them.[191]

X-47B (United States)

The X-47B is an unmanned aerial combat stealth drone that was developed by the United States, built by Northrop Grumman, and designed as a “test and development vehicle for advancing control technologies and systems necessary for operating [UAVs] in and around aircraft carriers.”[192] According to the U.S. Navy, it developed the X-47B as a “demonstrator” to showcase current capabilities; although the X-47B has not been armed, it is capable of carrying two 2,000-pound bombs.[193] While the X-47B reportedly has autonomy in certain functions,[194] an operator can take control of the X-47B via a Control Display Unit.[195] The X-47B pioneered several autonomous flight maneuvers, including the “first autonomous landing on an aircraft carrier and the first mid-air refueling by a [UAV].”[196] In principle, human authorization is required before the X-47B could be used to intentionally deploy deadly force, but the precise way in which the human operator fits into this equation is not publicly reported.[197]

Missile Systems

Missile Systems — General

“Fire and Forget” Missile Systems

“Fire and forget” missiles are capable, once launched, of reaching their target with no further human assistance. With older missile systems, the operator who fired the missile had to help guide the missile towards its target by, for example, continuing to track the target and transmitting “corrective commands” to the missile.[198] Newer “fire and forget” missiles, such as the FGM-148 Javelin (discussed below), are capable, once fired, of independently tracking their targets without outside guidance or control.[199] They are also capable of navigating certain difficult terrain on their own, and some, like the Brimstone and Brimstone 2 (discussed below), are capable of locating their target even when it is not initially in the line of sight of the launch location.

Missile Systems — Specific

Brimstone and Brimstone 2 (United Kingdom)

Brimstone is an anti-armor, “fire and forget” missile first used in 2005, and developed initially by GEC-Marconi Radar and Defense Systems (later MBDA UK).[200] The Royal Air Force (RAF) began using the Brimstone in Iraq and Afghanistan during 2008 and 2009.[201] Brimstone 2, which entered service in 2016,[202] incorporates a number of improvements from the initial Brimstone model.[203] Brimstone included “embedded algorithms” and could strike both land and naval targets.[204] Brimstone 2 introduced “an improved set of targeting algorithms,” as well as “autopilot and seeker enhancements.”[205] It is a “fire and forget” missile that is capable of autonomy in navigating terrain as it travels toward its target and in certain respects of independently locating a particular target by discriminating among potential candidates.[206] Once launched, Brimstone is capable of “sweeping” a large target area, searching for a specific type of target, the details of which can be pre-programmed into each individual missile prior to launch. For example, a Brimstone missile is capable of being programmed to target only an armored vehicle, ignoring other objects.[207]
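The pre-programmed target filtering described above can be illustrated with a minimal sketch. The classes, target categories, and first-match selection rule below are hypothetical assumptions of ours, not the missile’s actual targeting algorithm; the sketch only conveys the idea of sweeping an area and ignoring anything that does not match a profile loaded before launch.

```python
# Illustrative sketch only: hypothetical model of pre-programmed target filtering.
# Categories, classes, and the first-match rule are placeholders, not the real algorithm.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Contact:
    category: str                  # e.g. "armored_vehicle", "truck", "building"
    location: Tuple[float, float]  # position within the swept area

def select_target(contacts: List[Contact], programmed_category: str) -> Optional[Contact]:
    """Return the first swept contact matching the category loaded before launch; ignore all others."""
    for contact in contacts:
        if contact.category == programmed_category:
            return contact
    return None  # nothing matching the pre-programmed profile was found in the swept area

# Example: a missile programmed for armored vehicles ignores the truck it sweeps first.
swept = [Contact("truck", (0.0, 1.0)), Contact("armored_vehicle", (2.0, 3.0))]
print(select_target(swept, "armored_vehicle"))
```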

FGM-148 Javelin (United States)

The Javelin is a “fire and forget” anti-tank missile developed by the United States with a range of 2,500 meters.[208] Multiple countries have purchased the Javelin, including Australia, Bahrain, the Czech Republic, France, Ireland, Jordan, Lithuania, New Zealand, Norway, Oman, the United Arab Emirates, the United Kingdom, and the United States.[209] The United States has also recently approved sales of the missile to other countries, including Qatar.[210] Both Raytheon and Lockheed Martin manufacture the Javelin.[211] Two human operators carry and launch the Javelin.[212] A human operator must select the Javelin’s target; however, the missile guides itself to the target, allowing the human operators to leave the launch site before the missile strikes. Operators are capable of identifying targets “either directly [in] line-of-sight or with help from the missile’s guidance capability.”[213]

Harpy (Israel)

Developed by Israel Aerospace Industries and used principally by China, India, South Korea, Turkey, and Israel, the Harpy is a “transportable, canister-launched, fire-and-forget, fully autonomous” system,[214] which is also called a “loitering munition.”[215] Harop, a variant of the Harpy developed in 2009, has the capability to “engage time-critical, high-value, relocatable targets,” and is also capable of being launched from both land and naval-based canisters.[216]

Joint Strike Missile (Norway)

The recently-developed Joint Strike Missile builds on the technology of the Naval Strike Missile.[217] Norway has funded the development of the missile, which is manufactured by Kongsberg.[218] It is designed to be integrated into the F-35 Joint Strike Fighter and to attack both naval and land targets.[219] In 2015, the Joint Strike Missile successfully completed a flight test, and further testing and development are scheduled through 2017.[220] The Joint Strike Missile is not capable of choosing an initial target. It is also incapable of locating a hidden target; however, it does include a Global Positioning System/Inertial Navigation System to help it autonomously navigate close to terrain towards a preselected target. It is also programmed to automatically fly in unpredictable patterns to make it harder to intercept.[221]

Stationary Systems, including Close-In Weapon Systems

Aegis Combat System (United States)

The Aegis Combat System, manufactured by Lockheed Martin,[222] is a weapons control system capable of identifying, tracking, and attacking hostile targets.[223] Several countries use the system, including Australia, Japan, Norway, South Korea, Spain, and the United States.[224] Aegis has many more capabilities than a standalone Phalanx CIWS (see below). Like the Phalanx, Aegis relies on radar to identify possibly hostile targets.[225] Unlike the Phalanx, Aegis is capable of engaging over 100 targets simultaneously.[226] The Aegis Combat System is capable of being operated autonomously[227] in terms of the computer interface tracking various targets, determining their threat levels, and, in certain respects, independently determining whether to attack them.

AK-630 CIWS (Russia)

The AK-630 Close-In Weapons System (CIWS) gun turret is “designed to engage manned and unmanned aerial targets, small-size surface targets, soft-skinned coastal targets, and floating mines.”[228] Multiple countries have used the AK-630, including Bulgaria, Croatia, Greece, Lithuania, Poland, Romania, and Ukraine.[229]

Centurion (United States)

The Centurion Weapons System, manufactured by Raytheon, uses a “radar-guided gun” against “incoming rocket and mortar fire.”[230] The Centurion has been described as a “land-based version” of the Phalanx CIWS (see below).[231] In addition to the United States, the United Kingdom also uses the Centurion. The Centurion has the same core capabilities as the Phalanx CIWS, including automatically tracking and destroying incoming fire.[232]

Counter Rocket, Artillery, and Mortar (C-RAM) (United States)

C-RAM, manufactured by Northrop Grumman and Raytheon, is a missile defense system designed to intercept hostile projectiles before they reach their intended targets. Its central components are a version of the U.S. Navy’s Phalanx CIWS (see below) adapted for on-land use and existing radar systems.[233] Australia and the United Kingdom have purchased the system from the United States.[234] C-RAM reportedly has autonomy in its operations in terms of “intercept[ing] incoming munitions at speeds too quick for a human to react.”[235]

GDF (Switzerland)

The Oerlikon GDF is an anti-aircraft cannon initially developed in the late 1950s and currently used by over 30 countries.[236] Once activated, the GDF-005 model is capable, without further human intervention, of operating using radar to identify targets, attacking them, and reloading.[237]

Goalkeeper CIWS (The Netherlands)

The Goalkeeper CIWS, manufactured by the Thales Group, includes a gun with “missile-piercing ammunition” that enables the system to “destroy missile warheads.”[238] The navies of Belgium, Chile, the Netherlands, Portugal, Qatar, South Korea, the United Arab Emirates, and the United Kingdom use the system.[239] According to information provided by Thales, the Goalkeeper system “automatically performs the entire process from surveillance and detection to destruction, including selection of the next priority target.”[240]

Iron Dome (Israel)

The Iron Dome, developed by Rafael Advanced Defense Systems in partnership with Raytheon, seeks to “detect, assess, and intercept incoming rockets, artillery, and mortars.”[241] The Iron Dome has autonomy in some of its functions. It locates potential targets using radar and calculates their expected trajectory. If a rocket would hit a populated area, the Iron Dome is capable of launching a Tamir interceptor missile at the rocket. A human operator must authorize the launch, and she must often make the decision very quickly, sometimes in a matter of minutes.[242] Once a launch is authorized, the computer system will independently aim the Tamir and determine when to launch it. Once close enough to the hostile rocket, the Tamir explodes, destroying both projectiles. The computer algorithm, not the human operator, determines when to detonate the Tamir.
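A schematic rendering of that sequence (radar-derived trajectory, populated-area check, human launch authorization, machine-timed interception) may help fix the division of labor between the operator and the algorithm. The following Python sketch rests entirely on our own simplifying assumptions; every name, shape, and threshold is a placeholder rather than a description of the actual fire-control design.

```python
# Illustrative sketch only: hypothetical rendering of the human/machine division of labor
# described above. All names, geometries, and rules are placeholders.

from dataclasses import dataclass
from typing import List

@dataclass
class Circle:
    x: float
    y: float
    radius: float

    def contains(self, px: float, py: float) -> bool:
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2

def impact_in_populated_area(impact_x: float, impact_y: float,
                             populated_areas: List[Circle]) -> bool:
    """Check the predicted impact point against known populated areas."""
    return any(area.contains(impact_x, impact_y) for area in populated_areas)

def decide_engagement(impact_x: float, impact_y: float,
                      populated_areas: List[Circle],
                      operator_authorizes: bool) -> str:
    # The system flags only threats predicted to land in a populated area.
    if not impact_in_populated_area(impact_x, impact_y, populated_areas):
        return "ignore"
    # A human must authorize the launch; aiming and detonation timing are then automatic.
    return "launch interceptor" if operator_authorizes else "hold"

# Example: a rocket predicted to land inside a populated circle, with authorization given.
print(decide_engagement(1.0, 2.0, [Circle(0.0, 0.0, 5.0)], operator_authorizes=True))
```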

Kashtan CIWS (Russia)

Manufactured by KBP Instrument Design Bureau and used by China, India, and Russia,[243] the Kashtan Close-In Weapon System (CIWS) “can engage up to six targets simultaneously,” and includes gun and missile armaments.[244] The Kashtan system has been described as a human-supervised system with certain autonomous functions.[245]

MANTIS (Germany)

The Modular, Automatic, and Network-Capable Targeting and Interception System, or MANTIS, manufactured by Rheinmetall and used by German forces, is capable of quickly acquiring a target and firing 1,000 rounds a minute.[246] An operator must first activate the MANTIS, but, once activated, “the system is fully automated, although a man in the loop allows for engagement to be overruled if needed.”[247]

MK 15 Phalanx CIWS (United States)

Manufactured by Raytheon[248] and used by at least 25 countries,[249] the MK 15 Phalanx Close-In Weapons System (CIWS) is a “fast-reaction, detect-through-engage, radar guided, 20-millimeter gun weapon system” used to defeat anti-ship missiles (ASMs) and other approaching threats, such as aircraft and unmanned aerial systems (UASs).[250] The Phalanx CIWS can be operated manually or in an autonomous mode.[251] The Phalanx CIWS uses radar to track nearby projectiles, and it is capable of independently determining whether they pose a threat based on their speed and direction.[252] When it is programmed to operate autonomously, the Phalanx CIWS automatically fires at incoming missiles without further human direction.[253]
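The manual/autonomous distinction and the speed-and-direction threat test can be illustrated with a toy model. The threshold and names in the following sketch are hypothetical assumptions, not the system’s actual logic; the sketch only shows how, once the autonomous mode is enabled, firing follows from the threat test without further human direction.

```python
# Illustrative sketch only: hypothetical model of a manual vs. autonomous close-in
# weapon system mode. The speed threshold and names are placeholders, not real parameters.

def is_threat(speed_m_s: float, closing: bool,
              min_threat_speed_m_s: float = 300.0) -> bool:
    """A track counts as a threat only if it is fast enough and closing on the ship."""
    return closing and speed_m_s >= min_threat_speed_m_s

def response(speed_m_s: float, closing: bool, autonomous_mode: bool,
             operator_fires: bool = False) -> str:
    if not is_threat(speed_m_s, closing):
        return "track only"
    if autonomous_mode:
        return "fire"                      # no further human direction once enabled
    return "fire" if operator_fires else "await operator"

# Example: the same track produces different outcomes depending on the selected mode.
print(response(600.0, closing=True, autonomous_mode=True))   # -> "fire"
print(response(600.0, closing=True, autonomous_mode=False))  # -> "await operator"
```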

MK-60 Griffin Missile System (United States)

Used by the U.S. Navy and manufactured by Raytheon, the MK-60 Griffin Missile System enables ships to defend themselves against “small boat threats” by employing a “surface-to-surface missile system.”[254] The MK-60 Griffin Missile System includes at least two variants: Griffin A, an unmanned aircraft system (UAS), and Griffin B, an unmanned aerial vehicle (UAV).[255] The Griffin B model uses GPS guidance to help identify a target, while the human operator is capable of controlling the type of detonation, as well as of changing the target location after the missile has been launched.[256]

Patriot Missile (United States)

The Patriot System, manufactured by Raytheon, is a surface-to-air missile defense system that uses radar to detect and identify hostile incoming missiles and fires missiles to intercept them.[257] Multiple countries use the Patriot system, including Egypt, Germany, Greece, Israel, Japan, Kuwait, the Netherlands, Saudi Arabia, South Korea, Spain, the United Arab Emirates, and the United States.[258] The Patriot’s radar system is responsible for automatically detecting and tracking incoming projectiles. When operating semi-autonomously, the Patriot computer system requires a human operator to authorize a launch.[259] When operating in a mode of heightened autonomy, the Patriot computer itself chooses whether or not to launch, based upon the speed and direction of the approaching projectile.[260]

SeaRAM (United States)

The SeaRAM anti-ship missile defense system, used by the U.S. Navy, combines features of the Phalanx and rolling airframe missile (RAM) guided weapons systems.[261] According to the manufacturer Raytheon, the SeaRAM can “identify and destroy approaching supersonic and subsonic threats, such as cruise missiles, drones, small boats, and helicopters.”[262] The RAM “fire and forget” missile contains some autonomy in its features, including a “dual-mode passive radio frequency system.”[263]

Sentry Robot (Russia)

In 2014, the Russian Strategic Missile Forces announced that they were planning to release armed sentry robots that could exhibit autonomy in identifying and attacking targets.[264] Little else is publicly known about the specific features of these machines because the prototypes have not yet been released. Uralvagonzavod, a Russian defense firm, anticipates that it will be able to demonstrate prototypes by 2017.[265] In December 2015, U.S. Defense Department officials expressed alarm at the development of the “highly capable autonomous combat robots” that would be “capable of independently carrying out military operations.”[266]

Sentry Tech (Israel)

Manufactured by Rafael Advanced Defense Systems, the Sentry Tech system “consists of a lineup of remote-controlled weapon stations integrated with security and intelligence sensors…providing an infiltration alert via ground and airborne sensors” to provide operators with information on whether to fire weapons.[267] The system is mainly used by Israel along the Gaza border.[268] Sentry Tech does not operate with autonomy in its features; rather, it is a remote-controlled weapon station. Once a potential target has been identified, an operator remotely controls the Sentry Tech to track the target and is capable of choosing to attack the target with the Sentry’s machine gun turret.[269]

SGR A1 Sentry Gun (South Korea)

The SGR A1 is a stationary robot that operates a machine-gun turret, originally designed by Korea University and Samsung Techwin. The robot guards the Demilitarized Zone (DMZ) between North and South Korea. It uses an infrared camera surveillance system to identify potential intruders. When an individual comes within ten meters of the robot, the SGR A1 demands the necessary access code and uses voice recognition to determine whether the intruder has provided the correct code. If the intruder fails to do so, the SGR A1 has three options: ring an alarm bell, fire rubber bullets, or fire its turreted machine gun.[270] The SGR A1 normally operates with remote human authorization required to enable it to fire.[271] Central to that authorization decision is whether the target appears to “surrender”: the robot is programmed to recognize that a human with its arms held high in the air is attempting to surrender.[272]
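The graduated responses described above can be summarized, again purely for illustration, as a simple decision tree. Apart from the ten-meter challenge distance and the surrender cue, which restate the description above, everything in the following sketch (the names, the escalation order, and the authorization gate) is a hypothetical assumption of ours.

```python
# Illustrative sketch only: hypothetical decision tree for a sentry-gun challenge sequence.
# The escalation order and authorization gate are placeholders, not the actual logic.

CHALLENGE_DISTANCE_M = 10.0  # restates the ten-meter figure in the text

def sentry_response(distance_m: float, gave_correct_code: bool,
                    arms_raised: bool, operator_authorizes_fire: bool) -> str:
    if distance_m > CHALLENGE_DISTANCE_M:
        return "monitor"                                   # too far away to challenge
    if gave_correct_code:
        return "allow passage"
    if arms_raised:
        return "treat as surrendering; alert operator"     # programmed surrender cue
    if not operator_authorizes_fire:
        return "sound alarm or fire rubber bullets"        # non-lethal options
    return "fire machine gun"                              # lethal response only with remote authorization

# Example: an intruder with arms raised is treated as surrendering, not fired upon.
print(sentry_response(5.0, gave_correct_code=False, arms_raised=True,
                      operator_authorizes_fire=False))
```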

Super aEgis II (South Korea)

The Super aEgis II is a robot sentry with certain automated features, manufactured by DoDAAM and incorporating a machine gun turret; it is used primarily by South Korea in the Demilitarized Zone (DMZ).[273] It uses a combination of digital cameras and thermal imaging to identify potential targets, allowing it to operate in the dark.[274] The Super aEgis II requires a human to authorize any use of lethal force. Before firing, it automatically emits a warning, advising potential targets to “turn back or we will shoot” (in Korean).[275] If the target continues to advance, a remote human operator enters a password to enable the Super aEgis II to shoot the target.[276]

Cyber Capabilities

Stuxnet (United States and Israel)

Reportedly, Stuxnet is a cyberweapon that was used to attack Iran’s nuclear-enrichment operations in 2009 and 2010. The specifics of the malware are uncertain, but it was reportedly developed by the United States and Israel in a mission codenamed “Olympic Games.”[277] Allegedly, Stuxnet caused computers in Natanz (Iran’s nuclear enrichment facility) to malfunction, reprogramming the centrifuges to spin too fast and damaging delicate pieces of the machinery.[278] It is believed to have damaged 1,000 of Iran’s 6,000 centrifuges in 2010.[279] Since it was intended to operate within the Iranian nuclear enrichment facility’s computer network, which is “air-gapped” (disconnected from the internet and other computer networks), Stuxnet was designed to operate, once launched, without (further) external human direction or input.[280] Stuxnet’s code was written to ensure that once connected to the nuclear facility’s computer network, it would begin sabotaging the centrifuge software immediately and continue doing so without further outside guidance.

[39].  Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest & Clifford Stein, Introduction to Algorithms 5 (3rd ed. 2009).

[40].  Id.

[41].  Id.

[42].  Id.

[43].  Robotics Today, RIA News, Spring 1980, at 7, cited in Robotics and the Economy: A Staff Study, Prepared for the Use of the Subcommittee on Monetary and Fiscal Policy of the Joint Economic Committee, Congress of the United States 4 n.3 (1982).

[44].  Robohub Editors, Robohub Roundtable: Why Is It So Difficult to Define Robot?, Robohub, April 29, 2016, http://robohub.org/robohub-roundtable-why-is-it-so-difficult-to-define-robot.

[45].  Neil Richards & William Smart, How Should the Law Think About Robots?, in Robot Law 3, 6 (Ryan Calo, Michael Froomkin & Ian Kerr eds., 2016).

[46].  The now-historical sense of the term “robot” denotes “[a] central European system of serfdom, by which a tenant’s rent was paid in forced labour or service.” See Robot n.1, Oxford English Dictionary (online ed.) (2016).

[47].  Robot n.2, Oxford English Dictionary (online ed.) (2016) (noting that, originally, this sense of the term was used “with reference to the mass-produced workers in Karel Čapek’s play R.U.R.: Rossum’s Universal Robots (1920) which are assembled from artificially synthesized organic material.”).

[48].  See infra Section 2: Examples of Purported Autonomous Weapon Systems.

[49].  See, e.g., Boston Dynamics, Introducing SpotMini, YouTube (June 23, 2016), https://www.youtube.com/watch?v=tf7IEVTDjng [https://perma.cc/LNV5-3SCH] (video of Boston Dynamic’s SpotMini robot, which purports to “perform[] some tasks autonomously, but often uses a human for high-level guidance.”).

[50].  See, e.g., Michael Rubenstein, Alejandro Cornejo & Radhika Nagpal, Programmable Self-Assembly in a Thousand-Robot Swarm, 345 Science 795, 796 (2014) (“We demonstrate a thousand-robot swarm capable of large-scale, flexible self-assembly of two-dimensional shapes entirely through programmable local interactions and local sensing, achieving highly complex collective behavior. The approach involves the design of a collective algorithm that relies on the composition of basic collective behaviors and cooperative monitoring for errors to achieve versatile and robust group behavior, combined with an unconventional physical robot design that enabled the creation of more than 1000 autonomous robots.”). In respect of this large-scale robotic swarm, the extent to which the robots “can be fully autonomous” is measured in terms of being “capable of computation, locomotion, sensing, and communication.” Id. at 796.

[51].  For an excellent analysis of some of the key technologies in relation to AWS, see Peter Margulies, Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts, in Research Handbook on Remote Warfare (Jens David Ohlin ed., forthcoming 2016).

[52].  Yann LeCun, Yoshua Bengio & Geoffrey Hinton, Deep Learning, 521 Nature 436, 436 (2015).

[53].  Id.

[54].  Id.

[55].  Id.

[56].  Cade Metz, In Two Moves, AlphaGo and Lee Sedol Redefined the Future, Wired (March 16, 2016, 7:00 A.M.), http://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future.

[57].  LeCun et al., supra note 52, at 436.

[58].  Id.

[59].  Id.

[60].  Id.

[61].  Id.

[62].  Id.

[63].  Id. (citations omitted).

[64].  Id.

[65].  Id.

[66].  Silver et al., supra note 14, at 488.

[67].  See Christof Koch, How the Computer Beat the Go Master, Scientific American (March 19, 2016), http://www.scientificamerican.com/article/how-the-computer-beat-the-go-master.

[68].  LeCun et al., supra note 52, at 439.

[69].  Silver et al., supra note 14, at 484.

[70].  Silver et al., supra note 14, at 484; on supervised learning, see LeCun et al., supra note 52, at 436–38.

[71].  Silver et al., supra note 14, at 486.

[72].  Id. at 489 (citations omitted).

[73].  Id.

[74].  Gov’t of Switz., Towards a “Compliance-Based” Approach to LAWS [Lethal Autonomous Weapons Systems] 1 (March 30, 2016) (informal working paper), http://www.unog.ch/80256EDD006B8954/(httpAssets)/D2D66A9C427958D6C1257F8700415473/$file/2016_LAWS+MX_CountryPaper+Switzerland.pdf [hereinafter Swiss, “Compliance-Based” Approach].

[75].  Id.

[76].  Id. at 1–2.

[77].  Id. at 2.

[78].  Id.

[79].  Advisory Council on International Affairs, Autonomous Weapon Systems: The Need for Meaningful Human Control 7 (Advisory Report No. 97, 2015), http://aiv-advice.nl/8gr [hereinafter AIV].

[80].  Dutch Government, Response to AIV/CAVV Report, supra note 22.

[81].  Id.

[82].  Id.

[83].  Dutch Government, Response to AIV/CAVV Report, supra note 22.

[84].  Id.

[85].  Id.

[86].  Id.

[87].  Id.

[88].  Id.

[89].  Though the government agrees with the advisory committee “that definitions should be agreed on (in accordance with recommendation no. 4).” Dutch Government, Response to AIV/CAVV Report, supra note 22. As noted above, the Dutch government “reject[ed] outright the possibility of developing and deploying fully autonomous weapons.” Id.

[90].  Gov’t of Fr., Characterization of a LAWS (April 11–15, 2016) (non-paper), http://www.unog.ch/80256EDD006B8954/(httpAssets)/5FD844883B46FEACC1257F8F00401FF6/$file/2016_LAWSMX_CountryPaper_France+CharacterizationofaLAWS.pdf (bold in the original).

[91].  U.S. Dep’t of Def., Dir. 3000.09, Autonomy in Weapon Systems ¶ 1 (Nov. 21, 2012) [hereinafter DOD AWS Dir.].

[92].  Id. at ¶ 2.

[93].  Id.

[94].  Id.

[95].  Id. at 13–14.

[96].  Id. at 14.

[97].  Id.

[98].  Id. at ¶ 4.

[99].  Id.

[100].  Id.

[101].  Id.

[102].  Id.

[103].  Id.

[104].  Id.

[105].  Id.

[106].  Id.

[107].  Id.

[108].  Id.

[109].  Id. at 5–15.

[110].  U.S. Dep’t of Def., Law of War Manual § 6.5.9.1 (2016) (internal reference omitted) [hereinafter Law of War Manual].

[111].  Id.

[112].  Id.

[113].  U.K. Ministry of Def., Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems (2011), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/33711/20110505JDN_211_UAS_v2U.pdf.

[114].  Id. at iii.

[115].  Id. at 2-2.

[116].  Id. at 2-2–2-3.

[117].  Id. at 2-3.

[118].  Id.

[119].  Id. at 2-3 n.5 (giving examples of “take-off and landing; navigation/route following; pre-programmed response to events such as loss of a command and communication link; and automated target detection and recognition”).

[120].  Id. at 2-3.

[121].  Id.

[122].  Id. at 2-3–2-4 (further stating in this connection that “[a]s computing and sensor capability increases, it is likely that many systems, using very complex sets of control rules, will appear and be described as autonomous systems, but as long as it can be shown that the system logically follows a set of rules or instructions and is not capable of human levels of situational understanding, then they should only be considered to be automated”).

[123].  Id. at 2-4.

[124].  Id. at 5-1–5-12. See also infra Section 3.

[125].  Id. at 2-4 (citation omitted).

[126].  Id. at 5-5.

[127].  Id.

[128].  Id.

[129].  Id.

[130].  Id. (noting, however, that this data “link may not need to be continuous”).

[131].  Kevin Bonsor, How Landmines Work, How Stuff Works (June 19, 2001), http://science.howstuffworks.com/landmine.htm.

[132].  Mines, FAS Military Analysis Network (Dec. 12, 1998), http://fas.org/man/dod-101/sys/ship/weaps/mines.htm.

[133].  Landmines, Landmine and Cluster Munition Monitor (2014), http://www.the-monitor.org/en-gb/the-issues/landmines.aspx.

[134].  Sam LaGrone, A Terrible Thing That Waits (Under the Ocean), Popular Science (May 19, 2014), http://www.popsci.com/blog-network/shipshape/terrible-thing-waits-under-ocean.

[135].  Guillermo C. Gaunaurd, Acoustic Mine, Access Science (2014), http://www.accessscience.com/content/006000.

[136].  LaGrone, supra note 134.

[137].  MK 60 Encapsulated Torpedo (CAPTOR), FAS Military Analysis Network (Dec. 13, 1998), http://fas.org/man/dod-101/sys/dumb/mk60.htm.

[138].  Id.

[139].  See, e.g., Ryan Gallagher, Military Moves Closer to Truly Autonomous Drones, Slate (Jan. 16, 2013), http://www.slate.com/blogs/future_tense/2013/01/16/taranis_neuron_militaries_moving_closer_to_truly_autonmoous_drones.html.

[140].  X-47B UCAS Makes Aviation History…Again!, Northrop Grumman, http://www.northropgrumman.com/Capabilities/x47bucas/Pages/default.aspx (last visited Aug. 24, 2016).

[141].  Loitering with Intent, Jane’s Int’l Def. Rev. (Nov. 27, 2015).

[142].  See generally U.S. Dep’t of the Navy, The Navy Unmanned Surface Vehicle Master Plan (2007), http://www.navy.mil/navydata/technology/usvmppr.pdf.

[143].  See, e.g., Autonomous Surface Vehicles Ltd., Unmanned Marine Systems, Unmanned Systems Technology, http://www.unmannedsystemstechnology.com/company/autonomous-surface-vehicles-ltd (last visited Aug. 24, 2016).

[144].  Denise Crimmins & Justin Manley, What Are AUVs and Why Do We Use Them?, National Oceanic and Atmospheric Administration (2008), http://oceanexplorer.noaa.gov/explorations/08auvfest/background/auvs/auvs.html.

[145].  Autonomous Underwater Vehicles, Woods Hole Oceanographic Institution, http://www.whoi.edu/main/auvs (last visited Aug. 24, 2016).

[146].  Bill Carey, Boeing Phantom Works Develops ‘Dominator’ UAV, AIN Online (Nov. 2, 2012), http://www.ainonline.com/aviation-news/defense/2012-11-02/boeing-phantom-works-develops-dominator-uav.

[147].  Id.

[148].  Huw Williams, Boeing to Evaluate CSS for Dominator, Jane’s Int’l Def. Rev. (Oct. 31, 2012).

[149].  Huw Williams, IAI to Offer Broad UGV Portfolio, Jane’s Int’l Def. Rev. (July 8, 2016).

[150].  Damian Kemp, AUSA 2015: G-NIUS Displays Loitering Munition-Equipped Guardium Concept, Jane’s Int’l Def. Rev. (Oct. 13, 2015).

[151].  Huw Williams, G-NIUS Reveals Its Plans for Guardium Development, Jane’s Int’l Def. Rev. (June 25, 2008).

[152].  Id.

[153].  K-MAX, Lockheed Martin, http://www.lockheedmartin.com/us/products/kmax.html (last visited Aug. 24, 2016).

[154].  K-MAX Unmanned Aircraft System, Lockheed Martin, http://www.lockheedmartin.com/content/dam/lockheed/data/ms2/documents/K-MAX-brochure.pdf (last visited Aug. 24, 2016).

[155].  Knifefish Unmanned Undersea Vehicle, General Dynamics Mission Systems, https://gdmissionsystems.com/maritime-strategic/submarine-systems/knifefish-unmanned-undersea-vehicle (last visited Aug. 24, 2016).

[156].  John Reed, Meet the Navy’s Knifefish Mine-Hunting Robot, Defense Tech (Apr. 16, 2012), http://www.defensetech.org/2012/04/16/meet-the-navys-knifefish-mine-hunting-robot/.

[157].  Mission Possible? Fledgling Ship-Based Autonomous Systems Taking Off at Sea, Jane’s Int’l Def. Rev. (Oct. 12, 2015). See also Grace Jean, Bluefin Robotics to Deliver Knifefish Variant to NRL in 2014, Jane’s Int’l Def. Rev. (May 2, 2013).

[158].  Reed, supra note 156.

[159].  James Hardy, China’s Sharp Sword UCAV Makes Maiden Flight, Jane’s Def. Wkly. (Nov. 22, 2015).

[160].  Id.

[161].  Id.

[162].  Kelvin Wong, CASC Showcases New Generation of UAV Weapons, Jane’s Int’l Def. Rev. (Nov. 20, 2014).

[163].  Craig Caffrey, Closing the Gaps: Air Force Modernisation in China, Jane’s Def. Wkly. (Oct. 2, 2015).

[164].  Nicholas de Larrinaga, France Begins Naval Testing of Neuron UCAV, Jane’s Def. Wkly. (May 19, 2016).

[165].  Berenice Baker, Taranis vs. nEUROn – Europe’s Combat Drone Revolution, Airforce-Technology.com (May 6, 2014), http://www.airforce-technology.com/features/featuretaranis-neuron-europe-combat-drone-revolution-4220502.

[166].  David Cenciotti, First European Experimental Stealth Combat Drone Rolled Out: The Neuron UCAV Almost Ready for Flight, The Aviationist (Jan. 20, 2012), https://theaviationist.com/2012/01/20/neuron-roll-out.

[167].  Russia’s Platform-M Combat Robot on Display in Sevastopol, RT News (July 22, 2015, 8:20 AM), https://www.rt.com/news/310291-russia-military-robot-sevastopol.

[168].  Id.

[169].  Franz-Stefan Gady, Meet Russia’s New Killer Robot, The Diplomat (July 21, 2015), http://thediplomat.com/2015/07/meet-russias-new-killer-robot.

[170].  Gary Martinic, Unmanned Maritime Surveillance and Weapons Systems, Australian Naval Institute (July 8, 2014), http://navalinstitute.com.au/unmanned-maritime-surveillance-and-weapons-systems.

[171].  Casandra Newell, Egypt Orders Pluto Plus ROVs, Jane’s Navy Int’l (June 19, 2009).

[172].  Briefing: Rolling in the Deep, Jane’s Def. Wkly. (March 6, 2011).

[173].  Columbia Group to Supply Pluto Plus UUVs to Egyptian Navy, Def. Industry Daily (June 21, 2009), http://www.defenseindustrydaily.com/Columbia-Group-to-Supply-Pluto-Plus-UUVs-to-Egyptian-Navy-05530/.

[174].  Protector Unmanned Surface Vehicle (USV), Israel, Naval-Technology.com, http://www.naval-technology.com/projects/protector-unmanned-surface-vehicle/ (last visited Aug. 24, 2016).

[175].  Huw Williams, Rafael Looks to Extend Protector USV Control Range, Jane’s Int’l Def. Rev. (Aug. 8, 2013).

[176].  Id.

[177].  Richard Scott, New Protector USV Variant Detailed, Jane’s Int’l Def. Rev. (Nov. 12, 2012).

[178].  Rachel Courtland, DARPA’s Self-Driving Submarine Hunter Steers Like a Human, IEEE Spectrum (Apr. 7, 2016), http://spectrum.ieee.org/automaton/robotics/military-robots/darpa-actuv-self-driving-submarine-hunter-steers-like-a-human.

[179].  Scott Littlefield, Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV), Defense Advanced Research Projects Agency, http://www.darpa.mil/program/anti-submarine-warfare-continuous-trail-unmanned-vessel (last visited Aug. 24, 2016).

[180].  Courtland, supra note 178.

[181].  Littlefield, supra note 179.

[182].  Rick Stella, Ghost Ship: Stepping aboard Sea Hunter, the Navy’s Unmanned Drone Ship, Digital Trends (Apr. 11, 2016), http://www.digitaltrends.com/cool-tech/darpa-officially-christens-the-actuv-in-portland.

[183].  Id.

[184].  John Reed, Meet Skat, Russia’s Stealthy Drone, Foreign Policy, June 3, 2013, http://foreignpolicy.com/2013/06/03/meet-skat-russias-stealthy-drone.

[185].  Id.

[186].  Id.

[187].  Andrew White, Unmanned Ambitions: European UAV Developments, Jane’s Def. Wkly. (Oct. 27, 2015).

[188].  Guia Marie Del Prado, This Drone Is One of the Most Secretive Weapons in the World, Tech Insider (Sept. 29, 2015), http://www.techinsider.io/british-taranis-drone-first-autonomous-weapon-2015-9.

[189].  Gallagher, supra note 139.

[190].  Id.

[191].  Id.

[192].  Grace Jean, X-47B Catapults Into New Era of Naval Aviation, Jane’s Int’l Def. Rev. (May 20, 2013).

[193].  Spencer Ackerman, Exclusive Pics: The Navy’s Unmanned, Autonomous “UFO,” Wired (July 31, 2012), https://www.wired.com/2012/07/x47b.

[194].  Jean, supra note 192.

[195].  Ackerman, supra note 193.

[196].  Jerry Hendrix, Put the X-47B Back to Work - As a Tanker, Defense One (June 13, 2016), http://www.defenseone.com/ideas/2016/06/put-x-47b-back-work-tanker/129029.

[197].  Ackerman, supra note 193.

[198].  See, e.g., Andreas Parsch, McDonnell Douglas FGM-77 Dragon, Directory of U.S. Military Rockets and Missiles (June 7, 2002), http://www.designation-systems.net/dusrm/m-77.html.

[199].  Raytheon/Lockheed Martin FGM-148 Javelin Anti-Tank (AT) Missile Launcher (1996), Military Factory (April 15, 2016), http://www.militaryfactory.com/smallarms/detail.asp?smallarms_id=391 [hereinafter Raytheon].

[200].  London Hughes, Reign of Fire: UK RAF Readies for Brimstone 2, Jane’s Int’l Def. Rev. (Sept. 4, 2014).

[201].  Id.

[202].  Nicholas de Larrinaga, Farnborough 2016: Brimstone 2 Enters Service, Begins Apache Trials, Jane’s Def. Wkly. (July 14, 2016).

[203].  Hughes, supra note 200.

[204].  Id.

[205].  Id.

[206].  Brimstone Advanced Anti-Armour Missile, Army-Technology.com, http://www.army-technology.com/projects/brimstone (last visited Aug. 24, 2016).

[207].  Id.

[208].  Raytheon, supra note 199.

[209].  Id.

[210].  Jeremy Binnie, U.S. Clears More Javelins for Qatar, Jane’s Def. Wkly. (May 27, 2016).

[211].  Raytheon, supra note 199.

[212].  Id.

[213].  Id.

[214].  Loitering with Intent, supra note 141.

[215].  Id.

[216].  Id.

[217].  Richard Scott, Joint Strike Missile Starts Flight Test Programme, Jane’s Missiles & Rockets (Nov. 16, 2015), http://www.janes.com/article/55989/joint-strike-missile-starts-flight-test-programme.

[218].  Id.

[219].  Kongsberg’s NSM/JSM Anti-Ship & Strike Missile Attempts to Fit in Small F-35 Stealth Bay, Defense Industry Daily (Nov. 12, 2015), http://www.defenseindustrydaily.com/norwegian-contract-launches-nsm-missile-03417 [hereinafter Kongsberg].

[220].  Franz-Stefan Gady, F-35’s Joint Strike Missile Successfully Completes Flight Test in US, The Diplomat (Nov. 13, 2015), http://thediplomat.com/2015/11/f-35s-joint-strike-missile-successfully-completes-flight-test-in-us.

[221].  Kongsberg, supra note 219.

[222].  Aegis Combat System, Lockheed Martin, http://www.lockheedmartin.com/us/products/aegis.html (last visited Aug. 24, 2016).

[223].  Aegis Weapon System, America’s Navy: United States Navy Fact File (Jan. 5, 2016), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=200&ct=2.

[224].  Paul Scharre & Michael C. Horowitz, An Introduction to Autonomy in Weapons Systems 21 (Feb. 2015) (working paper), http://www.cnas.org/sites/default/files/publications-pdf/Ethical%20Autonomy%20Working%20Paper_021015_v02.pdf.

[225].  Id.

[226].  Id.

[227].  Scharre & Horowitz, supra note 224, at 21.

[228].  Thales Targets AK-630 Users for Fire Control, Jane’s Navy Int’l (Apr. 13, 2005).

[229].  Id.

[230].  Nathan Hodge, Raytheon Ramps Up Centurion Production, Jane’s Def. Wkly. (March 20, 2008).

[231].  Id.

[232].  Centurion C-RAM Counter-Rocket, Artillery, and Mortar Weapon System, Army Recognition, http://www.armyrecognition.com/united_states_us_army_artillery_vehicles_system_uk/centurion_c-ram_land-based_weapon_system_phalanx_technical_data_sheet_specifications_pictures_video.html (last visited Aug. 24, 2016).

[233].  Counter Rocket, Artillery and Mortar (C-RAM), GlobalSecurity.org (July 7, 2011), http://www.globalsecurity.org/military/systems/ground/cram.htm.

[234].  Kristin Horitski, Counter-Rocket, Artillery, Mortar (C-RAM), Missile Defense Advocacy Alliance (March 2016), http://missiledefenseadvocacy.org/missile-defense-systems-2/missile-defense-systems/u-s-deployed-intercept-systems/counter-rocket-artillery-mortar-c-ram.

[235].  Heather M. Roff, Killer Robots on the Battlefield, Slate (April 7, 2016), http://www.slate.com/articles/technology/future_tense/2016/04/the_danger_of_using_an_attrition_strategy_with_autonomous_weapons.html.

[236].  GDF, Weapons Systems.net, http://weaponsystems.net/weaponsystem/EE02%20-%20GDF.html (last visited Aug. 24, 2016).

[237].  Noah Shachtman, Robot Cannon Kills 9, Wounds 14, Wired (Oct. 18, 2007), https://www.wired.com/2007/10/robot-cannon-ki.

[238].  Goalkeeper – Close-In Weapon System, Thales, https://www.thalesgroup.com/en/goalkeeper-close-weapon-system# (last visited Aug. 24, 2016).

[239].  Id.

[240].  Id.

[241].  Iron Dome Weapon System, Raytheon, http://www.raytheon.com/capabilities/products/irondome (last visited Aug. 24, 2016).

[242].  Raoul Heinrichs, How Israel’s Iron Dome Anti-Missile System Works, Business Insider, July 30, 2014, http://www.businessinsider.com/how-israels-iron-dome-anti-missile-system-works-2014-7.

[243].  Scharre & Horowitz, supra note 224, at 21.

[244].  India – Kashtan Self-Defence System for Retrofit, Jane’s Int’l Def. Rev. (May 1, 2001).

[245].  Scharre & Horowitz, supra note 224, at 21.

[246].  Nicholas Fiorenza, Luftwaffe Receives MANTIS C-RAM System, Jane’s Def. Wkly. (Nov. 28, 2012).

[247].  Id.

[248].  Phalanx Close-In Weapon System, Raytheon, http://www.raytheon.com/capabilities/products/phalanx/ (last visited Aug. 24, 2016).

[249]. Scharre & Horowitz, supra note 224.

[250].  MK 15 Close-In Weapons System (CIWS), America’s Navy: United States Navy Fact File (May 9, 2016), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=487&ct=2.

[251].  Phalanx CIWS: The Last Defense, On Ship and Ashore, Defense Industry Daily (Feb. 16, 2016), https://www.defenseindustrydaily.com/phalanx-ciws-the-last-defense-on-ship-and-ashore-02620.

[252].  MK 15 Phalanx Close-In Weapons System (CIWS), FAS Military Analysis Network (Jan. 9, 2003), http://fas.org/man/dod-101/sys/ship/weaps/mk-15.htm.

[253].  Id.

[254].  MK 60 Griffin Missile System, America’s Navy: United States Navy Fact File (Nov. 25, 2013), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=593&ct=2.

[255].  Strike Out: Unmanned Systems Set for Wider Attack Role, Jane’s Int’l Def. Rev. (July 17, 2015).

[256].  Id.

[257].  Andreas Parsch, Raytheon MIM-104 Patriot, Directory of U.S. Military Rockets and Missiles (Dec. 3, 2002), http://www.designation-systems.net/dusrm/m-104.html.

[258].  Scharre & Horowitz, supra note 224, at 21–22.

[259].  Patriot Missiles (PAC-1, PAC-2, PAC-3), Missile Threat (Dec. 22, 2013), http://missilethreat.com/defense-systems/patriot-pac-1-pac-2-pac-3.

[260].  Marshall Brain, How Patriot Missiles Work, How Stuff Works (March 28, 2003), http://science.howstuffworks.com/patriot-missile.htm.

[261].  SeaRAM Anti-Ship Missile Defense System, Raytheon, http://www.raytheon.com/capabilities/products/searam (last visited Aug. 24, 2016).

[262].  Id.

[263].  SeaRAM Anti-Ship Missile Defence System, United States of America, Naval-Technology.com, http://www.naval-technology.com/projects/searam-anti-ship-missile-defence-system (last visited Aug. 24, 2016).

[264].  Patrick Tucker, The Pentagon Is Nervous about Russian and Chinese Killer Robots, Defense One (Dec. 14, 2015), http://www.defenseone.com/threats/2015/12/pentagon-nervous-about-russian-and-chinese-killer-robots/124465.

[265].  Producer of Russia’s Armata T-14 Plans to Create Army of AI Robots, RT International (Oct. 20, 2015, 11:43 P.M.), https://www.rt.com/news/319229-russia-armata-tanks-robots.

[266].  The Pentagon is Growing Concerned Over Development of Russian and Chinese Combat Robots, National Security News (Dec. 28, 2015), http://www.nationalsecurity.news/2015-12-22-the-pentagon-is-growing-concerned-over-development-of-russian-and-chinese-combat-robots.html.

[267].  Sentry-Tech, Rafael Advanced Defense Systems Ltd., http://www.rafael.co.il/Marketing/396-1687-en/Marketing.aspx (last visited Aug. 24, 2016).

[268].  Robin Hughes & Alon Ben-David, IDF Deploys Sentry Tech on Gaza Border, Jane’s Def. Wkly. (June 6, 2007).

[269].  Id.

[270].  Samsung Techwin SGR-A1 Sentry Guard Robot, GlobalSecurity.org (July 7, 2011), http://www.globalsecurity.org/military/world/rok/sgr-a1.htm [hereinafter Sentry Guard].

[271].  Keith Wagstaff, Future Tech? Autonomous Killer Robots Are Already Here, NBC News (May 15, 2014), http://www.nbcnews.com/tech/security/future-tech-autonomous-killer-robots-are-already-here-n105656.

[272].  Sentry Guard, supra note 270.

[273].  Simon Parkin, Killer Robots: The Soldiers That Never Sleep, BBC Future (July 16, 2015), http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep. Other countries, however, such as the United Arab Emirates and Qatar, also use the system.

[274].  Id.

[275].  Id.

[276].  Id.

[277].  Ellen Nakashima & Joby Warrick, Stuxnet Was Work of U.S. and Israeli Experts, Officials Say, Wash. Post (June 2, 2012), https://www.washingtonpost.com/world/national-security/stuxnet-was-work-of-us-and-israeli-experts-officials-say/2012/06/01/gJQAlnEy6U_story.html.

[278].  David E. Sanger, Obama Order Sped Up Wave of Cyberattacks Against Iran, N.Y. Times (May 31, 2012), http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html.

[279].  Kim Zetter, An Unprecedented Look at Stuxnet, the World’s First Digital Weapon, Wired (Nov. 3, 2014), https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet.

[280].  See Dorothy E. Denning, Stuxnet: What Has Changed?, 4 Future Internet 672, 674 (2012).

3 - International Law pertaining to Armed Conflict

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].


Section 3: International Law pertaining to Armed Conflict

In this section, we outline key fields, concepts, and rules relating to international law pertaining to armed conflict. We do so to identify some of the fundamental substantive norms that may be relevant to war algorithms in general and to our three-part accountability approach in particular.[281] State responsibility entails, among other things, identifying the content of the underlying obligation. Individual responsibility entails, among other things, identifying the elements of the crime and the mode of responsibility under international law. Finally, scrutiny governance entails detecting—and potentially surpassing—a baseline of relevant normative regimes, and international law may provide a foundational normative framework concerning regulation of war algorithms.

This section is divided into two parts. We first set the stage with an introduction of state responsibility. Then, in the bulk of the section, we highlight relevant considerations in the substantive law of obligations. Part of the focus is on AWS, since that has been the main framing states have addressed to date. We examine whether a customary international law norm pertaining to AWS in particular has crystallized. We find that one has not, at least not yet. So we then outline some of the main international law rules of a more general nature. We focus here primarily on rules that may relate to AWS, but we also note a number of rules that may (otherwise or also) implicate war algorithms.

With respect to AWS, most commentators and states focus primarily on international humanitarian law and international criminal law. In this section, we raise concerns not only in those fields but also in some of the other regimes of international law that might apply with respect to war algorithms. The section, however, is not meant to be exhaustive.[282] We note that some states—including Switzerland, the United States, and the United Kingdom—have articulated much more detailed analyses of how AWS might relate to a particular rule or field of international law; in light of our interest in discerning state practice, we focus, in part, on those states’ positions and practices.

State Responsibility

State responsibility underpins international law. To grasp the broader accountability architecture governing the design, development, or use (or a combination thereof) of war algorithms, therefore, it is necessary to have at least a basic understanding of the conceptual framework of state responsibility.

Underlying Concepts

The underlying concepts of state responsibility, which are general in character, are attribution, breach, excuses, and consequences.[283] Attribution concerns the circumstances under which an act may be attributed to a state.[284] Breach concerns the conditions under which an act (or omission) may qualify as an internationally wrongful act.[285] Excuses concern the general defenses that may be available to a state in relation to an internationally wrongful act.[286] And consequences concern the forms of liability that may arise in relation to an internationally wrongful act. As James Crawford explains, “[i]ndividual treaties or rules may vary these underlying concepts in some respect; otherwise they are assumed and apply unless excluded.”[287]

Conduct may be attributed to a state under a variety of circumstances. These circumstances include the conduct of any state organ, such as the armed forces.[288] They also include the conduct of a person or entity empowered by the law of the state to exercise elements of governmental authority (so long as the person or entity is acting in that capacity in a particular instance),[289] and the conduct of an organ placed at the disposal of a state by another state so long as that “organ is acting in the exercise of elements of the governmental authority of the State at whose disposal it is placed.”[290] The conduct of these organs, persons, and entities where acting in those capacities shall be considered an act of the state under international law even if that conduct exceeds its authority or contravenes instructions.[291] Furthermore, “[t]he conduct of a person or group of persons shall be considered an act of a State under international law if the person or group of persons is in fact acting on the instructions of, or under the direction or control of, that State in carrying out the conduct.”[292] And “[t]he conduct of a person or group of persons shall be considered an act of a State under international law if the person or group of persons is in fact exercising elements of the governmental authority in the absence or default of the official authorities and in circumstances such as to call for the exercise of those elements of authority.”[293] Also, “[t]he conduct of an insurrectional movement which becomes the new Government of a State shall be considered an act of that State under international law.”[294] And, finally, “[c]onduct which is not attributable to a State under the preceding [circumstances] shall nevertheless be considered an act of that State under international law if and to the extent that the State acknowledges and adopts the conduct in question as its own.”[295]

In general, a consequence of state responsibility is the liability to make reparation.[296] As noted by Pietro Sullo and Julian Wyatt, “[t]he principle that States have to provide reparations to other States to redress wrongful acts they have committed is undisputed under international law and is confirmed by other instruments of international law.”[297] Those authors explain that “[t]he primary function of reparations in international law is the re-establishment of the situation that would have existed if an internationally wrongful act had not been committed and the forms that such reparation may take are various.”[298]

Substantive Law of Obligations

While state responsibility provides the basic framework, the substantive law of obligations fleshes out the relevant rules and procedures. The substantive law of obligations may be found in a relevant branch or branches of public international law. The operation of a specific branch may have implications for particular forms of attribution, breach, excuses, and consequences. IHL, for instance, contains specific provisions on what may constitute a “serious violation” and what consequences may arise with respect to certain rule breaches.

The two sources of the substantive law of obligations most relevant to war algorithms are treaties and customary international law. Treaties are often defined as international agreements between two or more states.[299] And customary international law is often defined as being made up of the “rules of international law that derive from and reflect a general practice accepted as law.”[300] Below, we first explore whether there is a specific customary rule pertaining to AWS in particular. (We focus on AWS here and not on war algorithms more broadly because, to date, the bulk of the state practice pertains to AWS.) Answering in the negative, we then highlight treaty provisions (and corresponding customary rules) of a more general character that may relate to AWS and war algorithms. These provisions stretch across an array of fields of international law—not only IHL and international criminal law, but also space law, telecommunications law, and others.

Customary International Law concerning AWS[301]

Customary international law has two constituent elements: state practice and opinio juris sive necessitatis (shorthand: opinio juris).[302] State practice has recently been formulated as the “conduct of the State, whether in the exercise of executive, legislative, judicial or any other functions of the State.”[303] And opinio juris has recently been formulated as “the belief that [a practice] is obligatory under a rule of law.”[304] In other words, where a state follows a particular practice merely as a matter of policy or out of habit, rather than out of a sense of legal obligation, the requirement of opinio juris is not satisfied.[305]

It seems fair to say that statements made by official state representatives at the 2015 and 2016 Convention on Certain Conventional Weapons (CCW) Informal Meetings of Experts on Lethal Autonomous Weapons Systems could qualify as state practice or opinio juris. (Though those statements probably should not be counted as both.) Such gatherings are “informal implementation mechanism[s],”[306] not formal gatherings of state parties. But these meetings nevertheless involved the sort of public pronouncements that, when conducted by state agents, are capable of comprising evidence of the elements of customary international law. In at least some cases, states’ presentations at meetings of experts have been considered as state practice for the purposes of assessing customary international law.[307] Whether a particular statement is evidence depends in part on its content. For example, a state merely implying or expressing a desire that something become illegal would not be evidence of state practice.[308]

So far, it appears that there is not enough consensus among these statements for any clear rule of customary international law concerning AWS to have emerged through state practice and opinio juris. Be that as it may, the 2016 Meeting revealed relatively wide agreement on some important points. First, nearly all states that explicitly addressed the issue concurred that “fully” autonomous weapon systems do not yet exist (although some maintained that such systems will never exist, whereas others seemed to assume that they inevitably will). Second, there was wide agreement on the need for further discussion or monitoring (or both). Nearly every state mentioned the importance of continuing the dialogue. Third, most states indicated their belief that the current definitions of “autonomous weapon systems” are inadequate, impeding the progress that international society can make in assessing legal concerns.

In terms of taking a concrete position concerning the legality of “lethal autonomous weapons systems,” at the 2016 Meeting the greatest agreement was on the importance or relevance of the review process under Article 36 of the first Additional Protocol to the Geneva Conventions (described in more detail below) and on the need for “meaningful human control” over AWS. In statements at the 2016 Meeting, thirteen states referenced the importance or relevance of Article 36—more than twice as many as at the 2015 Meeting. Also at the 2016 Meeting, thirteen states expressly referenced the need for “meaningful human control.” However, as in 2015, this agreement was undercut by the lack of clarity as to what “meaningful human control” means. (Some states seemed to think that something akin to a human override capability would be sufficient, while others disagreed.[309]) Given the disparities in how different states interpret the concept, some states expressed skepticism about the usefulness of the notion of “meaningful human control.”[310]

When comparing the 2015 and 2016 CCW Informal Meetings of Experts, it is important to bear in mind that the participating states are not identical. The differences between the meetings may simply reflect the altered composition of participating states, not necessarily a coherent shift in position among the same group of states. Nonetheless, the growing number of states that referenced Article 36 reviews might reflect a growing recognition that the category “autonomous weapon systems” involves a broad spectrum of weapons and may require review on a case-by-case basis.

Another consideration in the evaluation of customary international law that may be relevant to AWS concerns “specially affected” states. The basic idea is that the practice of “specially affected” states[311]—that is, states that are “affected or interested to a higher degree than other states with regard to the rule in question”—“should weigh heavily (to the extent that, in appropriate circumstances, it may prevent a rule from emerging).”[312] For example, with respect to the rights associated with a state’s territorial sea, the practices of states with a coastline have been considered as more significant than those of landlocked states.[313] There is some dispute over the determination and role of “specially affected” states in customary international humanitarian law.[314] Yet the position of the majority of commentators seems to be that “[i]f an emerging rule in respect to the use of sophisticated weaponry is considered then the practice of only a few states technically capable of production may suffice.”[315]

If this view is accurate, then the practice of states that are more technologically advanced in the weapons arena—such as the United States, Israel, and South Korea, which are reportedly some of the states furthest along in the development of relevant technologies[316]—would be particularly important for any customary rules about AWS. So far, these and other similar states have largely favored continuing to monitor or discuss the development of such weapons. Indeed, these states mostly refrain from deciding on their per se legality while offering hints that they have apprehensions about bans that they view as potentially premature or restricting civilian technological development.[317]

Yet another line of reasoning suggests that states in whose territory autonomous weapons might be deployed (regardless of whether the territorial state grants consent) may also be considered “specially affected.” Along these lines, Pakistan’s statements about the illegality of lethal autonomous weapons systems would also receive a privileged status.[318] This claim might have some value as lex ferenda (the law as it should be). But, as mentioned above, existing scholarly commentary tends to focus on the weapons-possessors, not on the places where the weapons may be used, as the “specially affected” states.

Summary of States’ Positions as Reflected by Their Statements at the 2015 and 2016 CCW Meetings of Experts

Charts containing the relevant quotations, caveats, and explanations are in Appendices I and II.

Position:[319] Currently unacceptable, unallowable, or unlawful

States reflecting this position: Austria,[320] Chile,[321] Costa Rica, Ecuador, Germany,[322] Mexico, Pakistan, Poland,[323] and Zambia

Position: Need to monitor or continue to discuss

States reflecting this position: Algeria, Austria, Australia, Canada, Chile, Colombia, Croatia, Costa Rica, Czech Republic, Ecuador, Finland, France, Germany, India, Ireland, Israel, Italy, Japan, Korea, Mexico, Morocco, Netherlands, New Zealand, Pakistan, Poland, Sierra Leone, South Africa, Spain, Sri Lanka, Sweden, Switzerland, Turkey, United Kingdom, United States of America, and Zambia

Position: Need to regulate[324]

States reflecting this position: Austria, Chile, Colombia, Czech Republic, Netherlands, Poland, Sri Lanka, Sweden, and Zambia

Position: Need to ban (or favorably disposed towards the idea)[325]

States reflecting this position: Algeria, Bolivia,[326] Chile, Costa Rica, Croatia,[327] Cuba, Ecuador, Egypt,[328] Ghana, Mexico,[329] Nicaragua,[330] Pakistan, Sierra Leone,[331] Palestine,[332] Zambia,[333] and Zimbabwe[334]

Position: Need for meaningful human control

States reflecting this position: Argentina, Austria, Australia, Canada, Chile, Colombia, Croatia, Czech Republic, Denmark, Ecuador, Germany, Greece, Ireland, Korea, Morocco, Netherlands, Pakistan, Poland, South Africa, Sweden, Switzerland, Turkey, United Kingdom, Zambia, and Zimbabwe

Position: AP I Article 36 weapons review (defined below) necessary[335]

States reflecting this position: Australia, Austria, Canada, Cuba, Finland, France, Germany, Netherlands, Sierra Leone, South Africa,[336] Sri Lanka, Sweden, Switzerland, United Kingdom, and Zambia

Position: Refers to legal principles while remaining undecided on per se legality of AWS

States reflecting this position: Algeria, Argentina, Australia, Austria, Canada, Chile, Czech Republic, Denmark, Ecuador, Finland, France, Germany, Greece, India, Ireland, Israel, Italy, Japan, New Zealand, Poland, Sierra Leone, South Africa, Spain, Sri Lanka, Sweden, Switzerland, Turkey, United Kingdom, United States of America, and Zambia

Treaty Provisions and Customary Rules Not Specific to AWS

Having established that a rule of customary international law specific to AWS has not crystallized (at least not yet),[337] we turn to treaty provisions and customary rules that might nonetheless govern the design, development, or use (or a combination thereof) of an AWS or, more generally, a war algorithm. The following section is not meant to be exhaustive but rather to highlight some of the main rules that might be implicated by AWS or war algorithms.

Jus ad Bellum

The jus ad bellum (also known as the jus contra bellum) is the field of public international law governing the threat of force or the use of force by a state in its international relations. Current international law establishes a general prohibition on such threats of force and such uses of force unless undertaken pursuant to a lawful exception to that prohibition. Recognized exceptions include an enforcement action pursuant to a mandate of the U.N. Security Council, an exercise of lawful self-defense conforming to the principles of necessity and proportionality, and lawful consent.[338]

At least two concerns arise with respect to war algorithms as a matter of the jus ad bellum. The first is whether the determination of a breach of a rule of the jus ad bellum is independent of the type of weapon used.[339] For instance, some commentators have debated the use of so-called “predecessors of AWS,” such as UAVs, in the context of obviating threats of terrorism as a matter of the jus ad bellum.[340] Others find those contributions “misguided,”[341] arguing instead that “[t]he use of AWS does not render an operation illegal under rules of ius ad bellum.”[342]

The second concern is whether a particular use of a war algorithm in relation to the use of force in international relations falls under the category of prohibited “force.” The most pertinent analogue might be a computer network attack. Oliver Dörr notes that, so far, such attacks against the information systems of another state have not been treated in practice under the principle of the non-use of force.[343] However, Dörr argues, “current and future State practice may, in this respect, lead to a different interpretation, given the weapon-like destructive potential which some attacks by means of information technology may develop: computer network attacks intended to directly cause physical damage to property or injury to human beings in another State may reasonably be considered armed force.”[344]

International Humanitarian Law

IHL is the primary field of international law governing armed conflict. It applies only in relation to armed conflict. Under international law, armed conflicts may be either international or non-international in character. IHL binds all of the parties to the armed conflict (whether states or non-state organized armed groups), as well as individuals.[345] And, where applicable, the law of neutrality also binds neutral states or other states not party to the armed conflict.[346]

The discussion on AWS and war algorithms enters into a number of preexisting debates in IHL. Those concern such issues as the contours of civilian “direct participation in hostilities,”[347] the geographic and temporal scope of armed conflict, and the relationship of IHL to international human rights law. The AWS discourse to date has largely revolved around IHL provisions concerning the conduct of hostilities, given the focus on autonomous weapon systems. Here we highlight the major considerations concerning AWS as weapons, though we note some other areas of IHL that might be relevant for war algorithms more broadly.

Suppression of Acts Contrary to the Geneva Conventions

As a framework matter, states parties to the Geneva Conventions of 1949 have a general obligation to “undertake to respect and to ensure respect for the ... Convention[s] in all circumstances.”[348] More broadly, each state party “shall take measures necessary for the suppression of all acts contrary to the provisions of the” Geneva Conventions of 1949 other than grave breaches.[349] (States are required to take certain other, more exacting measures with respect to grave breaches, as noted below.)

Classification: Weapons (or Weapon Systems) or Combatants?

An initial issue is whether under IHL the relevant AWS (however defined) is considered a weapon (or a weapon system) or should be classified as something else, such as a combatant. The bulk of states and commentators focus on AWS in the sense of weapons.[350] But others, such as Hin-Yan Liu, raise the prospect that an AWS may be considered a combatant where, for instance, the focus is on the system’s decision-making capability. Liu adopts the U.S. DoD Law of War Working Group’s approach to differentiating between the terms “weapon” and “weapon systems.”[351] The former refers to “all arms, munitions, materiel, instruments, mechanisms, or devices that have an intended effect of injuring, damaging, destroying or disabling personnel or property,” while the latter is more broadly conceived to include “the weapon itself and those components required for its operation, including new, advanced or emerging technologies.”[352]

For Liu, “the capacity for autonomous decision-making pushes these technologically advanced systems to the boundary of the notion of ‘combatant’.”[353] As an indicator of the “potential for the confusion between means and methods of warfare and combatants,” Liu points to the German military manual, which provides that “combatants are persons who may take a direct part in hostilities, i.e., participate in the use of a weapon or a weapon-system in an indispensable function.”[354] Liu notes that “this characterization was used in the context of differentiating categories of non-combatants who are members of the armed forces,” yet his broader point is that “the circularity of this definition illustrates precisely the difficulties associated with defining ‘weapon’ and ‘weapons system’.”[355]

Weapons: Reviews

As noted relatively frequently at the 2016 CCW Informal Expert Meeting on Lethal Autonomous Weapons Systems, Article 36 of Additional Protocol I imposes an obligation on states parties concerning “the study, development, acquisition or adoption of a new weapon, means or method of warfare.” In particular, states parties are obliged to determine “whether [the] employment [of a new weapon, means or method of warfare] would, in some or all circumstances, be prohibited by” AP I or by any other rule of international law applicable to the state party.

With respect to AWS, Christopher Ford argues that “[t]he complexity of the weapons review will be a function of the sophistication of the technology, the geographic and temporal scope of use, and the nature of the environment in which the system is expected to be used.”[356] He puts forward four “best practices” to consider in all such reviews. First, “[t]he weapons review should either be a multi-disciplinary process or include attorneys who have the technical expertise to understand the nature and results of the testing process.” Second, “[r]eviews should delineate the planned and normal circumstances of use for which the weapon was reviewed.” Third, “[t]he review should provide a clear delineation between expected human and system roles.” And fourth, “optimally, the review should occur at three points in time.” Those points are: “when the proposal is made to transition a weapon from research to development”; before the weapon is fielded; and, after fielding, “based upon feedback on how the weapon is functioning.” The latter “would necessitate the establishment of a clear feedback loop which provides information from the developer to the reviewer to the user, and back again.”
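For readers with a technical background, the following minimal sketch (in Python, using entirely hypothetical names such as ReviewStage and WeaponReviewRecord) illustrates one way the three review points and the developer-reviewer-user feedback loop that Ford proposes could be recorded. It is offered only as an illustration of the structure of the proposal, not as a description of any state’s actual review procedure.

```python
# Illustrative sketch only; names and fields are hypothetical.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ReviewStage(Enum):
    RESEARCH_TO_DEVELOPMENT = "transition from research to development"
    PRE_FIELDING = "before the weapon is fielded"
    POST_FIELDING = "after fielding, based on operational feedback"


@dataclass
class ReviewFinding:
    stage: ReviewStage
    circumstances_of_use: str   # planned and normal circumstances reviewed
    human_role: str             # expected human responsibilities
    system_role: str            # expected system (autonomous) functions
    lawful: bool
    notes: str = ""


@dataclass
class WeaponReviewRecord:
    system_name: str
    findings: List[ReviewFinding] = field(default_factory=list)
    feedback_from_users: List[str] = field(default_factory=list)

    def add_finding(self, finding: ReviewFinding) -> None:
        self.findings.append(finding)

    def record_field_feedback(self, observation: str) -> None:
        # Feedback from users flows back to reviewers and developers,
        # prompting the post-fielding review stage.
        self.feedback_from_users.append(observation)
```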

Weapons: Grounds for Unlawfulness

Under IHL, a weapon or its use may be considered unlawful under two sets of circumstances.[357] First, the weapon may be considered unlawful per se (in and of itself), either because the weapon has been expressly prohibited in applicable international law or because the weapon is not capable of being used in a manner that comports with IHL. Second, the weapon may be considered unlawful based on a particular use. In the latter case, only the particular unlawful use, not the weapon itself, would be illegal.

Weapons: Unlawful Per Se Due to Applicable Prohibition

A number of IHL treaties prohibit or restrict the use of certain weapons. The prohibitions in IHL treaties concerning specific weapons that might be relevant to war algorithms or AWS (or both) include:

  • Pursuant to the Hague Convention on the Laying of Automatic Submarine Contact Mines (1907 Hague Convention VIII),[358] it is prohibited to lay unanchored automatic contact mines, except when they are so constructed as to become harmless one hour at most after the person who laid them ceases to control them;[359] it is also prohibited to lay anchored automatic contact mines that do not become harmless as soon as they have broken loose from their moorings and to use torpedoes that do not become harmless when they have missed their mark;[360] finally, it is also forbidden to lay automatic contact mines off the coast and ports of the enemy with the sole object of intercepting commercial shipping.[361]
  • The Convention on the Prohibition of Military or any other Hostile Use of Environmental Modification Techniques (1977)[362] prohibits, among other things, military or other hostile use of environmental modification techniques if these would have widespread, long-lasting, or severe effects as the means of destruction, damage, or injury to another state party.[363]
  • The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be Deemed to be Excessively Injurious or to have Indiscriminate Effects (1980)[364] “facilitates the negotiation of protocols which can address particular weapons or types of weapon technology.”[365] Under the aegis of the CCW, the following weapons prohibitions, among others, have been adopted:
    • Pursuant to the Protocol on Non-Detectable Fragments (Protocol I, 1980),[366] it is prohibited to use any weapon “the primary effect of which is to injure by fragments which in the human body escape detection by x-rays”;[367]
    • Pursuant to the Protocol on Prohibitions or Restrictions on the Use of Mines, Booby-Traps and other Devices (Protocol II, as amended, 1996),[368] it is prohibited to use booby-traps in the form of apparently harmless portable objects specifically designed and constructed to contain explosive material and to detonate when they are disturbed or approached[369] (note that the U.S. DoD Law of War Manual states that “to the extent a weapon system with autonomous functions falls within the definition of a ‘mine’ in the CCW Amended Mines Protocol, it would be regulated as such.”[370]);
    • Pursuant to the Protocol on Prohibitions or Restrictions on the Use of Incendiary Weapons (Protocol III, 1980),[371] it is prohibited to make any military objective located within a concentration of civilians the object of attack by air-delivered incendiary weapons;[372]
    • Pursuant to the Protocol on Blinding Laser Weapons (Protocol IV, 1995),[373] it is prohibited to employ laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision, that is, to the naked eye or to the eye with corrective eyesight devices.[374]
  • The Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction (1997)[375] prohibits the use, development, production, acquisition, stockpiling, retention, or transfer of anti-personnel landmines and provides for their destruction.[376]
  • The Biological Weapons Convention (1972)[377] prohibits the development, production, stockpiling, acquisition, or retention of microbial or other biological agents or toxins where the types or quantities are such that there is no justification for prophylactic, protective, or other peaceful purposes.
  • The Chemical Weapons Convention (1993)[378] prohibits the development, production, acquisition, stockpiling, retention, direct or indirect transfer, or use of chemical weapons, preparing for their use or assisting, encouraging, or inducing any person to do any of these things.
  • The Convention on Cluster Munitions (2008)[379] prohibits the use, development, production, acquisition, stockpiling, retention, and direct or indirect transfer of cluster munitions and forbids assistance, encouragement, or inducement of any of these activities.[380]

As noted above, whether AWS (however defined) should be the subject of a preemptive prohibition remains an area of discussion and debate. As of August 2016, sixteen states have asserted the need for a ban on fully autonomous weapons or have indicated that they are favorably disposed toward the idea.[381]

Some advocates of a preemptive ban have pointed to the development of the Protocol on Blinding Lasers (CCW Protocol IV) as a relevant precedent. However, commentators have noted a number of distinguishing factors between permanently-blinding lasers and AWS. The combined analyses of two scholars suggest that, in general, a weapons ban is more likely to be successful where:

  • The weapon is ineffective;
  • Other means exist for accomplishing a similar military objective;
  • The weapon is not novel: it is easily analogized to other weapons, and its usages and effects are well understood;
  • The weapon or similar weapons have been previously regulated;
  • The weapon is unlikely to cause social or military disruption;
  • The weapon has not already been integrated into a state’s armed forces;
  • The weapon causes superfluous injury or suffering in relation to prevailing standards of medical care;
  • The weapon is inherently indiscriminate;
  • The weapon is or is perceived to be sufficiently notorious to galvanize public concern and spur civil society activism;
  • There is sufficient state commitment in enacting regulations;
  • The scope of the ban is clear and narrowly tailored; or
  • Violations can be identified.[382]

According to one of those scholars, “[o]f these, only a single factor – civil society engagement – supports the likelihood of a successful ban on autonomous weapon systems; the others are irrelevant, inconclusive, or imply that autonomous weapon systems will resist regulation.”[383] The extent to which states agree or disagree with these arguments seems likely to shape whether states will take more concrete steps towards a preemptive ban concerning AWS.

Weapons: Unlawful Per Se — Of a Nature to Cause Superfluous Injury or Unnecessary Suffering

Pursuant to Article 35(2) of AP I, “[i]t is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.”[384] According to Bill Boothby, “[t]his is now a customary rule of law that binds all States in all types of armed conflict.”[385] Accordingly, to avoid being unlawful on this ground, a war algorithm must not be of a nature to cause superfluous injury or unnecessary suffering.

Weapons: Unlawful Per Se — Indiscriminate by Nature

In addition to the customary superfluous-injury principle, “[t]he second, equally important, customary weapons law principle holds that weapons that are indiscriminate by nature are prohibited.”[386] The principle is derived in part from Article 51(4) of AP I. That provision prohibits indiscriminate attacks that are defined as including attacks “which employ a method or means of combat which cannot be directed at a specific military objective; or … which employ a method or means of combat the effects of which cannot be limited” as required by AP I and which consequently are of a nature to strike military objectives and civilians or civilian objects without distinction.[387] Thus, according to Switzerland, “in order for an AWS to be lawful under this rule [prohibiting indiscriminate-by-nature weapons], it must be possible to ensure that its operation will not result in unlawful outcomes with respect to the principle of distinction.”[388]

Weapons: Unlawful by Use — Failure to Conform to Principles Governing Conduct of Hostilities

As noted above, where a weapon is not unlawful per se, it may nonetheless be considered unlawful based on a particular use. In that case, only the particular unlawful use, not the weapon itself, would be illegal. To avoid contravening IHL, in an armed conflict a direct attack using a weapon that is not unlawful per se must comport with IHL principles governing the conduct of hostilities.

The three such principles most frequently cited in discussions of AWS are distinction, proportionality, and precautionary measures. Each of these principles has IHL treaty roots and customary cognates. According to Switzerland, the basic guidelines in relation to AWS are as follows:

Most notably, in order to lawfully use an AWS for the purpose of attack, belligerents must: (1 - Distinction) distinguish between military objectives and civilians or civilian objects and, in case of doubt, presume civilian status; (2 - Proportionality) evaluate whether the incidental harm likely to be inflicted on the civilian population or civilian objects would be excessive in relation to the concrete and direct military advantage anticipated from that particular attack; (3 - Precaution) take all feasible precautions to avoid, and in any event minimize, incidental harm to civilians and damage to civilian objects; and cancel or suspend the attack if it becomes apparent that the target is not a military objective, or that the attack may be expected to result in excessive incidental harm.[389]

With respect to the principle of proportionality and AWS, the U.S. DoD Law of War Manual states that “in the situation in which a person is using a weapon that selects and engages targets autonomously, that person must refrain from using that weapon where it is expected to result in incidental harm that is excessive in relation to the concrete and direct military advantage expected to be gained.”[390]

Regarding precautions in attack, the wording of Article 57(2) of AP I raises the question of whether some of the precautionary-measures obligations laid down therein may be carried out, as a matter of treaty law, only by humans (compared with other obligations therein, which are reposed in the party to the armed conflict). Consider how Article 57(2)(a) of AP I lays down obligations of “those who plan or decide upon an attack.”[391] But Article 57(2)(b)–(c) of AP I frames the obligations, respectively, as “an attack shall be cancelled or suspended”[392] and “effective advance warning shall be given.”[393]

For their part, the authors of the U.S. DoD Law of War Manual emphasize their view that “[t]he law of war rules on conducting attacks (such as the rules relating to discrimination and proportionality) impose obligations on persons. These rules do not impose obligations on the weapons themselves; of course, an inanimate object could not assume an ‘obligation’ in any event.”[394] According to this view, “the obligation on the person using the weapon to take feasible precautions in order to reduce the risk of civilian casualties may be more significant when the person uses weapon systems with more sophisticated autonomous functions.”[395] As an example, the Manual authors state that “such feasible precautions a person is obligated to take may include monitoring the operation of the weapon system or programming or building mechanisms for the weapon to deactivate automatically after a certain period of time.”[396]
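As a purely illustrative aside for technically minded readers, the following minimal sketch (in Python, with hypothetical names) shows the kind of pre-programmed self-deactivation window that the Manual’s example contemplates. It assumes a simple timer-based mechanism and is not drawn from any actual weapon system.

```python
# Illustrative sketch only; the class and method names are hypothetical.
import time
from typing import Optional


class AutoDeactivatingSystem:
    """Hypothetical control wrapper that deactivates itself after a set period."""

    def __init__(self, max_active_seconds: float) -> None:
        self.max_active_seconds = max_active_seconds
        self._activated_at: Optional[float] = None

    def activate(self) -> None:
        # Record the start of the pre-programmed activation window.
        self._activated_at = time.monotonic()

    def is_engagement_permitted(self) -> bool:
        # Engagement is permitted only within the activation window;
        # once the window lapses, the system treats itself as deactivated.
        if self._activated_at is None:
            return False
        return (time.monotonic() - self._activated_at) < self.max_active_seconds


# Example: a system programmed to deactivate automatically after two hours.
system = AutoDeactivatingSystem(max_active_seconds=2 * 60 * 60)
system.activate()
assert system.is_engagement_permitted()
```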

The UK MoD Joint Doctrine Note on unmanned aircraft systems discusses the obligations laid down in Additional Protocol I on the constant care that must be “taken in the conduct of military operations to spare civilians and civilian objects. This means that any system, before an attack is made, must verify that targets are military entities, take all feasible precautions to minimise civilian losses and ensure that attacks do not cause disproportionate incidental losses.”[397] The Joint Doctrine Note authors state that “[f]or automated systems, operating in anything other than the simplest of scenarios, this process will provide a severe technological challenge for some years to come.”[398]

While not focusing on AWS in particular, the UK MoD Joint Doctrine Note also addresses a situation where “a mission may require an unmanned aircraft to carry out surveillance or monitoring of a given area, looking for a particular target type, before reporting contacts to a supervisor when found.”[399] According to the Joint Doctrine Note authors, “[a] human-authorised subsequent attack would be no different to that by a manned aircraft and would be fully compliant with the LOAC [law of armed conflict], provided the human believed that, based on the information available, the attack met LOAC requirements and extant ROE [rules of engagement].”[400] The Joint Doctrine Note authors elaborate this line of reasoning, noting that, “[f]rom this position, it would be only a small technical step to enable an unmanned aircraft to fire a weapon based solely on its own sensors, or shared information, and without recourse to higher, human authority.”[401] This would be entirely legal, the Joint Doctrine Note concludes, “[p]rovided it could be shown that the controlling system appropriately assessed the LOAC principles (military necessity; humanity; distinction and proportionality) and that ROE were satisfied….”[402] Yet the authors highlight a number of additional factors to consider:

In practice, such operations would present a considerable technological challenge and the software testing and certification for such a system would be extremely expensive as well as time consuming. Meeting the requirement for proportionality and distinction would be particularly problematic, as both of these areas are likely to contain elements of ambiguity requiring sophisticated judgement. Such problems are particularly difficult for a machine to solve and would likely require some form of artificial intelligence to be successful.[403]

Finally, in this connection, the Joint Doctrine Note states that “the MOD currently has no intention to develop systems that operate without human intervention in the weapon command and control chain, but it is looking to increase levels of automation where this will make systems more effective.”[404]

According to the U.S. DoD Law of War Manual, “in many cases, the use of autonomy could enhance the way law of war principles are implemented in military operations. For example, some munitions have homing functions that enable the user to strike military objectives with greater discrimination and less risk of incidental harm.”[405] The Manual authors also note that “some munitions have mechanisms to self-deactivate or to self-destruct, which helps reduce the risk they may pose generally to the civilian population or after the munitions have served their military purpose.”[406]

In a similar connection, the UK MoD Joint Doctrine Note on unmanned aircraft systems states that “[s]ome fully automated weapon systems have already entered service, following legal review, and contributing factors – such as required timeliness of response – can make compliance with LOAC easier to demonstrate.”[407] The authors give an example of “the Phalanx and Counter-Rocket, Artillery and Mortar (C-RAM) systems that are already employed in Afghanistan,” arguing that “it can be clearly shown that there is insufficient time for a human initiated response to counter incoming fire.”[408] According to this view, “[t]he potential damage caused by not using C-RAM in its automatic mode justifies the level of any anticipated collateral damage.”[409]

Other potentially relevant conduct-of-hostilities considerations raised in relation to AWS include principles concerning prohibitions on the denial of quarter and on the protection of persons hors de combat (such as the wounded and sick hors de combat). For instance, in relation to denial of quarter, in the view of Switzerland, “[a]ny reliance on AWS would need to preserve a reasonable possibility for adversaries to surrender. A general denial of this possibility would violate the prohibition of ordering that there shall be no survivors or of conducting hostilities on this basis (denial of quarter).”[410]

Stepping back, we see that, where a war algorithm is capable of being used in relation to the conduct of hostilities in connection with an armed conflict, that possible use is already regulated by a number of IHL rules and principles. Few states, however, have offered detailed views on what implications may arise for such uses of war algorithms.

Other Functions in relation to Armed Conflict

IHL governs far more than just weapons and the conduct of hostilities. As the primary normative framework regulating armed conflict, IHL also lays down rules concerning such activities as capture, detention, and transfer of enemies; medical care to the wounded and sick hors de combat; and humanitarian access and assistance to civilian populations in need. Switzerland has noted, for instance, that it is conceivable that AWS “could be used to perform other tasks governed by IHL, such as the guarding and transport of persons deprived of their liberty or tasks related to crowd control and public security in occupied territories.”[411]

Martens Clause

With respect to AWS, the IHL “Martens clause” would, according to Switzerland, afford “an important fallback protection in as much as the ‘laws of humanity and the requirements of the public conscience’ need to be referred to if IHL is not sufficiently precise or rigorous.”[412] Pursuant to this line of reasoning, “not everything that is not explicitly prohibited can be said to be legal if it would run counter [to] the principles put forward in the Martens clause. Indeed, the Martens clause may be said to imply positive obligations where contemplated military action would result in untenable humanitarian consequences.”[413]

Seizure of Private Property Susceptible of Direct Military Use

In a situation of belligerent occupation (a type of international armed conflict), the Occupying Power may seize, among other things, “all kinds of munitions of war … even if they belong to private persons.”[414] Items so seized “must be restored and compensation fixed when peace is made.”[415] With respect to AWS, this provision may implicate, for example, the private property—including the software and hardware components involved in developing AWS—of individuals or commercial entities subject to a belligerent occupation.[416]

International Criminal Law

International criminal law (ICL) is a framework through which individual responsibility arises for international crimes. Under certain circumstances, the design, development, or use (or a combination thereof) of a war algorithm may form part of the conduct underlying an international crime. Recognized categories of international crimes include war crimes, genocide, and crimes against humanity. Each international crime is made up of a prohibited act or acts (the actus reus or actus reī) and the prohibited mental state (the mens rea). War crimes may arise only in relation to armed conflict. Genocide and crimes against humanity may arise outside of situations of armed conflict (though they often do in fact arise in relation to armed conflict). Here, we focus on the Statute of the International Criminal Court (ICC),[417] though we note that other ICL rules—those derived from applicable treaties or customary international law—also may be relevant.

Various states and commentators disagree on whether ICL, especially in relation to war crimes, sufficiently addresses the design, development, and use of AWS. The discussion is hampered by lack of agreement on the definition of AWS, on the technological capabilities of AWS, and on the nature of the relationship between the various actors involved in the development and operation of AWS. These disagreements implicate underlying legal concepts of attribution, control, foreseeability, and reconstructability.

Much of the debate on AWS in relation to ICL revolves around modes of responsibility for international crimes and the mental element of international crimes.[418] Those arguing that ICL is sufficient to address AWS concerns typically emphasize that, ultimately, a single person—often, the commander or superior—may and should be held responsible where, in connection with an armed conflict, the design, development, or use of an AWS gives rise to an international crime.[419] Those arguing that ICL may not be sufficient typically emphasize that the ICL modes of command and superior responsibility are predicated on relationships between humans, not on relationships between humans and machines or constructed systems. (The ICC Statute establishes jurisdiction for individual responsibility only over natural persons, thereby excluding legal entities such as corporations.) They also note that it might not be possible, due to a lack of a temporal nexus to an armed conflict, to prosecute a developer who, before the war began, coded an AWS to function in a way that later gives rise to a war crime.[420] Critics also argue that due to the distributed nature of technical and physical control over an operation involving an AWS, it may not be possible to establish the relevant intent and knowledge of a particular perpetrator. Or, they assert, even if it is possible to establish the mental element, a perpetrator may argue to exclude criminal responsibility due to a mistake of fact, given how complex the operation of an AWS may be.

Arms-Transfer Law

The Arms Trade Treaty of 2013 (ATT)[421] may implicate war algorithms that form part of the conventional arms and certain other items covered by that instrument. It may do so not only with respect to exporting and importing states parties but also in connection with trans-shipment states parties.

The ATT regulates certain activities of the international trade in arms—in particular, “export, import, transit, trans-shipment and brokering,” all of which fall under the umbrella term of “transfer.”[422] Many of the arms and related items covered by the treaty already use war algorithms. In relation to states parties, the treaty applies in respect of all conventional arms within eight categories: battle tanks, armored combat vehicles, large-caliber artillery systems, combat aircraft, attack helicopters, warships, missiles and missile launchers, and small arms and light weapons.[423] The ATT also regulates the export of “ammunition/munitions fired, launched or delivered by”[424] such conventional weapons, as well as of “parts and components where the export is in a form that provides the capability to assemble the [relevant] conventional arms.”[425] (The ATT expressly does “not apply to the international movement of conventional arms by, or on behalf of, a State Party for its use provided that the conventional arms remain under that State Party’s ownership.”[426])

As part of the regulatory system established by the ATT, a state party is prohibited from authorizing any transfer of conventional arms or other covered items in three situations. First, the state party may not authorize such a transfer if it “would violate its obligations under measures adopted by the United Nations Security Council acting under Chapter VII of the Charter of the United Nations, in particular arms embargoes.”[427] Second, an authorization is prohibited if the transfer “would violate its relevant international obligations under international agreements to which it is a Party, in particular those relating to the transfer of, or illicit trafficking in, conventional arms.”[428] And third, an authorization is prohibited if the state party “has knowledge at the time of authorization that the arms or items would be used in the commission of … grave breaches of the Geneva Conventions of 1949, attacks directed against civilian objects or civilians protected as such, or other war crimes as defined by international agreements to which it is a Party.”[429]

Even if the export is not prohibited under one of those stipulations, the ATT imposes an obligation not to authorize the export where the state party determines “that there is an overriding risk of any of the negative consequences” identified in a provision of the treaty.[430] Those consequences include the potential that the conventional arms or other covered items:

(a) would contribute to or undermine peace and security;

(b) could be used to:

(i) commit or facilitate a serious violation of international humanitarian law;

(ii) commit or facilitate a serious violation of international human rights law;

(iii) commit or facilitate an act constituting an offence under international conventions or protocols relating to terrorism to which the exporting State is a Party; or

(iv) commit or facilitate an act constituting an offence under international conventions or protocols relating to transnational organized crime to which the exporting State is a Party.[431]

Also, pursuant to the ATT, each export state party “shall make available appropriate information about the authorization in question, upon request, to the importing State Party and to the transit or trans-shipment States Parties, subject to its national laws, practices or policies.”[432] Finally, each state party “involved in the transfer of conventional arms covered under Article 2 (1) [of the ATT] shall take measures to prevent their diversion.”[433]

The upshot is that, under the ATT, a detailed and somewhat expansive regime exists to regulate the transfer of war algorithms where those algorithms form part of certain conventional weapons and related items.
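For readers who find it helpful to see the structure of that regime laid out schematically, the following minimal sketch (in Python, with hypothetical field names that simplify the treaty text) illustrates the two-step logic described above: the absolute prohibitions on authorization, followed by the “overriding risk” assessment. It is a simplification for exposition only, not a statement of how any state party implements the ATT.

```python
# Illustrative sketch only; fields are hypothetical simplifications of the treaty text.
from dataclasses import dataclass


@dataclass
class TransferAssessment:
    # Step-1 factors: the absolute prohibitions on authorizing a transfer.
    violates_un_security_council_measures: bool      # e.g., arms embargoes
    violates_other_international_obligations: bool
    knowledge_of_use_in_grave_breaches_or_war_crimes: bool
    # Step-2 factor: the exporting state's own risk determination.
    overriding_risk_of_negative_consequences: bool


def export_may_be_authorized(assessment: TransferAssessment) -> bool:
    # Step 1: if any absolute prohibition applies, the transfer may not be authorized.
    if (assessment.violates_un_security_council_measures
            or assessment.violates_other_international_obligations
            or assessment.knowledge_of_use_in_grave_breaches_or_war_crimes):
        return False
    # Step 2: even absent a prohibition, the export may not be authorized where
    # the state party determines there is an overriding risk of the listed
    # negative consequences.
    return not assessment.overriding_risk_of_negative_consequences
```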

International Human Rights Law

While IHL traces its roots to the regulation of interstate wars, international human rights law (IHRL) arose out of an attempt to regulate, as a matter of international law and policy, the relationship between the state—through its governmental authority—and its population. Unlike the relatively narrow war-related field of IHL, IHRL spans a seemingly ever-growing range of dealings an individual, community, or nation may have with the state.

In recent decades, the connection between IHL and IHRL has been the subject of increased jurisprudential treatment and interpretation by states. The precise links between the two branches of public international law have also merited extensive academic commentary. The debate on this relationship is largely over three issues. First, whether IHRL applies extraterritorially, such that states bring all, some, or none of their obligations with them when they fight wars under IHL outside of their territories. Second, whether organized armed groups have IHRL obligations (or, at least, responsibilities). And third, what the apposite interpretive procedure or principle is for discerning the content of a particular right under the relevant framework(s).

With these considerations in mind, IHRL may impose substantive obligations on a state party to an armed conflict concerning the design, development, or use of a war algorithm. These obligations may concern rights ranging, for instance, from the right to privacy to the right not to be arbitrarily deprived of life. That is, of course, not an exhaustive list, but it demonstrates the wide array of rights under IHRL that a war algorithm might implicate. IHRL might also implicate state obligations in relation to the design, development, and use of war algorithms during times of peace.

Law of the Sea

As illustrated in section 2, many of the existing weapon systems with autonomous functions operate in the sea.[434] A number of provisions of the 1982 U.N. Convention on the Law of the Sea (UNCLOS),[435] “many of which are recognised as stating customary international law, … apply to ships with mounted autonomous weapon systems and possibly to independent seafaring autonomous weapon systems.”[436] Among these are the UNCLOS articles outlining “state obligations to protect and preserve both the marine environment generally and specific areas, such as the seabed and ocean floor,”[437] as well as the general prohibition on the threat of force or the use of force.[438] Furthermore, “[i]n addition to providing that the high seas ‘shall be reserved for peaceful purposes’, UNCLOS sets forth a number of prohibitions applicable to ships equipped with autonomous weapon systems that wish to exercise rights to innocent and transit passage.”[439] Finally, “[w]hile automated and autonomous weapon systems have long been used on warships, future autonomous weapon systems may themselves be warships.” Accordingly, “[s]hould they be granted warship status, such systems would gain certain rights and associated obligations.”[440]

Space Law

Guidance concerning the design, use, and liability of war algorithms in outer space in relation to armed conflict may be found in the 1967 Outer Space Treaty,[441] other space-law treaties, and various U.N. General Assembly declarations.[442] Yet “aside from a few plain prohibitions,” “the ‘ceiling’ of space law regulation is sky high … it allows for a wide range of potential extraterrestrial autonomous weapon systems”[443] and of war algorithms more broadly.

One such prohibition—laid down in the Outer Space Treaty, which may be binding as a codification of international law[444]—is on the use of space for destructive purposes. In particular, states parties to the Outer Space Treaty “undertake not to place in orbit around the Earth any objects carrying nuclear weapons or any other kinds of weapons of mass destruction, install such weapons on celestial bodies, or station such weapons in outer space in any other manner.”[445] Other issues raised in this context include jurisdiction and control over objects launched into space, international responsibility for activities in space, and international liability for damage caused by space-based objects.[446]

International Telecommunications Law

Constructed systems that use the electromagnetic spectrum or international telecommunications networks in effectuating war algorithms may be governed in part by telecommunications law. That field is governed primarily through the framework of the International Telecommunication Union (ITU).[447] Scholars have already raised AWS in relation to telecommunications law,[448] including with respect to obligations to legislate against certain “harmful interference,” preserving the secrecy of international correspondence and military radio installations, as well as exceptions concerning certain uses of military installations.[449]

[281].  See infra Section 4.

[282]. One field of international law that we do not address but that might merit attention is international trade law, perhaps especially to the extent that it is used as a framework for developing technology-related standards and procedures at the national and international levels.

[283].  See James R. Crawford, State Responsibility, in Max Planck Encyclopedia of Public International Law ¶ 3 (2006).

[284].  See Draft Articles on Responsibility of States for Internationally Wrongful Acts with Commentaries arts. 4–11, Report of the International Law Commission, 53d Sess., Apr. 23-June 1, July 2-Aug. 10, 2001, U.N. Doc. A/56/10, U.N. GAOR 56th Sess., Supp. No. 10 (2001), http://legal.un.org/ilc/texts/instruments/english/commentaries/9_6_2001.pdf [hereinafter Draft Articles].

[285].  Id. at arts. 12–15.

[286].  Id. at arts. 20–25.

[287].  Crawford, supra note 283, at ¶ 3.

[288].  Draft Articles, supra note 284, at art. 4(1).

[289].  Id. at art. 5.

[290].  Id. at art. 6.

[291].  Id. at art. 7.

[292].  Id. at art. 8.

[293].  Id. at art. 9.

[294].  Id. at art. 10(1); see also id. at art. 10(2)–(3).

[295].  Id. at art. 11.

[296].  See Rosalyn Higgins, Problems and Process: International Law and How We Use It 162 (1995).

[297].  Pietro Sullo & Julian Wyatt, War Reparations, in Max Planck Encyclopedia of Public International Law ¶ 5 (2015) (citing to the 2001 International Law Commission Draft Articles on Responsibility of States for Internationally Wrongful Acts (art. 31 and arts. 34–37)).

[298].  Sullo & Wyatt, supra note 297, at ¶ 5.

[299].  See, e.g., Vienna Convention on the Law of Treaties art. 2(1)(a), May 23, 1969, 1155 U.N.T.S. 133; Restatement (Third) of the Foreign Relations Law of the United States § 301(1) (1987).

[300].  Michael Wood (Special Rapporteur), Int’l Law Comm’n, Second Report on Identification of Customary International Law, at 20, U.N. Doc. A/CN.4/672 (2014), http://daccess-ods.un.org/access.nsf/Get?Open&DS=A/CN.4/672&Lang=E [hereinafter Wood, Second Report]. Though the International Law Commission (ILC) Drafting Committee ultimately did not include this definition in its subsequent report, this exclusion was related to concerns about redundancy, not objections to its content. See Gilberto Saboia (Chairman of the Drafting Committee), Int’l Law Comm’n, Identification of Customary International Law, at 4 (2014), http://legal.un.org/ilc/sessions/66/pdfs/english/dc_chairman_statement_identification_of_custom.pdf.

[301].  Katie King and Joshua Kestin provided extensive research assistance for this section.

[302].  See, e.g., Int’l Law Comm’n, Identification of Customary International Law: Text of the Draft Conclusions Provisionally Adopted by the Drafting Committee, draft conclusion 2, U.N. Doc. A/CN.4/L.869 (2015), https://documents-dds-ny.un.org/doc/UNDOC/LTD/G15/156/93/PDF/G1515693.pdf?OpenElement; Wood, Second Report, supra note 300, at 9, 21–27.

[303].  Int’l Law Comm’n, supra note 302, at draft conclusion 5.

[304].  Wood, Second Report, supra note 300, at 24 (quoting the explanation of various states). See also Michael Wood (Special Rapporteur), Int’l Law Comm’n, Third Report on Identification of Customary International Law, at 13, U.N. Doc. A/CN.4/682 (2015), https://documents-dds-ny.un.org/doc/UNDOC/GEN/N15/088/91/PDF/N1508891.pdf?OpenElement [hereinafter Wood, Third Report]; Int’l Law Comm’n, supra note 302, at draft conclusion 9 (“The requirement, as a constituent element of customary international law, that the general practice be accepted as law (opinio juris) means that the practice in question must be undertaken with a sense of legal right or obligation”).

[305].  See, e.g., id.

[306].  U.N. Office at Geneva, 2010 Meeting of Experts, Disarmament, http://www.unog.ch/80256EE600585943/(httpPages)/701141247B6C85E7C12576F200587847?OpenDocument (last visited March 12, 2016).

[307].  See, e.g., Customary International Humanitarian Law 1338, 3164 (Jean-Marie Henckaerts and Louise Doswald-Beck eds., 2005), https://www.icrc.org/eng/assets/files/other/customary-international-humanitarian-law-ii-icrc-eng.pdf (citing remarks at a meeting of experts as evidence related to state practice on deception and a Colombian Ministry of Foreign Affairs working paper presented at a meeting of experts as evidence of state practice). The same International Committee of the Red Cross (ICRC) study also took statements at CCW conferences as evidence of state practice, both at official States Parties conferences, see, e.g., id. at 1965 (citing China’s remarks about blinding lasers; however, since these remarks were made a year after China adopted the protocol banning blinding lasers and are generally an endorsement of that protocol, it is not clear what added value they have), and in preparatory or implementation gatherings, see, e.g., id. at 1966 (noting India’s statement at the Third Preparatory Committee for the Second Review Conference of States Parties to the CCW that it “fully supported the idea of expanding the scope of the CCW to cover armed internal conflicts”). Even if one is not willing to accept the ICRC’s assessment of what qualifies as state practice, see, e.g., John Bellinger & William Haynes, A U.S. Government Response to the International Committee of the Red Cross Study Customary International Humanitarian Law, 89 Int’l Rev. Red Cross 443, 444–46 (2007), https://www.icrc.org/eng/assets/files/other/irrc_866_bellinger.pdf, international tribunals like the International Tribunal for the Former Yugoslavia have accepted states’ remarks before the United Nations General Assembly as state practice, see Prosecutor v. Tadic, Case No. IT-94-1-I, Decision on Defence Motion for Interlocutory Appeal on Jurisdiction, para. 120 (Int’l Crim. Trib. for the Former Yugoslavia Oct. 2, 1995), as well as statements before national legislatures, see id. at para. 100. Statements at meetings of experts are similarly public, recorded, and made by state representatives in an official capacity. Further, at least one International Court of Justice judge has also declared that “the positions taken up by the delegates of States in international organizations and conferences…naturally form part of State practice.” Barcelona Traction, Light and Power Company Limited (Belgium v. Spain), Judgment, 3 I.C.J. Rep 286, para. 302 (Feb. 5, 1970) (Ammoun, J., separate opinion), http://www.icj-cij.org/docket/index.php?p1=3&p2=3&case=50&p3=4. Statements at the Meeting of the Experts would fulfill that description.

[308].  See Henrik Meijers, On International Customary Law in the Netherlands, in On the Foundations and Sources of International Law 77, 85 (Ige F. Dekker & Harry H.G. Post eds., 2003) (A “declaration by a state which implies no more than that it is in favor of a proposed rule becoming law, does not contribute to the formation of…custom” because “[i]f one declares to be in favour of something happening in [the] future, that ‘something’ has not yet taken place in the present, and no present practice (relating to that something) can have been formed yet”).

[309].  See, e.g., Statement of Israel, Characteristics of LAWS (Part II), http://www.unog.ch/80256EDD006B8954/(httpAssets)/AB30BF0E02AA39EAC1257E29004769F3/$file/2015_LAWS_MX_Israel_characteristics.pdf (“During the discussions, delegations have made use of various phrases referring to the appropriate degree of human involvement in respect to LAWS. Several States mentioned the phrase ‘meaningful human control’. Several other States did not express support for this phrase. Some of them thought that it was too vague, and the alternative phrasing ‘appropriate levels of human judgment’ was suggested. We have also noted, that even those who did choose to use the phrase ‘meaningful human control’, had different understandings of its meaning. Some of its proponents had in mind human control or oversight of each targeting action in real-time, while others thought that, at least from a perspective of ensuring compliance with IHL, the preset by a human of certain limitations on the way a lethal autonomous system would operate, may also amount to meaningful human control. In our view, it is safe to assume that human judgment will be an integral part of any process to introduce LAWS, and will be applied throughout the various phases of research, development, programming, testing, review, approval, and decision to employ them. LAWS will not actually be making decisions or exercising judgment by themselves, but will operate as designed and programmed by humans”).

[310].  See Appendices I and II.

[311].  North Sea Continental Shelf Cases (Germany v. Denmark; Germany v. Netherlands), Judgment, 1969 I.C.J. Rep. 3, para. 73 (Feb. 1969) (“State practice, including that of States whose interests are specially affected, should have been both extensive and virtually uniform in the sense of the provision invoked;—and should moreover have occurred in such a way as to show a general recognition that a rule of law or legal obligation is involved”).

[312].  Wood, Second Report, supra note 300, at 38–39 (internal citations omitted).

[313].  See, e.g., Yoram Dinstein, The Interaction between Customary International Law and Treaties 288–89 (2007).

[314].  See, e.g., Ward Ferdinandusse, Book Review, 53 Netherlands Int’l L. Rev. 502, 504 (2006) (“it may be asked whether there are specially affected states in IHL at all. It is easy to see how the concept of specially affected states is useful when discussing delimitation of the continental shelf: some states have a continental shelf to delimit while other states do not and, one may assume, never will. There is an aspect of permanency there which is lacking in IHL. Belligerent states, one may hope, are the peace makers of tomorrow. Occupied states may be the occupiers of tomorrow. Customary rules develop slowly and should be stable enough to withstand such changing of positions. Moreover, one would think that it is irreconcilable with the very character of IHL to grant specially affected status to the manufacturers of certain dubious weapons, just as it would have been problematic at least to grant South-Africa specially affected status with regard to the question of apartheid”). See also Richard Price, Emerging Customary Norms and Anti-Personnel Landmines, in The Politics of International Law 106, 120–21 (Christian Reus-Smit ed., 2004); Jean-Marie Henckaerts, Customary International Humanitarian Law: Taking Stock of the ICRC Study, 78 Nordic J. Int’l L. 435, 446 (2010).

[315].  Harry H.G. Post, The Role of State Practice in the Formation of Customary International Humanitarian Law, in On the Foundations and Sources of International Law 129, 142 (Ige F. Dekker & Harry H.G. Post eds., 2003). See also Dinstein, supra note 313, at 293; Customary International Humanitarian Law, supra note 307, at xliv–xlv (“Concerning the question of the legality of the use of blinding laser weapons, for example, ‘specially affected States’ include those identified as having been in the process of developing such weapons”). Cf. H.W.A. Thirlway, International Customary Law and Codification: An Examination of the Continuing Role of Custom in the Present Period of Codification of International Law 71–72 (stating that, in relation to laws for outer space, specially affected states would be those “actually or potentially in control of the economic and scientific assets necessary for the exploration of space,” and that it might even be unnecessary to look beyond those states to determine the relevant state practice).

[316].  See Kenneth Anderson & Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can, American University Washington College of Law Research Paper No. 2013-11, at 1 (2013), http://ssrn.com/abstract=2250126.

[317].  At least in 2015, Germany somewhat differentiated itself, drawing a “red line” concerning the need for meaningful human control and calling for states to “take care to closely monitor the development and introduction of any new weapon system to guarantee that there will be no transgression.”

[318].  This sort of argument would not be too far removed from some states’ claims before the International Court of Justice (ICJ) that the potentially world-affecting damage nuclear weapons could create should mean that all states qualify as specially affected, see Hugh Thirlway, The Sources of International Law, in International Law 91, 99 (Malcolm D. Evans ed., 2014). The ICJ did not weigh in on the validity of this claim. Still, if anything, the sort of argument outlined above would be less extreme than the nuclear-weapons claim, since, it seems, AWS might be capable of being more geographically limited than nuclear weapons. That argument would nevertheless rely on states believing that they could accurately predict where AWS would be used, if the customary law were to precede their development.

[319].  When states advocate the need to regulate AWS, the need for meaningful human control, or the need for an Article 36 review, they are not necessarily suggesting that any of these steps, on their own, would adequately address the issues presented by autonomous weapons. Rather, states often presented these actions as necessary but not sufficient steps toward dealing effectively with AWS. Additionally, this table is not intended to and does not necessarily represent a comprehensive, accurate list of all states’ current positions on AWS. One reason is that the table represents states’ positions as assessed through both the 2015 and 2016 meetings; a state’s position could have changed between 2015 and 2016, but both the 2015 and 2016 positions would be listed here. Also, the table generally excludes states’ remarks outside of the written statements they offered at these two meetings. There are several exceptions, which are noted through footnotes.

[320].  In this context, Austria concludes only that the technology as it currently stands is unlawful; though concerned about future versions also being unlawful, Austria does not categorically state that lawfulness would be impossible.

[321].  Chile’s position on this issue is slightly ambiguous. Some of its statements clearly indicate that it believes that fully autonomous weapons are unlawful, but some of its other statements seem to suggest that those weapons should simply be regulated. (This raises the question whether Chile believes that AWS would become lawful if their use were simply regulated.)

[322].  In this context, Germany never explicitly uses the word “unlawful.” Nevertheless, Germany has given strong indications that it considers the use of lethal force by fully autonomous weapon systems to be illegitimate. Not only does Germany explicitly state that it is “not acceptable” for a weapon system to have control over life and death, but Germany portrays its current stance as a repetition of the stance that it took in last year’s meeting. (In last year’s meeting, Germany stated that it considered AWS to be unlawful.)

[323].  In this context, Poland indicated only that a fully autonomous weapon system would not be allowed, but it was very careful to indicate that it believes that such weapon systems do not yet exist. Therefore, Poland does not believe that any autonomous weapon systems, as they currently exist, are unlawful. But its Human Rights and Ethical Issues Statement does suggest that if a fully autonomous weapon system were to be developed in the future, it would “not be allowed.” (As with Germany, however, Poland does not explicitly use the word “unlawful,” though Poland’s statement that fully autonomous weapon systems would “not be allowed” seems to suggest that such systems would indeed be illegal.)

[324].  Scholarly debates about AWS are often framed as a choice between regulation and a ban. However, when states at the 2015 and 2016 CCW Informal Meeting of Experts discussed regulation, it is not clear that they were implying that regulation was to be preferred over a ban; often, those endorsing regulation seemed to conceive of it as an alternative to doing nothing, not as an alternative to a ban.

[325].  The Holy See has also spoken in favor of a ban (for example, in a written statement for the 2015 CCW Meeting of Experts). However, as it is not a state, see Gerd Westdickenberg, Holy See, in Max Planck Encyclopedia of Public International Law (James R. Crawford, ed., 2006) (“The Holy See is neither a State nor only an abstract entity like an international organization….The international personality the Holy See enjoys as a unique entity and the sovereignty it exercises are different from those of other subjects of international law, be it States, international organizations like the International Committee of the Red Cross (ICRC), or [other] subject[s] of international law…[Its] international legal personality can best be defined as being ‘sui generis’”), the Holy See has not been included in this table or any of the ones that follow in Appendices I and II.

[326].  Bolivia did not express its desire for a ban via a written statement at the 2015 or 2016 CCW Meeting of Experts, but it did reportedly offer an oral statement favoring a ban at the 2015 CCW Meeting of Experts. See Campaign to Stop Killer Robots, Report on Activities: Convention on Conventional Weapons Second Informal Meeting of Experts on Lethal Autonomous Weapons Systems 25 (2015), http://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_CCWx2015_Report_4June2015_uploaded.pdf (“Bolivia made a late statement—its first on the matter—that called for a ban on fully autonomous weapons systems, citing concerns that the right to life should not be delegated and doubts that international humanitarian and human rights law is sufficient to deal with the challenges posed”). Bolivia’s position has been included here to more fully represent states’ attitudes on an important issue.

[327].  In 2015, Croatia did not necessarily endorse a ban on all AWS but seemed to at least indicate it would be favorably inclined toward efforts to ban any AWS that did not involve “meaningful human control”; Croatia also repeatedly indicated that the option of a ban or moratorium should still be on the table. See Appendices I and II for more.

[328].  At the 2015 or 2016 CCW Meeting of Experts, Egypt did not express its desire for a ban via a written statement. It has, however, orally indicated a preference for a moratorium on the development of AWS until more debate has occurred. See Appendices I and II for more. Egypt’s position has been included here to more fully represent states’ attitudes on an important issue.

[329].  At the 2015 or 2016 CCW Meeting of Experts, Mexico did not express its desire for a ban via a written statement. It did, however, orally indicate a preference for a ban during the 2016 meeting. See Appendices I and II for more. Mexico’s position has been included here to more fully represent states’ attitudes on an important issue.

[330].  At the 2015 or 2016 CCW Meeting of Experts, Nicaragua did not express its desire for a ban via a written statement. It did, however, orally indicate a preference for a ban during the 2016 meeting. See Appendices I and II for more. Nicaragua’s position has been included here to more fully represent states’ attitudes on an important issue.

[331].  Sierra Leone did not explicitly call for a ban but is seemingly against any AWS not under human control. See Appendices I and II for more.

[332].  At the 2015 or 2016 CCW Meeting of Experts, Palestine did not express its desire for a ban via a written statement (it did offer a written statement for the 2015 meeting, but it is not available online, and no press reports cite that 2015 statement as announcing Palestine favored a ban). Palestine did, however, orally indicate a preference for a ban during the 2015 CCW meeting (not the Meeting of Experts). See Appendices I and II for more. Palestine’s position has been included here to more fully represent states’ attitudes on an important issue.

[333].  Zambia believes a prohibition on the use of AWS should be “on the CCW agenda.” See Appendices I and II for more.

[334].  At the 2015 or 2016 CCW Meeting of Experts, Zimbabwe did not express its desire for a ban via a written statement. It did, however, orally indicate a preference for a ban during the 2016 CCW meeting (not the Meeting of Experts). See Appendices I and II for more. Zimbabwe’s position has been included here to more fully represent states’ attitudes on an important issue.

[335].  Other states spoke about the importance of proper national review but did not necessarily frame it in terms of an international legal obligation or, more specifically, an obligation derived from Article 36 of AP I.

[336].  South Africa’s position on Article 36 is somewhat ambiguous. South Africa does not explicitly state that an Article 36 review is necessary, nor does South Africa discuss how it would plan to implement it. But South Africa’s General Statement directly quotes the language of Article 36 when discussing compliance with international law, strongly implying that an Article 36 review is important or relevant to assessing the legality of AWS.

[337].  This conclusion aligns with the statement in the U.S. DoD Law of War Manual that “[t]he law of war does not prohibit the use of autonomy in weapon systems.” Law of War Manual, supra note 110, at § 6.5.9; see also id. at § 6.9.5.2 (“The law of war does not specifically prohibit or restrict the use of autonomy to aid in the operation of weapons”).

[338].  On Security Council authorizations and self-defense, see, e.g., Oliver Dörr, Use of Force, Prohibition of, in Max Planck Encyclopedia of Public International Law ¶¶ 38, 40–42 (2015).

[339].  See Markus Wagner, Autonomous Weapon Systems, in Max Planck Encyclopedia of Public International Law ¶ 11 (2016) (arguing that “[w]hether a breach of a rule of ius ad bellum has occurred is a determination that is independent from the type of weapon that has been used….”).

[340].  Id.

[341].  Id.

[342].  Id.

[343].  See, e.g., Dörr, supra note 338, at ¶ 12.

[344].  Id. (citations omitted).

[345].  See, e.g., Jann Kleffner, supra note 17.

[346].  See, e.g., Michael Bothe, Law of Neutrality, in The Handbook of International Humanitarian Law (Dieter Fleck ed., 3rd ed. 2013).

[347].  See generally the Forum in 42 N.Y.U. J. Int’l Law & Pol. 3, 637 et seq. (2010).

[348].  See Geneva Convention for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field art. 1, Aug. 12, 1949, T.I.A.S. 3362 [hereinafter GC I]; Geneva Convention for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea art. 1, Aug. 12, 1949, T.I.A.S. 3363 [hereinafter GC II]; Geneva Convention Relative to the Treatment of Prisoners of War art. 1, Aug. 12, 1949, T.I.A.S. 3364 [hereinafter GC III]; Geneva Convention Relative to the Protection of Civilian Persons in Time of War art. 1, Aug. 12, 1949, T.I.A.S. 3365 [hereinafter GC IV].

[349].  GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146.

[350].  On the conflation between weapons and “means and methods of warfare,” at least in the context of Article 36 AP I weapons reviews, see generally Hin-Yan Liu, Categorization and Legality of Autonomous and Remote Weapons Systems, 94 Int’l Rev. Red Cross 627, 636 (2012).

[351].  Id. at 635.

[352].  Id. (citations omitted).

[353].  Id. at 636 (italics added).

[354].  Id. at 637.

[355].  Id.

[356].  Lt. Col. Christopher M. Ford, Stockton Center for the Study of International Law, Remarks at the 2016 Informal Meeting of Experts, at 4, UN Office in Geneva (April 2016), http://www.unog.ch/80256EDD006B8954/(httpAssets)/D4FCD1D20DB21431C1257F9B0050B318/$file/2016_LAWS+MX_presentations_challengestoIHL_fordnotes.pdf; see also U.K. Ministry of Def., supra note 113, at 5-3 (discussing factors concerning legal review and situation awareness of manned vs. unmanned aircraft systems).

[357].  This sub-section on weapons and IHL draws extensively on William H. Boothby, Prohibited Weapons, in Max Planck Encyclopedia of Public International Law (2015).

[358].  Convention No. VIII Relative to the Laying of Automatic Submarine Contact Mines, Oct. 18, 1907, 36 Stat. 2332.

[359].  Id. at art. 1.

[360].  Id.

[361].  Id. at art. 2.

[362].  Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, May 18, 1977, 31 U.S.T. 333, 1108 U.N.T.S. 15.

[363].  See id. at art. 1. See also AP I, supra note 12, at arts. 35(3) and 55.

[364].  Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Oct. 10, 1980, 1342 U.N.T.S. 137, 19 I.L.M. 1523 [hereinafter CCW].

[365].  Boothby, supra note 357, at ¶ 16.

[366].  Protocol [I to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects] on Non-Detectable Fragments, Oct. 10, 1980, 1342 U.N.T.S. 168.

[367].  Id.

[368].  Protocol [II to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects] on Prohibitions or Restrictions on the Use of Mines, Booby-Traps and Other Devices, Oct. 10, 1980, 1342 U.N.T.S. 168.

[369].  Id. at arts. 2–3.

[370].  Law of War Manual, supra note 110, at § 6.5.9.2 (internal reference omitted).

[371].  Protocol [III to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects] on Prohibitions or Restrictions on the Use of Incendiary Weapons art. 2(2), Oct. 10, 1980, 1342 U.N.T.S. 171.

[372].  Id. at art. 2.

[373].  Protocol [IV to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects] on Blinding Laser Weapons art. 1, Oct. 13, 1995, 1380 U.N.T.S. 370.

[374].  Id. at art. 1.

[375].  Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction, Sept. 18, 1997, 2056 U.N.T.S. 211, 242.

[376].  Id. at art. 1.

[377].  Convention on the Prohibition of the Development, Production, and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction, art. 1, Apr. 10, 1972, 1015 U.N.T.S. 163.

[378].  Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction art. 1, Jan. 13, 1993, 1974 U.N.T.S. 317.

[379].  Convention on Cluster Munitions art. 1, May 30, 2008, 48 I.L.M. 357.

[380].  Id. at art. 1.

[381].  See supra Section 3: International Law pertaining to Armed Conflict — Customary International Law concerning AWS.

[382].  Rebecca Crootof, The Killer Robots Are Here: Legal and Policy Implications, 36 Cardozo L. Rev. 1837 (2014); Sean Watts, Regulation-Tolerant Weapons, Regulation-Resistant Weapons and the Law of War, 91 Int’l L. Stud. 541 (2015).

[383].  Rebecca Crootof, Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems, Lawfare (Nov. 24, 2015), https://www.lawfareblog.com/why-prohibition-permanently-blinding-lasers-poor-precedent-ban-autonomous-weapon-systems.

[384].  AP I, supra note 12, at art. 35(2) (emphasis added). See also Regulations Concerning the Laws and Customs of War on Land art. 23(e), annexed to Hague Convention (IV) Respecting the Laws and Customs of War on Land, Oct. 18, 1907, T.S. 539 [hereinafter 1907 Hague Regulations].

[385].  Boothby, supra note 357, at ¶ 10; see also, e.g., Law of War Manual, supra note 110, at § 6.5.9.2 (stating that “[i]n addition, the general rules applicable to all weapons would apply to weapons with autonomous functions. For example, autonomous weapon systems must not be calculated to cause superfluous injury ….”) (internal reference omitted).

[386].  Boothby, supra note 357, at ¶ 11; see also, e.g., Law of War Manual, supra note 110, at § 6.5.9.2 (stating that “[i]n addition, the general rules applicable to all weapons would apply to weapons with autonomous functions. For example, autonomous weapon systems must not … be inherently indiscriminate.”) (internal reference omitted).

[387].  AP I, supra note 12, at art. 51 (emphasis added).

[388].  Swiss, “Compliance-Based” Approach, supra note 74, at 3.

[389].  Id.

[390].  Law of War Manual, supra note 110, at § 6.5.9.3 (internal reference omitted).

[391].  AP I, supra note 12, art. 57(2)(a).

[392].  Id. at art. 57(2)(b).

[393].  Id. at art. 57(2)(c).

[394].  Law of War Manual, supra note 110, at § 6.5.9.3 (italics added).

[395].  Id.

[396].  Id.

[397].  U.K. Ministry of Def., supra note 113, at 5-2.

[398].  Id.

[399].  Id. at 5-4.

[400].  Id.

[401].  Id.

[402].  Id.

[403].  Id.

[404].  Id.

[405].  Law of War Manual, supra note 110, at § 6.5.9.2.

[406].  Id. (internal reference omitted).

[407].  U.K. Ministry of Def., supra note 113, at 5-2.

[408].  Id.

[409].  Id.

[410].  Swiss, “Compliance-Based” Approach, supra note 74, at 3 (citation omitted).

[411].  Id. (citation omitted) (noting that “[a]dditional specific rules need to be taken into consideration if AWS were to be relied for such activities”).

[412].  See, e.g., id. at 4 (citing to CCW, supra note 364, at preamble and AP I, supra note 12, at art. 1(2), and noting that “[i]n its 1996 Advisory Opinion on the legality of the threat or use of nuclear weapons, the International Court of Justice held that the clause ‘proved to be an effective means of addressing the rapid evolution of military technology’ (§78)”).

[413].  Id. at 3 (citing, respectively, AP I, supra note 12, at art. 57(2)(a); GCs I–IV, supra note 348, at arts. 49, 50, 129, 146; and AP I, supra note 12, at Section III).

[414].  1907 Hague Regulations, supra note 384, at art. 53(2).

[415].  Id.

[416].  According to the U.S. DoD Law of War Manual, “[p]rivate property susceptible of direct military use includes cables, telephone and telegraph facilities, radio, television, telecommunications and computer networks and equipment, motor vehicles, railways, railway plants, port facilities, ships in port, barges and other watercraft, airfields, aircraft, depots of arms (whether military or sporting), documents connected with the conflict, all varieties of military equipment (including that in the hands of manufacturers), component parts of, or material suitable only for use in, the foregoing, and, in general, all kinds of war material.” Law of War Manual, supra note 110, at § 11.18.6.2, citing to U.S. Dep’t of the Army, The Law of Land Warfare, 1956 FM 27-10 ¶410a (Change No. 1 1976).

[417].  Rome Statute of the International Criminal Court, July 17, 1998, 2187 U.N.T.S. 90 [hereinafter ICC Statute].

[418].  Id. at arts. 25(3) and 28.

[419].  See, e.g., Dutch Government, Response to AIV/CAVV Report, supra note 22.

[420].  See Tim McFarland & Tim McCormack, Mind the Gap: Can Developers of Autonomous Weapons Systems Be Liable for War Crimes?, 90 Int’l L. Stud. 361 (2014).

[421].  Arms Trade Treaty, Apr. 2, 2013, U.N. Doc. A/RES/67/234B [hereinafter ATT].

[422].  Id. at art. 2(2).

[423].  Id. at art. 1.

[424].  Id. at art. 3.

[425].  Id. at art. 4.

[426].  Id. at art. 2(3).

[427].  Id. at art. 6(a).

[428].  Id. at art. 6(b).

[429].  Id. at art. 6(c).

[430].  Id. at art. 7(3).

[431].  Id. at art. 7(1).

[432].  Id. at art. 7(6).

[433].  Id. at art. 11(1).

[434].  The vast majority of scholars and states addressing AWS in relation to international law focus only on IHL and ICL; Rebecca Crootof has provided one of the most expansive analyses of various fields of public international law that might be implicated by AWS. Rebecca Crootof, The Varied Law of Autonomous Weapon Systems, in NATO Allied Command Transformation, Autonomous Systems: Issues for Defence Policy Makers 98, 109 (Andrew P. Williams & Paul D. Scharre eds., 2015) [hereinafter Crootof, Varied]. With respect to the law of the sea, space law, and international telecommunications law, we draw in part on her analysis.

[435].  United Nations Convention on the Law of the Sea, Dec. 10, 1982, 1833 U.N.T.S. 397 [hereinafter UNCLOS].

[436].  Crootof, Varied, supra note 434, at 109 (citation omitted).

[437].  Id. (citing to UNCLOS, supra note 435, at arts. 192–196).

[438].  Id. (citing to UNCLOS, supra note 435, at art. 301).

[439].  Id. at 110 (citing to UNCLOS, supra note 435, at art. 88).

[440].  Id. (referring to the definition of “warship” in UNCLOS, supra note 435, at art. 29). Id. at 110 n.41.

[441].  Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, Jan. 27, 1967, 18 U.S.T. 2410, 610 U.N.T.S. 205 [hereinafter OST].

[442].  Crootof, Varied, supra note 434, at 111.

[443].  Id.

[444].  Id. (citation omitted).

[445].  OST, supra note 441, at art. IV.

[446].  Crootof, Varied, supra note 434, at 112 (citations omitted).

[447].  See Dietrich Westphal, International Telecommunication Union, in Max Planck Encyclopedia of Public International Law (2014).

[448].  Crootof, Varied, supra note 434, at 113–114.

[449].  Id. at 114 (citation omitted).


4 - Accountability Axes

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].


Section 4: Accountability Axes

In this section, we outline three accountability axes that might be relevant to regulating war algorithms. We do not claim to be exhaustive but rather aim to provide examples of key accountability avenues. We adapt an accountability approach focusing on the regulation of war algorithms along three axes: state responsibility for internationally wrongful acts, individual responsibility under international law for international crimes, and a wider notion of scrutiny governance.[450]

Below, for each axis, we highlight existing and possible accountability actors, forums, and mechanisms. Some of these axes utilize existing formal legal regimes; others depend more on “soft law” or less formal codes, standards, guidelines, and the like. Regulation may arise, for instance, through direct or intermediary modes, as well as by setting rules to allocate risk and by defining rules of private interaction.[451] As noted above, we focus on international law in part because it is the only normative framework that purports, in key respects but with important caveats, to be universal and uniform.

State Responsibility

Along this axis, accountability is a matter of state responsibility arising out of acts or omissions involving a war algorithm where those acts or omissions constitute a breach of a rule of international law. State responsibility entails discerning the content of the rule, attributing the relevant conduct to a state, determining available excuses (if any), and imposing measures of remedy.

Measures of Remedy

A range of consequences may arise where a war algorithm involved in an internationally wrongful act, not otherwise excused, is attributable to a state. In this sub-section, we highlight a main form of liability: war reparations. But we also note some of the other existing mechanisms and avenues through which state responsibility may be pursued, such as diplomatic channels, arbitration, judicial proceedings, weapons-control regimes, and an IHL fact-finding body.

War Reparations to a State

As noted above, in general a consequence of state responsibility is the liability to make reparation. War reparations constitute one such form of liability. They “involve the transfer of legal rights, goods, property and, typically, money from one State to another in response to the injury caused by the use of armed force.”[452] Historical practice favors, “[i]n the specific case of war reparations, … the use of restitution, monetary compensation, territorial guarantees, guarantees of non-repetition, and symbolic reparations.”[453]

The Hague Convention concerning the Laws and Customs of War on Land (1907) and Additional Protocol I “establish an inter-State duty to pay compensation when a belligerent party violates the provisions of the Convention and ... Protocol I.”[454] Thus, with respect to who can claim reparations, “a State’s duty to provide inter-State reparations after the commission of an internationally wrongful act is certain.”[455]

As a practical matter, war reparations are still the exception rather than the norm. When they do occur, the most common form of reparations, according to an assessment of practice up to 1995, was a lump sum at the end of the war.[456] Nonetheless, pursuant to Security Council resolutions, the United Nations Compensation Commission (UNCC) was established to address damages incurred in the course of the Iraq-Kuwait War (1990–91).[457] And the governments of Eritrea and Ethiopia established a commission to deal with reparations claims concerning an armed conflict between those two states.[458]

Where a state party does not fulfill the obligation concerning suppression of acts contrary to the Geneva Conventions, another state party may also, for instance, pursue diplomatic channels to encourage the non-complying state to fulfill the obligation. That other state party may, where available, also pursue arbitration (if the transgressing state agrees) or institute judicial proceedings (if a relevant tribunal can assert its jurisdiction over the transgressing state).

Weapons Monitoring, Inspection, and Verification Regimes

Weapons regimes may establish consequences for certain violations. Arms-control instruments range, in general, “from mere reporting duties and routine inspections (monitoring) to more invasive ad hoc inspections, sometimes so-called ‘challenge inspections’ at the request of a Member State (verification), up to compulsive methods in case of a determined breach (enforcement).”[459] Two of the main challenges of effective arms-control law are weak verification and limited enforcement mechanisms.

As noted above, the Arms Trade Treaty—which might cover various war algorithms—lays down a regulatory framework concerning the transfer of certain conventional weapons and related items. Through activities such as reporting and inspections, the Organization for the Prohibition of Chemical Weapons (OPCW) supervises the Chemical Weapons Convention. That treaty also provides for a challenge inspection procedure, “which is considered one of the most extensive verification procedures in the law of arms control, but has never been used, mainly due to political constraints.”[460] In comparison, the supervisory mechanism of the Biological Weapons Convention is weaker, consisting mainly of review conferences every five years.

International Fact-Finding Commission

Where certain rules of IHL are breached, the International Fact-Finding Commission (IFFC) established in Additional Protocol I may help provide measures of remedy. With respect to states parties to that treaty, the IFFC is competent, first, to enquire into any facts alleged to be a grave breach of, or other serious violation of, the Geneva Conventions of 1949 and Additional Protocol I.[461] Second, the IFFC is competent to “facilitate, through its good offices, the restoration of an attitude of respect for” the Geneva Conventions of 1949 and Additional Protocol I.[462] Where relevant, the design, development, or use of a war algorithm might implicate either or both of these competences. However, as a practical matter, it bears emphasis that the IFFC has never been utilized for either competence.

Other Avenues

Certain other state accountability avenues may arise even where the design, development, or use of a war algorithm attributable to a state does not constitute an internationally wrongful act. Two such measures to consider are reparations to an individual pursuant to international human rights law (IHRL), and a highly contentious form of domestic tort liability.

Reparations to an Individual

As noted above, it is clear that a state may be provided reparations after the commission of an internationally wrongful act, including an applicable violation of IHL. Yet it is far less clear whether an individual right to reparation for victims of gross human rights violations has crystallized.[463] The U.N. General Assembly has adopted a resolution on the matter.[464] But that resolution has been characterized as falling into a category often referred to as “soft law”: while “[t]hese documents do not have the formal status of legally binding instruments such as treaties, … they nonetheless reflect principles of justice and serve as tools for victim-oriented policies and practices at national and international levels.”[465]

Nonetheless, to the extent it is applicable in relation to the design, development, or use of a war algorithm, IHRL may provide grounds for an individual to seek redress and reparation. The relevant violation would not be an internationally wrongful act vis-à-vis another state (or states) but rather a violation of an applicable provision of IHRL vis-à-vis an individual. For instance, “[t]he case-law developed in the jurisprudence of the [European Court of Human Rights] and the Inter-American Court of Human Rights … demonstrates an increasing readiness of these international (regional) adjudicative bodies to afford substantial reparative justice to victims, in particular in cases of gross violations of human rights.”[466]

Tortious Liability

Another state accountability avenue might arise in relation to a highly disputed form of tortious liability:[467] pecuniary compensation under domestic tort law for death or injury to the person, or damage to or loss of tangible property, caused by an act or omission which is alleged to be attributable under domestic law to a state other than the forum state and which involved a war algorithm. That compensation may be available only so long as the act or omission occurred in whole or in part in the territory of the forum state and so long as the author of the act or omission was present in the forum-state territory at the time of the act or omission.[468]

This notion of tortious liability requires discerning the content of applicable domestic law (including the relevant standard of care), attributing responsibility for the resulting harm to a state other than the forum state, confirming the presence of the author of the act in the forum state, determining the availability of immunity claims (if any), and imposing pecuniary compensation. This contested form of liability is derived from a purported “territorial tort” restriction to the applicability of state immunity found in the 2004 United Nations Convention on Jurisdictional Immunities of States and Their Property (UNCSI), which is not yet in force, and its customary analogue (if any).[469]

Individual Responsibility under International Law

As noted in section 3, a natural person may be held responsible under international law for committing an international crime connected with a war algorithm, including certain war crimes and crimes against humanity. To impose that liability, the judicial body would need to be able to understand the underlying war algorithm so as to adjudicate the legal parameters applicable in relation to it. Also as noted above, commentators have raised a number of concerns as to whether international law concerning individual responsibility for international crimes is suitable to address AWS, especially in relation to certain modes of responsibility, such as command and superior responsibility, and to mental elements (especially the requisite knowledge and intent).

Along this axis, we describe international and domestic avenues through which an individual may be held responsible for committing an international crime. We also briefly highlight another avenue—extraterritorial jurisdiction not in respect of internationally defined crimes—through which an individual may be held responsible in relation to the design, development, or use of a war algorithm.

International Crimes

International Criminal Tribunals

As noted in section 3, where it has jurisdiction, an international criminal court or tribunal may impose individual responsibility for the commission of international crimes. The ICC—which operates pursuant to the principle of complementarity to national jurisdictions—is the first such court established on a permanent basis. Numerous war crimes under the ICC’s jurisdiction may in principle be committed through the design, development, or use of war algorithms. 

Suppression of Grave Breaches

Under the Geneva Conventions of 1949, states parties are obliged “to enact any legislation necessary to provide effective penal sanctions for persons committing, or ordering to be committed, any of the grave breaches of the” relevant instrument.[470] In principle, a war algorithm may be involved in the commission of such a breach. Each state party is obliged “to search for persons alleged to have committed, or to have ordered to be committed, such grave breaches, and shall bring such persons, regardless of their nationality, before its own courts.”[471] And each state party “may also, if it prefers, and in accordance with the provisions of its own legislation, hand such persons over for trial to another” state party, so long as that party has “made out a prima facie case.”[472]

Universal Jurisdiction

While “[s]tates generally do not have jurisdiction to define and punish crimes committed abroad by and against foreign nationals,” pursuant to universal jurisdiction “any State has the right to try a person with regard to certain internationally defined crimes.”[473] Originally, this “jurisdiction was recognized only with respect to piracy on the high seas.”[474] But “[a]s the human rights content of international law expanded, universal adjudicative jurisdiction also expanded to embrace universally condemned crimes and may now apply to slavery, genocide, torture, and war crimes.”[475] Such “[u]niversal jurisdiction to try these offences is not limited to situations in which they are committed on the high seas or in other areas outside the territory of any State, but generally confers no enforcement power to enter foreign territory or board a foreign ship without consent.”[476] Nonetheless, “[a]lthough the laws of each State define the offences over which its courts may exercise universal jurisdiction, the scope of legislative jurisdiction is limited by the fact that the offences subject to universal jurisdiction are determined by treaty and international law.”[477] As a practical matter, to date the exercise of domestic universal jurisdiction has arguably been the strongest form (even if not very strong overall) of enforcement of accountability for war crimes.

Other Avenues

Certain other individual accountability avenues might arise even where the design, development, or use of a war algorithm attributable to a natural person does not give rise to individual responsibility under international law for an international crime. One such avenue to consider is extraterritorial jurisdiction, which more and more states are turning to in order to protect their perceived interests.

Extraterritorial Jurisdiction

Extraterritorial jurisdiction refers “to the competence of a State to make, apply and enforce rules of conduct in respect of persons, property or events beyond its territory.”[478] Traditionally, the exercise of extraterritorial jurisdiction was viewed as available only in exceptional circumstances.[479] But today, more and more states are creating such regimes.

The background idea is that, with respect to conduct occurring beyond a state’s territory, the state perceives the need to protect not only its own interests but also the interests of international society.[480] States have perceived those interests in such areas as anti-trust and competition law, anti-terrorism law, and anti-bribery law.

Certain characteristics of war algorithms—including that some of the underlying technologies are developed by transnational corporations and the modularity of the technology—might lead states to perceive strong interests in making, applying, and enforcing war-algorithm rules of conduct beyond their territories. Where states do so, it may be important to be attentive to the distinctions between the different ways that states may exercise extraterritorial jurisdiction. That is because some of those methods “are more likely to conflict with the competence of other States and therefore more likely to raise questions as to their compatibility with international law.”[481]

Scrutiny Governance

Along this axis, accountability is framed in terms of the extent to which a person or entity is and should be subject to, or should exercise, forms of internal or external scrutiny, monitoring, or regulation concerning a war algorithm.[482] Notably, scrutiny governance does not hinge on—but might implicate—potential and subsequent liability or responsibility.[483] The basic notion is that there are a number of avenues—other than or alongside legal responsibility—to hold oneself or others answerable for the exercise of war-algorithm power and authority. We highlight only a few of the various possible approaches: independent monitoring, norm (including legal) development, non-binding resolutions and codes of conduct, normative design of technical architectures, and community self-regulation.

Independent Monitoring

A vast array of institutions independently monitor compliance with law and regulations that may be relevant to war algorithms. Those institutions include bodies within international organizations, treaty-based weapons-control regimes, and non-governmental organizations. Note, however, that the existence of all of these institutions does not absolve any state from its independent duty to ensure its own compliance with international law in general and with IHL in particular. While the competence of these institutions is not explicitly stated in war-algorithm terms, their general purviews would encompass monitoring of at least certain elements of the development and operation of those algorithms. Included among those institutions are:

  • The U.N. Security Council;[484]
  • The U.N. General Assembly;[485]
  • The U.N. Secretariat, including the Secretary-General,[486] the Office of the United Nations High Commissioner for Human Rights (OHCHR), and the U.N. Office for Outer Space Affairs (UNOOSA);
  • The Human Rights Council, including Special Procedures (Special Rapporteurs);[487]
  • Treaty-based human-rights and weapons-monitoring bodies and mechanisms;[488] and
  • Non-governmental organizations.[489]

Norm Development (including of International Law)

Norms may be developed through formal or informal mechanisms.

With respect to international law, for instance, the U.N. “General Assembly shall initiate studies and make recommendations for the purpose of … encouraging the progressive development of international law and its codification.”[490] The U.N. General Assembly established its Legal Committee (Sixth Committee), which “is responsible for the UN General Assembly’s role in encouraging the codification and progressive development of international law.”[491] The workings of the Sixth Committee led to the establishment of the International Law Commission (ILC).[492] According to its Statute, the ILC is expected to bring onto its agenda only topics that are “necessary and desirable”[493]—or, “[i]n other words, only topics ‘ripe’ for codification and progressive development of international law are to be the subject of its work.”[494] This criterion leaves some room for the ILC to consider various topics as possible candidates for its work. Broadly speaking, “a topic may be considered ripe if the subject-matter regulates the essential necessities of States or the wider needs and/or contemporary realities of the international community or is one held central to the authority of international law, notwithstanding any existing disagreements among States on the topic.”[495] In principle, war algorithms could arguably fit that definition.

Norms and accompanying standards relevant to war algorithms may also be developed at levels other than international law. Pursuant to their legislative jurisdiction, states may promulgate municipal laws.[496] Moreover, whether pursuant to domestic law or regulations or to less formal bases, agencies, regulatory bodies, and other standards-setting entities—governmental or non-governmental—may articulate guidelines, standards, and the like.[497]

Non-Binding Resolutions and Declarations, and Interpretative Guides

While not laying down legal obligations, non-binding resolutions and declarations, as well as codes of conduct or informal manuals, may also contribute to the development of the normative framework concerning war algorithms. This has already occurred in relation to AWS: a 2014 resolution of the European Parliament “[c]alls on the High Representative for Foreign Affairs and Security Policy, the Member States and the Council to ... ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.”[498]

Moreover, at the 2016 CCW Informal Meeting of Experts, the Netherlands called “for the formulation of an interpretative guide that clarifies the current legal landscape with regard to the deployment of autonomous weapons.”[499] In recent years, a number of “Manuals”[500] as well as an “Interpretive Guide”[501] on international law pertaining to armed conflict in relation to certain thematic areas have been drafted. It is unclear whether the initiative called for by the Netherlands will align with these approaches or might take another form. But based on the initial articulation, it appears that the focus of the called-for “interpretative guide” will be on clarifying currently applicable law concerning the deployment of autonomous weapons.

Normative Design of Technical Architectures

Programmers, engineers, and others involved in the design, development, and use of war algorithms might take diverse measures to embed normative principles into those systems. The background idea is that code and technical architectures can function like a kind of law. Maximizing the auditability of that code—especially in light of legally-relevant concepts such as attribution and reconstructability—might help strengthen external and internal scrutiny mechanisms.

To increase the likelihood of their being adopted, such normative-design approaches would likely need to be devised in a manner that gives due consideration to the tension between, on one side, external transparency, and, on the other, a state’s interest in protecting classified technologies as well as the intellectual-property interests associated with those technologies. In addition, those thinking through ways to pursue war-algorithm accountability along this avenue should critically assess the experience of attempting to regulate cyber operations and cyber “warfare.” So far, those areas have eluded a universal normative regime. Like war algorithms, cyber operations and cyber “warfare” raise concerns regarding intellectual-property interests, the modularity and dual-use nature of the technologies, transparency with external actors due to classification regimes, and maintaining a qualitative edge.

Designing “Morally Responsible Engineering” and a “Partnership Architecture”

Some governments have recognized the importance of incorporating moral and ethical considerations into the engineering of systems that might be relevant to war algorithms.

In an October 2015 report on AWS, a Dutch “advisory committee advocates taking the interaction between humans and machines into account sufficiently in the design phase of autonomous weapon systems.”[502] Furthermore, “[i]n light of the importance of attributing responsibility and accountability, the [advisory committee] believes that, when procuring autonomous weapons, the government should ensure that the concept of morally responsible engineering is applied during the design stage.” For their part, the Ministries of Foreign Affairs and Defense consider that “recommendation to be an affirmation of existing policy,”[503] and emphasize that “the government and several of its knowledge partners are studying this theme.”[504]

Among the research programs funded by the Dutch government was a project entitled “Military Human Enhancement: Design for Responsibility and Combat Systems,” which was carried out by Delft University of Technology. One of the articles published as part of that project put forward the idea of a “partnership architecture.”[505] Two components undergird this idea. First, a mechanism is put forward through which both parties—the human and the machine—“do their job concurrently. In this way, each actor arrives at an own interpretation of the world thereby constructing a human representation of the world and a machine representation of the world at the same time.”[506] Second, work agreements—“explicit contracts between the human and the machine about the division of work”—are used to “minimize[] the automation-human coordination asymmetry because working agreements define an a priori explicit contract [regarding] what [to] and what not to delegate[] to the automation.”[507]

The main idea is that the resulting “partnership architecture can protect a commitment to responsibility within the armed forces.”[508] On one hand, “operators will be responsible for the terms of their working agreements with their machine.”[509] And on the other, working agreements may help “ensure that operators receive the morally relevant facts needed to make decisions that comply with IHL, as well as key moral principles.”[510]
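To make the notion of a working agreement more concrete, the following sketch (in Python, and purely illustrative) shows one way an explicit, machine-readable contract on the division of work between a human operator and an automated component might be represented and checked. The class, field, and task names are our own assumptions adopted for purposes of illustration; they are not drawn from the Delft project or from any fielded system.

# Hypothetical sketch of a machine-readable "working agreement" between a human
# operator and an automated component. All names and task labels are illustrative
# assumptions; they are not drawn from the Delft project's actual design.
from dataclasses import dataclass, field
from typing import Set


@dataclass(frozen=True)
class WorkingAgreement:
    """An explicit, a priori contract on the division of work."""
    delegated_to_automation: Set[str] = field(default_factory=set)  # tasks the machine may perform
    reserved_to_human: Set[str] = field(default_factory=set)        # tasks only the human may perform

    def may_delegate(self, task: str) -> bool:
        """Return True only if the task was explicitly delegated and not reserved."""
        return task in self.delegated_to_automation and task not in self.reserved_to_human


# Example: sensing and estimation are delegated; any release of force is reserved.
agreement = WorkingAgreement(
    delegated_to_automation={"collect_sensor_data", "estimate_collateral_harm"},
    reserved_to_human={"authorize_engagement"},
)

for task in ["collect_sensor_data", "authorize_engagement"]:
    actor = "automation" if agreement.may_delegate(task) else "human operator"
    print(f"{task}: handled by {actor}")

Expressing the agreement in an explicit, inspectable form of this kind is what allows the coordination asymmetry to be narrowed a priori and, later, reconstructed for purposes of attribution.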

Coding Law

Software and hardware engineers, roboticists, and others involved in the development of war algorithms may consider taking a page from the internet playbook. The internet protocol suite (also known as TCP/IP) is a core set of protocols that define the way in which the internet functions. A fundamental choice at the heart of the internet’s architecture concerned defining the flow of information by allowing ordinary computers connected to the internet not only to receive but also to send information. This was neither a necessary nor inevitable feature of the internet. (And whether one sees it today as a feature or a bug depends on one’s vantage point.) The suite of protocols could have been designed in other ways—for instance, the system could have distributed packets from a centralized hub, precluding individual computers from communicating directly with each other.

Lawrence Lessig argues that, through that structuring, TCP/IP embeds some regulatory—perhaps normative—principles in the design of the system.[511] Put another way, in defining the way in which computers could share data and communicate with one another, TCP/IP also forecloses alternative methods of communication, thereby imposing, if implicitly, regulations on the way in which the internet functions. In this way, code is a kind of law because it enables computers to do certain things (such as exchange packets of information) but, in doing so, also indirectly defines and narrows the specific way in which that exchange is accomplished. (It merits mention that code functions as a type of law in this conception irrespective of whether that was the intention of the system’s designers.)

At the 2016 CCW Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Danièle Bourcier imported Lessig’s general idea into the specific discussion on AWS, raising the notion of designing “humanitarian law” into the relevant technical system.[512] What this might mean in practice is unclear. But in principle it might concern the design of the underlying algorithms as well as the constructed systems through which those algorithms are effectuated.
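One way to picture what designing law into a technical system could involve, offered here only as an illustrative sketch under our own assumptions and not as a description of any actual or proposed system, is to express a normative rule as a hard precondition in code, so that the architecture itself forecloses an action taken without, for example, a recorded and attributable human authorization:

# Illustrative sketch only: a normative rule expressed as a hard precondition in code.
# The rule, names, and structure are hypothetical assumptions, not an actual system design.

class NormativeConstraintError(Exception):
    """Raised when a coded normative precondition is not satisfied."""


def release_action(action: dict) -> str:
    """Permit an action to proceed only if coded preconditions are met.

    Here the embedded "rule" is that every action must carry an explicit,
    attributable human authorization; absent one, the code itself blocks execution.
    """
    if not action.get("human_authorization_id"):
        raise NormativeConstraintError("no attributable human authorization recorded")
    return f"action {action['name']} released under authorization {action['human_authorization_id']}"


print(release_action({"name": "reposition_sensor", "human_authorization_id": "op-042"}))
try:
    release_action({"name": "reposition_sensor"})
except NormativeConstraintError as err:
    print("blocked by coded constraint:", err)

In Lessig’s terms, such a constraint operates as a kind of law not because anyone is later held responsible but because the code makes the non-compliant pathway unavailable in the first place.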

Auditable Algorithms

Making war algorithms more auditable may help foster accountability over them. “Audit logs,” for instance, record activity that takes place in an information architecture. In the U.S., national-security fusion centers “are supposed to employ audit logs that record the activity taking place in the information-sharing network, including ‘queries made by users, the information accessed, information flows between systems, and date- and time-markers for those activities.’”[513] (A fusion center is designed to promote information-sharing and to streamline intelligence-gathering, not only at the federal level between various agencies but also among the U.S. military and state- and local-level government.) In addition to the national-security realm, audit logs or similar mechanisms are mandated with respect to certain credit-rating agencies, financial transactions, and healthcare software. To be effective, audit logs need to be immutable.[514] While not specifically addressing AWS, the UK MoD Joint Doctrine Note on unmanned aircraft systems states that “[a] complex weapon system is also likely to require an authorisation and decisions log, to provide an audit trail for any subsequent legal enquiry.”[515]
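As a rough illustration of what immutability might look like in software, the following sketch (again in Python, with field names and structure that are our own assumptions rather than any deployed system’s format) chains each audit-log entry to a cryptographic hash of the previous entry, so that any later alteration of a record can be detected when the chain is verified:

# Minimal sketch of a tamper-evident (hash-chained) audit log. Illustrative only;
# field names and structure are assumptions, not any deployed system's format.
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str, detail: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        # Each entry's hash covers its content plus the previous hash, forming a chain.
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks verification."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


log = AuditLog()
log.append("operator-17", "query", "requested sensor feed for sector 4")
log.append("system", "decision", "flagged object for human review")
print("log verifies:", log.verify())

An append-only, hash-chained structure of this kind does not by itself guarantee accountability, but it supplies the sort of reconstructable audit trail that the UK MoD Joint Doctrine Note contemplates for any subsequent legal enquiry.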

Community Self-Regulation

A recent call for self-imposed regulation by a group of expert scientists in the domain of genetic engineering may provide a regulatory model for those involved in the development of war algorithms. The basic idea is that, even where there is no or little formal regulation, a community can choose, on its own initiative, to delineate what is and is not acceptable and to self-police the resulting boundaries.

The plea by some leading scientists partly concerned a relatively easy-to-use gene-editing technique called CRISPR/Cas9. (Gene-editing techniques, in short, “use enzymes called nucleases to snip DNA at specific points and then delete or rewrite the genetic information at those locations.”[516]) CRISPR/Cas9 had “suddenly made it possible to cross [a] Rubicon”: “[f]or decades, the ability to make changes that could be inherited in the human genome has been viewed as a fateful decision — but one that could be postponed because there was no safe and efficient way to edit the genome.”[517] With CRISPR/Cas9, it has been said, “the long theoretical issue now requires practical decisions.”[518]

In December 2015, the Organizing Committee for the International Summit on Human Gene Editing came to an agreement on “a recommendation not to stop human-gene-editing research outright, but to refrain from research and applications that use modified human embryos to establish a pregnancy.”[519] More specifically, intensive basic and preclinical research should proceed, the Committee said, but that research should be “subject to appropriate legal and ethical rules and oversight, on (i) technologies for editing genetic sequences in human cells, (ii) the potential benefits and risks of proposed clinical uses, and (iii) understanding the biology of human embryos and germline cells.”[520] And “[i]f, in the process of research, early human embryos or germline cells undergo gene editing,” the Committee entreated, “the modified cells should not be used to establish a pregnancy.”[521]

The Committee also called for an ongoing forum to address these issues. The push should be for “[t]he international community … [to] strive to establish norms concerning acceptable uses of human germline editing and to harmonize regulations, in order to discourage unacceptable activities while advancing human health and welfare.”[522] Against this backdrop, the Committee called upon the national academies that co-hosted the summit “to take the lead in creating an ongoing international forum to discuss potential clinical uses of gene editing; help inform decisions by national policymakers and others; formulate recommendations and guidelines; and promote coordination among nations.”[523] This forum, the Committee stated, “should be inclusive among nations and engage a wide range of perspectives and expertise,” such as “biomedical scientists, social scientists, ethicists, health care providers, patients and their families, people with disabilities, policymakers, regulators, research funders, faith leaders, public interest advocates, industry representatives, and members of the general public.”[524]

Zooming out, the call for various forms of self-regulation by these scientists might be relevant for those involved in the design and development of war algorithms—another area where some are concerned about crossing a moral Rubicon. In addition to the broader point (that, alongside forms of legal responsibility, a community can raise the normative bar for itself), specific possible regulatory avenues emerge: setting boundaries on possible research and imposing moratoriums (where deemed necessary); defining legal and ethical rules and oversight mechanisms; committing to review existing regulations on an ongoing basis; and establishing forums to address enduring and emergent concerns.

[450].  Derived in part from International Law Association, supra note 35, at 5.

[451].  See Wittes & Blum, supra note 31, at 203–206.

[452].  Sullo & Wyatt, supra note 297, at ¶ 1.

[453].  Id. at ¶ 4.

[454].  Id. at ¶ 5 (referring to art. 3 Hague Peace Conferences [1899 and 1907] and art. 91 AP I).

[455].  Id. at ¶ 4.

[456].  Id. (“Based on the analysis of practice until 1995, Lillich, Weston, and Bederman concluded that the settlement of international claims by lump sum agreements was by far the prevailing practice and the creation of arbitral tribunals such as the Iran-United States Claims Tribunal the exception.”).

[457].  Id. at ¶ 5.

[458].  Id. (citing to Agreement between the Government of the State of Eritrea and the Government of the Federal Democratic Republic of Ethiopia, U.N. Doc. A/55/686-S/2000/1183 Annex).

[459].  Adrian Loets, Arms Control, in Max Planck Encyclopedia of Public International Law ¶ 21 (2013).

[460].  Id. at ¶ 23 (citation omitted).

[461].  AP I, supra note 12, at art. 90(2)(c)(i).

[462].  Id. at art. 90(2)(c)(ii).

[463].  See Sullo & Wyatt, supra note 297, at ¶ 4; see generally Christian Tomuschat, State Responsibility and the Individual Right to Compensation Before National Courts, in The Oxford Handbook of International Law in Armed Conflict (Andrew Clapham, Paola Gaeta & Tom Haeck eds., 2014).

[464].  Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law, adopted by UNGA Resolution 60/147, Dec. 16, 2005.

[465].  Theo van Boven, Victims’ Rights, in Max Planck Encyclopedia of Public International Law ¶ 19 (2007).

[466].  Id. at ¶¶ 10–13.

[467].  Compare, e.g., Joanne Foakes & Roger O’Keefe, Article 12, in The United Nations Convention on Jurisdictional Immunities of States and Their Property: A Commentary 209, 209–224 (Roger O’Keefe, Christian J. Tams & Antonios Tzanakopoulos eds., 2013) with Tomuschat, supra note 463. As noted above, another form of pecuniary compensation—though one not framed in terms of tortious liability—may arise under IHRL.

[468].  Another form of tortious liability—one that, in principle, establishes jurisdiction for serious violations of IHL to national courts in accordance with the principle of universal jurisdiction—may be relevant, though perhaps more in theory than in practice, at least under current interpretations. See, e.g., Tomuschat, supra note 463. Under the Alien Tort Claims Act (ATCA), federal judges “shall have original jurisdiction of any civil action by an alien for a tort only, committed in violation of the law of nations or a treaty of the United States.” 28 U.S.C. § 1350 (2012). Actions have been filed under the ATCA against foreign governments and foreign corporations, as well as against the U.S. government. Yet recent judicial interpretations have narrowed the statute’s scope of application. See, e.g., Ingrid Wuerth, Kiobel v. Royal Dutch Petroleum Co.: The Supreme Court and the Alien Tort Statute, 107 Am. J. Int’l L. 601 (2013).

[469].  See generally Foakes & O’Keefe, supra note 467. The form of pecuniary compensation here, which is based on a municipal tort law of the forum state, is distinguishable from the innovative “war tort” idea articulated by Rebecca Crootof, which is based on serious violations of IHL; however, the two might interface where a municipal tort is linked to a serious violation of IHL. See Crootof, War Torts, supra note 20, at 2. Crootof argues that “just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for ‘war torts’: serious violations of international humanitarian law that give rise to state responsibility.” Id. She believes that a “successful ban on autonomous weapon systems is unlikely (and possibly even detrimental).” Id. Instead, in her view, “what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons.” Id.

[470].  GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146. See also AP I, supra note 12, at art. 85.

[471].  GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146. See also AP I, supra note 12, at art. 85.

[472].  GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146. See also AP I, supra note 12, at art. 85.

[473].  Bernard H. Oxman, Jurisdiction of States, in Max Planck Encyclopedia of Public International Law ¶ 37 (2007).

[474].  Id. at ¶ 38.

[475].  Id. at ¶ 39.

[476].  Id.

[477].  Id.

[478].  Menno T. Kamminga, Extraterritoriality, in Max Planck Encyclopedia of Public International Law ¶ 1 (2012).

[479].  See id. at ¶ 3.

[480].  See id. at ¶ 4.

[481].  Id. at ¶ 1.

[482].  Derived in part from International Law Association, supra note 35, at 5.

[483].  The obligation to review weapons, means, and methods of warfare laid down in Article 36 of AP I and the customary law cognate (if any), discussed above, constitutes a form of required scrutiny that directly implicates legal responsibility.

[484].  See U.N. Charter art. 25, 39–42.

[485].  Under the U.N. Charter, “[t]he General Assembly may discuss any questions or any matters within the scope of the … Charter or relating to the powers and functions of any organs provided for in the … Charter, and, except as provided in Article 12, may make recommendations to the Members of the United Nations or to the Security Council or to both on any such questions or matters.” U.N. Charter art. 10. Among its explicit competences laid down in the U.N. Charter, “[t]he General Assembly may consider the general principles of co-operation in the maintenance of international peace and security, including the principles governing disarmament and the regulation of armaments, and may make recommendations with regard to such principles to the Members or to the Security Council or to both.” U.N. Charter art. 11 (emphasis added). And “[t]he General Assembly may call the attention of the Security Council to situations which are likely to endanger international peace and security.” Id.

[486].  Pursuant to the U.N. Charter, “[t]he Secretary-General may bring to the attention of the Security Council any matter which in his opinion may threaten the maintenance of international peace and security.” U.N. Charter art. 99. An inherent right to investigate in connection with this power has been invoked by several Secretaries-General. Katja Göcke & Hubertus von Mohr, United Nations, Secretary-General, in Max Planck Encyclopedia of Public International Law ¶ 18 (2013). The rationale is that “[s]ince it is necessary for the Secretary-General to have comprehensive knowledge of the situation in the conflict area before taking action, his authority [to bring any relevant matter to the attention of the Security Council] must encompass the right to conduct investigations and to implement preparatory fact-finding missions.” Id. at ¶ 20. According to Katja Göcke and Hubertus von Mohr, this power has proven its value especially “since States may for various reasons be reluctant to bring certain matters before the Security Council….” Id. at ¶ 19.

[487].  See, e.g., Christof Heyns (Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions), Rep. to Human Rights Council, UN Doc. A/HRC/23/47 (Apr. 9, 2013).

[488].  See, e.g., Human Rights Committee; Committee on Economic, Social and Cultural Rights (CESCR); Committee against Torture (CAT); Committee on the Rights of the Child (CRC); and Organisation for the Prohibition of Chemical Weapons (OPCW).

[489].  See, e.g., the Steering Committee of the Campaign to Stop Killer Robots (Human Rights Watch, Article 36, Association for Aid and Relief Japan, International Committee for Robot Arms Control, Mines Action Canada, Nobel Women’s Initiative, PAX, Pugwash Conferences on Science & World Affairs, Seguridad Humana en América Latina y el Caribe, and Women’s International League for Peace and Freedom). About Us, Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/about-us/ (last visited Aug. 25, 2016).

[490].  U.N. Charter art. 13.

[491].  Huw Llewellyn, United Nations, Sixth Committee, in Max Planck Encyclopedia of Public International Law ¶ 1 (2012).

[492].  Pemmaraju Sreenivasa Rao, International Law Commission (ILC), in Max Planck Encyclopedia of Public International Law ¶ 3 (2013) (citing to G.A. Res. 174 (II) (November 1947)).

[493].  Statute of the International Law Commission, art. 18(2), GA Res. 174(II), UN Doc. A/519 (1947).

[494].  Rao, supra note 492, at ¶ 6.

[495].  Id. (emphasis added).

[496].  See, e.g., Public Law 100-180, § 224 (“No agency of the Federal Government may plan for, fund, or otherwise support the development of command and control systems for strategic defense in the boost or post-boost phase against ballistic missile threats that would permit such strategic defenses to initiate the directing of damaging or lethal fire except by affirmative human decision at an appropriate level of authority.”). But see Law of War Manual, supra note 110, at § 6.9.5.4 n.111 (“This statute may, however, be an unconstitutional intrusion on the President’s authority, as Commander in Chief, to determine how weapons are to be used in military operations.”).

[497].  See, e.g., DOD AWS Dir., supra note 91; Hui-Min Huang et al., Autonomy Levels for Unmanned Systems (ALFUS) Framework, Volume II: Framework Models, NIST Special Publication 1011-II-1.0, Version 1.0 (2007), http://www.nist.gov/el/isd/ks/upload/ALFUS-BG.pdf; Jessie Y.C. Chen, Ellen C. Haas, Krishna Pillalamarri & Catherine N. Jacobson, “Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies,” U.S. Army Research Laboratory, ARL-TR-3834 (July 2006).

[498].  European Parliament Resolution on the Use of Armed Drones ¶ H.2(d) (2014/2567(RSP)) (Feb. 25, 2014).

[499].  Henk Cor van der Kwast, Perm. Rep. of Neth. to the Conference on Disarmament, Opening Statement at the 2016 Informal Meeting of Experts, at 4, UN Office in Geneva (April 11, 2016), http://www.unog.ch/80256EDD006B8954/(httpAssets)/FC2E59B32F14D791C1257F920057CAE6/$file/2016_LAWS+MX_GeneralExchange_Statements_Netherlands.pdf. See also Steven Groves, A Manual Adapting the Law of Armed Conflict to Lethal Autonomous Weapons Systems (Heritage Foundation, Special Report No. 183, 2016), http://www.heritage.org/research/reports/2016/04/a-manual-adapting-the-law-of-armed-conflict-to-lethal-autonomous-weapons-systems.

[500].  E.g., Tallinn Manual on the International Law Applicable to Cyber Warfare (Michael Schmitt ed., 2013); Program on Humanitarian Policy and Conflict Research, Manual on International Law Applicable to Air and Missile Warfare (2009); International Institute of Humanitarian Law, San Remo Manual on International Law Applicable to Armed Conflicts at Sea (1995). See also Project on a Manual on International Law Applicable to Military Uses of Outer Space (MILAMOS), https://www.mcgill.ca/milamos/home (last visited Aug. 27, 2016).

[501].  Nils Melzer (ICRC), Interpretive Guidance on the Notion of Direct Participation in Hostilities Under International Humanitarian Law (2009).

[502].  Dutch Government, Response to AIV/CAVV Report, supra note 22.

[503].  This approach aligns in certain respects with the focus on systems engineering discussed in the UK MoD Joint Doctrine Note on unmanned aircraft systems. The authors of that document state that “[i]n order to ensure that new unmanned aircraft systems adhere to present and future legal requirements, it is likely that a systems engineering approach will be the best model for developing the requirement and specification.” U.K. Ministry of Def., supra note 113, at 5-2. Using such an approach, according to the Joint Doctrine Note authors, “the legal framework for operating the platform would simply form a list of capability requirements that would sit alongside the usual technical and operational requirements.” Id. In turn, “[t]his would then inform the specification and design of various sub-systems, as well as informing the concept of employment.” Id.

[504].  Dutch Government, Response to AIV/CAVV Report, supra note 22.

[505].  See Tjerk de Greef & Alex Leveringhaus, Design for Responsibility: Safeguarding Moral Perception via a Partnership Architecture, 17 Cognition, Technology & Work 319 (2015).

[506].  Id. at 326 (emphasis original).

[507].  Id. (citations omitted).

[508].  Id. at 327.

[509].  Id. The authors note that “[t]his raises issues about foresight, negligence and so on that we cannot tackle here.” Rather, “[f]or now, it suffices to note that the operator remains firmly control of his machine—even if there is a physical distance between them or that the machines operates at increased levels of automation.” Id.

[510].  Id.

[511].  See generally Lawrence Lessig, Code and Other Laws of Cyberspace (1999).

[512].  Danièle Bourcier, Centre national de la recherche scientifique, Artificial Intelligence & Autonomous Decisions: From Judgelike Robot to Soldier Robot, Address at the 2016 Informal Meeting of Experts, UN Office in Geneva (April 2016), available at http://www.unog.ch/80256EDD006B8954/(httpAssets)/338ABCC8C57BB09CC1257F9A0045197A/$file/2016_LAWS+MX+Presentations_HRandEthicalIssues_Daniele+Vourcier.pdf.

[513].  Pasquale, supra note 1, at 157 (citing to Markle Task Force on National Security in the Information Age, Implementing a Trusted Information Sharing Environment: Using Immutable Audit Logs to Increase Security, Trust, and Accountability, at I (2006), http://research.policyarchive.org/15551.pdf).

[514].  See Pasquale, supra note 1, at 157; see also id. at 159 (stating that “[i]f immutable audit logs of fusion centers are regularly reviewed, misconduct might be discovered, wrongdoers might be held responsible, and similar misuses might be deterred”) (citation omitted).

[515].  U.K. Ministry of Def., supra note 113, at 5-6. See also DOD AWS Dir., supra note 91 (establishing audit-like requirements in DoD policy).

[516].  David Cyranoski, Ethics of Embryo Editing Divides Scientists, 519 Nature 272, 272 (2015).

[517].  Nicholas Wade, Scientists Seek Moratorium on Edits to Human Genome That Could Be Inherited, N.Y. Times (Dec. 3, 2015), http://www.nytimes.com/2015/12/04/science/crispr-cas9-human-genome-editing-moratorium.html.

[518].  Id.; see, e.g., George Church, Perspective: Encourage the Innovators, 528 Nature S7, S7 (2015).

[519].  Sara Reardon, Global Summit Reveals Divergent Views on Human Gene Editing, 528 Nature 173, 173 (2015).

[520].  David Baltimore et al., International Summit Statement, On Human Gene Editing, (Dec. 3, 2015), http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=12032015a.

[521].  Id.

[522].  Id.

[523].  Id.

[524].  Id.


5 - Conclusion

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].


Section 5: Conclusion

Two contradictory trends may be combining into a new global climate that is at once enterprising and anxious. Militaries see myriad technological triumphs that will transform warfighting. Yet the possibility of “replacing” human judgment with algorithmically-derived “decisions”—especially in war—threatens what many consider to define us as humans.  

To date, the lack of demonstrated technical knowledge by many states and commentators, the unwillingness of states to share closely-held national-security technologies, and the absence of a definitional consensus on what is meant by autonomous weapon systems have impeded regulatory efforts on AWS. Moreover, uncertainty about which actors would benefit most from advances in AWS and for how long such benefits would yield a meaningful qualitative edge over others seems likely to continue to inhibit efforts at negotiating binding international rules on the development and deployment of AWS. In this sense, efforts at reaching a dedicated international regime to address AWS may encounter the same frustrations as analogous efforts to address cyber warfare. True, unlike in the early days of cyber warfare, there has been greater state engagement on regulation of AWS. In particular, the concept of “meaningful human control” over AWS has already been endorsed by over two dozen states. But much remains up in the air as states decide whether to establish a Group of Governmental Experts on AWS at the upcoming Fifth Review Conference of the CCW.

We have shown that, with respect to armed conflict, the primary formal regulatory avenues under international law are state responsibility for internationally wrongful acts and individual criminal responsibility for international crimes. These fields are well established and offer many more avenues than are often considered in the relatively narrow AWS discourse to date. In sum, ICL and, especially, IHL already address many of the concerns raised in relation to AWS—but ICL and IHL may not be sufficient to address all of those concerns.

The current crux, as we see it, is whether advances in technology—especially technologies capable of “self-learning” and of operating in relation to war, whose “choices” may be difficult for humans to anticipate or unpack, or whose “decisions” are seen as “replacing” human judgment—are susceptible to regulation and, if so, whether and how they should be regulated. One way to think about the core concern, in a way that vaults over at least some of the impediments to the discussion on AWS, is through the new concept we raise: war algorithms. War algorithms include not only algorithms capable of being used in weapons but also those capable of being used in any other function related to war.

More war algorithms are on the horizon. Two months ago, the Defense Science Board, an advisory body of the U.S. Department of Defense, identified five “stretch problems”—that is, goals that are “hard-but-not-too-hard” and that are meant to accelerate the process of bringing a new algorithmically-derived capability into widespread application:

  • Generating “future loop options” (that is, “using interpretation of massive data including social media and rapidly generated strategic options”);
  • Enabling autonomous swarms (that is, “deny[ing] the enemy’s ability to disrupt through quantity by launching overwhelming numbers of low‐cost assets that cooperate to defeat the threat”);
  • Intrusion detection on the Internet of Things (that is, “defeat[ing] adversary intrusions in the vast network of commercial sensors and devices by autonomously discovering subtle indicators of compromise hidden within a flood of ordinary traffic”);
  • Building autonomous cyber-resilient military vehicle systems (that is, “trust[ing] that … platforms are resilient to cyber‐attack through autonomous system integrity validation and recovery”); and
  • Planning autonomous air operations (that is, “operat[ing] inside adversary timelines by continuously planning and replanning tactical operations using autonomous ISR analysis, interpretation, option generation, and resource allocation”).[525]

What this trajectory toward greater algorithmic autonomy in war—at least among more technologically-sophisticated armed forces and even some non-state armed groups—means for accountability purposes seems likely to remain a contested issue for the foreseeable future.

In the meantime, it remains to be authoritatively determined whether war algorithms will be capable of making the evaluative decisions and value judgments that are incorporated into IHL. It is currently not clear, for instance, whether war algorithms will be capable of formulating and implementing the following IHL-based evaluative decisions and value judgments:[526]

  • The presumption of civilian status in case of “doubt”;[527]
  • The assessment of “excessiveness” of expected incidental harm in relation to anticipated military advantage;
  • The betrayal of “confidence” in IHL in relation to the prohibition of perfidy; and
  • The prohibition of destruction of civilian property except where “imperatively” demanded by the necessities of war.[528]
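
To illustrate the difficulty with the second of these judgments in particular, consider the following deliberately naive sketch in Python (the attack_not_excessive function and its inputs are invented for this example): any algorithmic rendering of the “excessiveness” assessment must presuppose a common scale for expected incidental harm and anticipated military advantage, as well as a threshold for comparing them, none of which IHL itself supplies.

    # Deliberately naive illustration, not a proposed method: the inputs and
    # the comparison below presuppose quantities, a commensurable scale, and
    # an implicit 1:1 threshold that IHL does not define.
    def attack_not_excessive(expected_incidental_harm: float,
                             anticipated_military_advantage: float) -> bool:
        # Every element here is a design choice a human would have to impose;
        # the legal rule itself supplies none of them.
        return expected_incidental_harm <= anticipated_military_advantage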

*     *     *

Two factors may suggest that, at least for now, the most immediate ways to regulate war algorithms more broadly and to pursue accountability over them might be to follow not only traditional paths but also less conventional ones. As illustrated above, the latter might include relatively formal avenues—such as states making, applying, and enforcing war-algorithm rules of conduct within and beyond their territories—or less formal avenues—such as coding law into technical architectures and community self-regulation.

First, even where the formal law may seem sufficient, concerns about practical enforcement abound. Recently, for instance, states parties to the Geneva Conventions failed to muster the political support to establish a new IHL compliance forum.[529] There are a number of ways to interpret this refusal. But, at a minimum, it seems to point to a lack of political will among states to cast more light on IHL compliance. This suggests that even where existing IHL seems adequate as a regulatory regime for some aspects of the design, development, and use of AWS or war algorithms, it still lacks dependable enforcement as far as state conduct is concerned.

Second, the proliferation of increasingly advanced technical systems based on self-learning and distributed control raises the question of whether such systems pose conceptual challenges to the model of individual responsibility found in ICL as it bears on AWS and war algorithms. At a general level, this is not a wholly new concern, as distributed systems have been used in relation to war for a long time. But the design, development, and operation of those systems might be increasingly difficult to square with the foundational tenet of ICL—that “[c]rimes against international law are committed by men, not by abstract entities”[530]—as learning algorithms and architectures advance.[531]

In short, individual responsibility for international crimes under international law remains one of the vital accountability avenues in existence today, as do measures of remedy for state responsibility. Yet in practice responsibility along either avenue is unfortunately relatively rare. And thus these paths, whether pursued on their own or in combination, seem insufficient to effectively address the myriad regulatory concerns pertaining to war algorithms—at least until we better understand what is at issue. These concerns might lead those seeking to strengthen accountability of war algorithms to pursue not only traditional, formal avenues but also less formal, softer mechanisms.

In that connection, it seems likely that attempts to change governments’ approaches to technical autonomy in war through social pressure (at least for those governments that might be responsive to that pressure) will continue to be a vital avenue along which to pursue accountability. But here, too, there are concerns. Numerous initiatives already exist. Some of them are very well informed; others less so. Many of them are motivated by ideological, commercial, or other interests that—depending on one’s viewpoint—might strengthen or thwart accountability efforts. And given the paucity of formal regulatory regimes, some of these initiatives may end up having considerable impact, despite their shortcomings.

Stepping back, we see that technologies of war, as with technologies in so many areas, produce an uneasy blend of promise and threat.[532] With respect to war algorithms, understanding these conflicting pulls requires attention to a century-and-a-half-long history during which war came to be one of the most highly regulated areas of international law. But it also requires technical know-how. Thus those seeking accountability for war algorithms would do well not to forget the essentially political work of IHL’s designers—nor to obscure the fact that today’s technology is, at its core, designed, developed, and deployed by humans. Ultimately, war-algorithm accountability seems unrealizable without competence in technical architectures and in legal frameworks, coupled with ethical, political, and economic awareness.

[525].  Defense Science Board, supra note 7, at 76–97.

[526].  These concerns were raised in relation to autonomous weapon systems, but they are also implicated by war algorithms.

[527].  Swiss, “Compliance-Based” Approach, supra note 74, citing art. 50(3) and art. 52(3) of Additional Protocol I to the Geneva Conventions. See AP I, supra note 12, at art. 50(3), 52(3).

[528].  Id., citing art. 23(g) of Hague Regulation IV, see Hague Convention (IV) Respecting the Laws and Customs of War on Land art. 23(g), Oct. 18, 1907, T.S. 539, and art. 53 of the Fourth Geneva Convention, see GC IV, supra note 349, at art. 53.

[529].  Compare 32nd International Conference of the Red Cross and Red Crescent, Draft “0” Resolution on “Strengthening compliance with international humanitarian law” (undated), https://www.icrc.org/en/download/file/13244/32ic-draft-0-resolution-on-ihl-compliance-20150915-en.pdf with 32nd International Conference of the Red Cross and Red Crescent, Resolution 2 (Dec. 10, 2015), http://rcrcconference.org/wp-content/uploads/sites/3/2015/04/32IC-AR-Compliance_EN.pdf.

[530].  1 Trial of the Major War Criminals Before the International Military Tribunal 223 (1947).

[531].  In a related context, M.C. Elish has noted a dilemma in which “control has become distributed across multiple actors (human and nonhuman),” and yet “our social and legal conceptions of responsibility have remained generally about an individual.” She thus “developed the term moral crumple zone to describe the result of this ambiguity within systems of distributed control, particularly automated and autonomous systems.” The basic idea is that “[j]ust as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.” M.C. Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction 3–4 (We Robot 2016 Working Paper) (March 20, 2016), http://dx.doi.org/10.2139/ssrn.2757236 (using “the terms autonomous, automation, machine and robot as related technologies on a spectrum of computational technologies that perform tasks previously done by humans” and discussing a framework for categorizing types of automation proposed by Parasuraman, Sheridan and Wickens, who “define automation specifically in the context of human-machine comparison and as ‘a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator.’”). Id. at n.5 (citing to Parasuraman et al., “A Model for Types and Levels of Human Interaction with Automation,” 30 IEEE Transactions on Systems, Man and Cybernetics 3 (2000)). Elish notes that the term arose in her work with Tim Hwang. Id. at 3.

[532].  On broader historical, social, and political forces that shape notions and experiences of technology, at least in the American context, see, e.g., John M. Staudenmaier, Technology, in A Companion to American Thought 667–669 (Richard Wrightman Fox & James T. Kloppenberg eds., 1995).

Bibliography

Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].


Bibliography

Online Resources

The Ethical Autonomy Project of the Center for a New American Security (CNAS) maintains a Bibliography at http://www.cnas.org/research/defense-strategies-and-assessments/20YY-Warfare-Initiative/Ethical-Autonomy/bibliography.

The United Nations Office in Geneva maintains web pages that host papers produced by and statements given by states—as well as materials produced by non-state commentators—in relation to “Lethal Autonomous Weapon Systems” in the context of the CCW at http://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument.

Sources

Keith Abney, Autonomous Robots and the Future of Just War Theory, in Routledge Handbook of the Law of Armed Conflict (Rain Liivoja & Timothy L. H. McCormack eds., 2016).

Benjamin Adler et al., Autonomous Exploration of Urban Environments Using Unmanned Aerial Vehicles, 31 J. Field Robotics 912 (2014).

Sarah Ahern, International Criminal Law and Autonomous Weapons: A Challenge Less Considered, ILA Reporter (Dec. 3, 2015), http://ilareporter.org.au/tag/autonomous-weapons/.

Autonomous Weapon Systems: The Need for Meaningful Human Control, 97 Advisory Council on International Affairs (AIV), 26 Advisory Committee on Issues of Public International Law (CAVV), Oct. 2015.

Edwin F. Albertsworth, The Machine-Age Mind and Legal Developments, 20 Ky. L. J. 416 (1931).

Brad Allenby, Emerging Technologies and the Future of Humanity, 71 Bull. Atomic Sci. 29 (2015).

Philip Alston, Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law, 21 J.L. Info. & Sci. 35 (2011).

Jürgen Altmann et al., Armed Military Robots: Editorial, 15 Ethics & Info. Tech. 73 (2013).

Jürgen Altmann, Arms Control for Armed Uninhabited Vehicles: An Ethical Issue, 15 Ethics & Info. Tech. 137 (2013).

Ryota Akasaka (赤坂亮太), Product Liability for Autonomous Robots: In Regard to Concept of Defect and State-of-Art, 13 Inf. Netw. L. Rev. (情報ネットワーク・ローレビュー) 103 (2014).

Amnesty International, Moratorium on Fully Autonomous Robotic Weapons Needed to Allow the UN to Consider Fully Their Far-Reaching Implications and Protect Human Rights, Written statement submitted by Amnesty International, U.N. Doc. A/HRC/23/NGO/106 (May 22, 2013).

Kenneth Anderson et al., Adapting the Law of Armed Conflict to Autonomous Weapon Systems, 90 Int’l L. Stud. 386 (2014).

Kenneth Anderson, Comparing The Strategic And Legal Features Of Cyberwar, Drone Warfare, And Autonomous Weapon Systems, Hoover Inst. (Feb. 27, 2015), http://www.hoover.org/research/comparing-strategic-and-legal-features-cyberwar-drone-warfare-and-autonomous-weapon-systems.

Kenneth Anderson & Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can, Hoover Inst. (2013).

Kenneth Anderson & Matthew Waxman, Law and Ethics for Robot Soldiers, Pol’y Rev. (2012).

Kenneth Anderson & Matthew Waxman, Similar Ethical Dilemmas for Autonomous Weapon Systems and Autonomous Self-Driving Cars, Lawfare (Nov. 6, 2015), https://www.lawfareblog.com/similar-ethical-dilemmas-autonomous-weapon-systems-and-autonomous-self-driving-cars.

Kenneth Anderson & Matthew Waxman, Threats, Signaling Behavior, Assertiveness & Aggression in Autonomous Robot-Human Competitive Strategic Interactions: Comparing Regimes of Ethical and Legal Accountability Between Self-Driving Vehicles and Autonomous Lethal Weapon Systems, Address at the We Robot 2013 Conference on Legal and Policy Issues Relating to Robotics (Stanford University Apr. 2013).

Defense Applications of Artificial Intelligence (Stephen J. Andriole & Gerald W. Hopple eds., 1988).

Liran Antebi, The UN’s Engagement with Autonomous Weapon Systems: A Missed Opportunity? [in Hebrew], INSS Insight No. 707 (Institute for National Security Studies, 2015).

Ronald C. Arkin, Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (Terry Fong et al. eds., 2008).

Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots (2009).

Ronald Arkin, The Case for Banning Killer Robots: Counterpoint, 58 Comm. ACM 46 (2015).

Ronald Arkin, The Case for Ethical Autonomy in Unmanned Systems, 9 J. Mil. Ethics 332 (2010).

Thomas Arnold & Matthias Scheutz, Against the Moral Turing Test: Accountable Design and the Moral Reasoning of Autonomous Systems, 18 Ethics & Info. Tech. 103 (2016).

Peter M. Asaro, A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics, in Robot Ethics: The Ethical and Social Implications of Robotics 169 (Patrick Lin et al. eds., 2012).

Peter Asaro, On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making, 94 Int’l Rev. Red Cross 687 (2012).

Peter M. Asaro, Remote-Control Crimes, 18 IEEE Robotics & Automation Mag. 68 (2011).

Peter M. Asaro, Determinism, Machine Agency, and Responsibility, Politica & Società 265 (2014).

Peter M. Asaro, Jus Nascendi, Robotic Weapons and the Martens Clause, in Robot law (Ryan Calo et al. eds., 2016).

Peter M. Asaro, The Liability Problem for Autonomous Artificial Agents, 2016 AAAI Spring Symposium Series (March 21, 2016).

Alan Backstrom & Ian Henderson, New Capabilities in Warfare: An Overview of Contemporary Technological Developments and the Associated Legal and Engineering Issues in Article 36 Weapons Reviews, 94 Int’l Rev. Red Cross 483 (2012).

Jack M. Balkin, The Path of Robotics Law, 6 Calif. L. Rev. Circuit 45 (2015).

Michael Barnes et al., Five Requisites for Human-Agent Decision Sharing in Military Environments, in Advances in Human Factors in Robots and Unmanned Systems (Pamela Savage-Knepshield & Jessie Chen eds., 2016).

Michael J. Barnes et al., Designing for Humans in Autonomous Systems: Military Applications, U.S. Army Research Laboratory (2014).

Jack M. Beard, Autonomous Weapons and Human Responsibilities, 45 Geo. J. Int’l L. 617 (2013).

Jack M. Beard, Legal Phantoms in Cyberspace: The Problematic Status of Information as a Weapon and a Target Under International Humanitarian Law, 47 Vand. J. Transnat’l L. 67 (2014).

Susanne Beck, The Problem of Ascribing Legal Responsibility in the Case of Robotics, 31 AI & Soc’y 1 (2015).

Aline Belloni et al., Dealing with Ethical Conflicts in Autonomous Agents and Multi-Agent Systems, Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence (2015).

O. Ben-Naftali & Z. Triger, The Human Conditioning: International Law and Science-Fiction, Law, Culture and the Humanities (2013).

Roger Berkowitz, Drones and the Question of “The Human,” 28 Ethics & Int’l Aff. 159 (2014).

Vincent Bernard, Editorial: Science Cannot Be Placed above Its Consequences, 94 Int’l Rev. Red Cross 457 (2012).

Fiona Berreby et al., Modelling Moral Reasoning and Ethical Responsibility with Logic Programming, in Logic for Programming, Artificial Intelligence, and Reasoning: 20th International Conference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015, Proceedings 532 (Martin Davis et al. eds., 2015).

Autonomous Weapons Systems: Law, Ethics, Policy (Nehal Bhuta et al. eds., 2016).

Matthias Biere & Marcel Dickow, Lethal Autonomous Weapons Systems, Future Challenges, CSS Analyses in Sec. Pol’y (2014).

Gwendelynn Bills, LAWS unto Themselves: Controlling the Development and Use of Lethal Autonomous Weapons Systems, 83 Geo. Wash. L. Rev. 176 (2015).

Wendy Blanks, Militaries’ Growing Use of Ground Robots Raises Ethics Concerns, The Sleuth Journal (May 21, 2013), http://www.thesleuthjournal.com/militaries-growing-use-of-ground-robots-raises-ethics-concerns/.

Olivier Boissier et al., A Roadmap towards Ethical Autonomous Agents, Ethicaa: Éthique & Agents autonomes (2015).

Jeroen van den Boogaard, Proportionality and Autonomous Weapons Systems, J. Int’l Hum. Legal Stud. (2016).

Bill Boothby, Autonomous Attack—Opportunity or Spectre?, in Yearbook of International Humanitarian Law 2013 71 (Terry D. Gill et al. eds., 2015).

William Boothby, Some Legal Challenges Posed by Remote Attack, 94 Int’l Rev. Red Cross 579 (2012).

William H. Boothby, Interacting Technologies and Legal Challenge, in Conflict Law 97 (2014).

Jason Borenstein, The Ethics of Autonomous Military Robots, 2 Stud. Ethics, L., & Tech. (2008).

John Borrie, Understanding Different Types of Risk: Unintentional Risks, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference on Understanding Different Types of Risk (2016).

Ujjayini Bose, The Black Box Solution to Autonomous Liability, 92 Wash. U. L. Rev. 1325 (2015).

Jan Broersen, Responsible Intelligent Systems: The REINS Project, 28 Künstliche Intelligenz (KI) 209 (2014).

Hanna Brollowski, Military Robots and the Principle of Humanity: Distorting the Human Face of the Law, in Armed Conflict and International Law: In Search of the Human Face 53 (Mariëlle Matthee et al. eds., 2013).

Laura Burgess, Autonomous Legal Reasoning? Legal and Ethical Issues in the Technologies of Conflict, ICRC Intercross Blog (Dec. 7, 2015), http://intercrossblog.icrc.org/blog/048x5za4aqeztdiu3r8f96s8m7lzom.

Robot Law (Ryan Calo et al. eds., 2016).

Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513 (2015).

M.C. Canellas & R.A. Haga, Toward Meaningful Human Control of Autonomous Weapons Systems through Function Allocation, 2015 IEEE International Symposium on Technology and Society (ISTAS) 1 (Nov. 2015).

Charli Carpenter, Beware the Killer Robots, Foreign Affairs (Jul. 9, 2013), https://www.foreignaffairs.com/articles/united-states/2013-07-03/beware-killer-robots.

Julie Carpenter, The Quiet Professional: An Investigation of U.S. Military Explosive Ordnance Disposal Personnel Interactions with Everyday Field Robots (2013) (unpublished PhD Thesis, University of Washington).

Charli Carpenter, Vetting the Advocacy Agenda: Network Centrality and the Paradox of Weapons Norms, 65 Int’l Org. 69 (2011).

Jeffrey Carr, Responsible Attribution: A Prerequisite for Accountability, Tallinn Papers No. 6, NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) (2014).

Kelly Cass, Autonomous Weapons and Accountability: Seeking Solutions in the Law of War, 48 Loy. L.A. L. Rev. 1017 (2015).

Jeffrey L. Caton, Autonomous Weapon Systems: A Brief Survey of Developmental, Operational, Legal, and Ethical Issues (Strategic Studies Institute ed., 2015).

Marc Champagne & Ryan Tonkens, Bridging the Responsibility Gap in Automated Warfare, 28 Phil. & Tech. 125 (2015).

Monika Chansoria, To Ban or Regulate Autonomous Weapons: An Indian Response, 72 Bull. Atomic Sci. 120 (2016).

Thompson Chengeta, Accountability Gap, Autonomous Weapon Systems and Modes of Responsibility in International Law (Social Science Research Network Working Paper Series, 2016).

Thompson Chengeta, Are Autonomous Weapon Systems the Subject of Article 36 of Additional Protocol I to the Geneva Conventions? (Social Science Research Network Working Paper Series, 2016).

Thompson Chengeta, Can Robocop “Serve and Protect” within the Confines of Law Enforcement Rules? (Social Science Research Network Working Paper Series, 2016).

Thompson Chengeta, Defining the Emerging Notion of “Meaningful Human Control” in Autonomous Weapon Systems (AWS) 2016 (Social Science Research Network Working Paper Series, 2016).

Thompson Chengeta, Dignity and Autonomous Weapon Systems Debate: An African Perspective (Social Science Research Network Working Paper Series, 2016).

Thompson Chengeta, Measuring Autonomous Weapon Systems Against International Humanitarian Law Rules (Social Science Research Network Working Paper Series, 2016).

Jessie Y.C. Chen, Individual Differences in Human-Robot Interaction in a Military Multitasking Environment, 5 J. Cognitive Engineering & Decision Making 83 (2011).

Jessie Y.C. Chen & Michael J. Barnes, Human-Agent Teaming for Multirobot Control: A Review of Human Factors Issues, 44 IEEE Transactions on Human-Machine Systems 13 (2014).

Jessie YC Chen & Michael J. Barnes, Supervisory Control of Multiple Robots in Dynamic Tasking Environments, 55 Ergonomics 1043 (2012).

Ting Chen et al., Increasing Autonomy Transparency through Capability Communication in Multiple Heterogeneous UAV Management, in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2015).

George Cho, Unmanned Aerial Vehicles: Emerging Policy and Regulatory Issues, 22 J.L. Info. & Sci. 201 (2012).

Samir Chopra & Laurence F. White, A Legal Theory for Autonomous Artificial Agents (2011).

Danielle Keats Citron & Frank A. Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014).

Ben Clarke, Arming Drones for Law Enforcement: Challenges and Opportunities for the Protection of Human Life (Social Science Research Network Working Paper Series, 2013).

Mark Coeckelbergh, Drones, Information Technology, and Distance: Mapping the Moral Epistemology of Remote Fighting, 15 Ethics & Info. Tech. 87 (2013).

Mark Coeckelbergh, From Killer Machines to Doctrines and Swarms, or Why Ethics of Military Robotics Is Not (Necessarily) About Robots, 24 Phil. & Tech. 269 (2011).

Nicolas Cointe et al., Ethical Judgment of Agents’ Behaviors in Multi-Agent Systems, International Conference on Autonomous Agents and Multiagent Systems (May 3, 2016).

Roberto Cordeschi, Automatic Decision-Making and Reliability in Robotic Systems: Some Implications in the Case of Robot Weapons, 28 AI & Soc’y 431 (2013).

Geoffrey Corn, Autonomous Weapon Systems: Managing the Inevitability of “Taking the Man out of the Loop” (Social Science Research Network Working Paper Series, 2014).

Geoffrey S. Corn et al., The Law of Armed Conflict: An Operational Approach, 94 Int’l Rev. Red Cross 855 (2012).

Rebecca Crootof, The Killer Robots Are Here: Legal and Policy Implications, 36 Cardozo L. Rev. 1837 (2014).

Rebecca Crootof, The Meaning of “Meaningful Human Control” (Social Science Research Network Working Paper Series, 2015).

Rebecca Crootof, The Varied Law of Autonomous Weapon Systems, in Autonomous systems: Issues for Defence Policymakers (Andrew P. Williams & Paul D. Scharre eds., 2015).

Rebecca Crootof, War, Responsibility, and Killer Robots, 40 N.C. J. Int’l L. & Com. Reg. 909 (2015).

Rebecca Crootof, War Torts: Accountability for Autonomous Weapons Systems, 164 U. Penn. L. Rev. (forthcoming June 2016), http://ssrn.com/abstract=2657680.

Rebecca Crootof, Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems, Lawfare (Nov. 24, 2015), https://www.lawfareblog.com/why-prohibition-permanently-blinding-lasers-poor-precedent-ban-autonomous-weapon-systems.

Kathleen Bartzen Culver, From Battlefield to Newsroom: Ethical Implications of Drone Technology in Journalism, 29 J. Mass Media Ethics 52 (2014).

Mary L. Cummings, Automation and Accountability in Decision Support System Interface Design, 32 J. Tech. Stud. (2006).

John Danaher, Robots, Law and the Retribution Gap, J. Ethics & Info. Tech. 1 (2016).

Peter Danielson, Engaging the Public in the Ethics of Robots for War and Peace, 24 Phil. & Tech. 239 (2011).

Jovana Davidovic, Should the Changing Character of War Affect Our Theories of War?, 19 Ethical Theory & Moral Prac. 603 (2016).

Jason S. DeSon, Automating the Right Stuff? The Hidden Ramifications of Ensuring Autonomous Aerial Weapon Systems Comply with International Humanitarian Law, 72 A.F. L. Rev. 85 (2015).

Marcel Dickow et al., First Steps towards a Multidimensional Autonomy Risk Assessment (MARA) in Weapons Systems (German Institute for International and Security Affairs, Working Paper No. 5, 2015).

Ezio Di Nucci & Filippo Santoni de Sio, Who’s Afraid of Robots? Fear of Automation and the Ideal of Direct Control, in Roboethics in Film (Fiorella Battaglia & Natalie Weidenfeld eds., 2014).

Bonnie Docherty et al., Losing Humanity: The Case against Killer Robots (Human Rights Watch 2012).

Bonnie Docherty et al., Shaking the Foundations: The Human Rights Implications of Killer Robots (Human Rights Watch 2014).

Gordana Dodig Crnkovic & Baran Çürüklü, Robots: Ethical by Design, 14 Ethics & Info. Tech. 61 (2012).

Neelke Doorn, A Rawlsian Approach to Distribute Responsibilities in Networks, 16 Sci. & Engineering Ethics 221 (2010).

Neelke Doorn, Responsibility Ascriptions in Technology Development and Engineering: Three Perspectives, 18 Sci. & Engineering Ethics 69 (2012).

U.S. Army Combined Arms Center, Combat Studies Institute, Robots on the Battlefield: Contemporary Perspectives and Implications for the Future (Ronan Douaré et al. eds., 2014).

Aaron M. Drake, Current U.S. Air Force Drone Operations and Their Conduct in Compliance with International Humanitarian Law - An Overview, 39 Denv. J. Int’l L. & Pol’y 629 (2010).

Cordula Droege, Get off My Cloud: Cyber Warfare, International Humanitarian Law, and the Protection of Civilians, 94 Int’l Rev. Red Cross 533 (2012).

Charles J. Dunlap Jr., Accountability and Autonomous Weapons: Much Ado About Nothing?, Temp. Int’l & Comp. L.J. (forthcoming 2016).

The Means to Kill: Essays on the Interdependence of War and Technology from Ancient Rome to the Age of Drones (G. Dworok & F. Jacob eds., 2016).

Kjølv Egeland, Lethal Autonomous Weapon Systems under International Humanitarian Law, 85 Nordic J. Int’l L. 89 (2016).

Kjølv Egeland, Machine Autonomy and the Uncanny: Recasting Ethical, Legal, and Operational Implications of the Development of Autonomous Weapon Systems (Spring 2014) (unpublished Master’s Thesis in Political Science, University of Oslo).

Kristen E. Eichensehr, Cyberwar & International Law Step Zero, 50 Tex. Int’l L.J. 357 (2015).

Merel Ekelhof, “Are you smarter than Professor Hawking?” Higher Forces and Gut-Feelings in the Debate on Lethal Autonomous Weapons Systems, EJIL: Talk! (Apr. 27, 2016), http://www.ejiltalk.org/are-you-smarter-than-professor-hawking-higher-forces-and-gut-feelings-in-the-debate-on-lethal-autonomous-weapons-systems/.

Roni A. Elias, Facing the Brave New World of Killer Robots: Adapting the Development of Autonomous Weapons Systems into the Framework of the International Law of War, 3 Indon. J. Int’l & Comp. L. 101 (2016).

Linda R. Elliott et al., Robotic Telepresence: Perception, Performance, and User Experience, U.S. Army Research Laboratory (2012).

Christian Enemark, Armed Drones and the Ethics of War: Military Virtue in a Post-Heroic Age (2014).

Matthias Englert et al., Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons (2014).

The Computational Turn: Past, Presents, Futures? Proceedings of the First International Conference of the International Association for Computing and Philosophy (IACAP) (Charles Ess & Ruth Hagengruber eds., 2011).

Tyler D. Evans, At War with the Robots: Autonomous Weapon Systems and the Martens Clause, 41 Hofstra L. Rev. 697 (2012).

Anthony Finn & Steve Scheding, Developments and Challenges for Autonomous Unmanned Vehicles: A Compendium, Volume 3 (2010).

FLI, AI Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence, Future of Life Institute (FLI), http://futureoflife.org/ai-open-letter/.

Lt. Col. Christopher M. Ford, Stockton Center for the Study of International Law, Remarks at the 2016 Informal Meeting of Experts, at 4, UN Office in Geneva (April 2016), http://www.unog.ch/80256EDD006B8954/(httpAssets)/D4FCD1D20DB21431C1257F9B0050B318/$file/2016_LAWS+MX_presentations_challengestoIHL_fordnotes.pdf.

James Foy, Autonomous Weapons Systems: Taking the Human out of International Humanitarian Law, 23 Dalhousie J. Legal Stud. 47 (2014).

Patricia K. Freeman & Robert S. Freeland, Media Framing the Reception of Unmanned Aerial Vehicles in the United States of America, 44 Tech. in Soc’y 23 (2016).

Michael Froomkin & P. Zak Colangelo, Self-Defense Against Robots and Drones, 48 Conn. L. Rev. 1 (2015).

George Galdorisi, Designing Autonomous Systems for Warfighters: Keeping Humans in the Loop, Small Wars Journal (2016).

Jai Galliott, Military Robots: Mapping the Moral Landscape (2015).

Jai C. Galliott, Uninhabited Aerial Vehicles and the Asymmetry Objection: A Response to Strawser, 11 J. Mil. Ethics 58 (2012).

Denise Garcia, Killer Robots: Why the US Should Lead the Ban, 6 Global Pol’y 57 (2015).

Robin Geiß, Book Review, 24 Eur. J. Int’l L. 722 (2013) (reviewing Claire Finkelstein, Jens David Ohlin, and Andrew Altman (eds), Targeted Killings: Law and Morality in an Asymmetrical World; Roland Otto, Targeted Killings and International Law; William H. Boothby, The Law of Targeting).

Robin Geiß, The International-Law Dimension of Autonomous Weapons Systems (2015).

Aaron Gevers, Is Johnny Five Alive or Did It Short Circuit? Can and Should an Artificially Intelligent Machine Be Held Accountable in War or is It Merely a Weapon?, 12 Rutgers J.L. & Pub. Pol’y 384 (2015).

Tony Gillespie, New Technologies and Design for the Laws of Armed Conflict, 160 The RUSI Journal 50 (2015).

Tony Gillespie & Robin West, Requirements for Autonomous Unmanned Air Systems Set by Legal Issues, 4 Int’l C2 J. 1 (2010).

Brendan Gogarty & Meredith Hagger, The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air, 19 J.L. Info. & Sci. 73 (2010).

Michael A. Goodrich & Alan C. Schultz, Human-Robot Interaction: A Survey, 1 Foundations and Trends in Human-Computer Interaction 203 (2007).

Gov’t (Neth.), Government Response to AIV/CAVV Advisory Report no. 97, Autonomous Weapon Systems: The Need for Meaningful Human Control (2015), http://aiv-advice.nl/8gr#government-responses.

Gov’t of Fr., Characterization of a LAWS (April 11–15, 2016) (non-paper), http://www.unog.ch/80256EDD006B8954/(httpAssets)/5FD844883B46FEACC1257F8F00401FF6/$file/2016_LAWSMX_CountryPaper_France+CharacterizationofaLAWS.pdf.

Gov’t of Switz., Towards a “Compliance-Based” Approach to LAWS [Lethal Autonomous Weapons Systems] (March 30, 2016) (informal working paper), http://www.unog.ch/80256EDD006B8954/(httpAssets)/D2D66A9C427958D6C1257F8700415473/$file/2016_LAWS+MX_CountryPaper+Switzerland.pdf.

Tjerk de Greef, Delegation and Responsibility: A Human-Machine Perspective, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons 134 (Ezio Di Nucci & Filippo Santoni de Sio eds., 2016).

Tjerk de Greef & Alex Leveringhaus, Design for Responsibility: Safeguarding Moral Perception via a Partnership Architecture, 17 Cognition, Tech. & Work 319 (2015).

Florian Gros et al., Ethics and Authority Sharing for Autonomous Armed Robots, http://ceur-ws.org/Vol-885/paper1.pdf.

Oren Gross, Cyber Responsibility to Protect: Legal Obligations of States Directly Affected by Cyber-Incidents, 48 Cornell Int’l L.J. 481 (2015).

Oren Gross, The New Way of War: Is There a Duty to Use Drones?, 67 Fla. L. Rev. 1 (2015).

Oren Gross, When Machines Kill: Criminal Responsibility for International Crimes Committed by Lethal Autonomous Robots, Address at the We Robot 2012 Conference: Military Robotics Panel Presentation (Apr. 22, 2012).

Steven Groves, A Manual Adapting the Law of Armed Conflict to Lethal Autonomous Weapons Systems, (Margaret Thatcher Center for Freedom Special Report No. 183, Apr. 7, 2016).

Chantal Grut, The Challenge of Autonomous Lethal Robotics to International Humanitarian Law, 18 J. Conflict & Sec. L. 5 (2013).

Mark Gubrud, Stopping Killer Robots, 70 Bull. Atomic Sci. 32 (2014).

Michael A. Guetlein, Lethal Autonomous Weapons - Ethical and Doctrinal Implications (Feb. 14, 2005) (Submitted to the faculty of the Naval War College in partial satisfaction of the requirements of the Department of Joint Military Operations).

The Society for the Study of Artificial Intelligence and Simulation of Behaviour, The Machine Question: AI, Ethics and Moral Responsibility. Proceedings of the AISB/IACAP World Congress 2012 (David J. Gunkel et al. eds., 2012).

Alonso Dunkelberg Gurmendi, Laws for L.A.W.S.: Legal Challenges for the Use of Lethal Autonomous Weapons Systems in Times of Armed Conflict (Social Science Research Network Working Paper Series, 2016).

Gabriel Hallevy, “I, Robot - I, Criminal” - When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses, 22 Syracuse Sci. & Tech. L. Rep. 1 (2010).

Gabriel Hallevy, The Matrix of Derivative Criminal Liability (2012).

Storrs Hall, Beyond AI: Creating the Conscience of the Machine (2007).

Daniel N. Hammond, Autonomous Weapons and the Problem of State Accountability, 15 Chi. J. Int’l L. 652 (2014).

Katherine L. Hanna, Old Laws, New Tricks: Drunk Driving and Autonomous Vehicles, 55 Jurimetrics J.L. Sci. & Tech. 275 (2015).

Maaike Harbers et al., Designing for Responsibility-Five Desiderata of Military Robots, Responsible Innovation Conference: Values and Valorisation (May 21, 2014).

Woodrow Hartzog et al., Inefficiently Automated Law Enforcement, 2015 Mich. State L. Rev. 1763 (2015).

Woodrow Hartzog, Unfair and Deceptive Robots, 74 Md. L. Rev. 785 (2015).

Titus Hattan, Lethal Autonomous Robots: Are They Legal under International Human Rights and Humanitarian Law?, 93 Neb. L. Rev. 1035 (2015).

Allyson Hauptman, Autonomous Weapons and the Law of Armed Conflict, 218 Mil. L. Rev. 170 (2013).

Brian F. Havel & John Q. Mulligan, Unmanned Aircraft Systems: A Challenge to Global Regulators, 65 DePaul L. Rev. 107 (2015).

Hannah Haviland, “The Machine Made Me Do It!”: An Exploration of Ascribing Agency and Responsibility to Decision Support Systems (May 2005) (unpublished Master’s Thesis in Applied Ethics, Linköpings University, Centre for Applied Ethics).

Thomas Hellström, On the Moral Responsibility of Military Robots, 15 Ethics & Info. Tech. 99 (2013).

Ian Henderson et al., Emerging Technology and Perfidy in Armed Conflict, 91 Int’l L. Stud. 468 (2015).

M.J. de C. Henshaw et al., Aiding Designers, Operators and Regulators to Deal with Legal and Ethical Considerations in the Design and Use of Lethal Autonomous Systems, in 2010 International Conference on Emerging Security Technologies (2010).

Jonathan David Herbach, Into the Caves of Steel: Precaution, Cognition and Robotic Weapon Systems Under the International Law of Armed Conflict, 4 Amsterdam L.F. 3 (2012).

Alexander Hevelke & Julian Nida-Rümelin, Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis, 21 Sci. & Engineering Ethics 619 (2015).

Patrick Chisan Hew, Artificial Moral Agents Are Infeasible with Foreseeable Technologies, 16 Ethics & Info. Tech. 197 (2014).

Patrick Chisan Hew, Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room, 18 Ethics & Info. Tech. 1 (2016).

Christof Heyns, Autonomous Weapons Systems: Living a Dignified Life and Dying a Dignified Death, in Autonomous Weapons systems: Law, Ethics, Policy (Nehal Bhuta et al. eds., 2016).

Christof Heyns, Human Rights and the Use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement, 38 Hum. Rts. Q. 350 (2016).

Human Law and Computer Law: Comparative Perspectives (Mireille Hildebrandt & Jeanne Gaakeer eds., 2013).

Morgan Hochheiser, The Truth behind Data Collection and Analysis, 32 J. Marshall J. Info. Tech. & Privacy L. 32 (2015).

Duncan Hollis, Setting the Stage: Autonomous Legal Reasoning in International Humanitarian Law (Social Science Research Network Working Paper Series, 2016).

Joel Hood, The Equilibrium of Violence: Accountability in the Age of Autonomous Weapons Systems, 11 BYU Int’l L. & Mgmt. Rev. 12 (2015).

Michael C. Horowitz, Public Opinion and the Politics of the Killer Robots Debate, 3 Res. & Pol. (2016).

Michael C. Horowitz, Coming next in Military Tech, 70 Bull. Atomic Sci. 54 (2014).

Michael C. Horowitz, The Looming Robotics Gap, Foreign Pol’y (May 5, 2014).

Michael C. Horowitz & Paul D. Scharre, Do Killer Robots Save Lives?, Politico Magazine (Nov. 19, 2014), http://www.politico.com/magazine/story/2014/11/killer-robots-save-lives-113010.html.

Patrick Hubbard, Do Androids Dream: Personhood and Intelligent Artifacts, 83 Temp. L. Rev. 405 (2011).

Review of the 2012 US Policy on Autonomy in Weapons Systems, Human Rights Watch (Apr. 15, 2013), https://www.hrw.org/news/2013/04/15/review-2012-us-policy-autonomy-weapons-systems.

Margaret Hu, Small Data Surveillance v. Big Data Cybersurveillance, 42 Pepp. L. Rev. 773 (2015).

Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects, International Committee of the Red Cross Expert Meeting Report (March 26, 2014).

Report on International Humanitarian Law and the Challenges of Contemporary Armed Conflicts, 31st International Conference of the Red Cross and Red Crescent 31IC/11/5.1.2 (Oct. 2011).

Report on International Humanitarian Law and the Challenges of Contemporary Armed Conflicts, 32nd International Conference of the Red Cross and Red Crescent 32IC/15/11 (Dec. 8, 2015).

Ohad Inbar & Joachim Meyer, Manners Matter: Trust in Robotic Peacekeepers, 59 Proceedings Hum. Factors & Ergonomics Soc’y Annual Meeting 185 (Sept. 1, 2015).

D.R. Jacques et al., Optimization of an Autonomous Weapon System’s Operating Characteristic, 3 IEEE Systems J. 489 (2009).

Chris Jenks, False Rubicons, Moral Panic & Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons (Social Science Research Network Working Paper Series, 2016).

Chris Jenks, The Distraction of Full Autonomy and the Need to Refocus the CCW LAWS Discussion on Critical Functions (Social Science Research Network Working Paper Series, 2016).

Eric Talbot Jensen, Emerging Technologies and LOAC Signaling, 91 Int’l L. Stud. 621 (2015).

Eric Talbot Jensen, Future War and the War Powers Resolution, 29 Emory Int’l L. Rev. 499 (2014).

Eric Talbot Jensen, Future War, Future Law, 22 Minn. J. Int’l L. 282 (2013).

Eric Talbot Jensen, The Future of the Law of Armed Conflict: Ostriches, Butterflies, and Nanobots, 35 Mich. J. Int’l L. 253 (2014).

Human-Robot Interactions in Future Military Operations (Florian Jentsch & Michael Barnes eds., 2012).

U.C. Jha, Killer Robots: Lethal Autonomous Weapon Systems Legal, Ethical and Moral Challenges (2016).

Aaron M. Johnson & Sidney Axinn, The Morality of Autonomous Robots, 12 J. Mil. Ethics 129 (2013).

Deborah G. Johnson, Technology with No Human Responsibility?, 127 J. Bus. Ethics 707 (2015).

Deborah G. Johnson & Merel E. Noorman, Responsibility Practices in Robotic Warfare, 92 Mil. Rev. 12 (2014).

Jeffrey Kahn, Protection and Empire: The Martens Clause, State Sovereignty, and Individual Rights, 56 Va. J. Int’l L. 1 (2016).

David S. Kang et al., Draper Unmanned Vehicle Systems, 18 Robotica 263 (2000).

Benjamin Kastan, Autonomous Weapons Systems: A Coming Legal “Singularity”?, 2013 U. Ill. J.L. Tech. & Pol’y 45 (2013).

Jakob Kellenberger, International Humanitarian Law and New Weapon Technologies, 34th Round Table on Current Issues of International Humanitarian Law, San Remo, 8–10 September 2011, 94 Int’l Rev. Red Cross 809 (2012).

Ian Kerr & Jason Millar, Delegation, Relinquishment and Responsibility: The Prospect of Expert Robots, in Robot Law (Ryan Calo et al. eds., 2016).

Ian Kerr & Katie Szilagyi, Asleep at the Switch? How Killer Robots Become a Force Multiplier of Military Necessity, in Robot Law (Ryan Calo et al. eds., 2016).

Ian Kerr & Katie Szilagyi, Evitable Conflicts, Inevitable Technologies? The Science and Fiction of Robotic Warfare and IHL, L. Culture & Human. (2014).

Kristine Kiernan, Human Factors Considerations in Autonomous Lethal Unmanned Aerial Systems, Aviation, Aeronautics, and Aerospace International Research Conference (2015).

Michał Klincewicz, Autonomous Weapons Systems, the Frame Problem and Computer Security, 14 J. Mil. Ethics 162 (2015).

William J. Kohler & Alex Colbert-Taylor, Current Law and Potential Legal Issues Pertaining to Automated, Autonomous and Connected Vehicles, 31 Santa Clara Computer & High Tech. L.J. 99 (2015).

Michael Kolb, Soldier and Robot Interaction in Combat Environments (2012) (unpublished PhD Thesis, University of Oklahoma, Graduate College).

Christopher M. Kovach, Beyond Skynet: Reconciling Increased Autonomy in Computer-Based Weapons Systems with the Laws of War, 71 A.F. L. Rev. 231 (2014).

Вадим Козюлин & Альберт Ефимов, Новый Бонд — Машина с Лицензией на Убийство [Vadim Kozyulin & Albert Efimov, The New Bond: A Machine with a License to Kill], 22 Индекс Безопасности [Security Index] 116, 17 (2016).

Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons (2009).

Armin Krishnan, Automating War: The Need for Regulation, 30 Cont. Sec. Pol’y 172 (2011).

Tetyana (Tanya) Krupiy, A Case against Relying Solely on Intelligence, Surveillance and Reconnaissance Technology to Identify Proposed Targets, 20 J. Conflict & Sec. L. 415 (2015).

Tetyana (Tanya) Krupiy, Of Souls, Spirits and Ghosts: Transposing the Application of the Rules of Targeting to Lethal Autonomous Robots, 16 Melbourne J. Int’l L. 145 (2015).

Artur Kuptel & Andy Williams, Policy Guidance: Autonomy in Defence Systems (Social Science Research Network Working Paper Series, 2014).

Matthew S. Larkin, Brave New Warfare: Autonomy in Lethal UAVs (Mar. 2011) (unpublished MSc in Management Thesis, Naval Postgraduate School).

Phil Lemmons, Autonomous Weapons and Human Responsibility (Editorial), 10 Byte 6 (1985).

Alex Leveringhaus, Autonomous Weaponry: Conceptual Issues, in Alex Leveringhaus, Ethics and Autonomous Weapons 31 (2016).

Alex Leveringhaus, Drones, Automated Targeting, and Moral Responsibility, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons 169 (Ezio Di Nucci & Filippo Santoni de Sio eds., 2016).

Alex Leveringhaus, Ethics and Autonomous Weapons (2016).

Alex Leveringhaus, Ethics and the Autonomous Weapons Debate, in Alex Leveringhaus, Ethics and Autonomous Weapons 1 (2016).

Alex Leveringhaus, From Warfare Without Humans to Warfare Without Responsibility?, in Alex Leveringhaus, Ethics and Autonomous Weapons 59 (2016).

Alex Leveringhaus, Human Agency and Artificial Agency in War, in Alex Leveringhaus, Ethics and Autonomous Weapons 89 (2016).

Alex Leveringhaus & Tjerk de Greef, Keeping the Human “in-the-Loop”: A Qualified Defence of Autonomous Weapons, in Precision Strike Warfare and International Intervention: Strategic, Ethico-Legal and Decisional Implications (Mike Aaronson et al. eds., 2014).

John Lewis, The Case for Regulating Fully Autonomous Weapons, 124 Yale L.J. 1309 (2015).

Eliav Lieblich, Autonomous Weapons Systems and the Obligation to Exercise Discretion, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (Geneva, Apr. 14, 2016).

Eliav Lieblich & Eyal Benvenisti, The Obligation to Exercise Discretion in Warfare: Why Autonomous Weapons Systems Are Unlawful, in Autonomous Weapons Systems: Law, Ethics, Policy (Nehal Bhuta et al. eds., 2016).

כלי נשק אוטונומיים ובעיית כבילת שיקול הדעת [Autonomous Weapons and the Problem of Fettering Discretion], צפוי להתפרסם בעיוני משפט [forthcoming in Iyunei Mishpat (Tel Aviv University Law Review)].

Rain Liivoja et al., Emerging Technologies of Warfare, in Routledge Handbook of the Law of Armed Conflict (Rain Liivoja & Timothy L. H. McCormack eds., 2016).

黎辉辉 [Li Huihui], 自主武器系统是合法的武器吗?——以国际人道法为视角 [Are Autonomous Weapon Systems Lawful Weapons? An International Humanitarian Law Perspective], 中国政法大学研究生法学 [Graduate Law Review, China University of Political Science and Law], Vol. 29, No. 6, Dec. 2014, at 125–32.

Patrick Lin et al., Autonomous Military Robotics: Risk, Ethics, and Design (U.S. Department of Navy, Office of Naval Research Dec. 20, 2008).

Robot Ethics: The Ethical and Social Implications of Robotics (Patrick Lin et al. eds., 2012).

Patrick Lin, Cyber Norms: A Missing Link in the Autonomous Weapons Debate, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference on Interaction and Vulnerabilities (2015).

Patrick Lin, How Does Cyber Fit with Lethal Autonomous Weapons?, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference Considering the Drivers for the Weaponization of Increasingly Autonomous Technologies (2015).

Patrick Lin et al., Robot Ethics: Mapping the Issues for a Mechanized World, 175 Artificial Intelligence 942 (2011).

Patrick Lin, The Right to Life and the Martens Clause, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (Geneva, Apr. 13, 2015).

Patrick Lin, Why Ethics Matters for Autonomous Cars, in Autonomous Driving 69 (Markus Maurer et al. eds., 2016).

Hin-Yan Liu, Categorization and Legality of Autonomous and Remote Weapons Systems, 94 Int’l Rev. Red Cross 627 (2012).

[美] 廖显寅 [Hin-Yan Liu], 自主和遥控武器系统的分类与合法性 [Categorization and Legality of Autonomous and Remote Weapons Systems] (李强 [Li Qiang] trans.), 红十字国际评论 [Int’l Rev. Red Cross (Chinese edition)], Vol. 94, No. 886, 2012, at 145–74.

Hin-Yan Liu, Refining Responsibility: Differentiating Two Types of Responsibility Issues Raised by Autonomous Weapons Systems, in Autonomous Weapons Systems: Law, Ethics, Policy (Nehal Bhuta et al. eds., 2016).

George R. Lucas Jr., Automated Warfare, 25 Stan. L. & Pol’y Rev. 317 (2014).

George R. Lucas Jr., Legal and Ethical Precepts Governing Emerging Military Technologies: Research and Use, 6 Amsterdam L.F. 23 (2014).

George R. Lucas Jr., Legal and Ethical Precepts Governing Emerging Military Technologies: Research and Use, 5 Utah L. Rev. 1271 (2013).

Bertram F. Malle, Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots, Ethics & Info. Tech. 1 (2015).

Gary E. Marchant et al., International Governance of Autonomous Military Robots, 12 Colum. Sci. & Tech. L. Rev. 272 (2011).

Peter Margulies, Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts, in Research Handbook on Remote Warfare (Jens David Ohlin ed., forthcoming 2016).

Christopher J. Markham & Michael N. Schmitt, Precision Air Warfare and the Law of Armed Conflict, 89 Int’l L. Stud. 669 (2013).

Markus Wagner, Beyond the Drone Debate: Autonomy in Tomorrow’s Battlespace, 106 Am. Soc’y Int’l L. Proc. 80 (2012).

William C. Marra & Sonia K. McNeil, Understanding “the Loop”: Regulating the Next Generation of War Machines, 36 Harv. J.L. & Pub. Pol’y 1139 (2013).

Nicholas Marsh, Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots (Peace Research Institute Oslo 2014).

Eve Massingham, Conflict without Casualties . . . a Note of Caution: Non-Lethal Weapons and International Humanitarian Law, 94 Int’l Rev. Red Cross 673 (2012).

Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6 Ethics & Info. Tech. 175 (2004).

Julie A. McCann et al., Can Self-Managed Systems Be Trusted? Some Views and Trends, 21 The Knowledge Engineering Rev. 239 (2006).

Tim McFarland, Factors Shaping the Legal Implications of Increasingly Autonomous Military Systems, Int’l Rev. Red Cross (FirstView) 1 (2016).

Tim McFarland, How Should Lawyers Think About Weapon Autonomy? (Social Science Research Network Working Paper Series, 2015).

Tim McFarland & Tim McCormack, Mind the Gap: Can Developers of Autonomous Weapons Systems Be Liable for War Crimes?, 90 Int’l L. Stud. 361 (2014).

C.D. Meyers, GI, Robot: The Ethics of Using Robots in Combat, 25 Pub. Affairs Q. 21 (2011).

Miller et al., Supervisory Decision-Making in Semi/Autonomous Systems, Proceedings Hum. Factors & Ergonomics Soc’y Annual Meeting 318 (2002).

Valerie Morkevicius, Tin Men: Ethics, Cybernetics and the Importance of Soul, 13 J. Mil. Ethics 3 (2014).

Nikil Mukerji, Autonomous Killer Drones, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons 197 (Ezio Di Nucci & Filippo Santoni de Sio eds., 2016).

Vincent C. Müller & Thomas W. Simpson, Autonomous Killer Robots Are Probably Good News, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons 67 (Ezio Di Nucci & Filippo Santoni de Sio eds., 2016).

Nyagudi Musandu, Humanitarian Algorithms: A Codified Key Safety Switch Protocol for Lethal Autonomy (2014).

Terry Nardin, From Right to Intervene to Duty to Protect: Michael Walzer on Humanitarian Intervention, 24 Eur. J. Int’l L. 67 (2013).

Hitoshi Nasu, Nanotechnology and Challenges to International Humanitarian Law: A Preliminary Legal Assessment, 94 Int’l Rev. Red Cross 653 (2012).

Kevin Neslage, Does “Meaningful Human Control” Have Potential for the Regulation of Autonomous Weapon Systems?, 6 U. Miami Nat’l Sec. & Armed Conflict L. Rev. 151 (2015).

Michael A. Newton, Back to the Future: Reflections on the Advent of Autonomous Weapons Systems, International Regulation of Emerging Military Technologies, 47 Case W. Res. J. Int’l L. 5 (2015).

Gregory P. Noone & Diana C. Noone, The Debate over Autonomous Weapons Systems, 47 Case W. Res. J. Int’l L. 25 (2015).

Merel E. Noorman, Responsibility Practices and Unmanned Military Technologies, 20 Sci. & Engineering Ethics 809 (2014).

Merel Noorman & Deborah G. Johnson, Negotiating Autonomy and Responsibility in Military Robots, 16 Ethics & Info. Tech. 51 (2014).

Karsten Nowrot, Animals at War: The Status of “Animal Soldiers” under International Humanitarian Law, 40 Historical Soc. Res./Historische Sozialforschung 128 (2015).

Mary Ellen O’Connell, 21st Century Arms Control Challenges: Drones, Cyber Weapons, Killer Robots, and WMDs, 13 Wash. U. Global Stud. L. Rev. 515 (2014).

Mary Ellen O’Connell, Banning Autonomous Killing, in The American Way of Bombing: Changing Ethical and Legal Norms, from Flying Fortresses to Drones (Matthew Evangelista & Henry Shue eds., 1st ed. 2014).

Jens David Ohlin, The Combatant’s Stance: Autonomous Weapons on the Battlefield, 92 Int’l L. Stud. 1 (2016).

Peter Olsthoorn & Lambèr Royakkers, Risks and Robots – Some Ethical Issues (2011).

Richard M. O’Meara, The Rules of War and the Use of Unarmed, Remotely Operated Autonomous Robotics Systems, Platforms and Weapons . . . Some Cautions, Address at the We Robot 2012 Conference: Military Robotics Panel Presentation (Apr. 22, 2012).

Ugo Pagallo, Killers, Fridges, and Slaves: A Legal Journey in Robotics, 26 AI & Society 347 (2011).

Ugo Pagallo, Robots of Just War: A Legal Perspective, 24 Phil. & Tech. 307 (2011).

Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (2013).

Ugo Pagallo, What Robots Want: Autonomous Machines, Codes and New Frontiers of Legal Responsibility, in Human Law and Computer Law: Comparative Perspectives 47 (Mireille Hildebrandt & Jeanne Gaakeer eds., 2013).

Adam Page, The Fallacy of Humane Killing: Interwar Debates about Air Power and Twenty-First Century “Killer Robots,” in The Means to Kill: Essays on the Interdependence of War and Technology from Ancient Rome to the Age of Drones 260 (G. Dworok & F. Jacob eds., 2016).

Frank A. Pasquale, Bittersweet Mysteries of Machine Learning (A Provocation), LSE Media Pol’y Project (Feb. 5, 2016), https://declara.com/collection/null/post/7a1b0815-494a-4987-8d80-677793f4f299.

Frank A. Pasquale, The Emerging Law of Algorithms, Robots, and Predictive Analytics, Concurring Opinions (Feb. 9, 2016), http://concurringopinions.com/archives/2016/02/the-emerging-law-of-algorithms-robots-and-predictive-analytics.html.

נתי פרל, עלותן של החלטות דטרמיניסטיות המתקבלות על ידי רובוטים אוטונומיים (מאמר הוגש במסגרת סמינר מחקר) מרכז מינרבה לשלטון החוק במצבי קיצון, אוניברסיטת חיפה (2014) [Nati Perl, The Cost of Deterministic Decisions Made by Autonomous Robots (research seminar paper), Minerva Center for the Rule of Law under Extreme Conditions, University of Haifa (2014)].

Rodger A. Pettitt et al., Scalability of Robotic Controllers: Effects of Progressive Levels of Autonomy on Robotic Reconnaissance Tasks (2010).

Eric Pomes, Preventive Ban on Lethal Autonomous Weapons Systems: A True-False Good Idea (Social Science Research Network Working Paper Series, 2014).

Peter B. Postma, Regulating Lethal Autonomous Robots in Unconventional Warfare, 11 U. St. Thomas L.J. 300 (2013).

Jody M. Prescott, Autonomous Decision Making Processes and the Cyber Commander, in 5th International Conference on Cyber Conflict (CYCON 2013): Tallinn, Estonia, 4-7 June 2013 391 (Karlis Podins ed., IEEE 2013).

Jody M. Prescott, Building the Ethical Cyber Commander and the Law of Armed Conflict, 40 Rutgers Computer & Tech. L.J. 42 (2014).

Jody M. Prescott, The Law of Armed Conflict and the Responsible Cyber Commander, 38 Vt. L. Rev. 103 (2013).

Duncan Purves et al., Autonomous Machines, Moral Judgment, and Acting for the Right Reasons, 18 Ethical Theory & Moral Prac. 851 (2015).

Werner Rammert, Where the Action is: Distributed Agency between Humans, Machines, and Programs (The Technical University Technology Studies Working Papers, 2008).

Brian Rappert et al., The Roles of Civil Society in the Development of Standards around New Weapons and Other Technologies of Warfare, 94 Int’l Rev. Red Cross 765 (2012).

Shane R. Reeves & Dave Wallace, Modern Weapons and the Law of Armed Conflict, in U.S. Military Operations: Law, Policy, and Practice 41 (Geoffrey S. Corn et al. eds., 2016).

Shane R. Reeves & William J. Johnson, Autonomous Weapons: Are You Sure Those Are Killer Robots? Can We Talk about It?, 2014 Army Law. 25 (2014).

Nathan Reitinger, Algorithmic Choice and Superior Responsibility: Closing the Gap between Liability and Lethal Autonomy by Defining the Line between Actors and Tools, 51 Gonz. L. Rev. 79 (2015).

Daphné Richemond-Barak & Ayal Feinberg, The Irony of the Iron Dome: Intelligent Defense Systems, Law, and Security, 7 Harv. Nat’l Sec. J. 469 (2016).

Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871 (2015).

Michael Kurt Riepl, War crimes without criminal accountability? The case of Active Protection Systems, ICRC Humanitarian Law & Policy (Jun. 1, 2016), http://blogs.icrc.org/law-and-policy/2016/06/01/war-crimes-without-criminal-accountability-case-active-protection-systems/.

Shane Riza, Killing without Heart: Limits on Robotic Warfare in an Age of Persistent Conflict (2013).

Heather Roff, To Ban or Regulate Autonomous Weapons: A US Response, 72 Bull. Atomic Sci. 122 (2016).

Heather M. Roff, Autonomous Weapons: Risk, Foreseeability and Choice, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference on Understanding Different Types of Risk (2016).

Heather M. Roff, Cybersecurity, Artificial Intelligence, and Autonomous Weapons: Critical Intersections (2015).

Heather M. Roff, Gendering a Warbot: Gender, Sex and the Implications for the Future of War, 18 Int’l Feminist J. Pol. 1 (2016).

Heather M. Roff, Killing in War: Responsibility, Liability, and Lethal Autonomous Robots, in Routledge Handbook of the Law of Armed Conflict (Rain Liivoja & Timothy L. H. McCormack eds., 2016).

Heather M. Roff, The Strategic Robot Problem: Lethal Autonomous Weapons in War, 13 J. Mil. Ethics 211 (2014).

Ann Rogers & John Hill, Unmanned: Drone Warfare and Global Security (2014).

Jay Logan Rogers, Legal Judgment Day for the Rise of the Machines: A National Approach to Regulating Fully Autonomous Weapons, 56 Ariz. L. Rev. 1257 (2014).

Mark Roorda, NATO’s Targeting Process: Ensuring Human Control Over and Lawful Use of Autonomous Weapons, in Autonomous Systems: Issues for Defence Policymakers (Andrew P. Williams & Paul D. Scharre eds., 2015).

Frederik Rosén, Extremely Stealthy and Incredibly Close: Drones, Control and Legal Responsibility, 19 J. Conflict & Sec. L. 113 (2014).

Lambèr Royakkers & Rinie van Est, A Literature Review on New Robotics: Automation from Love to War, 7 Int’l J. Soc. Robotics 549 (2015).

Lambèr Royakkers & Peter Olsthoorn, Military Robots and the Question of Responsibility, 5 Int’l J. Technoethics 1 (2014).

Charles T. Rubin, Artificial Intelligence and Human Nature, 1 The New Atlantis 88 (2003).

Simon Rushton & Maria Kett, Killing Machines, 29 Med. Conflict & Survival 165 (2013).

Stuart Russell et al., Research Priorities for Robust and Beneficial Artificial Intelligence, 36 AI Mag. 105 (2015).

Rutgers CTLJ, Forty-Seventh Selected Bibliography on Computers, Technology, and the Law, 42 Rutgers Computer & Tech. L.J. 331 (2016).

Kaitlin J. Sahni, The Legality of Invisibility Technology in Modern Warfare, 103 Geo. L.J. 1661 (2014).

Paulo E. Santos, To Ban or Regulate Autonomous Weapons: A Brazilian Response, 72 Bull. Atomic Sci. 117 (2016).

Marco Sassòli, Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to Be Clarified, 90 Int’l L. Stud. 308 (2014).

Frank Sauer & Niklas Schörnig, Killer Drones: The “Silver Bullet” of Democratic Warfare?, 43 Sec. Dialogue 363 (2012).

Dan Saxon, A Human Touch: Autonomous Weapons, Directive 3000.09, and the Appropriate Levels of Human Judgment over the Use of Force, 15 Geo. J. Int’l Aff. 100 (2014).

Dan Saxon, Autonomous Drones and Individual Criminal Responsibility, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons 17 (Ezio Di Nucci & Filippo Santoni de Sio eds., 2016).

Enrique Schaerer et al., Robots as Animals: A Framework for Liability and Responsibility in Human-Robot Interactions, in The 18th IEEE International Symposium on Robot and Human Interactive Communication 72 (2009).

Paul D. Scharre, Flash War: Autonomous Weapons and Strategic Stability, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference on Understanding Different Types of Risk (2016).

Paul Scharre & Michael Horowitz, An Introduction to Autonomy in Weapon Systems, Project on Ethical Autonomy Working Paper (Center for a New American Security (CNAS)), Feb. 2015.

Paul Scharre & Michael Horowitz, Meaningful Human Control in Weapon Systems: A Primer (Center for a New American Security Working Paper, Mar. 2015).

Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 1 (2016).

Michael N. Schmitt, Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics, Harv. Nat’l Sec. J. 1 (Online Features, 2013).

Michael N. Schmitt, Rewired Warfare: Rethinking the Law of Cyber Attack, 96 Int’l Rev. Red Cross 189 (2014).

Michael N. Schmitt, The Notion of “Objects” during Cyber Operations: A Riposte in Defence of Interpretive and Applicative Precision, 48 Isr. L. Rev. 81 (2015).

Michael N. Schmitt, War, Technology and the Law of Armed Conflict, 82 Int’l L. Stud. 137 (2006).

Michael N. Schmitt & Jeffrey S. Thurnher, Out of the Loop: Autonomous Weapon Systems and the Law of Armed Conflict, 4 Harv. Nat’l Sec. J. 231 (2013).

Michael N. Schmitt & Sean Watts, State Opinio Juris and International Humanitarian Law Pluralism, 91 Int’l L. Stud. 171 (2015).

Michael N. Schmitt & Sean Watts, The Decline of International Humanitarian Law Opinio Juris and the Law of Cyber Warfare, 50 Tex. Int’l L.J. 189 (2015).

Michael N. Schmitt & Eric W. Widmar, On Target: Precision and Balance in the Contemporary Law of Targeting, 7 J. Nat’l Sec. L. & Pol’y 379 (2014).

Niklas Schörnig, Robot Warriors: Why the Western Investment Into Military Robots Might Backfire (Peace Research Institute Frankfurt 2010).

Marcus Schulzke, Autonomous Weapons and Distributed Responsibility, 26 Phil. & Tech. 203 (2013).

Marcus Schulzke, Robots as Weapons in Just Wars, 24 Phil. & Tech. 293 (2011).

Susan Schuppli, Deadly Algorithms, 187 Radical Phil. 2 (2014).

David Schuster et al., The Impact of Type and Level of Automation on Situation Awareness and Performance in Human-Robot Interaction, 8019 Engineering Psychol. & Cognitive Ergonomics 252 (Springer 2013).

Brittany C. Sellers et al., The Effects of Autonomy and Cognitive Abilities on Workload and Supervisory Control of Unmanned Systems, 56 Proceedings of the Human Factors and Ergonomics Society Annual Meeting 1039 (2012).

Noel Sharkey, Cassandra or False Prophet of Doom: AI Robots and War, 23 IEEE Intelligent Systems 14 (2008).

Noel Sharkey, Automated Killers and the Computing Profession, 40 Computer 124 (2007).

Noel Sharkey, Death Strikes from the Sky: The Calculus of Proportionality, 28 IEEE Tech. & Soc’y Mag. 16 (2009).

Noel Sharkey, Robotics Today: The Good, Bad, and Ugly, 15th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems 3 (Mar. 2008).

Noel Sharkey, Saying “No!” to Lethal Autonomous Targeting, 9 J. Mil. Ethics 369 (2010).

Noel Sharkey, The Robot Arm of the Law Grows Longer, 42 Computer 113 (2009).

Noel E. Sharkey, The Evitability of Autonomous Robot Warfare, 94 Int’l Rev. Red Cross 787 (2012).

Noel Sharkey & Lucy Suchman, Wishful Mnemonics and Autonomous Killing Machines, 136 Proceedings of the AISB 14 (May 2013).

Ian Shaw, The Future of Killer Robots: Are We Really Losing Humanity?, E-International Relations (Dec. 11, 2012) http://www.e-ir.info/2012/12/11/the-future-of-killer-robots-are-we-really-losing-humanity/.

Thomas W. Simpson, Robots, Trust and War, 24 Phil. & Tech. 325 (2011).

גבי סיבוני & יוני אשפר, דילמות בהפעלת אמצעי לחימה אוטונומיים, עדכן אסטרטגי 16(4) (המכון למחקרי ביטחון לאומי, 2014) [Gabi Siboni & Yoni Eshpar, Dilemmas in the Employment of Autonomous Weapons, 16(4) Strategic Assessment (Institute for National Security Studies, 2014)].

Interview with Peter W. Singer, 94 Int’l Rev. Red Cross 467 (2012).

Peter W. Singer, Military Robots and the Laws of War, 23 The New Atlantis 25 (2009).

Peter W. Singer & August Cole, Humans Can’t Escape Killer Robots, but Humans Can Be Held Accountable for Them, VICE News (Apr. 15, 2016), https://news.vice.com/article/killer-robots-autonomous-weapons-systems-and-accountability.

P.W. Singer, Robots at War: The New Battlefield, 33 The Wilson Q. 30 (2009).

P.W. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century (2009).

Bryant W. Smith, Lawyers and Engineers Should Speak the Same Robot Language, in Robot Law (Ryan Calo et al. eds., 2016).

Robert Sparrow, Building a Better WarBot: Ethical Issues in the Design of Unmanned Systems for Military Applications, 15 Sci. & Engineering Ethics 169 (2009).

Robert Sparrow, Killer Robots, 24 J. Applied Phil. 62 (2007).

Robert Sparrow, Predators or Plowshares? Arms Control of Robotic Weapons, 28 IEEE Tech. & Soc’y Mag. 25 (2009).

Robert Sparrow, Robots and Respect: Assessing the Case Against Autonomous Weapon Systems, 30 Ethics & Int’l Aff. 93 (2016).

Robert Sparrow, Twenty Seconds to Comply: Autonomous Weapons Systems and the Recognition of Surrender, 91 Int’l L. Stud. 699 (2015).

Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Interim Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, U.N. Doc. A/65/321 (Aug. 23, 2010).

Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, U.N. Doc. A/HRC/23/47 (Apr. 9, 2013).

Christopher J. Spinelli, The Rise of Robots: The Military’s Use of Autonomous Lethal Force (Feb. 17, 2015) (unpublished thesis, Air War College, Air University).

Philip Spoerri, International Humanitarian Law and New Weapon Technologies, 34th Round Table on Current Issues of International Humanitarian Law, San Remo, 8–10 September 2011, 94 Int’l Rev. Red Cross 814 (2012).

Bernd Carsten Stahl, Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency, 8 Ethics & Info. Tech. 205 (2006).

Darren M. Stewart, New Technology and the Law of Armed Conflict: Technological Meteorites and Legal Dinosaurs?, 87 Int’l L. Stud. 271 (2011).

Jeremy Straub, Consideration of the Use of Autonomous, Non-Recallable Unmanned Vehicles and Programs as a Deterrent or Threat by State Actors and Others, 44 Tech. in Soc’y 39 (2016).

Killing by Remote Control: The Ethics of an Unmanned Military (Bradley Jay Strawser ed., 2013).

Bradley Jay Strawser, Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles, 9 J. Mil. Ethics 342 (2010).

John P. Sullins, RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield?, 12 Ethics & Info. Tech. 263 (2010).

Dan Terzian, The Right to Bear (Robotic) Arms, 117 Penn St. L. Rev. 755 (2012).

Eno Thereska et al., Towards Self-Predicting Systems: What If You Could Ask “what-If”?, 21 The Knowledge Engineering Rev. 261 (2006).

Bradan T. Thomas, Autonomous Weapon Systems: The Anatomy of Autonomy and the Legality of Lethality, 37 Hous. J. Int’l L. 235 (2015).

Jeffrey S. Thurnher, Examining Autonomous Weapon Systems from a Law of Armed Conflict Perspective, in New Technologies and the Law of Armed Conflict 213 (Hitoshi Nasu ed., 2014).

Jeffrey S. Thurnher, Means and Methods of the Future: Autonomous Systems, in Targeting: The Challenges of Modern Warfare 177 (Paul A.L. Ducheine et al. eds., 2016).

Jeffrey S. Thurnher, No One at the Controls: Legal Implications of Fully Autonomous Targeting, 67 Joint Force Quarterly 77 (2012).

Jeffrey S. Thurnher, The Law That Applies to Autonomous Weapon Systems, 17 ASIL Insights 4 (2013).

Remus Titiriga, Autonomy of Military Robots: Assessing the Technical and Legal (Jus In Bello) Thresholds, 32 J. Marshall J. Info. Tech. & Privacy L. 57 (2015).

Ryan Tonkens, A Challenge for Machine Ethics, 19 Minds & Machines 421 (2009).

Ryan Tonkens, Out of Character: On the Creation of Virtuous Machines, 14 Ethics & Info. Tech. 137 (2012).

Ryan Tonkens, Should Autonomous Robots Be Pacifists?, 15 Ethics & Info. Tech. 109 (2013).

Ryan Tonkens, The Case Against Robotic Warfare: A Response to Arkin, 11 J. Mil. Ethics 149 (2012).

Christopher P. Toscano, Friend of Humans: An Argument for Developing Autonomous Weapons Systems, 8 J. Nat’l Sec. L. & Pol’y 189 (2015).

Armed Forces: Autonomous Weapon Systems – Question in the House of Lords, They Work For You (Mar. 26, 2013), http://www.theyworkforyou.com/lords/?id=2013-03-26a.958.0 (concerning 744 Parl. Deb., H.L. (5th ser.) (2013) 958 (U.K.)).

U.K. Ministry of Defence, The UK Approach to Unmanned Aircraft Systems (UAS), Joint Doctrine Note 2/11, Mar. 30, 2011.

The United Kingdom and Lethal Autonomous Weapons Systems (Article 36, Background Paper Apr. 8, 2016).

Appendix I: Status of Multilateral Arms Regulation and Disarmament Agreements, in The United Nations Disarmament Yearbook: Volume 39: 2014: Disarmament Resolutions and Decisions of the Sixty-ninth Session of the United Nations General Assembly, Part II, U.N. Sales No. E.15.IX.4.

Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, United Nations Institute for Disarmament Research (Observation Paper 1, 2014).

The Weaponization of Increasingly Autonomous Technologies: Considering Ethics and Social Values, United Nations Institute for Disarmament Research (Observation Paper 3, 2015).

The Weaponization of Increasingly Autonomous Technologies: Considering How Meaningful Human Control Might Move the Discussion Forward, United Nations Institute for Disarmament Research (Observation Paper 2, 2014).

The Weaponization of Increasingly Autonomous Technologies in the Maritime Environment: Testing the Waters, United Nations Institute for Disarmament Research (Observation Paper 4, 2015).

United States Air Force Unmanned Aircraft Systems Flight Plan 2009–2047 (May 18, 2009).

Army Capabilities Integration Center, Robotics Strategy White Paper (Mar. 19, 2009).

U.S. Department of Defense, Department of Defense Directive 3000.09, Autonomy in Weapons Systems (Nov. 21, 2012).

U.S. Department of Defense, Unmanned Systems Integrated Roadmap, FY2013–38 (2013).

U.S. Department of Defense, Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems (July 2012).

Shannon Vallor, Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character, 28 Phil. & Tech. 107 (2015).

Shannon Vallor, The Future of Military Virtue: Autonomous Systems and the Moral Deskilling of the Military, 5th International Conference on Cyber Conflict (CYCON 2013) (Jun. 4, 2013).

Kerstin Vignard, Statement of the UN Institute for Disarmament Research, Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference on Understanding Different Types of Risk (Apr. 12, 2016).

Vik Kanwar, Post-Human Humanitarian Law: The Law of War in the Age of Robotic Weapons, 2 Harv. Nat’l Sec. J. 616 (2011).

Ryan J. Vogel, Drone Warfare and the Law of Armed Conflict, 39 Denv. J. Int’l L. & Pol’y 101 (2010).

Markus Wagner, Autonomous Weapon Systems, in Max Planck Encyclopedia of Public International Law (2016).

Markus Wagner, Autonomy in the Battlespace: Independently Operating Weapon Systems and the Law of Armed Conflict, in International Humanitarian Law and the Changing Technology of War 99 (Dan Saxon ed., 2013).

Markus Wagner, Taking Humans Out of the Loop: Implications for International Humanitarian Law, 21 J. L. Info. & Sci. (2011).

Markus Wagner, The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems, 47 Vand. J. Transnat’l L. 1371 (2014).

Wendell Wallach, Mapping a Way Forward, Address at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Conference on Understanding Different Types of Risk (2016).

Wendell Wallach & Colin Allen, Framing Robot Arms Control, 15 Ethics & Info. Tech. 125 (2013).

James Igoe Walsh, Political Accountability and Autonomous Weapons, 2 Res. & Pol. (2015).

Sean Watts, Regulation-Tolerant Weapons, Regulation-Resistant Weapons and the Law of War, 91 Int’l L. Stud. 541 (2015).

John Frank Weaver, Abhor a Vacuum: The Status of Artificial Intelligence and AI Drones Under International Law, 54 N.H. Bar J. 14 (2013).

John Frank Weaver, Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws (2014).

Michael Webb, The Robots Are Here! The Robots Are Here!, 121 Design Q. 4 (1983).

Nathalie Weizmann, Autonomous Weapon Systems under International Law (Geneva Academy Briefing No. 8, Nov. 2014).

Stephen E. White, Brave New World: Neurowarfare and the Limits of International Humanitarian Law, 41 Cornell Int’l L.J. 177 (2008).

Autonomous Systems: Issues for Defence Policymakers (Andrew P. Williams & Paul D. Scharre eds., 2015).

Benjamin Wittes, Does Human Rights Watch Prefer Disproportionate and Indiscriminate Humans to Discriminating and Proportionate Robots?, Lawfare (Dec. 1, 2012), https://www.lawfareblog.com/does-human-rights-watch-prefer-disproportionate-and-indiscriminate-humans-discriminating-and.

Benjamin Wittes & Gabriella Blum, The Future of Violence: Robots and Germs, Hackers and Drones—Confronting A New Age of Threat (2015).

Robert O. Work & Shawn Brimley, 20YY: Preparing for War in the Robotic Age (Center for a New American Security 2014).

杨丹丹、李伯军 [Yang Dandan & Li Bojun], 论自主机器人上战场的法律问题 [On Legal Issues of Autonomous Robots on the Battlefield], 西安政治学院学报 [Journal of Xi’an Politics Institute], Vol. 25, No. 4, Aug. 2012, at 118–21.

Tung Yin, Game of Drones: Defending against Drone Terrorism, 2 Tex. A&M L. Rev. 635 (2014).

Jakub Zlotowski et al., Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction, 7 Int’l J. Soc. Robotics 347 (2015).