Note: More information about this PILAC Project, as well as the full version of the Briefing Report, is available here [link].


Section 5: Conclusion

Two contradictory trends may be combining into a new global climate that is at once enterprising and anxious. Militaries see myriad technological triumphs that will transform warfighting. Yet the possibility of “replacing” human judgment with algorithmically-derived “decisions”—especially in war—threatens what many consider to define us as humans.  

To date, the lack of demonstrated technical knowledge among many states and commentators, the unwillingness of states to share closely held national-security technologies, and the absence of a definitional consensus on what is meant by autonomous weapon systems have impeded regulatory efforts on AWS. Moreover, uncertainty about which actors would benefit most from advances in AWS, and about how long such benefits would yield a meaningful qualitative edge over others, seems likely to continue to inhibit efforts at negotiating binding international rules on the development and deployment of AWS. In this sense, efforts at reaching a dedicated international regime to address AWS may encounter the same frustrations as analogous efforts to address cyber warfare. True, unlike in the early days of cyber warfare, there has been greater state engagement on the regulation of AWS. In particular, the concept of “meaningful human control” over AWS has already been endorsed by over two dozen states. But much remains up in the air as states decide whether to establish a Group of Governmental Experts on AWS at the upcoming Fifth Review Conference of the CCW.

We have shown that, with respect to armed conflict, the primary formal regulatory avenues under international law are state responsibility for internationally wrongful acts and individual criminal responsibility for international crimes. These fields are well established and offer many more avenues than are often considered in the relatively narrow AWS discourse to date. In sum, ICL and, especially, IHL already address many of the concerns raised in relation to AWS—but ICL and IHL may not be sufficient to address all of those concerns.

The current crux, as we see it, is whether advances in technology—especially those capable of “self-learning” and of operating in relation to war, whose “choices” may be difficult for humans to anticipate or unpack, or whose “decisions” are seen as “replacing” human judgment—are susceptible to regulation and, if so, whether and how they should be regulated. One way to frame the core concern, one that vaults over at least some of the impediments to the discussion on AWS, is through the new concept we raise: war algorithms. War algorithms include not only algorithms capable of being used in weapons but also those capable of being used in any other function related to war.

More war algorithms are on the horizon. Two months ago, the Defense Science Board, an advisory body to the U.S. Department of Defense, identified five “stretch problems”—that is, goals that are “hard-but-not-too-hard” and that are intended to accelerate the process of bringing a new algorithmically-derived capability into widespread application:

  • Generating “future loop options” (that is, “using interpretation of massive data including social media and rapidly generated strategic options”);
  • Enabling autonomous swarms (that is, “deny[ing] the enemy’s ability to disrupt through quantity by launching overwhelming numbers of low‐cost assets that cooperate to defeat the threat”);
  • Intrusion detection on the Internet of Things (that is, “defeat[ing] adversary intrusions in the vast network of commercial sensors and devices by autonomously discovering subtle indicators of compromise hidden within a flood of ordinary traffic”);
  • Building autonomous cyber-resilient military vehicle systems (that is, “trust[ing] that … platforms are resilient to cyber‐attack through autonomous system integrity validation and recovery”); and
  • Planning autonomous air operations (that is, “operat[ing] inside adversary timelines by continuously planning and replanning tactical operations using autonomous ISR analysis, interpretation, option generation, and resource allocation”).[525]

What this trajectory toward greater algorithmic autonomy in war—at least among more technologically-sophisticated armed forces and even some non-state armed groups—means for accountability purposes seems likely to remain a contested issue for the foreseeable future.

In the meantime, it remains to be authoritatively determined whether war algorithms will be capable of making the evaluative decisions and value judgments that are incorporated into IHL. It is currently not clear, for instance, whether war algorithms will be capable of formulating and implementing the following:[526]

  • The presumption of civilian status in case of “doubt”;[527]
  • The assessment of “excessiveness” of expected incidental harm in relation to anticipated military advantage;
  • The betrayal of “confidence” in IHL in relation to the prohibition of perfidy; and
  • The prohibition of destruction of civilian property except where “imperatively” demanded by the necessities of war.[528]

*     *     *

Two factors may suggest that, at least for now, the most immediate ways to regulate war algorithms more broadly and to pursue accountability over them might be to follow not only traditional paths but also less conventional ones. As illustrated above, the latter might include relatively formal avenues—such as states making, applying, and enforcing war-algorithm rules of conduct within and beyond their territories—or less formal avenues—such as coding law into technical architectures and community self-regulation.

First, even where the formal law may seem sufficient, concerns about practical enforcement abound. Recently, for instance, states parties to the Geneva Conventions failed to muster the political support to establish a new IHL compliance forum.[529] There are a number of ways to interpret this refusal. But, at a minimum, it seems to point to a lack of political will among states to cast more light on IHL compliance. This suggests that even where existing IHL seems adequate as a regulatory regime for some aspects of the design, development, and use of AWS or war algorithms, it still lacks dependable enforcement as far as state conduct is concerned.

Second, the proliferation of increasingly advanced technical systems based on self-learning and distributed control raises the question of whether the model of individual responsibility found in ICL is conceptually adequate to the task of regulating AWS and war algorithms. At a general level, this is not a wholly new concern, as distributed systems have been used in relation to war for a long time. But the design, development, and operation of those systems might be increasingly difficult to square with the foundational tenet of ICL—that “[c]rimes against international law are committed by men, not by abstract entities”[530]—as learning algorithms and architectures advance.[531]

In short, individual responsibility for international crimes under international law remains one of the vital accountability avenues in existence today, as do measures of remedy for state responsibility. Yet in practice, responsibility along either avenue is, unfortunately, relatively rare. Thus neither path, on its own or in combination with the other, seems sufficient to effectively address the myriad regulatory concerns pertaining to war algorithms—at least not until we better understand what is at issue. These concerns might lead those seeking to strengthen accountability over war algorithms to pursue not only traditional, formal avenues but also less formal, softer mechanisms.

In that connection, it seems likely that attempts to change governments’ approaches to technical autonomy in war through social pressure (at least for those governments that might be responsive to that pressure) will continue to be a vital avenue along which to pursue accountability. But here, too, there are concerns. Numerous initiatives already exist. Some of them are very well informed; others less so. Many of them are motivated by ideological, commercial, or other interests that—depending on one’s viewpoint—might strengthen or thwart accountability efforts. And given the paucity of formal regulatory regimes, some of these initiatives may end up having considerable impact, despite their shortcomings.

Stepping back, we see that technologies of war, like technologies in so many other areas, produce an uneasy blend of promise and threat.[532] With respect to war algorithms, understanding these conflicting pulls requires attention to a century-and-a-half-long history during which war came to be one of the most highly regulated areas of international law. But it also requires technical know-how. Thus those seeking accountability for war algorithms would do well not to forget the essentially political work of IHL’s designers—nor to obscure the fact that today’s technology is, at its core, designed, developed, and deployed by humans. Ultimately, war-algorithm accountability seems unrealizable without competence in technical architectures and in legal frameworks, coupled with ethical, political, and economic awareness.

[525].  Defense Science Board, supra note 7, at 76–97.

[526].  These concerns were raised in relation to autonomous weapon systems, but they are also implicated by war algorithms.

[527].  Swiss, “Compliance-Based” Approach, supra note 74, citing art. 50(1) and art. 52(3) of Additional Protocol I to the Geneva Conventions. See AP I, supra note 12, at art. 50(1), 52(3).

[528].  Id., citing art. 23(g) of Hague Regulation IV, see Hague Convention (IV) Respecting the Laws and Customs of War on Land art. 23(g), Oct. 18, 1907, T.S. 539, and art. 53 of the Fourth Geneva Convention, see GC IV, supra note 349, at art. 53.

[529].  Compare 32nd International Conference of the Red Cross and Red Crescent, Draft “0” Resolution on “Strengthening compliance with international humanitarian law” (undated), https://www.icrc.org/en/download/file/13244/32ic-draft-0-resolution-on-ihl-compliance-20150915-en.pdf with 32nd International Conference of the Red Cross and Red Crescent, Resolution 2 (Dec. 10, 2015), http://rcrcconference.org/wp-content/uploads/sites/3/2015/04/32IC-AR-Compliance_EN.pdf.

[530].  1 Trial of the Major War Criminals Before the International Military Tribunal 223 (1947).

[531].  In a related context, M.C. Elish has noted a dilemma in which “control has become distributed across multiple actors (human and nonhuman),” and yet “our social and legal conceptions of responsibility have remained generally about an individual.” She thus “developed the term moral crumple zone to describe the result of this ambiguity within systems of distributed control, particularly automated and autonomous systems.” The basic idea is that “[j]ust as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.” M.C. Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction 3–4 (We Robot 2016 Working Paper) (March 20, 2016), http://dx.doi.org/10.2139/ssrn.2757236 (using “the terms autonomous, automation, machine and robot as related technologies on a spectrum of computational technologies that perform tasks previously done by humans” and discussing a framework for categorizing types of automation proposed by Parasuraman, Sheridan and Wickens, who “define automation specifically in the context of human-machine comparison and as ‘a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator.’”). Id. at n.5 (citing Parasuraman et al., “A Model for Types and Levels of Human Interaction with Automation,” 30 IEEE Transactions on Systems, Man and Cybernetics 3 (2000)). Elish notes that the term arose in her work with Tim Hwang. Id. at 3.

[532].  On broader historical, social, and political forces that shape notions and experiences of technology, at least in the American context, see, e.g., John M. Staudenmaier, Technology, in A Companion to American Thought 667–669 (Richard Wrightman Fox & James T. Kloppenberg eds., 1995).