Note: A PDF of this Executive Summary is available here [link], and more information about this PILAC Project and the full version of the Briefing Report are available here [link].


Executive Summary

Across many areas of modern life, “authority is increasingly expressed algorithmically.”[1] War is no exception.

In this briefing report, we introduce a new concept—war algorithms—that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.”

In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation—and how those algorithms might already fit within the existing regulatory system established by international law.

*     *     *

Warring parties have long expressed authority and power through algorithms. For decades, algorithms have helped weapon systems—first at sea and later on land—identify and intercept inbound missiles. Today, military systems are increasingly capable of navigating novel environments and surveilling faraway populations, as well as identifying targets, estimating harm, and launching direct attacks—all with fewer humans at the switch. Indeed, in recent years, commercial and military developments in algorithmically-derived autonomy have created diverse benefits for the armed forces in terms of “battlespace awareness,” protection, “force application,” and logistics. And that is by no means an exhaustive list of applications.

Much of the underlying technology—often developed initially in commercial or academic contexts—is susceptible to both military and non-military use. Most of it is thus characterized as “dual-use,” a shorthand for being capable of serving a wide array of functions. Costs of the technology are dropping, often precipitously. And, once the technology exists, the assumption is usually that it can be utilized by a broad range of actors.

Driven in no small part by commercial interests, developers are advancing relevant technologies and technical architectures at a rapid pace. The potential for those advancements to cross a moral Rubicon is being raised more frequently in international forums and among technical communities, as well as in the popular press.

Some of the most relevant advancements involve constructed systems through which huge amounts of data are quickly gathered and ensuing algorithmically-derived “choices” are effectuated. “Self-driving” or “autonomous” cars are one example. Ford, for instance, mounts four laser-based sensors on the roof of its self-driving research car, and collectively those sensors “can capture 2.5 million 3-D points per second within a 200-foot range.” Legal, ethical, political, and social commentators are casting attention on—and vetting proposed standards and frameworks to govern—the life-and-death “choices” made by autonomous cars.

Among the other relevant advancements is the potential for learning algorithms and architectures to achieve human-level performance in a growing number of previously intractable artificial-intelligence (AI) domains. For instance, a computer program recently achieved a feat previously thought to be at least a decade away: defeating a human professional player in a full-sized game of Go. In March 2016, in a five-game match, AlphaGo—a computer program using an AI technique known as “deep learning,” which “allows computers to extract patterns from masses of data with little human hand-holding”—won four games against Go expert Lee Sedol. Google, Amazon, and Baidu use the same AI technique or similar ones for such tasks as facial recognition and serving advertisements on websites. With AlphaGo’s series of wins, Go joins chess, backgammon, and “Jeopardy!” on the list of games at which computer programs have outperformed humans.
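To make the notion of extracting patterns from masses of data slightly more concrete, the following minimal sketch (ours alone, written in Python, and orders of magnitude simpler than anything underlying AlphaGo) shows a tiny neural network learning a logical pattern from a handful of examples rather than from hand-coded rules.

```python
# Illustrative only: a miniature neural network that "learns" a pattern (XOR)
# from examples rather than from hand-coded rules. Deep-learning systems such
# as AlphaGo use vastly larger networks and far more data, but the principle
# of extracting patterns from data with little human hand-holding is the same.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and the pattern (XOR) the network must discover.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two small layers of weights, initialized randomly.
W1 = rng.normal(scale=1.0, size=(2, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's current "choices."
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge the weights to reduce the prediction error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] as the pattern is learned
```

The same principle, adjusting internal parameters until the system’s outputs match examples with no human specifying the rule, scales up (with far larger networks and datasets) to the kinds of systems discussed above.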

Yet even among leading scientists, uncertainty prevails as to the technological limits. That uncertainty precludes consensus on current capabilities, to say nothing of predictions about likely developments in the near and long term (with those horizons defined variously).

The stakes are particularly high in the context of political violence that reaches the level of “armed conflict.” That is because international law admits of far more lawful death, destruction, and disruption in war than in peace. Even for responsible parties who are committed to the rule of law, the legal regime contemplates the deployment of lethal and destructive technologies on a wide scale. The use of advanced technologies—to say nothing of the failures, malfunctioning, hacking, or spoofing of those technologies—might therefore entail far more significant consequences in relation to war than to peace. We focus here largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres.

Of course, the development and use of advanced technologies in relation to war have long generated ethical, political, and legal debates. There is nothing new about the general desire and the need to discern whether the use of an emerging technological capability would comport with or violate the law. Today, however, emergent technologies sharpen—and, to a certain extent, recast—that enduring endeavor. A key reason is that those technologies are seen as presenting an inflection point at which human judgment might be “replaced” by algorithmically-derived “choices.” To unpack and understand the implications of that framing requires, among other things, technical comprehension, ethical awareness, and legal knowledge. Understandably if unfortunately, competence across those diverse domains has so far proven difficult to achieve for the vast majority of states, practitioners, and commentators.

Largely, the discourse to date has revolved around a concept that so far lacks a definitional consensus: “autonomous weapon systems” (AWS). Current conceptions of AWS range enormously. On one end of the spectrum, an AWS is an automated component of an existing weapon. On the other, it is a platform that is itself capable of sensing, learning, and launching resulting attacks. Irrespective of how it is defined in a particular instance, the AWS framing narrows the discourse to weapons, excluding the myriad other functions, however benevolent, that the underlying technologies might be capable of.

What autonomous weapons mean for legal responsibility and for broader accountability has generated one of the most heated recent debates about the law of war. A constellation of factors has shaped the discussion.

Perceptions of evolving security threats, geopolitical strategy, and accompanying developments in military doctrine have led governments to prioritize the use of unmanned and increasingly autonomous systems (with “autonomous” defined variously) in order to gain and maintain a qualitative edge. By 2013, leadership in the U.S. Navy and Department of Defense (DoD) had identified autonomy in unmanned systems as a “high priority.” In March 2016, the Ministries of Foreign Affairs and Defense of the Netherlands affirmed their belief that “if the Dutch armed forces are to remain technologically advanced, autonomous weapons will have a role to play, now and in the future.” A growing number of states hold similar views.

At the same time, human-rights advocates and certain technology experts have catalyzed initiatives to promote a ban on “fully autonomous weapons” (which those advocates and experts also call “killer robots”). The primary concerns are couched in terms of delegating decisions about lethal force away from humans—thereby “dehumanizing” war—and, in the process, of making wars easier to prosecute. Following the release in 2012 of a report by Human Rights Watch and the International Human Rights Clinic at Harvard Law School, the Campaign to Stop Killer Robots was launched in April 2013 with an explicit goal of fostering a “pre-emptive ban on fully autonomous weapons.” The rationale is that such weapons will, pursuant to this view, never be capable of comporting with international humanitarian law (IHL) and are therefore per se illegal. In July 2015, thousands of prominent AI and robotics experts, as well as other scientists, endorsed an “Open Letter” on autonomous weapons, arguing that “[t]he key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” Those endorsing the letter “believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” But, they cautioned, “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Meanwhile, a range of commentators has argued in favor of regulating autonomous weapon systems, primarily through existing international law rules and provisions. In general, these voices focus on grounding the discourse in terms of the capability of existing legal norms—especially those laid down in IHL—to regulate the design, development, and use, or to prohibit the use, of emergent technologies. In doing so, these commentators often emphasize that states have already developed a relatively thick set of international law rules that guide decisions about life and death in war. Even if there is no specific treaty addressing a particular weapon, they argue, IHL regulates the use of all weapons through general rules and principles governing the conduct of hostilities that apply irrespective of the weapon used. A number of these voices also aver that—for political, military, commercial, or other reasons—states are unlikely to agree on a preemptive ban on fully autonomous weapons, and therefore a better use of resources would be to focus on regulating the technologies and monitoring their use. In addition, these commentators often emphasize the modularity of the technology and raise concerns about foreclosing possible beneficial applications in the service of an (in their eyes, highly unlikely) prohibition on fully autonomous weapons.

Overall, the lack of consensus on the root classification of AWS and on the scope of the resulting discussion makes it difficult to generalize. But the main contours of the ensuing “debate” often cast a purportedly unitary “ban” side versus a purportedly unitary “regulate” side. As with many shorthand accounts, this formulation is overly simplistic. An assortment of thoughtful contributors does not fit neatly into either general category. And, when scrutinized, those wholesale categories—of “ban” vs. “regulate”—disclose fundamental flaws, not least because of the lack of agreement on what, exactly, is meant to be prohibited or regulated. Be that as it may, a large portion of the resulting discourse has been captured in these “ban”-vs.-“regulate” terms.

Underpinning much of this debate are arguments about decision-making in war and about who is better situated to make life-and-death decisions—humans or machines. There is also disagreement over the benefits and costs of distancing human combatants from the battlefield and over whether the possible life-saving benefits of AWS are offset by the fact that war also becomes, in certain respects, easier to conduct. And there are different understandings of, and predictions about, what machines are and will be capable of doing.

With the rise of expert and popular interest in AWS, states have been paying more public attention to the issue of regulating autonomy in war. But the primary venue at which they are doing so functionally limits the discussion to weapons. Since 2014, informal expert meetings on “lethal autonomous weapons systems” have been convened on an annual basis at the United Nations Office in Geneva. These meetings take place within the structure of the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (CCW). That treaty is set up as a framework convention: through it, states may adopt additional instruments that pertain to the core concerns of the baseline agreement (five such protocols have been adopted). Alongside the CCW, other arms-control treaties address specific types of weapons, including chemical weapons, biological weapons, anti-personnel landmines, cluster munitions, and others. The CCW is the only existing regime, however, that is ongoing and open-ended and is capable of being used as a framework to address additional types of weapons.

The original motivation for convening states on this issue within the CCW framework was to propel a protocol banning fully autonomous weapons. The most recent meeting (which was convened in April 2016) recommended that the Fifth Review Conference of states parties to the CCW (which is scheduled to take place in December 2016) “may decide to establish an open-ended Group of Governmental Experts (GGE)” on AWS. In the past, the establishment of a GGE has led to the adoption of a new CCW protocol (one banning permanently-blinding lasers). Whether states parties will establish a GGE on AWS—and, if so, what its mandate will be—are open questions. In any event, at the most recent meetings, about two dozen states endorsed the notion—the contours of which remain undefined so far—of “meaningful human control” over autonomous weapon systems.

Zooming out, we see that a pair of interlocking factors has obscured and hindered analysis of whether the relevant technologies can and should be regulated.

One factor is the sheer technical complexity at issue. Lack of knowledge of technical intricacies has hindered efforts by non-experts to grasp how the core technologies may either fit within or frustrate existing legal frameworks.

This is not a challenge particular to AWS, of course. The majority of IHL professionals are not experts in the inner workings of the numerous technologies related to armed conflict. Most IHL lawyers could not detail the technical specifications, for instance, of various armaments, combat vehicles, or intelligence, surveillance, and reconnaissance (ISR) systems. But in general that lack of technical knowledge would not necessarily impede at least a provisional analysis of the lawfulness of the use of such a system. That is because an initial IHL analysis is often an exercise in identifying the relevant rule and beginning to apply it in relation to the applicable context. Yet the widely diverse conceptions of AWS and the varied technologies accompanying those conceptions pose an as-yet-unresolved set of classification challenges. Without a threshold classification, a general legal analysis cannot proceed.

The other, related factor is that states—as well as lawyers, technologists, and other commentators—disagree in key respects on what should be addressed. The headings so far include “lethal autonomous robots,” “lethal autonomous weapons systems,” “autonomous weapons systems” more broadly, and “intelligent partnerships” more broadly still. And the possible standards mentioned include “meaningful human control” (including in the “wider loop” of targeting operations), “meaningful state control,” and “appropriate levels of human judgment.” More basically, there is no consensus on whether to include only weapons or, additionally, systems capable of involvement in other armed conflict-related functions, such as transporting and guarding detainees, providing medical care, and facilitating humanitarian assistance.

Against this backdrop, the AWS framing has largely precluded meaningful analysis of whether it (whatever “it” entails) can be regulated, let alone whether and how it should be regulated. In this briefing report, we recast the discussion by introducing the concept of “war algorithms.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. Those algorithms seem to be a—and perhaps the—key ingredient of what most people and states discuss when they address AWS. We expand the purview beyond weapons alone (important as those are) because the technological capabilities are rarely, if ever, limited to use only as weapons and because other war functions involving algorithmically-derived autonomy should be considered for regulation as well. Moreover, given the modular nature of much of the technology, a focus on weapons alone might thwart attempts at regulation.

Algorithms are a conceptual and technical building block of many systems. Those systems include self-learning architectures that today present some of the sharpest questions about “replacing” human judgment with algorithmically-derived “choices.” Moreover, algorithms form a foundation of most of the systems and platforms—and even the “systems of systems”—often discussed in relation to AWS. Absent an unforeseen development, algorithms are likely to remain a pillar of the technical architectures.

The constructed systems through which these algorithms are effectuated differ enormously. So do the nature, forms, and tiers of human control and governance over them. Existing constructed systems include, among many others, stationary turrets, missile systems, and manned or unmanned aerial, terrestrial, or marine vehicles.

All of the underlying algorithms are developed by programmers and are expressed in computer code. But some of these algorithms—especially those capable of “self-learning” and whose “choices” might be difficult for humans to anticipate or unpack—seem to challenge fundamental and interrelated concepts that underpin international law pertaining to armed conflict and related accountability frameworks. Those concepts include attribution, control, foreseeability, and reconstructability.

At their core, the design, development, and use of war algorithms raise profound questions. Most fundamentally, those inquiries concern who, or what, should decide—and what it means to decide—matters of life and death in relation to war. But war algorithms also bring to the fore an array of more quotidian, though also important, questions about the benefits and costs of human judgment and of “replacing” it with algorithmically-derived systems, including in such areas as logistics.

We ground our analysis by focusing on war-algorithm accountability. In short, we are primarily interested in the “duty to account … for the exercise of power” over—in other words, holding someone or some entity answerable for—the design, development, or use (or a combination thereof) of a war algorithm. That power may be exercised by a diverse assortment of actors. Some are obvious, especially states and their armed forces. But myriad other individuals and entities may exercise power over war algorithms, too. Consider the broad classes of “developers” and “operators,” both within and outside of government, of such algorithms and their related systems. Also think of lawyers, industry bodies, political authorities, members of organized armed groups—and many, many others. Focusing on war algorithms encompasses them all.

We draw on the extensive—and rapidly growing—body of scholarship and other analyses that have addressed related topics. To help illuminate the discussion, we outline what technologies and weapon systems already exist, what fields of international law might be relevant, and what regulatory avenues might be available. As noted above, because international law is the touchstone normative framework for accountability in relation to war, we focus on public international law sources and methodologies. But as we show, other norms and forms of governance might also merit attention.

Accountability is a broad term of art. We adapt—from the work of an International Law Association Committee in a different context (the accountability of international organizations)—a three-part accountability approach. Our framework outlines three axes along which to focus an initial inquiry into war algorithms.

The first axis is state responsibility. It concerns state responsibility arising out of acts or omissions involving a war algorithm where those acts or omissions constitute a breach of a rule of international law. State responsibility entails discerning the content of the rule, identifying a breach of the rule, assigning attribution for that breach to a state, determining available excuses (if any), and imposing measures of remedy.

The second axis is a form of individual responsibility under international law. In particular, it concerns individual responsibility under international law for international crimes—such as war crimes—involving war algorithms. This form of individual responsibility entails establishing the commission of a crime under the relevant jurisdiction, assessing the existence of a justification or excuse (if any), and, upon conviction, imposing a sentence.

The third and final axis is scrutiny governance. Embracing a wider notion of accountability, it concerns the extent to which a person or entity is and should be subject to, or should exercise, forms of internal or external scrutiny, monitoring, or regulation (or a combination thereof) concerning the design, development, or use of a war algorithm. Scrutiny governance does not hinge on—but might implicate—potential and subsequent liability or responsibility (or both). Forms of scrutiny governance include independent monitoring, norm (such as legal) development, adopting non-binding resolutions and codes of conduct, normative design of technical architectures, and community self-regulation.

Following an introduction that highlights the stakes, we proceed with a section outlining pertinent considerations regarding algorithms and constructed systems. We highlight recent advancements in artificial intelligence related to learning algorithms and architectures. We also examine state approaches to technical autonomy in war, focusing on five such approaches—those of Switzerland, the Netherlands, France, the United States, and the United Kingdom. Finally, to ground the often-theoretical debate pertaining to autonomous weapon systems, we describe existing weapon systems that have been characterized by various commentators as AWS.

The next section outlines the main fields of international law that war algorithms might implicate. There is no single branch of international law dedicated solely to war algorithms. So we canvass how those algorithms might fit within or otherwise implicate various fields of international law. We ground the discussion by outlining the main ingredients of state responsibility. To help illustrate states’ positions concerning AWS, we examine whether an emerging norm of customary international law specific to AWS may be discerned. We find that one cannot (at least not yet). So we next highlight how the design, development, or use (or a combination thereof) of a war algorithm might implicate more general principles and rules found in various fields of international law. Those fields include the jus ad bellum, IHL, international human rights law, international criminal law (ICL), and space law. Because states and commentators have largely focused on AWS to date, much of our discussion here relates to the AWS framing.

The subsequent section elaborates a (non-exhaustive) war-algorithm accountability approach. That approach focuses on state responsibility for an internationally wrongful act, on individual responsibility under international law for international crimes, and on wider forms of scrutiny, monitoring, and regulation. We highlight existing accountability actors and architectures under international law that might regulate war algorithms. These include war reparations as well as international and domestic tribunals. We then turn to less conventional accountability avenues, such as those rooted in normative design of technical architectures (including maximizing the auditability of algorithms) and community self-regulation.
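One way to picture what maximizing the auditability of algorithms might mean in practice is the following minimal, hypothetical sketch. The function names, record fields, and placeholder classifier are our own illustrative assumptions, not a description of any existing or proposed system; the idea is simply that each algorithmically-derived “choice” is recorded (inputs, software version, output, and timestamp) in a tamper-evident log that supports later reconstruction and review.

```python
# A minimal, hypothetical sketch of "auditability by design": every
# algorithmically-derived "choice" is recorded--inputs, software version,
# output, and timestamp--so that it can later be reconstructed and reviewed.
# The function and field names here are illustrative assumptions only.
import hashlib
import json
import time


def audited_decision(decision_fn, inputs, model_version, log):
    """Run decision_fn on inputs and append a tamper-evident record to log."""
    output = decision_fn(inputs)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # Hash chaining: each record commits to the previous one, making
        # after-the-fact alteration of the log detectable.
        "prev_hash": log[-1]["hash"] if log else None,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return output


# Example use with a placeholder (non-weapon) classification routine.
log = []
classify = lambda x: "object-of-interest" if x["signal_strength"] > 0.8 else "clutter"
audited_decision(classify, {"signal_strength": 0.93}, "demo-model-0.1", log)
print(len(log), log[0]["output"])
```

Design choices of this kind do not by themselves resolve questions of responsibility, but they can make reconstructability, one of the concepts identified above, a property of the technical architecture rather than an afterthought.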

In the conclusion, we return to the deficiencies of current discussions of AWS and emphasize the importance of addressing the wide and serious concerns raised by AWS with technical proficiency, legal expertise, and non-ideological commitment to a genuine and inclusive inquiry. On the horizon, we see that two contradictory trends may be combining into a new global climate that is at once enterprising and anxious. Militaries see myriad technological triumphs that will transform warfighting. Yet the possibility of “replacing” human judgment with algorithmically-derived “decisions”—especially in war—threatens what many consider to define us as humans.

To date, the lack of demonstrated technical knowledge among many states and commentators, the unwillingness of states to share closely-held national-security technologies, and the absence of a definitional consensus on what is meant by autonomous weapon systems have impeded regulatory efforts on AWS. Moreover, uncertainty about which actors would benefit most from advances in AWS, and about how long such benefits would yield a meaningful qualitative edge over others, seems likely to continue to inhibit efforts at negotiating binding international rules on the development and deployment of AWS. In this sense, efforts to reach a dedicated international regime addressing AWS may encounter the same frustrations as analogous efforts to address cyber warfare. True, unlike in the early days of cyber warfare, there has been greater state engagement on the regulation of AWS. In particular, the concept of “meaningful human control” over AWS has already been endorsed by more than two dozen states. But much remains up in the air as states decide whether to establish a Group of Governmental Experts on AWS at the upcoming Fifth Review Conference of the CCW.

The current crux, as we see it, is whether advances in technology—especially those capable of “self-learning” and of operating in relation to war, whose “choices” may be difficult for humans to anticipate or unpack, or whose “decisions” are seen as “replacing” human judgment—are susceptible to regulation and, if so, whether and how they should be regulated. One way to think about the core concern, and one that vaults over at least some of the impediments to the discussion on AWS, is the new concept we raise: war algorithms. War algorithms include not only algorithms capable of being used in weapons but also those capable of being used in any other function related to war.

More war algorithms are on the horizon. Two months ago, the Defense Science Board, an advisory committee to the U.S. Department of Defense, identified five “stretch problems”—that is, goals that are “hard-but-not-too-hard” and that are meant to accelerate the process of bringing a new algorithmically-derived capability into widespread application (the third of these is illustrated schematically after the list):

  • Generating “future loop options” (that is, “using interpretation of massive data including social media and rapidly generated strategic options”);
  • Enabling autonomous swarms (that is, “deny[ing] the enemy’s ability to disrupt through quantity by launching overwhelming numbers of low‐cost assets that cooperate to defeat the threat”);
  • Intrusion detection on the Internet of Things (that is, “defeat[ing] adversary intrusions in the vast network of commercial sensors and devices by autonomously discovering subtle indicators of compromise hidden within a flood of ordinary traffic”);
  • Building autonomous cyber-resilient military vehicle systems (that is, “trust[ing] that … platforms are resilient to cyber‐attack through autonomous system integrity validation and recovery”); and
  • Planning autonomous air operations (that is, “operat[ing] inside adversary timelines by continuously planning and replanning tactical operations using autonomous ISR analysis, interpretation, option generation, and resource allocation”).
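To give a flavor of the third stretch problem (autonomously discovering subtle indicators of compromise within a flood of ordinary traffic), the following deliberately simplified sketch flags observations that deviate sharply from a learned baseline of normal behavior. The data, feature, and threshold are invented placeholders; real systems are vastly more sophisticated.

```python
# A deliberately simplified, hypothetical illustration of anomaly-based
# intrusion detection: flag traffic whose statistical profile deviates
# sharply from a baseline of ordinary behavior. The values below are
# invented placeholders, not drawn from any real deployment.
import statistics

# Baseline: bytes-per-minute observed from a sensor during normal operation.
baseline = [120, 118, 125, 119, 121, 117, 123, 120, 122, 118]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(observation, threshold=4.0):
    """Flag observations more than `threshold` standard deviations from baseline."""
    return abs(observation - mean) > threshold * stdev

for obs in [121, 124, 410]:  # the last value mimics a sudden traffic spike
    print(obs, "anomalous" if is_anomalous(obs) else "ordinary")
```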

What this trajectory toward greater algorithmic autonomy in war—at least among more technologically-sophisticated armed forces and even some non-state armed groups—means for accountability purposes seems likely to stay a contested issue for the foreseeable future.

In the meantime, it remains to be authoritatively determined whether war algorithms will be capable of making the evaluative decisions and value judgments that are incorporated into IHL. It is currently not clear, for instance, whether war algorithms will be capable of formulating and implementing the following IHL-based evaluative decisions and value judgments (a schematic sketch of the difficulty follows the list):

  • The presumption of civilian status in case of “doubt”;
  • The assessment of “excessiveness” of expected incidental harm in relation to anticipated military advantage;
  • The betrayal of “confidence” in IHL in relation to the prohibition of perfidy; and
  • The prohibition of destruction of civilian property except where “imperatively” demanded by the necessities of war.
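A schematic sketch may help convey why these standards resist straightforward encoding. The fragment below is purely illustrative and not a proposal; the numeric thresholds are invented placeholders, IHL itself supplies no such numbers, and that gap between open-textured legal standards and the precise parameters an algorithm requires is the difficulty at issue.

```python
# A purely schematic sketch of why these evaluative standards resist
# straightforward encoding. The numeric thresholds are invented placeholders:
# IHL supplies no such numbers.

DOUBT_THRESHOLD = 0.5        # invented: IHL speaks only of "doubt"
EXCESSIVENESS_RATIO = 1.0    # invented: IHL speaks only of "excessive ... in relation to"


def presumed_civilian(p_combatant: float) -> bool:
    """Presumption of civilian status 'in case of doubt'--but how much doubt?"""
    return p_combatant < DOUBT_THRESHOLD


def attack_permissible(expected_incidental_harm: float,
                       anticipated_military_advantage: float) -> bool:
    """Proportionality: is expected incidental harm 'excessive' in relation to
    the anticipated military advantage? The two quantities are not even
    expressed in a common unit, so any ratio is already a value judgment."""
    if anticipated_military_advantage <= 0:
        return False
    return (expected_incidental_harm / anticipated_military_advantage) <= EXCESSIVENESS_RATIO


print(presumed_civilian(0.4), attack_permissible(3.0, 10.0))  # True True, but only given the invented numbers
```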

*     *     *

Two factors may suggest that, at least for now, the most immediate ways to regulate war algorithms specifically and to pursue accountability over them might be to follow not only traditional paths but also less conventional ones. First, even where the formal law may seem sufficient, concerns about practical enforcement abound. Second, the proliferation of increasingly advanced technical systems based on self-learning and distributed control raises the question of whether the model of individual responsibility found in ICL might pose conceptual challenges when applied to AWS and war algorithms. As illustrated above, the less conventional paths might include relatively formal avenues—such as states making, applying, and enforcing war-algorithm rules of conduct within and beyond their territories—as well as less formal ones, such as coding law into technical architectures and community self-regulation.

In short, individual responsibility for international crimes under international law remains one of the vital accountability avenues in existence today, as do measures of remedy for state responsibility. Yet in practice responsibility along either avenue is unfortunately relatively rare. And thus those paths, whether pursued separately or in combination, seem insufficient to effectively address the myriad regulatory concerns pertaining to war algorithms—at least until we better understand what is at issue. These concerns might lead those seeking to strengthen accountability of war algorithms to pursue not only traditional, formal avenues but also less formal, softer mechanisms.

In that connection, it seems likely that attempts to change governments’ approaches to technical autonomy in war through social pressure (at least for those governments that might be responsive to that pressure) will continue to be a vital avenue along which to pursue accountability. But here, too, there are concerns. Numerous initiatives already exist. Some of them are very well informed; others less so. Many of them are motivated by ideological, commercial, or other interests that—depending on one’s viewpoint—might strengthen or thwart accountability efforts. And given the paucity of formal regulatory regimes, some of these initiatives may end up having considerable impact, despite their shortcomings.

Stepping back, we see that technologies of war, as with technologies in so many areas, produce an uneasy blend of promise and threat. With respect to war algorithms, understanding these conflicting pulls requires attention to a century-and-a-half-long history during which war came to be one of the most highly regulated areas of international law. But it also requires technical know-how. Thus those seeking accountability for war algorithms would do well not to forget the essentially political work of IHL’s designers—nor to obscure the fact that today’s technology is, at its core, designed, developed, and deployed by humans. Ultimately, war-algorithm accountability seems unrealizable without sufficient competence in technical architectures and in legal frameworks, coupled with ethical, political, and economic awareness.

Finally, we also include a Bibliography and Appendices. The Bibliography contains over 400 analytical sources, in various languages, pertaining to technical autonomy in war. The Appendices contain detailed charts listing and categorizing states’ statements at the 2015 and 2016 Informal Meetings of Experts on Lethal Autonomous Weapons Systems convened within the framework of the CCW.


 

[1]. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information 8 (2015), citing Clay Shirky, A Speculative Post on the Idea of Algorithmic Authority, Clay Shirky (November 15, 2009, 4:06 PM), http://www.shirky.com/weblog/2009/11/a-speculative-post-on-the-idea-of-algorithmic-authority (referencing Shirky’s definition of “algorithmic authority” as “the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying ‘Trust this because you trust me.’”). All further citations for sources underlying this Executive Summary are available in the full-text version of the briefing report.