Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].
Section 2: Technology Concepts and Developments
This section sketches key technology concepts and developments, as well as certain states’ understandings of autonomy in relation to war. We set the stage by discussing algorithms and constructed systems. We then outline recent advancements in the AI field of deep learning. Next, we highlight five states’ approaches to technical autonomy in war. In doing so, we also note accompanying standards that states and commentators are actively vetting, such as “meaningful human control” over AWS. Finally, we describe some of the main technologies that various commentators have addressed in relation to autonomous weapon systems.
Two Key Ingredients
In this briefing report, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system.
Algorithm
An algorithm has been defined informally as “any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.”[39] Accordingly, an algorithm is “a sequence of computational steps that transform the input into the output.”[40] Yet “[w]e can also view an algorithm as a tool for solving a well-specified computational problem.”[41] In this second approach, “[t]he statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.”[42] Here, we are most concerned with algorithms that are expressed in computer code and that can be conceptualized as making “decisions” or “choices” along the computational pathway undertaken in light of the input and in accordance with programmed parameters.
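The notion of an algorithm as a sequence of computational steps that makes programmed “choices” along a computational pathway can be illustrated with a minimal, self-contained sketch. (The example — binary search — is ours, chosen purely for illustration; it is not drawn from the sources cited above.)

```python
def binary_search(items, target):
    """Transform an input (a sorted list and a target value) into an output
    (the target's index, or None).

    At each step, the algorithm makes a programmed "choice" -- which half of
    the remaining list to examine -- in light of the input and in accordance
    with its coded parameters.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:    # choice: stop; the output is this index
            return mid
        elif items[mid] < target:   # choice: discard the lower half
            low = mid + 1
        else:                       # choice: discard the upper half
            high = mid - 1
    return None                     # output: target absent from the input
```

However simple, the sketch exhibits the two framings noted above: it both transforms input into output and realizes a specific computational procedure for a well-specified input/output relationship.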
The relevant algorithms may vary enormously in terms of their sophistication and complexity. But, at base, they all are conceived and coded initially by humans to take some input and produce some output or to describe a specific computational procedure for achieving a defined desirable input/output relationship.
By limiting our inquiry to war algorithms, we narrow the types of algorithms at issue to those that fulfill three conditions: algorithms (1) that are expressed in computer code; (2) that are effectuated through a constructed system; and (3) that are capable of operating in relation to armed conflict. Not all weapons or systems that have been characterized as “AWS” meet these criteria. But most do. And, more to the point, we see these algorithms as a key ingredient in what most commentators and states mean when they address notions of autonomy.
We predicate our definition on the algorithm being capable of operating in relation to armed conflict, even if it is not initially designed for such use. We thus do not limit our classification to algorithms that are in fact used in armed conflict (though the broader category of capability would subsume those that are actually used). A critique of this approach might be that it is over-inclusive because it does not distinguish the algorithms and constructed systems that are intended for use in relation to war from the vast array of other such algorithms and systems that might be adapted for such use. Yet one reason to focus on capability—instead of intent—is that much of the underlying technology is modular and can therefore be adapted for use in relation to war even if it was not initially designed and developed to do so. Moreover, with respect to accountability, focusing on capability sweeps in not only those who are in a position to choose to deploy or to operate war algorithms but also those involved in the design and development of those algorithms. The emphasis on capability thereby helps account for the diverse assortment of actors—whether in government, commercial, academic, or other contexts—who might exercise power over, and thus who might be held answerable for, the design, development, or use of war algorithms.
Constructed System
“Robot” is not a legal term of art under international law. One oft-cited, decades-old definition comes from the Robot Institute of America, a trade association of robot manufacturers and users: “a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.”[43] Others draw different definitional boundaries. Alan Winfield, for instance, defines a robot as “an artificial device that can sense its environment and purposefully act on or in that environment.”[44] Neil Richards and William Smart argue that a robot is “a constructed system that displays both physical and mental agency but is not alive in the biological sense.”[45] And the Oxford English Dictionary Online defines a robot in the modern sense[46] as “[a]n intelligent artificial being typically made of metal and resembling in some way a human or other animal.”[47]
We sidestep some of the definitional quandaries attending “robot” by focusing instead on constructed systems. For our purposes, a constructed system is a manufactured machine, apparatus, plant, or platform that is capable both of being used to gather information and of effectuating a “choice” or “decision” derived, in whole or in part, through an algorithm expressed in computer code, but that is not alive in the biological sense. By limiting our inquiry to systems that are not alive in the biological sense, we also circumvent the subject of biologically engineered agents.
The most common sensors used to gather information in constructed systems include cameras, as well as those that detect how far away objects are by transmitting certain waves and monitoring their reflections, such as radar (radio waves), sonar (sound waves), and lidar (light waves). The system may be tele-operated (also known as remotely operated)—or not. It may have a manipulator (used loosely here to denote a component providing the capability to interact with the built environment)—or not. If the system lacks a manipulator, however, it needs another avenue to effectuate the algorithmically derived “choice” or “decision” in order to meet our definition.
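The ranging principle shared by radar, sonar, and lidar — a pulse travels out, reflects, and returns, so distance is half the round-trip time multiplied by the propagation speed — can be sketched in a few lines. (The constants are standard physical values; the function name is ours, for illustration only.)

```python
# Round-trip time-of-flight ranging: a transmitted pulse travels to the
# object and back, so distance = (propagation speed x elapsed time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458    # radar and lidar (radio and light waves)
SPEED_OF_SOUND_SEAWATER_M_S = 1500  # sonar (approximate speed in seawater)

def range_from_echo(speed_m_s: float, round_trip_s: float) -> float:
    """Estimate the distance to a reflecting object, in meters."""
    return speed_m_s * round_trip_s / 2.0
```

For instance, a sonar echo returning after 0.2 seconds implies an object roughly 150 meters away, while a radar echo returning after 2 microseconds implies one roughly 300 meters away.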
The constructed systems may come in a diverse array of forms,[48] such as marine, terrestrial, aerial, or space vehicles; missile systems; or biped or quadruped robots.[49] They may operate collaboratively—including as so-called “swarms”[50]—or individually. They may use a range of power sources, such as batteries or internal combustion engines to generate electricity or to power hydraulic or pneumatic actuators. And their costs may run the gamut from a tinkerer’s budget to that of an industrial- or governmental-scale program.
A.I. Advancements
Recently published advancements in AI—especially machine learning and a class of techniques called deep learning—underscore the rapid pace of technical development.[51] Those advancements reach into many areas of modern digital life, underlying everything from “web searches to content filtering on social networks to recommendations on e-commerce websites.”[52]
For many years, “[c]onventional machine-learning techniques were limited in their ability to process natural data in their raw form.”[53] For decades, for instance, “constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data … into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input.”[54] An advance came with representational learning, which “is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification.”[55]
Deep learning—including deep neural networks—marked another advance. (A deep neural network can be thought of as “a network of hardware and software that mimics the web of neurons in the human brain.”[56]) Deep-learning methods have been explained as “representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level.”[57] As experts have explained, “[w]ith the composition of enough such transformations, very complex functions can be learned.”[58] The gist is that, “[f]or classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations.”[59]
Consider the example of a digital image. It
comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts.[60]
Through deep-learning techniques, “these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure.”[61]
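The core idea — composing simple but non-linear modules so that each layer transforms its input into a slightly more abstract representation — can be gestured at with a toy forward pass. (This sketch is ours; the weights are fixed by hand purely for illustration, whereas in deep learning they would be learned from data by a general-purpose procedure.)

```python
def relu(v):
    """A simple non-linear module: pass positive evidence, suppress the rest."""
    return [max(0.0, x) for x in v]

def linear(weights, v):
    """One layer: each output is a weighted combination of all inputs."""
    return [sum(w * x for w, x in zip(row, v)) for row in weights]

# Illustrative fixed weights; in a real network these are learned from data.
W1 = [[1.0, -1.0],   # level-1 feature: a difference between the two inputs
      [0.5,  0.5]]   # level-1 feature: an average of the two inputs
W2 = [[1.0,  1.0]]   # level 2 combines level-1 features into something
                     # slightly more abstract

def forward(x):
    h = relu(linear(W1, x))   # first, slightly abstract representation
    return linear(W2, h)      # second, more abstract representation
```

With enough such stacked transformations, very complex input/output functions can in principle be represented — the point the quoted experts make about amplifying discriminative aspects of the input while suppressing irrelevant variation.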
Already, “[d]eep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years.”[62] Those include beating records in image recognition and speech recognition, as well as beating other machine-learning techniques at, for example, predicting the activity of drug molecules.[63] Writing in 2015, some experts “think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data.”[64] In line with this view, “[n]ew learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.”[65]
One mark of that progress came late last year when a computer program, AlphaGo, achieved a feat previously thought to be at least a decade away: defeating a human professional player in a full-sized game of Go.[66] (A few months later, AlphaGo won four of five matches against Lee Sedol, who, as one of the top players in the world, had achieved the highest rank of nine dan.[67]) The system designers introduced a new approach based on deep convolutional neural networks that used “value networks” to evaluate board positions and “policy networks” to select moves. (Convolutional neural networks—the typical architecture of which is structured as a series of stages—“are designed to process data that come in the form of multiple arrays.”[68] In other words, these networks “use many layers of neurons, each arranged in overlapping tiles, to construct increasingly abstract, localized representations of an image.”[69]) For AlphaGo, those deep neural networks were “trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play.”[70] AlphaGo developers also introduced a new search algorithm—which was designed in part to encourage exploration on its own—that combines a sophisticated simulation technique (called Monte Carlo tree search) with the value and policy networks.[71]
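How a search of this kind blends a policy network’s prior with accumulated value estimates can be suggested by a toy move-selection rule in the spirit of the “upper confidence bound” rules used in Monte Carlo tree search. (This sketch is ours and simplifies heavily; the function, parameter names, and numbers are illustrative and are not AlphaGo’s actual algorithm or constants.)

```python
import math

def select_move(stats, c_explore=1.0):
    """Pick the candidate move maximizing value estimate + exploration bonus.

    stats: one (prior, visit_count, total_value) tuple per candidate move,
    where `prior` plays the role of a policy network's preference and
    `total_value` accumulates simulation outcomes. The bonus term shrinks
    as a move is visited more, nudging the search toward moves the policy
    rates highly but the simulations have not yet explored.
    """
    total_visits = sum(n for _, n, _ in stats)
    best_move, best_score = None, float("-inf")
    for move, (prior, n, w) in enumerate(stats):
        q = w / n if n else 0.0  # mean simulated value of this move so far
        u = c_explore * prior * math.sqrt(total_visits) / (1 + n)
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move
```

Note how a lightly visited move with a strong simulated record (a high mean value) can outrank a heavily visited favorite: the rule balances exploiting what the search has learned against exploring what the policy prior recommends.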
By grounding our discussion in algorithms expressed in computer code and effectuated through constructed systems, we sidestep some of the doctrinal debates on what constitutes “artificial intelligence” and “artificial general intelligence”—and on whether the latter may be realistically achievable or is more the stuff of science fiction. These questions are outside the scope of this briefing report, but they are nonetheless vitally important. In any event, it merits emphasis that existing learning algorithms and architectures already have remarkable capabilities that, at least, seem to approach aspects of human “decision-making.”
For their part, creators of AlphaGo have characterized Go as “exemplary in many ways of the difficulties faced by artificial intelligence: a challenging decision-making task, an intractable search space, and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function.”[72] In the eyes of its designers, AlphaGo provides “hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains.”[73]
Approaches to Technical Autonomy in War
As noted above, there is no agreement on what “autonomy” means in the context of the discussion to date on autonomous weapon systems.
Commentators’ views on what constitutes “autonomy” in this context range enormously. Some, for instance, focus on whether the system navigates with a human on board (“manned”) or without one (“unmanned”). Others emphasize geography, such as whether a human operates the weapon remotely or proximately. Some hold that “autonomy” in AWS should be reserved only for “critical functions” in the conduct-of-hostilities targeting cycle. Still others define autonomy as the capability of a system, once launched, to sense, think, learn, and act, all without further human intervention. A number of definitions combine various components of these notions. But under most of these definitions and classifications, it is beyond doubt that at least some existing military systems contain a degree of autonomy. (In the last sub-section of this section, we profile examples of weapons, weapon systems, and weapon platforms that some commentators have characterized as AWS.)
In this sub-section, we focus on the positions of states, because discerning states’ positions and practices is one of the key steps in illuminating the scope of international law as it currently stands (lex lata) and distinguishing that from nascent norms and from the law as it should be (lex ferenda). A handful of states have considered or formally adopted definitions relevant to AWS, whether while focusing on weapon systems or unmanned aerial systems. Below, we summarize five of the most elaborate sets of these considerations and definitions—those by Switzerland, the Netherlands, France, the United States, and the United Kingdom.
Switzerland
In the lead-up to the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Switzerland published an “Informal Working Paper” titled “Towards a ‘compliance-based’ approach to LAWS.” The paper proposes “to initially describe autonomous weapons systems (AWS) simply as” follows:
[W]eapons systems that are capable of carrying out tasks governed by IHL in partial or full replacement of a human in the use of force, notably in the targeting cycle.[74]
According to the paper, “[s]uch a working definition is inclusive, accounts for a wide array of system configurations, and allows for a debate that is differentiated, compliance-based, and without prejudice to the question of appropriate regulatory response.”[75] In the view of Switzerland, “the working definition proposed is not conceived in any way to single out only those systems which could be seen as legally objectionable.”[76] The authors note that “[a]t one end of the spectrum of systems falling within that working definition, States may find some subcategories to be entirely unproblematic, while at the other end of the spectrum, States may find other subcategories unacceptable.”[77] Finally, the paper notes, “[a]s discussions advance, this working definition could and probably should evolve to become more specific and purposeful.”[78]
The Netherlands
On April 7, 2015, the Netherlands Ministries of Foreign Affairs and of Defense requested a report from the Advisory Council on International Affairs (AIV) and the Advisory Committee on Issues of Public International Law (CAVV) addressing five sets of questions concerning autonomous weapon systems:
- What role can autonomous weapons systems (and autonomous functions within weapons systems) fulfil in the context of military action now and in the future?
- What changes might occur in the accountability mechanism for the use of fully or semi-autonomous weapons systems in the light of associated ethical issues? What role could the concept of ‘meaningful human control’ play in this regard, and what other concepts, if any, might be helpful here?
- In its previous advisory report, the CAVV states that the deployment of any weapons system, whether or not it is wholly or partly autonomous, remains subject to the same legal framework. As far as the CAVV is concerned, there is no reason to assume that the existing international legal framework is inadequate to regulate the deployment of armed drones. Does the debate on fully or semi-autonomous weapons systems give cause to augment or amend this position?
- How do the AIV and the CAVV view the UN Special Rapporteur’s call for a moratorium on the development of fully autonomous weapons systems?
- How can the Netherlands best contribute to the international debate on this issue?
A joint committee of the AIV and the CAVV prepared a report, which the AIV adopted on October 2, 2015 and the CAVV adopted on October 12, 2015.[79] On March 2, 2016, the government responded to the report. (We use the term “government” in this context interchangeably with reference to the Ministries of Foreign Affairs and of Defense of the Netherlands.) The main conclusion of the report, in the words of the government’s response, “is that meaningful human control is required in the deployment of autonomous weapon systems”—a view with which the government concurs.[80]
The government—while noting “[t]here is as yet no internationally agreed definition of an autonomous weapon system”—supports the working definition of AWS which the advisory committee adopted:[81]
A weapon that, without human intervention, selects and engages targets matching certain predetermined criteria, following a human decision to deploy the weapon on the understanding that an attack, once launched, cannot be stopped by human intervention.[82]
Underlying this definition is the notion of the “wider loop” of the decision-making process, which plays a prominent role in the Dutch government’s understanding of accountability concerning AWS. In the Dutch government’s view, with respect to AWS, humans are involved in that “wider loop” because they “play a prominent role in programming the characteristics of the targets that are to be engaged and in the decision to deploy the weapon.”[83] That means, in short, “that humans continue to play a crucial role in the wider targeting process. An autonomous weapon as defined above is therefore only deployed after human consideration of aspects such as target selection, weapon selection and implementation planning, including an assessment of potential collateral damage.”[84] In addition, the government notes, “the autonomous weapon is programmed to perform specific functions within pre-programmed conditions and parameters. Its deployment is followed by a human assessment of the effects. Assessments of potential collateral damage (proportionality) and accountability under international humanitarian law are of key importance in this respect.”[85]
As summarized by the Dutch government, “[t]he advisory committee states that if the deployment of an autonomous weapon system takes place in accordance with the process described above, there is meaningful human control. In such cases, humans make informed, conscious choices regarding the use of weapons, based on adequate information about the target, the weapon in question and the context in which it is to be deployed.”[86] For its part, “[t]he advisory committee sees no immediate reason to draft new or additional legislation for the concept of meaningful human control.”[87] Instead, “[t]he concept should be regarded as a standard deriving from existing legislation and practices (such as the targeting process).”[88] Over all, the government expressly affirms that it “supports the definition given above of an autonomous weapon system, including the concept of meaningful human control, and agrees that no new legislation is required.”[89]
France
In a “non-paper” circulated in the context of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems, France articulated the following considerations with respect to such systems:
France considers that LAWS [Lethal Autonomous Weapons Systems] share the following characteristics:
- Lethal autonomous weapons systems are fully autonomous systems. LAWS are future systems: they do not currently exist.
- Remotely operated weapons systems and supervised weapons systems should not be regarded as LAWS since a human operator remains involved, in particular during the targeting and firing phases. Existing automatic systems are not LAWS either[.]
- LAWS should be understood as implying a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command.
- The delivery platform of a LAWS would be capable of moving, adapting to its land, marine or aerial environments and targeting and firing a lethal effector (bullet, missile, bomb, etc.) without any kind of human intervention or validation.[90]
Compared to most other states that have put forward working definitions, France articulates a relatively narrow definition of what constitutes a lethal autonomous weapons system in the context of the CCW. Most striking, perhaps, is the condition that there be “a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command.” Moreover, France clarifies that, in its view, the definition of a “lethal autonomous weapons system” includes only a delivery “platform” that “would be capable of moving, adapting to its land, marine or aerial environments and targeting and firing a lethal effector … without any kind of human intervention or validation.” This formulation combines autonomy in navigation and maneuver with autonomy in certain key elements of the targeting cycle.
United States
In a series of directives and other documents, the U.S. Department of Defense (DoD) has elaborated one of the most technically specific state approaches to autonomy in relation to weapon systems.
A central document is DoD Directive 3000.09 (2012). It “[e]stablishes DoD policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms.”[91] The directive is applicable to certain DoD actors and related organizational entities.[92] It concerns “[t]he design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that can independently select and discriminate targets,” as well as “[t]he application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.”[93] However, the directive expressly “does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance.”[94] Among the relevant terms defined in the glossary of Directive 3000.09 are the following:
Autonomous weapon system: “A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”[95]
Human-supervised autonomous weapon system: “An autonomous weapon system that is designed to provide human operators with the ability to intervene and terminate engagements, including in the event of a weapon system failure, before unacceptable levels of damage occur.”[96]
Semi-autonomous weapon system: “A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator. This includes: [s]emi-autonomous weapon systems that employ autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.”[97]
Directive 3000.09 establishes that, as a matter of policy, “[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[98] More specifically, “[s]ystems will go through rigorous hardware and software verification and validation … and realistic system developmental and operational test and evaluation … in accordance with” certain guidelines.[99] In addition, “[t]raining, doctrine, and tactics, techniques, and procedures … will be established.”[100] In particular, those measures will ensure that autonomous and semi-autonomous weapon systems will, first, “[f]unction as anticipated in realistic operational environments against adaptive adversaries.” Second, they will ensure that those systems will “[c]omplete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” And third, they will ensure that those systems “[a]re sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.”[101]
The directive also establishes that “[c]onsistent with the potential consequences of an unintended engagement or loss of control of the system to unauthorized parties, physical hardware and software will be designed with appropriate: … Safeties, anti-tamper mechanisms, and information assurance in accordance with [another relevant DoD directive]. … Human-machine interfaces and controls.”[102] Furthermore, “[i]n order for operators to make informed and appropriate decisions in engaging targets,” the directive establishes that “the interface between people and machines for autonomous and semi-autonomous weapon systems shall” have three characteristics. First, they shall “[b]e readily understandable to trained operators.” Second, they shall “[p]rovide traceable feedback on system status.” And third, they shall “[p]rovide clear procedures for trained operators to activate and deactivate system functions.”[103]
Directive 3000.09 further lays down, also as a matter of policy, that “[p]ersons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).”[104] The directive establishes that autonomous and semi-autonomous weapon systems intended to be used in a manner that falls within three certain sets of policies will be considered for approval in accordance with enumerated approval procedures and other applicable policies and issuances.[105] The first such policy set establishes that “[s]emi-autonomous weapon systems (including manned or unmanned platforms, munitions, or sub-munitions that function as semi-autonomous weapon systems or as subcomponents of semi-autonomous weapon systems) may be used to apply lethal or non-lethal, kinetic or non-kinetic force.” Further pursuant to that policy set, “[s]emi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.” The second policy set lays down that “[h]uman-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks” for static defense of manned installations and for onboard defense of manned platforms. Finally in this connection, the third policy set establishes that autonomous weapon systems “may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets in accordance with” a separate DoD directive.[106]
Directive 3000.09 further provides that “[a]utonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside” those three sets of policies must be approved by the Under Secretary of Defense for Policy, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and the Chairman of the Joint Chiefs of Staff “before formal development and again before fielding in accordance with” enclosed guidelines and other applicable policies and issuances.[107] In addition, Directive 3000.09 lays down, also as a matter of policy, that “[i]nternational sales or transfers of autonomous and semi-autonomous weapon systems will be approved in accordance with existing technology security and foreign disclosure requirements and processes, in accordance with” an enumerated memorandum.[108] Enclosures to the directive further explain certain references; further elaborate verification and validation as well as testing and evaluation of autonomous and semi-autonomous weapon systems; set down guidelines for review of certain such systems; elaborate responsibilities; and provide definitions in a glossary.[109]
For its part, the U.S. DoD Law of War Manual gives examples of two ways that some weapons may have autonomous functions. First, “mines may be regarded as rudimentary autonomous weapons because they are designed to explode by the presence, proximity, or contact of a person or vehicle, rather than by the decision of the operator.”[110] And second, “[o]ther weapons may have more sophisticated autonomous functions and may be designed such that the weapon is able to select targets or to engage targets automatically after being activated by the user.”[111] The Manual authors give the example that “the United States has used weapon systems for local defense with autonomous capabilities designed to counter time-critical or saturation attacks. These weapon systems have included the Aegis ship defense system and the Counter-Rocket, Artillery, and Mortar (C-RAM) system.”[112]
United Kingdom
The United Kingdom Ministry of Defence (MoD) has addressed autonomy primarily in relation to unmanned aircraft systems. The MoD promulgated the key document—Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems (Joint Doctrine Note)—on March 30, 2011.[113] That document’s “purpose is to identify and discuss policy, conceptual, doctrinal and technology issues that will need to be addressed if such systems are to be successfully developed and integrated into future operations.”[114]
In the section on definitions, the authors discuss “automation” and “autonomy,” emphasizing that, confusingly, the two “terms are often used interchangeably even when referring to the same platform; consequently, companies may describe their systems to be autonomous even though they would not be considered as such under the military definition.”[115] Noting that “[i]t would be impossible to produce definitions that every community would agree to,” the Joint Doctrine Note authors chose the following definitions in order to be “as simple as possible, while making clear the essential differences in meaning between them”:[116]
Automated system: “In the unmanned aircraft context, an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a pre-defined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable.”
Autonomous system: “An autonomous system is capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”[117]
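The Joint Doctrine Note’s “automated system” definition — a pre-defined set of rules that maps sensor inputs to an outcome, making the output predictable — can be illustrated with a minimal, hypothetical sketch. The rule set, function name, and thresholds below are invented for illustration and do not describe any fielded system:

```python
# Minimal, hypothetical illustration of the MoD's "automated system" notion:
# a fixed, pre-defined rule set maps a sensor input to an outcome. Because
# the rules are known in advance, the output for any input is predictable.
# The rules and thresholds here are invented for illustration.

def automated_altitude_hold(altitude_m: float, target_m: float = 1000.0) -> str:
    """Return a control action from a pre-defined rule set."""
    if altitude_m < target_m - 50:
        return "climb"
    if altitude_m > target_m + 50:
        return "descend"
    return "hold"

# The same input always yields the same, predictable output.
assert automated_altitude_hold(900.0) == "climb"
assert automated_altitude_hold(1000.0) == "hold"
assert automated_altitude_hold(1100.0) == "descend"
```

An autonomous system, by contrast, would on the Joint Doctrine Note’s definition select among courses of action based on higher-level intent and perception, so its individual actions could not be enumerated in advance as a rule table like this one.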
Based on those definitions, the Joint Doctrine Note authors deduce four sets of points. The basic notion of the first set is that “[a]ny or none of the functions involved in the operation of an unmanned aircraft may be automated.”[118] A related footnote states that “[f]or major functions such as target detection, only some of the sub-functions may be automated, requiring human input to deliver the overall function.”[119]
The main idea guiding the second set of points is that “[a]utonomous systems will, in effect, be self-aware and their response to inputs indistinguishable from, or even superior to, that of a manned aircraft.”[120] As such, according to the authors, those autonomous systems “must be capable of achieving the same level of situational understanding as a human.”[121] At the time of publication (2011), the authors stated, “[t]his level of technology is not yet achievable and so, by the definition of autonomy in this JDN, none of the currently fielded or in-development unmanned aircraft platforms can be correctly described as autonomous.”[122]
The third set of points concerns the importance of “[t]he distinction between autonomous and automated ... as there are moral, ethical and legal implications regarding the use of autonomous unmanned aircraft.”[123] Those issues are discussed in another part of the Joint Doctrine Note.[124] The fourth and final set of points deduced by the authors concerns “an over-arching principle that, whatever the degree of automation, an unmanned aircraft should provide at least the same, or better, safety standard as a manned platform carrying out the same task.”[125]
In addressing accountability, the Joint Doctrine Note states that “[l]egal responsibility for any military activity remains with the last person to issue the command authorising a specific activity.”[126] The Joint Doctrine Note authors recognize, however, that “[t]his assumes that a system’s basic principles of operation have, as part of its release to service, already been shown to be lawful, but that the individual giving orders for use will ensure its continued lawful employment throughout any task.”[127] An assumption underlying this process is “that a system will continue to behave in a predictable manner after commands are issued,” yet, the authors note, “clearly this becomes problematical as systems become more complex and operate for extended periods.”[128] Indeed, according to the authors, “[i]n reality, predictability is likely to be inversely proportional to mission and environmental complexity. For long-endurance missions engaged in complex scenarios, the authorised entity that holds legal responsibility will be required to exercise some level of supervision throughout.”[129] If that is the case, in the view of the authors, “this implies that any fielded system employing weapons will have to maintain a 2-way data link between the aircraft and its controlling authority.”[130]
Examples of Purported Autonomous Weapon Systems
This section profiles weapons, weapon systems, and weapon platforms that have been couched, by various commentators, as autonomous weapon systems—such as by exhibiting or reflecting varying levels, forms, or notions of autonomy or automation, in relation to navigation or maneuvering or the targeting cycle. The inclusion of a weapon here does not reflect our evaluation of whether the weapon, system, or platform has autonomous capabilities or whether it fits within a legally relevant definition of autonomy. Most, but not all, of the weapons, systems, and platforms described here operate based, at least in part, on a war algorithm.
Mines
Anti-Personnel Mines
Anti-personnel mines are designed to “reroute or push back foot soldiers from a given geographic area,” and can kill or injure them[131] (in contrast to, for example, naval mines, which are designed to destroy ships).[132] They are typically activated “by direct pressure from above, by pressure put on a wire or filament attached to a pull switch, by a radio signal or other remote firing method, or even simply by the proximity of a person within a predetermined distance.”[133] For these reasons, anti-personnel mines do not discriminate among potential targets, as they are not capable of independently tracking different targets and choosing among them.
Underwater Mines
Naval Mines — General
Naval mines are capable of being detonated by either seismic sensors that sense vibrations in the water as a ship approaches[134] or acoustic sensors that detect sounds generated by passing ships.[135] Some modern mines use a combination of seismic, acoustic, electric, and magnetic sensors to detect nearby ships.[136] Naval mines explode when triggered, without a proximate human directing them to detonate. Naval mines do not discriminate among potential targets; if something triggers its detonation, a naval mine explodes without any independent decision-making process in which it might “choose” whether to detonate.
MK-60 CAPTOR (United States)
The MK-60 EnCAPsulated TORpedo (CAPTOR), manufactured by Alliant Techsystems, is a sophisticated anti-submarine weapon. It is a deep-water mine that, when triggered, launches a torpedo at hostile targets. It is anchored to the ocean floor and uses a surveillance system known as Reliable Acoustic Path (RAP) sound propagation to track vessels above it.[137] Vessels traveling on or very close to the surface are labeled as ships and are not attacked. Vessels traveling far enough below the surface are labeled as submarines. When it senses a submarine that does not have a “friendly” acoustic signature, the MK-60 launches a torpedo at the target.[138] It therefore exhibits autonomy in certain of its functions, in that it does not require human authorization to unleash a specific attack. Yet the MK-60 is not capable of “choosing” whether to attack an enemy submarine: if it detects an enemy submarine, it launches the torpedo with no (further) “decision-making” process involved.
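The engagement logic described for the CAPTOR reduces to a fixed classification-and-launch rule. The following is a minimal, hypothetical sketch of that logic; the depth cutoff and acoustic-signature labels are invented for illustration and do not reflect the actual system:

```python
# Hypothetical sketch of the engagement logic described for the MK-60 CAPTOR:
# classify a contact by depth (ship vs. submarine), then launch only at
# submarines lacking a "friendly" acoustic signature. The threshold and
# signature labels are invented for illustration.

FRIENDLY_SIGNATURES = {"sig-alpha", "sig-bravo"}  # hypothetical
SURFACE_DEPTH_M = 20.0  # hypothetical cutoff for "on or near the surface"

def captor_decision(depth_m: float, acoustic_signature: str) -> str:
    if depth_m <= SURFACE_DEPTH_M:
        return "ignore: classified as ship"
    if acoustic_signature in FRIENDLY_SIGNATURES:
        return "ignore: friendly submarine"
    # No further "decision-making" once a hostile submarine is detected.
    return "launch torpedo"

assert captor_decision(5.0, "sig-unknown") == "ignore: classified as ship"
assert captor_decision(200.0, "sig-alpha") == "ignore: friendly submarine"
assert captor_decision(200.0, "sig-unknown") == "launch torpedo"
```

The sketch makes concrete the point in the text: every branch is fixed in advance, so the system “chooses” nothing — detection of a non-friendly submarine leads deterministically to launch.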
Unmanned Vehicles and Systems
Unmanned Vehicles — General
Unmanned Aerial Vehicles
Unmanned Aerial Vehicles (UAVs), also called drones, comprise a broad category and refer to any aircraft without a human pilot onboard. Their functions can span from surveillance and reconnaissance to military attacks. Unmanned Combat Aerial Vehicles (UCAVs) are a subset of UAVs. Different models operate with varying degrees of autonomy across different functions. Traditionally, pilots have operated drones remotely, but drones are becoming increasingly capable of certain autonomous functions. Models such as the nEUROn (which has been referred to as a UCAV; see below) can in key respects fly autonomously,[139] compensating for unexpected events like changing weather patterns, and the X-47B (see below) can even refuel itself in mid-air at its carrier.[140] Certain UAVs appear to be technologically capable, once launched, of selecting and attacking targets without further human intervention, but most drones require human authorization or guidance before deploying lethal force. The Harpy (see below)—a “fire and forget, fully autonomous” so-called “loitering munition”—is one notable exception.[141]
Unmanned Surface Vehicles
Unmanned Surface Vehicles (USVs) broadly refer to any watercraft that operates on the surface of the water without an onboard crew. They have a wide range of commercial and military functions. The U.S. Navy often uses them for minesweeping, for surveillance and reconnaissance, and to detect submarines.[142] Like UAVs, USVs might operate with various degrees of autonomy across different functions, spanning a range from remote-controlled operation to autonomy in navigation and maneuver.[143]
Unmanned Maritime Vehicles
Unmanned Maritime Vehicles include both USVs and Autonomous Underwater Vehicles (AUVs). Both USVs and AUVs generally perform similar functions like surveillance and minesweeping.[144] Different models operate with various degrees of autonomy across different functions.[145]
Unmanned Vehicles and Systems — Specific
Dominator (United States)
Currently under development by Boeing, the Dominator is intended as a “long-endurance, autonomous UAV for intelligence, surveillance, and reconnaissance missions and potentially for strike capability.”[146] According to Boeing, the Dominator will employ “autonomous flight using small-diameter bomb avionics,” and can be deployed from a variety of artillery and vehicles, including unmanned aircraft.[147] Boeing will also examine the potential to incorporate “Textron Defense System’s Common Smart Submunition (CSS)” to differentiate and deploy against both fixed and moving targets.[148]
Guardium (Israel)
The Guardium system, developed by G-NIUS, Israel Aerospace Industries, and Elbit Systems, includes both manned and unmanned ground vehicles (UGVs) and is used by the Israel Defense Forces.[149] According to the chief executive officer for G-NIUS, the latest design of Guardium displayed at a weapons exhibition in 2015 has the capability of serving a variety of purposes, including carrying missiles, loitering munitions, or a UAV for reconnaissance missions.[150] The Guardium vehicles have “varying degrees” of autonomy: for instance, the vehicles are capable of responding to various obstacles, “automatically deploy[ing] subsystems,” and patrolling Israel’s border with Gaza,[151] yet human operators may override or intervene to control the vehicle’s functions.[152]
K-MAX Helicopter (United States)
Lockheed Martin designed the K-MAX helicopter, which is capable of deploying in a variety of environments, including cargo delivery in combat, firefighting, and humanitarian aid.[153] While the K-MAX helicopter has the capability to seat a pilot onboard, it is capable of being operated remotely to allow the system to function in a variety of high-risk environments.[154]
Knifefish (United States)
The Knifefish, designed as an unmanned underwater vehicle (UUV), is used to locate mines,[155] including those buried in so-called “high clutter environments.”[156] General Dynamics Mission Systems and Bluefin Robotics have been developing various models to be used by the U.S. Navy, possibly beginning in 2018 or 2019.[157] The Knifefish operates with autonomy in its function to sweep for mines in various underwater environments.[158]
Lijian (China)
China launched a prototype of Lijian, meaning “sharp sword,” on November 20, 2013.[159] Shenyang Aircraft Company and the Hongdu Aircraft Industries Corporation reportedly designed and manufactured the unmanned combat aerial vehicle (UCAV).[160] Other than its similar configuration to the X-47B, little is known about the UCAV or its capabilities.[161] Notably, it did not appear at Airshow China in 2014; however, the China Aerospace Science and Technology Corporation has “insinuated” that the Lijian program is “alive and well.”[162] Because little, if any, information about the Lijian’s capabilities is publicly known, it remains unclear whether the Lijian employs autonomy in its system. More generally, the release of information about China’s air forces indicates that China aims to develop an air force “capable of conducting both offensive and defensive operations,” to include “the enhancement of reconnaissance and strategic projection capabilities.”[163]
nEUROn (France, Greece, Italy, Spain, Sweden, Switzerland)
The nEUROn is an unmanned combat air vehicle (UCAV) being developed by Dassault Aviation and several European nations.[164] The nEUROn is designed to perform reconnaissance and combat missions. The various countries involved in the nEUROn program have been testing its capabilities, assessing, among other things, the “detection, localization, and reconnaissance of ground targets in autonomous modes.”[165] Testing of the nEUROn, which is designed as a demonstrator of current technologies, will also evaluate its capability to “drop…Precision Guided Munitions through the internal weapon bay.”[166]
Platform M (Russia)
According to Russian media, Platform M is a “remote-controlled robotic unit” developed by the Progress Scientific Research Technological Institute of Izhevsk.[167] Reportedly, Platform M has the capability to “destroy targets in automatic or semiautomatic control systems.”[168] Its “targeting mechanism works automatically without human assistance,” according to news reports.[169]
Pluto Plus (Italy)
The Pluto and Pluto Plus remotely operated vehicles (ROVs), also referred to as unmanned underwater vehicles (UUVs),[170] operate underwater to identify mines using features such as “sonar sensors for navigation, search, obstacle avoidance and identification,” as well as the capability to relay information, including video imagery, to the operator.[171] The Italian company Gaymarine developed the Pluto and Pluto Plus models, which are used in conjunction with other mine-countermeasure vehicles (MCMVs) by various navies throughout the world, including Italy, Nigeria, Norway, South Korea, Spain, and Thailand.[172] A pilot operates the Pluto Plus above the water, using a “remote control console” to maneuver the vehicle.[173]
Protector USV (Israel)
Developed and manufactured by Rafael Advanced Defense Systems, the 11m version of the Protector USV contains an “enhanced remotely controlled water can[n]on system for non-lethal and firefighting capabilities.”[174] It includes an unmanned boat, a tactical control system, and mission modules.[175] The 11m model includes features that will reportedly enable the USV to engage in “surveillance, reconnaissance, mine warfare, and anti-submarine warfare.”[176] The 11m model, as with earlier models of the Protector, employs two operators that work remotely from a dual-console station, controlling both the boat and the payload.[177]
Sea Hunter (United States)
In 2016, the Defense Advanced Research Projects Agency (DARPA), a U.S. government agency, designed a prototype of an autonomous surface vessel named Sea Hunter, which was manufactured by Leidos.[178] According to DARPA, the vessel can “robustly track quiet diesel electric submarines,”[179] with the ability to travel for up to several months at a time and over considerable distances; developers anticipate that it has the capability to perform other functions as well.[180] Sea Hunter is capable of autonomy in certain functions in two ways. First, it is capable of navigating and maneuvering independently without colliding with other ships.[181] Second, it is capable of locating and tracking diesel electric submarines, which can be extremely quiet and difficult to detect, within a range of two miles.[182] A human can take control of the vessel if necessary, but it is designed to perform its functions without any proximate human direction.[183]
Skat (Russia)
In 2013, the developer MiG reportedly signed an agreement to develop an unmanned combat air vehicle (UCAV) called Skat.[184] According to a Russian news agency, Skat would “carry out strike missions on stationary targets, especially air defense systems in high-threat areas, as well as mobile land and sea targets.”[185] Also according to a Russian news agency, Skat would “navigate in autonomous modes.”[186] More recent reports, however, note it is “unclear” whether Russia has continued to develop this kind of technology, stating that Russia cancelled plans to develop Skat.[187]
Taranis (United Kingdom)
Taranis is an unmanned aerial combat stealth drone being developed by the British company BAE Systems to demonstrate current technologies.[188] It is capable of performing surveillance and reconnaissance, and also serving in combat missions. According to BAE Systems, the company is attempting to determine whether the Taranis can “strike targets ‘with real precision at long range, even in another continent.’”[189] Taranis is theoretically capable of flying autonomously (although during test flights, it has always been controlled remotely by a human operator).[190] A remote human operator must give authorization before Taranis is capable of attacking any target, although the drone identifies potential targets and, once an attack has been authorized, it aims at those targets.[191]
X-47B (United States)
The X-47B is an unmanned aerial combat stealth drone that was developed by the United States, built by Northrop Grumman, and designed as a “test and development vehicle for advancing control technologies and systems necessary for operating [UAVs] in and around aircraft carriers.”[192] According to the U.S. Navy, it developed the X-47B as a “demonstrator” to showcase current capabilities; although the X-47B has not been armed, it is capable of carrying two 2,000-pound bombs.[193] While the X-47B reportedly has autonomy in certain functions,[194] an operator can take control of the X-47B via a Control Display Unit.[195] The X-47B pioneered several autonomous flight maneuvers, including the “first autonomous landing on an aircraft carrier and the first mid-air refueling by a [UAV].”[196] In principle, human authorization is required before the X-47B could be used to intentionally deploy deadly force, but the precise way in which the human operator fits into this equation is not publicly reported.[197]
Missile Systems
Missile Systems — General
“Fire and Forget” Missile Systems
“Fire and forget” missiles are capable, once launched, of reaching their target with no further human assistance. With older missile systems, the operator who fired the missile had to help guide the missile towards its target by, for example, continuing to track the target and transmitting “corrective commands” to the missile.[198] Newer “fire and forget” missiles, such as the FGM-148 Javelin (discussed below), are capable, once fired, of independently tracking their targets without outside guidance or control.[199] They are also capable of navigating certain difficult terrain on their own, and some, like the Brimstone and Brimstone 2 (discussed below), are capable of locating their target even when it was not initially in the line of sight of the launch location.
Missile Systems — Specific
Brimstone and Brimstone 2 (United Kingdom)
Brimstone is an anti-armor, “fire and forget” missile first used in 2005, and developed initially by GEC-Marconi Radar and Defense Systems (later MBDA UK).[200] The Royal Air Force (RAF) began using the Brimstone in Iraq and Afghanistan during 2008 and 2009.[201] Brimstone 2, which entered service in 2016,[202] incorporates a number of improvements from the initial Brimstone model.[203] Brimstone included “embedded algorithms” and could strike both land and naval targets.[204] Brimstone 2 introduced “an improved set of targeting algorithms,” as well as “autopilot and seeker enhancements.”[205] It is a “fire and forget” missile capable of autonomously navigating terrain as it travels toward its target and, in certain respects, of independently locating a particular target by discriminating among potential candidates.[206] Once launched, Brimstone is capable of “sweeping” a large target area, searching for a specific type of target, the details of which can be pre-programmed into each individual missile prior to launch. For example, a Brimstone missile is capable of being programmed to target only an armored vehicle, ignoring other objects.[207]
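The “sweep” behavior described for Brimstone amounts to filtering sensed contacts against a target type pre-programmed before launch. The following is a minimal, hypothetical sketch; the contact data, type labels, and selection rule (engage the first match) are invented for illustration:

```python
# Hypothetical sketch of the "sweep" behavior described for Brimstone:
# the missile is pre-programmed with a target type before launch and,
# while sweeping the target area, engages only a contact matching that
# type. Contact data and type labels are invented for illustration; the
# "first match" rule is an assumption.

def brimstone_select(contacts, programmed_type):
    """Return the ID of the first contact matching the pre-programmed type."""
    for contact in contacts:
        if contact["type"] == programmed_type:
            return contact["id"]
    return None  # no matching target found in the swept area

area = [
    {"id": "c1", "type": "civilian_car"},
    {"id": "c2", "type": "armored_vehicle"},
    {"id": "c3", "type": "armored_vehicle"},
]
assert brimstone_select(area, "armored_vehicle") == "c2"
assert brimstone_select(area, "naval_vessel") is None
```

The key point the sketch captures is that the discrimination criterion is fixed by a human before launch; what varies post-launch is only which sensed contact, if any, satisfies it.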
FGM-148 Javelin (United States)
The Javelin is a “fire and forget” anti-tank missile developed by the United States with a range of 2,500 meters.[208] Multiple countries have purchased the Javelin, including Australia, Bahrain, the Czech Republic, France, Ireland, Jordan, Lithuania, New Zealand, Norway, Oman, the United Arab Emirates, the United Kingdom, and the United States.[209] The United States has also recently approved sales of the missile to other countries, including Qatar.[210] Both Raytheon and Lockheed Martin manufacture the Javelin.[211] Two human operators carry and launch the Javelin.[212] A human operator must select the Javelin’s target; however, the missile guides itself to the target, allowing the human operators to leave the launch site before the missile strikes. Operators are capable of identifying targets “either directly [in] line-of-sight or with help from the missile’s guidance capability.”[213]
Harpy (Israel)
Developed by Israel Aerospace Industries and used principally by China, India, South Korea, Turkey, and Israel, the Harpy is a “transportable, canister-launched, fire-and-forget, fully autonomous” system,[214] which is also called a “loitering munition.”[215] Harop, a variant of the Harpy developed in 2009, has the capability to “engage time-critical, high-value, relocatable targets,” and is also capable of being launched from both land and naval-based canisters.[216]
Joint Strike Missile (Norway)
The recently developed Joint Strike Missile builds on the technology of the Naval Strike Missile.[217] Norway has funded the development of the missile, which is manufactured by Kongsberg.[218] It is designed to be integrated into the F-35 Joint Strike Fighters and to attack both naval and land targets.[219] In 2015, the Joint Strike Missile was deployed successfully in a test run, and further testing and developments are scheduled through 2017.[220] The Joint Strike Missile is not capable of choosing an initial target. It is also incapable of locating a hidden target; however, it does include a Global Positioning System/Inertial Navigation System to help it autonomously navigate close to terrain towards a preselected target. It is also programmed to automatically fly in unpredictable patterns to make it harder to intercept.[221]
Stationary Systems, including Close-In Weapon Systems
Aegis Combat System (United States)
The Aegis Combat System, manufactured by Lockheed Martin,[222] is a weapons control system capable of identifying, tracking, and attacking hostile targets.[223] Several countries use the system, including Australia, Japan, Norway, South Korea, Spain, and the United States.[224] Aegis has many more capabilities than a standalone Phalanx CIWS (see below). Like the Phalanx, Aegis relies on radar to identify possibly hostile targets.[225] Unlike the Phalanx, Aegis is capable of engaging over 100 targets simultaneously.[226] The Aegis Combat System is capable of being operated autonomously,[227] with the computer tracking various targets, determining their threat levels, and, in certain respects, independently determining whether to attack them.
AK-630 CIWS (Russia)
The AK-630 Close-In Weapons System (CIWS) gun turret is “designed to engage manned and unmanned aerial targets, small-size surface targets, soft-skinned coastal targets, and floating mines.”[228] Multiple countries have used the AK-630, including Bulgaria, Croatia, Greece, Lithuania, Poland, Romania, and Ukraine.[229]
Centurion (United States)
The Centurion Weapons System, manufactured by Raytheon, uses a “radar-guided gun” against “incoming rocket and mortar fire.”[230] The Centurion has been described as a “land-based version” of the Phalanx CIWS (see below).[231] In addition to the United States, the United Kingdom also uses the Centurion. The Centurion has the same capabilities as the Phalanx CIWS, including automatically tracking and destroying incoming fire.[232]
Counter Rocket, Artillery, and Mortar (C-RAM) (United States)
C-RAM, manufactured by Northrop Grumman and Raytheon, is a missile defense system designed to intercept hostile projectiles before they reach their intended targets. Its central component is a revised version of the U.S. Navy’s Phalanx CIWS (see below), as well as existing radar systems, adapted for on-land use.[233] Australia and the United Kingdom have purchased the system from the United States.[234] C-RAM reportedly has autonomy in its operations in terms of “intercept[ing] incoming munitions at speeds too quick for a human to react.”[235]
GDF (Switzerland)
The Oerlikon GDF is an anti-aircraft cannon initially developed in the late 1950s and currently used by over 30 countries.[236] Once activated, the GDF-005 model is capable, without further human intervention, of operating using radar to identify targets, attacking them, and reloading.[237]
Goalkeeper CIWS (The Netherlands)
The Goalkeeper CIWS, manufactured by the Thales Group, includes a gun with “missile-piercing ammunition” that enables the system to “destroy missile warheads.”[238] The navies of Belgium, Chile, the Netherlands, Portugal, Qatar, South Korea, the United Arab Emirates, and the United Kingdom use the system.[239] According to information provided by Thales, the Goalkeeper system “automatically performs the entire process from surveillance and detection to destruction, including selection of the next priority target.”[240]
Iron Dome (Israel)
The Iron Dome, developed by Rafael Advanced Defense Systems in partnership with Raytheon, seeks to “detect, assess, and intercept incoming rockets, artillery, and mortars.”[241] The Iron Dome has autonomy in some of its functions. It locates potential targets using radar and calculates their expected trajectory. If a rocket would hit a populated area, the Iron Dome is capable of launching a Tamir interceptor missile at the rocket. A human operator must authorize the launch, and she must often make the decision very quickly, sometimes in a matter of minutes.[242] Once a launch is authorized, the computer system will independently aim the Tamir and determine when to launch it. Once close enough to the hostile rocket, the Tamir explodes, destroying both projectiles. The computer algorithm, not the human operator, determines when to detonate the Tamir.
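The division of labor described for Iron Dome — machine prediction and recommendation, a human authorization gate, then machine-controlled execution — can be sketched as follows. The zone labels, the populated-area set, and the function names are invented for illustration and do not describe the actual system:

```python
# Hypothetical sketch of the decision chain described for Iron Dome:
# the system predicts a rocket's impact point and recommends an intercept
# only if a populated area is threatened; launch requires human
# authorization; aiming and detonation timing are then machine-controlled.
# Zone labels and the populated-area set are invented for illustration.

POPULATED_ZONES = {"zone-3", "zone-7"}  # hypothetical map zones

def recommend_intercept(predicted_impact_zone: str) -> bool:
    """Machine recommendation: intercept only threats to populated areas."""
    return predicted_impact_zone in POPULATED_ZONES

def iron_dome_engage(predicted_impact_zone: str, operator_authorizes: bool) -> str:
    if not recommend_intercept(predicted_impact_zone):
        return "no launch: predicted impact in open area"
    if not operator_authorizes:
        return "no launch: awaiting human authorization"
    # Post-authorization, aiming and detonation are computed by the system.
    return "launch Tamir: aim and detonation machine-controlled"

assert iron_dome_engage("zone-1", True) == "no launch: predicted impact in open area"
assert iron_dome_engage("zone-3", False) == "no launch: awaiting human authorization"
assert iron_dome_engage("zone-3", True).startswith("launch Tamir")
```

The sketch locates the human decision at a single gate in an otherwise automated chain — the structure that makes the time pressure on the operator, noted in the text, so consequential.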
Kashtan CIWS (Russia)
Manufactured by KBP Instrument Design Bureau and used by China, India, and Russia,[243] the Kashtan Close-In Weapon System (CIWS) “can engage up to six targets simultaneously,” and includes gun and missile armaments.[244] The Kashtan system has been described as a human-supervised system with certain autonomous functions.[245]
MANTIS (Germany)
The Modular, Automatic, and Network-Capable Targeting and Interception System, or MANTIS, manufactured by Rheinmetall and used by German forces, is capable of quickly acquiring a target and firing 1,000 rounds a minute.[246] An operator must first activate the MANTIS, but, once activated, “the system is fully automated, although a man in the loop allows for engagement to be overruled if needed.”[247]
MK 15 Phalanx CIWS (United States)
Manufactured by Raytheon[248] and used by at least 25 countries,[249] MK 15 Phalanx Close-In Weapons System (CIWS) is a “fast-reaction, detect-through-engage, radar guided, 20-millimeter gun weapon system” used to explode anti-ship missiles (ASMs) and other approaching threats, such as aircraft and unmanned aerial systems (UASs).[250] The Phalanx CIWS can be operated manually or in an autonomous mode.[251] The Phalanx CIWS uses radar to track nearby projectiles, and it is capable of independently determining whether they pose a threat based on their speed and direction.[252] When it is programmed to operate autonomously, the Phalanx CIWS automatically fires at incoming missiles without further human direction.[253]
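As described, the Phalanx classifies radar tracks as threats based on speed and direction, and its manual versus autonomous modes differ only in whether a threat classification leads directly to fire. A minimal, hypothetical sketch of that structure follows; the threshold value and function names are invented for illustration:

```python
# Hypothetical sketch of the behavior described for the Phalanx CIWS:
# radar tracks are classified as threats based on speed and direction
# (closing on the ship), and in autonomous mode the system fires without
# further human direction. The threshold is invented for illustration.

THREAT_SPEED_MPS = 100.0  # hypothetical minimum speed to treat as a threat

def is_threat(speed_mps: float, closing: bool) -> bool:
    """Classify a radar track as a threat by speed and direction."""
    return closing and speed_mps >= THREAT_SPEED_MPS

def phalanx_response(speed_mps: float, closing: bool, autonomous_mode: bool) -> str:
    if not is_threat(speed_mps, closing):
        return "track only"
    # The mode determines only whether classification leads directly to fire.
    return "engage" if autonomous_mode else "alert operator"

assert phalanx_response(250.0, True, autonomous_mode=True) == "engage"
assert phalanx_response(250.0, True, autonomous_mode=False) == "alert operator"
assert phalanx_response(250.0, False, autonomous_mode=True) == "track only"
```

The sketch highlights that the threat-classification logic is identical in both modes; “autonomy” here is the removal of the human step between classification and engagement.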
MK-60 Griffin Missile System (United States)
Used by the U.S. Navy and manufactured by Raytheon, the MK-60 Griffin Missile System enables ships to defend themselves against “small boat threats” by employing a “surface-to-surface missile system.”[254] The MK-60 Griffin Missile System includes at least two variants: Griffin A, an unmanned aircraft system (UAS), and Griffin B, an unmanned aerial vehicle (UAV).[255] The Griffin B model uses GPS guidance to help identify a target, while the human operator is capable of controlling the type of detonation, as well as of changing the target location after the missile has been launched.[256]
Patriot Missile (United States)
The Patriot System, manufactured by Raytheon, is a surface-to-air missile defense system that uses radar to detect and identify hostile incoming missiles and fires missiles to intercept them.[257] Multiple countries use the Patriot system, including Egypt, Germany, Greece, Israel, Japan, Kuwait, the Netherlands, Saudi Arabia, South Korea, Spain, the United Arab Emirates, and the United States.[258] The Patriot’s radar system is responsible for automatically detecting and tracing incoming projectiles. When operating semi-autonomously, the Patriot computer system requires a human operator to authorize a launch.[259] When operating in a mode of heightened autonomy, the Patriot computer itself chooses whether or not to launch, based upon the speed and direction of the approaching projectile.[260]
SeaRAM (United States)
The SeaRAM anti-ship missile defense system, used by the U.S. Navy, combines features of the Phalanx and rolling airframe missile (RAM) guided weapons systems.[261] According to the manufacturer Raytheon, the SeaRAM can “identify and destroy approaching supersonic and subsonic threats, such as cruise missiles, drones, small boats, and helicopters.”[262] The RAM “fire and forget” missile incorporates some autonomous features, including a “dual-mode passive radio frequency system.”[263]
Sentry Robot (Russia)
In 2014, the Russian Strategic Missile Forces announced that they were planning to release armed sentry robots that could exhibit autonomy in identifying and attacking targets.[264] Little else is publicly known about the specific features of these machines because the prototypes have not yet been released. Uralvagonzavod, a Russian defense firm, anticipates that it will be able to demonstrate prototypes by 2017.[265] In December 2015, U.S. Defense Department officials expressed alarm at the development of the “highly capable autonomous combat robots” that would be “capable of independently carrying out military operations.”[266]
Sentry Tech (Israel)
Manufactured by Rafael Advanced Defense Systems, the Sentry Tech system “consists of a lineup of remote-controlled weapon stations integrated with security and intelligence sensors…providing an infiltration alert via ground and airborne sensors” to provide operators with information on whether to fire weapons.[267] The system is mainly used by Israel along the Gaza border.[268] Sentry Tech does not operate with autonomy in its features; rather, it is a remote-controlled weapon station. Once a potential target has been identified, an operator remotely controls the Sentry Tech to track the target and is capable of choosing to attack the target with the Sentry’s machine gun turret.[269]
SGR A1 Sentry Gun (South Korea)
The SGR A1 is a stationary robot that operates a machine-gun turret, originally designed by Korea University and Samsung Techwin. The robot guards the Demilitarized Zone (DMZ) between North and South Korea. It uses an infrared camera surveillance system to identify potential intruders. When an individual comes within ten meters of the robot, the SGR A1 demands the necessary access code and uses voice recognition to determine whether the intruder has provided the correct code. If the intruder fails to do so, the SGR A1 has three options: ring an alarm bell, fire rubber bullets, or fire its turreted machine gun.[270] The SGR A1 normally operates with remote human authorization required before it may fire.[271] Central to this decision is whether the target has appeared to “surrender.” The robot is programmed to recognize that a human with its arms held high in the air is attempting to surrender.[272]
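The escalation sequence described for the SGR A1 — proximity check, access-code challenge, surrender recognition, and a human-authorization gate on lethal fire — can be sketched as a simple decision procedure. The ordering among the non-lethal options and the function names are assumptions invented for illustration:

```python
# Hypothetical sketch of the escalation logic described for the SGR A1:
# within ten meters, demand an access code; if it is wrong, choose among
# alarm, rubber bullets, or the machine gun, with a recognized surrender
# posture and a human-authorization gate constraining lethal fire.
# The ordering among the non-lethal options is an assumption.

def sgr_a1_response(distance_m: float, code_correct: bool,
                    surrendering: bool, human_authorizes_fire: bool) -> str:
    if distance_m > 10.0:
        return "monitor"
    if code_correct:
        return "grant access"
    if surrendering:
        return "ring alarm"          # hold fire on recognized surrender
    if human_authorizes_fire:
        return "fire machine gun"    # lethal force gated on human authorization
    return "ring alarm"              # default non-lethal response

assert sgr_a1_response(50.0, False, False, False) == "monitor"
assert sgr_a1_response(5.0, True, False, False) == "grant access"
assert sgr_a1_response(5.0, False, True, True) == "ring alarm"
assert sgr_a1_response(5.0, False, False, True) == "fire machine gun"
```

Note how the surrender check precedes, and can override, the authorization gate in this sketch — a choice of rule ordering that itself encodes a normative judgment of the kind this report is concerned with.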
Super aEgis II (South Korea)
The Super aEgis II, manufactured by DoDAAM, is a robot sentry with certain automated features. It incorporates a machine-gun turret and is used primarily by South Korea in the DMZ.[273] It uses a combination of digital cameras and thermal imaging to identify potential targets, allowing it to operate in the dark.[274] The Super aEgis II requires a human to authorize any use of lethal force. Before firing, it automatically issues a warning, advising potential targets (in Korean) to "turn back or we will shoot."[275] If the target continues to advance, a remote human operator enters a password to enable the Super aEgis II to shoot.[276]
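The division of labor reported for the Super aEgis II — automated detection and warning, but lethal force conditioned on a password entered by a remote operator — can be sketched as a human-in-the-loop gate. This is a hypothetical illustration under stated assumptions, not the manufacturer's interface; the password mechanism is the only element drawn from the public reporting, and all names below are invented.

```python
# Hypothetical sketch of a human-in-the-loop firing gate of the kind
# reported for the Super aEgis II. All identifiers are illustrative;
# only the warning-then-password sequence is drawn from public reporting.

OPERATOR_PASSWORD = "example-only"  # placeholder, not a real credential

def issue_warning() -> None:
    # Automatic step: "turn back or we will shoot" (reportedly in Korean).
    print("Warning issued to target")

def engage(target_advancing: bool, operator_password: str) -> str:
    issue_warning()                           # automated, no human input
    if not target_advancing:
        return "stand down"
    if operator_password == OPERATOR_PASSWORD:
        return "fire enabled"                 # explicit human authorization
    return "hold fire"                        # no valid human authorization
```

The design choice the sketch highlights is that autonomy resides in sensing and warning, while the use-of-force "decision" is withheld from the algorithm: without the operator's password, the system's computational pathway terminates at "hold fire."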
Cyber Capabilities
Stuxnet (United States and Israel)
Stuxnet is reportedly a cyberweapon that was used to attack Iran's nuclear-enrichment operations in 2009 and 2010. The specifics of the malware remain uncertain, but it was reportedly developed by the United States and Israel in a mission codenamed "Olympic Games."[277] Stuxnet allegedly caused computers at Natanz (Iran's nuclear-enrichment facility) to malfunction, reprogramming the centrifuge controllers to spin the centrifuges too fast and thereby damaging delicate machinery.[278] It is believed to have damaged 1,000 of Iran's 6,000 centrifuges in 2010.[279] Because it was intended to operate inside a computer system that is "air-gapped" (disconnected from the internet and other computer networks), Stuxnet was designed to function, once launched, without further external human direction or input.[280] Its code was written so that, once connected to the facility's network, it would begin sabotaging the centrifuge software immediately and continue doing so without further outside guidance.
[39]. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest & Clifford Stein, Introduction to Algorithms 5 (3rd ed. 2009).
[40]. Id.
[41]. Id.
[42]. Id.
[43]. Robotics Today, RIA News, Spring 1980, at 7, cited in Robotics and the Economy: A Staff Study, Prepared for the Use of the Subcommittee on Monetary and Fiscal Policy of the Joint Economic Committee, Congress of the United States 4 n.3 (1982).
[44]. Robohub Editors, Robohub Roundtable: Why Is It So Difficult to Define Robot?, Robohub, April 29, 2016, http://robohub.org/robohub-roundtable-why-is-it-so-difficult-to-define-robot.
[45]. Neil Richards & William Smart, How Should the Law Think About Robots?, in Robot Law 3, 6 (Ryan Calo, Michael Froomkin & Ian Kerr eds., 2016).
[46]. The now-historical sense of the term “robot” denotes “[a] central European system of serfdom, by which a tenant’s rent was paid in forced labour or service.” See Robot n.1, Oxford English Dictionary (online ed.) (2016).
[47]. Robot n.2, Oxford English Dictionary (online ed.) (2016) (noting that, originally, this sense of the term was used “with reference to the mass-produced workers in Karel Čapek’s play R.U.R.: Rossum’s Universal Robots (1920) which are assembled from artificially synthesized organic material.”).
[48]. See infra Section 2: Examples of Purported Autonomous Weapon Systems.
[49]. See, e.g., Boston Dynamics, Introducing SpotMini, YouTube (June 23, 2016), https://www.youtube.com/watch?v=tf7IEVTDjng [https://perma.cc/LNV5-3SCH] (video of Boston Dynamic’s SpotMini robot, which purports to “perform[] some tasks autonomously, but often uses a human for high-level guidance.”).
[50]. See, e.g., Michael Rubenstein, Alejandro Cornejo & Radhika Nagpal, Programmable Self-Assembly in a Thousand-Robot Swarm, 345 Science 795, 796 (2014) (“We demonstrate a thousand-robot swarm capable of large-scale, flexible self-assembly of two-dimensional shapes entirely through programmable local interactions and local sensing, achieving highly complex collective behavior. The approach involves the design of a collective algorithm that relies on the composition of basic collective behaviors and cooperative monitoring for errors to achieve versatile and robust group behavior, combined with an unconventional physical robot design that enabled the creation of more than 1000 autonomous robots.”). In respect of this large-scale robotic swarm, the extent to which the robots “can be fully autonomous” is measured in terms of being “capable of computation, locomotion, sensing, and communication.” Id. at 796.
[51]. For an excellent analysis of some of the key technologies in relation to AWS, see Peter Margulies, Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts, in Research Handbook on Remote Warfare (Jens David Ohlin ed., forthcoming 2016).
[52]. Yann LeCun, Yoshua Bengio & Geoffrey Hinton, Deep Learning, 521 Nature 436, 436 (2015).
[53]. Id.
[54]. Id.
[55]. Id.
[56]. Cade Metz, In Two Moves, AlphaGo and Lee Sedol Redefined the Future, Wired (March 16, 2016, 7:00 A.M.), http://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future.
[57]. LeCun et al., supra note 52, at 436.
[58]. Id.
[59]. Id.
[60]. Id.
[61]. Id.
[62]. Id.
[63]. Id. (citations omitted).
[64]. Id.
[65]. Id.
[66]. Silver et al., supra note 14, at 488.
[67]. See Christof Koch, How the Computer Beat the Go Master, Scientific American (March 19, 2016), http://www.scientificamerican.com/article/how-the-computer-beat-the-go-master.
[68]. LeCun et al., supra note 52, at 439.
[69]. Silver et al., supra note 14, at 484.
[70]. Silver et al., supra note 14, at 484; on supervised learning, see LeCun et al., supra note 52, at 436–38.
[71]. Silver et al., supra note 14, at 486.
[72]. Id. at 489 (citations omitted).
[73]. Id.
[74]. Gov’t of Switz., Towards a “Compliance-Based” Approach to LAWS [Lethal Autonomous Weapons Systems] 1 (March 30, 2016) (informal working paper), http://www.unog.ch/80256EDD006B8954/(httpAssets)/D2D66A9C427958D6C1257F8700415473/$file/2016_LAWS+MX_CountryPaper+Switzerland.pdf [hereinafter Swiss, “Compliance-Based” Approach].
[75]. Id.
[76]. Id. at 1–2.
[77]. Id. at 2.
[78]. Id.
[79]. Advisory Council on International Affairs, Autonomous Weapon Systems: The Need for Meaningful Human Control 7 (Advisory Report No. 97, 2015), http://aiv-advice.nl/8gr [hereinafter AIV].
[80]. Dutch Government, Response to AIV/CAVV Report, supra note 22.
[81]. Id.
[82]. Id.
[83]. Dutch Government, Response to AIV/CAVV Report, supra note 22.
[84]. Id.
[85]. Id.
[86]. Id.
[87]. Id.
[88]. Id.
[89]. Though the government agrees with the advisory committee “that definitions should be agreed on (in accordance with recommendation no. 4).” Dutch Government, Response to AIV/CAVV Report, supra note 22. As noted above, the Dutch government “reject[ed] outright the possibility of developing and deploying fully autonomous weapons.” Id.
[90]. Gov’t of Fr., Characterization of a LAWS (April 11–15, 2016) (non-paper), http://www.unog.ch/80256EDD006B8954/(httpAssets)/5FD844883B46FEACC1257F8F00401FF6/$file/2016_LAWSMX_CountryPaper_France+CharacterizationofaLAWS.pdf (bold in the original).
[91]. U.S. Dep’t of Def., Dir. 3000.09, Autonomy in Weapon Systems ¶ 1 (Nov. 21, 2012) [hereinafter DOD AWS Dir.].
[92]. Id. at ¶ 2.
[93]. Id.
[94]. Id.
[95]. Id. at 13–14.
[96]. Id. at 14.
[97]. Id.
[98]. Id. at ¶ 4.
[99]. Id.
[100]. Id.
[101]. Id.
[102]. Id.
[103]. Id.
[104]. Id.
[105]. Id.
[106]. Id.
[107]. Id.
[108]. Id.
[109]. Id. at 5–15.
[110]. U.S. Dep’t of Def., Law of War Manual § 6.5.9.1 (2016) (internal reference omitted) [hereinafter Law of War Manual].
[111]. Id.
[112]. Id.
[113]. U.K. Ministry of Def., Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems (2011), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/33711/20110505JDN_211_UAS_v2U.pdf.
[114]. Id. at iii.
[115]. Id. at 2-2.
[116]. Id. at 2-2–2-3.
[117]. Id. at 2-3.
[118]. Id.
[119]. Id. at 2-3 n.5 (giving examples of “take-off and landing; navigation/route following; pre-programmed response to events such as loss of a command and communication link; and automated target detection and recognition”).
[120]. Id. at 2-3.
[121]. Id.
[122]. Id. at 2-3–2-4 (further stating in this connection that “[a]s computing and sensor capability increases, it is likely that many systems, using very complex sets of control rules, will appear and be described as autonomous systems, but as long as it can be shown that the system logically follows a set of rules or instructions and is not capable of human levels of situational understanding, then they should only be considered to be automated”).
[123]. Id. at 2-4.
[124]. Id. at 5-1–5-12. See also infra Section 3.
[125]. Id. at 2-4 (citation omitted).
[126]. Id. at 5-5.
[127]. Id.
[128]. Id.
[129]. Id.
[130]. Id. (noting, however, that this data “link may not need to be continuous”).
[131]. Kevin Bonsor, How Landmines Work, How Stuff Works (June 19, 2001), http://science.howstuffworks.com/landmine.htm.
[132]. Mines, FAS Military Analysis Network (Dec. 12, 1998), http://fas.org/man/dod-101/sys/ship/weaps/mines.htm.
[133]. Landmines, Landmine and Cluster Munition Monitor (2014), http://www.the-monitor.org/en-gb/the-issues/landmines.aspx.
[134]. Sam LaGrone, A Terrible Thing That Waits (Under the Ocean), Popular Science (May 19, 2014), http://www.popsci.com/blog-network/shipshape/terrible-thing-waits-under-ocean.
[135]. Guillermo C. Gaunaurd, Acoustic Mine, Access Science (2014), http://www.accessscience.com/content/006000.
[136]. LaGrone, supra note 134.
[137]. MK 60 Encapsulated Torpedo (CAPTOR), FAS Military Analysis Network (Dec. 13, 1998), http://fas.org/man/dod-101/sys/dumb/mk60.htm.
[138]. Id.
[139]. See, e.g., Ryan Gallagher, Military Moves Closer to Truly Autonomous Drones, Slate (Jan. 16, 2013), http://www.slate.com/blogs/future_tense/2013/01/16/taranis_neuron_militaries_moving_closer_to_truly_autonmoous_drones.html.
[140]. X-47B UCAS Makes Aviation History…Again!, Northrop Grumman, http://www.northropgrumman.com/Capabilities/x47bucas/Pages/default.aspx (last visited Aug. 24, 2016).
[141]. Loitering with Intent, Jane’s Int’l Def. Rev. (Nov. 27, 2015).
[142]. See generally U.S. Dep’t of the Navy, The Navy Unmanned Surface Vehicle Master Plan (2007), http://www.navy.mil/navydata/technology/usvmppr.pdf.
[143]. See, e.g., Autonomous Surface Vehicles Ltd., Unmanned Marine Systems, Unmanned Systems Technology, http://www.unmannedsystemstechnology.com/company/autonomous-surface-vehicles-ltd (last visited Aug. 24, 2016).
[144]. Denise Crimmins & Justin Manley, What Are AUVs and Why Do We Use Them?, National Oceanic and Atmospheric Administration (2008), http://oceanexplorer.noaa.gov/explorations/08auvfest/background/auvs/auvs.html.
[145]. Autonomous Underwater Vehicles, Woods Hole Oceanographic Institution, http://www.whoi.edu/main/auvs (last visited Aug. 24, 2016).
[146]. Bill Carey, Boeing Phantom Develops ‘Dominator’ UAV, AIN Online (Nov. 2, 2012), http://www.ainonline.com/aviation-news/defense/2012-11-02/boeing-phantom-works-develops-dominator-uav.
[147]. Id.
[148]. Huw Williams, Boeing to Evaluate CSS for Dominator, Jane's Int'l Def. Rev. (Oct. 31, 2012).
[149]. Huw Williams, IAI to Offer Broad UGV Portfolio, Jane's Int'l Def. Rev. (July 8, 2016).
[150]. Damian Kemp, AUSA 2015: G-NIUS Displays Loitering Munition-Equipped Guardium Concept, Jane’s Int’l Def. Rev. (Oct. 13, 2015).
[151]. Huw Williams, G-NIUS Reveals Its Plans for Guardium Development, Jane's Int'l Def. Rev. (June 25, 2008).
[152]. Id.
[153]. K-MAX, Lockheed Martin, http://www.lockheedmartin.com/us/products/kmax.html (last visited Aug. 24, 2016).
[154]. K-MAX Unmanned Aircraft System, Lockheed Martin, http://www.lockheedmartin.com/content/dam/lockheed/data/ms2/documents/K-MAX-brochure.pdf (last visited Aug. 24, 2016).
[155]. Knifefish Unmanned Undersea Vehicle, General Dynamics Mission Systems, https://gdmissionsystems.com/maritime-strategic/submarine-systems/knifefish-unmanned-undersea-vehicle (last visited Aug. 24, 2016).
[156]. John Reed, Meet the Navy’s Knifefish Mine-Hunting Robot, Defense Tech (Apr. 16, 2012), http://www.defensetech.org/2012/04/16/meet-the-navys-knifefish-mine-hunting-robot/.
[157]. Mission Possible? Fledgling Ship-Based Autonomous Systems Taking Off at Sea, Jane's Int'l Def. Rev. (Oct. 12, 2015). See also Grace Jean, Bluefin Robotics to Deliver Knifefish Variant to NRL in 2014, Jane's Int'l Def. Rev. (May 2, 2013).
[158]. Reed, supra note 156.
[159]. James Hardy, China’s Sharp Sword UCAV Makes Maiden Flight, Jane’s Def. Wkly. (Nov. 22, 2015).
[160]. Id.
[161]. Id.
[162]. Kelvin Wong, CASC Showcases New Generation of UAV Weapons, Jane’s Int’l Def. Rev. (Nov. 20, 2014).
[163]. Craig Caffrey, Closing the Gaps: Air Force Modernisation in China, Jane’s Def. Wkly. (Oct. 2, 2015).
[164]. Nicholas de Larrinaga, France Begins Naval Testing of Neuron UCAV, Jane's Def. Wkly. (May 19, 2016).
[165]. Berenice Baker, Taranis vs. nEUROn – Europe’s Combat Drone Revolution, Airforce-Technology.com (May 6, 2014), http://www.airforce-technology.com/features/featuretaranis-neuron-europe-combat-drone-revolution-4220502.
[166]. David Cenciotti, First European Experimental Stealth Combat Drone Rolled Out: The Neuron UCAV Almost Ready for Flight, The Aviationist (Jan. 20, 2012), https://theaviationist.com/2012/01/20/neuron-roll-out.
[167]. Russia’s Platform-M Combat Robot on Display in Sevastopol, RT News (July 22, 2015, 8:20 AM), https://www.rt.com/news/310291-russia-military-robot-sevastopol.
[168]. Id.
[169]. Franz-Stefan Gady, Meet Russia’s New Killer Robot, The Diplomat (July 21, 2015), http://thediplomat.com/2015/07/meet-russias-new-killer-robot.
[170]. Gary Martinic, Unmanned Maritime Surveillance and Weapons Systems, Australian Naval Institute (July 8, 2014), http://navalinstitute.com.au/unmanned-maritime-surveillance-and-weapons-systems.
[171]. Casandra Newell, Egypt Orders Pluto Plus ROVs, Jane’s Navy Int’l (June 19, 2009).
[172]. Briefing: Rolling in the Deep, Jane’s Def. Wkly. (March 6, 2011).
[173]. Columbia Group to Supply Pluto Plus UUVs to Egyptian Navy, Def. Industry Daily (June 21, 2009), http://www.defenseindustrydaily.com/Columbia-Group-to-Supply-Pluto-Plus-UUVs-to-Egyptian-Navy-05530/.
[174]. Protector Unmanned Surface Vehicle (USV), Israel, Naval-Technology.com, http://www.naval-technology.com/projects/protector-unmanned-surface-vehicle/ (last visited Aug. 24, 2016).
[175]. Huw Williams, Rafael Looks to Extend Protector USV Control Range, Jane's Int'l Def. Rev. (Aug. 8, 2013).
[176]. Id.
[177]. Richard Scott, New Protector USV Variant Detailed, Jane's Int'l Def. Rev. (Nov. 12, 2012).
[178]. Rachel Courtland, DARPA’s Self-Driving Submarine Hunter Steers Like a Human, IEEE Spectrum (Apr. 7, 2016), http://spectrum.ieee.org/automaton/robotics/military-robots/darpa-actuv-self-driving-submarine-hunter-steers-like-a-human.
[179]. Scott Littlefield, Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV), Defense Advanced Research Projects Agency, http://www.darpa.mil/program/anti-submarine-warfare-continuous-trail-unmanned-vessel (last visited Aug. 24, 2016).
[180]. Courtland, supra note 178.
[181]. Littlefield, supra note 179.
[182]. Rick Stella, Ghost Ship: Stepping aboard Sea Hunter, the Navy’s Unmanned Drone Ship, Digital Trends (Apr. 11, 2016), http://www.digitaltrends.com/cool-tech/darpa-officially-christens-the-actuv-in-portland.
[183]. Id.
[184]. John Reed, Meet Skat, Russia's Stealthy Drone, Foreign Policy (June 3, 2013), http://foreignpolicy.com/2013/06/03/meet-skat-russias-stealthy-drone.
[185]. Id.
[186]. Id.
[187]. Andrew White, Unmanned Ambitions: European UAV Developments, Jane's Def. Wkly. (Oct. 27, 2015).
[188]. Guia Marie Del Prado, This Drone Is One of the Most Secretive Weapons in the World, Tech Insider (Sep. 29, 2015), http://www.techinsider.io/british-taranis-drone-first-autonomous-weapon-2015-9.
[189]. Gallagher, supra note 139.
[190]. Id.
[191]. Id.
[192]. Grace Jean, X-47B Catapults Into New Era of Naval Aviation, Jane’s Int’l Def. Rev. (May 20, 2013).
[193]. Spencer Ackerman, Exclusive Pics: The Navy’s Unmanned, Autonomous “UFO,” Wired (July 31, 2012), https://www.wired.com/2012/07/x47b.
[194]. Jean, supra note 192.
[195]. Ackerman, supra note 193.
[196]. Jerry Hendrix, Put the X-47B Back to Work - As a Tanker, Defense One (June 13, 2016), http://www.defenseone.com/ideas/2016/06/put-x-47b-back-work-tanker/129029.
[197]. Ackerman, supra note 193.
[198]. See, e.g., Andreas Parsch, McDonnell Douglas FGM-77 Dragon, Directory of U.S. Military Rockets and Missiles (June 7, 2002), http://www.designation-systems.net/dusrm/m-77.html.
[199]. Raytheon/Lockheed Martin FGM-148 Javelin Anti-Tank (AT) Missile Launcher (1996), Military Factory (April 15, 2016), http://www.militaryfactory.com/smallarms/detail.asp?smallarms_id=391 [hereinafter Raytheon].
[200]. Hughes, Reign of Fire: UK RAF Readies for Brimstone 2, Jane's Int'l Def. Rev. (Sept. 4, 2014).
[201]. Id.
[202]. Nicholas de Larrinaga, Farnborough 2016: Brimstone 2 Enters Service, Begins Apache Trials, Jane's Def. Wkly. (July 14, 2016).
[203]. Hughes, supra note 200.
[204]. Id.
[205]. Id.
[206]. Brimstone Advanced Anti-Armour Missile, Army Technology.com, http://www.army-technology.com/projects/brimstone (last visited Aug. 24, 2016).
[207]. Id.
[208]. Raytheon, supra note 199.
[209]. Id.
[210]. Jeremy Binnie, U.S. Clears More Javelins for Qatar, Jane's Def. Wkly. (May 27, 2016).
[211]. Raytheon, supra note 199.
[212]. Id.
[213]. Id.
[214]. Loitering with Intent, supra note 141.
[215]. Id.
[216]. Id.
[217]. Richard Scott, Joint Strike Missile Starts Flight Test Programme, Jane’s Missiles & Rockets (Nov. 16, 2015), http://www.janes.com/article/55989/joint-strike-missile-starts-flight-test-programme.
[218]. Id.
[219]. Kongsberg’s NSM/JSM Anti-Ship & Strike Missile Attempts to Fit in Small F-35 Stealth Bay, Defense Industry Daily (Nov. 12, 2015), http://www.defenseindustrydaily.com/norwegian-contract-launches-nsm-missile-03417 [hereinafter Kongsberg].
[220]. Franz-Stefan Gady, F-35’s Joint Strike Missile Successfully Completes Flight Test in US, The Diplomat (Nov. 13, 2015), http://thediplomat.com/2015/11/f-35s-joint-strike-missile-successfully-completes-flight-test-in-us.
[221]. Kongsberg, supra note 219.
[222]. Aegis Combat System, Lockheed Martin, http://www.lockheedmartin.com/us/products/aegis.html (last visited Aug. 24, 2016).
[223]. Aegis Weapon System, America’s Navy: United States Navy Fact File (Jan. 5, 2016), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=200&ct=2.
[224]. Paul Scharre & Michael C. Horowitz, An Introduction to Autonomy in Weapons Systems 21 (Feb. 2015) (working paper), http://www.cnas.org/sites/default/files/publications-pdf/Ethical%20Autonomy%20Working%20Paper_021015_v02.pdf.
[225]. Id.
[226]. Id.
[227]. Scharre & Horowitz, supra note 224, at 21.
[228]. Thales Targets AK-630 Users for Fire Control, Jane’s Navy Int’l (Apr. 13, 2005).
[229]. Id.
[230]. Nathan Hodge, Raytheon Ramps Up Centurion Production, Jane’s Def. Wkly. (March 20, 2008).
[231]. Id.
[232]. Centurion C-RAM Counter-Rocket, Artillery, and Mortar Weapon System, Army Recognition, http://www.armyrecognition.com/united_states_us_army_artillery_vehicles_system_uk/centurion_c-ram_land-based_weapon_system_phalanx_technical_data_sheet_specifications_pictures_video.html (last visited Aug. 24, 2016).
[233]. Counter Rocket, Artillery and Mortar (C-RAM), GlobalSecurity.org (July 7, 2011), http://www.globalsecurity.org/military/systems/ground/cram.htm.
[234]. Kristin Horitski, Counter-Rocket, Artillery, Mortar (C-RAM), Missile Defense Advocacy Alliance (March 2016), http://missiledefenseadvocacy.org/missile-defense-systems-2/missile-defense-systems/u-s-deployed-intercept-systems/counter-rocket-artillery-mortar-c-ram.
[235]. Heather M. Roff, Killer Robots on the Battlefield, Slate (April 7, 2016), http://www.slate.com/articles/technology/future_tense/2016/04/the_danger_of_using_an_attrition_strategy_with_autonomous_weapons.html.
[236]. GDF, Weapons Systems.net, http://weaponsystems.net/weaponsystem/EE02%20-%20GDF.html (last visited Aug. 24, 2016).
[237]. Noah Shachtman, Robot Cannon Kills 9, Wounds 14, Wired (Oct. 18, 2007), https://www.wired.com/2007/10/robot-cannon-ki.
[238]. Goalkeeper – Close-In Weapon System, Thales, https://www.thalesgroup.com/en/goalkeeper-close-weapon-system# (last visited Aug. 24, 2016).
[239]. Id.
[240]. Id.
[241]. Iron Dome Weapon System, Raytheon, http://www.raytheon.com/capabilities/products/irondome (last visited Aug. 24, 2016).
[242]. Raoul Heinrichs, How Israel's Iron Dome Anti-Missile System Works, Business Insider (July 30, 2014), http://www.businessinsider.com/how-israels-iron-dome-anti-missile-system-works-2014-7.
[243]. Scharre & Horowitz, supra note 224, at 21.
[244]. India – Kashtan Self-Defence System for Retrofit, Jane’s Int’l Def. Rev. (May 1, 2001).
[245]. Scharre & Horowitz, supra note 224, at 21.
[246]. Nicholas Fiorenza, Luftwaffe Receives MANTIS C-RAM System, Jane’s Def. Wkly. (Nov. 28, 2012).
[247]. Id.
[248]. Phalanx Close-In Weapon System, Raytheon, http://www.raytheon.com/capabilities/products/phalanx/ (last visited Aug. 24, 2016).
[249]. Scharre & Horowitz, supra note 224.
[250]. MK 15 Close-In Weapons System (CIWS), America’s Navy: United States Navy Fact File (May 9, 2016), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=487&ct=2.
[251]. Phalanx CIWS: The Last Defense, On Ship and Ashore, Defense Industry Daily (Feb. 16, 2016), https://www.defenseindustrydaily.com/phalanx-ciws-the-last-defense-on-ship-and-ashore-02620.
[252]. MK 15 Phalanx Close-In Weapons System (CIWS), FAS Military Analysis Network (Jan. 9, 2003), http://fas.org/man/dod-101/sys/ship/weaps/mk-15.htm.
[253]. Id.
[254]. MK 60 Griffin Missile System, America’s Navy: United States Navy Fact File (Nov. 25, 2013), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=593&ct=2.
[255]. Strike Out: Unmanned Systems Set for Wider Attack Role, Jane's Int'l Def. Rev. (July 17, 2015).
[256]. Id.
[257]. Andreas Parsch, Raytheon MIM-104 Patriot, Directory of Military US Rockets and Missiles (Dec. 3, 2002), http://www.designation-systems.net/dusrm/m-104.html.
[258]. Scharre & Horowitz, supra note 224, at 21–22.
[259]. Patriot Missiles (PAC-1, PAC-2, PAC-3), Missile Threat (Dec. 22, 2013), http://missilethreat.com/defense-systems/patriot-pac-1-pac-2-pac-3.
[260]. Marshall Brain, How Patriot Missiles Work, How Stuff Works (March 28, 2003), http://science.howstuffworks.com/patriot-missile.htm.
[261]. SeaRAM Anti-Ship Missile Defense System, Raytheon, http://www.raytheon.com/capabilities/products/searam (last visited Aug. 24, 2016).
[262]. Id.
[263]. SeaRAM Anti-Ship Missile Defence System, United States of America, Naval-Technology.com, http://www.naval-technology.com/projects/searam-anti-ship-missile-defence-system (last visited Aug. 24, 2016).
[264]. Patrick Tucker, The Pentagon Is Nervous about Russian and Chinese Killer Robots, Defense One (Dec. 14, 2015), http://www.defenseone.com/threats/2015/12/pentagon-nervous-about-russian-and-chinese-killer-robots/124465.
[265]. Producer of Russia’s Armata T-14 Plans to Create Army of AI Robots, RT International (Oct. 20, 2015, 11:43 P.M.), https://www.rt.com/news/319229-russia-armata-tanks-robots.
[266]. The Pentagon is Growing Concerned Over Development of Russian and Chinese Combat Robots, National Security News (Dec. 28, 2015), http://www.nationalsecurity.news/2015-12-22-the-pentagon-is-growing-concerned-over-development-of-russian-and-chinese-combat-robots.html.
[267]. Sentry-Tech, Rafael Advanced Defense Systems Ltd., http://www.rafael.co.il/Marketing/396-1687-en/Marketing.aspx (last visited Aug. 24, 2016).
[268]. Robin Hughes & Alon Ben-David, IDF Deploys Sentry Tech on Gaza Border, Jane’s Def. Wkly. (June 6, 2007).
[269]. Id.
[270]. Samsung Techwin SGR-A1 Sentry Guard Robot, GlobalSecurity.org (July 7, 2011), http://www.globalsecurity.org/military/world/rok/sgr-a1.htm [hereinafter Sentry Guard].
[271]. Keith Wagstaff, Future Tech? Autonomous Killer Robots Are Already Here, NBC News (May 15, 2014), http://www.nbcnews.com/tech/security/future-tech-autonomous-killer-robots-are-already-here-n105656.
[272]. Sentry Guard, supra note 270.
[273]. Simon Parkin, Killer Robots: The Soldiers That Never Sleep, BBC Future (July 16, 2015), http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep. The system is, however, also used by other countries, such as the United Arab Emirates and Qatar.
[274]. Id.
[275]. Id.
[276]. Id.
[277]. Ellen Nakashima & Joby Warrick, Stuxnet Was Work of U.S. and Israeli Experts, Officials Say, Wash. Post (June 2, 2012), https://www.washingtonpost.com/world/national-security/stuxnet-was-work-of-us-and-israeli-experts-officials-say/2012/06/01/gJQAlnEy6U_story.html.
[278]. David E. Sanger, Obama Order Sped Up Wave of Cyberattacks Against Iran, N.Y. Times (May 31, 2012), http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html.
[279]. Kim Zetter, An Unprecedented Look at Stuxnet, the World's First Digital Weapon, Wired (Nov. 3, 2014), https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet.
[280]. See Dorothy E. Denning, Stuxnet: What Has Changed?, 4 Future Internet 672, 674 (2012).