Note: More information about this PILAC Project as well as the full version of the Briefing Report are available here [link].
Section 4: Accountability Axes
In this section, we outline three accountability axes that might be relevant to regulating war algorithms. We do not claim to be exhaustive but rather aim to provide examples of key accountability avenues. We adapt an accountability approach focusing on the regulation of war algorithms along three axes: state responsibility for internationally wrongful acts, individual responsibility under international law for international crimes, and a wider notion of scrutiny governance.[450]
Below, for each axis, we highlight existing and possible accountability actors, forums, and mechanisms. Some of these axes utilize existing formal legal regimes; others depend more on “soft law” or less formal codes, standards, guidelines, and the like. Regulation may arise, for instance, through direct or intermediary modes, as well as by setting rules to allocate risk and by defining rules of private interaction.[451] As noted above, we focus on international law in part because it is the only normative framework that purports, in key respects but with important caveats, to be universal and uniform.
State Responsibility
Along this axis, accountability is a matter of state responsibility arising out of acts or omissions involving a war algorithm where those acts or omissions constitute a breach of a rule of international law. State responsibility entails discerning the content of the rule, assigning attribution to a state, determining available excuses (if any), and imposing measures of remedy.
Measures of Remedy
A range of consequences may arise where an internationally wrongful act involving a war algorithm, not otherwise excused, is attributable to a state. In this sub-section, we highlight a main form of liability: war reparations. But we also note some of the other existing mechanisms and avenues through which state responsibility may be pursued, such as diplomatic channels, arbitration, judicial proceedings, weapons-control regimes, and an IHL fact-finding body.
War Reparations to a State
As noted above, in general a consequence of state responsibility is the liability to make reparation. War reparations constitute one such form of liability. They “involve the transfer of legal rights, goods, property and, typically, money from one State to another in response to the injury caused by the use of armed force.”[452] Historical practice favors, “[i]n the specific case of war reparations, … the use of restitution, monetary compensation, territorial guarantees, guarantees of non-repetition, and symbolic reparations.”[453]
The Hague Convention concerning the Laws and Customs of War on Land (1907) and Additional Protocol I “establish an inter-State duty to pay compensation when a belligerent party violates the provisions of the Convention and ... Protocol I.”[454] Thus, with respect to who can claim reparations, “a State’s duty to provide inter-State reparations after the commission of an internationally wrongful act is certain.”[455]
As a practical matter, war reparations are still the exception rather than the norm. When they do occur, the most common form of reparations, according to an assessment of practice up to 1995, was a lump sum at the end of the war.[456] Nonetheless, pursuant to Security Council resolutions the United Nations Compensation Commission (UNCC) was established to address damages incurred in the course of the Iraq-Kuwait War (1990–91).[457] And the governments of Eritrea and Ethiopia established a commission to deal with reparations claims concerning an armed conflict between those two states.[458]
Where a state party does not fulfill the obligation concerning suppression of acts contrary to the Geneva Conventions, another state party may also, for instance, pursue diplomatic channels to encourage the non-complying state to fulfill the obligation. That other state party may, where available, also pursue arbitration (if the transgressing state agrees) or institute judicial proceedings (if a relevant tribunal can assert its jurisdiction over the transgressing state).
Weapons Monitoring, Inspection, and Verification Regimes
Weapons regimes may establish consequences for certain violations. Arms-control instruments range, in general, “from mere reporting duties and routine inspections (monitoring) to more invasive ad hoc inspections, sometimes so-called ‘challenge inspections’ at the request of a Member State (verification), up to compulsive methods in case of a determined breach (enforcement).”[459] Two of the main challenges of effective arms-control law are weak verification and limited enforcement mechanisms.
As noted above, the Arms Trade Treaty—which might cover various war algorithms—lays down a regulatory framework concerning the transfer of certain conventional weapons and related items. Through activities such as reporting and inspections, the Organization for the Prohibition of Chemical Weapons (OPCW) supervises the Chemical Weapons Convention. That treaty also provides for a challenge inspection procedure, “which is considered one of the most extensive verification procedures in the law of arms control, but has never been used, mainly due to political constraints.”[460] In comparison, the supervisory mechanism of the Biological Weapons Convention is weaker, consisting mainly of review conferences every five years.
International Fact-Finding Commission
Where certain rules of IHL are breached, the International Fact-Finding Commission (IFFC) established in Additional Protocol I may help provide measures of remedy. With respect to states parties to that treaty, the IFFC is competent, first, to enquire into any facts alleged to be a grave breach of or other serious violation of the Geneva Conventions of 1949 and Additional Protocol I.[461] Second, the IFFC is competent to “facilitate, through its good offices, the restoration of an attitude of respect for” the Geneva Conventions of 1949 and Additional Protocol I.[462] Where relevant, the design, development, or use of a war algorithm might implicate either or both of these competences. However, as a practical matter, it bears emphasis that the IFFC has never been utilized for either competence.
Other Avenues
Certain other state accountability avenues may arise even where the design, development, or use of a war algorithm attributable to a state does not constitute an internationally wrongful act. Two such measures to consider are reparations to an individual pursuant to international human rights law, and a highly contentious form of domestic tort liability.
Reparations to an Individual
As noted above, it is clear that a state may be provided reparations after the commission of an internationally wrongful act, including an applicable violation of IHL. Yet it is far less clear whether an individual right to reparation for victims of gross human rights violations has crystallized.[463] The U.N. General Assembly has adopted a resolution on the matter.[464] But that resolution has been characterized as falling into a category often referred to as “soft law”: while “[t]hese documents do not have the formal status of legally binding instruments such as treaties, … they nonetheless reflect principles of justice and serve as tools for victim-oriented policies and practices at national and international levels.”[465]
Nonetheless, to the extent it is applicable in relation to the design, development, or use of a war algorithm, IHRL may provide grounds for an individual to seek redress and reparation. The relevant violation would not be an internationally wrongful act vis-à-vis another state (or states) but rather a violation of an applicable provision of IHRL vis-à-vis an individual. For instance, “[t]he case-law developed in the jurisprudence of the [European Court of Human Rights] and the Inter-American Court of Human Rights … demonstrates an increasing readiness of these international (regional) adjudicative bodies to afford substantial reparative justice to victims, in particular in cases of gross violations of human rights.”[466]
Tortious Liability
Another state accountability avenue might arise in relation to a highly disputed form of tortious liability:[467] pecuniary compensation under domestic tort law for death or injury to the person, or damage to or loss of tangible property, caused by an act or omission which is alleged to be attributable under domestic law to a state other than the forum state and which involved a war algorithm. That compensation may be available only so long as the act or omission occurred in whole or in part in the territory of the forum state and so long as the author of the act or omission was present in the forum-state territory at the time of the act or omission.[468]
This notion of tortious liability requires discerning the content of applicable domestic law (including the relevant standard of care), attributing responsibility for the resulting harm to a state other than the forum state, confirming the presence of the author of the act in the forum state, determining the availability of immunity claims (if any), and imposing pecuniary compensation. This contested form of liability is derived from a purported “territorial tort” restriction to the applicability of state immunity found in the 2004 United Nations Convention on Jurisdictional Immunities of States and Their Property (UNCSI), which is not yet in force, and its customary analogue (if any).[469]
Individual Responsibility under International Law
As noted in section 3, a natural person may be held responsible under international law for committing an international crime connected with a war algorithm, including certain war crimes and crimes against humanity. To impose that liability, the judicial body would need to be able to understand the underlying war algorithm so as to adjudicate the legal parameters applicable in relation to it. Also as noted above, commentators have raised a number of concerns as to whether international law concerning individual responsibility for international crimes is suitable to address AWS, especially in relation to certain modes of responsibility, such as command and superior responsibility, and to mental elements (especially the requisite knowledge and intent).
This axis describes international and domestic avenues through which an individual may be held responsible for committing an international crime. We also briefly highlight another avenue—extraterritorial jurisdiction not in respect of internationally defined crimes—through which an individual may be held responsible in relation to the design, development, or use of a war algorithm.
International Crimes
International Criminal Tribunals
As noted in section 3, where it has jurisdiction, an international criminal court or tribunal may impose individual responsibility for the commission of international crimes. The ICC—which operates pursuant to the principle of complementarity to national jurisdictions—is the first such court established on a permanent basis. Numerous war crimes under the ICC’s jurisdiction may in principle be committed through the design, development, or use of war algorithms.
Suppression of Grave Breaches
Under the Geneva Conventions of 1949, states parties are obliged “to enact any legislation necessary to provide effective penal sanctions for persons committing, or ordering to be committed, any of the grave breaches of the” relevant instrument.[470] In principle, a war algorithm may be involved in the commission of such a breach. Each state party is obliged “to search for persons alleged to have committed, or to have ordered to be committed, such grave breaches, and shall bring such persons, regardless of their nationality, before its own courts.”[471] And each state party “may also, if it prefers, and in accordance with the provisions of its own legislation, hand such persons over for trial to another” state party, so long as that party has “made out a prima facie case.”[472]
Universal Jurisdiction
While “[s]tates generally do not have jurisdiction to define and punish crimes committed abroad by and against foreign nationals,” pursuant to universal jurisdiction “any State has the right to try a person with regard to certain internationally defined crimes.”[473] Originally, this “jurisdiction was recognized only with respect to piracy on the high seas.”[474] But “[a]s the human rights content of international law expanded, universal adjudicative jurisdiction also expanded to embrace universally condemned crimes and may now apply to slavery, genocide, torture, and war crimes.”[475] Such “[u]niversal jurisdiction to try these offences is not limited to situations in which they are committed on the high seas or in other areas outside the territory of any State, but generally confers no enforcement power to enter foreign territory or board a foreign ship without consent.”[476] Nonetheless, “[a]lthough the laws of each State define the offences over which its courts may exercise universal jurisdiction, the scope of legislative jurisdiction is limited by the fact that the offences subject to universal jurisdiction are determined by treaty and international law.”[477] As a practical matter, to date the exercise of domestic universal jurisdiction has arguably been the strongest form (even if not a very strong one overall) of enforcement of accountability for war crimes.
Other Avenues
Certain other individual accountability avenues might arise even where the design, development, or use of a war algorithm attributable to a natural person does not give rise to individual responsibility under international law for an international crime. One such avenue to consider is extraterritorial jurisdiction, which more and more states are turning to in order to protect their perceived interests.
Extraterritorial Jurisdiction
Extraterritorial jurisdiction refers “to the competence of a State to make, apply and enforce rules of conduct in respect of persons, property or events beyond its territory.”[478] Traditionally, the exercise of extraterritorial jurisdiction was viewed as available only in exceptional circumstances.[479] But today, more and more states are creating such regimes.
The background idea is that, with respect to conduct occurring beyond a state’s territory, the state perceives the need to protect not only its own interests but also the interests of international society.[480] States have perceived those interests in such areas as anti-trust and competition law, anti-terrorism law, and anti-bribery law.
Certain characteristics of war algorithms—including the development of some of the underlying technologies by transnational corporations and the modularity of the technology—might lead states to perceive strong interests in making, applying, and enforcing war-algorithm rules of conduct beyond their territories. Where states do so, it may be important to be attentive to the distinctions between the different ways that states may exercise extraterritorial jurisdiction. That is because some of those methods “are more likely to conflict with the competence of other States and therefore more likely to raise questions as to their compatibility with international law.”[481]
Scrutiny Governance
Along this axis, accountability is framed in terms of the extent to which a person or entity is and should be subject to, or should exercise, forms of internal or external scrutiny, monitoring, or regulation concerning a war algorithm.[482] Notably, scrutiny governance does not hinge on—but might implicate—potential and subsequent liability or responsibility.[483] The basic notion is that there are a number of avenues—other than or alongside legal responsibility—to hold oneself or others answerable for the exercise of war-algorithm power and authority. We highlight only a few of the various possible approaches: independent monitoring, norm (including legal) development, non-binding resolutions and codes of conduct, normative design of technical architectures, and community self-regulation.
Independent Monitoring
A vast array of institutions independently monitor compliance with law and regulations that may be relevant to war algorithms. Those institutions include bodies within international organizations, treaty-based weapons-control regimes, and non-governmental organizations. Note, however, that the existence of all of these institutions does not absolve any state from its independent duty to ensure its own compliance with international law in general and with IHL in particular. While the competence of these institutions is not explicitly stated in war-algorithm terms, their general purviews would encompass monitoring of at least certain elements of the development and operation of those algorithms. Included among those institutions are:
- The U.N. Security Council;[484]
- The U.N. General Assembly;[485]
- The U.N. Secretariat, including the Secretary-General,[486] the Office of the United Nations High Commissioner for Human Rights (OHCHR), and the U.N. Office for Outer Space Affairs (UNOOSA);
- The Human Rights Council, including Special Procedures (Special Rapporteurs);[487]
- Treaty-based human-rights and weapons-monitoring bodies and mechanisms;[488] and
- Non-governmental organizations.[489]
Norm Development (including of International Law)
Norms may be developed through formal or informal mechanisms.
With respect to international law, for instance, the U.N. “General Assembly shall initiate studies and make recommendations for the purpose of … encouraging the progressive development of international law and its codification.”[490] The U.N. General Assembly established its Legal Committee (Sixth Committee), which “is responsible for the UN General Assembly’s role in encouraging the codification and progressive development of international law.”[491] The workings of the Sixth Committee led to the establishment of the International Law Commission (ILC).[492] According to its Statute, the ILC is expected to bring onto its agenda only topics that are “necessary and desirable”[493]—or, “[i]n other words, only topics ‘ripe’ for codification and progressive development of international law are to be the subject of its work.”[494] This criterion leaves some room for the ILC to consider various topics as possible candidates for its work. Broadly speaking, “a topic may be considered ripe if the subject-matter regulates the essential necessities of States or the wider needs and/or contemporary realities of the international community or is one held central to the authority of international law, notwithstanding any existing disagreements among States on the topic.”[495] In principle, war algorithms could arguably fit that definition.
Norms and accompanying standards relevant to war algorithms may also be developed at levels other than international law. Pursuant to their legislative jurisdiction, states may promulgate municipal laws.[496] Moreover, whether pursuant to domestic law or regulations or to less formal bases, agencies, regulatory bodies, and other standards-setting entities—governmental or non-governmental—may articulate guidelines, standards, and the like.[497]
Non-Binding Resolutions and Declarations, and Interpretative Guides
While not laying down legal obligations, non-binding resolutions and declarations, as well as codes of conduct or informal manuals, may also contribute to the development of the normative framework concerning war algorithms. This has already occurred in relation to AWS: a 2014 resolution of the European Parliament “[c]alls on the High Representative for Foreign Affairs and Security Policy, the Member States and the Council to ... ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.”[498]
Moreover, at the 2016 CCW Informal Meeting of Experts, the Netherlands called “for the formulation of an interpretative guide that clarifies the current legal landscape with regard to the deployment of autonomous weapons.”[499] In recent years, a number of “Manuals”[500] as well as an “Interpretive Guide”[501] on international law pertaining to armed conflict in relation to certain thematic areas have been drafted. It is unclear whether the initiative called for by the Netherlands will align with these approaches or might take another form. But based on the initial articulation, it appears that the focus of the called-for “interpretative guide” will be on clarifying currently applicable law concerning the deployment of autonomous weapons.
Normative Design of Technical Architectures
Programmers, engineers, and others involved in the design, development, and use of war algorithms might take diverse measures to embed normative principles into those systems. The background idea is that code and technical architectures can function like a kind of law. Maximizing the auditability of that code—especially in light of legally relevant concepts such as attribution and reconstructability—might help strengthen external and internal scrutiny mechanisms.
To increase the likelihood of being adopted, such normative-design approaches would likely need to be devised in a manner that takes due consideration of the tension between, on one side, external transparency, and, on the other, a state’s interest in protecting classified technologies as well as the intellectual-property interests associated with those technologies. In addition, those thinking through ways to pursue war-algorithm accountability along this avenue should critically assess the experience of attempting to regulate cyber operations and cyber “warfare.” So far, those areas have eluded a universal normative regime. Like war algorithms, cyber operations and cyber “warfare” raise concerns regarding intellectual-property interests, the modularity and dual-use nature of the technologies, transparency with external actors due to classification regimes, and maintaining a qualitative edge.
Designing “Morally Responsible Engineering” and a “Partnership Architecture”
Some governments have recognized the importance of incorporating moral and ethical considerations into the engineering of systems that might be relevant to war algorithms.
In an October 2015 report on AWS, a Dutch “advisory committee advocates taking the interaction between humans and machines into account sufficiently in the design phase of autonomous weapon systems.”[502] Furthermore, “[i]n light of the importance of attributing responsibility and accountability, the [advisory committee] believes that, when procuring autonomous weapons, the government should ensure that the concept of morally responsible engineering is applied during the design stage.” For their part, the Ministries of Foreign Affairs and Defense consider that “recommendation to be an affirmation of existing policy,”[503] and emphasize that “the government and several of its knowledge partners are studying this theme.”[504]
Among the research programs funded by the Dutch government was a project entitled “Military Human Enhancement: Design for Responsibility and Combat Systems,” which was carried out by Delft University of Technology. One of the articles published as part of that project put forward the idea of a “partnership architecture.”[505] Two components undergird this idea. First, a mechanism is put forward through which both parties—the human and the machine—“do their job concurrently. In this way, each actor arrives at an own interpretation of the world thereby constructing a human representation of the world and a machine representation of the world at the same time.”[506] Second, work agreements—“explicit contracts between the human and the machine about the division of work”—are used to “minimize[] the automation-human coordination asymmetry because working agreements define an a priori explicit contract [regarding] what [to] and what not to delegate[] to the automation.”[507]
The main idea is that the resulting “partnership architecture can protect a commitment to responsibility within the armed forces.”[508] On one hand, “operators will be responsible for the terms of their working agreements with their machine.”[509] And on the other, working agreements may help “ensure that operators receive the morally relevant facts needed to make decisions that comply with IHL, as well as key moral principles.”[510]
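To make the working-agreements component concrete, a minimal sketch follows. It assumes (hypothetically; none of these names or task labels come from the cited project) that an agreement can be expressed as an explicit, machine-readable contract listing which tasks are delegated to the automation and which remain with the human operator, with undelegated tasks defaulting to the human:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "working agreement": an a priori explicit
# contract over the division of work between operator and automation.
# All class, field, and task names here are illustrative assumptions.

@dataclass
class WorkingAgreement:
    delegated_to_machine: set = field(default_factory=set)
    reserved_for_human: set = field(default_factory=set)

    def route(self, task: str) -> str:
        """Return who performs the task under this agreement."""
        if task in self.delegated_to_machine:
            return "machine"
        # Any task not explicitly delegated stays with the operator,
        # keeping the coordination asymmetry to a minimum.
        return "human"

agreement = WorkingAgreement(
    delegated_to_machine={"track_object", "classify_terrain"},
    reserved_for_human={"authorize_engagement"},
)

print(agreement.route("track_object"))          # machine
print(agreement.route("authorize_engagement"))  # human
print(agreement.route("unlisted_task"))         # human (default)
```

On this sketch, the operator's responsibility for "the terms of their working agreements" maps onto responsibility for the contents of the delegation sets, which are fixed before operation rather than negotiated in the moment.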
Coding Law
Software and hardware engineers, roboticists, and others involved in the development of war algorithms may consider taking a page from the internet playbook. The internet protocol suite (also known as TCP/IP) is a core set of protocols that define the way in which the internet functions. A fundamental choice at the heart of the internet’s architecture concerned defining the flow of information by allowing ordinary computers connected to the internet not only to receive but also to send information. This was neither a necessary nor an inevitable feature of the internet. (And whether one sees it today as a feature or a bug depends on one’s vantage point.) The suite of protocols could have been designed in other ways—for instance, the system could have distributed packets from a centralized hub, precluding individual computers from communicating directly with each other.
Lawrence Lessig argues that, through that structuring, TCP/IP embeds some regulatory—perhaps normative—principles in the design of the system.[511] Put another way, in defining the way in which computers could share data and communicate with one another, TCP/IP also forecloses alternative methods of communication, thereby imposing, if implicitly, regulations on the way in which the internet functions. In this way, code is a kind of law because it enables computers to do certain things (such as exchange packets of information) but, in doing so, also indirectly defines and narrows the specific way in which that exchange is accomplished. (It merits mention that code functions as a type of law in this conception irrespective of whether that was the intention of the system’s designers.)
At the 2016 CCW Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Danièle Bourcier imported Lessig’s general idea into the specific discussion on AWS where she raised the notion of designing “humanitarian law” into the relevant technical system.[512] What this might mean in practice is unclear. But in principle it might concern the design of the underlying algorithms as well as the constructed systems through which those algorithms are effectuated.
Auditable Algorithms
Making war algorithms more auditable may help foster accountability over them. “Audit logs,” for instance, record activity that takes place in an information architecture. In the U.S., national-security fusion centers “are supposed to employ audit logs that record the activity taking place in the information-sharing network, including ‘queries made by users, the information accessed, information flows between systems, and date- and time-markers for those activities.’”[513] (A fusion center is designed to promote information-sharing and to streamline intelligence-gathering, not only at the federal level between various agencies but also among the U.S. military and state- and local-level government.) In addition to the national-security realm, audit logs or similar mechanisms are mandated with respect to certain credit-rating agencies, financial transactions, and healthcare software. To be effective, audit logs need to be immutable.[514] While not specifically addressing AWS, the UK MoD Joint Doctrine Note on unmanned aircraft systems states that “[a] complex weapon system is also likely to require an authorisation and decisions log, to provide an audit trail for any subsequent legal enquiry.”[515]
Community Self-Regulation
A recent call for self-imposed regulation by a group of expert scientists in the domain of genetic engineering may provide a regulatory model for those involved in the development of war algorithms. The basic idea is that, even where there is no or little formal regulation, a community can choose, on its own initiative, to delineate what is and is not acceptable and to self-police the resulting boundaries.
The plea by some leading scientists partly concerned a relatively easy-to-use gene-editing technique called CRISPR/Cas9. (Gene-editing techniques, in short, “use enzymes called nucleases to snip DNA at specific points and then delete or rewrite the genetic information at those locations.”[516]) CRISPR/Cas9 had “suddenly made it possible to cross [a] Rubicon”: “[f]or decades, the ability to make changes that could be inherited in the human genome has been viewed as a fateful decision — but one that could be postponed because there was no safe and efficient way to edit the genome.”[517] With CRISPR/Cas9, it has been said, “the long theoretical issue now requires practical decisions.”[518]
In December 2015, the Organizing Committee for the International Summit on Human Gene Editing came to an agreement on “a recommendation not to stop human-gene-editing research outright, but to refrain from research and applications that use modified human embryos to establish a pregnancy.”[519] More specifically, intensive basic and preclinical research should proceed, the Committee said, but that research should be “subject to appropriate legal and ethical rules and oversight, on (i) technologies for editing genetic sequences in human cells, (ii) the potential benefits and risks of proposed clinical uses, and (iii) understanding the biology of human embryos and germline cells.”[520] And “[i]f, in the process of research, early human embryos or germline cells undergo gene editing,” the Committee entreated, “the modified cells should not be used to establish a pregnancy.”[521]
The Committee also called for an ongoing forum to address these issues. The push should be for “[t]he international community … [to] strive to establish norms concerning acceptable uses of human germline editing and to harmonize regulations, in order to discourage unacceptable activities while advancing human health and welfare.”[522] Against this backdrop, the Committee called upon the national academies that co-hosted the summit “to take the lead in creating an ongoing international forum to discuss potential clinical uses of gene editing; help inform decisions by national policymakers and others; formulate recommendations and guidelines; and promote coordination among nations.”[523] This forum, the Committee stated, “should be inclusive among nations and engage a wide range of perspectives and expertise,” such as “biomedical scientists, social scientists, ethicists, health care providers, patients and their families, people with disabilities, policymakers, regulators, research funders, faith leaders, public interest advocates, industry representatives, and members of the general public.”[524]
Zooming out, the call for various forms of self-regulation by these scientists might be relevant for those involved in the design and development of war algorithms—another area where some are concerned about crossing a moral Rubicon. In addition to the broader point (that, alongside forms of legal responsibility, a community can raise the normative bar for itself), specific possible regulatory avenues emerge: setting boundaries on possible research and imposing moratoriums (where deemed necessary); defining legal and ethical rules and oversight mechanisms; committing to review existing regulations on an ongoing basis; and establishing forums to address enduring and emergent concerns.
[450]. Derived in part from International Law Association, supra note 35, at 5.
[451]. See Wittes & Blum, supra note 31, at 203–206.
[452]. Sullo & Wyatt, supra note 297, at ¶ 1.
[453]. Id. at ¶ 4.
[454]. Id. at ¶ 5 (referring to art. 3 Hague Peace Conferences [1899 and 1907] and art. 91 AP I).
[455]. Id. at ¶ 4.
[456]. Id. (“Based on the analysis of practice until 1995, Lillich, Weston, and Bederman concluded that the settlement of international claims by lump sum agreements was by far the prevailing practice and the creation of arbitral tribunals such as the Iran-United States Claims Tribunal the exception.”).
[457]. Id. at ¶ 5.
[458]. Id. (citing to Agreement between the Government of the State of Eritrea and the Government of the Federal Democratic Republic of Ethiopia, U.N. Doc. A/55/686-S/2000/1183 Annex).
[459]. Adrian Loets, Arms Control, in Max Planck Encyclopedia of Public International Law ¶ 21 (2013).
[460]. Id. at ¶ 23 (citation omitted).
[461]. AP I, supra note 12, at art. 90(2)(c)(i).
[462]. Id. at art. 90(2)(c)(ii).
[463]. See Sullo & Wyatt, supra note 297, at ¶ 4; see generally Christian Tomuschat, State Responsibility and the Individual Right to Compensation Before National Courts, in The Oxford Handbook of International Law in Armed Conflict (Andrew Clapham, Paola Gaeta & Tom Haeck eds., 2014).
[464]. Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law, G.A. Res. 60/147 (Dec. 16, 2005).
[465]. Theo van Boven, Victims’ Rights, in Max Planck Encyclopedia of Public International Law ¶ 19 (2007).
[466]. Id. at ¶¶ 10–13.
[467]. Compare, e.g., Joanne Foakes & Roger O’Keefe, Article 12, in The United Nations Convention on Jurisdictional Immunities of States and Their Property: A Commentary 209, 209–224 (Roger O’Keefe, Christian J. Tams & Antonios Tzanakopoulos eds., 2013) with Tomuschat, supra note 463. As noted above, another form of pecuniary compensation—though one not framed in terms of tortious liability—may arise under IHRL.
[468]. Another form of tortious liability—one that, in principle, establishes jurisdiction for serious violations of IHL to national courts in accordance with the principle of universal jurisdiction—may be relevant, though perhaps more in theory than in practice, at least under current interpretations. See, e.g., Tomuschat, supra note 463. Under the Alien Tort Claims Act (ATCA), federal judges “shall have original jurisdiction of any civil action by an alien for a tort only, committed in violation of the law of nations or a treaty of the United States.” 28 U.S.C. § 1350 (2012). Actions have been filed under the ATCA against foreign governments and foreign corporations, as well as against the U.S. government. Yet recent judicial interpretations have narrowed the statute’s scope of application. See, e.g., Ingrid Wuerth, Kiobel v. Royal Dutch Petroleum Co.: The Supreme Court and the Alien Tort Statute, 107 Am. J. Int’l L. 601 (2013).
[469]. See generally Foakes & O’Keefe, supra note 467. The form of pecuniary compensation here, which is based on a municipal tort law of the forum state, is distinguishable from the innovative “war tort” idea articulated by Rebecca Crootof, which is based on serious violations of IHL; however, the two might interface where a municipal tort is linked to a serious violation of IHL. See Crootof, War Torts, supra note 20, at 2. Crootof argues that “just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for ‘war torts’: serious violations of international humanitarian law that give rise to state responsibility.” Id. She believes that a “successful ban on autonomous weapon systems is unlikely (and possibly even detrimental).” Id. Instead, in her view, “what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons.” Id.
[470]. GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146. See also AP I, supra note 12, at art. 85.
[471]. GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146. See also AP I, supra note 12, at art. 85.
[472]. GC I, supra note 348, at art. 49; GC II, supra note 348, at art. 50; GC III, supra note 348, at art. 129; GC IV, supra note 348, at art. 146. See also AP I, supra note 12, at art. 85.
[473]. Bernard H. Oxman, Jurisdiction of States, in Max Planck Encyclopedia of Public International Law ¶ 37 (2007).
[474]. Id. at ¶ 38.
[475]. Id. at ¶ 39.
[476]. Id.
[477]. Id.
[478]. Menno T. Kamminga, Extraterritoriality, in Max Planck Encyclopedia of Public International Law ¶ 1 (2012).
[479]. See id. at ¶ 3.
[480]. See id. at ¶ 4.
[481]. Id. at ¶ 1.
[482]. Derived in part from International Law Association, supra note 35, at 5.
[483]. The obligation to review weapons, means, and methods of warfare laid down in Article 36 of AP I and the customary law cognate (if any), discussed above, constitutes a form of required scrutiny that directly implicates legal responsibility.
[484]. See U.N. Charter art. 25, 39–42.
[485]. Under the U.N. Charter, “[t]he General Assembly may discuss any questions or any matters within the scope of the … Charter or relating to the powers and functions of any organs provided for in the … Charter, and, except as provided in Article 12, may make recommendations to the Members of the United Nations or to the Security Council or to both on any such questions or matters.” U.N. Charter art. 10. Among its explicit competences laid down in the U.N. Charter, “[t]he General Assembly may consider the general principles of co-operation in the maintenance of international peace and security, including the principles governing disarmament and the regulation of armaments, and may make recommendations with regard to such principles to the Members or to the Security Council or to both.” U.N. Charter art. 11 (emphasis added). And “[t]he General Assembly may call the attention of the Security Council to situations which are likely to endanger international peace and security.” Id.
[486]. Pursuant to the U.N. Charter, “[t]he Secretary-General may bring to the attention of the Security Council any matter which in his opinion may threaten the maintenance of international peace and security.” U.N. Charter art. 99. An inherent right to investigate in connection with this power has been invoked by several Secretaries-General. Katja Göcke & Hubertus von Mohr, United Nations, Secretary-General, in Max Planck Encyclopedia of Public International Law ¶ 18 (2013). The rationale is that “[s]ince it is necessary for the Secretary-General to have comprehensive knowledge of the situation in the conflict area before taking action, his authority [to bring any relevant matter to the attention of the Security Council] must encompass the right to conduct investigations and to implement preparatory fact-finding missions.” Id. at ¶ 20. According to Katja Göcke and Hubertus von Mohr, this power has proven its value especially “since States may for various reasons be reluctant to bring certain matters before the Security Council….” Id. at ¶ 19.
[487]. See, e.g., Christof Heyns (Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions), Rep. to Human Rights Council, UN Doc. A/HRC/23/47 (Apr. 9, 2013).
[488]. See, e.g., Human Rights Committee; Committee on Economic, Social and Cultural Rights (CESCR); Committee against Torture (CAT); Committee on the Rights of the Child (CRC); and Organisation for the Prohibition of Chemical Weapons (OPCW).
[489]. See, e.g., the Steering Committee of the Campaign to Stop Killer Robots (Human Rights Watch, Article 36, Association for Aid and Relief Japan, International Committee for Robot Arms Control, Mines Action Canada, Nobel Women’s Initiative, PAX, Pugwash Conferences on Science & World Affairs, Seguridad Humana en América Latina y el Caribe, and Women’s International League for Peace and Freedom). About Us, Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/about-us/ (last visited Aug. 25, 2016).
[490]. U.N. Charter art. 13.
[491]. Huw Llewellyn, United Nations, Sixth Committee, in Max Planck Encyclopedia of Public International Law ¶ 1 (2012).
[492]. Pemmaraju Sreenivasa Rao, International Law Commission (ILC), in Max Planck Encyclopedia of Public International Law ¶ 3 (2013) (citing to G.A. Res. 174 (II) (November 1947)).
[493]. Statute of the International Law Commission, art. 18(2), G.A. Res. 174 (II), U.N. Doc. A/519 (1947).
[494]. Rao, supra note 492, at ¶ 6.
[495]. Id. (emphasis added).
[496]. See, e.g., Public Law 100-180, § 224 (“No agency of the Federal Government may plan for, fund, or otherwise support the development of command and control systems for strategic defense in the boost or post-boost phase against ballistic missile threats that would permit such strategic defenses to initiate the directing of damaging or lethal fire except by affirmative human decision at an appropriate level of authority.”). But see Law of War Manual, supra note 110, at § 6.9.5.4 n.111 (“This statute may, however, be an unconstitutional intrusion on the President’s authority, as Commander in Chief, to determine how weapons are to be used in military operations.”).
[497]. See, e.g., DOD AWS Dir., supra note 91; Hui-Min Huang et al., Autonomy Levels for Unmanned Systems (ALFUS) Framework, Volume II: Framework Models, NIST Special Publication 1011-II-1.0, Version 1.0 (2007), http://www.nist.gov/el/isd/ks/upload/ALFUS-BG.pdf; Jessie Y.C. Chen, Ellen C. Haas, Krishna Pillalamarri & Catherine N. Jacobson, Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies, U.S. Army Research Laboratory, ARL-TR-3834 (July 2006).
[498]. European Parliament Resolution on the Use of Armed Drones ¶ H.2(d) (2014/2567(RSP)) (Feb. 25, 2014),
[499]. Henk Cor van der Kwast, Perm. Rep. of Neth. to the Conference on Disarmament, Opening Statement at the 2016 Informal Meeting of Experts, at 4, UN Office in Geneva (April 11, 2016), http://www.unog.ch/80256EDD006B8954/(httpAssets)/FC2E59B32F14D791C1257F920057CAE6/$file/2016_LAWS+MX_GeneralExchange_Statements_Netherlands.pdf. See also Steven Groves, A Manual Adapting the Law of Armed Conflict to Lethal Autonomous Weapons Systems (Heritage Foundation, Special Report No. 183, 2016), http://www.heritage.org/research/reports/2016/04/a-manual-adapting-the-law-of-armed-conflict-to-lethal-autonomous-weapons-systems.
[500]. E.g., Tallinn Manual on the International Law Applicable to Cyber Warfare (Michael Schmitt ed., 2013); Program on Humanitarian Policy and Conflict Research, Manual on International Law Applicable to Air and Missile Warfare (2009); International Institute of Humanitarian Law, San Remo Manual on International Law Applicable to Armed Conflicts at Sea (1995). See also Project on a Manual on International Law Applicable to Military Uses of Outer Space (MILAMOS), https://www.mcgill.ca/milamos/home (last visited Aug. 27, 2016).
[501]. Nils Melzer (ICRC), Interpretive Guidance on the Notion of Direct Participation in Hostilities Under International Humanitarian Law (2009).
[502]. Dutch Government, Response to AIV/CAVV Report, supra note 22.
[503]. This approach aligns in certain respects with the focus on systems engineering discussed in the UK MoD Joint Doctrine Note on unmanned aircraft systems. The authors of that document state that “[i]n order to ensure that new unmanned aircraft systems adhere to present and future legal requirements, it is likely that a systems engineering approach will be the best model for developing the requirement and specification.” U.K. Ministry of Def., supra note 113, at 5-2. Using such an approach, according to the Joint Doctrine Note authors, “the legal framework for operating the platform would simply form a list of capability requirements that would sit alongside the usual technical and operational requirements.” Id. In turn, “[t]his would then inform the specification and design of various sub-systems, as well as informing the concept of employment.” Id.
[504]. Dutch Government, Response to AIV/CAVV Report, supra note 22.
[505]. See Tjerk de Greef & Alex Leveringhaus, Design for Responsibility: Safeguarding Moral Perception via a Partnership Architecture, 17 Cognition, Technology & Work 319 (2015).
[506]. Id. at 326 (emphasis in original).
[507]. Id. (citations omitted).
[508]. Id. at 327.
[509]. Id. The authors note that “[t]his raises issues about foresight, negligence and so on that we cannot tackle here.” Rather, “[f]or now, it suffices to note that the operator remains firmly [in] control of his machine—even if there is a physical distance between them or [if] the machine[] operates at increased levels of automation.” Id.
[510]. Id.
[511]. See generally Lawrence Lessig, Code and Other Laws of Cyberspace (1999).
[512]. Danièle Bourcier, Centre national de la recherche scientifique, Artificial Intelligence & Autonomous Decisions: From Judgelike Robot to Soldier Robot, Address at the 2016 Informal Meeting of Experts, UN Office in Geneva (April 2016), available at
http://www.unog.ch/80256EDD006B8954/(httpAssets)/338ABCC8C57BB09CC1257F9A0045197A/$file/2016_LAWS+MX+Presentations_HRandEthicalIssues_Daniele+Vourcier.pdf.
[513]. Pasquale, supra note 1, at 157 (citing to Markle Task Force on National Security in the Information Age, Implementing a Trusted Information Sharing Environment: Using Immutable Audit Logs to Increase Security, Trust, and Accountability, at I (2006), http://research.policyarchive.org/15551.pdf).
[514]. See Pasquale, supra note 1, at 157; see also id. at 159 (stating that “[i]f immutable audit logs of fusion centers are regularly reviewed, misconduct might be discovered, wrongdoers might be held responsible, and similar misuses might be deterred”) (citation omitted).
[515]. U.K. Ministry of Def., supra note 113, at 5-6. See also DOD AWS Dir., supra note 91 (establishing audit-like requirements in DoD policy).
[516]. David Cyranoski, Ethics of Embryo Editing Divides Scientists, 519 Nature 272, 272 (2015).
[517]. Nicholas Wade, Scientists Seek Moratorium on Edits to Human Genome That Could Be Inherited, N.Y. Times (Dec. 3, 2015), http://www.nytimes.com/2015/12/04/science/crispr-cas9-human-genome-editing-moratorium.html.
[518]. Id.; see, e.g., George Church, Perspective: Encourage the Innovators, 528 Nature S7, S7 (2015).
[519]. Sara Reardon, Global Summit Reveals Divergent Views on Human Gene Editing, 528 Nature 173, 173 (2015).
[520]. David Baltimore et al., International Summit Statement, On Human Gene Editing, (Dec. 3, 2015), http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=12032015a.
[521]. Id.
[522]. Id.
[523]. Id.
[524]. Id.