[A PDF of this project overview is available here: link.]


International Legal and Policy Dimensions of War Algorithms: Enduring and Emerging Concerns


To strengthen international debate and inform policy-making on the ways that artificial intelligence and complex computer algorithms are transforming — and have the potential to reshape — war, as well as how international legal and policy frameworks already govern — and might further regulate — the design, development, and use of those technologies.


  • Independent analysis with a focus on international law and policy coupled with research engagements in select countries;

  • A workshop at Harvard Law School for experts from militaries and other government agencies, as well as for international policymakers; and

  • Briefings tailored for governments, international policymakers, and industry.

Anticipated Results

  • Better informed and more evidence-based legal-policy discussion and debate on enduring and emerging issues concerning computer-algorithm-infused decision-making and conduct connected with armed conflict; and

  • Renewed focus on incorporating normative commitments and obligations entailed in international law applicable in relation to armed conflict into strategic planning, military doctrine, and technology life cycles.


Two years (September 2018 – August 2020).


Ethics and Governance of Artificial Intelligence Fund.


We are a Harvard Law School research program exploring contemporary challenges concerning armed conflict through the lens of international law. We combine traditional public international law research with targeted analysis of today’s changing security environments. Our mode is critical, independent, and rigorous. With a focus on depth over breadth, we aim to reaffirm the centrality of international law in regulating war-fighting, in diminishing harmful effects of armed conflict, and in protecting civilians.

Background and Areas of Focus

Several states are actively seeking to harness artificial-intelligence techniques and other complex algorithmic systems in connection with war — not only for weapons but also in an array of other areas, such as detention, warships, and humanitarian services. Many of these technological developments threaten to outpace regulation. We recently devised a framework concerning war algorithms. (We defined “war algorithms” as algorithms that are expressed in computer code, that are effectuated through constructed systems, and that are capable of operating in relation to armed conflict.) In this new project, we will conduct research and convene government experts and international policymakers on rapid technological advancements, with a focus on the legal and policy dimensions of how AI techniques and other complex algorithmic systems are transforming war.

We approach these issues from the standpoint of international law. States have agreed that international law is the primary normative framework regulating armed conflict. Yet while technologically advanced armed forces are pursuing developments in the collection of big data, deep learning, and related fields, it remains unclear how those militaries can and will, in practice, translate and implement international legal rules, principles, and standards across increasingly complex computational systems. Many war algorithms may pose both challenges and opportunities with respect to the normative commitments and concepts (including attribution, control, foreseeability, and reconstructability) that underpin the legal frameworks currently regulating conduct in relation to armed conflict.

This project’s provisional list of areas of focus includes: 

  • The feasibility (or not) of — and the normative parameters concerning — testing and validation of AI components in relation to the conduct of hostilities and protection of civilians;

  • The diverse roles of industry and academia in war-algorithm life cycles in various countries;

  • Theoretical and practical concerns regarding the “translatability” (or not) of the value decisions and normative judgments embedded in international humanitarian law/law of armed conflict into computational procedures;

  • Trajectories in technological, legal, policy, and strategic developments in select countries; and

  • Intersections, (in)congruities, and distinctions between legal and ethical aspects of decision-making and conduct in relation to the design, development, and use of war algorithms.


  • Naz K. Modirzadeh, Founding Director, HLS PILAC [link].

  • Dustin A. Lewis, Senior Researcher, HLS PILAC [link].


  • Dustin A. Lewis, “Legal reviews of weapons, means and methods of warfare involving artificial intelligence: 16 elements to consider,” ICRC Humanitarian Law and Policy Blog, March 21, 2019 [link].


Related HLS PILAC Projects


Image credits: The U.S. Army [link] and Michael J. Vrabel [link].

Webpage last updated: April 2019