
States are actively pursuing military and other advantages by incorporating artificial intelligence (AI) to support, speed up, and otherwise enhance decision-making and operations. Diverse applications of the technology labeled “AI” are likely to impact multiple facets of armed conflicts and may entail humanitarian consequences.

On December 3–4, 2018, the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada, and the Stockton Center for International Law at the U.S. Naval War College convened a private workshop titled “Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict.” The workshop, which took place at Harvard Law School, is one in a continuing series of research-oriented workshops bringing together experts from academia, government agencies, international organizations, and non-governmental organizations. With a focus on international humanitarian law, participants in the 2018 AI workshop analyzed an array of enduring and emergent issues in this area, including issues concerning the provision of legal advice, reviews of weapon systems, deprivation of liberty, warships, and the deliberate-targeting process. The workshop was conducted under the Chatham House Rule.

Short analyses by certain participants were subsequently published in the ICRC’s Humanitarian Law and Policy Blog as part of a series on “Artificial Intelligence and Armed Conflict.”

Longer analyses by certain participants may subsequently be published in International Law Studies.

Previous workshops in the series include a 2016 workshop hosted by HLS PILAC exploring “Global Battlefields: The Future of U.S. Detention under International Law,” and a 2017 workshop hosted by the USNWC exploring “International Legal Implications of Military Space Operations.”
