Dustin A. Lewis and Hannah Sweeney, “Exercising Cognitive Agency: A Legal Framework Concerning Natural and Artificial Intelligence in Armed Conflict,” HLS PILAC, January 2025
Executive Summary
Most armed forces fighting most wars in most parts of the world do not rely on artificial intelligence (AI) to conduct military operations — at least not yet. Nonetheless, it is arguably warranted to consider the use of AI in armed conflict from an international legal perspective at this time. That is in part because some armed forces are already relying on AI-related technologies. Further, these technologies could have potentially extensive implications not only for how wars are fought but also for whether people and parties can be held accountable for violations. The use of certain AI systems in war also raises, more broadly, the question of whether humans should rely on increasingly complex assemblages of sensors, data, algorithms, and machines in decisions that involve mortal endangerment. Unlike corporate codes of conduct, ethics guidelines, and domestic law, international law is the only framework that all States agree is binding in relation to all armed conflicts. Notably, however, there is no specific regime, provision, or rule of international law that expressly pertains to the use of AI in war. Instead, it is necessary to evaluate how the existing legal framework and related responsibility institutions may already regulate the use of AI-related technologies in war.
In this legal concept paper, we seek to provide an analytical framework through which to understand some core issues related to respecting international law concerning the use of AI in armed conflict. Instead of isolating AI, we widen the lens to focus on intelligence and cognitive tasks more broadly. By drawing out distinctions and similarities between exercises of natural intelligence by humans, on the one hand, and reliance on artificial intelligence by humans (and the entities they serve), on the other, we aim to help uncover part of what the current legal framework expects, assumes, and requires of humans.
In short, our understanding is that under the existing law it is assumed that, to administer the performance of obligations binding on States in relation to armed conflict, humans need to exercise what we term cognitive agency. More specifically, our analysis suggests that at least two premises underlie the performance of obligations in the principal field of international law applicable in armed conflict, namely international humanitarian law (IHL)/the law of armed conflict (LOAC). Those premises are that, arguably:
Only natural persons — that is, humans — are capable of administering the performance of IHL/LOAC obligations binding on States; and
In doing so, the humans concerned must exercise cognitive agency.
By cognitive agency, we mean — with respect to administering the performance of an IHL/LOAC obligation — the undertaking and carrying out of a conscientious and intentional operation of mind by one or more humans vested with State legal capacity, through which that person or those persons implement the execution of the cognitive tasks demanded by the obligation. We reason that these premises arguably reflect a specification or an instantiation of existing conditions of legality, not a new policy approach. We ground that assessment in analyses of assumptions about executing cognitive tasks in connection with war, how IHL/LOAC obligations are performed, and how certain rules of State responsibility operate. If these premises are well founded, they may entail significant consequences with respect to requirements and limits related to the use of AI in armed conflict.
The theoretical groundings form only part of the picture. To implement the identified conditions of legality, it is necessary to ascertain what it means in practice for humans to exercise cognitive agency in relation to each relevant obligation. To help illustrate what the first step in doing so might involve, we briefly explore two obligations under IHL/LOAC — one concerning proportionality in attacks, and another related to detaining civilians — and deduce respective sets of associated cognitive tasks.
Finally, with a view to clarifying what the existing law demands, permits, and prohibits, we formulate a set of guiding questions that States and other relevant stakeholders might consider forming positions on. The foundational questions raised by our inquiry are whether the humans responsible for administering the performance of an IHL/LOAC obligation binding on a State may rely on AI-related technologies in implementing the execution of one or more of the cognitive tasks demanded by the obligation — and, if so, under what circumstances and subject to what conditions the humans concerned can undertake and carry out the requisite conscientious and intentional operation of mind. By reflecting on their positions and publicly articulating their interpretations of how existing obligations may or must be performed, States and other stakeholders can contribute to a more precise and more stable understanding of how international law already regulates the (non-)use of AI-related technologies in armed conflict.