
The Algorithm That Kills: Regulating Artificial Intelligence Under International Humanitarian Law

  • Writer: Lex Amica
  • Aug 19

By Musoke Gilbert*


Abstract

The rise of algorithmic warfare means artificial intelligence is now reshaping how wars are fought. Systems ranging from surveillance drones to self-directed weapon platforms are being integrated into battle networks. Although these advancements lower the immediate danger to troops in the field, they introduce complex legal and ethical dilemmas. Central to the debate is whether Lethal Autonomous Weapon Systems can operate within the narrow confines of International Humanitarian Law. The core rules of distinction, proportionality, and precaution are being challenged in ways the Geneva Conventions’ authors could hardly have imagined.


1. The Legal Framework of International Humanitarian Law.

International Humanitarian Law provides the legal framework for regulating violence during armed conflict. Its foundations lie in the Geneva Conventions and their Additional Protocols, which are designed to shield civilians and others no longer participating in hostilities while permitting the targeting of legitimate military objectives.


The principle of distinction obliges parties to distinguish between combatants and civilians at all times. This duty is enshrined in Article 48 of Additional Protocol I, which instructs conflict parties to “distinguish at all times between the civilian population and combatants” (Kleczkowska, 2018). The rule has attained the status of customary international law, as reflected in Rule 1 of the International Committee of the Red Cross’s Customary International Humanitarian Law study, which binds both state and non-state actors. In the Kassem decision of 1969, the Israeli Military Court sitting in Ramallah confirmed that the immunity of civilians from direct attack ranks among the core tenets of International Humanitarian Law.


The principle of proportionality prohibits military strikes that would inflict civilian harm out of proportion to the military gain anticipated (Rogers, 2016). Specifically, Article 51(5)(b) of Additional Protocol I states that attacks expected to result in “incidental loss of civilian life” that is “excessive in relation to the concrete and direct military advantage anticipated” are unlawful (Daniele, 2024). Equally, Article 57 of the Protocol requires parties to take all feasible precautions to spare civilians, ordering them to “do everything feasible to verify that targets are military objectives” and to minimize risk to the civilian population (Haque, 2016).



In the 2021 and 2023 Gaza conflicts, reports indicate that Israeli forces deployed the AI-assisted system known as “Habsora” to autonomously produce target nomination lists (Rehman et al., 2025). This technology enabled hundreds of strikes to be authorized each day, prompting grave doubts about whether the speed of operations permitted genuine proportionality and precaution evaluations. Those rules were framed on the assumption that military commanders would apply human judgment, reasoning and situational awareness to each decision (Hibner, 2008). AI systems, however sophisticated, lack those faculties.


2. The Rise of Autonomous Weapons and Legal Grey Zones.

A Lethal Autonomous Weapon System is any weapon that can locate, identify, track, and attack targets without human input at any stage of the action (Lele, 2019). Advances in artificial intelligence mean that such systems can now track and engage targets autonomously, operating independently of a human operator after launch. Once in the field, these systems rely on predetermined parameters and machine learning processes to determine targets and execute strikes. The result is a lethal decision loop that is, at least in theory, severed from human control, prompting critical questions about who can be held accountable, about the legal ramifications of such strikes, and about the basic requirement for meaningful human oversight in the use of force (Asaro, 2012).


In a report to the UN Security Council, the Panel of Experts on Libya described a disturbing 2020 incident in which a STM Kargu-2 drone, equipped with explosives, autonomously located and attacked retreating combatants (Rantanen, 2024). The event underscored the possibility that lethal choices were made entirely by software, without any human operator. Similarly, during the 2020 Nagorno-Karabakh conflict, Azerbaijan integrated loitering munitions into its operations, including Harop systems able to autonomously recognize and strike sources of radar emissions (Calcara et al., 2022). This method of attack blurs the lines of distinction mandated by international humanitarian law, especially in urban environments where civilians and combatants may be intermingled.


In the ongoing Russia-Ukraine war, both sides have deployed AI-enhanced tools for real-time surveillance and automated target detection, with Ukrainian systems reportedly able to identify and locate enemy positions with minimal human verification. This highlights the thin line between supervised and full autonomy in lethal engagements.


If a drone autonomously misidentifies a civilian object as a military target and strikes it, who bears the legal burden: the commander, the state, or the machine?


The current framework of IHL is grounded in the notion of human agency, and a lacuna exists when it comes to such scenarios. The principles of state responsibility and individual criminal liability under international criminal law both presume a culpable human actor.

 

3. Regulatory and Ethical Responses

The international community is divided when it comes to the regulation of Lethal Autonomous Weapons Systems (Badell et al., 2022). Some states and civil society organizations, such as those in the ‘Campaign to Stop Killer Robots’, are of the view that a pre-emptive ban should be issued on fully autonomous weapons (Harrison, 2024). Other states, particularly those with highly advanced military technology, argue for continued research and reasonable use under existing legal frameworks.


However, negotiations under the UN Convention on Certain Conventional Weapons have so far failed to produce a binding agreement, largely due to disagreements over definitions and state interests (Shaw, 1983). Despite growing advocacy, the UN Group of Governmental Experts on LAWS has not achieved consensus, with some states resisting binding regulation, thus leaving the development of lethal AI largely ungoverned (Ahmed, 2025).



The US Department of Defense’s Project Maven, launched in 2017, applies machine learning to analyse drone footage (Hogue, 2021). Although it was officially intended to assist human analysts, it sparked ethical backlash from engineers and employees who feared it would eventually enable autonomous kill decisions outside proper legal oversight.


Despite the lack of an agreement, there is a growing consensus on the necessity of maintaining meaningful human control over the use of lethal force (Watkin, 2004). This principle has not yet been codified in treaty law but is treated as a normative standard to preserve human judgement, moral responsibility and legal accountability in warfare. Some states have also begun incorporating ethical guidelines into their military doctrines on AI, though these remain non-binding and unevenly implemented.


The concept of “meaningful human control” has become central to debates on Artificial Intelligence in warfare but remains ill-defined (Abbink, 2024). It may require not just human presence in the loop, but genuine understanding and ethical judgment in the decision to kill, something algorithms cannot replicate.

 

4. Conclusion

The incorporation of Artificial Intelligence into armed conflict is more than a technological evolution; it is a legal and ethical revolution that challenges the very foundations of International Humanitarian Law. As ‘the algorithm that kills’ becomes an operational reality, the global community ought to address the accountability gaps, normative uncertainties and risks to civilian protection that AI-enabled warfare presents.


The law must evolve to ensure that the future of warfare does not leave humanity behind, for what is at stake is not just the legality of new weapons but the enduring moral authority of International Humanitarian Law in an age when machines may decide who lives and who dies.


In situations where legal rules remain unsettled, the Martens Clause provides that the principles of humanity and the dictates of public conscience must still guide the conduct of war, including the use of autonomous technologies (Meron, 2000).


Beyond legality, the delegation of life-and-death decisions to machines challenges our moral fabric. The question is not just whether AI can comply with International Humanitarian Law, but whether it should be allowed to kill at all.


*The writer is a student of Law at Makerere University.


