ICRC Humanitarian Law and Policy Blog

Algorithms of war: The use of artificial intelligence in decision making in armed conflict

In less than a year, ChatGPT has become a household name, reflecting astonishing advances in AI-powered software tools, especially generative AI models. These developments have been accompanied by frequent forecasts that AI will revolutionise warfare. At this stage of AI development, the parameters of what is possible are still being explored, but the military response to AI technology is undeniable. China's white paper on national defense promoted the theory of the "intelligentization" of warfare, in which leveraging AI is key to the PLA's modernization plan. The director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, warned that artificial intelligence may be the "most powerful weapon of our time." And whilst autonomous weapon systems have tended to dominate discussions about AI in military applications, less attention has been paid to the use of AI in systems that support human decisions in armed conflicts.

In this post, ICRC Military Adviser Ruben Stewart and Legal Adviser Georgia Hinds critically examine some of the touted benefits of AI when used to support decisions by armed actors in war. They focus on two areas, civilian harm mitigation and tempo, with particular attention to the implications for civilians in armed conflict.