Britain’s plan for defence AI risks the ethical and legal integrity of the military
Autonomous technology on the battlefield may not look like ‘killer robots’, but still has huge ethical implications. TSViPhoto/Shutterstock
In an unstable geopolitical climate, the UK’s strategic defence review focused on improving national resilience, from critical infrastructure security to technology and innovation. Many of the review’s recommendations have to do with transforming defence through artificial intelligence (AI) and autonomy, to make the armed forces “ten times more lethal”.
These recommendations and investments – drones, autonomous systems and £1 billion for a “digital targeting web” that would connect weapons systems – may well make the armed forces more lethal. But this comes at a risk to the ethical and legal integrity of the military.
A key part of international humanitarian law is the principle of precautions in attack. This requires that those planning an attack do everything they feasibly can to verify that targets are military in nature. A related principle is distinction, which mandates that civilians must never be targeted.
In armed conflict, these principles are meant to protect civilians. They require human judgement — the ability to weigh up context, intent and likely outcomes. But how might they be upheld when humans are embedded in AI systems, which prioritise speed and scale in decision-making and action?
An AI-enabled digital targeting web, like the one proposed in the strategic review, connects information (sensors) to action (weapons), enabling faster identification and elimination of potential targets. These webs would be able to identify and suggest possible targets considerably faster than humans, in many cases leaving soldiers with only a few minutes, or even seconds, to decide whether those targets are appropriate or legitimate in legal or ethical terms.
One example already in use is the Maven Smart System, which was recently procured by Nato. This system could make it possible for small army teams to make up to “1,000 tactical decisions an hour”, according to a report by the US thinktank the Center for Security and Emerging Technology.
Legal scholars have argued that the prioritisation of speed with AI in conflict “leaves little room for human judgement” or restraint.
Unlike other technologies used in war, AI is more than an instrument. It is part of a cognitive system of humans and machines, which makes human control a lot more complicated than operating a fleet of tanks.
Proponents of autonomous weapons and AI targeting systems often argue that this technology would make warfare more precise, dispassionate and humane. However, military ethics scholar Neil Renic and I have shown how it can instead lead to an erosion of moral restraint, creating a war environment where technological processes replace moral reasoning.
Training the data
The strategic defence review lauds autonomy as providing “greater accuracy”, but this claim is complicated by technical and human limitations. Instead of providing greater accuracy in targeting, AI-enabled systems threaten to undermine the principles of distinction and precaution.
AI also poses technical challenges in a domain as complex and dynamic as warfare. AI-supported systems are only as good as the data on which they are trained. Appropriate, comprehensive and up-to-date data is hard to come by in conflict, where dynamics can change quickly.
This is particularly true in urban conflicts. Understanding the complexities of a situation on the ground is difficult enough for human military personnel, without bringing in AI.
New AI models, in particular, carry risks. Large language models are known to “hallucinate” – produce outputs that are erroneous or made up. As these systems are integrated into defence, the risks of technological failure become more pronounced.
AI could significantly speed up targeting technology. Yuri A/Shutterstock
There is also a considerable risk of this technology enabling uncontrolled escalation and conflict at speed – what scholars have described as a “flash war”. Escalation from crisis to war, or escalating a conflict to a higher level of violence, could come about due to erroneous indications of attack, or a simple sensor or computer error.
Consider an AI system alerting commanders to a hostile tank approaching a border area. With potentially only minutes to spare, time to verify the incoming information is scarce, and commanders may “prioritise rapid response over thorough analysis”. If the tank turns out to be a school bus, this response could trigger further retaliation.
Unpredictable systems could also give leaders false impressions of their capabilities, leading to overconfidence or encouraging preemptive attacks. This all may lead to greater global instability and insecurity.
Responsible AI
The UK government has shown that it is aware of some of these risks. Its 2022 report on responsible AI in defence emphasised ethics in the use of AI. It specified that the deployment “of AI-enabled capabilities in armed conflict needs to comply fully with [international humanitarian law]”, including the principles of distinction, necessity, humanity and proportionality.
The report also notes that responsible and ethical use of AI systems requires reliability and human understanding of the AI system and its decisions.
The strategic defence review, on the other hand, notes that the speed with which technologies develop is outpacing regulatory frameworks. It says that “the UK’s competitors are unlikely to adhere to common ethical standards in developing or using them”.
This might be so, but it should not open the door to a less ethical and responsible development or use of such systems by the UK. Ethics is not only about how we treat others, but also about who we are.
The UK still has an opportunity to shape global norms around military AI — before a generation of unaccountable systems becomes the default. But that window for action is closing rapidly.
Elke Schwarz is affiliated with the International Committee for Robot Arms Control (ICRAC).