A group of drones flying over a field

Autonomous weapons are often viewed as an alarming potential consequence of advances in AI — but they may already have been used in combat.

What’s new: Last year, Libyan forces attacked a breakaway rebel faction with armed drones capable of choosing their own targets, according to a recent United Nations (UN) report. The document, a letter from the organization’s Panel of Experts on Libya to the president of the Security Council, does not specify whether the drones targeted, attacked, or killed anyone. New Scientist brought it to light.

Killer robots: In March 2020, amid Libya’s ongoing civil war, the UN-supported Government of National Accord allegedly attacked retreating rebel forces using Kargu-2 quadcopters made by the Turkish company STM.

  • The fliers are equipped with object-detection and face-recognition algorithms to find and strike targets without explicit human direction.
  • Upon acquiring a target, the drone flies directly at it and detonates a small warhead just before impact.
  • STM claims that its systems can distinguish soldiers from civilians.
  • The Turkish military bought at least 500 such units for use in its border conflict with Syria. STM is negotiating sales to three other nations, according to Forbes.

Behind the news: Many nations use machine learning in their armed forces, usually to bolster existing systems while keeping a human in the loop.

  • In the most recent conflict between Israel and Palestinian forces in Gaza, the Israel Defense Forces deployed machine learning systems that analyzed streams of incoming intelligence. The analysis helped its air force identify targets and warn ground troops about incoming attacks.
  • The U.S. Army is testing a drone that uses computer vision to identify targets up to a kilometer away and determine whether they’re armed.
  • The European Union has funded several AI-powered military projects, including explosive-device detection and small unmanned ground vehicles that follow foot soldiers through rough terrain.

Why it matters: Observers have long warned that deploying lethal autonomous weapons on the battlefield could ignite an arms race of deadly machines that decide for themselves who to kill. Assuming the UN report is accurate, the skirmish in Libya appears to have set a precedent.

We’re thinking: Considering the problems that have emerged in using today’s AI for critical processes like deploying police, sentencing convicts, and making loans, it’s clear that the technology simply should not be used to make life-and-death decisions. We urge all nations and the UN to develop rules to ensure that the world never sees a real AI war.
