A remote sniper used an automated system to take out a human target located thousands of miles away.
What happened: The Israeli intelligence agency Mossad used an AI-assisted rifle in the November killing of Iran’s chief nuclear scientist, according to The New York Times.
How it worked: The agency killed Mohsen Fakhrizadeh, whom it considered a key player in Iran’s covert nuclear weapons program, as he was driving near Tehran.
- The Israelis smuggled a machine gun and robotic control system into Iran piece by piece. Agents inside the country assembled the weapon in secret and mounted it on a camera-equipped pickup truck parked near Fakhrizadeh’s home. A separate truck, also outfitted with cameras, alerted the sniper when Fakhrizadeh’s car approached. The gun’s operator watched the images from an undisclosed location in a different country.
- The agency estimated that it would take 1.6 seconds for video to travel via satellite from the gun truck’s cameras to the sniper, and for the sniper’s aim and fire commands to travel back to the gun. It programmed the system to compensate for this delay as well as the target car’s speed and the weapon’s recoil while firing.
- The sniper fired 15 auto-targeted bullets in less than a minute, killing Fakhrizadeh while sparing his wife in the passenger seat.
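The delay compensation described above amounts to a lead calculation: aim where the target will be, not where it appears to be. A minimal sketch of that arithmetic, with all values illustrative (the report does not state the car’s speed):

```python
# Toy sketch of lead compensation for control latency. This is an
# illustration of the general principle, not the actual system;
# the 25 m/s speed is a hypothetical value.

def lead_distance(target_speed_mps: float, latency_s: float) -> float:
    """Distance the target travels during the control delay."""
    return target_speed_mps * latency_s

# A car moving at 25 m/s (90 km/h) travels 40 m during a 1.6-second
# round trip, so the system must offset its aim point by that much
# along the car's predicted path.
offset_m = lead_distance(25.0, 1.6)
print(offset_m)  # 40.0
```

The same idea extends to recoil: each shot perturbs the barrel, so the controller must re-settle its aim between rounds rather than assume a fixed pointing direction.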
Behind the news: Scores of military weapon systems around the world use AI to assist in targeting and other functions.
- A Libyan military faction last year deployed autonomous aerial vehicles to attack enemy troops.
- The South Korean company DoDaam developed automated sentry towers that use cameras, thermal imagers, and laser range-finders to identify patterns of heat and motion caused by people.
- Milrem Robotics of Estonia sells a ground-based robot that can be outfitted with various remote-controlled weapons including machine guns, missiles, and drones.
Why it matters: Automated weapons have a long history. This AI-targeted shooting, however, opens a new, low-risk avenue for well-funded intelligence agencies to kill opponents.
We’re thinking: We find AI-assisted killing deeply disturbing even as we acknowledge that countries need ways to protect themselves. We believe that AI can be a tool for advancing democracy and human rights, and that the AI community should take part in drawing clear boundaries for acceptable machine behavior.