The U.S. military is developing a new generation of automated weaponry. Some people are calling for automated generals as well.

What happened: A pair of defense experts argue in War on the Rocks, an online publication covering national security, that the Pentagon should replace the human chain of command over nuclear defense with machines. The time available to respond to incoming warheads has dwindled from hours during the Cold War to roughly six minutes today, and that shrinking window, they say, makes automated command and control a necessity. Their analysis added urgency to feature stories on military AI published last week in The Atlantic and The Economist.

Behind the news: The Department of Defense’s 2019 budget calls for nearly $1 billion in AI spending. Almost one-third of that money will fund the Joint Artificial Intelligence Center, which is charged with establishing and scaling up AI throughout the military. The remainder will support initiatives led by individual branches. Among those efforts:

  • The Air Force is developing Skyborg, an AI copilot for F-16s to help with navigation, radar awareness, and target recognition. The system will also pilot Valkyrie drones to serve as autonomous wingmen.
  • The Marine Corps is building self-driving assault boats to deliver troops to a beach, then support the landing with autonomous .50 caliber machine guns.
  • The Navy is testing Sea Hunter, an autonomous ship ultimately intended to destroy enemy submarines without human intervention.

The controversy: The debate over automated warfare follows trench lines similar to those of the earlier (and ongoing) argument over nuclear weapons.

  • Pro-AI military experts fear that developments like hypersonic missiles, which can travel at up to 20 times the speed of sound, could decapitate the U.S. military before it has a chance to react (see the back-of-envelope calculation below). They argue that AI-driven warfare is imperative, if only to mount an effective defense.
  • But critics such as the Bulletin of the Atomic Scientists warn that hackers, faulty code, or wayward machines could compromise AI-driven weapons. Moreover, even non-nuclear autonomous weaponry could violate ethical and legal standards enshrined in the Geneva Conventions.
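
A rough back-of-envelope calculation suggests why the pro-AI camp feels the clock running out. Assuming the sea-level speed of sound of about 343 m/s (an assumption; the effective figure varies with altitude), a Mach-20 vehicle covers roughly 2,500 km within the six-minute response window cited above:

$$
v \approx 20 \times 343~\text{m/s} \approx 6.9~\text{km/s}, \qquad d \approx 6.9~\text{km/s} \times 360~\text{s} \approx 2{,}500~\text{km}
$$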

We’re thinking: Autonomous weapons are terrifying enough. Autonomous nuclear weapons verge on the unthinkable. We strongly support the United Nations’ effort to establish a ban on autonomous weapons as a complement to nuclear disarmament. That said, AI has potential nonlethal uses like mine removal and search and rescue. It will take vigorous, well-informed argument to arrive at military uses of AI that improve conditions for humanity as a whole. It’s critical that the AI community play an active role in the discussion.
