A fighter pilot battled a true-to-life virtual enemy in midair.

What’s new: In the skies over southern California, an airman pitted his dogfighting skills against an AI-controlled opponent that was projected onto his augmented-reality visor.

How it works: The trial tested the integration of an autonomous fighter agent developed by EpiSci with high-brightness, low-latency augmented-reality technology from Red Six Aerospace.

  • Red Six CEO Dan Robinson, a veteran of the UK’s Royal Air Force, piloted a plane of his own design. EpiSci’s agent flew a simulated Chinese J-20 stealth fighter using a combination of deep learning, reinforcement learning, and rules-based modeling.
  • EpiSci’s agent previously had run only in simulation on ground-based hardware. The trial confirmed that it ran well on the computing resources aboard the Red Six craft and responded to real-world input from GPS and inertial sensors, Chris Gentile, EpiSci’s VP of tactical autonomous systems, told The Batch.
  • The event also confirmed that EpiSci could limit its agent to behaviors useful for training beginners — “It wasn’t kill-at-any-cost,” Gentile said — without compromising its ability to react to its human opponent’s tactics and errors (a rough sketch of this rule-constrained approach follows this list). The U.S. Air Force plans to begin testing the system for pilot training next year.
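
The hybrid design described in these bullets — a learned policy that proposes maneuvers, wrapped in a rule-based layer that keeps the agent instructional rather than kill-at-any-cost — can be sketched in a few lines. Everything below (the maneuver names, class names, and aggression parameter) is a hypothetical illustration under our own assumptions, not EpiSci’s implementation:

```python
import random

# Hypothetical maneuver set; the real agent's action space is not public.
MANEUVERS = ["pursue", "break_left", "break_right", "climb", "extend"]

class LearnedPolicy:
    """Stand-in for a policy trained with deep reinforcement learning."""
    def propose(self, state):
        # A trained network would map sensor state to a maneuver here;
        # we choose randomly just to keep the sketch runnable.
        return random.choice(MANEUVERS)

class TrainingRules:
    """Rule-based layer that limits the agent to instructional behavior."""
    def __init__(self, max_aggression):
        self.max_aggression = max_aggression  # 0.0 = passive, 1.0 = kill-at-any-cost

    def filter(self, state, maneuver):
        # Veto pure pursuit when the trainee is already defensive, so the
        # agent presses the student without overwhelming them.
        if maneuver == "pursue" and state["trainee_defensive"] and self.max_aggression < 0.8:
            return "extend"  # give the student room to recover
        return maneuver

class HybridAgent:
    """Learned policy proposes; rules constrain the final action."""
    def __init__(self, policy, rules):
        self.policy = policy
        self.rules = rules

    def act(self, state):
        return self.rules.filter(state, self.policy.propose(state))

# Example: one decision step against a struggling trainee.
agent = HybridAgent(LearnedPolicy(), TrainingRules(max_aggression=0.5))
state = {"trainee_defensive": True, "range_m": 900.0}
print(agent.act(state))
```

Keeping the rules outside the learned policy, as in this sketch, would let the same trained network serve students of different skill levels simply by swapping the rule set — one plausible way to limit behavior without retraining.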

Behind the news: EpiSci honed its agent technology in the U.S. Defense Advanced Research Projects Agency’s (DARPA) AlphaDogfight Trials, in which a pilot on the ground helmed a flight simulator to fight AI-controlled foes. (See our report on the program, “AI Versus Ace.”) DARPA recently awarded the company a grant to develop AI systems for air combat.

Why it matters: Flight simulators don’t replicate all the challenges pilots face in the air — g-forces, for instance — while pitting human pilots against one another aloft is dangerous and expensive. Battling AI-controlled agents in augmented reality could make combat training more effective, safer, and cheaper.

We’re thinking: The ethical boundaries of military AI demand careful navigation. Using machine learning to make pilot training safer may be a reasonable application. Building aircraft that can fight on their own, however, is a different matter. The AI community needs to draw bright red lines to ensure that AI is beneficial and humane. To that end, we support the United Nations’ proposed ban on autonomous weapons.
