Drone startups are taking aim at military customers.
What’s new: As large tech companies have backed away from defense work, startups like Anduril, Shield AI, and Teal are picking up the slack. They’re developing autonomous fliers specifically for military operations, The New York Times reported. None of these companies has armed its drones, but some have declared their willingness to do so.
What’s happening: The new wave of AI-powered drones is designed for military missions like reconnaissance and search and rescue.
- Anduril’s Ghost, pictured above, performs all of its computing on-device using an AI system that reads sensor data, performs object recognition, and controls navigation. The drone’s chassis can be fitted with a variety of equipment including radio surveillance gear. The UK Royal Marines have used Anduril’s drones, and the U.S. military has tested them.
- Shield AI is developing a quadcopter called Nova that specializes in tasks like mapping the interior of buildings or scouting threats. The company also makes a reinforcement learning system for training drones.
- Golden Eagle, a quadcopter from Teal, is geared for surveillance. It uses infrared and visible light cameras to identify and track targets.
Behind the news: The U.S. military and tech industry have a long history of collaboration and cross-pollination. In recent years, however, large tech companies including Google, Microsoft, and Salesforce have faced protests by employees, investors, and the public over their work for the Departments of Defense and Homeland Security.
- Google responded by canceling some defense contracts.
- Some venture capital groups have refused to fund AI that can be weaponized. Others like Andreessen Horowitz, Founders Fund, and General Catalyst support defense-focused startups including Anduril.
- Outside Silicon Valley, grassroots efforts like the Campaign to Stop Killer Robots are working for a global ban on autonomous weapons.
Why it matters: Whether, and to what extent, AI should be used for military purposes is a critical question that grows more pressing as the technology advances.
We’re thinking: Until a ban is in place — one that has clear boundaries and mechanisms for enforcement — profit-seeking companies are sure to develop lethal AI. The Batch, along with more than 100 countries and thousands of AI scientists, opposes development of fully autonomous lethal weapons.