Does AI Have a Role in Warfare?

Doves flying over barbed wire

Dear friends,

Russian troops have invaded Ukraine, and the terrifying prospect of a war in Europe weighs on my mind. My heart goes out to all the civilians affected, and I hope we won’t see the loss of life, liberty, or property that many people fear.

I’ve often thought about the role of AI in military applications, but I haven’t spoken much about it because I don’t want to contribute to the proliferation of AI arms. Many people in AI believe that we shouldn’t have anything to do with military use cases, and I sympathize with that idea. War is horrific, and perhaps the AI community should just avoid it. Nonetheless, I believe it’s time to wrestle with hard, ugly questions about the role of AI in warfare, recognizing that sometimes there are no good options.

Full disclosure: My early work on deep learning was funded by the U.S. Defense Advanced Research Projects Agency, or DARPA. Last week, Wired mentioned my early work on drone helicopters, also funded by DARPA. During the U.S.-Iraq war, when IEDs (roadside bombs) were killing civilians and soldiers, I spent time thinking about how computer vision could help robots that dispose of IEDs.

What may not be so apparent is that forces that oppose democracy and civil liberties also have access to AI technology. Russian drones have been found to contain parts made in the U.S. and Europe. I wouldn’t be surprised if they also contain open-source software that our community has contributed to. Despite efforts to control exports of advanced chips and other parts that go into AI systems, the prospects are dim for keeping such technology out of the hands of people who would use it to cause harm.

So I see little choice but to make sure the forces of democracy and civil liberties have the tools they need to protect themselves.

Several organizations have come to the same conclusion, and they’ve responded by proposing principles designed to tread a fine line between developing AI’s capacity to confer advantage on the battlefield and blunting its potential to cause a catastrophe. For example, the United Nations has issued guidance that all decisions to take human life must involve human judgment. Similarly, the U.S. Department of Defense requires that its AI systems be responsible, equitable, traceable, reliable, and governable.

I support these principles. Still, I’m concerned that such guidelines, while necessary, aren’t sufficient to prevent military abuses. User interfaces can be designed to lead people to accept an automated decision — consider the pervasive “will you accept all cookies from this website?” pop-ups that make it difficult to do anything but click accept. An automated system may comply technically with the U.N. guidance, but if it provides little context and time for its human operator to authorize a kill mission, that person is likely to do so without the necessary oversight or judgment.

While it’s important to establish high-level principles, they must be implemented in a way that enables people to make fateful decisions — perhaps the most difficult decisions anyone can make — in a responsible way. I think of the protocols that govern the use of nuclear weapons, which so far have helped to avoid accidental nuclear war. The systems involved must be subject to review, auditing, and civilian oversight. A plan to use automated weapons could trigger protocols to ensure that the situation, legality, and schedule meet strict criteria, and that the people who are authorized to order such use are clearly identified and held accountable for their decisions.

War is tragic. Collectively we’ve invented wondrous technologies that also have unsettling implications for warfare. Even if the subject presents only a menu of unpalatable options, let’s play an active role in navigating the tough choices needed to foster democracy and civil liberties.

Keep learning,

Andrew
