New Rules for Military AI

China, the U.S., and other nations want limits on military AI.


Nations tentatively agreed to limit their use of autonomous weapons.

What’s new: Representatives of 60 countries endorsed a nonbinding resolution that calls for responsible development, deployment, and use of military AI. Parties to the agreement include China and the United States but not Russia.

How it works: The resolution came out of the first-ever summit on Responsible Artificial Intelligence in the Military Domain (REAIM), hosted in The Hague by the governments of South Korea and the Netherlands. It outlines how AI may be put to military uses, how it may transform global politics, and how governments ought to approach it. It closes by calling on governments, private companies, academic institutions, and non-governmental organizations to collaborate on guidelines for responsible use of military AI. The countries agreed that:

  • Data should be used in accordance with national and international law. An AI system’s designers should establish mechanisms for protecting and governing data as early as possible.
  • Humans should oversee military AI systems. All human users should know and understand their systems, the data they use, and their potential consequences.
  • Governments and other stakeholders, including academia, private companies, think tanks, and non-governmental organizations, should work together to promote responsible uses of military AI and develop frameworks and policies.
  • Governments should exchange information and actively discuss norms and best practices.

Unilateral actions: The U.S. released a 12-point declaration covering military AI development, deployment, governance, safety standards, and limitations. Many of its points mirrored those in the agreement, but it also called for a ban on AI control of nuclear weapons and clear descriptions of the intended uses of military AI systems. Separately, China called on governments to develop ethical guidelines for military AI.

Behind the news: In 2021, 125 of the United Nations’ 193 member nations sought to add AI weapons to a pre-existing resolution that bans or restricts the use of certain weapons. The effort failed due to opposition by the U.S. and Russia.

Yes, but: AI and military experts criticized the resolution as toothless and lacking a concrete call to disarm. Several denounced the U.S. for opposing previous efforts to establish binding laws that would restrict wartime uses of AI.

Why it matters: Autonomous weapons have a long history, and AI opens possibilities for further autonomy, up to the point of deciding to fire on targets. Fully autonomous drones may have been first used in combat during Libya’s 2020 civil war, and drones with similar capabilities reportedly have been used in the Russia-Ukraine war. Such deployments risk making full autonomy seem like a normal part of warfare and raise the urgency of establishing rules to rein it in.

We’re thinking: We salute the 60 supporters of this resolution for taking a step toward channeling AI into nonlethal military uses such as enhanced communications, medical care, and logistics.
