Iran hit at least three Amazon data centers in the Middle East, underscoring AI infrastructure’s critical role in the United States’ war against Iran and possibly marking the first time such facilities have been targeted in wartime.
What happened: Iranian drones damaged an Amazon Web Services (AWS) facility in Bahrain and two in the United Arab Emirates (UAE), disrupting online services including banking, payments, ride sharing, food delivery, and business software. The U.S. military uses AWS to run the unclassified version of Anthropic’s Claude and possibly other computing systems, but the military didn’t disclose whether the attacks affected its operations.
Drone attacks: Early on March 1, drones struck two AWS data centers in the UAE, and the Bahrain data center suffered damage shortly afterward. Amazon said the Bahrain attack was “a drone strike in close proximity to one of our facilities,” while Iran said it had targeted the facility “to identify the role of these centers in supporting the enemy’s military and intelligence activities,” according to Iran’s state-controlled Fars News Agency via the messaging service Telegram.
- The data centers suffered structural damage, power disruptions, and water damage caused by firefighters, which resulted in service outages and higher-than-normal error rates. As of March 3, Amazon recommended that its cloud-computing customers back up data and move workloads from AWS Middle East Region to the U.S., Europe, or Asia Pacific.
- The attacks put at risk trillions of dollars of investments to build AI hubs in the Persian Gulf region, The New York Times reported.
- Member nations of the Gulf Cooperation Council, an economic union and military alliance that includes Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates, host 2.0 gigawatts of data center capacity, with an additional 0.4 gigawatts planned, Business Insider reported.
Behind the news: The risk to data centers reflects AI’s growing role in warfare. Despite restrictions on some defense uses of Anthropic’s Claude large language model, U.S. forces routinely use Claude and other AI systems for a variety of purposes in Iran and elsewhere. For its part, Iran uses weaponized drones that have some degree of autonomy.
- Claude was part of a system that helped select more than 1,000 targets during the initial 24 hours of the U.S. war on Iran, enabling U.S. forces to vastly accelerate the pace of strikes, The Washington Post reported. Claude is integrated with Maven Smart System (MSS), a system for targeting and logistics built by Palantir. To avoid errors, human analysts check the system’s output in life-and-death situations. In exercises, MSS reduced the targeting process from 12 hours to less than 1 minute and achieved results with a staff of 20 that previously required 2,000, Army Times reported. Claude/MSS played a role in the January operation that captured Venezuelan president Nicolás Maduro, but the actions in Iran are its first use in “major war operations.”
- The use of inexpensive drones has become a defining feature of the recent U.S.-Iran war. Iran responded to the initial U.S. bombing campaign with large waves of attack drones — many of them low-cost “kamikaze” designs that navigate autonomously and attack on command — aimed at regional infrastructure, military sites, and U.S. assets. The U.S., in turn, unleashed its own one-way attack drones, including the LUCAS system modeled on Iran’s Shahed-136. This style of warfare draws heavily on Ukraine’s innovations developed during the Russia-Ukraine war, where swarms of drones — often coordinated with software and AI — have destroyed tanks, artillery, and logistics targets.
Yes, but: As AI accelerates military decision-making, it also raises the risk of deadly mistakes. For example, during the initial wave of air strikes on Iran, a bomb destroyed a school, killing more than 170 people, mostly children. Preliminary findings of a subsequent investigation indicate that U.S. forces likely dropped the bomb. Out-of-date target data may have contributed to the error: the building had been part of a nearby naval base roughly 15 years earlier.
Why it matters: The sharp rise of AI-enabled warfare marks a shift in the pace of combat from human to machine speed. AI makes it practical to plan missions by running vast numbers of simulations to identify the actions most likely to succeed. It accelerates battlefield decisions and actions while potentially thinning the so-called fog of war that obscures conditions on the ground. Missions that once were impractical for lack of human capacity to analyze floods of battlefield communications, imagery, and other information become viable. This acceleration could shorten some phases of conflict, but it also adds pressure to make snap decisions that could have catastrophic consequences.
We’re thinking: AI-generated recommendations don’t remove the need to verify intelligence, question assumptions, and weigh the moral and strategic consequences of using force.