April 13, 2026

How the Pentagon Can Manage the Risks of AI Warfare

This article was originally published in Foreign Policy.

The U.S. military struck more than 13,000 targets in the war on Iran and used artificial intelligence to help plan operations: AI tools synthesized intelligence, helped prioritize targets, and built strike packages. The battle space is changing, and the age of AI warfare is already here. Beyond Iran, AI has been used in real-world operations in Ukraine, Gaza, and Venezuela. Next up is agentic warfare, in which AI systems act as agents, taking actions on their own. Over the next few years, militaries will adopt these AI agents to improve workflows in everything from logistics and maintenance to offensive cyberoperations.

To use AI effectively, militaries will need to not only harness the promise of AI but also grapple with its limitations and risks.

Given these capabilities, AI has the potential to dramatically change the cognitive speed and scale of warfare. Yet military AI comes with profound risks. The dangers go beyond the use of autonomous weapons, which was one of the sticking points in the recent dispute between the Pentagon and leading AI company Anthropic. General-purpose AI systems such as large language models are prone to novel failure modes, are vulnerable to hacking and manipulation, and have even been shown to lie and scheme against their own users.

Read the full article in Foreign Policy.
