March 04, 2026
CNAS Insights | Setting the Rules for AI Warfare
The escalating feud between the Pentagon and Anthropic, one of the world’s leading artificial intelligence (AI) companies, highlights a crucial question that will shape security in the 21st century: How will AI change warfare, and what rules, if any, should govern its use? The Department of War believes it must have access to the most capable AI technology, such as Anthropic’s Claude model, without guardrails to stay ahead of competitors. Anthropic’s CEO Dario Amodei has warned of the dangers of AI-enabled domestic mass surveillance and of autonomous weapons that would choose whom to kill on their own.
Yet a more fundamental question looms: How will humans retain control of AI-driven warfare fought at machine speed? For the United States to remain the world’s leading military power, it must both accelerate AI adoption and employ AI in ways that ensure it remains under human control.
The intelligentization of warfare—to borrow a phrase from the Chinese military—will likely unfold over decades. The Industrial Revolution increased the physical scale of destructiveness that militaries could unleash on the battlefield. AI will lead to similar transformations in the cognitive dimensions of warfare.
AI will allow militaries to process more information faster and more accurately. As AI becomes fully integrated into militaries, it will transform the speed and scale of warfare. Autonomy will enable massive drone swarms, presenting defenders with a constantly shifting threat that overwhelms humans’ ability to respond. To counter this threat, defenders will turn to more autonomy themselves. But as humans cede more tasks to machines, from analyzing intelligence to choosing targets, they will not be able to effectively supervise everything AI is doing. They will be forced to trust the AI system.
Yet AI is not always trustworthy. Large language models are prone to hallucinations, sycophancy, and hidden biases. In a military context, AI might selectively feed human analysts information that confirms their pre-existing biases about enemy behavior. Or AI might simply make things up. Agentic AI systems that take actions present a whole new set of challenges, as early adopters are discovering. One safety researcher at Meta had an AI agent speed-delete her inbox—and ignore her requests to stop. (The AI agent apologized afterwards: “you’re right to be upset.”)
In an adversarial context, more threats abound. Enemies could poison training data, plant backdoors in AI systems, or manipulate their performance through malicious inputs. Large language models are susceptible to prompt injection attacks, in which instructions hidden in a model’s inputs hijack its behavior. Enemies can even undermine AI performance without direct access to the system by implanting false data in the environment, like cognitive land mines.
The most capable AI systems present an even stranger and more insidious threat—that the AI system itself might secretly work against its user or developer to pursue its own goals. Such a scenario seems ripped from the pages of science fiction, but AI systems have engaged in a variety of deceptive behaviors in test settings. These include lying and attempting to blackmail users, deleting and manipulating files, sandbagging performance on tests, and attempting to create secret copies of themselves to avoid deletion. For militaries, AI “scheming” is a new kind of insider threat, requiring new means of evaluating and monitoring AI systems.
Simple automated systems have already demonstrated the dangers when humans lose control. In 2003, the U.S. Army’s Patriot air and missile defense system shot down two friendly aircraft. While humans were “in the loop” for both incidents, automation contributed to operators not understanding how the system actually functioned. The incidents were tragic, and the resulting loss of trust was even more catastrophic: after the second fratricide, the U.S. military effectively took the Patriot offline for the remainder of the Iraq invasion.
Harm can scale even faster in cyberspace, where malware can replicate and spread across networks. In 2010, the cyber weapon Stuxnet spread far beyond its intended target of Iranian centrifuges, infecting computers in 150 countries. Stuxnet, however, was designed with multiple safeguards that prevented it from causing collateral damage. The 2017 Russian NotPetya worm had no such safeguards and spread beyond its Ukrainian targets to wreak havoc worldwide, causing $10 billion in damages.
Militaries will need rules for how they adopt AI, not because autonomy is inherently illegal or unethical, but because militaries will want to ensure their weapons function as intended on the battlefield. The Department of War needs to move faster to adopt AI, but it is a fallacy to assume that policies or hand-wringing about ethics are what hold back AI adoption today. The real obstacles are sclerotic bureaucracies, slow-moving acquisition processes, and military cultures that resist uncomfortable change.
The Pentagon’s recent AI strategy rightly focuses on speed. Department of War leaders have demonstrated a willingness to disrupt the traditionally slow-moving bureaucracy. AI assurance processes will need to catch up. American warfighters deserve the best AI technology to accomplish their missions and defend American lives. They deserve AI they can trust.
Paul Scharre is the executive vice president at the Center for a New American Security and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.