March 09, 2026
Recommendations for Securing and Promoting AI Agents
Introduction
AI agents—AI systems that can autonomously plan and execute actions affecting real-world systems with minimal human oversight—are poised to transform American productivity and disrupt the technology landscape. Historically, agents completed tasks quickly and humans remained meaningfully in the loop. Now, extended autonomous operations and sprawling agent-to-agent interactions are increasingly common. CNAS welcomes CAISI's prescient focus on agent-specific security risks.
The core challenge is trust. Many of the underlying security questions are not new. Securing inputs, monitoring actions, and controlling access are long-standing concerns for software deployment. But agents introduce distinct and novel challenges. Traditional software executes predefined instructions; agents interpret inputs and select their own actions. Organizations must therefore extend trust beyond a system's execution to its judgment. The security implications of that shift are significant, and the field lacks mature frameworks for addressing them. Without such frameworks, agent adoption risks a predictable failure mode: it either stalls under excessive caution or advances unchecked until a serious breach forces a disruptive correction.
This response outlines the structural factors that introduce unique security needs for AI agents, the most promising technical and oversight controls, and where further work and government action are most needed.
1. As a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its website annually all donors who contribute.

The Center for a New American Security (CNAS) welcomes the opportunity to respond to the U.S. Center for AI Standards and Innovation (CAISI) Request for Information, "Security Considerations for Artificial Intelligence Agents." This submission reflects the views of the following authors:1
- Ben Hayum, Research Assistant, Technology and National Security Program
- Janet Egan, Senior Fellow and Deputy Director, Technology and National Security Program
- Caleb Withers, Research Associate, Technology and National Security Program
With thanks to Paul Scharre, Vivek Chilukuri, Geoffrey Gertz, William L. Anderson, Andy Wang, and Kunyang Li for their valuable feedback.