March 09, 2026
Recommendations for Securing and Promoting AI Agents
Introduction
AI agents—AI systems that can autonomously plan and execute actions affecting real-world systems with minimal human oversight—are poised to transform American productivity and disrupt the technology landscape. Until recently, agents completed tasks quickly and humans remained meaningfully in the loop. Now, extended autonomous operations and sprawling agent-to-agent interactions are increasingly common. CNAS welcomes CAISI’s prescient focus on agent-specific security risks.
The core challenge is trust. Many of the underlying security questions are not new. Securing inputs, monitoring actions, and controlling access are long-standing concerns for software deployment. But agents introduce distinct and novel challenges. Traditional software executes predefined instructions; agents interpret inputs and select their own actions. Organizations must extend trust beyond a system’s execution to its judgment. The security implications of that shift are significant, and the field lacks mature frameworks for addressing them. Without such frameworks, agent adoption risks a predictable failure mode: it either stalls under excessive caution or advances unchecked until a serious breach forces a disruptive correction.
This response outlines the structural factors that introduce unique security needs for AI agents, the most promising technical and oversight controls, and where further work and government action are most needed.
1. As a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its website annually all donors who contribute.
The Center for a New American Security (CNAS) welcomes the opportunity to provide a response to the U.S. Center for AI Standards and Innovation (CAISI) Request for Information: “Security Considerations for Artificial Intelligence Agents.” This submission reflects the views of the following authors:1
- Ben Hayum, Research Assistant, Technology and National Security Program
- Janet Egan, Senior Fellow and Deputy Director, Technology and National Security Program
- Caleb Withers, Research Associate, Technology and National Security Program
With thanks to Paul Scharre, Vivek Chilukuri, Geoffrey Gertz, William L. Anderson, Andy Wang, and Kunyang Li for their valuable feedback.