March 09, 2026
Recommendations for Securing and Promoting AI Agents
Introduction
AI agents—AI systems that can autonomously plan and execute actions affecting real-world systems with minimal human oversight—are poised to transform American productivity and disrupt the technology landscape. Historically, agents completed tasks quickly and humans remained meaningfully in the loop. Now, extended autonomous operations and sprawling agent-to-agent interactions are increasingly common. CNAS welcomes CAISI's prescient focus on agent-specific security risks.
The core challenge is trust. Many of the underlying security questions are not new. Securing inputs, monitoring actions, and controlling access are long-standing concerns in software deployment. But agents introduce distinct and novel challenges. Traditional software executes predefined instructions; agents interpret inputs and select their own actions. Organizations must therefore extend trust beyond a system's execution to its judgment. The security implications of that shift are significant, and the field lacks mature frameworks for addressing them. Without such frameworks, agent adoption risks a predictable failure mode: it either stalls under excessive caution or advances unchecked until a serious breach forces a disruptive correction.
This response outlines the structural factors that create unique security needs for AI agents, the most promising technical and oversight controls, and where further work and government action are most needed.
1. As a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on policy issues, and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its website annually all donors who contribute.
The Center for a New American Security (CNAS) welcomes the opportunity to respond to the U.S. Center for AI Standards and Innovation (CAISI) Request for Information, "Security Considerations for Artificial Intelligence Agents." This submission reflects the views of the following authors:1
- Ben Hayum, Research Assistant, Technology and National Security Program
- Janet Egan, Senior Fellow and Deputy Director, Technology and National Security Program
- Caleb Withers, Research Associate, Technology and National Security Program
With thanks to Paul Scharre, Vivek Chilukuri, Geoffrey Gertz, William L. Anderson, Andy Wang, and Kunyang Li for their valuable feedback.