March 09, 2026

Recommendations for Securing and Promoting AI Agents

The Center for a New American Security (CNAS) welcomes the opportunity to provide a response to the U.S. Center for AI Standards and Innovation (CAISI) Request for Information: “Security Considerations for Artificial Intelligence Agents.” This submission reflects the views of the following authors:

- Ben Hayum, Research Assistant, Technology and National Security Program

- Janet Egan, Senior Fellow and Deputy Director, Technology and National Security Program

- Caleb Withers, Research Associate, Technology and National Security Program

With thanks to Paul Scharre, Vivek Chilukuri, Geoffrey Gertz, William L. Anderson, Andy Wang, and Kunyang Li for their valuable feedback.

Introduction

AI agents—AI systems that can autonomously plan and execute actions affecting real-world systems with minimal human oversight—are poised to transform American productivity and disrupt the technology landscape. Historically, agents completed tasks quickly enough that humans remained meaningfully in the loop. Now, extended autonomous operations and sprawling agent-to-agent interactions are increasingly common. CNAS welcomes CAISI’s prescient focus on agent-specific security risks.

The core challenge is trust. Many of the underlying security questions are not new. Ensuring secure inputs, monitoring actions, and access control are long-standing concerns for software deployment. But agents introduce distinct and novel challenges. Traditional software executes predefined instructions; agents interpret inputs and select their own actions. Organizations must extend trust beyond a system’s execution to its judgment. The security implications of that shift are significant, and the field lacks mature frameworks for addressing them. Without such frameworks, agent adoption risks a predictable failure mode: it either stalls under excessive caution or advances unchecked until a serious breach forces a disruptive correction.

This response outlines the structural factors that introduce unique security needs for AI agents, the most promising technical and oversight controls, and where further work and government action are most needed.


  1. As a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its website annually all donors who contribute.
