June 21, 2023

CNAS Responds: Deputy Sec. Def.: What the Pentagon Thinks about Artificial Intelligence

Last week, Deputy Secretary of Defense Kathleen Hicks wrote the op-ed What the Pentagon Thinks About Artificial Intelligence, shedding light on the Pentagon's perspective on the potential risks of artificial intelligence usage, particularly among great powers. Experts from CNAS's AI Safety and Stability Project provided their insights and analysis on the opinion piece and its consequential implications for the future of artificial intelligence in national security.

All quotes may be used with attribution. To arrange an interview, email Alexa Whaley at [email protected].


Josh Wallin, Fellow, Defense Program

Deputy Secretary Hicks’ editorial serves as one more piece in a whole-of-government push to advance international norms on the responsible use of AI. Along with the State Department’s Political Declaration on Military AI and recent Congressional interest in the use of AI in nuclear command and control, the DoD is openly and loudly positioning itself as a responsible player in the employment of autonomous systems. The department’s autonomous weapons policy was written just as the deep learning revolution began, demonstrating a standing recognition within the department of how game-changing this technology can be.

Still, the DoD can go further to ensure that potentially lethal AI is employed as safely as possible. While methods for testing and evaluating AI systems are still maturing, technical information sharing with other major players like China would benefit both nations by ensuring that autonomous systems are responsibly developed and compliant with existing international law. Engagement between American and Chinese engineers will not hurt U.S. competitiveness or the dominance of the world’s finest fighting force; instead, it will help to avoid the very AI dangers that Deputy Secretary Hicks references.

Bill Drexel, Associate Fellow, Technology and National Security

Deputy Defense Secretary Kathleen Hicks has rightly pointed out a growing gap between the United States and China on safeguards for AI in military use. Contrary to popular concerns that militaries are careening toward dangerous AI-powered weapons, the American military can rightly claim to have been at the forefront of thinking about AI ethics and safety. The Defense Department was well ahead of the curve when in 2012 it published a responsible use policy for autonomous systems and AI. The United States has more recently adopted AI ethics principles and started implementing a responsible AI strategy, as well as taking a public stand on ensuring that humans remain in the loop for all steps that go into the decisions and actions taken in the nuclear domain.

The United States is clearly hoping that China and others will reciprocate with similar actions, beginning with promising that humans will remain firmly in charge of nuclear weapons on their side, too—seemingly low-hanging fruit. But China has yet to offer that assurance and has recently avoided political and scientific crisis-mitigation mechanisms. Seeking out common sense opportunities like nuclear command and control with which to start building confidence between the United States and China on AI is wise, but the path ahead is long and uncertain. China maintains outsized risks of AI-related disasters, and is not showing signs of wanting to cooperate further.

Michael Depp, Research Associate, AI Safety and Stability Project

It is interesting to see the DoD go on such a charm offensive over something that takes up less than 1 percent of the defense budget. It is heartening that the DoD has spent so much time thinking about the ethical and policy implications of AI in defense infrastructure before the integration actually begins in earnest. The consistent messaging over the last few years is good reinforcement, and I am glad to see another direct mention of the commitment to avoiding autonomous nuclear launches, which should be a focus for building international consensus.

I don't think the challenge that the deputy secretary of defense lays down for China to join the United States in this effort will yield much, but that isn't really the point of it. This document in effect talks about a coalition of the already-willing by highlighting the growth of partnerships with the United States on AI; being open to China joining is more about reassuring those partners than it is about China.

All CNAS experts are available for interviews. To arrange one, contact Alexa Whaley at [email protected].
