In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system had falsely reported incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system. But what will happen as the AI algorithms deployed in the nuclear sphere become more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.
Listen to the full conversation here.