June 30, 2023
Weighing the Risks: Why a New Conversation is Needed on AI Safety
A high-profile debate has been playing out in the media over the safe and responsible use of artificial intelligence (AI), kicked off by the Future of Life Institute’s “pause petition” calling for a halt to the development of the most advanced AI systems. The petition took a wide-ranging view of AI safety and was soon joined by other arguments raising more specific concerns: worker protection, social inequality, the emergence of “God-like AI,” and the survival of the human race.
AI researchers, for their part, must go further than government regulators require in developing models safely and responsibly.
In response to concerns about AI safety, U.S. President Joe Biden met last month with the CEOs of frontier AI labs and Congress held hearings on AI in government and AI oversight. These conversations have been echoed around the world, with the United Kingdom planning to host the first global summit on AI this fall.
But as the world focuses more on regulation, it is important not to miss the forest for the trees. AI poses different types of risks in the short and long term, and different stakeholders are best placed to mitigate each: existing problems that AI exacerbates, new problems that AI creates, and risks arising from uncontrollable AI systems.
Read the full article from Just Security.