June 30, 2023
Weighing the Risks: Why a New Conversation is Needed on AI Safety
A high-profile debate has been playing out in the media over the safe and responsible use of artificial intelligence (AI), kicked off by the Future of Life Institute’s “pause petition” calling for a pause on the development of the most advanced AI systems. The petition focused broadly on AI safety but was soon joined by other arguments raising more specific concerns about worker protection, social inequality, the emergence of “God-like AI,” and the survival of the human race.
AI researchers, for their part, must go beyond what government regulators require and develop models in safe and responsible ways.
In response to concerns about AI safety, U.S. President Joe Biden met last month with the CEOs of frontier AI labs and Congress held hearings on AI in government and AI oversight. These conversations have been echoed around the world, with the United Kingdom planning to host the first global summit on AI this fall.
But as the world focuses more on regulation, it is important not to lose sight of the forest for the trees. AI poses different types of risks in the short and long term, and different stakeholders are best placed to mitigate existing problems that are exacerbated by AI, new problems that AI creates, and risks arising from uncontrollable AI systems.
Read the full article from Just Security.