September 13, 2023

To Avoid AI Catastrophes, We Must Think Smaller

Early last year, researchers demonstrated the capacity of an artificial intelligence (AI) drug discovery tool called MegaSyn to generate thousands of novel chemical weapons in a matter of hours. While conversations about AI safety measures proliferate, including a White House-mediated effort at voluntary regulation and an upcoming UK AI summit, systems like MegaSyn remain unchecked in a regulatory conversation that is increasingly focused on long-term, theoretical risks, at the cost of addressing the harms that AI already poses to society.

Incidents like this are not theoretical, nor are they projections of long-term dangers; these AI tools already pose tangible threats to individual health and well-being.

Amid the furor caused by large language models (LLMs) like OpenAI’s ChatGPT, members of Congress, policy researchers, and tech companies have proposed regulations designed to curb some of the greatest dangers presented by highly capable “frontier AI.” Famed computer scientists and contributors to the AI revolution have come forward to warn of emerging extinction risks. To be sure, this technology presents long-term and far-reaching threats to society, but these calls for regulation fall short of addressing some of the most acute dangers.

Read the full article from The Messenger.
