May 17, 2023

CNAS Responds: Oversight of A.I.: Rules for Artificial Intelligence

On May 16, Sam Altman, the CEO of OpenAI, testified alongside other representatives from the artificial intelligence industry before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Researchers from CNAS's AI Safety and Stability Project have provided their insights and analysis on the testimony and its implications for the future of AI regulation in the United States.

All quotes may be used with attribution. To arrange an interview, email comms@cnas.org.

Josh Wallin, Fellow, Defense Program

Tuesday's hearing reiterated the importance of establishing a useful regulatory regime, based on lessons learned from social media. Sam Altman's support for licensing requirements, mandated independent auditing, and clarified liability is consistent with other expert assessments, though the devil will inevitably be in the details. What metrics will be used to determine whether a company requires a license to train its models? How will liability be determined as some models are open-sourced and fine-tuned beyond company (fire)walls? The answers to these sorts of questions will determine the true effectiveness of any policy proposal.

In one brief exchange, Senator Graham inquired about the military applications of AI, invoking DOD's language for describing fully autonomous weapon systems. I would have liked to see more discussion of the use of privately developed AI in military applications, especially as these tools begin to be integrated into planning and some AI companies dodge questions about their future role.

Bill Drexel, Associate Fellow, Technology and National Security

Sen. Dick Durbin (D-IL) captured much of what was unusual about Tuesday’s hearing when he pointed out that he could not “recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them.” Though there were divergences among the panelists on what appropriate rules would look like, the receptivity to regulation among political, industry, and thought leaders in the room was remarkable.

Equally notable was the consensus in the room around the gravity of the social, political, and economic changes that AI will bring, a sea change in lawmakers' perspective, as Sen. Josh Hawley (R-MO) rightly highlighted in his opening remarks. Still, despite growing momentum, developing smart regulation that encourages America's AI sector to effectively mitigate risk, maintain advantage over China, and stoke innovation, including from new startups, is far from straightforward, particularly on the timescale at which the sector is moving.

Michael Depp, Research Associate, AI Safety and Stability Project

Congress has a history of not understanding rapidly changing technology and applying old paradigms to it. It was heartening to see the senators genuinely engaging with the technology and some of its challenges (Senator Coons noting the potential value of constitutional AI is a good example), even if there were some missteps in comparing it to previous technological revolutions. Senator Ossoff challenged the panelists to define AI, which is an important piece to get right early if legislation is to be effective, as we have seen from the EU. There is a very real threat of generative AI becoming the only AI that matters in the minds of lawmakers and the public due to the popularity of these models.

This hearing was a great start, but with senators focused on so many different issues (bias, inclusion, intellectual property, job losses, war, and market concentration), getting to the specifics of a licensing scheme or regulatory agency will likely be much more acrimonious. The danger is that AI follows the same path as cybersecurity and social media, where everyone agrees a threat exists but cannot agree on what it is, and necessary, timely legislation falls by the wayside.

Caleb Withers, Research Assistant, Technology and National Security

Three areas of emerging agreement between senators and those testifying were discernible. First, the magnitude of AI's regulatory challenge necessitates a new regulatory body, and therefore a clear definition of that body's purview. At a minimum, a new regulator should be empowered to tackle issues that existing structures are not well equipped to handle, such as large general-purpose models that lack clearly defined end uses, or models that pose risks significant enough to justify regulatory scrutiny well before they are deployed or distributed.

Second, policymakers must examine how existing liability rules apply to potential harms from AI, and whether changes are needed to address the unique difficulties posed by frontier systems.

Third, the notion of a blanket pause on scaling up AI systems has turned out to be a nonstarter. Discussion quickly coalesced around establishing standards, licenses, and auditing measures for responsible scaling. This approach would seek to set the 'rules of the road' and provide early warnings when it’s time to apply the brakes.

While transparency and trustworthiness were frequently discussed, current cutting-edge models fall well short of these ideals. Two recurring themes of the hearing were the importance of maintaining U.S. AI leadership in both capabilities and values; we must recognize, however, that technical progress in the former is far outpacing the latter. Policymakers will need to provide funding and direction to industry and researchers to ensure that efforts to develop trustworthy AI keep pace, and to lay the groundwork for powerful AI to eventually exhibit trustworthy characteristics with a high degree of confidence.

Policymakers who find themselves skeptical of Altman should call his bluff and, as requested, focus particular regulatory attention on the most advanced models, which, at present, would burden only a handful of very well-resourced labs.

Overall, this hearing hit many of the right notes in building further momentum towards a coherent and effective regulatory framework for the most advanced and impactful AI systems.

All CNAS experts are available for interviews. To arrange one, contact comms@cnas.org.


Authors

  • Josh Wallin

    Fellow, Defense Program


  • Bill Drexel

    Fellow, Technology and National Security Program


  • Michael Depp

    Research Associate, AI Safety and Stability Project


  • Caleb Withers

    Research Assistant, Technology and National Security Program
