May 24, 2024
Tort Law and Frontier AI Governance
The development and deployment of highly capable, general-purpose frontier AI systems—such as GPT-4, Gemini, Llama 3, Claude 3, and beyond—will likely produce major societal benefits across many fields. As these systems grow more powerful, however, they are also likely to pose serious risks to public welfare, individual rights, and national security. Fortunately, frontier AI companies can take precautionary measures to mitigate these risks, such as conducting evaluations for dangerous capabilities and installing safeguards against misuse. Several companies have started to employ such measures, and industry best practices for safety are emerging.
Frontier AI developers can take precautions to reduce the risks that their most advanced systems will increasingly pose.
It would be unwise, however, to rely entirely on industry and corporate self-regulation to promote the safety and security of frontier AI systems. Some frontier AI companies might employ insufficiently rigorous precautions, or refrain from taking significant safety measures altogether. Other companies might fail to invest the time and resources necessary to keep their safety practices up to date with the rapid pace at which AI capabilities are advancing. Given competitive pressures, moreover, the irresponsible practices of one frontier AI company might have a contagion effect, weakening other companies’ incentives to proceed responsibly as well.
Read the entire article from Lawfare.