October 30, 2023

CNAS Responds: White House Executive Order on Artificial Intelligence

Today, the White House released an executive order detailing a new approach to artificial intelligence safety and security that seeks to mitigate future risks and keep America at the forefront of the global AI competition. CNAS experts respond to the sweeping order and explain what its provisions mean for technology and national security.

All quotes may be used with attribution. To arrange an interview, email Alexa Whaley at awhaley@cnas.org.

Paul Scharre, Executive Vice President and Director of Studies:

This executive order takes a significant step forward to advance AI safety, establishing wide-ranging regulations on AI models across a host of applications and industries. One of the most consequential moves is a requirement for companies to notify the government when training powerful foundation models and to share the results of their red-team safety tests. Additionally, the National Institute of Standards and Technology will establish red-teaming standards. Together, these steps will help ensure that the most powerful AI systems are rigorously tested for safety before public deployment. As AI labs continue to train ever-more-powerful AI systems, these are vital steps to ensure that AI development proceeds safely.

Vivek Chilukuri, Senior Fellow and Director, Technology and National Security Program:

The Biden administration’s executive order is a forceful and far-reaching attempt to reassert American leadership on AI governance ahead of this week’s UK summit. This is the administration’s biggest step yet to lead by example and offer an American model for responsible AI development and deployment that balances opportunity, risk, and rights. In doing so, the order advances the urgent but still incomplete work of offering the world a compelling alternative to China’s authoritarian model of AI, rooted in mass surveillance and social control. For example, the order prioritizes support for privacy-preserving technologies, which could allow advanced AI to train on data without compromising the privacy of that data; creates clear standards for agencies to use AI while protecting rights and safety; intensifies U.S. efforts to shape international AI standards; and promotes responsible, rights-affirming AI development and deployment abroad. All of this points to an administration that takes seriously America's responsibility to show the world how to unlock the benefits of AI without trading away core democratic values.

Tim Fist, Fellow, Technology and National Security Program:

The AI executive order tries to include something for everyone: safety, security, privacy, consumer protections, addressing job displacement, innovation, international leadership, and government use of AI. Our work focuses on AI safety and security, so I'll zoom in there.

At a high level, any artificial intelligence developer working on a sufficiently powerful “foundation model” (a large AI system with general-purpose capabilities) must notify the government and share the results of any safety testing they plan to do. It's unclear from the text released so far whether the government will actually be able to block the release of sufficiently unsafe models using its existing powers, but these measures will at minimum incentivize companies to move in a safer direction. The threshold for model capabilities will be based on the amount of computation used to develop them.

The executive order also directs agencies to build out the infrastructure (standards, tools, tests) required to evaluate powerful models before they are deployed. For example, the Departments of Energy and Homeland Security are tasked with exploring risks that these models could pose in the chemical, biological, nuclear, and cybersecurity domains. This is combined with measures in the biological space that target other inputs to weaponization: specifically, screening synthesized biological material.

Keep in mind that while executive orders are a flexible tool that allow the White House to act quickly, they cannot create new regulatory authorities beyond those granted by the Constitution or Congress. There's a lot of great stuff in this executive order, but anyone hoping for more toothy regulation of AI will likely need to wait for Congress to act.

Josh Wallin, Fellow, Defense Program:

Today’s executive order is a first step toward addressing the risks that AI presents across a range of possible use cases. While significant public attention has been focused on large “frontier models,” such as the large language models underpinning tools like ChatGPT, the executive order’s wide scope considers the dangers presented by AI tools across industries and applications, from disinformation and criminal justice to critical infrastructure and biosecurity. The executive order also directs the development of a National Security Memorandum to support the safe and effective use of AI in countering adversary military systems. This follows a string of initiatives across the defense community in recent years, such as the Department of Defense’s Responsible AI Strategy and Implementation Pathway.

In the lead-up to the UK’s AI Safety Summit later this week, the executive order calls out the need for greater engagement in international bodies to develop AI safety standards. While efforts like the OECD’s AI Principles have laid the groundwork for responsible international AI development, drafting rigorous standards through bodies like the International Organization for Standardization or the UN’s International Telecommunication Union could enable interoperability of AI systems and tackle risks that are expected to cross borders. What remains to be seen is whether this work can be accomplished solely through existing bodies, or whether new agencies will need to be established, such as what some have termed an “International Atomic Energy Agency” for AI to monitor and restrict the development of very large AI models.

Bill Drexel, Associate Fellow, Technology and National Security Program:

The White House’s new executive order on AI is most striking for its comprehensiveness. Its emphases span standards development, privacy, civil rights, consumer protection, labor issues, innovation competitiveness, government use, American leadership abroad, and more. Among its most promising points is a proactive focus on speeding government adoption of AI by accelerating the hiring of AI experts and helping agencies acquire relevant tools and services. Successfully implementing this goal will be difficult given the highly competitive AI job market and the political complexities of high-skilled immigration, but the impact could be tremendous if done well. Another promising area is working to better deploy AI abroad for sustainable development. This will be particularly important as China increasingly attempts to position itself as a champion for the Global South on AI, with initiatives already underway in BRICS fora and Belt and Road training projects focused on diffusing AI. To fully realize its ambition of pioneering a robust, democratic AI ecosystem, the United States will also have to be competitive in helping other countries build AI ecosystems resonant with democratic values, lest China seize the initiative with its own AI agenda.

Sam Howell, Research Associate, Technology and National Security Program:

The Biden administration’s executive order on artificial intelligence addresses a critical national security vulnerability: the United States does not have access to the STEM talent required to maintain global technology leadership.

STEM talent—from non-degree-holding technicians to PhD-level scientists and engineers—is necessary to invent, scale, and commercialize new technologies and ensure their responsible and safe deployment. The U.S. National Security Commission on Artificial Intelligence identifies STEM talent as “the most important driver of progress” in AI innovation and development. Yet the United States maintains concerning STEM talent gaps.

The United States is projected to face a shortfall of nearly 2 million STEM workers by 2025, due in part to insufficient processes to develop future STEM talent. U.S. students consistently underperform on standardized math and science tests and demonstrate low STEM degree completion rates. The United States also increasingly struggles to attract and retain international STEM talent. International enrollments at U.S. universities have steadily declined since 2016, with many students citing anticipated immigration challenges as a key factor in their decision not to study in the United States. Sixty percent of U.S.-trained international AI PhD students who left the country after graduation said that visa issues weighed heavily on their decision.

The Biden administration’s new executive order takes important steps toward reversing these trends. The executive order aims to ease barriers to entry for high-skilled workers seeking U.S. employment by streamlining visa criteria and interviews. The White House is also launching AI.gov to connect high-skilled job candidates to AI-relevant opportunities in the federal government. The executive order complements the administration’s recent proposed changes to the H-1B visa program, which aim to improve the program’s efficiency and integrity and add benefits and flexibility for applicants.

Such reforms are no longer a choice, but a necessity. Restrictive immigration policies enable other countries to outcompete the United States in securing the best STEM talent. To safeguard the United States’ technological edge in AI and other emerging technology areas, U.S. policymakers—in the executive branch and on Capitol Hill—must build on the executive order’s momentum and pursue meaningful high-skilled immigration reform.

Hannah Kelley, Research Associate, Technology and National Security Program:

The new AI executive order is as ambitious as it is long. I am excited to see a comprehensive approach to the issue set, and especially pleased to see a largely affirmative vision for the realization of AI’s benefits. While the administration rightfully emphasizes safety and security considerations at the top, the U.S. must also recognize the potential that AI has to bolster equality, efficiency, and economic competitiveness at home and abroad—if developed and deployed responsibly. This recognition will not only keep U.S. innovators and consumers energized and engaged with the technology, but will also help garner international support for the United States’ vision of AI for democracy and international stability. We'll see how this affirmative vision plays out in practice—but maintaining and expanding U.S. leadership in AI, and in setting the rules of the road that will govern AI development and deployment in the future, will require careful navigation between the perils and promise of the technology. This EO is a solid step in that direction.

Michael Depp, Research Associate, AI Safety and Stability Project:

It is heartening to see the White House take such a wide breadth of AI concerns so seriously with this executive order. It is clear the Biden administration sees AI as a critical issue and is working hard to balance the many competing priorities it presents.

Two areas that will be especially valuable to watch are the focus on advancing American leadership abroad and the balance between innovation and competition. The United States has pitched itself as a global leader on AI issues, but its statements about international engagement will need to be backed by something concrete; it is not enough to simply keep claiming a leadership role. New initiatives or proposals with agreement from other countries will need to arrive soon. On innovation and competition, it is good to see the White House attempt to strike a balance between leveraging large tech companies for global leadership on AI and fostering an open ecosystem that promotes competition. This balance will be difficult to maintain, so the early commitment to the idea is valuable.


Authors

  • Paul Scharre

    Executive Vice President and Director of Studies

    Paul Scharre is the Executive Vice President and Director of Studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence...

  • Vivek Chilukuri

    Senior Fellow and Director, Technology and National Security Program

    Vivek Chilukuri is the Senior Fellow and Program Director of the Technology and National Security Program at CNAS. His work focuses on the responsible development and deployme...

  • Tim Fist

    Fellow, Technology and National Security Program

    Tim Fist is a Fellow with the Technology and National Security Program at CNAS. His work focuses on the governance of artificial intelligence using compute/computing hardware....

  • Josh Wallin

    Fellow, Defense Program

    Josh Wallin is a Fellow in the Defense Program at CNAS. His research forms part of the Artificial Intelligence (AI) Safety & Stability Project, focusing on technology and ...

  • Bill Drexel

    Associate Fellow, Technology and National Security Program

    Bill Drexel is an Associate Fellow for the Technology and National Security Program at CNAS. His work focuses on the risks of artificial intelligence applications in national ...

  • Sam Howell

    Research Associate, Technology and National Security Program

    Sam Howell is a Research Associate with the Technology and National Security Program at CNAS. Her research interests include quantum information science, semiconductors, and t...

  • Hannah Kelley

    Research Associate, Technology and National Security Program

    Hannah Kelley is a Research Associate with the Technology and National Security Program at CNAS. Her work focuses on U.S. national technology strategy and international cooper...

  • Michael Depp

    Research Associate, AI Safety and Stability Project

    Michael Depp is a Research Associate supporting the center’s initiative on artificial intelligence safety and stability. Before joining CNAS, he was a junior fellow and progra...