America could “win” the technological race, but only with a national cloud. History illustrates why.
In 1903, Americans crowded along the banks of the Potomac, hoping to witness the first powered plane take flight. Instead, they watched it plummet into the river. The United States government had given Samuel Langley, an established astrophysicist and aeronautical engineer, $50,000 to spearhead the mission. The task seemed impossible. If Langley couldn’t get a heavier-than-air vehicle to fly, who could? But nine days later, two bicycle shop owners, the Wright brothers, achieved exactly that. The world’s first plane went airborne on a budget of about $1,000.
A culture that fosters an innovative spirit and encourages creative minds to take risks underpins America’s technological strength. Access to the right resources, combined with unrelenting tenacity, has let innovators transform their technological ideas into reality. Today, with artificial intelligence (AI) as the new technological frontier, innovation is no different. In 2016, the AI gaming system AlphaGo shocked Go players and AI researchers worldwide when it defeated world champion Lee Sedol, executing unprecedented moves and demonstrating a mastery of strategy. OpenAI—a world-leading AI research organization—developed GPT-3, a language generator that can independently write fiction, and a companion model, Image GPT, that can auto-complete images.
Considering these groundbreaking AI developments are all fairly recent, many experts wonder what the future of AI—specifically deep learning—has in store. AI requires three core ingredients: data, computing power, and algorithms. The possibilities of what AI can do exploded in the 21st century with the surge of available data and the expansion of computing power needed to train complex algorithms. An estimated 90 percent of the world’s data has been generated in just the past two years, and modern computer chips can process 10 trillion calculations per second. But two of these components—data and computing power—could become bottlenecks for future progress.
Data is siloed between and within sectors. There is no uniform system for data scientists across sectors to collect, manage, organize, maintain, and share data. Even within a single organization, data from different offices is stored in separate spaces. This poses a challenge for innovators who may need a dataset for their work but cannot access it, and it artificially limits their creativity: an innovator who cannot find a dataset may simply conclude it does not exist.
The cost of computing power has skyrocketed in recent years. According to OpenAI, the computing power needed to train increasingly complex algorithms is doubling every 3.4 months. The cost of keeping up with this demand places a significant financial burden on universities and small businesses, preventing them from competing on the cutting edge of machine learning research. Take Google’s Meena chatbot, for example, which cost an estimated $1.5 million to train, or GPT-3, which rang up a tab of roughly $12 million. In 2019, researchers at the Allen Institute for AI—the nonprofit founded by Microsoft co-founder Paul Allen—warned that the growing cost of computing power may leave the United States with only “a handful of places where you can be on the cutting edge.”
A national cloud is necessary to lower these barriers. A cloud, essentially, is a virtual computer system that provides storage and computing power to users. A national cloud would let Americans access this virtual computer with nothing more than an internet connection and a device. Rather than sift through scattered data on the internet, innovators could access multiple sources of data, from government to industry, in one location. Data would be federated, meaning that different datasets would coexist without merging, and, where necessary, users would need special permission to access certain datasets. For example, healthcare or criminal justice data may require extra layers of security to ensure privacy, but researchers would still be able to see that a dataset exists and what it contains. Further, computing power could be distributed in the form of grants—through a process similar to how government research grants are assigned today—or subsidized so that small organizations are not financially crippled.
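For readers who want a concrete picture, the access model described above—metadata that is always discoverable, with permissioned reads for sensitive data—can be sketched in a few lines of Python. All names here (`FederatedCatalog`, `Dataset`, and the example datasets) are hypothetical illustrations, not part of any actual proposal.

```python
class Dataset:
    """A dataset that stays with its owner; it is cataloged, never merged."""
    def __init__(self, name, description, sensitive=False):
        self.name = name
        self.description = description
        self.sensitive = sensitive
        self._records = []  # the data itself remains in the owner's silo

class FederatedCatalog:
    """Toy model of a federated catalog: anyone can discover datasets,
    but reading sensitive ones requires an approved access grant."""
    def __init__(self):
        self._datasets = {}
        self._grants = set()  # (user, dataset_name) pairs with approval

    def register(self, dataset):
        self._datasets[dataset.name] = dataset

    def search(self):
        # Metadata is visible to everyone, so no one wrongly
        # concludes a dataset does not exist.
        return [(d.name, d.description) for d in self._datasets.values()]

    def grant(self, user, name):
        self._grants.add((user, name))

    def read(self, user, name):
        # Sensitive data requires an explicit, approved grant.
        d = self._datasets[name]
        if d.sensitive and (user, name) not in self._grants:
            raise PermissionError(f"{user} needs approval to read {name}")
        return d._records

catalog = FederatedCatalog()
catalog.register(Dataset("census-2020", "Aggregate census tables"))
catalog.register(Dataset("clinical-trials", "Patient-level trial data",
                         sensitive=True))

print(catalog.search())  # both datasets are discoverable by anyone
catalog.grant("researcher", "clinical-trials")
print(catalog.read("researcher", "clinical-trials"))  # allowed after grant
```

The design choice the sketch highlights is the separation of discovery from access: the catalog answers “what exists?” for everyone, while a grant process gates “who may read it?” for sensitive holdings.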
There are various options for how Americans could access the cloud. For example, subscription memberships could be offered to universities and research institutes. A vetting and granting process could be set up to review proposals from industry and other innovators who hope to use the cloud’s resources for a novel idea. Security measures could be put in place so access to sensitive data would only be provided to legitimate users.
The idea of a national cloud isn’t new. Stanford’s Institute for Human-Centered Artificial Intelligence proposed a national research cloud last year, and a few months later the U.S. Senate and House introduced bipartisan legislation on the idea, which has drawn support from 21 top tech companies and universities. The House version of the bill has even been included in the fiscal year 2021 National Defense Authorization Act.
But current proposals limit the national cloud to researchers at established universities, businesses, or research institutes. Why stop there? The next greatest AI innovators might not be where researchers or policymakers envision them. The U.S. government did not expect two bike shop owners to create the world’s first powered plane. But the Wright brothers had the resources, knowledge, and creativity to solve a challenge that the most renowned aeronautical engineer at the time couldn’t figure out.
Indeed, open access to a national cloud presents its own challenges. How do we ensure those accessing the cloud don’t have ill intent? Even well-intentioned users can cause harm: seasoned computer scientists know plenty of projects that have gone wrong because of the brittleness of machine learning. Cyberattacks from foreign adversaries, unethical and biased algorithms and data, privacy concerns, and bandwidth overload are other pressing concerns that must guide the design of a national cloud.
The U.S. government shouldn’t create a national cloud from scratch; it has neither the time nor the resources to do so. Instead, a public-private partnership is needed to divvy up the costs and responsibilities of creating, maintaining, and monitoring a national cloud. The onus cannot fall on a single organization or sector. The upfront costs will be steep, but if done correctly, the payoff for America’s security and prosperity will be invaluable.
The recent expansion of online courses, open-source data, and cloud computing allows anyone with a computer and internet access to gain basic machine learning skills and knowledge. But that isn’t enough. We must democratize data and computing power so that every innovative mind in America, not just those at prestigious institutions, has the resources necessary to become the next AI pioneer. The next Wright brothers are out there, and the United States must equip them so the country can lead in the world of AI.
About the Author
Tina Huang is a research analyst at the Center for Security and Emerging Technology (CSET), where she focuses on immigration and the AI workforce and military applications of AI.
Special thanks to everyone at CSET who provided useful insights and feedback, especially Senior Fellow Melissa Flagg who encouraged and supported me throughout the entire process.
About The Pitch
In 2020, the Center for a New American Security (CNAS) launched a premier event to elevate emerging and diverse voices in national security. Selected applicants made their pitch for innovative policy ideas to renew American competitiveness in front of a distinguished panel of judges and live virtual audience at the CNAS National Security Conference on June 24, 2020. Winners included Grace Kim (Best in Show and Military and Defense Heat Winner), Tina Huang (People's Choice Award), Luke Chen (National Security Institutions Heat Winner), Khyle Eastin (Alliances and Multilateralism Heat Winner), and Alan McQuinn (Economics and Technology Heat Winner).