After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.
There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the field as a whole can progress. However, in February, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake, or spam content. Accordingly, the company shared a smaller, 117-million-parameter version of the model along with sampling code but announced that it would not share key elements of the dataset, the training code, or the full model weights.
Read the full article from Lawfare.