October 24, 2019
Artificial Intelligence Research Needs Responsible Publication Norms
After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.
There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the entire field advances. However, in February, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake or spam content. Accordingly, the lab shared a small, 117-million-parameter version of the model along with sampling code but announced that it would not share key elements of the dataset, the training code or the model weights.
Read the full article from Lawfare.
Learn more about the Artificial Intelligence and International Stability Project.