October 24, 2019

Artificial Intelligence Research Needs Responsible Publication Norms

By Rebecca Crootof

After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.

There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the entire field moves forward. However, in February, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake, or spam content. Accordingly, the company shared a small, 117-million-parameter version of the model along with sampling code but announced that it would not share key elements of the dataset, the training code, or the full model's weights.
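To make concrete what even the limited release allowed, here is a minimal sketch of sampling from the small GPT-2 model. It assumes the Hugging Face transformers library's later port of the released checkpoint (published under the name "gpt2"), not OpenAI's original release code:

```python
# Minimal sketch: generating text with the small GPT-2 checkpoint.
# Assumes the Hugging Face "transformers" port of the released model
# ("gpt2" is the small model OpenAI originally described as 117M);
# OpenAI's own February 2019 release shipped its own sampling code.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A short prompt in the spirit of OpenAI's unicorn demo.
prompt = "Scientists discovered a herd of unicorns living in the Andes Mountains."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation with top-k truncation, the sampling strategy
# OpenAI described using for its published GPT-2 samples.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even this small checkpoint produces fluent continuations; the dispute was over withholding the larger models' weights, not the sampling technique itself.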

Read the full article from Lawfare.

Learn more about the Artificial Intelligence and International Stability Project:

Despite calls from prominent scientists to avoid militarizing AI, nation-states are already using AI and machine-learning tools for national security purposes. AI has the pote...

  • Commentary • War on the Rocks • May 5, 2020
    AI & Military Procurement: What Computers Still Can’t Do

    Not all artificial intelligence (AI) is made equal. A wide range of different techniques and applications fall under the term “AI.” Some of these techniques and applications w...

    By Maaike Verbruggen

  • Commentary • C4ISRNET • April 30, 2020
    When machine learning comes to nuclear communication systems

    Nuclear deterrence depends on fragile, human perceptions of credibility. As states armed with nuclear weapons turn to machine learning techniques to enhance their nuclear com...

    By Philip Reiner, Alexa Wehsener & M. Nina Miller

  • Commentary • IEEE Spectrum • February 26, 2020
    How Adversarial Attacks Could Destabilize Military AI Systems

    Artificial intelligence and robotic technologies with semi-autonomous learning, reasoning, and decision-making capabilities are increasingly being incorporated into defense, m...

    By Dr. David Danks

  • Commentary • IEEE Spectrum • February 24, 2020
    AI Deception: When Your Artificial Intelligence Learns to Lie

    In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing, or to be more accurate, classifyin...

    By Heather Roff
