October 24, 2019

Artificial Intelligence Research Needs Responsible Publication Norms

After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.

There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the entire field advances. In February, however, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake, or spam content. Accordingly, the company shared a small 117M-parameter version of the model along with sampling code, but announced that it would not share key elements of the dataset, the training code, or the model weights.

Read the full article from Lawfare.

Learn more about the Artificial Intelligence and International Stability Project:

Despite calls from prominent scientists to avoid militarizing AI, nation-states are already using AI and machine-learning tools for national security purposes.

  • Congressional Testimony: "Obstacles and Opportunities for Transformative Change" by Paul Scharre (October 19, 2023)

  • Commentary in Foreign Policy: "Every Country Is on Its Own on AI" by Bill Drexel & Michael Depp (June 13, 2023)

  • Commentary in The Hill: "The Time to Regulate AI Is Now" by Caleb Withers (June 10, 2023)

  • Video: "Is an AI arms race underway?" by Paul Scharre (June 6, 2023)