February 24, 2020

AI Deception: When Your Artificial Intelligence Learns to Lie

By Heather Roff

In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing, or more accurately classifying, something incorrectly. Self-driving cars fooled into “thinking” a stop sign is a speed limit sign, pandas classified as gibbons, and voice assistants hijacked by inaudible acoustic commands: these are the examples that populate the narrative around AI deception. One can also point to the use of AI to manipulate a person’s perceptions and beliefs through “deepfake” video, audio, and images. Major AI conferences are addressing the subject of AI deception more frequently as well. And yet much of the literature and work on this topic concerns how to fool AI and how to defend against such fooling through detection mechanisms.
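
For concreteness, here is a minimal sketch of the kind of attack behind the panda-to-gibbon example: the fast gradient sign method (Goodfellow et al., 2014), which perturbs an image just enough to change a classifier’s answer. This is an illustration, not anything from the article; the names (model, image, label, epsilon) are placeholders, and real attacks are tuned per model.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        # Compute the loss gradient with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel a small step in the direction that increases
        # the loss. The change is imperceptible to a person, but it can
        # flip the classifier's output (e.g., "panda" to "gibbon").
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

Detection mechanisms of the sort mentioned above try to spot exactly these kinds of subtly perturbed inputs before the model acts on them.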

I’d like to draw our attention to a different and less examined problem: understanding the breadth of what “AI deception” looks like, and what happens when the deception stems not from a human’s intent but from the AI agent’s own learned behavior. These may seem like far-off concerns, as AI remains relatively narrow in scope and can be rather stupid in some ways. Developing some analogue of an “intent” to deceive would be a large step for today’s systems. However, if we are to get ahead of the curve on AI deception, we need a robust understanding of all the ways an AI could deceive. We require some conceptual framework, or spectrum, of the kinds of deception an AI agent may learn on its own before we can start proposing technological defenses.

Read the full article from IEEE Spectrum.

Learn more about the Artificial Intelligence and International Stability Project.