August 14, 2019

Why Are Deepfakes So Effective?

It's because we often want them to be true.

By Martijn Rasser

Public opinion shifts, skewed election results, mass confusion, ethnic violence, war. All of these events could easily be triggered by deepfakes: realistic-seeming but falsified audio and video made with AI techniques. Leaders in government and industry, and the public at large, are justifiably alarmed. Fueled by advances in AI and spread through the tentacles of social media, deepfakes may prove to be among the most destabilizing forces humankind has faced in generations.

It will soon be impossible to tell with the naked eye or ear whether a video or audio clip is authentic. While propaganda is nothing new, the visceral immediacy of voice and image gives deepfakes unprecedented impact and authority; as a result, both governments and industry are scrambling to develop ways to reliably detect them. Silicon Valley startup Amber, for example, is working on ways to detect even the most sophisticated altered video. You can imagine a day when we can verify the authenticity and provenance of a video by way of a digital watermark.

Developing deepfake detection technology is important, but it is only part of the solution. It is the human factor, weaknesses in our psychology rather than the fakes' technical sophistication, that makes deepfakes so effective. New research hints at how foundational the problem is.

After showing more than 3,000 adults fake images accompanied by fabricated text, a group of researchers reached two conclusions. First, the more online experience and familiarity with digital photography a participant had, the more skeptical that person was when evaluating the information. Second, confirmation bias, the tendency to interpret new information in ways that support our pre-existing beliefs, was a major factor in how people judged the veracity of the fake information.

Read the full article in Scientific American.
