Two weeks ago, a grand jury in Pennsylvania indicted seven Russian intelligence officers for state-sponsored hacking and influence operations. Both U.S. Attorney General Jeff Sessions and FBI Director Christopher Wray affirmed the gravity of the crime. The same day, Vice President Mike Pence warned that China is launching an “unprecedented” effort to influence public opinion ahead of the 2018 and 2020 elections. From Russia to China to Iran, America’s adversaries are increasingly using influence operations — the organized use of information to intentionally confuse, mislead, or shift public opinion — to achieve their strategic aims.
To most Americans, the recent onslaught of influence operations at home may feel like a novel threat. But the reality is that while the battlefield has changed in important ways, nearly two decades of countering terrorism taught the United States a great deal about how to approach this latest challenge.
From 2010 to 2016, I worked as a counter-terrorism analyst supporting special operations forces, with three tours in Afghanistan. I went on to lead an intelligence team at Facebook that focused on counter-terrorism and global security. As such, I’ve had a front-row seat to observe how the government and tech companies dealt with terrorism’s online dimension, and to consider the similarities to today’s state-sponsored disinformation campaigns. Five key lessons stand out: improving technical methods for identifying foreign influence campaign content, encouraging platforms to collaborate, building partnerships between the government and the private sector, devoting the resources necessary to keep the adversary on the back foot, and taking advantage of U.S. allies’ knowledge.
Lesson 1: Hack It
A critical goal in any information battle is rooting out your adversary. In the tech sector, companies like Google, Twitter, and Facebook have employed a combination of methods to identify and address terrorist content. These techniques include automating content identification through machine learning, mitigating the amplification of nefarious content, and reducing anonymity.
Tech firms seeking to root out terrorism on their platforms have trained a variety of “classifiers” to help identify content that violates their terms of service. Companies have experimented with natural language understanding to help machines “understand” this content and categorize it as terrorist propaganda or not. Twitter’s own algorithms flagged all but 7 percent of the accounts it suspended for promoting terrorism in late 2017. And of the 93 percent flagged by machines first, 74 percent were taken down before sending a single tweet. (There are no widely accepted, publicly available indications of how many violating accounts were not caught by Twitter’s internal tools or human review.) Additionally, companies like Microsoft and Facebook bank the text, phrases, images, and videos they characterize as terrorist propaganda and use this data to train their software to recognize similar content before it can proliferate. Finally, companies have reduced anonymity and improved attribution by tightening verification processes (e.g. checking accounts that show signs of automation rather than human control) to combat the automated spread of malign messaging.
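The content-banking approach described above can be sketched in a few lines. This is a simplified illustration, not any company's actual system: the hash bank and its contents are hypothetical, and real deployments use perceptual hashes (such as PhotoDNA or PDQ) that tolerate re-encoding and cropping, whereas the cryptographic hash here only catches exact byte-for-byte copies.

```python
import hashlib

# Hypothetical bank of hashes of content previously labeled as
# terrorist propaganda. Real systems share perceptual hashes across
# companies; SHA-256 here is a stand-in that matches exact copies only.
HASH_BANK = {
    hashlib.sha256(b"known propaganda video bytes").hexdigest(),
}

def matches_bank(content: bytes) -> bool:
    """Return True if this upload matches previously banked content."""
    return hashlib.sha256(content).hexdigest() in HASH_BANK

# A matching upload can be stopped before it proliferates.
assert matches_bank(b"known propaganda video bytes")
assert not matches_bank(b"ordinary user video bytes")
```

The design point is that once one company has characterized a piece of content, every participant checking the shared bank can recognize re-uploads without re-reviewing the material.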
These techniques can be applied directly to policing influence operations on social media platforms. Social media companies should identify the methods that have most effectively made their platforms “hostile” to terrorist content. The three approaches highlighted above — content identification through machine learning, mitigating the amplification of nefarious content, and reducing anonymity — are good starting points. For example, a bank of commonly recycled disinformation campaign terms or phrases can serve as a source for automatically flagging such content for human review. Algorithms modeled on those that detect potential terrorist propaganda, retrained instead to detect bots and track trolls, can help curb the amplifiers of state-led disinformation campaigns.
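The phrase-bank idea in the paragraph above can be illustrated with a minimal sketch. The phrases, function names, and posts are all hypothetical, and a real system would score many signals together rather than match strings; note that a hit only queues a post for human review, it removes nothing automatically.

```python
# Hypothetical bank of phrases observed recycled across known
# disinformation campaigns. In practice this would be far larger and
# combined with account- and network-level signals.
PHRASE_BANK = [
    "crisis actors staged the",
    "secret ballot-stuffing vans",
]

def flag_for_review(post: str) -> bool:
    """Flag a post for human review if it reuses a banked phrase."""
    text = post.lower()
    return any(phrase in text for phrase in PHRASE_BANK)

posts = [
    "Eyewitnesses say crisis actors staged the protest.",
    "City council meets Tuesday to discuss the budget.",
]
flagged = [p for p in posts if flag_for_review(p)]
# Only the first post is routed to reviewers.
```

Keeping a human in the loop for the final call is what distinguishes automated flagging from automated censorship, which is why the article frames these tools as aids to review rather than replacements for it.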
Already, Facebook and Google are implementing practices along these lines, like de-ranking content rated false by third-party fact checkers and recalibrating search algorithms. Twitter’s suspension of 70 million accounts in May and June also signals a commitment to getting this right. However, there is much more to be done. Tech companies should make “hacking” the disinformation problem a genuine priority by directing a percentage of engineering capacity to automating the identification of state-sponsored influence campaigns. This can be incorporated into existing traditions, like a disinformation-themed Facebook “Hackathon,” and will help counter malicious foreign actors seeking to scale their operations using emerging technology.
Read the full article at War on the Rocks.