June 03, 2019

The New War of Ideas

Counterterrorism Lessons for the Digital Disinformation Fight

By Kara Frederick

Executive Summary

A new battlespace emerged in the post-9/11 counterterrorism era, encompassing the halls of U.S. technology companies and the alleys of Raqqa alike. Today, the United States is engaged in an expansive conflict that requires solutions from the same key players—the private tech industry and the U.S. government. They cannot afford to waste the digital, organizational, and strategic lessons learned from nearly two decades of countering terrorism.

Senator Patrick Leahy (D-VT) questions representatives from Facebook, Twitter, and Google during a U.S. Senate Judiciary Subcommittee Hearing. The October 31, 2017, hearing “Extremist Content and Russian Disinformation Online: Working with Tech to Find Solutions” featured examples of Russian-purchased ads on Facebook. (Drew Angerer/Getty Images)

Learning from specific successes in tech sector and U.S. government counterterrorism efforts will optimize the United States’ collective response to the digital disinformation challenges of the future. Private and public actors should consider five important lessons from countering terrorism: (1) improve technical methods for identifying foreign influence campaign content; (2) increase collaboration among companies; (3) build partnerships between government and the technology sector via public and private analyst exchanges; (4) maintain an offensive posture and devote the resources necessary to keep the adversary on the back foot; and (5) take advantage of U.S. allies’ knowledge.

The following set of recommendations offers opportunities to apply these five lessons to combating foreign influence campaigns. The first two recommendations are aimed at the private technology industry; the third applies to both the tech industry and the U.S. government; and the final two recommendations are directed at U.S. government agencies.

Summary of Recommendations

  • Tech companies should, over the long term, direct a sustainable percentage of engineering capacity to automating the identification of state-sponsored, malign influence campaigns. Companies can leverage existing practices and traditions, like Facebook “hackathons,” to share engineering tasks, build prototypes, and seek new technical fixes for the disinformation problem.1
  • Tech companies should create and fund an enduring disinformation-related consortium among willing companies, modeled after the Global Internet Forum to Counter Terrorism (GIFCT). The goal would be to move toward establishing industry standards on what constitutes disinformation and malign, foreign influence campaigns for U.S. companies.
  • The Office of the Director of National Intelligence (ODNI), in coordination with the private sector, should appoint a body of interagency representatives to create and fund smaller, more forward-leaning fusion cells that integrate public and private sector analysts. Social media companies should lend their threat intelligence analysts (with intelligence agencies providing relevant all-source analysts) to this effort in an enduring dialogue at appropriate levels of classification. If this body meets certain standards of success, the U.S. government should explore appointing a standalone, high-level inter-agency task force to incorporate these cells and assume full responsibility for countering digital foreign influence operations.
  • The executive branch should expand its Cybersecurity Strategy and U.S. Cyber Command’s (CYBERCOM’s) authorities to conduct offensive cyber operations that impose costs on foreign adversaries. However, expanding authorities should stop short of directives to conduct offensive influence operations in foreign countries.
  • The United States should work with democratic allies to exchange best practices from their own efforts in countering foreign influence operations and conducting offensive cyber measures. The United States should use the same convening mechanism to institute a formal method of providing CYBERCOM with the results of this information-sharing and recommendations for action.

Introduction

The future of the world order hinges on influencing populations. While civilians have long been the currency of conflict—from insurgencies to terrorism to information operations—emerging technologies are revolutionizing the influence game. Advances in artificial intelligence, particularly machine learning, stand to weaponize information to exert social control at scale. Authoritarian regimes, such as China’s, have taken advantage of new tools to deepen their hold over their populations, using state-controlled social media accounts, automated bot networks, and facial recognition technology. Foreign actors are attempting to undermine and erode public trust in democratic processes through computational propaganda and microtargeting, and even non-state actors are stoking political tensions through the spread of misinformation online. Such developments, often aimed at the existing liberal order and the institutions that buttress it, portend potential geopolitical upheavals.

Yet an unlikely blueprint to resist this threat exists in the lessons of a different war. The post-9/11 counterterrorism fight offers a roadmap for both public and private organizations to respond to this new information battlespace. In recognition of the terrorist threat, the U.S. government and private businesses mobilized to contest it in both physical and digital landscapes. The degree of seriousness with which the U.S. government took the threat was reflected in its price tag. From 2002 to 2017, the global war on terrorism cost the United States approximately $2.8 trillion in related expenditures and made up almost 16 percent of discretionary spending during that timeframe.2 This paid for a strategy to disrupt and deny threats before they struck home, as the U.S. military undertook operations to confront terrorists in their safe havens abroad.

In concert, the government launched major organizational, legislative, and policy reforms at the federal level. After the release of the 9/11 Commission Report in 2004, President George W. Bush and both the House and Senate instituted a breadth of changes aimed at restructuring the intelligence community to better warn of and respond to terrorist threats.3 On the information-sharing front, the establishment of the Office of the Director of National Intelligence (ODNI) and the National Counterterrorism Center (NCTC) was a hallmark of this reform. In 2005, ODNI began operations with a mission to lead and support intelligence integration within the intelligence community.4 As a mission center within ODNI, NCTC fused foreign and domestic counterterrorism information, conducted terrorism analysis, shared “information with partners across the counterterrorism enterprise, and [drove] whole of government action to secure national counterterrorism objectives.”5 On top of newly improved indications and warnings, lawmakers and the executive branch enacted numerous counterterrorism policies—some aimed at deterrence, others punitive. Most sought to target terrorist funding mechanisms, stem foreign fighter flows into the country, and interdict and prosecute threats to the homeland. Collectively, the USA Patriot Act in 2001, amendments to the long-standing 1978 Foreign Intelligence Surveillance Act, and the establishment of the Department of Homeland Security in 2002 increased penalties for terrorist activities, expanded surveillance measures, and tightened border security.6 The Department of Justice did its part to try offenders under decades-old legislation like 18 U.S.C. §§ 2339A and 2339B, which prohibit the provision of material support to terrorists and designated terrorist organizations. From top to bottom, the federal government coordinated and organized for the fight.
Social media companies followed suit in organizing to combat terrorism. The online distribution of an ISIS video depicting the beheading of U.S. journalist James Foley via YouTube and Twitter in 2014 opened up a new front at companies’ doorsteps.7 Against this backdrop, and amid existing legislation aimed at stemming the terrorist advance, Facebook began meeting with other technology companies to discuss platform-based counterterrorism efforts around 2015.8 In early 2016, White House and interagency officials flew to Silicon Valley to meet with tech leaders—including Apple CEO Tim Cook and representatives from Google, Facebook, Yahoo, and Twitter—to discuss solutions to the spread of terrorism-related content on the internet.9 That year, Alphabet Inc.’s Jigsaw helped confront ISIS online messaging tactics and clean up content on YouTube.10 By 2018, Facebook had hired 7,500 content moderators, a portion of whose job is dedicated to keeping terrorist content off the platform.11 And in the three years since those initial discussions in 2015, Twitter permanently suspended 1.2 million accounts related to violations of the company’s counterterrorism policies.12

The war was on, and tech companies actively worked to make their platforms hostile to terrorist actors. They hired talent to fill gaps in their counterterrorism expertise, created positions to coordinate and oversee global counterterrorism policy, convened relevant players in internal forums, and instituted a combination of technical measures and good old-fashioned analysis to root out offending users and content. Major and minor tech companies coordinated with each other and with law enforcement to share threat information, drafted policies around preventing terrorist abuse of their platforms, updated their community guidelines, and even supported counter-speech initiatives to offer alternative messaging to terrorist propaganda.

The blind transfer of counterterrorism practices to the battle against foreign influence operations would mean fighting yesterday’s war. But certain lessons are critical enough to be repurposed for a different battlefield. Nearly two decades of countering terrorism taught the United States a great deal about how to approach this latest challenge. Five key lessons stand out:13

  1. Improve technical methods for identifying foreign influence campaign content;
  2. Increase collaboration among companies;
  3. Build partnerships between the government and the private sector via analyst exchanges;
  4. Maintain an offensive posture and devote the resources necessary to keep the adversary on the back foot; and
  5. Take advantage of U.S. allies’ knowledge.

These lessons provide the opportunity to fight back against malign foreign influence campaigns. But understanding the breadth and trajectory of the threat is critical to marshaling a response: Foreign attempts to propagate disinformation, amplify political polarization, disclose information, and hack elections persist. The ultimate goal of these actors is to influence the public discourse and undermine democratic institutions. A series of recommendations, aimed at thwarting digital attempts to undermine democracies, will help both social media companies and the U.S. government apply key technical, organizational, and tactical strategies learned in the years following 9/11 to foreign influence campaigns today.


Endnotes

  1. David Zax, “Secrets of Facebook’s Legendary Hackathons Revealed,” Fast Company, November 9, 2012, https://www.fastcompany.com/3002845/secrets-facebooks-legendary-hackathons-revealed
  2. Laicie Heeley, Amy Belasco, Mackenzie Eaglen, Luke Hartig, Tina Jonas, Mike McCord, and John Mueller, “Counterterrorism Spending: Protecting America While Promoting Efficiencies and Accountability,” Stimson Study Group on Counterterrorism Spending (Stimson Center, May 2018); and Aaron Mehta, “Here’s how much the US has spent fighting terrorism since 9/11,” Defense News, May 16, 2018, https://www.defensenews.com/pentagon/2018/05/16/heres-how-much-the-us-has-spent-fighting-terrorism-since-911
  3. The prevailing logic of the U.S. government was that such reforms could help prevent or mitigate the impact of similar terrorist threats to the United States. As of 2019, the mission centers of ODNI include NCTC, the Cyber Threat Intelligence Integration Center, the National Counterproliferation Center, and the National Counterintelligence and Security Center.
  4. Office of the Director of National Intelligence, “Who We Are,” ODNI.gov, https://www.odni.gov/
  5. The National Counterterrorism Center, “Who We Are,” ODNI.gov, https://www.odni.gov/index.php/nctc-home
  6. Department of Justice, “The USA Patriot Act: Preserving Life and Liberty,” Justice.gov, https://www.justice.gov/archive/ll/highlights.htm
  7. SITE Staff, “IS Supporters React to James Foley Beheading Video,” INSITE Blog, August 20, 2014, http://news.siteintelgroup.com/blog/index.php/categories/jihad/entry/238-is-supporters-react-to-james-foley-beheading-video; and Seamus Hughes, “Whose Responsibility is it to Confront Terrorism Online?” Lawfare, April 27, 2018, https://www.lawfareblog.com/whose-responsibility-it-confront-terrorism-online. While concerns over terrorist use of online platforms for propaganda were discussed in 2007 and earlier, the online amplification of Foley’s 2014 murder increased the visibility of the problem for U.S. lawmakers.
  8. Monika Bickert and Brian Fishman, “Hard Questions: Are We Winning the War on Terrorism Online?” Facebook, press release, November 28, 2017, https://newsroom.fb.com/news/2017/11/hard-questions-are-we-winning-the-war-on-terrorism-online; 18 U.S.C. § 2339A, “Providing Material Support to Terrorists,” https://www.justice.gov/jm/criminal-resource-manual-15-providing-material-support-terrorists-18-usc-2339a; and 18 U.S.C. § 2339B, “Providing Material Support to Designated Terrorist Organizations,” https://www.justice.gov/jm/criminal-resource-manual-16-providing-material-support-designated-terrorist-organizations
  9. Aarti Shahani, “U.S. Officials, Tech Leaders Meet To Discuss Counterterrorism,” NPR, January 8, 2016, https://www.npr.org/sections/alltechconsidered/2016/01/08/462385985/u-s-tech-industry-leaders-hold-meeting-on-counterterrorism
  10. Austin Carr, “Can Alphabet’s Jigsaw Solve Google’s Most Vexing Problems?” Fast Company, October 22, 2017, https://www.fastcompany.com/40474738/can-alphabets-jigsaw-solve-the-internets-most-dangerous-puzzles
  11. Alexis C. Madrigal, “Inside Facebook’s Fast-Growing Content-Moderation Effort,” The Atlantic, February 7, 2018, https://www.theatlantic.com/technology/archive/2018/02/what-facebook-told-insiders-about-how-it-moderates-posts/552632
  12. “Twitter bans 270,000 accounts for ‘promoting terrorism,’” The Guardian, April 5, 2018, https://www.theguardian.com/technology/2018/apr/05/twitter-bans-270000-accounts-to-counter-terrorism-advocacy
  13. This section draws from previously published work by the author, namely: Kara Frederick, “How to Defend Against Foreign Influence Campaigns: Lessons from Counter-terrorism,” War on the Rocks, October 19, 2018, https://warontherocks.com/2018/10/how-to-defend-against-foreign-influence-campaigns-lessons-from-counter-terrorism/
  • Kara Frederick

    Associate Fellow, Technology and National Security Program

    Kara Frederick is the Associate Fellow for the Technology and National Security Program at the Center for a New American Security (CNAS).
