August 03, 2020

Institutional Roadblocks to the Defense Department’s Adoption of AI

By Megan Lamberth and Martijn Rasser

Introduction

Advancements in artificial intelligence (AI) present remarkable opportunities and novel challenges for America’s national security institutions. As a general-purpose technology analogous to electricity or the internal combustion engine, AI can be particularly transformational for the Department of Defense, enabling new capabilities in areas as varied as warfighting, logistics and maintenance, command and control, surveillance, intelligence collection and analysis, and healthcare.

Achieving that transformation will not be easy. “DoD does not have an innovation problem; it has an innovation adoption problem,” as the authors of a report from the Defense Innovation Board (DIB)—a federal advisory committee of technology experts—aptly observed.

The Bottom Line

  • The DoD faces a particularly daunting obstacle to widespread adoption of AI: persistent institutional resistance to disruptive change within the Pentagon.
  • While resistance to large-scale change is the norm in massive bureaucratic structures, impediments to AI adoption are compounded by a general lack of familiarity with how and in what ways AI can benefit the numerous DoD components.
  • Overcoming these institutional and cultural barriers is one of the foremost challenges to successfully adopting and deploying AI technologies critical to U.S. national security.

In the past several years, the Defense Department has indicated its prioritization of AI research and development through a series of policy documents and initiatives. The 2014 Third Offset Strategy made AI and autonomy its focal point, and the 2018 National Defense Strategy recommended investments in “autonomy, artificial intelligence, and machine learning . . . to gain competitive military advantages.” The department’s first AI strategy, released in early 2019, stressed the essential role humans play in the development and use of AI technologies—a so-called “human-centered adoption of AI.” The strategy emphasized the role of the Joint Artificial Intelligence Center (JAIC) as the hub for “synchronizing DoD AI activities across all DoD Components.” Later that year, Secretary of Defense Mark Esper named AI his top technology priority. In late 2019, the DIB unveiled a list of proposed ethical principles to govern the DoD’s use of artificial intelligence. The department formally adopted the principles earlier this year, an important milestone on the road toward institutionalizing AI technologies.
Despite these promising developments, the Defense Department has struggled to translate written policy into concrete action. The obstacles to adopting and deploying AI technologies in the DoD are numerous and complex. The department has a shortage of skilled employees in computer science and other science, technology, engineering, and math (STEM) fields, and it struggles to attract and retain talent. It also suffers from an arduous and time-consuming acquisition and procurement process, despite greater use of more flexible contract vehicles such as Other Transaction Authorities. These problems make it a challenging client for small tech companies and startups.

In addition to these formidable challenges, the DoD faces three distinct yet interrelated types of institutional barriers to widespread AI adoption: organizational, technological, and conceptual.

The first barrier is rooted in organizational structure. Institutional resistance to disruptive change is nothing new, nor is it unique to the department. As a disruptive technology, AI adoption has met resistance from myriad sources. In a 2019 report, the RAND Corporation found that AI adoption was impeded by “inherent resistance to change . . . concerns about the potential loss of an individual’s value to the organization,” and a general lack of trust in the technology itself. For example, an increase in AI and autonomous systems may be perceived as threatening to a person’s role and identity within the department, whether a human resources specialist or a pilot.

Bureaucratic inertia, stemming in part from deep-rooted institutional and cultural resistance, has hampered DoD’s ability to rapidly develop, acquire, and deploy AI capabilities at scale. The department is a monolithic structure with innumerable missions and competing priorities, all of which limit its capacity to move quickly and decisively. As political scientist Michael Horowitz explains, the “more bureaucratically disruptive it is to adopt a technology, the more challenging it can be for older, more established organizations to do so.” Rooted in this inertia are two other related barriers: a dearth of AI literacy at all levels of the department, and a lack of awareness of past successful AI deployments.
The second barrier is technological: a limited understanding of how AI works. The general immaturity of the highest-profile and most sought-after AI projects creates a mismatch between the capabilities users desire and those actually on offer. The department’s end users and the developers of AI solutions, largely in the private sector, share culpability here. On one hand, inflated expectations about AI capabilities, rooted in misunderstandings about the state of the art, lead to eventual disillusionment when the desired solution proves unrealistic. On the other, industry players worsen the situation by over-promising and under-delivering on a range of AI-related projects, and sometimes lying outright about AI capabilities.

The third barrier is conceptual: what one considers to be AI. To paraphrase a clever quip, when AI works well, people simply call it software. In other words, they tend not to perceive a capability as AI once it is in common use. In the current defense context, AI is often equated with futuristic robotic weapons. This obscures the fact that AI solutions are already in widespread use and have been for some time. Examples include Google’s search engine and machine translation, TurboTax, and Pandora’s song recommendation service. The algorithms underpinning these offerings have direct and indirect applications for the department’s day-to-day work. Specific to DoD, expert systems for decision support were in use during the 1980s; in that same decade, the U.S. Navy deployed the world’s first operational fully autonomous weapon, the Tomahawk anti-ship missile. The Predator unmanned aerial vehicle was used in combat operations over the Balkans in 1995.

Recommendations

Improving communication by senior leadership, realigning organizations and bureaucracies, demystifying techniques that underpin AI systems, and highlighting the department’s past and current success with AI-enabled solutions and systems will help break down institutional resistance to broad-based AI adoption.

The department should employ the following recommendations to help execute its vision of an AI future:

  1. Update DoD’s AI vision to include a clear execution strategy with metrics. While the department’s vision statement strikes the right tone, it lacks the detail needed to act.
  2. Align job duties and promotion criteria to dovetail with department-wide development and adoption of AI solutions. Doing so will be essential to attracting and retaining skilled personnel.
  3. Mandate AI literacy training for all officers, enlisted personnel, and civilian employees, with special emphasis on acquisition specialists.
  4. Implement a top-down approach to AI adoption. As in past major technological changes, DoD senior leadership will play an essential role in overcoming institutional barriers and actualizing the department’s vision for AI.
  5. Showcase the benefits of department-wide AI initiatives. This should include a campaign to highlight the impact on civilian and uniformed personnel of historical and current DoD AI deployments. The campaign should also focus on the objectives and anticipated results of projects currently under way.

The Road Ahead

Although the DoD still has a long way to go in adopting AI at scale, the department has made some promising strides, and senior leaders have vocalized their commitment to the technology. On his last day in uniform, Lieutenant General Jack Shanahan stated, “As I walk out of the Pentagon for the final time, as the Director of the JAIC, I am convinced more than ever that our future national security, economic security, and preservation of those ideals depend on embracing artificial intelligence across every element of the Department of Defense and society.” The Defense Department knows AI is essential for future competitiveness. But to make widespread adoption of it a reality, first DoD needs to address the institutional impediments that obstruct its path.

About the Authors

Megan Lamberth is a Research Assistant with the Technology and National Security Program at CNAS, where Martijn Rasser is a Senior Fellow.

Learn More

From July to December 2020, CNAS will release new papers every week on the tough issues the next NDS should tackle. The goal of this project is to provide intellectual capital to the drafters of the 2022 NDS, focusing specifically on unfinished business from the past several defense strategies and areas where change is necessary but difficult.

The Next Defense Strategy

About this commentary series: Regardless of who wins the next presidential election, by statute the DoD must deliver a new National Defense Strategy (NDS) to Congress in 2022. ...

Endnotes
  1. James Pethokoukis, “How AI Is Like That Other General Purpose Technology, Electricity,” AEIdeas blog, American Enterprise Institute, November 25, 2019, https://www.aei.org/economics/how-ai-is-like-that-other-general-purpose-technology-electricity/. Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review, 1 no. 3 (May 2018), https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/. Congressional Research Service, Daniel S. Hoadley and Kelley M. Sayler, Artificial Intelligence and National Security, CRS Report No. R45178 (November 21, 2019), https://fas.org/sgp/crs/natsec/R45178.pdf. Patrick Tucker, “Spies Like AI: The Future of Artificial Intelligence for the U.S. Intelligence Community,” Defense One, January 27, 2020, https://www.defenseone.com/technology/2020/01/spies-ai-future-artificial-intelligence-us-intelligence-community/162673/.
  2. Patrick Tucker, “Here’s How to Stop Squelching New Ideas, Eric Schmidt’s Advisory Board Tells DoD,” Defense One, January 17, 2018, https://www.defenseone.com/technology/2018/01/heres-how-stop-squelching-new-ideas-eric-schmidts-advisory-board-tells-dod/145240/.
  3. Secretary Chuck Hagel, “A Game-Changing Third Offset Strategy,” War on the Rocks, November 17, 2014, https://warontherocks.com/2014/11/a-game-changing-third-offset-strategy/. Department of Defense, Summary of the 2018 National Defense Strategy of the United States of America, 7, https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf.
  4. Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 17, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF. Megan Lamberth, “The White House and Defense Department Unveiled AI Strategies. Now What?” C4ISRNET, February 27, 2019, https://www.c4isrnet.com/opinion/2019/02/27/the-white-house-and-defense-department-unveiled-ai-strategies-now-what/.
  5. Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 9.
  6. Jane Edwards, “DoD Nominee Mark Esper Cites AI As Top Modernization Priority,” ExecutiveGov, July 17, 2019, https://www.executivegov.com/2019/07/dod-nominee-mark-esper-cites-ai-as-top-modernization-priority/.
  7. Defense Innovation Board, “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense” (October 31, 2019), https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF.
  8. “DoD Adopts Ethical Principles for Artificial Intelligence,” Department of Defense, press release, February 24, 2020, https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.
  9. Danielle C. Tarraf, William Shelton, Edward Parker, Brien Alkire, et al., “The Department of Defense Posture for Artificial Intelligence,” Report No. RR4229 (RAND, December 2019), 120, https://www.rand.org/content/dam/rand/pubs/research_reports/RR4200/RR4229/RAND_RR4229.pdf. Jackson Barnett, “‘STEM Corps’ Legislation Would Fill DoD’s Gaps in Tech Talent,” FedScoop, April 14, 2020, https://www.fedscoop.com/dod-stem-training-workforce-scholarship/. Lindsey Sheppard, “Accelerating the Defense Department’s AI Adoption” (Council on Foreign Relations, April 9, 2020), https://www.cfr.org/report/accelerating-defense-departments-ai-adoption.
  10. Susanna V. Blume and Molly Parrish, “Make Good Choices, DoD: Optimizing Decisionmaking Processes for Great Power Competition” (Center for a New American Security, November 2019), 16, https://www.cnas.org/publications/reports/make-good-choices-dod.
  11. Andrew Imbrie, “Artificial Intelligence Meets Bureaucratic Politics,” War on the Rocks, August 1, 2019, https://warontherocks.com/2019/08/artificial-intelligence-meets-bureaucratic-politics/.
  12. Tarraf, Shelton, Parker, Alkire, et al., “The Department of Defense Posture for Artificial Intelligence,” 54.
  13. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” 44.
  14. Ron Schmelzer, “Artificial or Human Intelligence? Companies Faking AI,” Forbes, April 4, 2020, https://www.forbes.com/sites/cognitiveworld/2020/04/04/artificial-or-human-intelligence-companies-faking-ai/#7089825f664f.
  15. Stephen P. Dodd and Michael J. Molidor, “An Expert Systems Approach to Military Decision Support in an Air Defense Scenario,” master’s thesis, Naval Postgraduate School, March 1987, https://calhoun.nps.edu/handle/10945/22801. Paul Scharre, Army of None (New York: Norton, 2018), 55. Arthur Holland Michel, “Drones in Bosnia,” Center for the Study of the Drone, Bard College (June 7, 2013), https://dronecenter.bard.edu/drones-in-bosnia/.
  16. To overcome bureaucratic impediments, the National Security Commission on Artificial Intelligence recommended creating a steering committee on emerging technology tri-chaired by the Deputy Secretary of Defense, the Vice Chairman of the Joint Chiefs of Staff, and the Principal Director of the Office of the Director of National Intelligence. National Security Commission on Artificial Intelligence, First Quarter Recommendations (March 2020), 16, https://drive.google.com/a/nscai.org/file/d/1wkPh8Gb5drBrKBg6OhGu5oNaTEERbKss/view?usp=sharing.
  17. At a webinar hosted by the Center for a New American Security, former Deputy Secretary of Defense Robert O. Work and CNAS Senior Fellow and Director Paul Scharre discussed the need for a “killer app” to demonstrate the value of AI for DoD civilian and uniformed personnel: “Transcript from Military AI Applications” (Center for a New American Security, April 29, 2020), 1–2, https://www.cnas.org/publications/transcript/transcript-from-military-ai-applications.
  18. Lt. Gen. John N. T. “Jack” Shanahan, “JAIC Director Reflects, Bids Farewell to Partners in Government and Private Sector,” AI in Defense, DoD’s Artificial Intelligence Blog, Joint AI Center, June 1, 2020, https://www.ai.mil/blog_06_01_20-jaic_director_reflect_bids_farewell.html.
