November 19, 2019

Preparing the Military for a Role on an Artificial Intelligence Battlefield

By Megan Lamberth

The Defense Innovation Board—an advisory committee of tech executives, scholars, and technologists—has unveiled its list of ethical principles for artificial intelligence (AI). If adopted by the Defense Department, the recommendations will help shape the Pentagon’s use of AI in both combat and non-combat systems. The board’s principles are an important milestone worth celebrating, but the real challenge of adoption and implementation is just beginning. For the principles to have an impact, the department will need strong leadership from the Joint AI Center (JAIC), buy-in from senior military leaders and outside groups, and additional technical expertise within the Defense Department.

In its white paper, the board recognizes that the AI field is constantly evolving and that the principles it proposes represent guidelines the department should aim for as it continues to design and field AI-enabled technologies. The board recommends that the Defense Department should aspire to develop and deploy AI systems that are:

  1. Responsible. The first principle establishes accountability, placing responsibility on human beings not only for the “development, deployment, [and] use” of an AI system but, most importantly, for any “outcomes” that system produces. The burden rests on the human being, not the AI.
  2. Equitable. The second principle calls on the DoD to take “deliberate steps” to minimize “unintended bias” in AI systems. The rise of facial recognition technology and the subsequent issues of algorithmic biases show that the board is right to prioritize mitigating potential biases, particularly as the DoD continues to develop AI systems with national security applications.
  3. Traceable. The third principle addresses the need for technical expertise within the Defense Department to ensure that AI engineers have an “appropriate understanding of the technology” and insight into how a system arrives at its outcomes.
  4. Reliable. The board’s fourth principle holds that an AI system should do what it has been programmed to do, within the domain in which it has been programmed to operate. AI engineers should conduct tests to ensure the “safety, security, and robustness” of the system across its “entire life cycle.”
  5. Governable. The fifth principle tackles the need for fail-safes in situations where an AI system acts unexpectedly. The AI system should be able to “detect and avoid unintended harm,” and mechanisms should exist that allow “human or automated disengagement” for systems demonstrating “unintended escalatory” behavior.

Read the full article in The National Interest.
