Artificial intelligence and machine learning capabilities are growing
at an unprecedented rate. These technologies have many widely
beneficial applications, ranging from machine translation to medical
image analysis. Countless more such applications are being
developed and can be expected over the long term. Less attention
has historically been paid to the ways in which artificial intelligence
can be used maliciously. This report surveys the landscape of
potential security threats from malicious uses of artificial intelligence
technologies, and proposes ways to better forecast, prevent, and
mitigate these threats. We analyze, but do not conclusively resolve,
the question of what the long-term equilibrium between attackers and
defenders will be. We focus instead on what sorts of attacks we are
likely to see soon if adequate defenses are not developed.
Read the full article at MaliciousAIReport