Artificial intelligence and robotic technologies with semi-autonomous learning, reasoning, and decision-making capabilities are increasingly being incorporated into defense, military, and security systems. Unsurprisingly, concern about the stability and safety of these systems is growing as well. In another sector, runaway interactions between autonomous trading systems in financial markets have produced a series of stock market “flash crashes,” and as a result, those markets now have rules, such as trading halts, designed to limit the impact of such interactions.
Could the same kinds of unexpected interactions and feedback loops lead to similar instability in defense or security AI systems?
Read the full article from IEEE Spectrum.
Learn more about the Artificial Intelligence and International Stability Project:
More from CNAS
Commentary: AI & Military Procurement: What Computers Still Can’t Do
Not all artificial intelligence (AI) is made equal. A wide range of different techniques and applications fall under the term “AI.” Some of these techniques and applications w...
By Maaike Verbruggen
Commentary: When Machine Learning Comes to Nuclear Communication Systems
Nuclear deterrence depends on fragile, human perceptions of credibility. As states armed with nuclear weapons turn to machine learning techniques to enhance their nuclear com...
By Philip Reiner, Alexa Wehsener & M. Nina Miller
Commentary: AI Deception: When Your Artificial Intelligence Learns to Lie
In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing, or to be more accurate, classifyin...
By Heather Roff
Commentary: Artificial Intelligence, Foresight, and the Offense-Defense Balance
There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence...
By Ben Garfinkel & Allan Dafoe