One of the biggest fears about the nexus of artificial intelligence and the military is that machine learning, a type of artificial intelligence that allows computers to learn from new data without being explicitly programmed, could spread rapidly through military systems and even spark an arms race. Alarm over armed robots extended even to the Dallas Police Department's use, in July 2016, of a remotely piloted (rather than autonomous) bomb disposal robot retrofitted with an explosive. That event triggered a wave of articles about the consequences of arming robots, especially outside the military.
Discussions of the military applications of robotics have tended to focus on the United States, largely because of America's extensive use of uninhabited (also called unmanned) aerial vehicles, or "drones," to conduct surveillance and launch lethal strikes against suspected militants around the world. Yet limiting the discussion of military robotics to systems developed by wealthy, democratic countries such as the United States may miss important underlying trends.
Read the full article on the Bulletin of the Atomic Scientists.