There is growing concern in some quarters that the drones used by the United States and others are precursors to the further automation of military force through lethal autonomous weapon systems (LAWS). These weapons, though they do not generally exist today, have already been the subject of multiple discussions at the United Nations. Do autonomous weapons raise unique ethical questions for warfare, with implications for just war theory? This essay describes and assesses the ongoing debate, focusing on the ethical implications of three questions: whether autonomous weapons can operate effectively, whether human accountability and responsibility for autonomous weapon systems are possible, and whether delegating life-and-death decisions to machines inherently undermines human dignity. Because the concept of LAWS is extremely broad, this essay considers LAWS in three categories: munitions, platforms, and operational systems.