Computers have gotten pretty good at making certain decisions for themselves. Automatic spam filters block most unwanted email. Some US clinics use artificial-intelligence-powered cameras to flag diabetes patients at risk of blindness. But can a machine ever be trusted to decide whether to kill a human being?
It’s a question taken up by the eighth episode of the Sleepwalkers podcast, which examines the AI revolution. The recent, rapid growth of AI capabilities has some military experts worried about a new generation of lethal weapons capable of acting independently and often opaquely.
“We're moving to a world where machines may be making some of the most important decisions on the battlefield about who lives and dies,” says Paul Scharre, director of the technology and national security program at the Center for a New American Security, a bipartisan think tank.
Read the full story and more in WIRED.