In 2011, the simple exploitation of an existing data set could have prevented a near disaster in northern Afghanistan.
Then, an entire operations center watched as the feed from an MQ-1 drone, newly reassigned from its original mission, displayed a growing group of protesters at the perimeter of a small U.S. forward operating base. Although conventional signals intelligence indicated a possible disturbance, full-motion video confirmed the severity of the threat only well after it had matured. Intelligence analysts did not understand what the protesters were doing, or why, until the crowd had already massed at the entry point. If used properly, automated social media monitoring and geofencing, the creation of virtual geographic boundaries, could have filled this critical gap in situational awareness.
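The geofencing concept mentioned above can be sketched simply: define a virtual boundary around a location of interest and flag any geotagged activity that falls inside it. The following is a minimal illustration, not any fielded system; the coordinates, radius, and function names are hypothetical, and a circular fence with a great-circle distance check stands in for whatever boundary shape a real tool would use.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, center, radius_km):
    """True if a geotagged post falls inside a circular fence around `center`."""
    return haversine_km(point[0], point[1], center[0], center[1]) <= radius_km

# Hypothetical coordinates for illustration only.
base = (36.70, 67.11)     # notional forward operating base
nearby_post = (36.705, 67.115)  # geotagged social media post close to the perimeter
distant_post = (37.50, 68.00)   # geotagged post well outside the fence

print(inside_geofence(nearby_post, base, 2.0))   # inside the 2 km fence
print(inside_geofence(distant_post, base, 2.0))  # outside the fence
```

In practice the value lies not in the distance check itself but in running it continuously against a monitored feed, so that posts clustering inside the boundary raise an alert before a crowd is visible on video.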
Read the full article at C4ISRNET