Over the last two years, the US Defense Department has taken a number of steps to lay the groundwork for AI adoption, but it is still in the early stages of determining how best to ensure the development and deployment of AI that is ethical, reliable, and secure. Earlier this year, the DoD formally adopted a set of AI ethical principles meant to guide the Department's development, adoption, and use of AI-enabled systems.
The DoD, and in particular the Joint Artificial Intelligence Center (JAIC), is in the midst of transforming those principles into actionable guidance for DoD personnel. For the principles to be meaningful and enduring, the JAIC will need additional authority and resources; the DoD will also need to work hand in hand with allies and partners who are tackling the same challenge of ensuring safe, secure, and ethical AI. What can others learn from the US experience?
Read the full article from RSIS Commentary.