Center for a New American Security CEO Richard Fontaine told Fox News Digital that, until now, disinformation has primarily been created by humans: while it may have been propagated through digital means, it was not made through them. New AI applications could now allow a government to both originate and propagate disinformation at scale. On the defensive side of the equation, AI could augment existing scanning systems, helping to detect anomalies, find vulnerabilities and recommend ways to fix them.
"There's a lot that's happening very quickly with that aspect of the threat landscape," Fontaine said.
From a dictatorship's perspective, Fontaine highlighted the possibilities that generative AI enables, such as scalable automated bots that churn out propaganda, disinformation and misinformation. AI-generated content could be easily translated into a multitude of languages to spread a message more widely, with the potential to drown out dissident voices and flood the social media ecosystem.
Citing Russian government attempts to gain access to a Florida-based voting software supplier and the private accounts of DNC election officials ahead of the 2016 presidential election, Fontaine noted that phishing emails were used as an attack vector. Such phishing scams often contain misspellings, grammatical errors and questionable punctuation, many of which arise from the language barrier between the foreign attacker and the American target.
However, ChatGPT and other large language models (LLMs) are highly adept at English. As a result, a bad actor could prompt such a system to produce a more polished phishing message that is harder to spot.
"That kind of thing would allow potentially the attacker, whether it's a nation-state or anyone else, more ways in than they would have otherwise," Fontaine added.