Google’s Threat Intelligence Group (GTIG) recently released a report on the adversarial misuse of generative AI. The team investigated prompts used by advanced persistent threat (APT) and coordinated information operations (IO) actors, finding that these actors have so far achieved productivity gains from AI but have not yet developed novel attack capabilities.
Arguing that much of the current discussion of AI misuse is confined to theoretical research and does not reflect how threat actors are actually using AI, the Google team shared data from their analysis of threat-actor interactions with Gemini. The GTIG team writes:
We did not observe any original or persistent attempts by threat actors to use prompt attacks or other machine learning (ML)-focused threats as outlined in the Secure AI…