AI-generated zero-day exploit targeted 2FA in open-source admin tool

Researchers at Google’s Threat Intelligence Group (GTIG) say they have discovered what may be the first known zero-day exploit likely developed with the help of artificial intelligence.

The exploit targeted an unnamed open-source web administration platform and was designed to bypass two-factor authentication (2FA). The attack was stopped before it could be widely deployed, Google said, warning that threat actors are increasingly using AI to speed up the discovery of vulnerabilities and the development of exploits.

According to GTIG, the Python-based exploit code showed several signs of AI generation, including unusually detailed educational docstrings, a hallucinated CVSS severity score, and a highly structured coding style commonly associated with large language models (LLMs).
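To make those signals concrete, here is a hypothetical, benign Python snippet (invented for illustration; it is not the actual exploit and the function name is made up) showing the kind of hallmarks analysts associate with LLM-generated code:

```python
def check_session_token(token: str) -> bool:
    """
    Validate a session token before granting admin access.

    This function illustrates two telltale signs analysts associate with
    LLM-generated code:
      1. An unusually detailed, tutorial-style docstring explaining
         concepts a hand-written exploit would simply assume.
      2. A fabricated severity reference: the CVSS score and CVE below
         do not correspond to any real advisory (a "hallucinated" detail).

    Severity: CVSS 9.8 (CRITICAL) -- CVE-2099-0000  [hallucinated]
    """
    # Rigidly structured, step-by-step style typical of LLM output
    if not token:
        return False
    if len(token) < 16:
        return False
    return token.isalnum()
```

No single marker is conclusive; GTIG's attribution rests on the combination of such traits appearing together.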

“For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI. The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use,” Google said. “Threat actors associated with the People’s Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK) have also demonstrated significant interest in capitalizing on AI for vulnerability discovery.”

Threat actors, including APT27, APT45, UNC2814, UNC5673, and UNC6201, were reportedly observed using AI tools for exploit development and malware obfuscation.

Researchers noted that while such flaws are usually uncovered via fuzzing or static analysis, this issue was described as a semantic logic bug, a type of flaw that modern AI systems are increasingly capable of identifying.

AI tools are also increasingly being used in influence campaigns and malware automation. In one case, Russian-linked actors used AI voice cloning in fake journalist videos promoting anti-Ukraine narratives. Another example is the Android malware called PromptSpy, which allegedly integrated Gemini APIs and AI-driven automation to replay authentication inputs such as PINs and lock patterns.

“AI-enabled malware, such as PROMPTSPY, signal a shift toward autonomous attack orchestration, where models interpret system states to dynamically generate commands and manipulate victim environments. Our analysis of this malware reveals previously unreported capabilities and use cases for its integration with AI. This approach allows threat actors to offload operational tasks to AI for scaled and adaptive activity,” the report noted.
