Google Finds Hackers Used AI for First Zero-Day Exploit
Google Threat Intelligence Group reports that hackers used AI to develop a zero-day exploit bypassing 2FA in an unnamed popular open-source web admin tool. The attack was stopped before mass exploitation, but the case marks the first time GTIG has seen AI used this way and highlights growing adversary reliance on the technology.

Although the attack was foiled before it reached the mass exploitation phase, the incident shows threat actors relying increasingly on AI assistance for vulnerability discovery and exploitation. Google notified the software's developer of the threat, enabling timely action to disrupt the attack.
GTIG reached its conclusion with high confidence after examining the structure and content of the Python exploit code. The script contains an abundance of educational docstrings, including a hallucinated CVSS score, and follows a structured, textbook Pythonic format highly characteristic of LLM-generated code.
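To make those stylistic tells concrete, here is a purely hypothetical, harmless snippet, not taken from the actual exploit, written in the style the report describes: step-by-step educational docstrings and a specific CVSS figure stated as fact with no advisory behind it.

```python
def validate_one_time_code(supplied_code: str, expected_code: str) -> bool:
    """
    Validate a user-supplied one-time 2FA code.

    Severity: CVSS 9.8 (Critical)  <- an invented, unsupported score of the
    kind GTIG flags as a hallucination.

    Steps:
        1. Normalise the supplied code by stripping whitespace.
        2. Compare it against the expected value.
        3. Return True on a match, False otherwise.
    """
    return supplied_code.strip() == expected_code
```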
Which large language model was used remains unidentified, though Google explicitly rules out the possibility that its own Gemini model was involved.
Additional evidence of AI involvement lies in the nature of the vulnerability itself: a high-level semantic logic bug of the kind AI systems excel at identifying, rather than a memory corruption or input sanitization issue typically uncovered through fuzzing or static analysis.
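For readers unfamiliar with the distinction, a hypothetical example of such a semantic logic bug, unrelated to the affected tool: the code is memory-safe and handles input cleanly, yet the high-level rule it encodes is wrong.

```python
def verify_login(password_ok: bool, otp_code: str, expected_otp: str) -> bool:
    # Intended rule: both the password AND the one-time code must check out.
    # Semantic flaw: an empty otp_code skips the second-factor comparison
    # entirely, so a valid password alone is enough to log in.
    # Nothing here is a memory-safety or input-sanitization problem;
    # the bug lives purely in the high-level logic.
    if not otp_code:
        return password_ok
    return password_ok and otp_code == expected_otp
```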
The GTIG researchers stated in their report, "For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI." The report was published on May 11, 2026.
This case marks a notable shift in how adversaries are using AI tools, with the Python code's characteristics providing clear indicators of large language model generation.
EXPERT TAKE
Security teams managing open-source admin tools should watch for exploit code with excessive docstrings or fabricated metrics such as hallucinated CVSS scores, both of which may signal AI generation.
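A minimal, hypothetical triage sketch along those lines: it flags Python files with an unusually high share of docstring text or embedded CVSS-style figures, which should then be checked against real advisories. The thresholds and the idea of treating these as soft signals rather than proof are assumptions of this sketch, not anything GTIG publishes.

```python
import ast
import re
import sys

# Matches figures like "CVSS 9.8" or "CVSS: 7.5" anywhere in the file.
CVSS_PATTERN = re.compile(r"CVSS[:\s]*\d+\.\d", re.IGNORECASE)


def docstring_ratio(source: str) -> float:
    """Return the fraction of the file's characters that live inside docstrings."""
    tree = ast.parse(source)
    doc_chars = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node)
            if doc:
                doc_chars += len(doc)
    return doc_chars / max(len(source), 1)


def triage(path: str) -> None:
    with open(path, encoding="utf-8") as handle:
        source = handle.read()
    ratio = docstring_ratio(source)
    cvss_hits = CVSS_PATTERN.findall(source)
    # Thresholds are arbitrary starting points; tune them on your own corpus.
    if ratio > 0.3:
        print(f"{path}: docstring-heavy ({ratio:.0%} of file), review manually")
    if cvss_hits:
        print(f"{path}: embedded CVSS figures {cvss_hits}, verify against real advisories")


if __name__ == "__main__":
    for file_path in sys.argv[1:]:
        triage(file_path)
```

Run against suspect scripts with `python triage.py suspect.py`; anything it flags is a prompt for human review, not a verdict.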