VERIFIED · By Xavier Rivera · 1.5 min read

Google: AI Built First Zero-Day Exploit to Bypass 2FA

Google’s Threat Intelligence Group confirms that cybercriminals used AI to build the first observed zero-day exploit that bypassed two-factor authentication in an open-source web tool. The report highlights growing AI use by state-linked actors, while a Cambridge study finds limited AI sophistication in most cybercrime.

Source: Decrypt
Google’s Threat Intelligence Group reports that cybercriminals used an AI model to discover and weaponize a zero-day vulnerability in a popular open-source web administration tool. The flaw allowed attackers to bypass two-factor authentication by exploiting a logic error in the software’s authorization checks. Google says this is the first time it has identified AI-assisted zero-day development in the wild.

In a report published Monday, Google describes how the attackers used the AI model to analyze the software’s intended behavior, rather than relying on traditional scanners that hunt for crashes or obvious coding errors. The model identified contradictions in hardcoded exceptions within the authorization logic that permitted the 2FA bypass. The company worked with the affected vendor to patch the vulnerability before the attackers could launch a planned mass exploitation campaign.
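Google has not published the vulnerable code, so the following is purely a hypothetical sketch of this general class of flaw: an authorization check whose hardcoded exception contradicts the rule that every session must complete 2FA. The function, paths, and field names below are all invented for illustration.

```python
# Hypothetical illustration only -- not the actual vulnerable code.
# A logic error of this class: a hardcoded exception that contradicts
# the general rule requiring 2FA on every authenticated session.

def is_authorized(user: dict, path: str) -> bool:
    """Return True if the request may proceed."""
    # General rule: the caller must be authenticated.
    if not user.get("authenticated"):
        return False

    # Hardcoded exception: paths under /internal/ skip the 2FA gate,
    # presumably intended for trusted health checks. The contradiction:
    # nothing restricts WHO may call these paths, so any authenticated
    # user reaches them without ever completing 2FA.
    if path.startswith("/internal/"):
        return True

    # Everything else requires a completed 2FA challenge.
    return bool(user.get("twofa_verified"))


# An authenticated user who never completed 2FA:
attacker = {"authenticated": True, "twofa_verified": False}

print(is_authorized(attacker, "/admin/users"))     # blocked: False
print(is_authorized(attacker, "/internal/admin"))  # bypassed: True
```

A scanner looking for crashes would never flag code like this, because it runs exactly as written; the bug only appears when the exception is compared against the software’s intended policy, which is the kind of reasoning the report attributes to the model.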

Google states that the incident shows adversaries increasingly treat AI models as force multipliers for vulnerability research and exploit development. The report notes that threat actors linked to China and North Korea are using AI for vulnerability discovery, while suspected Russian groups apply it to malware obfuscation and evasion techniques.

A separate Cambridge University study of over 90,000 cybercrime forum threads found that most criminals still use AI primarily for spam and phishing rather than sophisticated attacks. The study concluded that claims of widespread AI-driven advanced cybercrime may be overstated given the continued importance of community learning and social factors in hacking circles.

Google researchers add that, based on the structure of the exploits, they have high confidence an AI model supported the discovery and weaponization of the vulnerability, though they did not identify which model was used.

VERIFICATION STATUS

VERIFIED (confidence: HIGH)
Claims cross-referenced; no factual discrepancies detected.

FLAGGED ISSUES

Cambridge University study on 90,000 cybercrime forum threads unconfirmed in public sources (secondary claim)
