Google has used a security-specific AI agent to detect a second critical vulnerability that was known to threat actors but had yet to be exploited.
Its security researchers used Google’s internal Big Sleep agent to detect a critical security flaw in the SQLite relational database management system, indexed as CVE-2025-6965, which could lead to memory corruption.
According to Google, the bug was known only to threat actors and was on the verge of being exploited. The SQLite developers patched the vulnerability before it could affect users of the RDBMS.
“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild,” said Kent Walker, president of global affairs at Google and Alphabet.
The company’s Project Zero researchers built Big Sleep, announced in November last year, together with its DeepMind AI division.
At the time, Big Sleep found a vulnerability in the popular serverless SQLite relational database management system, enabling the flaw to be remediated before it affected users.
Google hopes AI will be able to find vulnerabilities that fuzzing cannot uncover. Fuzzing tests applications by feeding them random, malformed or otherwise unexpected inputs and watching for crashes.
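To illustrate the technique Big Sleep is meant to go beyond, the sketch below is a minimal random-byte fuzzer in Python. The names fuzz_once and parse_record are hypothetical stand-ins for any code that parses untrusted input; this is not Google’s tooling, just a bare-bones version of the idea.

```python
import random

def fuzz_once(target, max_len=256):
    """Feed one random byte string to the target and report crashes."""
    data = bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))
    try:
        target(data)        # the function under test, e.g. a file or record parser
    except Exception:
        return data         # a crashing input is a candidate bug to triage
    return None

# Hypothetical target: stands in for any code that parses untrusted bytes.
def parse_record(data: bytes) -> None:
    if len(data) > 4 and data[0] == 0xFF:
        # Deliberate bug: trusts an offset field taken straight from the input.
        _ = data[int.from_bytes(data[1:5], "big")]

crashes = [d for _ in range(100_000) if (d := fuzz_once(parse_record)) is not None]
print(f"found {len(crashes)} crashing inputs")
```

Production fuzzers such as coverage-guided tools are far more sophisticated, mutating inputs based on which code paths they exercise, but they still struggle with bugs that require reasoning about program logic, which is the gap Google argues AI agents can fill.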
In April this year, Google introduced Sec-Gemini v1, an experimental AI model trained on data from security vendor Mandiant.
Sec-Gemini drives the open-source collaborative digital forensics platform Timesketch, which will now get agentic capabilities, Google said.
Timesketch will be demonstrated at the annual Black Hat USA security conference in Las Vegas in August, along with Google’s AI-based Fast and Accurate Contextual Anomaly Detection (FACADE) tool, which has been used for threat detection since 2018.
Google also said it will donate data from its Secure AI Framework (SAIF) to the Coalition for Secure AI (CoSAI), to accelerate the organisation’s agentic AI, defensive and software supply chain security workstreams.