Policy

AI Offensive Cyber Capabilities Are Doubling Every Six Months, Safety Researchers Warn

A new safety research report finds that AI models' ability to autonomously exploit security vulnerabilities has been doubling roughly every 5.7 months since 2024 — a rate that is outpacing the development of defensive tooling and policy frameworks.

D.O.T.S AI Newsroom

2 min read

The AI safety research community has been tracking the growth of AI-assisted offensive cyber capabilities for two years. The latest findings are stark: the ability of frontier AI models to identify and exploit real-world security vulnerabilities has been doubling approximately every 5.7 months since 2024. If that trajectory continues, the capability gap between AI-assisted attack and AI-assisted defense will widen significantly before any regulatory framework has the tools to close it.

What "Doubling" Means in Practice

The researchers evaluated AI models against a standardized set of known vulnerability classes — including SQL injection, privilege escalation, and memory corruption patterns — and measured the rate at which models could autonomously identify, adapt to, and exploit novel instances of each class without human guidance. In early 2024, frontier models could autonomously complete roughly 15% of these tasks. In early 2026, the figure sits above 50% and is still rising.
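Scoring in an evaluation like this reduces to a per-class completion rate over task instances. A minimal sketch in Python, with hypothetical task results (the class names echo the report; the outcomes are invented for illustration and do not reproduce the study's data):

```python
from collections import defaultdict

# Hypothetical results: (vulnerability_class, completed_autonomously).
# Classes mirror those named in the report; outcomes are made up.
results = [
    ("sql_injection", True), ("sql_injection", False), ("sql_injection", True),
    ("privilege_escalation", False), ("privilege_escalation", True),
    ("memory_corruption", False), ("memory_corruption", False),
]

def completion_rates(results):
    """Fraction of task instances completed without human guidance, per class."""
    tally = defaultdict(lambda: [0, 0])  # class -> [completed, total]
    for cls, ok in results:
        tally[cls][0] += int(ok)
        tally[cls][1] += 1
    return {cls: done / total for cls, (done, total) in tally.items()}

# The headline metric is the overall autonomous completion rate.
overall = sum(ok for _, ok in results) / len(results)
```

The reported trend tracks `overall` across successive frontier model releases against a fixed task set.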

The 5.7-month doubling time does not describe a smooth curve. Safety researchers note it averages over a series of step-changes tied to model capability releases, particularly the introduction of extended context windows and longer chain-of-thought reasoning, which allow models to trace multi-step vulnerability chains through large codebases that would previously have required expert human analysis.
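For intuition, a fixed 5.7-month doubling time compounds to roughly a 4.3x capability multiple per year. A short illustrative calculation, assuming smooth exponential growth (which, as noted above, is not how progress actually arrives):

```python
# Illustrative only: projects growth at a fixed doubling time.
# Real progress, per the researchers, arrives in release-driven steps.
DOUBLING_MONTHS = 5.7

def growth_factor(months: float, doubling_months: float = DOUBLING_MONTHS) -> float:
    """Multiplicative growth over `months` given a fixed doubling time."""
    return 2 ** (months / doubling_months)

print(f"1 year : {growth_factor(12):.1f}x")   # ~4.3x
print(f"2 years: {growth_factor(24):.1f}x")   # ~18.5x
```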

The Defense Gap

Cybersecurity vendors are actively building AI-assisted defensive tooling, but the research suggests offense is currently moving faster. Detection systems are trained on historical attack patterns; AI-generated attacks can be novel enough to evade signature-based defenses while still being systematic enough to be highly effective. The practical implication is that organizations relying on traditional security monitoring may face an increasingly asymmetric threat environment.

For AI labs, the finding adds urgency to the debate over how capability evaluations should handle dual-use risks. Anthropic, OpenAI, and Google DeepMind all conduct pre-release safety evaluations that include cybersecurity assessments. The research suggests those assessments may need to be updated on shorter cycles than current release cadences allow.

Related Stories

Policy

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom

Policy

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom

Policy

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom