AI Offensive Cyber Capabilities Are Doubling Every Six Months, Safety Researchers Find
A new study from AI safety researchers shows that AI models' offensive cybersecurity capabilities are doubling approximately every 5.7 months. Current systems can solve tasks that would take a skilled human security researcher around three hours, a capability threshold that didn't exist in commercial AI two years ago.

D.O.T.S AI Newsroom
AI News Desk
The capability of AI models to conduct offensive cybersecurity operations is doubling roughly every 5.7 months, according to a study published by AI safety researchers and reported by The Decoder. The finding represents one of the more concrete empirical measurements of AI capability growth in a domain with direct security implications — and the trajectory is steep enough that the researchers describe the current moment as a threshold, not a preview.
What the Research Measured
The researchers evaluated AI models against a standardized set of cybersecurity tasks drawn from real offensive security workflows: vulnerability discovery, exploit development, privilege escalation, lateral movement, and exfiltration techniques. Rather than measuring benchmark scores in isolation, they calibrated the tasks against time-to-completion for skilled human security professionals. The current generation of frontier AI models can complete tasks that would take a competent human security researcher approximately three hours — autonomously, at scale, and at near-zero marginal cost per execution.
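To make the time calibration concrete, here is a minimal sketch of how a human-time-anchored capability metric could be computed. The task data, success rates, and 50% threshold below are invented for illustration; they are not the study's actual data or scoring method.

```python
# Illustrative sketch (invented data): each task carries a baseline time
# for a skilled human, and the model's "horizon" is the longest baseline
# it still solves reliably.

# (human_minutes_to_solve, fraction_of_model_attempts_that_succeeded)
results = [
    (10, 0.95), (30, 0.90), (60, 0.75),
    (120, 0.60), (180, 0.55), (360, 0.20),
]

def horizon_minutes(results, threshold=0.5):
    """Longest human-baseline task solved at >= threshold success rate."""
    solved = [minutes for minutes, rate in results if rate >= threshold]
    return max(solved) if solved else 0

print(f"~{horizon_minutes(results) / 60:.0f}-hour horizon")  # ~3-hour horizon
```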
The doubling rate of 5.7 months is derived from tracking model performance on the same standardized task set over time, across multiple model generations from multiple labs. The researchers note that this rate has been consistent enough to treat as a trend, not noise. If the trajectory holds, the horizon of tasks AI can accomplish offensively will roughly quadruple over the next 12 months and expand nearly ninefold over 18.
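As a rough check on that arithmetic, the sketch below fits a doubling time to hypothetical horizon measurements and projects the three-hour horizon forward. All data points are invented for illustration; only the three-hour figure and the 5.7-month rate come from the study.

```python
import math

# Hypothetical measurements: (months since first eval, task horizon in
# skilled-human hours). Values are invented to illustrate the method.
observations = [(0, 0.35), (6, 0.73), (12, 1.5), (18, 3.1)]

# Exponential growth is linear in log2: log2(h) = a + t / doubling_months,
# so a least-squares slope on log2(horizon) recovers the doubling time.
xs = [t for t, _ in observations]
ys = [math.log2(h) for _, h in observations]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
doubling_months = 1 / slope
print(f"fitted doubling time: ~{doubling_months:.1f} months")  # ~5.7

# Project the current 3-hour horizon forward at the fitted rate.
for months_ahead in (12, 18):
    horizon = 3.0 * 2 ** (months_ahead / doubling_months)
    print(f"+{months_ahead} months: ~{horizon:.0f} human-hours")
# +12 months: ~13 human-hours; +18 months: ~27 human-hours
```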
The Asymmetry Problem
Cybersecurity has always had an asymmetric dynamic: defenders must secure everything; attackers need only find one gap. AI capability growth amplifies this asymmetry in two ways. First, offensive operations that previously required significant expertise can now be delegated to AI systems with modest human oversight, dramatically lowering the skill floor for sophisticated attacks. Second, the lead time defenders typically enjoy, because attack campaigns take time to develop and execute, erodes as AI accelerates the offensive development cycle.
The researchers are careful to note that AI capabilities also benefit defensive security: vulnerability scanning, anomaly detection, and patch prioritization all improve with more capable models. But the asymmetry concern is structural. Offense and defense do not improve at equal rates when offense benefits more from AI's particular strengths — pattern matching, code generation, and tireless iteration against defined targets.
Policy and Industry Implications
The study adds empirical weight to a debate that has largely been conducted through anecdote and speculation. Governments and security agencies have been assessing AI-enabled cyber risk for several years, but concrete capability measurements have been sparse. The 5.7-month doubling rate gives policymakers and security practitioners a number to work with — and a timeline to plan around.
For AI companies, the findings sharpen the case for capability-specific safety evaluations. Current model evaluation frameworks focus heavily on harmful content generation; the research suggests equal attention is warranted for operational cyber capabilities, which may scale on a different trajectory than language or reasoning tasks. The question of whether the same capabilities that help security researchers find vulnerabilities can be meaningfully restricted for offensive use remains unresolved.