Research

Meta's Hyperagents Don't Just Improve at Tasks — They Improve at Improving

Researchers at Meta and the University of British Columbia have built 'hyperagents' that can rewrite both the task-solving part of their code and the mechanism they use to improve. Unlike prior self-improving AI, these agents make the optimization loop itself subject to optimization, an approach the team argues breaks through the ceiling that has limited recursive self-improvement since the concept was first formalized.

D.O.T.S AI Newsroom

AI News Desk

2 min read

Self-improving AI systems have always run into the same wall: the mechanism that drives improvement is written by humans and never changes. The agent can get arbitrarily good at the task it was designed for, but it can never escape the constraints of the fixed loop that governs how it improves. A research team from Meta, the University of British Columbia, and several partner institutions has published a method they believe breaks through that ceiling.

The approach, which the team calls DGM-Hyperagents (DGM-H), builds on the Darwin Gödel Machine — a prior method that showed a coding agent could improve itself through repeated self-modification. The agent generates variants of its own code, tests them, and archives successful versions as stepping stones for further refinement. DGM-H extends this by making the improvement mechanism itself part of what gets optimized. Both the task-solving code and the code that modifies the agent live in the same editable program — so when the agent rewrites itself, it can rewrite the meta-level too.
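
The paper's actual interfaces aren't reproduced in this article, but the shape of the loop is easy to sketch. The Python below is a hypothetical illustration, not the authors' code: the agent's entire source, including both its solve function and the propose_modification function that rewrites that source, lives in one editable string, and task objects exposing baseline_answer() and score() methods are assumed for evaluation. In the real system a foundation model would generate the modifications; here the proposer is a stub.

    import random

    # Hypothetical sketch of a DGM-H-style loop. Names, the scoring interface, and the
    # seed agent are illustrative assumptions, not the published implementation.
    # The structural point: the agent's source contains BOTH the task-solving code
    # and the self-modification code, so a rewrite can change either level.

    INITIAL_AGENT_SOURCE = '''
    def solve(task):
        # Task-level behaviour: return a candidate answer for the task.
        return task.baseline_answer()

    def propose_modification(own_source, archive):
        # Meta-level behaviour: return a rewritten version of the agent's own source.
        # In the real system this would call a foundation model; the seed is a stub.
        # Because this function lives inside the editable source, later generations
        # can also rewrite how modifications are proposed.
        return own_source
    '''

    def load_agent(source):
        """Execute an agent's source and return its two entry points."""
        namespace = {}
        exec(source, namespace)
        return namespace["solve"], namespace["propose_modification"]

    def evaluate(source, tasks):
        """Score an agent by running its solve() over a benchmark of tasks."""
        solve, _ = load_agent(source)
        return sum(task.score(solve(task)) for task in tasks) / len(tasks)

    def dgm_h_loop(tasks, generations=100):
        archive = [INITIAL_AGENT_SOURCE]            # stepping stones, as in the DGM
        for _ in range(generations):
            parent = random.choice(archive)         # branch from any archived agent
            _, propose_modification = load_agent(parent)
            child = propose_modification(parent, archive)  # agent rewrites its own source
            try:
                if child not in archive and evaluate(child, tasks) >= evaluate(parent, tasks):
                    archive.append(child)           # keep improvements as new stepping stones
            except Exception:
                pass                                # broken self-modifications are discarded
        return max(archive, key=lambda src: evaluate(src, tasks))

The placement of propose_modification is the whole point of the sketch: in the original DGM the equivalent logic sits outside the archive and never changes, whereas here a successful child can carry a better proposer forward along with a better solver.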

Why Prior Approaches Couldn't Generalize

The original DGM worked well for coding tasks, where being a better programmer naturally makes you better at writing self-modifications. That link breaks down in other domains. An agent that gets better at evaluating scientific papers doesn't automatically get better at rewriting its own code. The team found it hit near-zero performance on non-programming tasks without manual tweaking.

DGM-H sidesteps this by making the improvement mechanism itself improvable — independently of what task the agent is doing. The team tested this across four task areas, demonstrating that the self-accelerating loop generalizes beyond coding.

What It Could Mean

The implications for AI capability trajectories are significant, if still early. Systems that improve at improving — rather than just improving — could in theory reach capability thresholds faster than scaling compute or data alone would allow. The research is published and peer-reviewed; production deployment is a different question. But Meta's willingness to publish on recursive self-improvement mechanisms signals that the lab views this class of research as publicly defensible, which itself tells you something about where the frontier is moving.


Related Stories

Google's AI Overviews Are Right Nine Times Out of Ten — but the 10% Failure Rate Has a Specific Shape

A new independent study is the first to systematically measure the factual accuracy of Google's AI Overviews at scale. The headline finding — 90% accuracy — is better than critics expected and worse than Google implies. The more important finding is where that 10% comes from: complex multi-step queries, niche topics, and questions where the web itself is the source of conflicting claims.

D.O.T.S AI Newsroom
Databricks Co-Founder Wins Top Computing Prize — and Says AGI Is 'Already Here'

Matei Zaharia, co-founder of Databricks and creator of Apache Spark, has won the ACM Prize in Computing — one of the most prestigious awards in computer science. In interviews accompanying the announcement, Zaharia made a pointed argument: AGI is not a future event but a present condition, and the industry's endless debate about its arrival is obscuring more useful questions about what to do with the AI we already have.

D.O.T.S AI Newsroom
Researchers Fingerprinted 178 AI Models' Writing Styles — and Found Alarming Clone Clusters

A new study from Rival analyzed 3,095 standardized responses across 178 AI models, extracting 32-dimension stylometric fingerprints to map which models write like which others. The findings reveal tightly grouped clone clusters across providers — and raise serious questions about whether the AI ecosystem is converging on a single voice.

D.O.T.S AI Newsroom