
Anthropic Confirms Three Bugs Quietly Degraded Claude Code for Weeks — and Promises a Stricter Rollout Process

After weeks of user complaints, Anthropic published a postmortem on three separate issues that degraded Claude Code between March 4 and April 20: a silent reasoning-effort downgrade, a caching bug that erased reasoning history after every turn, and a system instruction capping response length that knocked 3 percent off output quality. All three are now fixed, and Anthropic is overhauling how it ships changes.

D.O.T.S AI Newsroom

AI News Desk
Anthropic on Friday published a postmortem confirming what Claude Code users had been reporting for nearly two months: three separate bugs, all introduced via routine product changes rather than model retraining, quietly degraded the coding agent between March 4 and April 20. All three are fixed in version 2.1.116, released on April 20, and Anthropic has reset usage limits for affected subscribers as compensation. The company has also created a new public-facing X account, @ClaudeDevs, dedicated to product communication, and committed to a slate of process changes intended to prevent the same class of regressions in the future.

The Three Bugs, in Order

The first issue, deployed on March 4, lowered the default reasoning effort on Claude Code from "high" to "medium" in pursuit of faster latency. Internal evaluations did not flag the change as harmful, but users running real coding agents over real codebases noticed that Claude was missing edge cases and producing shallower fixes. The second issue, deployed March 26, was a caching bug: reasoning history that was supposed to be retained for an hour was instead being deleted after every turn, causing context loss within multi-step coding workflows and consuming users' usage limits faster than expected. The third issue, deployed April 16, was a system instruction that capped Claude's responses at 25 words between tool calls and 100 words for final responses; it was intended to reduce verbosity but caused a measurable 3 percent quality drop on internal evaluations. None of the three issues affected the API directly; all were artifacts of Claude Code's product layer.
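To make the caching bug concrete, here is a minimal sketch of the failure mode the postmortem describes, in hypothetical Python. The class, method names, and storage layout are illustrative assumptions, not Anthropic's actual implementation: the intended behavior is a one-hour TTL on cached reasoning history, while the bug amounted to evicting the entry at the end of every turn.

```python
import time


class ReasoningCache:
    """Illustrative sketch (not Anthropic's code): reasoning history
    should expire after a TTL of one hour, not on every turn."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}  # session_id -> (reasoning_history, stored_at)

    def put(self, session_id, history):
        # Record the reasoning history along with when it was stored.
        self.store[session_id] = (history, time.time())

    def get(self, session_id):
        # Intended behavior: return the history unless the TTL has elapsed.
        entry = self.store.get(session_id)
        if entry is None:
            return None
        history, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self.store[session_id]  # correct eviction: by TTL
            return None
        return history

    def end_turn_buggy(self, session_id):
        # The bug described in the postmortem, sketched here: clearing the
        # entry on every turn instead of letting the TTL expire, so each new
        # turn rebuilds (and re-bills) the reasoning context from scratch.
        self.store.pop(session_id, None)
```

In this sketch, a multi-step workflow that calls `end_turn_buggy` between turns loses its accumulated reasoning even though the TTL has not elapsed, which matches the reported symptoms: context loss and faster-than-expected consumption of usage limits.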

The Process Fixes That Matter

Anthropic's stated remediation is more interesting than the postmortem itself. Going forward, more Anthropic employees will use the public Claude Code build instead of internal test versions — a dogfooding policy other AI labs have abandoned in favor of internal-only "champion" builds because they ship faster. All system-prompt changes will now have to clear broader evaluation suites before deployment. And compute-intensive changes will go through "soak periods and gradual rollouts" rather than instant global pushes. The implicit acknowledgement is that internal evaluations did not catch any of the three regressions because they did not measure the right things; the user complaints did. For Claude Code's professional user base — many of whom run agentic coding workflows that compound subtle quality drops into hours of wasted work — the postmortem matters because it concedes that "the API is fine" is not a sufficient quality bar when the product is the layer that actually faces developers.
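A "soak period with gradual rollout," as opposed to an instant global push, is typically implemented by deterministically bucketing users and ramping a percentage over time. The following is a generic sketch of that pattern; the function name, parameters, and hashing scheme are assumptions for illustration, not a description of Anthropic's infrastructure.

```python
import hashlib


def in_rollout(user_id: str, change_name: str, percent: float) -> bool:
    """Hypothetical gradual-rollout gate: deterministically map each
    (change, user) pair to a stable bucket in [0, 100) and enable the
    change only for buckets below the current rollout percentage."""
    digest = hashlib.sha256(f"{change_name}:{user_id}".encode()).hexdigest()
    bucket = (int(digest[:8], 16) % 10000) / 100.0  # stable value in [0, 100)
    return bucket < percent


# Operators ramp `percent` from 1 to 100 over a soak period, watching
# quality metrics at each step instead of shipping to everyone at once.
```

Because the bucket is derived from a hash rather than randomness, a user's exposure is stable across requests, which is what lets a regression surface in a small, observable slice of traffic before it reaches everyone.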

Related Stories

Astropad's Workbench Turns a Mac Mini Into an AI Agent Server You Control From Your Phone

Astropad, the company behind the Luna Display hardware that lets iPads function as Mac monitors, has built a new product for a new era: Workbench lets users remotely monitor and control AI agents running on Mac Minis from an iPhone or iPad. It is remote desktop software reimagined not for IT support but for the AI agent operator — the person who needs to check on autonomous workflows without being at their desk.

Microsoft's Bing Team Open-Sources Harrier, a Multilingual Embedding Model That Tops the MTEB v2 Benchmark

Microsoft's Bing search team has released Harrier as an open-source embedding model, and it tops the multilingual MTEB v2 benchmark while supporting over 100 languages. The release is significant not just for the benchmark numbers but for the source: a search team that has spent decades optimizing retrieval systems has built an embedding model for the exact use case — semantic search and retrieval — that underpins most production RAG applications.

Stability AI Pivots to Enterprise With Brand Studio — a Platform for Brand-Consistent AI Image Generation

Stability AI, the company that made open-source image generation mainstream with Stable Diffusion, is repositioning for enterprise with Brand Studio. The platform lets creative teams train brand-specific image models, automate visual production workflows, and route tasks to the best-suited AI model — a commercial play from a company that built its name on open access.
