Anthropic's Own Data Suggests AI Is Making Skilled Users More Skilled — and Leaving Others Behind
Anthropic's Economic Index, released with its latest model, contains a finding with uncomfortable long-term implications: sustained AI users achieve progressively better results over time as their ability to prompt, direct, and evaluate AI output compounds. The researchers flag this as a potential mechanism for widening economic inequality — the same technology that democratizes access to AI may simultaneously concentrate its benefits among those already skilled enough to use it well.

D.O.T.S AI Newsroom
AI News Desk
Anthropic has published its Economic Index — a large-scale analysis of how Claude usage patterns correlate with economic outcomes and skill development. The headline findings about AI's labor market impact have received most of the coverage. But there is a quieter finding buried in the data that deserves more attention: AI skill is itself a compounding resource, and it compounds unevenly.
The key finding is this: users who engage with Claude intensively and consistently achieve progressively better results — not because the model improves, but because they do. They learn to frame requests more precisely, evaluate model outputs critically, chain prompts into productive workflows, and recognize when the model is confabulating rather than reasoning soundly. These meta-skills compound: a user with 500 hours of Claude experience achieves qualitatively different outcomes from a new user, even on the same tasks.
The Inequality Mechanism
This finding has a troubling implication that the researchers flag explicitly: if AI skill compounds with use, and if access to high-quality AI tools correlates with income (Claude Pro costs $20/month; enterprise tiers cost significantly more), then AI may be accelerating divergence rather than convergence in productivity outcomes.
The classic democratization argument for AI runs like this: a first-generation college student with Claude access can now get writing feedback, coding help, and research assistance that previously required expensive tutors or professional networks. That is true. But the Stanford-educated consultant who uses Claude 6 hours a day in her professional workflow is also compounding those skills — and at a rate that the occasional user cannot match.
The gap between these two users is not AI access. It is AI fluency, and fluency correlates with education, professional context, and the time for experimentation that economic security affords.
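The divergence mechanism described above is, at bottom, compound growth at unequal rates. A toy model makes it concrete. Everything here is hypothetical and illustrative — the growth rate, the starting skill level, and the usage figures are invented for the sketch, not drawn from the Economic Index:

```python
def skill_after(hours_per_day: float, days: int,
                growth_per_hour: float = 0.001, start: float = 1.0) -> float:
    """Hypothetical model: each hour of practice multiplies skill by
    (1 + growth_per_hour). All parameter values are illustrative."""
    skill = start
    for _ in range(days):
        skill *= (1 + growth_per_hour) ** hours_per_day
    return skill

# The daily professional user vs. the occasional user, over one year.
heavy = skill_after(hours_per_day=6, days=365)
light = skill_after(hours_per_day=0.5, days=365)

# Both users improve in absolute terms, but the ratio between them
# grows every day they keep practicing at different intensities.
print(f"heavy user: {heavy:.2f}x baseline, light user: {light:.2f}x baseline")
```

The point of the sketch is not the specific numbers but the shape of the curve: under any compounding model, equal access plus unequal use yields widening, not narrowing, gaps — which is exactly the mechanism the researchers flag.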
What Anthropic Is — and Isn't — Saying
The Economic Index does not claim AI will increase inequality. It flags the mechanism by which it could, and calls for research to track whether it does. That is responsible scientific framing. But the implication is significant: the companies building these tools need to think about AI literacy and access not just as a binary (do you have a subscription?) but as a continuum (how much compounded skill are you bringing to the tool?).
Programs that give underprivileged students access to ChatGPT or Claude without accompanying instruction in how to use those tools effectively may be solving the wrong problem. The bottleneck is not access — it is compound fluency. And compound fluency takes time and guidance to develop.