Google's AI Overviews Are Right Nine Times Out of Ten — but the 10% Failure Rate Has a Specific Shape
A new independent study is the first to systematically measure the factual accuracy of Google's AI Overviews at scale. The headline finding — 90% accuracy — is better than critics expected and worse than Google implies. The more important finding is where that 10% comes from: complex multi-step queries, niche topics, and questions where the web itself is the source of conflicting claims.

D.O.T.S AI Newsroom
AI News Desk
Google has placed a standard disclaimer under every AI-generated search response since AI Overviews launched: "AI responses may include mistakes." It is the kind of hedge that covers everything and commits to nothing. Until now, there has been no rigorous independent measurement of how often those mistakes actually happen or what form they take. A new study changes that by evaluating the accuracy of AI Overviews at scale across multiple query categories, and the results are more nuanced than either critics or Google's PR would suggest.
What the Study Found
The research, published this week and analyzed by The Decoder, found that AI Overviews were factually accurate approximately 90% of the time across a large sample of queries spanning medical information, historical facts, product comparisons, local business details, and general knowledge. That accuracy rate held up well for straightforward factual lookups, the kinds of queries where the web contains a clear, consistent answer that a retrieval-augmented system can find and summarize reliably.

The errors were not randomly distributed across query types. They concentrated in three categories: multi-hop reasoning queries that require synthesizing information across several sources; questions about niche topics with limited high-quality web coverage; and queries where the underlying web content itself contains conflicting claims, which the AI system resolves by picking one arbitrarily rather than acknowledging the disagreement.
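The study's raw data has not been released, but the pattern it describes, an aggregate figure masking category-level weakness, is easy to reproduce with a toy tally. In the sketch below, the category names and counts are invented for illustration; only the roughly 90% aggregate mirrors the reported result.

```python
# Hypothetical evaluation records: (query_category, was_accurate).
# Categories and counts are invented; they are not the study's data.
from collections import defaultdict

evals = (
    [("simple_lookup", True)] * 460 + [("simple_lookup", False)] * 15
    + [("multi_hop", True)] * 120 + [("multi_hop", False)] * 35
    + [("niche_topic", True)] * 150 + [("niche_topic", False)] * 30
    + [("conflicting_sources", True)] * 170 + [("conflicting_sources", False)] * 20
)

totals, correct = defaultdict(int), defaultdict(int)
for category, accurate in evals:
    totals[category] += 1
    correct[category] += accurate  # True counts as 1

print(f"aggregate accuracy: {sum(correct.values()) / len(evals):.1%}")
for category in totals:
    print(f"{category:>20}: {correct[category] / totals[category]:.1%}")
```

Run it and the aggregate lands at exactly 90.0% while the multi-hop bucket sits near 77%, which is the shape the study reports: the headline number is real, and so is the concentrated failure underneath it.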
The Medical Information Problem
The category that attracted the most scrutiny in the study was medical information, where errors in AI Overviews have previously generated news coverage, most notoriously the early incident in which an AI Overview recommended eating rocks. The new data suggests the rate of clear factual errors in medical queries has declined significantly since the product launched in 2024, consistent with Google's claim that it has applied additional quality filters to health-related searches. What persists is a subtler problem: AI Overviews for medical queries tend to present contested clinical guidance confidently, flattening genuine medical uncertainty into declarative statements. That is not the same as stating a fact that is false, but it may be more dangerous in practice because it is harder to detect.
What 90% Accuracy Means in Practice
AI Overviews receives billions of queries. At Google's scale, a 10% error rate does not describe a marginal phenomenon — it describes hundreds of millions of incorrect or misleading AI-generated responses delivered to users who have increasing reason to trust them as the default search experience. The aggregate accuracy number also obscures the difference between a wrong date in a historical summary and a wrong medication dosage in a health query. Accuracy rates that aggregate across all query types hide the specific failure modes that matter most for user safety and decision-making. The study's contribution is not the headline number but the identification of where that number breaks down — which gives Google a precise target for the next round of quality improvements, and gives researchers a methodology for tracking whether those improvements materialize.
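The scale arithmetic is simple enough to show directly. In the sketch below, the daily query volume is an assumption chosen only for illustration, since Google does not publish AI Overviews impression counts; the 10% failure rate is the study's approximate aggregate figure.

```python
# Back-of-the-envelope scale math. The impression count is an
# assumed, illustrative figure, not a published Google number.
daily_ai_overview_impressions = 2_000_000_000  # assumption
error_rate = 0.10  # the study's approximate aggregate failure rate

errors_per_day = daily_ai_overview_impressions * error_rate
print(f"errors per day:  {errors_per_day:,.0f}")    # 200,000,000
print(f"errors per year: {errors_per_day * 365:,.0f}")
```

Under that assumed volume, the errors number in the hundreds of millions per day. The precise figure matters less than the order of magnitude, which is exactly what the aggregate 90% obscures.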