The AI Trust Paradox: More Americans Are Using It. Fewer Trust What It Tells Them.
A new Quinnipiac University poll reveals a deepening divide in American attitudes toward AI: adoption continues to rise, but confidence in the accuracy of AI outputs is falling at the same time. The finding challenges the assumption that familiarity breeds trust; with AI, familiarity appears to be breeding skepticism instead.

D.O.T.S AI Newsroom
AI News Desk
The standard model of technology adoption assumes that trust and usage grow together. People adopt tools as they see them work, develop confidence through experience, and eventually integrate the technology into their baseline expectations. A new Quinnipiac University national poll suggests AI is breaking this model.
The survey, conducted among a nationally representative sample of American adults, found that AI tool adoption continues to rise: more Americans report using AI assistants, writing tools, and AI-powered search than at any prior measurement point. Yet in the same survey, confidence in the accuracy of AI outputs has declined. The gap between "I use this" and "I trust what it says" is widening.
What's Driving the Divergence
The most likely explanation is experience. Early AI adoption was driven by users who had not yet encountered the failure modes that emerge with regular use: confident hallucination, factual error, context collapse. As AI tools have become mainstream enough to be used for consequential tasks, more users have hit those failure modes personally. The person who first used ChatGPT for creative brainstorming and encountered no friction is now using it to research a legal question or verify a medical claim, and finding that the confident tone masks real reliability problems.
This is a structural feature of how large language models work, not a product defect that will be patched away. The models generate plausible text, not verified truth, and a confident tone is not a reliable signal of accuracy. Users who have learned this through experience are incorporating it into their mental model of what AI is useful for, and what it is not.
The Transparency and Regulation Gap
The poll also found that concerns about AI transparency and demand for stronger regulation have risen alongside the decline in trust. This suggests users are not simply adjusting their personal usage behavior; they are forming views about institutional accountability. An AI tool that confidently produces inaccurate information is a personal inconvenience. The same failure mode operating at the scale of news, healthcare, financial advice, or public services is a systemic risk that individuals cannot manage on their own.
The paradox the poll reveals, widespread adoption paired with declining trust, is one of the more politically significant data points in AI's current trajectory. The policy window for establishing credible AI accountability frameworks is likely narrower than it appears: it closes at the point where low trust crystallizes into active opposition, and the debate shifts from designing accountability to restricting the technology outright.