Americans Are Using More AI Than Ever — and Trusting It Less
A new Quinnipiac University poll reveals a deepening paradox at the heart of AI adoption: as more Americans integrate AI tools into their daily lives, confidence in AI-generated results is declining — not rising. The findings carry serious implications for every company building AI products.

D.O.T.S AI Newsroom
AI News Desk
A new Quinnipiac University poll, reported by TechCrunch, has surfaced a paradox that should give every AI product team pause: American adoption of AI tools is rising, but trust in those tools is falling — simultaneously, and in the same population.
The finding contradicts a common assumption baked into AI product strategy: that familiarity breeds confidence. The data suggests the opposite is happening. The more Americans use AI tools, the more they notice the gaps.
The Adoption-Trust Divergence
The Quinnipiac survey captures a moment of cognitive dissonance across the American public. Increasing numbers of Americans use AI tools — ChatGPT, Claude, Gemini, and the expanding ecosystem of AI-powered applications embedded in productivity software — but this growing usage is not translating into confidence in outputs. The poll found that fewer respondents express trust in AI-generated results than in previous comparable surveys, even as adoption rates trend upward.
This is structurally different from how trust typically develops with new technologies. Early adopters of the internet, smartphones, or cloud services became more confident as they gained experience. AI appears to be following a different trajectory — one where experience exposes limitations rather than building confidence.
Three Fault Lines
The poll identifies three primary anxiety vectors. Transparency leads: respondents want to understand how AI systems make decisions, and the opacity of current systems leaves that need unmet. Regulatory gaps follow: a substantial majority believes government oversight of AI remains inadequate, suggesting that institutional trust structures — the mechanisms that underpin confidence in medicine, finance, and food safety — are absent for AI. Societal impact rounds out the picture: Americans express broad concern about AI's effects on employment, misinformation, privacy, and social stability, a concern that extends beyond individual product experiences.
Why This Matters for AI Companies
The adoption-trust gap is a precarious position to occupy at scale. Users who adopt AI pragmatically — because it saves time, not because they trust it — are not loyal users. They are cost-benefit calculators who will recalibrate the moment a high-profile failure shifts the calculus, or the moment regulation makes caution cheaper than adoption.
The 2026 AI landscape increasingly resembles the early social media era: massive adoption, eroding public confidence, mounting regulatory pressure, and a technology industry that has optimized for growth metrics rather than the trust infrastructure that makes growth sustainable. The Quinnipiac data is a leading indicator. Companies that read it as a product problem — not a PR problem — will be better positioned for what comes next.
The poll was conducted among American adults in late March 2026. Full methodology is available from Quinnipiac University.