Perplexity AI Faces Class-Action Lawsuit Alleging It Secretly Shared User Chats With Meta and Google
Plaintiffs in a new class-action lawsuit allege Perplexity AI shared user conversation data with Meta and Google without consent, in violation of privacy laws. The case tests whether AI search tools operate under the same data-sharing expectations as traditional search engines — and could reshape how AI platforms disclose their data practices.

D.O.T.S AI Newsroom
AI News Desk
Perplexity AI is facing a class-action lawsuit alleging the company shared user chat data with Meta and Google without disclosing it to users or obtaining their consent. The complaint, which seeks class certification, claims Perplexity's data practices violate both the California Consumer Privacy Act and federal wiretapping statutes by enabling third-party access to what users reasonably believed were private AI search conversations.
What the Lawsuit Alleges
The plaintiffs contend that Perplexity shared behavioral and conversational data derived from user queries with Meta and Google for advertising and analytics purposes, a practice they describe as fundamentally inconsistent with how users understand a "private AI search" experience. The complaint draws a distinction between the implied privacy expectations of an AI conversation interface and the more widely understood advertising models of traditional search engines. When a user searches Google, the exchange is broadly understood to involve some data monetization. When a user asks an AI assistant a question, the complaint argues, the conversational framing creates a different expectation of confidentiality.
The case has not yet produced documentary evidence of the alleged data flows, and Perplexity has not publicly addressed the claims. The legal strategy appears designed to force discovery, compelling Perplexity to produce internal documentation about its data-sharing arrangements before the case is resolved on the merits.
The Broader Privacy Stakes for AI Search
Perplexity has grown rapidly as an alternative to traditional search, positioning itself as an AI-native research tool that synthesizes answers rather than returning links. That positioning has attracted significant investment — the company reached a $9 billion valuation in a 2025 funding round — and a user base that skews toward technical professionals who may have higher-than-average privacy expectations. The class-action framing suggests plaintiffs' attorneys see Perplexity's user base as a defined cohort with documentable reliance on its privacy representations.
The case connects to a broader regulatory and legal environment in which AI companies are increasingly held to the same disclosure standards as established tech platforms, often without having built the compliance infrastructure to match. GDPR enforcement in Europe, state-level privacy laws in the US, and FTC scrutiny of AI data practices have created a complex terrain that fast-scaling companies may not have navigated adequately. For Perplexity specifically, the lawsuit arrives as the company aggressively expands its enterprise product and pursues large-scale partnerships, an environment in which unresolved privacy litigation carries significant commercial risk beyond the immediate legal exposure.
What Comes Next
If the case proceeds to discovery, the resulting disclosures about AI search data practices could set precedents well beyond Perplexity. Every major AI assistant (ChatGPT, Claude, Gemini, Copilot) operates within data architectures that most users do not fully understand. A successful class action against Perplexity would create an incentive structure for similar cases against larger platforms, and could accelerate regulatory pressure for standardized AI data disclosure requirements.