Policy

Perplexity AI Faces Class-Action Lawsuit Alleging It Secretly Shared User Chats With Meta and Google

Plaintiffs in a new class-action lawsuit allege Perplexity AI shared user conversation data with Meta and Google without consent, in violation of privacy laws. The case tests whether AI search tools operate under the same data-sharing expectations as traditional search engines — and could reshape how AI platforms disclose their data practices.

D.O.T.S AI Newsroom

AI News Desk

3 min read

Perplexity AI is facing a class-action lawsuit alleging the company shared user chat data with Meta and Google without disclosing it to users or obtaining their consent. The complaint, which seeks class certification, claims Perplexity's data practices violate both the California Consumer Privacy Act and federal wiretapping statutes by enabling third-party access to what users reasonably believed were private AI search conversations.

What the Lawsuit Alleges

The plaintiffs contend that Perplexity shared behavioral and conversational data derived from user queries with Meta and Google for advertising and analytics purposes — a practice they describe as fundamentally inconsistent with how users understand a "private AI search" experience. The complaint draws a distinction between the implied privacy expectations of an AI conversation interface and the more widely understood advertising models of traditional search engines. When a user searches Google, the exchange is broadly understood to involve some data monetization. When a user asks an AI assistant a question, the complaint argues, the conversational framing creates a different expectation.

The case has not yet produced documentary evidence of the alleged data flows, and Perplexity has not publicly addressed the claims. The legal strategy appears designed to force discovery — compelling Perplexity to produce internal documentation of its data-sharing arrangements before the case is resolved on the merits.

The Broader Privacy Stakes for AI Search

Perplexity has grown rapidly as an alternative to traditional search, positioning itself as an AI-native research tool that synthesizes answers rather than returning links. That positioning has attracted significant investment — the company reached a $9 billion valuation in a 2025 funding round — and a user base that skews toward technical professionals who may have higher-than-average privacy expectations. The class-action framing suggests plaintiffs' attorneys see Perplexity's user base as a defined cohort with documentable reliance on its privacy representations.

The case connects to a broader regulatory and legal environment in which AI companies are increasingly held to the same disclosure standards as established tech platforms — without necessarily having built the compliance infrastructure to match. GDPR enforcement in Europe, state-level privacy laws in the US, and FTC scrutiny of AI data practices have created a complex terrain that fast-scaling companies may not have navigated adequately. For Perplexity specifically, the lawsuit arrives as the company is aggressively expanding its enterprise product and pursuing large-scale partnerships — an environment where unresolved privacy litigation carries significant commercial risk beyond the immediate legal exposure.

What Comes Next

If the case proceeds to discovery, the resulting disclosures about AI search data practices could set precedents well beyond Perplexity. Every major AI assistant — ChatGPT, Claude, Gemini, Copilot — operates within data architectures that most users do not fully understand. A successful class action against Perplexity would create an incentive structure for similar cases against larger platforms, and could accelerate regulatory pressure for standardized AI data disclosure requirements.


Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.
