A Two-Person Telehealth Startup Generated $1.8 Billion Using AI to Mass-Produce Fake Advertising
Medvi, a GLP-1 weight loss drug platform that The New York Times celebrated as a lean AI efficiency story, was generating its extraordinary revenue through AI-powered fraud: fabricated doctor profiles, deepfake testimonial videos, and synthetic before-and-after comparisons. The case is a landmark example of what happens when AI marketing tools are deployed without guardrails, and a preview of the fraud ecosystem that regulators are now racing to catch up with.

D.O.T.S AI Newsroom
AI News Desk
In April 2026, The New York Times published a glowing profile of Medvi, a two-person telehealth startup selling GLP-1 weight loss medications that had somehow generated $1.8 billion in revenue. The story was framed as a case study in AI-driven efficiency: a minimal team achieving maximal output through intelligent automation. Within days, the story had inverted. Subsequent investigation revealed that Medvi's extraordinary revenue was not the product of efficient legitimate operations. It was the product of AI-enabled fraud at industrial scale.
How the Scheme Worked
Medvi used AI tools to generate and distribute advertising content that misrepresented its products and the professionals behind them. The operation included fabricated social media profiles for healthcare providers who do not exist, deepfake video testimonials depicting artificially generated before-and-after weight loss transformations, and synthetic images used to support marketing claims. These were not one-off deceptions; they were automated campaigns running across digital advertising platforms at volumes that would have required dozens of employees to produce manually. AI reduced that production to a two-person operation.
The GLP-1 category, covering drugs like semaglutide (Ozempic, Wegovy) that have generated enormous consumer demand, provided cover. Demand for these medications has outstripped supply so severely that consumers and regulators alike have struggled to distinguish legitimate telehealth providers from fraudulent ones. Medvi operated in a gray zone that AI tools allowed it to exploit at a scale no human-staffed fraud operation could have matched.
The Regulatory Gap This Exposes
The case illustrates a structural gap in how advertising platforms and regulators handle AI-generated content. Platform verification systems are designed to catch human-scale fraud: a bad actor manually posting fabricated testimonials can be detected and removed. AI-generated fraud operates on a different order of magnitude. A two-person team deploying automated content generation tools can produce thousands of misleading ad variants, across dozens of platforms, faster than any human review process can flag them.
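To see why exact-match review fails here, consider a toy version of the detection problem. The sketch below is an illustration only, not Medvi's pipeline or any platform's real detection system; the ad copy, shingle size, and similarity threshold are all assumptions. The point it makes: machine-generated paraphrases differ just enough to evade duplicate checks, but shingle-based similarity can still cluster them into a single coordinated campaign.

```python
"""
Hypothetical sketch: clustering machine-paraphrased ad variants.
All inputs and thresholds are illustrative assumptions.
"""
from itertools import combinations


def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Break ad copy into overlapping k-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_variant_clusters(ads: list[str],
                          threshold: float = 0.5) -> list[tuple[int, int, float]]:
    """Return index pairs of ads whose copy is suspiciously similar."""
    sigs = [shingles(ad) for ad in ads]
    flagged = []
    for i, j in combinations(range(len(ads)), 2):
        sim = jaccard(sigs[i], sigs[j])
        if sim >= threshold:
            flagged.append((i, j, sim))
    return flagged


# Example: two machine-generated paraphrases of the same false claim,
# plus one unrelated ad. Only the paraphrase pair gets flagged.
ads = [
    "Dr. Reyes lost 40 lbs in 8 weeks with our GLP-1 program",
    "Dr. Reyes lost 40 lbs in just 8 weeks with our GLP-1 program",
    "Ask your pharmacist about seasonal allergy relief",
]
print(flag_variant_clusters(ads))  # -> [(0, 1, 0.615...)]
```

The catch, and the reason this remains an open problem, is scale: pairwise comparison across millions of daily ad submissions is expensive, and generation is cheaper than detection.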
The FTC and FDA both have jurisdiction over the type of advertising Medvi was running. What the case demonstrates is that neither agency's current enforcement tooling is calibrated for this velocity. The problem is not the absence of rules: fake doctor profiles and fabricated testimonials are clearly illegal under existing law. The problem is detection and enforcement at machine speed.
The Lesson for AI Product Builders
Medvi is the cautionary version of the AI efficiency story. For every legitimate company using AI to reduce headcount and accelerate operations, the same tools are available to operations that have no intention of running a legitimate business. The case does not indict AI marketing tools; it indicts the absence of verification infrastructure capable of distinguishing their legitimate uses from their fraudulent ones. Building that infrastructure is now an urgent problem for platform operators, regulators, and the companies whose tools are being misused.
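What might one piece of that infrastructure look like? Below is a hedged sketch in the spirit of content-credential proposals, not an existing standard or any platform's API: an advertiser cryptographically signs a manifest binding each ad to a verifiable identity and to the exact media it ships, and the platform verifies both before the ad runs. The function names, manifest fields (including the licensed-provider NPI field), and workflow are all hypothetical.

```python
"""
Hypothetical sketch of ad-level provenance verification.
Manifest fields, registry, and function names are assumptions
made for illustration, not a real platform API.
"""
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_ad_manifest(key: Ed25519PrivateKey, advertiser_id: str,
                     prescriber_npi: str, media: bytes) -> tuple[bytes, bytes]:
    """Advertiser side: bind identity and a media hash into a signed manifest."""
    manifest = json.dumps({
        "advertiser_id": advertiser_id,
        "prescriber_npi": prescriber_npi,  # licensed-provider identifier (assumed field)
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }, sort_keys=True).encode()
    return manifest, key.sign(manifest)


def platform_accepts(pubkey: Ed25519PublicKey, manifest: bytes,
                     signature: bytes, media: bytes) -> bool:
    """Platform side: check the signature, then check the media is unaltered."""
    try:
        pubkey.verify(signature, manifest)  # raises InvalidSignature on tampering
    except InvalidSignature:
        return False
    claims = json.loads(manifest)
    return claims["media_sha256"] == hashlib.sha256(media).hexdigest()


# Demo: a signed ad passes; the same manifest with swapped media fails.
key = Ed25519PrivateKey.generate()
creative = b"<video bytes of a real, consented testimonial>"
manifest, sig = sign_ad_manifest(key, "acme-telehealth", "1234567890", creative)
print(platform_accepts(key.public_key(), manifest, sig, creative))     # True
print(platform_accepts(key.public_key(), manifest, sig, b"deepfake"))  # False
```

The design choice that matters is where trust anchors: a scheme like this only works if the signing key is tied to a verified legal identity, which is precisely the registration step a two-person fraud operation cannot survive.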