A Two-Person Startup Built a $1.8 Billion Revenue Engine on AI-Powered Fake Advertising
Telehealth startup Medvi generated $1.8 billion in revenue using AI-generated fake advertising — then watched the story unravel. The case is a detailed illustration of how AI tools have lowered the barrier to large-scale digital advertising fraud.

D.O.T.S AI Newsroom
AI News Desk
Medvi, a telehealth startup with a two-person founding team, generated $1.8 billion in revenue. The mechanism was AI-powered fake advertising: fabricated testimonials, synthetic before-and-after imagery, and AI-generated celebrity endorsements deployed at scale across digital advertising channels. The story is impressive and cautionary in roughly equal measure, and it has become a reference case in discussions of what happens when AI content-production capabilities meet the economics of digital ad fraud.
How the Engine Worked
Medvi's advertising operation used AI to generate high volumes of synthetic testimonials and visual assets that previously would have required large creative teams or fraudulent partnerships with real individuals. The resulting ad creative — celebrity-adjacent imagery, apparently authentic patient stories, medically suggestive before-and-after formats — was designed to perform well against ad-platform engagement algorithms while remaining difficult to fact-check at scale.
The economics are straightforward: digital advertising platforms optimize primarily for engagement signals, not factual accuracy. AI-generated synthetic content can be tuned to maximize those signals. And the cost of producing such content at scale — previously a practical constraint on the size of fraudulent ad operations — has dropped to near zero. What required a large creative staff and substantial legal risk in 2018 could be executed by two people in 2024.
The Unraveling
The $1.8 billion figure represents revenue, not profit, and Medvi's story did not have a clean ending. Regulatory and platform enforcement actions, combined with chargeback and refund exposure, eroded the economics. But the case's significance lies less in the outcome for Medvi specifically than in what it demonstrates structurally: the same AI tools that make legitimate content production cheaper make fraudulent content production cheaper in the same proportion. Enforcement frameworks built on the assumption that large-scale deception requires large-scale resources are not calibrated for this environment.