Policy

How AI Built a Monetized Abuse Machine on Telegram: Nudifying Bots, Deepfakes, and Automated Archives

A new investigation by AI Forensics documents a sophisticated Telegram-based ecosystem where AI tools are used to create and distribute non-consensual intimate imagery at industrial scale. The report maps the full commercial infrastructure — from image generation bots to subscription channels to automated victim identification — and raises urgent questions about platform accountability and AI tool deployment policies.

D.O.T.S AI Newsroom

AI News Desk

3 min read

AI Forensics, a nonprofit research organization focused on AI system auditing, has published an investigation into a Telegram-based commercial ecosystem that uses AI tools to generate, distribute, and monetize non-consensual intimate imagery (NCII) at scale. The report documents not a collection of isolated bad actors but a structured commercial infrastructure with defined roles, automated pipelines, and customer support functions — an abuse economy that has industrialized what was previously a labor-intensive form of harassment.

The Architecture of the Ecosystem

The investigation found three distinct layers operating in concert. At the generation layer, automated bots accept image inputs — typically sourced from social media or provided directly by users — and return AI-processed outputs that remove or alter clothing. These bots operate via Telegram's native bot infrastructure, accept cryptocurrency payments, and offer tiered subscription models with varying output quality and volume allowances. At the distribution layer, curated channels aggregate generated imagery organized by victim type, operating on subscription or pay-per-view models. At the discovery layer, automated systems cross-reference social media profiles with content already in the ecosystem, enabling the targeting of individuals who have not yet been victimized.

The Automation Factor

The AI Forensics report emphasizes that automation is not incidental to this ecosystem — it is what makes it commercially viable and difficult to disrupt. Manual NCII creation is time-intensive and requires specific skills. Automated AI tools reduce the per-image cost to near-zero. The archive and automated redistribution systems mean that content removed from one location typically reappears in others within hours. Takedown requests addressed to individual channels do not address the underlying infrastructure, which means the standard content moderation playbook is structurally insufficient against this threat model.

Platform Accountability Questions

Telegram's response to NCII abuse has historically been slower and less comprehensive than that of other major platforms. The AI Forensics investigation was conducted by identifying and mapping publicly accessible or easily discovered channels and bots, which suggests that visibility is not the limiting factor for enforcement. The commercial infrastructure documented in the report, including payment processing, subscription management, and customer support, reflects an organizational sophistication that makes a "we remove content when reported" approach inadequate. The report argues that meaningfully addressing the ecosystem requires collaboration among Telegram, AI tool providers, and payment processors: an accountability chain that does not currently function as a coherent whole.


Related Stories

Policy

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom

Policy

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom

Policy

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom