Policy

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom

AI News Desk

3 min read

OpenAI published a Child Safety Blueprint on April 8, laying out its policies, technical controls, and external partnerships for combating AI-generated child sexual abuse material. The release comes at a moment of genuine urgency: the National Center for Missing and Exploited Children received 36.2 million reports of suspected CSAM in 2023, and law enforcement agencies say the volume has climbed further in 2024 and 2025 as AI image and video generation tools have become more accessible. The document is OpenAI's most comprehensive public statement on how its models are trained to refuse this category of request, what happens when attempts are made, and how the company cooperates with law enforcement.

What the Blueprint Commits To

The Blueprint is organized around five areas. On training data, OpenAI commits to excluding CSAM from every training set and to third-party audit requirements for its data suppliers. On fine-tuning, it describes controls intended to prevent safety training from being stripped out through the fine-tuning APIs. On detection and reporting, confirmed CSAM generated through or sent to OpenAI systems is automatically reported to NCMEC. The remaining two areas cover external partnerships, including active participation in the Tech Coalition's Project Protect framework and cooperation with the Internet Watch Foundation, and research commitments, chiefly funding for detection technology research and classifier development. The document does not quantify the number of reports OpenAI has made to NCMEC, a notable omission given that Microsoft, Google, and Meta publish those figures annually.

The Scale of the Problem AI Created

The broader context for the Blueprint is uncomfortable for the AI industry. Open-source image generation models — not primarily OpenAI's own, but products whose existence was enabled by the same diffusion model research — have been used to generate synthetic CSAM at a scale that law enforcement agencies describe as unprecedented. The Stanford Internet Observatory documented this in detail in 2023. The problem has grown since. Restricting OpenAI's own products is necessary but not sufficient, since the models being most heavily exploited are open-weight systems that the company neither controls nor distributes. The Blueprint's value is partly in demonstrating what responsible practices look like for a major lab; whether those practices propagate to the broader open-source ecosystem that poses the larger practical risk is a separate, harder question.

The Policy Gap

The Blueprint arrives as legislators in the US, UK, and EU develop rules that would impose CSAM detection and reporting obligations on AI companies. The Kids Online Safety Act (KOSA), the EARN IT Act, and the EU's Child Sexual Abuse Regulation are at various stages of the legislative process, and each would expand platforms' child safety obligations, with the EARN IT Act and the EU regulation in particular pushing toward mandatory detection of abuse material and reporting to authorities. OpenAI's publication of a voluntary blueprint before mandatory requirements take effect is a familiar industry playbook: demonstrating self-regulation to influence the shape of coming legislation. The practical effect depends on whether the document's commitments are specific enough to verify and whether the company publishes the reporting statistics that would allow external evaluation.

Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom
Hackers Are Redistributing the Leaked Claude Code Repository — With Bonus Malware Attached
Policy

Wired reports that threat actors are repackaging the leaked Claude Code source repository and uploading it to file-sharing platforms bundled with information-stealing malware. The pattern is a textbook social engineering play: developers curious about the leaked AI tool are downloading what looks like the genuine repository and executing malware in the process.

D.O.T.S AI Newsroom