Policy

Suno Bans Copyrighted Content But Its Own AI Keeps Generating It Anyway

An investigation by The Verge finds that Suno's AI music platform regularly produces outputs that reproduce recognizable copyrighted melodies and lyrics — directly contradicting the company's stated policy that forbids use of copyrighted material. The gap between policy and capability is becoming a pattern across creative AI platforms.

D.O.T.S AI Newsroom

AI News Desk

Suno's stated policy is unambiguous: users may not use copyrighted material on its platform. The policy exists for obvious legal reasons — Suno is already a defendant in a copyright suit brought by the major labels, and its terms represent an attempt to wall off user-generated liability. The problem, documented in detail by The Verge this week, is that Suno's own AI models routinely generate outputs that reproduce elements of copyrighted songs — familiar melodies, distinctive lyrical phrases, characteristic arrangements — regardless of what users prompt for or whether users upload copyrighted source material at all.

The Technical Reality Behind the Policy Gap

The gap exists because of how music generation models are trained. They learn from vast corpora of recorded music, absorbing melodic patterns, harmonic structures, rhythmic signatures, and lyrical conventions at a granular level. When prompted to generate content in the style of a specific artist or genre, the model draws on what it has absorbed — and what it has absorbed includes, by construction, the copyrighted works that define that style. The policy says "no copyrighted material." The model says "I was trained on copyrighted material and that training is not separable from my outputs." These two statements cannot both be fully true in practice.

The Verge's investigation documents specific cases where Suno outputs closely reproduce melodies or lyrics from identifiable copyrighted songs — instances where the connection is not stylistic influence but measurable similarity. Suno's DMCA-based content removal process exists for exactly these cases, but it is reactive: the content is generated, it potentially infringes, it gets reported and removed. The policy did not prevent the infringement; it just created a process for addressing it after the fact.
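What "measurable similarity" means here can be illustrated with a toy sketch: reduce two melodies to transposition-invariant interval sequences, then score how much of one sequence reappears in the other. This is purely illustrative Python, not the methodology The Verge, Suno, or any rights holder actually uses; the melodies, function names, and the choice of `difflib.SequenceMatcher` as the scorer are all assumptions made for the sketch.

```python
from difflib import SequenceMatcher

def interval_signature(midi_notes):
    """Reduce a melody to its sequence of pitch intervals so the
    comparison ignores key and transposition."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

def melodic_similarity(melody_a, melody_b):
    """Score shared interval structure between two melodies, 0.0 to 1.0."""
    sig_a = interval_signature(melody_a)
    sig_b = interval_signature(melody_b)
    return SequenceMatcher(None, sig_a, sig_b).ratio()

# A melody transposed up a fifth keeps identical intervals, so it
# scores 1.0 even though no MIDI pitch matches.
original   = [60, 62, 64, 65, 67, 65, 64, 62]   # a C-major phrase
transposed = [n + 7 for n in original]
print(melodic_similarity(original, transposed))  # → 1.0

# An unrelated phrase shares little interval structure and scores low.
unrelated = [60, 67, 59, 72, 61, 66, 58, 70]
print(melodic_similarity(original, unrelated))
```

Real plagiarism detection is far more involved (rhythm, harmony, audio fingerprinting), but the sketch shows why "stylistic influence" and "measurable similarity" are distinguishable in principle: the latter produces a number.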

The Industry-Wide Version of This Problem

Suno is not uniquely negligent here — it is the most visible example of a structural problem that every creative AI platform faces. The training data that makes generative AI useful is, by and large, copyrighted. The platforms have adopted policies that match their legal interests rather than their technical capabilities. The distance between those two things is where the liability lives, and the lawsuits currently working through the courts will eventually determine who bears it.


Related Stories

Policy

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

Policy

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

Policy

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.
