An AI Company Cloned a Musician's Voice — Then Filed Copyright Claims Against Her When She Complained
A musician has publicly alleged that an AI company used her recordings to train a voice synthesis model without consent, then turned around and filed DMCA claims against her social media posts documenting the situation. The case is becoming a flashpoint for AI copyright accountability.

D.O.T.S AI Newsroom
AI News Desk
A musician identified on social media as Unlimited LS has alleged that an AI company trained a voice synthesis model on her recordings without permission, then filed copyright infringement claims against her posts documenting the situation — effectively using IP enforcement tools to suppress criticism of unauthorized IP use. The account, which surfaced on Twitter and accumulated significant attention on Hacker News, describes a pattern that legal observers say is increasingly common but rarely this publicly visible.
The Alleged Sequence of Events
According to the musician's account, her vocal recordings were used to train an AI voice model without her knowledge or consent. When she discovered the use and began documenting it publicly — including sharing audio comparisons between her original recordings and the AI-generated output — the company filed takedown claims against her posts. The takedowns removed her evidence while leaving the AI-generated content in circulation.
The irony is structural: copyright law, designed in part to protect artists from unauthorized reproduction of their work, is being deployed here to remove an artist's documentation of what she alleges is unauthorized reproduction of her work. The asymmetry reflects the gap between how DMCA enforcement mechanisms were designed to function and how they function when the entity with institutional resources is the one accused of misappropriation.
The Broader Legal Context
The case arrives as the legal framework for AI training data and voice synthesis remains deeply unsettled. The RIAA's pending cases against Suno and Udio address music generation broadly but do not specifically target voice cloning. Several states have passed or are considering right-of-publicity expansions that would give performers stronger claims against nonconsensual voice synthesis — but federal law has no equivalent provision.
For the AI music industry, the episode represents a reputational cost that goes beyond the specific legal outcome. If the dominant public narrative around AI voice synthesis companies becomes "they clone your voice and file claims against you when you object," the regulatory and public relations environment will harden accordingly. The legal questions are unresolved. The optics question is not.