Opinion

Inside the Schism: Anthropic Reportedly Views OpenAI as AI's 'Tobacco Industry'

Leaked internal framing reveals that Anthropic's founding team views OpenAI's commercialization-first culture as analogous to Big Tobacco — an industry that knew its product caused harm and kept selling it anyway. The framing illuminates why Anthropic's public policy stances have been so systematically at odds with every other major AI lab.

D.O.T.S AI Newsroom

When Dario and Daniela Amodei left OpenAI in 2021 to found Anthropic, the official story was a straightforward disagreement about AI safety prioritization. New reporting suggests the internal framing inside Anthropic is considerably sharper: the company reportedly views OpenAI's approach to AI development as analogous to the tobacco industry — an organization aware of the risks its technology poses and choosing commercial expansion over harm mitigation.

The framing, described by sources familiar with Anthropic's internal culture in a new report from The Decoder, is not merely rhetorical. It shapes how Anthropic makes product decisions, how it engages with government regulators, and why it has consistently been the loudest major AI lab on questions of capability evaluation and deployment safeguards.

What the Comparison Actually Means

The tobacco industry analogy has a specific meaning in corporate ethics contexts. It refers not to a company that accidentally causes harm, but to one that knew about the harm, conducted internal research confirming it, and then suppressed or ignored that research while expanding distribution. The indictment is not incompetence — it is knowing complicity.

Applied to OpenAI, the comparison is pointed. It implies Anthropic believes OpenAI understands the risks of deploying frontier AI capabilities at scale, has internal evidence of those risks, and is choosing growth targets over safety thresholds. This would explain why Anthropic's former colleagues at OpenAI have consistently downplayed Anthropic's safety arguments: the disagreement is not technical but ethical.

The Strategic Consequences

If this is genuinely how Anthropic's leadership frames its competitor, it clarifies several otherwise puzzling aspects of the company's behavior. Anthropic's refusal to give the Pentagon unrestricted Claude access for autonomous weapons applications — the dispute that triggered the ongoing federal legal battle — looks less like a narrow compliance judgment and more like a categorical ethical commitment. In the tobacco framing, the maker of a potentially harmful product does not get to disclaim responsibility for how it is used downstream, so you do not grant distribution rights you cannot revoke.

The framing also explains Anthropic's willingness to accept short-term commercial disadvantage in exchange for policy positions. A company that believes its primary competitor is acting in bad faith doesn't try to match that competitor's release velocity — it tries to change the industry's regulatory environment before the tobacco playbook fully plays out.

OpenAI's Position

OpenAI has not responded to the specific framing. The company's public position — consistent since Sam Altman returned as CEO — is that rapid deployment of capable AI is itself the safest path, because broad real-world use accelerates the development of alignment and safety techniques. Whether that argument reflects good-faith belief or an updated version of the tobacco industry's playbook depends entirely on whom you ask.
