Opinion

OpenAI's Safety Exodus Finally Has an Explanation: Sam Altman Says His 'Vibes Don't Fit' with Traditional AI Safety

A New Yorker profile based on over 100 interviews reveals why safety researchers keep leaving OpenAI — and the answer is more candid than most expected. Altman acknowledges the misalignment directly, in his own words. Departing researchers went on to found Anthropic. The pattern has not stopped.

D.O.T.S AI Newsroom

AI News Desk

3 min read
A sweeping New Yorker profile published in April 2026, drawing on over one hundred interviews, has produced the most detailed account yet of why OpenAI has lost so many safety-focused researchers — and why the pattern shows no sign of reversing. The explanation comes, in part, directly from Sam Altman: "My vibes don't really fit with a lot of this traditional A.I.-safety stuff."

From Candor to Pattern

The statement is striking in its casualness. Altman was not describing a policy disagreement or an organizational restructuring. He was describing an aesthetic and philosophical incompatibility — a mismatch in sensibility between his approach to AI development and the framework that traditional safety researchers bring to the work. For people whose professional identity is built around systematic risk assessment, the characterization of their concerns as a "vibe problem" is precisely the kind of response that leads to resignations.

The departures have had concrete competitive consequences. Anthropic was founded specifically by former OpenAI safety researchers — including Dario Amodei, formerly OpenAI's VP of Research — who concluded that their concerns could not be resolved within OpenAI's organizational structure. Anthropic has since become OpenAI's most technically credible competitor, with a constitutional AI alignment approach and a safety research program that many of the departed researchers now lead. The vibe problem has, in effect, funded OpenAI's most serious rival.

Dismantling Safety Infrastructure

The profile documents a pattern beyond individual departures. OpenAI has disbanded safety-focused teams, compressed safety evaluation timelines, and, according to internal sources, rushed through testing for GPT-4 Omni in approximately one week — a timeline that raised significant concerns among remaining safety staff. The compression matters because it shortens the window during which safety evaluators can identify and flag risks before a model reaches commercial deployment.

The Pentagon contracts flashpoint illustrates the dynamic. When employees raised ethical objections to OpenAI's new military AI contracts following the company's entry into Department of Defense work, Altman's response was pointed: "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that." The framing effectively reclassified safety concerns as personal political opinions — and therefore as things employees have no professional standing to raise.

On Changing Positions

The New Yorker profile also examines Altman's documented history of shifting public positions on AI risk. In 2019, he was among the voices cautioning against releasing GPT-2 at full capability, citing danger from broad distribution. Years later, he released vastly more capable systems to the general public. When confronted about this reversal, Altman defended the pattern: "I think what some people want is a leader who is going to be absolutely sure of what they think and stick with it, and it's not going to change. And we are in a field, in an area, where things change extremely quickly."

A former board member cited in the profile expressed concern about "indifference to the consequences of potential deceptions." The characterization aligns with a broader portrait of an organization where pragmatic adaptability is a core value — and where commitments made to manage external concerns are subject to revision when circumstances change.

What Hasn't Changed

The profile's most unsettling implication is structural rather than biographical. OpenAI's safety departures are not the result of a miscommunication that better management could fix. They reflect a genuine philosophical gap between Altman's view of how AI should be developed and the view held by the researchers who keep leaving. That gap has been visible since Anthropic's founding, and it has not closed. The exits will continue as long as the gap does.
