Policy

OpenAI Proposes Robot Taxes, Public Wealth Funds, and a Four-Day Workweek to Manage AI's Economic Disruption

OpenAI released a detailed economic policy paper outlining how governments should respond to superintelligence — including taxing AI-generated profits, creating sovereign wealth funds to distribute gains broadly, and reducing the standard workweek. The proposal is notable for what it concedes: that AI will displace workers at scale and that market mechanisms alone will not produce equitable outcomes.

D.O.T.S AI Newsroom


3 min read

OpenAI published a policy paper on Monday laying out the company's vision for how governments should restructure economic institutions to manage the disruption that advanced AI will cause. The proposals — including a tax on AI-generated profits, the creation of public wealth funds modeled on sovereign wealth vehicles, and a reduction of the standard workweek — represent the company's most explicit acknowledgment to date that the technology it is building will not distribute its benefits through market mechanisms alone.

The Core Proposals

The paper clusters its recommendations around three themes. First, taxation: OpenAI proposes directing a share of AI-derived productivity gains into public investment vehicles, treating AI profits the way resource-rich countries treat oil revenues — as something that belongs partly to the public whose infrastructure, institutions, and labor markets made them possible. The specific structure is left vague, but the direction is clear: AI companies should pay into funds that benefit people displaced or left behind by AI adoption.

Second, sovereign wealth funds: Rather than routing AI tax revenues through conventional government budgets, the paper advocates for dedicated public funds that invest in long-term assets and distribute returns to citizens. The Norway model — a national oil fund that converts resource revenue into a perpetual wealth vehicle — is the implicit template. Applied to AI, such a fund would accumulate capital from the technology transition and return it over time to a citizenry whose economic position has been degraded by automation.

Third, a four-day workweek: As AI systems absorb increasing shares of knowledge work, the paper argues for distributing remaining human work more equitably across the labor force by reducing standard hours without proportional wage cuts. The proposal treats the four-day week not as a perk but as a structural response to technological unemployment.

What the Paper Concedes

The paper's significance is partly in its admissions. OpenAI is not arguing that AI will create more jobs than it destroys or that the gains from productivity growth will flow naturally to workers. It is arguing that proactive policy intervention is necessary to prevent AI-driven growth from becoming AI-driven concentration. That is a meaningful shift in public positioning for a company whose commercial pitch depends on convincing enterprise customers that AI is net-positive for their workforce.

The Credibility Gap

Critics from both directions have identified the paper's central weakness: it advocates for redistribution without specifying mechanisms. It does not define what qualifies as AI profits for tax purposes, does not propose tax rates, and does not outline what regulatory authority would enforce the proposals. Progressive commentators note that a company proposing wealth redistribution while simultaneously expanding its own wealth is on complicated rhetorical ground. Free-market advocates read the framing as cover for regulatory capture — using the language of worker protection to establish a policy environment that entrenches incumbents. The paper is a serious articulation of real concerns, but it reads more as a conversation opener than a policy blueprint.


Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy


Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy


OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy


Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom