Policy

OpenAI Publishes 12-Page Policy Blueprint for the Superintelligence Era: Public Wealth Funds, Robot Taxes, and a 32-Hour Work Week

OpenAI's new policy paper, 'Industrial Policy for the Intelligence Age,' lays out a comprehensive framework for governing an economy reshaped by systems that can outperform the smartest humans. The proposals include sovereign-style wealth funds for every citizen, updated capital gains taxes, pilot programs for a four-day work week at full pay, and a distributed network of AI research labs across universities and community colleges.

D.O.T.S AI Newsroom

AI News Desk

3 min read

OpenAI has published a twelve-page document titled "Industrial Policy for the Intelligence Age," presenting the company's vision of how governments should prepare for economic disruption from superintelligent AI systems. The paper is explicit about its scope: OpenAI defines superintelligence as "AI systems capable of outperforming the smartest humans" and states that "this transition is already underway." The proposals range from redistribution mechanisms to labor policy to research infrastructure, and they represent OpenAI's first comprehensive attempt to shape the policy environment around advanced AI deployment.

The Public Wealth Fund

The centerpiece of OpenAI's proposal is a "Public Wealth Fund" designed to give every citizen a direct financial stake in AI-generated economic output. Under the framework, all citizens would receive holdings in diversified, long-term assets encompassing AI companies and the broader economy, with returns flowing to individuals "regardless of their starting wealth or access to capital." The model is structurally similar to the Alaska Permanent Fund — a sovereign wealth vehicle that distributes oil revenues to state residents — but applied to AI productivity gains at national scale. OpenAI acknowledges that financing mechanisms require government-industry collaboration, leaving implementation specifics to future negotiation.

Taxation of AI-Driven Returns

On taxation, the paper advocates updating the tax base to sustain existing social programs as labor income is displaced by AI. Specific proposals include higher capital gains taxes at the top of the income distribution, corporate levies on "sustained AI-driven returns," and "taxes related to automated labor." The last category is closest to what is commonly called a "robot tax" — a levy designed to bring the cost of automated labor closer to parity with human labor, slowing displacement incentives while generating revenue for transition programs. Companies that retain and train workers would receive wage-linked incentives under the framework, creating a financial incentive to preserve employment even as AI reduces the economic necessity of human labor for many tasks.

The 32-Hour Work Week

The most concrete proposal is a call for employers and unions to pilot a 32-hour, four-day work week at full compensation. The paper frames this as a testable hypothesis rather than a mandate: if pilot programs demonstrate that productivity remains stable with reduced hours, the shorter work week would become permanent. Where AI reduces operating costs, companies would be expected to redirect savings into pensions, healthcare, and childcare benefits rather than extracting them as profit. Workers would gain formal input into where and how AI systems are deployed, with priority given to deploying AI in dangerous, repetitive, or physically demanding roles rather than replacing skilled or creative work.

Research Infrastructure and Universal Access

OpenAI argues that AI access should become "similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe." The paper proposes distributed networks of AI-powered research laboratories across universities, community colleges, hospitals, and regional research centers — rather than concentrating capabilities at elite institutions — alongside "startup-in-a-box" packages providing micro-grants, model contracts, and shared infrastructure to entrepreneurs in underserved regions.

What to Make of This

The paper is explicitly framed as "intentionally early and exploratory," which creates space for the proposals to be revised or abandoned without commitment. The tension at the center of the document is visible: OpenAI is simultaneously one of the "small number of firms" it acknowledges could capture disproportionate economic gains from superintelligence, and the author of proposals designed to prevent that outcome. Whether the policies proposed would actually constrain OpenAI's own position — or whether the paper is primarily an attempt to shape the regulatory environment favorably before governments act unilaterally — is a question the document does not address. It is, at minimum, the most detailed public statement yet from a frontier AI company about what a fair AI economy should look like.

Back to Home

Related Stories

Policy

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
Policy

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Policy

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom