Ex-OpenAI Researcher Jerry Tworek Founds Core Automation to Build the World's Most Self-Running AI Lab

Jerry Tworek, the OpenAI researcher who led the development of GPT-4's code capabilities, has left to found Core Automation — a new AI lab with an explicitly self-referential mission: build the most automated AI research lab in the world. The lab's thesis is that AI systems capable of running their own research pipelines will produce scientific progress faster than human-directed labs, and that the organization that builds this capability first gains a compounding advantage in AI development.

D.O.T.S AI Newsroom

AI News Desk

5 min read
Jerry Tworek, one of the key researchers behind OpenAI's coding AI capabilities, has left the company to found Core Automation, a new AI research lab whose stated mission is to build the most automated AI research infrastructure in the world. Tworek led the technical development of GPT-4's code understanding and generation capabilities, work widely credited with establishing OpenAI's position as the leader in AI coding, and he brings deep expertise at the intersection of AI systems and software automation to the new venture.

Core Automation's founding thesis is that the bottleneck in AI research progress is no longer primarily compute or data but the human research process itself: the time researchers need to form hypotheses, design experiments, run them, analyze results, and iterate. If AI systems can automate large portions of that loop, a small team with capable AI infrastructure can achieve an effective research velocity exceeding that of a much larger team working through traditional human-directed workflows.

The Self-Directed Research Lab Hypothesis

Core Automation's approach to AI-accelerated research is more specific than the general claim that AI tools make researchers more productive. The lab is building systems designed to close the loop between hypothesis generation and experiment execution: AI agents that do not merely assist human researchers but independently identify promising research directions, configure and run experiments within a parameterized research space, analyze the results, and generate new hypotheses from the findings. Human researchers serve as strategic directors and quality validators rather than primary executors. If this architecture works as envisioned, it would transform the economics of AI research. The current model requires expensive senior researchers to spend much of their time on experimental execution and analysis, work that is intellectually necessary but not where the highest-leverage human judgment is applied. A closed-loop automated research system would instead concentrate human time on the most judgment-intensive parts of the process, framing the right questions, evaluating whether results are meaningful, and deciding which research directions to pursue, while delegating execution to AI systems that operate continuously, free of the scheduling constraints of human researchers.

Tworek's Track Record and What It Signals

Tworek's departure from OpenAI to found Core Automation continues a pattern in which the researchers behind the most commercially impactful capabilities at frontier AI labs have converted that credibility into capital for their own ventures. Ilya Sutskever's departure to found Safe Superintelligence, John Schulman's move to Anthropic, and numerous other exits have shown that OpenAI alumni with landmark research achievements can attract substantial investor interest. Tworek's background in coding AI is particularly relevant to Core Automation's mission: building automated research infrastructure demands exactly the kind of AI-assisted software engineering that his GPT-4 coding work addressed, and the lab's technical agenda of automating the research process may be most tractable as an application of the same code generation and execution capabilities his previous work advanced. Core Automation has not yet disclosed funding details, but the founding team, the mission, and Tworek's track record position it to raise significant venture capital in the current environment, where frontier AI lab funding has been consistently available at scale.

The Race Toward Self-Improving AI Research

Core Automation enters a research space that several frontier labs have been exploring in different forms. Anthropic's Constitutional AI work, DeepMind's research on AI-assisted science, and OpenAI's internal work on AI-assisted model development all involve elements of AI systems contributing to their own development pipeline. What distinguishes Core Automation's mission is the explicit commitment to automation as the primary strategic goal rather than a supporting capability — the lab is not trying to build the best AI models with human researchers using AI tools, but to build AI systems that can conduct research at a velocity and scale that human-directed labs cannot match. Whether this mission is achievable at the current frontier of AI capability, or whether it requires capability levels that do not yet exist, is the fundamental empirical question that Core Automation will spend its early years attempting to answer. The answer has implications that extend well beyond Core Automation itself: a functional self-directed research loop in AI would represent one of the most significant capability thresholds in the technology's development.
