OpenAI President Brockman: GPT Reasoning Models Have 'Line of Sight' to AGI — Debate Is Settled
Greg Brockman has made his most direct statement yet on AGI: OpenAI's GPT reasoning models represent a clear path to artificial general intelligence, he says, declaring the central debate among AI researchers "definitively answered." The claim puts him directly at odds with Yann LeCun and Demis Hassabis.

D.O.T.S AI Newsroom
AI News Desk
OpenAI President Greg Brockman has escalated the industry's most contested question — whether text-based models can achieve general intelligence — by declaring it settled. Speaking on the Big Technology Podcast, Brockman said OpenAI's GPT reasoning models have put AGI within reach.
"I think that we have definitively answered that question — it is going to go to AGI. Like we see line of sight," Brockman said, referring directly to the company's o-series and reasoning model lineage.
A Direct Repudiation of the Multimodal Camp
The statement is a pointed intervention in a genuine technical dispute. For years, a significant faction of AI researchers — led most vocally by Meta's Yann LeCun — has argued that models trained primarily on text cannot develop the grounded understanding of the world necessary for general intelligence. LeCun has repeatedly described large language models as fundamentally limited, incapable of causal reasoning, and unable to model physical reality.
Google DeepMind CEO Demis Hassabis has similarly argued that multimodal, embodied approaches — systems that perceive and act in the physical world — are necessary complements to language modeling. Both positions stand in direct opposition to Brockman's framing.
Brockman's response to this camp is to point to OpenAI's own results. The implicit argument is that progress by GPT reasoning models on complex benchmarks — mathematics, coding, scientific reasoning — has outpaced critics' projections, and that the trajectory is now sufficient to "see" AGI from here.
Sora Sidelined as a 'Different Branch'
The statement has additional strategic significance in the context of OpenAI's own product decisions. The company recently shut down the consumer-facing Sora app and has reduced its investment in world-model research, which it had previously positioned as a key pillar of its long-term roadmap. Brockman described Sora as "an incredible model" but placed it "on a different branch of the tech tree" from the GPT reasoning series.
That framing effectively subordinates world-model research to language model scaling — a resource allocation decision that will define OpenAI's direction for years. The company is signaling that its bets are concentrated on the GPT architecture lineage, not multimodal world modeling.
What 'Line of Sight' Actually Means
The phrase "line of sight" is doing significant work in Brockman's formulation. It implies a visible path — not necessarily a near-term arrival. AGI, by most definitions including OpenAI's own, refers to systems capable of performing virtually any cognitive task at or above human level. Even optimistic internal timelines at frontier labs typically place that threshold years away. Brockman is not claiming OpenAI has achieved AGI; he is claiming the architecture is now sufficient that the remaining work is a matter of scale and refinement, not a fundamental breakthrough.
That distinction matters enormously for how the AI safety community interprets the statement. A "line of sight" claim from the president of the world's most prominent AI lab is effectively a statement about risk: the current trajectory, if unimpeded, leads to transformative capability. How quickly it does so, and what safeguards apply at each step, remain the most consequential open questions in the field.