GPT-5 Pro impresses with its complex, layered responses to prompts. The crown jewel of this month's GPT-5 rollout even made OpenAI CEO Sam Altman nervous with some of its answers. But you should not confuse dazzling algorithmic models with genuine independent thinking, according to Dr. Ben Goertzel, who helped popularize the term artificial general intelligence (AGI) in the early 2000s.
Now the CEO of the Artificial Superintelligence Alliance and TrueAGI Inc., and the founder of SingularityNET, Goertzel wrote an essay paying tribute to GPT-5 Pro as “a remarkable technical achievement,” which he finds useful for formatting research papers, parsing mathematical frameworks, and improving his own prose. But he does not mistake the model's abilities for actual brain-style cognition.
“These models, impressive as they are, are completely lacking in the creative, inventive spark that characterizes human intelligence at its best,” Goertzel wrote. “More fundamentally, they literally don't know what they're talking about. Their knowledge is not grounded in experience or observation; it is pattern matching at an extraordinarily sophisticated level, but pattern matching nevertheless.”
No matter how fast or thorough the model's output is, it is ultimately shallow. You can be dazzled by the surface, but there is nothing going on beneath the statistical inference. It is not surprising that people see a blurry line between GPT-5 Pro and AGI, he hastened to add, since it can emulate logic, extend reasoning, and make it look as if a thought process is happening, but it is nothing like a human or animal brain. Stringing together associations learned in training is not the same as drawing on memory, experience, or a vision of future goals.
“This distinction is not semantic nitpicking. Real AGI requires grounding in both external and internal experience,” Goertzel wrote. “In these basic aspects of open-ended cognition, today's LLMs are far inferior to a one-year-old human child, despite their incredible intellectual facility.”
AGI’s future
GPT-5 Pro and its siblings are built on an increasingly strained assumption: that scaling up large language models will inevitably produce AGI. Goertzel also suggests that the current LLM approach is locked into a business model that limits innovation: OpenAI, he notes, is trying to build AGI while selling scalable chatbot services to billions of users. The AGI label, he warns, is thrown around too freely. While GPT-5 Pro and other tools are undeniably powerful, it is at best premature, and possibly misleading, to call them minds.
“GPT-5 Pro deserves recognition as a remarkable achievement in AI engineering. For researchers and professionals who need sophisticated technical assistance, it is currently unmatched,” Goertzel wrote. “But we should not mistake incremental improvements in large-scale natural-language pattern matching for progress toward real artificial general intelligence.”
Goertzel’s description of a real AGI is a model that keeps learning new things, regardless of whether a user is interacting with it. The continuous development of a mind, as in human experience, goes far beyond the discrete training and deployment of an AI model. GPT-5 Pro is frozen the moment it is deployed: a sealed jar of intelligence.
Goertzel’s work would smash this jar and spread the intelligence across decentralized systems. Ultimately, he hopes to produce an intelligence that does not merely mimic how brains work but acts as one, with internal models of the world and beliefs it would update over time.
“The road to AGI will not be found by simply scaling current approaches. It requires fundamental innovations in how we ground knowledge, enable continuous learning, and integrate different cognitive abilities,” Goertzel concludes. “GPT-5 and its successors are likely to play important supporting roles in future AGI systems, but the leading role requires more innovative architectures we are still creating.”