- OpenAI CEO Sam Altman said that testing GPT-5 left him scared, in a recent interview
- He compared GPT-5 to the Manhattan Project
- He warned that AI's rapid progress is happening without adequate oversight
OpenAI CEO Sam Altman has painted a portrait of GPT-5 that sounds more like a thriller than a product launch. In a recent episode of the This Past Weekend with Theo Von podcast, he described the experience of testing the model in breathless tones that invite more skepticism than the alarm he seemed to want listeners to feel.
Altman said that GPT-5 “feels very fast” while recounting moments when he felt very nervous. Despite being the driving force behind GPT-5’s development, Altman claimed that during some test sessions he looked at GPT-5 and found himself comparing it to the Manhattan Project.
Altman also issued a damning indictment of current AI governance, suggesting that “there are no adults in the room” and that oversight structures have lagged behind AI development. It is a strange way to sell a product that promises serious leaps toward artificial general intelligence. Raising potential risks is one thing, but acting as if he has no control over how GPT-5 works feels somewhat uncomfortable.
OpenAI CEO Sam Altman: “It feels very fast.” – “While I was testing GPT-5 I was scared” – “Looking at that thinking: What have we done … as in the Manhattan Project” – “There are no adults in the room” — from r/ChatGPT
Analysis: Existential GPT-5 fears
What exactly spooked Altman isn’t clear, as he did not go into technical details. Invoking the Manhattan Project is another over-the-top analogy: a reference that signals irreversible, potentially catastrophic change and a global-scale effort seems strange as a comparison for a sophisticated autocomplete. And to say that they built something they do not fully understand makes OpenAI seem either reckless or incompetent.
GPT-5 is due out soon, and there are hints that it will extend far beyond the abilities of GPT-4. The “digital mind” described in Altman’s comments could genuinely represent a shift in how the people who build AI think about their work, but this kind of messianic or apocalyptic projection seems silly. Public discourse around AI has mostly oscillated between breathless optimism and existential fear; something in the middle seems more appropriate.
This is not the first time Altman has publicly acknowledged his discomfort with the AI arms race. He is on record saying that AI could “go quite wrong” and that OpenAI should act responsibly while still shipping useful products. But while GPT-5 will almost certainly arrive with better tools, friendlier interfaces and a slightly snappier logo, the key question it raises is about power.
The next generation of AI, if it is faster, smarter and more intuitive, will be handed even more responsibility. Based on Altman’s comments, that sounds like a bad idea. And even if he is exaggerating, I’m not sure his is the kind of company that should decide how this power is wielded.