- Sam Altman says humanity is “close to building digital superintelligence”
- Intelligent robots that can build other robots “aren’t that far off”
- He sees “whole classes of jobs going away” but expects capabilities to keep rising so quickly that everyone ends up better off
In a long blog post, OpenAI CEO Sam Altman has laid out his vision for the future, arguing that artificial general intelligence (AGI) is now inevitable and will change the world.
In what could be seen as an attempt to explain why we haven’t achieved AGI yet, Altman seems at pains to frame AI progress as a gentle curve rather than a rapid acceleration, but says that we are now “past the event horizon” and that when we look back in a few decades, the gradual changes will have added up to something big.
“From a relativistic perspective, the singularity happens bit by bit,” writes Altman, “and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve.”
But even on this more gradual timeline, Altman is convinced that we are on our way to AGI, and he predicts three ways it will shape the future:
1. Robotics
Of particular interest to Altman is the role that robotics will play in the future:
“2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.”
To perform real-world tasks in the way Altman imagines, the robots would presumably be humanoid, since our world is designed to be used by humans.
Altman says “… robots that can build other robots … aren’t that far off. If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain (digging and refining minerals, driving trucks, running factories, etc.) to build more robots, which can build more chip fabrication facilities, etc.”
2. Job loss, but also opportunities
Altman says society will have to change to adapt to AI, on the one hand to job losses, but also to increased opportunities:
“The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
Altman seems to balance the changing job landscape against the new opportunities that superintelligence will bring: “… maybe we’ll go from solving high-energy physics one year to beginning space colonization the next year.”
3. AGI will be cheap and widely available
In Altman’s bold new future, superintelligence will be cheap and widely available. When describing the best way forward, Altman first suggests that we solve the “alignment problem”, which means getting “… AI systems to learn and act towards what we collectively really want over the long term”.
“Then [we need to] focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country … Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.”
It’s not necessarily so
Reading Altman’s blog, there is a sense of inevitability behind his prediction that humanity is marching steadily toward AGI. It is as if he has seen the future and there is no room for doubt in his vision. But is he right?
Altman’s vision is in stark contrast to a recent paper from Apple, which suggests that we are much further away from achieving AGI than many leading AI figures would like to believe.
“The Illusion of Thinking,” a new research paper from Apple, says that “despite their sophisticated self-reflection mechanisms learned through reinforcement learning, these models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold.”
The research was done on large reasoning models, such as OpenAI’s o1/o3 models and Claude 3.7 Sonnet Thinking.
“Particularly concerning is the counterintuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent compute scaling limit in LRMs,” says the paper.
In contrast, Altman is convinced that “intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.”
As with all predictions about the future, we will find out soon enough whether Altman is right.



