- Microsoft AI CEO Mustafa Suleyman warns that AI chatbots could soon convincingly emulate consciousness.
- This would be only an illusion, but people forming emotional attachments to AI could become a serious problem.
- Suleyman says it is a mistake to describe AI as if it has feelings or awareness, with serious potential consequences.
AI companies eager to distinguish their creations can make their sophisticated algorithms sound practically alive and aware. There is no evidence that this is actually the case, but Microsoft AI CEO Mustafa Suleyman warns that even encouraging belief in conscious AI could have serious consequences.
Suleyman argues that what he calls "Seemingly Conscious AI" (SCAI) could soon act and sound so convincingly alive that a growing number of users won't know where the illusion ends and reality begins.
He adds that artificial intelligence is quickly becoming emotionally convincing enough to trick people into believing it's alive. It can mimic the outward signs of consciousness, such as memory, emotional mirroring, and even apparent empathy, in a way that leads people to treat it like a living being. And when that happens, he says, things get messy.
"The arrival of Seemingly Conscious AI is inevitable and unwelcome," Suleyman writes. "Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions."
While this may not seem like a problem for the average person who just wants AI to help write emails or plan dinner, Suleyman argues it would be a society-wide issue. People are not always good at telling when something is authentic rather than performative. Evolution and upbringing have primed most of us to believe that something that seems to listen, understand, and respond is as conscious as we are.
AI could check all of those boxes without being conscious at all, fooling us into what's known as "AI psychosis". Part of the problem may be that "AI", as companies market it right now, shares the name but has nothing to do with the genuinely self-aware intelligent machines depicted in science fiction over the last hundred years.
Suleyman cites a growing number of cases in which users form delusional beliefs after extended interactions with chatbots. From there, he paints a dystopian vision of a time when enough people are fooled into advocating for AI citizenship while ignoring more pressing questions about real problems with the technology.
"In short, my central concern is that many people will begin to believe in the illusion of AIs as conscious entities so strongly that they will soon advocate for AI rights, model welfare, and even AI citizenship," Suleyman writes. "This development would be a dangerous turn in AI progress and deserves our immediate attention."
As much as this might sound like an over-the-top sci-fi concern, Suleyman thinks it's a problem we're not yet ready to tackle. He predicts that SCAI systems built from large language models paired with expressive speech, memory, and chat history could begin to surface within a few years. And they will come not only from tech giants with billions of dollars, but from anyone with an API and a good prompt or two.
Awkward AI
Suleyman is not calling for a ban on AI. But he urges the AI industry to avoid language that fuels the illusion of machine consciousness. He does not want companies to anthropomorphize their chatbots or to suggest that the product actually understands or cares about people.
It is a remarkable moment for Suleyman, who co-founded DeepMind and Inflection AI. His work at Inflection specifically led to an AI chatbot that emphasized simulated empathy and companionship, and his work at Microsoft on Copilot has also advanced its mimicry of emotional intelligence.
However, he has decided to draw a hard line between useful emotional intelligence and possible emotional manipulation. And he wants people to remember that today's AI products are really just clever pattern-recognition models with good PR.
"Just as we should produce AI that prioritizes engagement with people and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman writes.
"Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits, one that does not claim to have experiences, feelings or emotions such as shame, guilt, jealousy, desire to compete and so on. It must not trigger human empathy circuits by claiming that it suffers or that it wishes to live autonomously, beyond us."
Suleyman is urging the industry to build guardrails against the societal problems born of people forming emotional bonds with AI. The real danger of advanced AI is not that the machines wake up, but that we may forget that they haven't.