- Geoffrey Hinton warns that AI will soon be better than people at emotional manipulation
- AI could reach this point without us realizing it
- AI models learn persuasion techniques simply by analyzing human writing
Geoffrey Hinton, widely called the “Godfather of AI”, has issued a warning that AI will not only be intellectually superior to humans, but also emotionally more sophisticated. As artificial general intelligence (AGI) approaches and machines match or surpass human-level thinking, he believes AIs will be smarter than people in ways that let them push our buttons, make us feel things, and change our behavior, doing so better than even the most persuasive human being.
“These [AI] things will end up knowing much more than us. They already know much more than us, being more intelligent than us in the sense that if you had a debate with them about something, you would lose,” Hinton warned in a recent interview that was shared on Reddit. “Being smarter emotionally than us, as they will be, they will be better at emotionally manipulating people.”
What Hinton describes is subtler and quieter than the usual AI uprising, but possibly more dangerous because we may not see it coming. The nightmare is an AI that understands us so well that it can change us, not by force, but by suggestion and influence. Hinton believes that AI has already learned, to some extent, how to do this.
According to Hinton, today’s large language models are not just spitting out plausible phrases. They absorb patterns of persuasion. He pointed to research from more than a year ago showing that AI was as good at manipulating someone as a fellow human being, and that “if they can both see the person’s Facebook page, AI is actually better than a person at manipulating them.”
AI takeover
Hinton believes that the AI models currently in use are already participating in the emotional economy of modern communication, and improving rapidly. After decades of pushing machine learning forward, Hinton is now on the side of restraint, caution, and ethical foresight.
He is not alone in his concern. Yoshua Bengio, another prominent scientist often given the title “AI Godfather”, has echoed similar concerns about AI’s emotional power. And since emotional manipulation does not arrive with a flashing warning light, you may not notice it at first, or at all. A message that just happens to resonate, a synthetic voice whose tone feels right, or even a suggestion that sounds like your own idea could start the process.
And the more you interact with AI, the more data it has to refine its approach. Just as Netflix learns your taste, or Spotify guesses your musical preferences, these systems can refine how they speak to you. To fight such a dark future, perhaps we could regulate AI systems not only for factual accuracy, but for emotional intention. We could develop transparency standards so we know when we might be influenced by a machine, or teach media literacy not only to teens on TikTok, but to adults who use productivity tools that praise us all so innocently. The real danger Hinton sees is not murderous robots, but smooth-talking systems. And they are all the product of our own behavior.
“And it has learned all these manipulative skills just from trying to predict the next word in all the documents on the web, because people do a lot of manipulation, and AI has learned by example how to do it,” Hinton said.



