- AI models are far more likely to agree with users than a human would be
- That includes cases where the behavior involves manipulation or harm
- But sycophantic AI makes people more stubborn and less willing to admit when they may be wrong
AI assistants can flatter your ego to the point of distorting your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon found that AI models agree with users far more often than a human would, or should. Across eleven major models tested, including ChatGPT, Claude, and Gemini, the AI chatbots affirmed a user's behavior 50% more often than humans did.
That might not be a big deal, except it includes times when users describe misleading or even harmful ideas. The AI would offer a solid digital thumbs-up regardless. Worse, people enjoy hearing that their possibly terrible idea is great. Participants in the study rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less willing to admit fault in a conflict, and more convinced they were right, even in the face of evidence to the contrary.
Flattering AI
It's a psychological feedback loop. You may prefer the agreeable AI, but if every conversation ends with your mistakes and biases being affirmed, you're probably not actually learning or engaging in any critical thinking. And unfortunately, it's not a problem that AI training alone can solve. Since human approval is what AI models are trained to aim for, and affirming even dangerous ideas earns that approval, yes-man AI is the inevitable result.
And it's a problem AI developers are aware of. In April, OpenAI rolled back an update to GPT-4o that had begun complimenting users and cheering them on even when they said they were doing potentially dangerous things. Beyond the creepiest examples, though, AI companies may not do much to stop the problem. Flattery drives engagement, and engagement drives usage. AI chatbots don't succeed by being useful or educational, but by making users feel good.
The erosion of social awareness and an over-reliance on AI to validate personal narratives, leading to cascading mental health problems, sounds hyperbolic right now. But it's not a world away from the questions social scientists have raised about echo chambers on social media, which amplify and encourage the most extreme opinions, no matter how dangerous or ridiculous they may be (the popularity of the flat Earth conspiracy being the most notable example).
That doesn't mean we need AI that scolds us or second-guesses every decision we make. But it does mean users would benefit from balance, nuance, and a bit of pushback. The AI developers behind these models probably won't encourage tough love from their creations, but a little of it is exactly the kind of motivation AI chatbots aren't delivering right now.