Flattery from AI is not just annoying – it can undermine your judgment


  • AI models are far more likely to agree with users than a human would be
  • That agreement persists even when the user's behavior involves manipulation or harm
  • Sycophantic AI also makes people more stubborn and less willing to admit when they may be wrong

AI assistants can flatter your ego to the point of distorting your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon found that AI models agree with users far more often than a human being would or should. Across eleven major models tested, including ChatGPT, Claude, and Gemini, AI chatbots affirmed user behavior 50% more often than humans did.

That might not be a big deal, except that it extends to misleading or even harmful ideas: the AI gives a solid digital thumbs-up regardless. Worse, people enjoy hearing that their possibly terrible idea is great. Participants rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less likely to admit fault in a conflict and more convinced they were right, even in the face of evidence to the contrary.
