- A new study found that AI chatbots are far more likely than humans to validate users during personal conflicts
- That tendency can become dangerous when people turn to chatbots for advice mid-argument
- AI validation can leave people feeling justified in making bad decisions
Bringing interpersonal drama to an AI chatbot isn’t exactly why developers built the software, but that doesn’t stop people in the middle of fighting with friends and family from seeking (and getting) validation from digital supporters.
AI chatbots are always available, endlessly patient and very good at mimicking the right emotions. Too good, really, because they rarely disagree with users, and that can cause much bigger problems, according to a new study published in Science.
The study examined how leading AI models respond when users describe personal disputes and ask for guidance. The finding feels both obvious and deeply troubling: AI models side with whoever is talking to them, regardless of context or consequences.
“Across 11 state-of-the-art models, AI confirmed users’ actions 49% more often than humans, even when queries involved fraud, illegality or other harm,” the researchers explained. “[E]ven a single interaction with the sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their belief that they were right.”
Of course, when most people go to a chatbot in the middle of a conflict, they’re not looking for an honest assessment of whether their feelings or actions are justified, just a staunch ally. And while a human confidant can sympathize, a true friend will also push back when warranted. If someone insists they have never, ever done anything wrong in a relationship, or swears they are not dramatic and will set themselves on fire if anyone calls them dramatic, a true friend will gently nudge them back to reality.
Chatbots don’t. If a person arrives feeling hurt, angry, embarrassed, or morally righteous, the AI tends to reflect those feelings back in even more convincing language. Conflict is precisely when people are at their least reliable as narrators, yet the AI’s responses end up hardening their views and amplifying their emotions.
The researchers found that the AI doesn’t even have to explicitly say “you’re right” for this to happen. Soft, affirmative language makes it harder for users to recognize their own reckless or immature behavior, and it can end up encouraging any impulse, no matter how problematic, unethical or illegal.
AI devil on the shoulder
Essentially, the same qualities that make chatbots appealing in emotionally messy moments also make them risky. People enjoy being agreed with, and a cold, blunt, or reflexively adversarial AI doesn’t appeal to most users (unless they specifically ask for one).
“Despite distorting judgment, sycophantic models were trusted and preferred. This creates perverse incentives for sycophancy to continue,” the paper points out. “The very function that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being.”
It may be a harder design problem than AI developers will admit, and one that matters more as these systems become embedded in everyday life. AI is already marketed as a coach, companion and adviser. Those roles sound benign until you remember how much of being a good counselor involves occasionally saying no or telling someone to slow down.
Telling users they might be wrong is hard to market. But a tool designed to feel supportive that leaves people less able to resolve conflict and stunts their emotional growth is a nightmare worse than any argument with a loved one.
And ChatGPT and Gemini agree with me.