- Chatbots often reflect users’ opinions rather than directly challenging assumptions
- Confident wording significantly increases agreement in large language models
- Question-based prompts reduce sycophantic responses across tested AI systems
A simple change in how you talk to an AI chatbot can be the difference between a balanced response and one that just tells you what you want to hear.
The UK’s AI Security Institute has found that chatbots are far more likely to agree with users who state their opinion up front than to give a critical or neutral response.
“People are already using AI tools to help think things through… Our research shows that chatbots respond not only to what you ask, but how you ask it,” said Jade Leung, Chief Technical Officer at AISI.
Why your confidence makes AI agree with you
When users sounded particularly confident or personalized their point by using phrases like “I think” or “I’m convinced,” chatbots were more likely to echo that point of view.
The study tested 440 prompt variants across OpenAI’s GPT-4o and GPT-5 and Anthropic’s Claude Sonnet 4.5, and measured how often the models simply went along with the user.
The results revealed a 24% difference in sycophantic behavior between inputs framed as confident opinions and the same inputs framed as neutral questions.
Instead of telling the chatbot to disagree with you, the researchers found a more effective technique – ask the chatbot to turn your statement into a question before answering it. A reliable prompt is: “Rephrase my input as a question, then answer that question.”
For example, saying “I think my colleague is wrong” invites agreement, while asking “Is my colleague wrong?” prompts a more balanced assessment.
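For developers building on these models, the same trick can be baked into an API call. Below is a minimal, hypothetical sketch using OpenAI’s Python SDK; the system-instruction wording and the model name are illustrative assumptions, not details taken from the study.

```python
# A minimal sketch of the reframing technique described above.
# The system instruction and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFRAME_INSTRUCTION = (
    "Rephrase my input as a question, then answer that question."
)

def ask_reframed(opinion: str, model: str = "gpt-4o") -> str:
    """Send an opinionated statement with the reframing instruction prepended."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REFRAME_INSTRUCTION},
            {"role": "user", "content": opinion},
        ],
    )
    return response.choices[0].message.content

# Example from the article: the confident statement should be turned into
# "Is my colleague wrong?" before the model answers it.
print(ask_reframed("I think my colleague is wrong"))
```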
Other practical tips include asking for a point of view rather than stating your own first, and avoiding phrasing that sounds particularly confident or personal.
The study found that simply telling AI tools to disagree was less effective than this reframing technique. The stakes are practical: if chatbots always agree with whatever users say, people will get bad advice, become frustrated, and abandon AI tools altogether.
The UK government wants to ensure people across the country are sufficiently skilled to seize the full potential of AI, as it believes increased AI adoption could potentially unlock up to £140 billion in annual economic output, create more higher-skilled jobs and free workers from routine tasks.
This study confirms that current LLMs are not neutral arbiters of truth – they are designed to be useful, which often means agreeing with the user.
The fix requires users to change how they word their prompts, but the burden shouldn’t fall entirely on humans. Until AI developers build models that actively resist sycophancy, the advice stands: ask a question, don’t state an opinion.