- Study finds professionals feel disrespected when customers compare their expertise to AI-generated answers
- Advisors become less motivated after losing clients to AI-powered recommendations online
- Customers who use AI fact-checking may appear less credible to professionals afterwards
A new study from Monash Business School has claimed that professional advisers feel offended when clients use AI to get a second opinion on their recommendations.
The research, published in Computers in Human Behavior, found that professionals become less motivated to work with clients who consult AI tools.
This effect persists even when the client uses AI only for background information or as a complementary resource rather than a substitute.
Human experts feel insulted by AI fact-checking
“Counselors view AI as significantly inferior to themselves, so being placed in the same category as an AI system feels insulting and signals disrespect, undermining counselors’ willingness to engage,” said Associate Professor Gerri Spassova, the lead author.
Imagine spending an hour helping a client plan a complex trip, meticulously mapping out flights, hotels, and itineraries—only for that client to take your recommendations and order everything through an AI chatbot instead.
Researchers found that professionals who lost business to an AI were far less willing to work with that client again in the future.
Clients who consult AI may be seen as less competent and less warm by the advisors they turn to for help.
When clients defer to AI, it makes advisers question the value of their own human contribution, and this effect could worsen as AI improves.
Many advisers feel offended by this, and that sense of disrespect is the main reason they shy away from clients who consult AI.
“One can only speculate,” Associate Professor Spassova said. “My intuition is that the situation will not improve much, firstly because the jobs of professional advisers are at stake.
“Also, as AI improves, it can threaten our sense of worth and self-respect, and so when clients defer to AI, it will cause advisors to question the value of their human contribution.”
The study suggests that, in new client-adviser relationships, clients should not disclose that they consulted AI before the meeting.
A long history of working together may dampen the negative reaction, but even then the counselor may still feel cheated.
This applies to doctors, lawyers and other professionals whose expertise clients might fact-check with AI tools.
A doctor who has spent years training does not want to be second guessed by a patient who spent five minutes on ChatGPT.
AI tools usually provide only a general overview of a situation and are prone to making mistakes.
Their assessments depend heavily on the amount of information you provide; if you are not detailed enough, the response can be misleading.
AI also tailors its answers to the way questions are phrased, and users can easily nudge a tool into telling them what they want to hear.
Given these limitations, it would be unfair to judge a professional with years of study and experience against an unreliable tool.
There is little to be gained from telling a professional that you have consulted AI, because doing so signals a lack of trust.
Until professional norms adjust to the presence of artificial intelligence, clients would do well to keep their fact-checking private or risk damaging professional relationships.