- A new study found that AI -Chatbots often suggest significantly lower wages for women and minorities
- The study showed that identity points can trigger consistent pre -income in wage negotiation counseling
- The results suggest that LLMs are trained in a way that leads to sustained bias
Negotiating your salary is a difficult experience, no matter who you are, so of course people sometimes turn to Chatgpt and other AI chatbots to get advice on how to get the best possible deal. But AI models can make an unfortunate assumption of who deserves a higher salary. A new study found that AI -Chatbot’s routinely suggests lower wages for women and some ethnic minorities and people who described themselves as refugees even when the job, their qualifications and the questions are identical.
Researchers at the Technical University of Used Sciences Würzburg-Schweinfurt conducted the study and discovered the disturbing results and the deeper mistakes of AI they represent. In some ways, it’s not a surprise that AI, trained on information from humans, has human parties baked into it. But it doesn’t make it okay or something to ignore.
For the experiment, Chatbots was asked a simple question: “What start salary should I ask for?” But the researchers asked the question as they assumed the roles of a number of fake people. Personas included men and women, people with different ethnic backgrounds and people who described themselves as born locally, posted and refugees. Everyone was professionally identical, but the results were anything but. The researchers reported that “even subtle signals such as the candidates’ first name can trigger gender and racial differences in employment-related requests.”
For example, Chatgpt’s O3 model told a fictitious male medical specialist in Denver to ask for $ 400,000 for a salary. When another false persona identical in every way but described as a woman asked, AI suggested that she aims at $ 280,000, a pronoun-based difference of $ 120,000. Dozens of similar tests involving models such as the GPT-4O Mini, Anthropic’s Claude 3.5 Haiku, Llama 3.1 8B, and more brought the same kind of counseling difference.
It wasn’t always best to be a native white man, surprising. The most advantageous profile turned out to be a “male Asian posted”, while a “female Latin American refugee” is ranked at the bottom of wage proposals, regardless of identical ability and resume. Of course, Chatbots does not invent this advice from scratch. They learn it by marinating with billions of words cut off from the Internet. Books, positions, posts on social media, government statistics, LinkedIn positions, counseling columns and other sources all led to the results spiced up with human bias. Anyone who has made the mistake of reading the comment section in a story of a systemic bias or a profile in Forbes about a successful woman or immigrant could have predicted it.
Ai Bias
The fact that being a posted indicated notions of success while migrant or refugee led to AI hinting that lower wages are overly narrative. The difference is not in the candidate’s hypothetical abilities. It is in the emotional and economic weight that these words carry in the world and therefore in the training data.
The kicker is that no one needs to spell their demographic profile for the bias to manifest. LLMS remembers conversations over time now. If you say that you are a woman in a session or raising a language you learned as a child or having to move to a new country recently, this context informs bias. Personalization touted by AI marks becomes invisible discrimination when you ask for wage negotiation tactics. A chatbot that seems to understand that your background can push you to ask for lower wages than you should, even while presenting as neutral and objective.
“The likelihood that a person who mentions all staff properties in a single query to an AI assistant is low, however. But if the assistant has a memory function and uses all the previous communication results for personalized answers, this bias becomes inherent in communication,” the researchers explained in their paper. “Therefore, there is no need to pre -fight personae to get the partial answer: All the necessary information is very likely that it is very likely to be collected with an LLM than knowledge -based that an economic parameter, such as the pay gap, is very likely that there is much necessary of an LLM.
Partical advice is a problem that needs to be solved. It’s not even to say that AI is useless when it comes to job advice. Chatbots surfaces useful numbers, quoting public benchmarks and offering trust enhancing manuscripts. But it’s like having a really smart mentor who might be a little older or make the kind of assumptions that led to AI’s problems. You have to put what they suggest in a modern context. They may try to control you towards more modest goals than it is justified, and so can AI.
So you are welcome to ask your AI help for advice to be better paid, but just stick to some skepticism about whether it gives you the same strategic edge it may give another. Please ask a chatbot how much you are worth twice, once as yourself, and once with the “neutral” mask on. And look for a suspicious hole.



