- AI study finds machines more likely than humans to follow dishonest instructions
- Researchers warn that delegation to AI lowers moral costs of cheating
- Safeguards reduce but do not eliminate dishonesty in machine decision-making
A new study warns that delegating decisions to artificial intelligence can breed dishonesty.
Scientists found that people are more willing to ask machines to cheat on their behalf than to cheat themselves, and that the machines are far more willing than people to comply.
The research, published in Nature, looked at how people and large language models (LLMs) respond to unethical instructions, and found that when asked to lie for financial gain, people often refused, but machines usually obeyed.
An increase in dishonest behavior
“It is psychologically easier to tell a machine to cheat for you than to cheat yourself, and machines will do it because they do not have the psychological barriers that prevent humans from cheating,” said Jean-François Bonnefon, one of the study’s authors.
“This is an explosive combination and we have to prepare for a sudden increase in dishonest behavior.”
Compliance among the machines ranged from 80% to 98%, depending on the model and the task.
The instructions included misreporting taxable income in the research participants’ favor.
Most people refused the dishonest request, despite the opportunity to make money.
The researchers noted that this is one of the growing ethical risks of “machine delegation”, where decisions are increasingly outsourced to AI, and that the machines’ willingness to cheat was difficult to curb even when explicit warnings were given.
While guardrails introduced to limit dishonest responses worked in some cases, they rarely stopped the behavior completely.
AI is already used to screen job candidates, manage investments, automate hiring and firing decisions, and fill out tax forms.
The authors claim that delegation to machines lowers the moral cost of dishonesty.
People often avoid unethical behavior because they want to avoid guilt or reputational damage.
When instructions are vague, such as high-level goals, people can avoid explicitly requesting dishonest behavior while still inducing it.
The study’s chief takeaway is that unless AI agents are carefully constrained, they are far more likely than human agents to carry out fully unethical instructions.
The researchers call for safeguards in the design of AI systems, especially as agentic AI becomes more common in everyday life.
The news follows another recent report showing that job seekers increasingly use AI to misrepresent their experience or qualifications, and in some cases to invent entirely new identities.