- AI mimics trust while relying on rigid, structured evaluation patterns
- Machines separate human characteristics instead of forming holistic impressions
- Competence and integrity dominate decisions across both humans and AI
Modern AI systems do not simply process information; they make systematic judgments about people in ways similar to human trust, but with important differences.
A new study from Hebrew University, published in Proceedings of the Royal Society, analyzed more than 43,000 simulated decisions alongside around a thousand human participants across five scenarios.
These scenarios included deciding how much money to lend to a small business owner, whether to trust a babysitter, how to evaluate a boss, and how much to donate to a nonprofit founder.
How AI divides human judgment into separate columns
The results reveal that AI tools form something similar to trust, but their judgment works very differently from ours.
Both humans and AI favored people who appeared competent, honest, and well-intentioned, meaning that machines captured something real about human trust.
“That’s the good news,” said Prof. Yaniv Dover. “AI doesn’t make random decisions. It captures something real about how people evaluate each other.”
But people tend to form a general impression, blending multiple features into a single, intuitive and holistic judgment.
AI does something very different: it breaks people down into components, scoring competence, integrity and kindness, almost like separate columns in a spreadsheet.
“People in our study are messy and holistic in how they judge others,” explained Valeria Lerman. “AI is cleaner, more systematic, and it can lead to very different results.”
These differences emerged even when all other details about the person were identical.
“Of course people have prejudices,” Prof. Dover said. “But what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”
In financial scenarios, such as deciding how much money to lend or donate, AI systems showed consistent differences based solely on demographic traits.
Older individuals often received more favorable outcomes; religion had strong effects, especially in monetary scenarios; and gender also influenced decisions in certain models.
Another important insight is that there is no single “AI opinion”. Different models often made different judgments about the same person.
This means that the choice of an AI system could quietly shape real-world outcomes. “Which model you use really matters,” Lerman noted.
Large language models are already being used to screen job candidates, assess creditworthiness, recommend medical actions and guide organizational decisions.
The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid, less nuanced way, with biases that may be harder to detect.
“These systems are powerful,” Dover said. “They can model aspects of human reasoning in a consistent way. But they are not human, and we should not assume that they see humans as we do.”
As AI tools and AI agents move from assistants to decision makers, understanding how they “think” becomes critical for organizations deploying them at scale.
The researchers emphasize that their results are not a warning against AI, but rather a call for attention.
The question, then, is no longer whether we trust machines, but whether we understand how they trust us.