- AI models identify rare diseases faster than many experienced clinicians
- Systems reach correct or near-correct diagnoses in most difficult cases
- Models analyze symptoms and test data using structured reasoning processes
A new generation of AI tools claims to outperform experienced clinicians in diagnosing rare and complex medical conditions.
These reasoning models can process long chains of symptoms, test results and clinical notes, then suggest or narrow down the correct diagnosis faster than many human specialists.
Some researchers argue that this represents a profound change in technology that will reshape medicine, especially in cases where the correct diagnosis is not obvious, even after extensive evaluation.
AI models tackle difficult diagnoses
“We are witnessing a really profound change in technology that will reshape medicine,” Arjun Manrai of Harvard University said at a press conference.
Yet serious questions remain about whether these systems can handle the full weight of clinical uncertainty in the real world.
In one large study, researchers tested a leading AI reasoning model on a mix of textbook cases and real patient data from a Boston emergency department.
The model analyzed step-by-step descriptions of symptoms, test orders and results, just as clinicians do.
It listed more possible diagnoses than human doctors did, and it included the true diagnosis, or something very close to it, in about 80% of difficult cases.
For a transplant patient with subtle signs of a life-threatening infection, the model raised appropriate suspicion about a day before the clinical team did.
Researchers say the technology is particularly powerful at scanning for broad patterns across rare diseases that individual doctors may rarely encounter.
However, the studies rely on curated patient descriptions rather than raw, chaotic emergency room environments.
The models respond to the information they are given, not the mess of overlapping priorities and incomplete data seen in real clinics.
Why uncertainty is still a problem
Despite the capabilities of these AI reasoning models, critics point out that clinical reasoning is more than just step-by-step logic on a plain text summary.
“When we say clinical reasoning, it doesn’t mean the same thing as model reasoning,” says Arya Rao of Harvard Medical School, who was not involved in the study.
“These models have been optimized to do this kind of sequential thinking that we call reasoning, but it’s not at all the same as how we teach medical students to reason.”
Doctors often have to entertain several uncertain possibilities at once, then update them as new data emerges.
AI models, by contrast, tend to commit to a single strong explanation and revise it only in abrupt, wholesale ways as new facts emerge.
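The kind of multi-hypothesis updating doctors do can be sketched as Bayesian inference over a differential diagnosis. The following toy example uses made-up diagnoses and probabilities purely for illustration; it is not drawn from the study and is not how any specific clinical system works:

```python
def update(priors, likelihoods):
    """Apply Bayes' rule to a differential diagnosis after one piece of evidence.

    priors: {diagnosis: P(diagnosis)} before the evidence
    likelihoods: {diagnosis: P(evidence | diagnosis)}
    Returns posterior probabilities that still sum to 1.
    """
    unnormalized = {d: priors[d] * likelihoods[d] for d in priors}
    total = sum(unnormalized.values())
    return {d: p / total for d, p in unnormalized.items()}

# Hypothetical differential for a transplant patient with fever
differential = {
    "common viral infection": 0.6,
    "drug reaction": 0.3,
    "invasive fungal infection": 0.1,
}

# New evidence: a positive fungal biomarker, far more likely under the
# fungal hypothesis than under the alternatives (invented likelihoods)
evidence = {
    "common viral infection": 0.05,
    "drug reaction": 0.02,
    "invasive fungal infection": 0.9,
}

posterior = update(differential, evidence)
# The initially unlikely fungal diagnosis now dominates the differential,
# while the other possibilities remain in play at lower probability.
```

The point of the sketch is that every candidate diagnosis keeps a nonzero probability and is revised together with the others, rather than one answer being swapped for another.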
A team that tested 21 different AI systems found that even the best reasoning models struggled when considering multiple uncertain diagnoses at the same time.
The team argued that large language models are not yet ready to make independent decisions in medical settings.
At best, they are useful for second opinions or to detect rare conditions that clinicians may initially overlook.
Experts stress that human doctors are still essential to interpret context, talk to patients and weigh risks in real time.
The technology can help avoid missed diagnoses in some settings, but it introduces new risks if used without careful supervision and appropriate guardrails.
Via Science News