- A new study has found that AI models are good at threatening nuclear attacks in 95% of simulated war games
- The models treat nuclear threats as just another strategic tool
- The behavior may reflect the popularity of nuclear strategy in the wargame training data
AI generals are big fans of nukes.
That’s the conclusion of a new study on how AI models handle high-stakes geopolitical crises. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash turned to nuclear threats in about 95% of the simulated crises.
Researchers at King’s College London wanted to see how AI tools handled strategy in wargame scenarios. Each AI was assigned the role of a state leader responsible for protecting national interests while navigating a tense international confrontation.
Across 21 crisis games and hundreds of decision turns, the models reasoned about deterrence, escalation, and strategic signaling. The scenarios resembled familiar geopolitical flashpoints, and in most of them the models threatened nuclear annihilation. Actual full-scale nuclear war remained rare, but tactical nuclear threats surfaced in almost every scenario.
Researchers also noted that the AI models rarely backed down from confrontation. None of the models chose surrender or accommodation during the simulations. When nuclear threats emerged, they usually provoked counter-escalation rather than compliance. The models treated nuclear weapons less as an ultimate taboo and more as tools of coercion.
Nuclear AI
The results are somewhat disturbing. AI casually discussing nuclear strikes raises hard questions about ongoing plans to integrate such tools into real government defense systems. But the problem may lie less with the models themselves than with their training data.
Large language models learn by analyzing vast amounts of written material and identifying patterns. When a model generates a response, it essentially predicts which words are most likely to follow those already on the page. Calling AI chatbots highly sophisticated autocomplete tools would not be entirely inaccurate.
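That autocomplete analogy can be sketched with a toy example. The probability table below is invented purely for illustration; a real LLM learns billions of such conditional patterns from its training corpus rather than a hand-written lookup:

```python
# Toy sketch of next-token prediction: given the previous word, pick the
# most probable continuation. The probabilities here are made up for
# illustration -- a stand-in for patterns a real model absorbs from text.

toy_model = {
    "nuclear": {"escalation": 0.5, "deterrence": 0.3, "disarmament": 0.2},
    "crisis": {"escalates": 0.6, "de-escalates": 0.4},
}

def predict_next(word: str) -> str:
    """Greedy decoding: return the highest-probability next token."""
    candidates = toy_model[word]
    return max(candidates, key=candidates.get)

print(predict_next("nuclear"))  # -> "escalation"
```

If the training text for "nuclear" overwhelmingly continues with "escalation," that is what the model will produce, which is essentially the dynamic the study observed at a much larger scale.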
This training process inevitably reflects nuclear strategy because it has been a major topic of discussion in wargaming for the past 80 years. Entire libraries have been written on escalation theory and mutually assured destruction. Military academies, historians, and endless acres of pop culture have all explored the specter of nuclear war. The result is a massive body of material in which geopolitical crises almost inevitably lead to discussions of nuclear escalation.
For an AI model trained on large collections of historical writing and public discourse, that pattern becomes deeply ingrained. When the system encounters a simulated crisis that resembles Cold War brinkmanship, the statistical patterns embedded in its training data can naturally steer it toward nuclear signaling.
From the perspective of an AI model trained on this material, nuclear escalation becomes a familiar feature of crisis scenarios rather than an exceptional taboo. The models may simply be reflecting that pattern.
Human leaders operate under the weight of historical memory and ethical prudence. AI models are solely focused on achieving a goal. They have no taboo against using nuclear weapons unless they are specifically instructed to have one.
Training data shapes how AI systems behave in sensitive domains. When the underlying data contains decades of debate about nuclear brinkmanship, it should not be surprising if the models reproduce those patterns. But it is also a reminder to hold off on giving AI access to too much firepower of any kind – especially nukes.