- AI-generated passwords follow patterns that hackers can study
- Surface complexity hides statistical predictability underneath
- Entropy shortfalls in AI-generated passwords reveal structural weaknesses in LLM output
Large language models (LLMs) can produce passwords that look complex, but recent tests suggest that these strings are far from random.
A study by Irregular examined password output from AI systems such as Claude, ChatGPT, and Gemini, asking each to generate 16-character passwords with symbols, numbers, and mixed uppercase and lowercase letters.
At first glance, the results appeared strong and passed common online strength tests, with some checkers estimating they would take centuries to crack, but a closer look at these passwords told a different story.
LLM passwords show repetitions and predictable statistical patterns
When researchers analyzed 50 passwords generated in separate sessions, many were duplicates and several followed nearly identical structural patterns.
Most began and ended with similar character types, and none contained repeated characters.
This absence of repetition may seem reassuring, but it actually signals that the output follows learned conventions rather than true randomness.
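A minimal sketch of the kind of checks described, assuming a list of collected outputs (the sample strings below are invented placeholders, not Irregular's data or tooling):

```python
from collections import Counter

def char_class(c: str) -> str:
    """Bucket a character into one of four classes."""
    if c.islower():
        return "lower"
    if c.isupper():
        return "upper"
    if c.isdigit():
        return "digit"
    return "symbol"

# `samples` stands in for the 50 passwords collected in separate sessions;
# these three strings are made-up placeholders, not the study's data.
samples = ["Kx9#mPv2@qRw4$tZ", "Kx9#mPv2@qRw4$tZ", "Tq7!nBs3#wLd5%yX"]

# Exact duplicates across sessions
dupes = {pw: n for pw, n in Counter(samples).items() if n > 1}
print("duplicates:", dupes)

# Structural signature: character class of the first and last position
signatures = Counter((char_class(pw[0]), char_class(pw[-1])) for pw in samples)
print("start/end patterns:", signatures)

# No password reuses a character -- a tell of convention, not randomness
print("all characters unique:", all(len(set(pw)) == len(pw) for pw in samples))
```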
Using entropy calculations based on character statistics and model log-likelihoods, researchers estimated that these AI-generated passwords carried about 20 to 27 bits of entropy.
A truly random password of 16 characters will typically measure between 98 and 120 bits using the same methods.
The gap is significant: in practice it can mean such passwords are vulnerable to brute-force attacks within hours, even on outdated hardware.
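For a sense of scale, here is a back-of-the-envelope calculation (my illustration, not the study's methodology; the guess rate is an assumption):

```python
import math

def ideal_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

def expected_crack_seconds(bits: float, guesses_per_second: float) -> float:
    """Expected brute-force time: half the keyspace at a given guess rate."""
    return 2 ** (bits - 1) / guesses_per_second

rate = 1e4  # hypothetical rate for a slow password hash on modest hardware

random_bits = ideal_entropy_bits(16, 94)  # 94 printable ASCII characters
print(f"truly random 16 chars: {random_bits:.1f} bits")          # ~104.9
print(f"  crack time: {expected_crack_seconds(random_bits, rate):.1e} s")

print("AI-patterned estimate: 27 bits")
print(f"  crack time: {expected_crack_seconds(27, rate):.1e} s")  # ~6.7e3 s, hours
```

At 27 bits, even this deliberately slow guess rate exhausts half the keyspace in under two hours; at roughly 105 bits, the same attack takes longer than the age of the universe.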
Online password strength meters evaluate surface complexity, not the hidden statistical patterns behind a string. Because they don't account for how AI tools generate text, they can classify predictable output as secure.
Attackers who understand these patterns can refine their guessing strategies and narrow the search space dramatically.
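To see why meters are fooled, consider a minimal sketch of the character-class test many of them effectively run (hypothetical code, not any particular meter's implementation):

```python
import string

def naive_strength(pw: str) -> str:
    """Surface check only: length plus character-class coverage."""
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    return "strong" if len(pw) >= 16 and all(classes) else "weak"

# A structurally predictable, AI-style string still passes with flying colors
print(naive_strength("Kx9#mPv2@qRw4$tZ"))  # -> strong
```

A check like this sees sixteen characters and four character classes and stops there; it has no way to notice that the string follows a learned template.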
The study also found that similar sequences appear in public code repositories and documentation, suggesting that AI-generated passwords may already be circulating widely.
If developers rely on these outputs during testing or deployment, the risk compounds over time. Notably, even the AI systems that generate these passwords don't fully trust them, and some will warn users who ask for one.
Gemini 3 Pro, for example, returned password suggestions along with a warning that chat-generated credentials should not be used for sensitive accounts.
It instead recommended passphrases and advised users to trust a dedicated password manager.
A password generator built into such tools relies on cryptographic randomness rather than language prediction.
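For comparison, this is the general shape of cryptographically random generation, sketched here with Python's standard secrets module (the general approach, not any specific manager's code):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character independently from the OS's CSPRNG via the
    standard-library `secrets` module, not from a learned text distribution."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # unpredictable on every run, ~105 bits of entropy
```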
Simply put, LLMs are trained to produce plausible and repeatable text, not unpredictable sequences, so the broader concern is structural.
The design principles behind LLM-generated passwords conflict with the requirements of secure authentication, so any protection they appear to offer comes with a built-in loophole.
“People and coding agents should not rely on LLMs to generate passwords,” Irregular said.
“Passwords generated through direct LLM output are fundamentally weak, and this cannot be fixed by prompt or temperature adjustments: LLMs are optimized to produce predictable, plausible output, which is incompatible with secure password generation.”
Via The Register