- ChatGPT’s new age prediction feature is rolling out globally ahead of the launch of its adult mode
- Some adults are being incorrectly flagged as teenagers
- Frustrated users worry that overriding the inaccurate restrictions means giving up more of their privacy
ChatGPT’s new age-predicting AI model is rolling out globally, but it’s coming across as a little overzealous in its attempts to detect who’s under 18 so it can automatically apply the “teen mode” content filters.
The idea of using AI to identify underage users and steer them into their own version of the chatbot has obvious appeal, especially with ChatGPT’s adult mode arriving soon. OpenAI believes its AI models can infer a user’s likely age based on behavior and context.
But it appears that ChatGPT doesn’t only apply safeguards to users under 18. More than a few adult subscribers have found themselves reduced to talking to the teen version of ChatGPT, with restrictions preventing them from engaging with more mature topics. It has been an ongoing issue since OpenAI began testing the feature a few months ago, but that hasn’t stopped the wider rollout.
The technical side of the feature is murky. OpenAI says the system uses a combination of behavioral signals, account history, usage patterns, and occasional language analysis to make an age estimate. When it’s uncertain, the model errs on the side of caution. In practice, this means newer accounts, users with late-night usage habits, or those who ask about teen-relevant topics may get swept up in the safety net, even if they’ve subscribed to the Pro version of ChatGPT for a long time.
AI ID verification
On the surface, it seems like a classic case of good intentions meeting blunt implementation. OpenAI clearly wants to create a safer experience for younger users, especially given the tool’s growing reach in education, family environments, and creative teen projects.
For users flagged incorrectly, the company says the fix is easy: you can confirm your age via a verification tool in Settings. OpenAI uses a third-party service, Persona, which in some cases may ask users to submit an official ID or a selfie video to verify their identity. But for many, the bigger problem isn’t the extra click. It’s being misread by a chatbot and then having to hand over more personal details to contest the call.
“We’re rolling out age prediction on ChatGPT to help determine when an account is likely to belong to someone under the age of 18, so we can apply the right experience and security measures for teens. Adults misplaced in the teen experience can verify their age in Settings > Account.” – OpenAI, 20 January 2026
Asking for ID, although optional and anonymized, raises questions about data collection, privacy, and whether this is a backdoor to more aggressive age verification policies in the future. Some users now suspect that OpenAI is testing the waters for full identity verification under the guise of teen safety, while others worry that the model may be partially trained on their submissions, although the company insists it is not.
“Great way to force people to upload selfies,” wrote one Redditor. “If [OpenAI] ask me for a selfie I will cancel my subscription and delete my account,” another wrote. “I understand why they do this but please find a less invasive way.”
In a statement on its help page, OpenAI clarified that it never sees the actual ID or image. Persona simply confirms whether the account belongs to an adult and returns a yes or no result. The company also says that all data collected during this process is deleted after verification and the sole aim is to correct misclassification.
The tension between OpenAI’s vision of a more personal AI and the need to layer on safety mechanisms that don’t alienate users is on full display. And its explanations of how much it can infer about someone from behavioral cues alone may not satisfy everyone.
YouTube, Instagram, and other platforms have tried similar age estimation tools, and all have faced complaints from adults wrongly flagged as underage. But with ChatGPT now a regular companion in classrooms, home offices, and therapy sessions, the idea of an invisible AI filter suddenly handling you with kid gloves feels particularly personal.
OpenAI says it will continue to refine the model and improve the verification process based on user feedback. But the average user asking for wine pairing ideas and being told they’re too young to drink might just leave ChatGPT in exasperation. No adult wants to be mistaken for a child, especially by a chatbot.