- New ChatGPT safety rules are frustrating users
- Sensitive conversations are redirected to a different AI model
- OpenAI has responded to user complaints
OpenAI is embroiled in another controversy over AI model switching in ChatGPT, with many paying users furious that they are being redirected away from their favorite model when conversation topics become emotionally or legally sensitive.
There are plenty of threads on Reddit about the problem. In short, ChatGPT introduced new safety routing rules this month, which send users to a separate, more conservative AI model if the chatbot detects that it should be extra careful in its answers.
This has clearly frustrated and annoyed many users who want to stick with GPT-4o, GPT-5, or whatever model they happen to be paying for. There is currently no way to disable this behavior, and it is not always clear when the switches happen.
“Adults deserve to choose the model that suits their workflow, context and risk tolerance,” wrote one user. “Instead, we get silent overrides, secret safety routers and a model picker that is now basically UI theater.”
Safety routing
“We’ve seen the strong reactions to 4o responses and want to explain what is happening.” – September 27, 2025
Enough of a fuss has been kicked up that OpenAI’s Nick Turley has weighed in on social media. Turley explains that the new safety routing system is for “sensitive and emotional topics” and works on a per-message, temporary basis.
It’s part of a broader effort to improve how ChatGPT responds to signs of mental and emotional distress, as OpenAI has previously explained in a blog post. From the user’s perspective, however, the new rules take some getting used to.
OpenAI clearly has a responsibility to look after users who may be vulnerable and who may need extra support from an AI chatbot that is less expansive and freeform, especially young people who have access to ChatGPT.
For many of the users venting their anger online, though, it feels like being forced to watch TV with parental controls locked in place, even when there are no children around. This is likely a problem we’ll hear more about in the coming days.



