- OpenAI introduces parental controls to ChatGPT
- Parents will be able to link accounts, set feature limits, and receive alerts if their teen shows signs of emotional distress
- Sensitive ChatGPT conversations will also be routed through more cautious models trained to respond to people in crisis
OpenAI is implementing safety upgrades for ChatGPT designed to protect teens and people dealing with emotional crises. The company announced plans to roll out parental controls that will let parents link their accounts to their children's accounts starting at age 13. Parents will be able to limit features and will receive real-time warnings if the AI detects troubling messages that may indicate depression or other distress.
The update shows that OpenAI isn't denying that teens use ChatGPT, or that they sometimes treat the AI as a friend and confidant. Though the company doesn't say so directly, it also feels like a response to several recent high-profile cases of people claiming that interactions with an AI chatbot contributed to a loved one's suicide.
The new controls will start rolling out within the next month. Once set up, parents can decide whether the AI chatbot can save chat history or use its memory feature. Age-appropriate content rules will also be on by default to govern how the AI responds. If a conversation is flagged, parents will receive an alert. This isn't blanket surveillance; parents won't otherwise be notified about conversations, but the alerts will deploy in moments when a real check-in may matter most.
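To make that flow concrete, here is a minimal sketch of how a linked-account policy and alerting step could be wired up. Everything in it (the `TeenAccountPolicy` fields, the keyword screen, the alert step) is a hypothetical illustration; OpenAI has not published an API for these controls.

```python
from dataclasses import dataclass

# Toy stand-ins for a trained distress classifier; purely illustrative.
DISTRESS_MARKERS = ("i can't go on", "no one would miss me")

@dataclass
class TeenAccountPolicy:
    save_chat_history: bool = False     # parent decides whether chats are stored
    memory_enabled: bool = False        # parent decides whether memory is used
    age_appropriate_rules: bool = True  # on by default, per the announcement

def looks_like_acute_distress(message: str) -> bool:
    """Stand-in for the real detector, which would be a trained model."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def handle_message(message: str, policy: TeenAccountPolicy, parent_email: str) -> None:
    """Alert the parent only on flagged messages; everything else stays private."""
    if looks_like_acute_distress(message):
        # No transcript is shared -- just a nudge that a check-in may help.
        print(f"Alert sent to {parent_email}: your teen may need a check-in.")

handle_message("Some days I think no one would miss me",
               TeenAccountPolicy(), "parent@example.com")
```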
“Our work to make ChatGPT as helpful as possible is constant and ongoing. We’ve seen people turn to it in the hardest moments,” OpenAI explained in a blog post. “That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input.”
Emotionally safer models
For adults and teens alike, OpenAI says it will begin routing sensitive conversations involving mental health struggles or suicidal thoughts through a specialized version of ChatGPT’s model. That model is trained with a method called deliberative alignment to respond more carefully, resist adversarial prompts, and stick to safety guidelines.
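As a rough illustration of the routing idea (not OpenAI's actual implementation, whose logic and model names are unpublished), a simple router might look like this:

```python
# Hypothetical router: flagged conversations go to a more cautious,
# safety-trained model. The trigger list and model names are assumptions.
SENSITIVE_SIGNALS = ("suicide", "self-harm", "hopeless")

def choose_model(message: str) -> str:
    """Send sensitive messages to the safety-tuned model, the rest to the default."""
    if any(signal in message.lower() for signal in SENSITIVE_SIGNALS):
        return "safety-tuned-model"  # responds carefully, resists adversarial prompts
    return "default-model"

assert choose_model("Help me plan a weekend trip") == "default-model"
assert choose_model("Lately everything feels hopeless") == "safety-tuned-model"
```

In practice such a decision would come from a trained classifier rather than keywords, but the shape is the same: detect sensitivity, then escalate to the stricter model.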
To make the new safety system work, OpenAI has assembled an Expert Council on Well-Being and AI and a Global Physician Network that includes more than 250 medical professionals specializing in mental health, substance use, and youth health. These advisers will help shape how distress is detected, how the AI responds, and how escalations should work in moments of real-world risk.
Parents have long worried about screen time and online content, but AI adds a new layer: not just what your child is seeing, but who they are talking to. And when that “who” is an emotionally fluent large language model that sounds like it cares despite being just an algorithm, things get even more complicated.
So far, AI safety has mostly been reactive, but the new tools push it toward proactively heading off harm. Hopefully that means it will rarely take a dramatic alert to a parent and a plea from the AI for a teen to turn to their loved ones. That moment may be awkward, or worse, but if the new features can steer a conversation that is a cry for help back from the edge of the cliff, that’s no bad thing.