- Sam Altman defended OpenAI’s safety efforts after Elon Musk blamed ChatGPT for several deaths
- Altman called AI safety “really hard,” highlighting the balance between protection and ease of use
- OpenAI faces multiple wrongful death lawsuits linked to claims that ChatGPT worsened mental health outcomes
OpenAI CEO Sam Altman is not known for sharing the inner workings of ChatGPT, but he has admitted to struggling to keep the AI chatbot both safe and useful. The admission was apparently sparked by Elon Musk’s barbed posts on X (formerly Twitter), in which Musk warned people not to use ChatGPT and shared a link to an article alleging a connection between the AI assistant and nine deaths.
The blistering social media exchange between two of the most powerful figures in artificial intelligence yielded more than bruised egos and legal sniping. Musk’s post ignored the broader context of the deaths and the lawsuits OpenAI is facing over them, but Altman clearly felt compelled to respond.
His response was more heartfelt than the usual bland corporate boilerplate. Rather than deflecting, he offered insight into the thinking behind OpenAI’s tightrope walk: keeping ChatGPT and other AI tools safe for millions of people while defending the architecture and guardrails already in place. “We need to protect vulnerable users while ensuring that our guardrails still allow all of our users to benefit from our tools.”
“Sometimes you complain that ChatGPT is too restrictive, and in cases like this you claim it’s too relaxed. Almost a billion people use it, and some of them may be in very fragile mental states. We will continue to do our best to get this right and we feel huge…” (Sam Altman on X, 20 January 2026; https://t.co/U6r03nsHzg)
After defending OpenAI’s safety protocols and acknowledging the complexity of balancing harm reduction with product utility, Altman suggested that Musk had no standing to make accusations, pointing to the dangers of Tesla’s Autopilot system.
He said his own experience with it was enough to convince him it was “far from a safe thing for Tesla to have released.” In a particularly pointed aside about Musk, he added, “I don’t even want to start on some of the Grok decisions.”
As the exchange ricocheted across platforms, what stood out most was not the usual billionaire posturing but Altman’s unusually candid framing of what AI safety actually entails. For OpenAI, a company that deploys ChatGPT for schoolchildren, therapists, programmers, and CEOs alike, “safe” means threading the needle between usefulness and harm prevention, goals that are often in conflict.
Altman has not publicly commented on the individual wrongful death lawsuits filed against OpenAI. He has, however, insisted that acknowledging real-world harm does not mean the problem is a simple one. AI output reflects its input, and the unpredictability of its responses means that moderation and safety demand more than the usual terms of service.
ChatGPT’s safety battle
OpenAI says it has worked hard to make ChatGPT safer with each new version. Its models are trained to detect signs of distress, including suicidal ideation; ChatGPT issues disclaimers, ends certain interactions, and directs users to mental health resources when it spots warning signs. OpenAI also says its models refuse to engage with violent content whenever possible.
The public may assume this is straightforward, but Altman’s post points to an underlying tension. ChatGPT handles billions of unpredictable conversations across languages, cultures, and emotional states. Overly rigid moderation would render the AI useless in many of them, while loosening the rules would multiply the risk of dangerous and unhealthy interactions.
Comparing a chatbot to a driver-assistance system isn’t a perfect analogy, despite Altman’s jab. That said, one could argue that while roads are regulated regardless of whether a human or a computer is behind the wheel, chatbot conversations run on far rougher terrain. There is no central traffic authority dictating how a chatbot should respond to a teenager in crisis or to a person experiencing paranoid delusions. In that vacuum, companies like OpenAI are left to write their own rules and refine them on the fly.
The personal element adds another layer to the argument. Altman’s and Musk’s companies are locked in a long-running legal battle: Musk is suing OpenAI and Altman over the company’s transition from a nonprofit research lab to a capped-profit model, claiming he was misled when he donated $38 million to help found the organization and that the company now prioritizes corporate profit over public benefit. Altman says the shift was necessary to build competitive models and keep AI development on a responsible track. The safety conversation is the philosophical and engineering facet of a war being fought in boardrooms and courtrooms over what OpenAI should be.
Whether or not Musk and Altman ever agree on the risks or even speak civilly online, all AI developers would do well to follow Altman’s lead in being more transparent about what AI safety looks like and how to achieve it.