- Kim Kardashian admitted to failing law school exams after asking ChatGPT for help
- Recent viral rumors claimed that ChatGPT stopped offering legal and medical advice
- But AI users can mistake confident answers for actual expertise and should be more careful
Kim Kardashian, possibly the world’s most famous law student, has just blasted generative artificial intelligence. During a lie detector test video interview for Vanity Fair, she admitted to using ChatGPT to help with studying and with tests, but added that the chatbot’s advice has been so inaccurate that she has actually failed some tests by relying on it.
Her story arrives at a particularly opportune time. Over the past week or so, rumors spread online like wildfire that ChatGPT had stopped offering legal and medical advice. Users claimed that ChatGPT refused to answer certain questions about legal and health issues, pointing to a line buried in OpenAI’s updated terms of service as the culprit. The clause lists, among prohibited uses: “Providing tailored advice that requires a license, such as legal or medical advice, without the appropriate involvement of a licensed professional.”
This phrasing set off a wave of speculation that OpenAI had quietly cut off two of ChatGPT’s apparently most popular uses. As with most internet speculation built on an imprecise reading, it turned out to be untrue. OpenAI Health AI lead Karan Singhal explained on X that this was not a new part of the terms of service and that nothing had changed.
You can still discuss legal or medical topics. The phrase has been in place for a while, providing legal cover by making clear that ChatGPT does not pretend to be a licensed professional. It likely attracted attention simply because it looked new to people who hadn’t read ChatGPT’s terms of service before the latest update.
“Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but will continue to be a great resource to help people understand legal and health information.” (Karan Singhal on X, 3 November 2025: https://t.co/fCCCwXGrJv)
But the whiplash caused by this rumor, paired with Kardashian’s very public history of AI disappointment, highlights how people don’t always understand when AI tools are useful and when they become liabilities. ChatGPT can be great for explaining concepts and summarizing information. Treating it as an authoritative source of legal or medical advice, however, is not a good idea.
Kardashian’s approach to ChatGPT is hardly unique. Taking a picture of a test question and asking ChatGPT for the answer is a logical enough impulse. Expecting an answer reliable enough to bet a test score on, however confident it sounded, was more than a little naive.
Sometimes ChatGPT’s biggest flaw isn’t its knowledge gaps, but its confident tone. It almost always uses language that suggests a deep understanding of a subject, even when it makes things up entirely. It’s hallucinations with a side of arrogance.
Kardashian is not a novice tech user. She runs multi-million-dollar businesses and has studied law for years. Yet even with all that experience, she fell into the same trap as many less famous ChatGPT users. It’s one thing to ask ChatGPT for a summary of HIPAA. It is quite another to have it prepare a will, file the document with a court, and only later discover it is filled with fabricated legal information.
AI advice awareness
OpenAI’s terms remind users that ChatGPT is not a professional in any field, let alone law or medicine. The wording suggests OpenAI knows people treat it like one anyway, and is not entirely comfortable with that fact.
ChatGPT can still help decipher a rental agreement or simplify medical terms, but it cannot replace licensed professionals. It cannot vouch for the legality or accuracy of its interpretations, and it certainly will not be held responsible if you end up misdiagnosing a friend or filing a lawsuit based on false case law.
It’s oddly heartening to know that wealth doesn’t protect you from being fooled by AI. We should all double-check ChatGPT’s answers, regardless of the topic. Showing up to a closed restaurant that the chatbot said would still be open is embarrassing, but blind faith in its answers to legal or life-and-death questions is even more reckless.
AI chatbots can be very supportive before an exam or when dealing with a chronic medical condition, but they are only as useful as the judgment you bring to the interaction.