Large language models have an awkward history with telling the truth, especially when they can't give a right answer. Hallucinations have been a hazard for AI chatbots since the technology debuted a few years ago. But GPT-5 seems to take a new, more humble approach to not knowing answers: it admits it.
Although most AI chatbot answers are accurate, you can't interact with one for long before it serves up a partial or complete fabrication in response. The AI projects just as much confidence in its answers regardless of their accuracy. Hallucinations have plagued users and even led to embarrassing moments for developers during demonstrations.
OpenAI had suggested that the new version of ChatGPT would be willing to plead ignorance rather than fabricate an answer, and a viral X post from Kol Tregaskes drew attention to the groundbreaking sight of ChatGPT saying, "I don't know – and I can't reliably find out."
"GPT-5 says 'I don't know'. Love this please." pic.twitter.com/k6snfkqzbg (August 18, 2025)
Technically, hallucinations are baked into how these models work. They do not retrieve facts from a database, even if it looks that way; they predict the next most likely word based on patterns in language. When you ask about something fuzzy or complicated, the AI guesses the words most likely to answer it rather than running a classic search engine hunt. Hence the completely invented sources, statistics, and quotes.
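To see why, here is a minimal sketch of next-token prediction in Python. The prompt, tokens, and probabilities are all invented for illustration, not real model code: the point is that the model ranks candidate continuations by likelihood and samples one, with no step that checks the output against reality.

```python
import random

# A toy next-token distribution. Everything here is made up for
# illustration; a real model scores tens of thousands of tokens with
# a neural network, but the principle is the same.
TOY_DISTRIBUTION = {
    "The capital of Atlantis is": {
        "Poseidonia": 0.45,   # plausible-sounding and entirely fictional
        "Atlantis": 0.25,
        "underwater": 0.22,
        "unknown": 0.08,      # the honest token is rarely the likeliest one
    }
}

def next_token(prompt: str) -> str:
    """Sample the next token in proportion to its predicted probability."""
    dist = TOY_DISTRIBUTION[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Atlantis is"
print(prompt, next_token(prompt))
# Most runs print a confident, fabricated "fact": the model optimizes
# for likely-sounding text, not for truth.
```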
But GPT-5's ability to stop and say, "I don't know," reflects an evolution in how AI models handle the limits of their answers, at least. An honest admission of ignorance replaces fictional filler. It may seem anticlimactic, but it is an important step toward making AI more trustworthy.
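OpenAI hasn't published the mechanism behind this behavior, but one way to picture the idea is a simple abstention rule: if no answer clears a confidence threshold, say so instead of guessing. The function and threshold below are hypothetical, purely to illustrate the concept.

```python
def answer_or_abstain(dist: dict[str, float], threshold: float = 0.6) -> str:
    """Return the likeliest answer, or abstain when confidence is too low.

    A hypothetical rule, not OpenAI's actual method: the point is that
    abstention replaces a low-confidence guess with an honest admission.
    """
    best_answer, confidence = max(dist.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "I don't know – and I can't reliably find out."
    return best_answer

# With the toy distribution above, no candidate clears the bar:
print(answer_or_abstain({"Poseidonia": 0.45, "Atlantis": 0.25,
                         "underwater": 0.22, "unknown": 0.08}))
```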
Clarity over hallucinations
Trust is crucial for AI chatbots. Why would you use them if you don't trust the answers? ChatGPT and other AI chatbots carry built-in warnings not to lean too heavily on their answers because of hallucinations, yet there are always stories of people ignoring that warning and landing in hot water. If the AI simply says it can't answer a question, people may be more inclined to trust the answers it does give.
Of course, there is still a risk that users will interpret the model's admission of uncertainty as failure. The phrase "I don't know" can come across as a bug, not a feature, if you don't realize that the alternative is a hallucination, not the correct answer. Admitting uncertainty is not how the all-knowing AI that people imagine ChatGPT to be would behave.
But it is arguably the most human thing ChatGPT could do here. OpenAI's proclaimed goal is artificial general intelligence, AI that can perform any intellectual task a human can. One of the ironies of AGI, though, is that imitating human thinking means inheriting our uncertainties as well as our capabilities.
Sometimes the smartest thing you can do is admit you know nothing. You can't learn if you refuse to acknowledge there are things you don't know. And at the very least, it spares us the sight of an AI telling you to eat rocks for your health.