ChatGPT gets smarter, but its hallucinations spiral


  • OpenAI’s latest AI models, o3 and o4-mini, hallucinate more often than their predecessors
  • The models’ increased complexity can lead to more confident inaccuracies
  • The high error rates raise concerns about AI reliability in real-world applications

Brilliant but unreliable people are a staple of fiction (and history). The same can apply to AI, according to a study from OpenAI shared with The New York Times. Hallucinations, imaginary facts, and straight-up lies have been part of AI chatbots since they were created. Improvements to the models should, in theory, reduce how often they appear.

OpenAI’s latest flagship models, o3 and o4-mini, are intended to emulate human reasoning. Unlike their predecessors, which focused mainly on fluent text generation, o3 and o4-mini work through problems step by step. OpenAI has boasted that o1 could match or exceed the performance of Ph.D. students in chemistry, biology, and mathematics. But OpenAI’s report highlights troubling results for anyone who takes ChatGPT’s responses at face value.
