The authenticity of ChatGPT-generated content has come under scrutiny after a recent study revealed that the latest version of ChatGPT, GPT-5.2, pulls content from Grokipedia, an AI-generated online encyclopedia created by Elon Musk's xAI.
The revelation has sparked concern among researchers and journalists about the reliability of output from artificial intelligence (AI) platforms, a worry compounded by how heavily internet users now depend on these tools for information.
A report in The Guardian noted that GPT-5.2 referenced Grokipedia several times in its responses to various questions, including sensitive topics such as Iran's political landscape and historical issues related to Holocaust denial.
Across more than a dozen test queries, Grokipedia was cited nine times, suggesting that it is integrated into the model's pool of sources.
Notably, Grokipedia positions itself as a competitor to Wikipedia but relies solely on AI for content creation and updates, raising concerns that biases and inaccuracies embedded in AI-generated content could propagate unchecked.
The Musk-backed encyclopedia has previously been flagged by critics for promoting right-wing perspectives on controversial social and political issues.
However, ChatGPT did not cite Grokipedia when asked about high-profile controversial topics, such as the January 6 Capitol attack or misinformation about HIV/AIDS.
Instead, Grokipedia surfaced mostly in responses to obscure questions, where it made claims going beyond established facts, such as alleged links between an Iranian telecommunications company and the Supreme Leader's office.
The problem is not limited to ChatGPT; other large language models (LLMs), including Anthropic's Claude, have also cited Grokipedia on various topics.
OpenAI explained that its models draw on a variety of sources and apply safety filters to limit the spread of harmful information.
Experts have stressed the need for rigorous source evaluation in AI development and warned that reliance on unreliable sources can mislead users and amplify misinformation.