Hey Chatbot, Is This True? AI ‘Fact Checks’ Sow Misinformation

xAI and Grok logos are seen in this illustration taken in February 2025. – Reuters

As misinformation exploded during India’s four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool, AFP reported.

With tech platforms scaling back human fact-checkers, users are increasingly relying on AI-powered chatbots, including xAI’s Grok, OpenAI’s ChatGPT and Google’s Gemini, in search of reliable information.

“Hey @grok, is this true?” has become a common query on Elon Musk’s platform X, where the AI assistant is built in, reflecting a growing tendency to seek instant debunkings on social media.

But the responses are often themselves riddled with misinformation.

Grok, now under renewed scrutiny for inserting “white genocide,” a far-right conspiracy theory, into unrelated queries, wrongly identified old video footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase during the country’s recent conflict with India.

Unrelated footage of a building on fire in Nepal was misidentified as “likely” showing Pakistan’s military response to Indian strikes.

“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

“Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,” she warned.

‘Fabricated’

NewsGuard’s research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.”

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as “real,” even citing credible-sounding scientific expeditions to support its false claim.

In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok’s assessment as evidence that the clip was real.

Such findings have raised concerns, as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes after Meta announced earlier this year that it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as “Community Notes,” popularized by X.

Researchers have repeatedly questioned the effectiveness of “Community Notes” in combating falsehoods.

‘Biased Answers’

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain that it suppresses free speech and censors right-wing content, something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook’s fact-checking program, including in Asia, Latin America and the European Union.

The quality and accuracy of AI chatbots can vary depending on how they are trained and programmed, raising concerns that their output may be subject to political influence or control.

Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing “white genocide” in South Africa.

When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “most likely” culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously pushed the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.

“We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,” Angie Holan, director of the International Fact-Checking Network, told AFP.

“I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.”
