World News

Fact check: How trustworthy are AI fact checks? | World News – Times of India – Delhi News Daily

delhinewsdaily
Last updated: May 19, 2025 11:51 am
Fact check: How trustworthy are AI fact checks?

“Hey, @Grok, is this true?” Ever since Elon Musk’s xAI launched its generative artificial intelligence chatbot Grok in November 2023, and especially since it was rolled out to all non-premium users in December 2024, thousands of X (formerly Twitter) users have been asking this question to carry out rapid fact checks on information they see on the platform.

A recent survey by the British online technology publication TechRadar found that 27% of Americans had used artificial intelligence tools such as OpenAI’s ChatGPT, Meta’s Meta AI, Google’s Gemini, Microsoft’s Copilot or apps like Perplexity instead of traditional search engines like Google or Yahoo.

But how accurate and reliable are the chatbots’ responses? Many people have asked themselves this question in the face of Grok’s recent statements about “white genocide” in South Africa. Beyond Grok’s problematic stance on the topic, X users were also irritated that the bot began talking about the issue when asked about completely unrelated topics, as in the following example:

Image: X

The discussion around the alleged “white genocide” arose after the Trump administration brought white South Africans to the United States as refugees. Trump said they were facing a “genocide” in their homeland — an allegation that lacks any proof and that many see as related to the racist “Great Replacement” conspiracy myth.

xAI blamed an “unauthorized modification” for Grok’s obsession with the “white genocide” topic and said it had “conducted a thorough investigation.” But do flaws like this happen regularly? How confident can users be of getting reliable information when they want to fact-check something with AI? We analyzed and answered these questions for you in this DW Fact Check.

Study shows factual errors and altered quotes

Two studies conducted this year by the British public broadcaster BBC and the Tow Center for Digital Journalism in the United States found significant shortcomings in the ability of generative AI chatbots to accurately convey news reporting.

In February, the BBC study found that “answers produced by the AI assistants contained significant inaccuracies and distorted content” produced by the organization. When the BBC asked ChatGPT, Copilot, Gemini and Perplexity to respond to questions about current news using BBC articles as sources, it found that 51% of the chatbots’ answers had “significant issues of some form.” Nineteen percent of answers introduced factual errors of their own, while 13% of quotes were either altered or not present at all in the cited articles.

“AI assistants cannot currently be relied upon to provide accurate news and they risk misleading the audience,” said Pete Archer, director of the BBC’s Generative AI Program.

AI offers incorrect answers with ‘alarming confidence’

Similarly, research by the Tow Center for Digital Journalism, published in the Columbia Journalism Review (CJR) in March, found that eight generative AI search tools were unable to correctly identify the provenance of article excerpts in 60% of cases. Perplexity performed best, with a failure rate of “only” 37%, while Grok answered 94% of queries incorrectly.

The CJR said it was particularly concerned by the “alarming confidence” with which AI tools presented incorrect answers. “ChatGPT, for instance, incorrectly identified 134 articles, but signaled a lack of confidence just fifteen times out of its two hundred [total] responses, and never declined to provide an answer,” the report said.

Overall, the study found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead” and that AI search tools “fabricated links and cited syndicated and copied versions of articles.”

AI chatbots are only as good as their ‘diet’

And where does AI itself get its information? It is fed by different sources, such as extensive databases and web searches. Depending on how AI chatbots are trained and programmed, the quality and accuracy of their answers can vary.

“One issue that recently emerged is the pollution of LLMs [Large Language Models — Editor’s note] by Russian disinformation and propaganda. So clearly there is an issue with the ‘diet’ of LLMs,” Tommaso Canetta told DW. He is the deputy director of the Italian fact-checking project Pagella Politica and fact-checking coordinator at the European Digital Media Observatory.

“If the sources are not trustworthy and qualitative, the answers will most likely be of the same kind,” Canetta explained. He said he regularly comes across responses that are “incomplete, not precise, misleading or even false.”

In the case of xAI and Grok, whose owner, Elon Musk, is a fierce supporter of US President Donald Trump, there is a clear danger that the “diet” could be politically controlled, he added.

When AI gets it all wrong

In April 2024, Meta AI reportedly posted in a New York parenting group on Facebook that it had a disabled yet academically gifted child and offered advice on special schooling. The chatbot eventually apologized and admitted that it didn’t have “personal experiences or children,” as Meta told 404 Media, which reported on the incident: “This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we’ve constantly released updates and improvements to our models and we’re continuing to work on making them better,” a spokesperson said in a statement.

In the same month, Grok misinterpreted a viral joke about a poorly performing basketball player and told users in its trending section that he was under police investigation after being accused of vandalizing homes with bricks in Sacramento, California. Grok had misunderstood the common basketball expression whereby a player who has failed to get any of their throws on target is said to have been “throwing bricks.”

Other mistakes have been less amusing. In August 2024, Grok spread misinformation about the deadline for US presidential nominees to be added to ballots in nine US states following then-President Joe Biden’s withdrawal from the race. In a public letter to Musk, Minnesota’s secretary of state, Steve Simon, wrote that, within hours of Biden’s announcement, Grok had generated false headlines claiming that Vice President Kamala Harris would be ineligible to appear on the ballot in multiple states.

Grok assigns same AI image to various real events

It’s not just news that AI chatbots appear to have difficulties with; they also exhibit severe limitations when it comes to identifying AI-generated images.

In a quick experiment, DW asked Grok to identify the date, location and origin of an AI-generated image of a fire at a destroyed aircraft hangar, taken from a TikTok video. In its responses and explanations, Grok claimed that the image showed several different incidents at several different locations, ranging from a small airfield in Salisbury in England, to Denver International Airport in Colorado, to Tan Son Nhat International Airport in Ho Chi Minh City, Vietnam. There have indeed been accidents and fires at these locations in recent years, but the image in question showed none of them. DW strongly believes it was generated by artificial intelligence, which Grok seemed unable to recognize despite clear errors and inconsistencies in the image, including inverted tail fins on airplanes and illogical jets of water from fire hoses.

Even more concerningly, Grok recognized part of the “TikTok” watermark visible in the corner of the image and suggested that this “supported its authenticity.” Conversely, under its “More details” tab, Grok stated that TikTok was “a platform often used for rapid dissemination of viral content, which can lead to misinformation if not properly verified.”

Similarly, just this week, Grok informed X users (in Portuguese) that a viral video purporting to show a huge anaconda in the Amazon, seemingly measuring several hundred meters in length, was real — despite it clearly having been generated by artificial intelligence, and despite Grok even recognizing a ChatGPT watermark in the video.

AI chatbots ‘should not be seen as fact-checking tools’

AI chatbots may appear to be omniscient entities, but they are not. They make mistakes, misunderstand things and can even be manipulated. Felix Simon, postdoctoral research fellow in AI and digital news and research associate at the Oxford Internet Institute (OII), concludes: “AI systems such as Grok, Meta AI or ChatGPT should not be seen as fact-checking tools. While they can be used to that end with some success, it is unclear how well and consistently they perform at this task, especially for edge cases.”

For Canetta at Pagella Politica, AI chatbots can be useful for very simple fact checks. But he also advises people not to trust them entirely. Both experts stressed that users should always double-check responses against other sources.




