Study Warns: AI Chatbots Still Make Mistakes in Reporting News

A recent study revealed that AI-powered chatbots continue to struggle with news accuracy, with some even inventing entire news sources and publishing false reports.

According to The Conversation, a journalism professor specializing in computer science ran a month-long experiment testing seven generative AI systems: Google’s Gemini, OpenAI’s ChatGPT, Claude, Copilot, Grok, DeepSeek, and Aria.

During the experiment, these tools were asked daily to identify the five most important news events in Quebec, Canada, rank them by significance, and provide summaries with direct links to journalistic sources.

Invented Sources and Fake News
The most notable failure came from Gemini, which invented a nonexistent news site and generated a false report about a school bus driver strike in Quebec in September 2025.

In reality, there was no strike; service had been temporarily suspended because electric buses from Lion Electric were withdrawn over a technical malfunction.

This was not an isolated case. A review of 839 responses showed that AI systems frequently cited fake sources, provided incomplete links, or distorted real reports.

Growing Risk with Increased Reliance
These findings are particularly significant as more people rely on chatbots for news. According to the Reuters Institute, around 6% of Canadians used generative AI to get news in 2024.

The study warns that AI “hallucinations”—whether by fabricating events, misrepresenting facts, or adding unsupported conclusions—can spread misinformation, especially since AI responses are often presented confidently and without clear disclaimers.

Incomplete Links and Misleading Conclusions
Only 37% of the AI-generated responses included complete and accurate source links, and fewer than half of the summaries were fully accurate.

Often, AI tools added what researchers called “generative inferences,” such as claiming that certain reports “reignited debate” or “highlighted tensions,” even though these phrases never appeared in human news coverage.

Errors were not limited to fabrication. They also included distortions of real events, such as misrepresenting asylum cases, misreporting the winners of major sports events, or citing inaccurate polling data.

Clear Warning for Users
These results align with a broader report by 22 public media organizations, which found that about half of AI-generated news responses contain significant issues, ranging from weak sourcing to major errors.

As these tools become increasingly integrated into search engines and daily habits, the report concludes with a clear warning: AI may serve as a starting point for understanding news, but it is not yet a reliable source or definitive reference for accurate information.