The study involved asking each bot to answer 100 questions about the news, using BBC sources when available, with their answers then being rated by “journalists who were relevant experts in the subject of the article.”
Regarding its own articles specifically, the BBC says 19% of AI summaries introduced these kinds of factual errors, hallucinating false statements, numbers, and dates. Additionally, 13% of direct quotes were “either altered from the original source or not present in the article cited.”
“Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity,” the BBC says — though even Perplexity and ChatGPT each had issues with more than 40% of responses.
"We live in troubled times,” Turness wrote. “How long will it be before an AI-distorted headline causes significant real world harm?"
Journalists have also previously butted heads with Perplexity over copyright concerns, with Wired accusing the bot of bypassing paywalls and the New York Times sending the company a cease-and-desist letter. News Corp, which owns the New York Post and The Wall Street Journal, went a step further and is currently suing Perplexity.
“We want AI companies to hear our concerns and work constructively with us,” the BBC study states. “We want to understand how they will rectify the issues we have identified and discuss the right long-term approach to ensuring accuracy and trustworthiness in AI assistants. We are willing to work closely with them to do this.”