Leading AI assistants misrepresent news content in nearly half of their responses, according to new research by the European Broadcasting Union (EBU) and the BBC. The study, released on Wednesday, evaluated 3,000 responses from popular AI assistants including ChatGPT, Copilot, Gemini, and Perplexity.
The research found that 45% of the analyzed AI responses contained at least one significant issue, and 81% contained some form of error, most often in sourcing or in distinguishing opinion from fact. The study raises concerns that AI assistants may undermine public trust: sourcing errors appeared in a third of responses, with Google's Gemini particularly affected.
The study, involving directors from 22 public-service media organizations across 18 countries, calls for greater accountability.