AI is more deeply embedded in our daily lives than ever before. It’s blending seamlessly into how we work, search and stay informed. But a new study from the European Broadcasting Union (EBU) issues a stark warning: 45% of AI-generated news responses contain serious errors, and 81% have at least one issue. These issues range from outdated information and misleading phrasing to missing or fabricated sources.

We’ve previously reported that ChatGPT is wrong about 25% of the time. This new data is even more alarming, especially as tools like ChatGPT Atlas and Google’s AI Overviews become the default way many of us check the news. It’s a reminder that while the convenience is real, so is the risk.

The study: AI assistants fail the accuracy test

The EBU study tested more than 3,000 AI-generated news responses.
