If you've used ChatGPT, Google Gemini, Grok, Claude, Perplexity or any other generative AI tool, you've probably seen them make things up with complete confidence. This is called an AI hallucination (although one research paper suggests we call it BS instead), and it's an inherent flaw that should give us all pause when using AI.
Hallucinations happen when AI models generate information that looks plausible but is false, misleading or entirely fabricated. A hallucination can be as small as a wrong date in an answer, or as big as accusing real people of crimes they've never committed.
And because the answers often sound authoritative, it's not always easy to spot when a bot has gotten something wrong.