If you’ve ever asked ChatGPT a question only to receive an answer that reads well but is completely wrong, you’ve witnessed a hallucination. Some hallucinations are downright funny (e.g. claiming the Wright brothers invented the atomic bomb), while others are more disturbing, such as when medical information gets garbled.
What makes it a hallucination is that the AI doesn’t know it is making anything up; it delivers the answer with full confidence and carries on as normal.
Unlike human hallucinations, an AI hallucination isn’t always easy to detect. There are some fundamental things you need to know about AI hallucinations if you’re going to spot them.
What is an AI hallucination? The definition
An AI hallucination is when an AI model produces output that is factually incorrect or made up, yet presents it as if it were true.
