AI models such as ChatGPT "hallucinate", or make up facts, mainly because they are trained to guess rather than admit a lack of knowledge, a new study reveals.
Hallucination is a major concern with generative AI models because their fluent, conversational style lets them present false information with apparent confidence.
In spite of rapid advances in AI technology, hallucination continues to plague even the latest models.
Industry experts say deeper research and action are needed to combat AI hallucination, particularly as the technology finds increasing use in medical and legal fields.
Although several factors contribute to AI hallucination, such as flawed training data and model complexity, the main reason is that algorithms operate with "wrong incentives": training and evaluation methods that reward confident guessing over admitting uncertainty.
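The incentive argument can be made concrete with a toy scoring model. The sketch below is an illustration of the general idea, not code from the study: under accuracy-only grading, where a wrong answer costs nothing more than an abstention, a model that always guesses outscores one that says "I don't know", even when its guesses are usually wrong. Only when wrong answers are explicitly penalized does abstaining become the rational choice.

```python
# Toy illustration (our sketch, not the study's code) of why
# accuracy-only grading rewards guessing over admitting uncertainty.

def expected_score(p_correct: float, guesses: bool,
                   wrong_penalty: float = 0.0) -> float:
    """Expected score per question for a model that knows the answer
    with probability `p_correct`. Abstaining always scores 0."""
    if not guesses:
        return 0.0  # "I don't know" earns nothing
    # Correct guess earns 1 point; wrong guess costs `wrong_penalty`.
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.2  # suppose the model only knows the answer 20% of the time

# Accuracy-only grading (wrong answers cost nothing): guessing wins.
print(expected_score(p, guesses=True))                      # 0.2 > 0.0
# Grading that penalizes confident errors: abstaining wins.
print(expected_score(p, guesses=True, wrong_penalty=1.0))   # -0.6 < 0.0
```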
