The phenomenon of doctors presenting unfounded statements with unwavering arrogance—colloquially known as “bullshit”—has long been recognised in medical practice.1 2 In parallel, the tendency of large language models (LLMs) to generate plausible but factually incorrect information, termed “hallucinations,” presents a remarkably similar challenge in healthcare-related artificial intelligence applications.3 These parallel behaviours stem from shared underlying mechanisms. In both cases, pressure to produce output regardless of knowledge limitations can lead to a preference for any response over none, driven by reward-seeking incentives.4 5 Misinformation then emerges not as deliberate deception, but as a product of structural demands. As Harry Frankfurt noted in his seminal work on bullshit, the defining feature of the bullshitter is indifference to the truth rather than an intention to deceive.
