Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools.
The proliferation of the tech has repeatedly been hampered by rampant "hallucinations," a euphemistic term for the bots' made-up facts and convincingly told falsehoods.
One glaring error proved so persuasive that it went uncaught for over a year. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed the AI analyzing radiology brain scans for various conditions.
It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — the "basilar ganglia" — that simply doesn't exist in the human body; the term appears to conflate the basal ganglia with the basilar artery. Board-certified neurologist Bryan Moore flagged the issue to