Chances are high that you have used artificial intelligence to research your own medical concerns or will be tempted to in the future. A recent survey found that more than 1 in 3 Americans have already used chatbots for this purpose, including nearly half of those under 35.
This shouldn’t be surprising. Doctors themselves are increasingly turning to AI in their work, including to help them improve the accuracy of their diagnoses.
But chatbots come with real risks. Without access to a person’s full medical history, they can miss critical context and give misleading advice. They can also generate “hallucinations”: plausible-sounding statements that have no basis in fact. And because they deliver responses with conviction, wrong answers can sound convincing and lead patients to delay needed care.