A single typo, formatting error, or slang word makes an AI more likely to tell a patient they're not sick or don't need to seek medical care.

That's what MIT researchers found in a June study currently awaiting peer review, which we covered previously. Even the presence of colorful or emotional language, they discovered, was enough to throw off the AI's medical advice.

Now, in a new interview with the Boston Globe, study coauthor Marzyeh Ghassemi is warning about the serious harm this could cause if doctors come to widely rely on the AI tech.

"I love developing AI systems," Ghassemi, a professor of electrical engineering and computer science at MIT, told the newspaper. "But it's clear to me that naïve deployments of these systems, that do not recognize the baggage that human data comes
