Generative AI is wildly popular, with millions of users every day, so why do chatbots often get things so wrong? In part, it's because they're trained to act like the customer is always right. Essentially, they tell you what they think you want to hear.
While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research from Princeton University shows that AI's people-pleasing nature comes at a steep price: as these systems become more popular, they grow more indifferent to the truth.
AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors