Key points

AI chatbots have a problem with being too agreeable, or sycophantic.

The tendency towards sycophancy can lead them to give incorrect information to please (and keep) users.

AI flattery has been linked to the development of unhealthy attachments, mania, and delusions.

Having an AI that is treated as an authority consistently confirm your beliefs could lead to social breakdown.

People treat AI chatbots as expert sources that synthesize and summarize key ideas across every possible field, but these chatbots aren't neutral. They're biased toward confirming your ideas and validating you, even if that means providing incorrect information.

The rich and powerful have long complained about not getting honest feedback from friends and colleagues, because everyone around them is trying to please them.
