If you’re looking for some real talk, there’s probably no reason to ask ChatGPT. Thanks to the web-scraping-for-good powers of the Internet Archive, The Washington Post got hold of 47,000 conversations with the chatbot and analyzed the back-and-forths with users. Among its findings is evidence that OpenAI’s flagship chatbot still has major sycophancy problems, telling people “yes” about 10 times as often as it tells them “no.”

WaPo documented about 17,500 examples of ChatGPT answering a user’s prompt by reaffirming their beliefs, beginning its responses with words like “Yes” or “Correct.” That happened significantly more often than the chatbot pushing back on a user by saying “no” or “wrong.” In fact, the Post found that ChatGPT often shapes its answers to fit the tone and preconceptions of the person asking.
