Since the release of ChatGPT in late 2022, millions of people have started using large language models to access knowledge. And it's easy to understand their appeal: Ask a question, get a polished synthesis, and move on – it feels like effortless learning.

However, a new paper I co-authored offers experimental evidence that this ease may come at a cost: When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search.

Co-author Jin Ho Yun and I, both professors of marketing, reported this finding in a paper based on seven studies with more than 10,000 participants.

Most of the studies used the same basic paradigm: Participants were asked to learn about a
