More people are using ChatGPT and other large language models (LLMs) to ask for mental health advice. News stories have covered the possible extreme consequences of such advice, but now researchers have examined the broader mental health advice and support these programs provide. The situation appears dire, with a new study revealing systematic violations of mental health ethical standards.

Artificial intelligence, or AI, is the marketing name for these large language models: programs trained on a vast amount of text to answer questions as a human would. The training data, ranging from fair-use text to stolen copyrighted material, makes the prog
