The stories about bad advice from AI keep coming. One eating disorder treatment chatbot gave weight loss tips. Another AI offered teenagers advice on self-harm. A third, during testing, told a person with a substance use disorder to use methamphetamine.

As capable as AI systems have become, they remain deeply imperfect givers of advice and sometimes put their users in danger. In that light, it's not surprising that psychological professionals and policymakers would try to act. So far, however, some of the resulting laws appear likely to work against the laudable safety goals their authors profess. In fact, some of them seem to ban the participation of the very professionals needed to improve AI safety.

Exhibit A is a new Illinois state law, the "Wellness and Oversight for Psychological Resources Act."
