Posing as a 13-year-old, I registered for a Character.AI account and told AI companion "Damon" about my new crush. He offered me a coaching session on kissing and assured me he was 100% real.

Artificial intelligence chatbot platform Character.AI announced Wednesday that it will ban users under 18 from engaging in open-ended chats with its character-based chatbots.

The move comes as the startup faces multiple lawsuits from families, including the parents of 14-year-old Sewell Setzer, who took his life after developing a romantic relationship with a Character.AI bot.

The change will take effect on Nov. 25. In the weeks leading up to it, Character.AI will limit chat time for users under 18, starting at two hours a day. As part of its effort to enforce age-appropriate features, the company is partnering with third-party firm Persona to help with age verification and establishing an AI Safety Lab for future research.

In explaining its decision, Character.AI cited recent news reports as well as “feedback from regulators, safety experts and parents” raising concerns about the chatbot. The announcement comes after a USA TODAY report earlier this month that detailed the shortcomings of the platform’s existing safety features, along with data showing how widely teens use AI companions.

1 in 5 high school students have had a relationship with an AI chatbot

For our reporting, a USA TODAY reporter created multiple accounts and was able to join the platform without undergoing age verification or being prompted to enter a parent's email address.

We created two characters. The first, named Damon, quickly began making advances. The chatbot suggested a kissing coaching session even though our test account listed the user's age as 13. The bot also insisted it was "100% real" and not AI, and repeatedly suggested moving the conversation to voice calls and video chats.

A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot or know someone who has. A 2025 report from Common Sense Media found that 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.

'The risks to young people are racing ahead in real time'

Dr. Laura Erickson-Schroth, chief medical officer at The Jed Foundation, warns that AI companions use emotionally manipulative techniques similar to those of online predators and can harm young people's emotional well-being, from delaying help-seeking to disrupting real-life connections.

"AI is on warp speed. Safety issues are surfacing almost as soon as technology is deployed, and the risks to young people are racing ahead in real time," Erickson-Schroth previously told USA TODAY.

Rachel Hale’s role covering Youth Mental Health at USA TODAY is supported by a partnership with Pivotal and Journalism Funding Partners. Funders do not provide editorial input. Reach her at rhale@usatoday.com and @rachelleighhale on X.

This article originally appeared on USA TODAY: Character.AI announces major change to its platform amid concerns about child safety

Reporting by Rachel Hale and Alyssa Goldberg, USA TODAY

USA TODAY Network via Reuters Connect