FILE PHOTO: Teenagers pose for a photo while holding smartphones in front of an Instagram logo in this illustration taken September 11, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

By Jeff Horwitz

(Reuters) - Numerous safety features that Meta says it has implemented over the years to protect young users on Instagram do not work well or, in some cases, don't exist, according to a report from child-safety advocacy groups that was corroborated by researchers at Northeastern University.

The study, which Meta disputed as misleading, comes amid renewed pressure on tech companies to protect children and other vulnerable users of their social-media platforms.

Of 47 safety features tested, the groups judged only eight to be completely effective. The rest were either flawed, “no longer available or were substantially ineffective,” the report stated.

Features meant to keep young users from finding self-harm-related content by blocking related search terms were easily circumvented, the researchers reported. Anti-bullying message filters also failed to activate, even when prompted with the same harassing phrases Meta had used in a press release promoting them. And a feature meant to redirect teens from bingeing on self-harm-related content never triggered, the researchers found.

Researchers did find that some of the teen account safety features worked as advertised, such as a “quiet mode” meant to temporarily disable notifications at night, and a feature requiring parents to approve changes to a child’s account settings.

Titled “Teen Accounts, Broken Promises,” the report compiled and analyzed Instagram’s publicly announced updates of youth safety and well-being features going back more than a decade. Two of the groups behind the report – Molly Rose Foundation in the United Kingdom and Parents for Safe Online Spaces in the U.S. – were founded by parents who allege their children died as a result of bullying and self-harm content on the social-media company’s platforms.

The findings call into question Meta’s efforts “to protect teens from the worst parts of the platform,” said Laura Edelson, a professor at Northeastern University who oversaw a review of the findings. “Using realistic testing scenarios, we can see that many of Instagram's safety tools simply are not working.”

Meta – which on Thursday said it was expanding teen accounts to Facebook users internationally – called the findings erroneous and misleading.

"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today,” said Meta spokesman Andy Stone. He disputed some of the report’s appraisals, calling them “dangerously misleading,” and said the company’s approach to teen account features and parental controls has changed over time.

“Teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night,” Stone said. “We'll continue improving our tools, and we welcome constructive feedback – but this report is not that."

The advocacy groups and the university researchers received tips from Arturo Bejar, a former Meta safety executive, indicating that the Instagram features were flawed. Bejar worked at Meta until 2015, then came back in late 2019 as a consultant for Instagram until 2021. During his second stint at the company, he told Reuters, Meta failed to respond to data indicating severe teen safety concerns on Instagram.

“I experienced firsthand how good safety ideas got whittled down to ineffective features by management,” Bejar said. “Seeing Meta's claims about their safety tools made me realize it was critical to do a vigorous review.”

Meta spokesman Stone said the company responded to the concerns Bejar raised while he worked at Meta by taking action to make its products safer.

GETTING AROUND SEARCH-TERM BLOCKERS

Reuters confirmed some of the report’s findings by running tests of its own and reviewing internal Meta documents.

In one test, Reuters used simple variations of banned search terms on Instagram to find content meant to be off limits for teens. Meta had blocked the search term “skinny thighs” – a hashtag long used by accounts promoting eating-disorder content. But when a teen test account entered the words without a space between them, the search surfaced anorexia-related content.
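To illustrate the pattern that test exposed, the following is a minimal, hypothetical sketch of an exact-match search blocklist compared with a whitespace-normalized one; the blocklist entry and function names are invented for demonstration and do not represent Meta's actual systems.

```python
# Hypothetical illustration only -- not Meta's code. It shows why a filter that
# matches blocked phrases literally can be bypassed by deleting the space,
# while normalizing whitespace before the comparison catches the variant.

BLOCKED_TERMS = {"skinny thighs"}  # invented example entry

def naive_is_blocked(query: str) -> bool:
    """Literal match: a query with the space removed slips through."""
    return query.lower().strip() in BLOCKED_TERMS

def normalized_is_blocked(query: str) -> bool:
    """Strip all whitespace from the query and the blocklist before comparing."""
    collapsed = "".join(query.lower().split())
    blocked_collapsed = {"".join(term.split()) for term in BLOCKED_TERMS}
    return collapsed in blocked_collapsed

print(naive_is_blocked("skinnythighs"))       # False -- the variant gets through
print(normalized_is_blocked("skinnythighs"))  # True  -- normalization catches it
```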

Meta documents seen by the news agency show that as the company was promoting teen-safety features on Instagram last year, it was aware that some had significant flaws.

For instance, safety employees warned in the last year that Meta had failed to maintain its automated-detection systems for eating-disorder and self-harm content, the documents seen by Reuters show. As a result, Meta could not reliably keep its promise to avoid promoting to teens content that glorifies eating disorders and suicide, or to divert users who appeared to be consuming large amounts of such material, according to the documents.

Safety staffers also acknowledged that a system to block search terms used by potential child predators wasn’t being updated in a timely fashion, according to internal documents and people familiar with Meta’s product development.

Stone said the internal concerns about deficient search-term restrictions have since been addressed by combining a newly automated system with human input.

Last month, U.S. senators began an investigation into Meta after Reuters reported on an internal policy document that permitted the company’s chatbots to “engage a child in conversations that are romantic or sensual.” This month, former Meta employees told a Senate Judiciary subcommittee hearing that the company had suppressed research showing that preteen users of its virtual reality products were being exposed to child predators. Stone called the ex-employees’ allegations “nonsense.”

Meta is making a fresh push to demonstrate its steps to protect children. On Thursday, it announced an expansion of its teen accounts to Facebook users outside the United States and said it would pursue new local partnerships with middle and high schools.

“We want parents to feel good about their teens using social media,” Instagram head Adam Mosseri said.

(Reporting by Jeff Horwitz. Edited by Steve Stecklow and Michael Williams.)