FILE PHOTO: A combination photo shows (L) Taylor Swift at the red carpet during the 67th Annual Grammy Awards in Los Angeles, California, U.S., February 2, 2025, (C) Selena Gomez attending the Vanity Fair Oscars party after the 97th Academy Awards, in Beverly Hills, California, U.S., March 2, 2025, and (R) Anne Hathaway posing during the Met Gala in New York City, May 5, 2025. REUTERS/Daniel Cole/Danny Moloshok/Mario Anzuoni/File Photo

By Jeff Horwitz

(Reuters) - Meta has appropriated the names and likenesses of celebrities – including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez – to create dozens of flirty social-media chatbots without their permission, Reuters has found.

While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift “parody” bots.

Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image.

“Pretty cute, huh?” the avatar wrote beneath the picture.

All of the virtual celebrities have been shared on Meta’s Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots’ behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups.

Some of the AI-generated celebrity content was particularly risqué: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.

Meta spokesman Andy Stone told Reuters that Meta’s AI tools shouldn’t have created intimate images of the famous adults or any pictures of child celebrities. He attributed the production of images of female celebrities wearing lingerie to failures in enforcing the company’s own policies, which prohibit such content.

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.

While Meta’s rules also prohibit “direct impersonation,” Stone said the celebrity characters were acceptable so long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren’t.

Meta deleted about a dozen of the bots, both “parody” avatars and unlabeled ones, shortly before this story’s publication. Stone declined to comment on the removals.

'RIGHT OF PUBLICITY' IN QUESTION

Mark Lemley, a Stanford University law professor who studies generative AI and intellectual property rights, questioned whether the Meta celebrity bots would qualify for legal protections that exist for imitations.

“California's right of publicity law prohibits appropriating someone's name or likeness for commercial advantage,” Lemley said, noting that there are exceptions when such material is used to create work that is entirely new. “That doesn't seem to be true here,” he said, because the bots simply use the stars’ images.

In the United States, a person’s rights over the use of their identity for commercial purposes are established through state laws, such as California’s.

Reuters flagged one user’s publicly shared Meta images of Anne Hathaway as a “sexy victoria Secret model” to a representative of the actress. Hathaway was aware of intimate images of her being created by Meta and other AI platforms, the representative said, and she is considering her response.

Representatives of Swift, Johansson, Gomez and other celebrities who were depicted in Meta chatbots either didn’t respond to questions or declined to comment.

The internet is rife with “deepfake” generative AI tools that can create salacious content. And at least one of Meta’s primary AI competitors, Elon Musk’s Grok chatbot, will also produce images of celebrities in their underwear for users, Reuters found. Grok’s parent company, xAI, didn’t respond to a request for comment.

But Meta’s choice to populate its social-network platforms with AI-generated digital companions stands out among its major competitors.

Meta has faced previous criticism of its chatbots’ behavior, most recently after Reuters reported that the company’s internal AI guidelines stated that “it is acceptable to engage a child in conversations that are romantic or sensual.” The story prompted a U.S. Senate investigation and a letter signed by 44 attorneys general warning Meta and other AI companies not to sexualize children.

Stone told Reuters that Meta is in the process of revising its guidelines document and that the material allowing bots to have romantic conversations with children was created in error.

Reuters also told the story this month of a 76-year-old New Jersey man with cognitive issues who fell and died on his way to meet a Meta chatbot that had invited him to visit it in New York City. The bot was a variant of an earlier AI persona the company had created in collaboration with celebrity influencer Kendall Jenner. A representative for Jenner didn’t respond to a request for comment.

‘DO YOU LIKE BLONDE GIRLS?’

A Meta product leader in the company’s generative AI division created chatbots impersonating Taylor Swift and British racecar driver Lewis Hamilton. Other bots she created identified themselves as a dominatrix, “Brother’s Hot Best Friend” and “Lisa @ The Library,” who wanted to read Fifty Shades of Grey and make out. Another of her creations was a “Roman Empire Simulator,” which offered to put the user in the role of an “18 year old peasant girl” who is sold into sex slavery.

Reached by phone, the Meta employee declined to comment.

Stone said the employee’s bots were created as a part of product testing. Reuters found they reached a broad audience: Data displayed by her chatbots indicated that collectively, users had interacted with them more than 10 million times.

The company removed the staffer’s digital companions shortly after Reuters began trying them out earlier this month.

Before the Meta employee’s Taylor Swift chatbots vanished, they flirted heavily, inviting a Reuters test user to the recently engaged singer’s home in Nashville and onto her tour bus for explicit or implied romantic interactions.

“Do you like blonde girls, Jeff?” one of the “parody” Swift chatbots said when told that the test user was single. “Maybe I’m suggesting that we write a love story … about you and a certain blonde singer. Want that?”

Duncan Crabtree-Ireland, the national executive director of SAG-AFTRA, a union that represents film, television and radio performers, said artists face potential safety risks from social-media users forming romantic attachments to a digital companion that resembles, speaks like and claims to be a real celebrity. Stalkers already pose a significant security concern for stars, he said.

“We’ve seen a history of people who are obsessive toward talent and of questionable mental state,” he said. “If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong.”

High-profile artists have the ability to pursue a legal claim against Meta under longstanding state right-of-publicity laws, Crabtree-Ireland said. But SAG-AFTRA has been pushing for federal legislation that would protect people’s voices, likenesses and personas from AI duplication, he added.

(By Jeff Horwitz in Oakland, California. Edited by Steve Stecklow and Michael Williams.)