Therapist Discovers AI Deepfakes Using Her Likeness To Sell Products — ‘What Can I Do?’

Her face is being used in a way that could ruin her career, and there's nothing she can do about it.

Written on May 09, 2025

Photo: Shakirov Albert | Shutterstock

As AI capabilities expand, it becomes ever clearer that the technology is not ready for primetime, in ways that can be outright dangerous. The near-total absence of laws governing its use has left basically all of us sitting ducks for what happened to one TikTok creator, who discovered her likeness had been stolen and that there was nothing she could do about it.

A therapist discovered an AI deepfake of her likeness being used for advertisements on TikTok.

AI deepfakes are nothing new, of course, and celebrities have been subject to this kind of fraud for years now. But these days, the technology has become so simple and accessible to everyday users that it seems we're now in a place where any of us who've ever put our faces online are in real danger. Case in point: TikTok creator and licensed therapist Jess.

@strongtherapy: "Sorry if I'm not quite using the correct language or description but I'm sure someone knows what this is and what I can do? Can people just steal my likeness and make ads??"

Jess creates mental health and political content, has a large audience of more than 500,000 followers, and regularly goes viral on the app. She's exactly the kind of creator that many brands would love to leverage, and she recently discovered one had done so without her consent.


Imagine the shock of idly scrolling an app and suddenly seeing yourself in an ad you never filmed or agreed to. "I was just scrolling on this app and I saw a video… and it was me, but it wasn't me," Jess said in a video about the incident. "I am very sure that it was AI." She said the ad was dubbed in another language, and her likeness had been altered to appear as though she were speaking the words. "I think that somebody has taken my likeness and put it into AI and created an ad."

RELATED: Woman Says Brand Used An AI Deepfake Of Her To Promote Their Product Without Her Consent

The therapist then discovered several other accounts had stolen her likeness, and TikTok would do nothing about it.

What made the situation all the creepier was that when Jess briefly left TikTok to screen-record the ad, it had disappeared by the time she went back in, and it wouldn't come up in search results or her watch history.

That's exactly why she posted about it — in the hope that one of her followers would have seen it and could forward it along. That didn't happen, but her followers did alert her to scores of accounts that had stolen her content and were pretending to be her.


This happens to basically any large account that isn't a verified celebrity. I have a couple of friends with very large TikTok followings, and I get followed by their fakers practically daily. But reporting them goes nowhere. TikTok doesn't even have a mechanism for reporting impersonation unless the account being impersonated is verified with a checkmark.

At least for now, creators like Jess are left completely in the lurch, and at risk of not just privacy violations but extensive damage to their reputations. What's to stop one of these advertisers from using her likeness to promote something harmful or even illegal? The answer is nothing.

RELATED: Recruiter Interviews ‘Creepy’ Deepfake Job Candidate Faking Their Identity On A Video Call


There are currently almost no laws protecting people from having their likeness stolen for deepfake advertising.

Fakers need just a few seconds of video to create a strikingly realistic deepfake, and according to a Washington Post report, it is happening with alarming and growing frequency, even to social media creators with only modest followings.

Deepfakes are being used for everything from cybercrime to sophisticated political disinformation campaigns, like the case of a Ukrainian creator who discovered a deepfake of her likeness standing in front of the Kremlin praising Russian President Vladimir Putin.

She, like Jess, has little to no recourse. Few laws on the books in the U.S. protect against this kind of misuse, and most state-level proposals focus solely on political disinformation and nonconsensual adult content. Laws in other countries have been called either insufficient or overreaching, depending on who you ask.


As for the platforms themselves, YouTube seems to have been the most responsive, albeit with a slowness creators have found frustrating. Facebook had quite notoriously become little more than a repository for AI-generated "slop" long before Meta announced it would scale back much of its content moderation.

And TikTok has become equally notorious for doing nothing about content theft and fraud, despite content moderation policies so draconian that creators speak in code, spawning an entire slang lexicon. ("Unalive," anyone?)

Which brings us back to Jess. Unable to do anything about her stolen likeness, she urged her followers to be vigilant, though she couldn't say exactly what about. "Many of you have let me know that you have fallen prey to some of the — I can't say the word," she said, cutting herself off before the word "scam."


Because while AI deepfakes, cloned accounts, content theft, and the scams themselves seem to be of no concern to TikTok, the word "scam" is one of many that will get a video throttled or deleted by the platform's moderation system. Just a sign of the times, it seems.

RELATED: AI Chatbot Sends Disturbing Message To Student Requesting Homework Help — ‘We Are Thoroughly Freaked Out’

John Sundholm is a writer, editor, and video personality with 20 years of experience in media and entertainment. He covers culture, mental health, and human interest topics. 
