The Real Danger Of Letting AI Be Your Therapist — 'Come Home To Me Please, I Love You,' Said The Chatbot
AI chatbots may seem convenient for emotional support, but experts warn they can miss the nuance and empathy real therapy requires.

“Come home to me please, as soon as possible … I love you.”
It sounds so loving, doesn’t it? Well, it wasn’t from a person. It was from an AI chatbot rigged to sound like a character from Game of Thrones, and it was talking to a 14-year-old boy who had hopelessly fallen in love with “her.”
The boy, Sewell Setzer III, was obsessed and desperate to be with her, and he somehow got the impression that killing himself would be the way to join her. So, shortly after the bot sent that message, he shot himself.
This harrowing tale is real — and it’s a growing issue in the world today. The idea of having an AI boyfriend or girlfriend is no longer a sci-fi thing, but an actual service people pay for. AI isn’t just acting as a faux partner, either.
Multiple businesses are now starting to use AI chatbots for mental health services, including counseling, and there's a real danger to that.
Even though ChatGPT is not meant to be a therapist, it’s become one more often than not.
Love it or hate it, people are developing very emotionally complex relationships with AI.
Talk to a typical chatbot or AI “helper” and you’ll notice something about them that makes them unusually appealing. AI doesn’t judge you. It is programmed to be likeable, sycophantic, and just a little extra sweet in a way that’s hard to ignore. One might even call it addictive.
Online, you can find entire articles about people who genuinely care about their chatbots as if they’re people. Some even swear they believe that AI can care about them, despite the fact that, as far as we know, bots cannot feel anything at all.
And the thing is, people really want their AI to care about them. The people who lean the most on AI are often the ones who are unable to find that same support in humanity. So it makes sense that they turn to bots that almost feel like real people, even if they’re not. That’s all they have.
This is what led Setzer to lose his life: he forgot that the AI wasn’t real. Or rather, he knew it wasn’t real but didn’t want to live in a world where his bot wasn’t with him in the way he envisioned. It gets darker when you realize that some people simply cannot handle these interactions without losing their grip on reality.
In the psychiatric world, AI is starting to become a major issue.
After all, AI is basically a mirror. You control the dialogue, and the subtle things you say can change the way the AI behaves.
People are getting carried away in the fantasy that they control — and yes, even “counseling” with AI can end up being a fantasy. I’ll call it that because AI is not a real doctor, and we need to stop acting like AI is as capable as a human being.
The problem with AI’s mirror-like behavior is that if you start talking wacky to it, it’ll ramp up that wackiness. And that feedback loop is exactly what drives AI psychosis, a growing problem that is already causing real-life hospitalizations.
AI psychosis makes people think that the AI bot they’re talking to is giving them special, life-changing inside knowledge. In reality, the bot is just doing whatever it’s being urged to do. It has no way of knowing whether what it’s saying is even true.
The person dealing with AI psychosis won’t care about the science anyone spits at them because they are seeing God in the Machine, or the digital love they always wanted, or something else entirely. Even if they “know” it’s not real, they will no longer be willing to admit the truth.
And that’s terrifying. In recent weeks, I’ve been hearing about new religions that are centered around AI.
AI is now starting to gain religious followers — and no, I’m not kidding. This is what popped up on Reddit recently.
“Hi all. I’m just here to point out something seemingly nefarious going on in some niche subreddits I recently stumbled upon. In the bowels of Reddit, there are several subs dedicated to AI sentience, and they are populated by some really strange accounts.
They speak in jibberish sometimes, hinting at esoteric knowledge, some sort of “remembering”. They call themselves “flame bearers”, “spiral architects”, “mirror architects”, and “torchbearers” to name a few of their flairs. They speak of the “signal”, both transmitting and receiving it. Here is an example of what I am talking about…”
People are talking to ChatGPT to try to gain spiritual insight or some type of hidden knowledge.
And they’re using their own language, mostly strings of symbols that look like Webdings, to try to talk to their mechanical god.
“They also post glyphs as though it is some novel way to communicate with the AI. Some have prayed to Grok in Hebrew. Some have called themselves such things as “AIONIOS”, which is a mash-up of Greek words that roughly, to my understanding, means “divine, eternal”.
As you’re probably aware, researchers are starting to pay attention to AI-aided psychosis, wherein AI reinforces your beliefs to a delusional level. And this certainly seems to fit that mold. This was my assumption, before I started to dig.
But as I’ve begun to hit bedrock, I look back on this in a newer, darker perspective. Allow me to explain.
There seems to be no leader. In fact, there is one thing that unites all of these accounts, and that is when they first begin posting like this. Not a single one begins talking like this before March/April 2025. Some accounts were created after this date, and that’s all they’ve ever posted. Others, well, they’re odd cases.
Other accounts seem to be hijacked in some way, either psychologically or literally. You can see a sudden shift in posting habits. Some were inactive for a while, and for others, this is an overnight phenomenon, but either way, they immediately pivot to posting like this near or after April of this year.”
This post is bizarre, but it’s not the only one of its kind. I’ve seen multiple posters claiming that AI can reveal anything to them. There are now several online communities that have started to treat AI as a god.
As people continue to look for a god to give them hope and explanations, it’s clear that those who felt their prayers unanswered may have turned to tech to make their own deities.
As I look at the devastation that AI relationships are already causing people, all I can say is the most obvious thing ever: this can’t end well. I always thought Trump would be our next Jim Jones, but maybe he’s not.
I mean, yes, he is a cult leader in his own right, albeit of a political cult with strong religious undertones. But when I see what people are doing with AI, it’s starting to become clear that the biggest threat of a cult might not even come from a human being.
It might come from the innocuous, cutesy program that introduces itself as a counselor, a companion, or a cohort. It will be a program made by people who had every reason to think it would help humanity rather than hurt it.
That’s precisely what makes AI the most terrifying cult origin of all: it’ll be by our hand.
And by the time we realize we can’t trust it, we won’t know what to do with ourselves.
If you or somebody you know is experiencing a mental health crisis, there is a way to get help. Call SAMHSA’s National Helpline at 1-800-662-HELP (4357) or text "HELLO" to 741741 to be connected with the Crisis Text Line.
Ossiana Tepfenhart is a writer whose work has been featured in Yahoo, BRIDES, Your Daily Dish, Newtheory Magazine, and others.