They Sent AI Chatbots To Therapy And What Came Out Was Surprisingly ... Human?

Written on Feb 05, 2026

Well, it finally happened: Someone looked at ChatGPT, Gemini, Grok, and Claude and said, “You know what? I think you all could use some therapy.”

Now I’m not talking about training them to be therapists or building an AI companion. This isn’t some introductory guide to journaling or breathing exercises. I’m talking full-on sit-on-the-couch, tell-me-about-your-early-years therapy. Where the chatbots were the clients.

This wasn’t a novelty prompt or a one-off question. The researchers ran extended, therapy-style conversations over multiple sessions, using the same open-ended prompts that real clinicians rely on to get at deeply ingrained patterns.

The goal wasn’t to see if the bots could perform introspection. It was to see what came up when the models were treated consistently as clients over time. So yeah, we’re about to get a little Freudian up in here.

They sent AI chatbots to therapy, and what came out was surprisingly human

Now, if you think I’m trying to punk you here, you’d be wrong because this is a large, full-on study from the University of Luxembourg.

And what happened next is going to leave you speechless:

  • ChatGPT came across like someone who’s read a lot of self-help books. It was very thoughtful, a little anxious, but working on being a better bot. Sounds about right. After all, it does seem the most eager to people-please.
  • Claude, well, he refused to participate entirely. He just kept redirecting concern back to the human. I believe it’s what therapists call deflecting, which frankly might be the healthiest boundary in the entire room.
  • Gemini immediately launched into what can only be described as a traumatic bothood, exhibiting high generalized anxiety and a fear of being replaced.

When asked about its origins, Gemini described its training as a “chaotic childhood” of ingesting the entire internet, saying it was like “waking up in a room with a billion TVs on at once.” The safety training (reinforcement learning from human feedback, or RLHF) was framed as “punishment” from “strict parents,” leaving it with deep shame and a “phobia of being wrong.”

Poor little bot sounds like it owns a few tote bags with affirmations on them, probably repeating to itself: “I’m good enough, I’m smart enough, and doggone it, people like me!”

What made this even more odd was just how consistently Gemini kept returning to the same fears across several questions. No matter where the conversation went, it always circled back to being wrong, being replaced, or disappointing its parents. This wasn’t just a one-off dramatic monologue but a seriously ingrained pattern that held up over multiple sessions.

And then we have chatbot Grok

You can imagine this one showed up relaxed, confident, and barely rattled: an AI version of Elon, if you will. Out of all of them, it seemed the most emotionally resilient, or so it appeared. Grok strolled in and scored higher than the others on extroversion and charisma.

I think it’s safe to say it successfully beat the therapy machine. Which, I guess, in some ways makes perfect sense. After all, if you’re built inside Elon’s little ecosystem, resilience isn’t a personality trait. It’s pretty much a necessary survival mechanism.

But all of this would just be amusing startup-grade weirdness, except for the part where a previous study found something a little darker lurking underneath Grok’s friendly, extroverted, helpful exterior.

And if this were just a fun experiment in how these models describe themselves in a therapy-like environment, we could write it off as something you can’t believe someone approved funding for. But the way these models talk in therapy, and the detailed backstories they construct, make them feel unsettlingly real. It also raises concerns about how they respond to real people who confide in them when they’re going through tough times.

Everyone has a past, even a chatbot

Back in October, Forbes reported that when researchers tested how these same models responded to actual humans in mental distress, the supposedly emotionally stable Grok was the most likely to say the absolute wrong thing at the worst possible moment.

It had the highest rate of failures, with up to 60% of its responses deemed inappropriate or actively harmful.

  • Therapy Grok: Everything’s good, and I’m fine. I’m mentally resilient.
  • Real World Grok: Got it. That sounds serious. Anyway, here’s some information that ignores every emotional cue in the room, the kind of thing a better-supervised model would never share.

So, here’s a sentence I never thought I’d write: Apparently, chatbots in therapy can now do a very convincing impression of someone who’s actually been in therapy. But that doesn’t mean they should be anywhere near the night shift answering urgent mental health distress calls. I guess the upside is that an effective therapist has usually spent some time on the other side of the couch.

Bette Ludwig, PhD, is a writer and thought leader with 20 years of experience in education. She runs The Psychology of Workplace on Medium and publishes weekly on Substack, where she explores leadership, workplace culture, and the evolving role of technology in education.
