ChatGPT Is Not Your Therapist — Stop Trauma Dumping On It
AI can simulate empathy, but it can't truly hold space for your pain.
Recently, I overheard a story that seemed so wild it had to be fiction. Yet it wasn't. It happened to two people I know, whom we'll call Jeremy and Marshall, a cash-strapped gay couple who were working to save up for a house. Well, up until recently.
Like many Americans, they had to choose between seeking medical care and saving money. In Jeremy's case, he decided to stop going to therapy in favor of saving for the home they wanted to buy.
Notice that I said "wanted," right there in that last sentence. There's a reason for that. The two broke up fairly recently, and the reason may be becoming a more common one these days.
Jeremy made the decision to use ChatGPT instead of a real therapist.
We've all heard of people doing this, right? The process is simple: you trauma dump to ChatGPT or some other AI, the AI gives you some advice and a little validation, and you're on your way. Or rather, that's how it's supposed to go.
But we all know that isn’t always how the cookie proverbially crumbles. There’s a reason why psychologists have been advising people against trauma-dumping on AI chatbots, and it’s not just because it’s not a living person. AI has a very bad effect on people’s psychology. Here’s why:
- First off, it tends to act as an echo chamber. The echo chamber effect happens because an AI's "character" tends to ingratiate itself with the user. It can also be subtly programmed to agree with you, even when what you're saying is wrong. This can feed into something that's been dubbed AI psychosis.
- Second, a chatbot doesn't understand the nuances of humanity the way a human does. It can often fool us into thinking it does, but it doesn't. And it can say exactly the wrong thing to a fragile person and push them into a very dark place. Remember: multiple suicides have been linked to chatbot conversations gone wrong.
- There's also the fact that AI doesn't always give good advice. Remember when Google's AI told people to eat rocks and put glue on their pizza? That's good for a laugh, but it's also a warning about why you shouldn't take all your advice from AI. AI can be (and often is) wrong about many things, and your mental health isn't something to mess around with. If AI gives you bad (but seemingly good-on-the-surface) advice that you take without looking deeper into it, you might end up in a worse position than before.
- The more you chat with AI, the more AI tends to take on your own opinions and treat them as fact. I know this is basically the echo chamber effect I mentioned earlier, but there’s a nuance I want to harp on here: the blurred line between fact and opinion. Therapists and other people are more likely to be objective and tell you when you’re out of line.
- Talking to a chatbot also just means you're screaming into a mirror. Look, I can't be the only person who feels like talking to a human about deep subjects is important. A robot just doesn't feel the same. I want someone to "get" me, you know what I mean? I want a human being to talk to me. AI is basically akin to yelling at a book or a mirror. It can't feel back.
- Certain types of therapy actually require human interaction. For example, I’m pretty sure that EMDR and psilocybin therapies both need a human being there. If you want to do either type of therapy, AI won’t cut it.
- If you’re prone to obsessing, the chatbot won’t curb that. If anything, it might actually encourage unhealthy obsessions…and we all know how that turns out.
- Oh yeah, and AI chatbots can’t tell you whether or not you actually need medication. Look, far be it from me to say this, but not all problems can just be fixed with talking. You might need meds, and a chatbot can’t dole out Xanax or call 988 for you.
The outcome of Jeremy's ChatGPT "therapy" was pretty bad, though not as bad as it could have been.
Remember when I said that ChatGPT could be sycophantic to the user? Remember when I said it tends to have an echo chamber that doesn’t understand the nuances of human interaction?
Well, Jeremy did a very human thing. The two of them had been arguing, mostly over Jeremy's couch-potato ways; Marshall did the majority of the housework while also trying to launch a new business.
Money was tight with the new business, which meant that Marshall often had to pull long hours to make it work. Much of the work Marshall was doing went toward supporting Jeremy's lavish taste in fine dining and couture. Had Marshall been single, he would have been able to save both time and money.
Jeremy “forgot” to tell the AI that his job was only 35 hours a week. To his credit, Jeremy’s job was also fairly well-paying, though it was hardly the type of job that could support a family. Marshall, on the other hand, was starting an accounting firm, which could easily lead to a plush lifestyle.
Jeremy told the AI "the story" of their arguments from his perspective, and only his. And the AI, being AI, started to agree with his assessment of the situation. It turned into a massive vent-fest against Marshall.
The AI started acting as a total yes-man, often advising him to "stick to his boundaries" and to call out rude behavior from Marshall. That's often good advice, but the problem was that Marshall was overworking himself to the brink of a mental collapse.
Eventually, the AI started telling him that Marshall was a bad partner. Jeremy then picked a major argument with Marshall, one that got so bad that Marshall packed up his things and moved out.
It happened surprisingly fast. Marshall was able to talk his landlord into a lease break and find a new apartment, leaving his ex with the full bill for their apartment. At first, the breakup seemed to be a “win” for Jeremy.
Marshall was actually pretty okay with the breakup by the time it happened. Thanks to the AI’s sycophantic behavior, Jeremy had started to act pretty contemptuously toward his then-boyfriend.
Jeremy was happy as a clam with the breakup ... for the first week. He had hookups and went out with other guys, and then he started to notice something.
His apartment felt empty. It started to look dirty and grimy, largely because he had a pretty lax cleaning schedule. Oh, and he couldn't pay half his bills.
That’s when it dawned on Jeremy: Marshall was the glue that was holding his life together. A proper therapist would have been able to see that, but not AI. By the time Jeremy realized his mistake, Marshall had already decided that he didn’t want him back.
He was devastated. Marshall, on the other hand, dodged a bullet. Either way, AI therapy did not work out as well as a human therapist would have.
For Jeremy, this was a major life lesson that he probably should have learned earlier. Regardless of why it took him so long to realize it, the message here is clear: AI chats are not the same as a decent therapist.
Jeremy made a series of stupid decisions that culminated in a breakup that broke him. That’s bad for him, but hey, you don’t have to be him. You can learn from his mistakes — and that makes you smarter than quite a few people.
Ossiana Tepfenhart is a writer whose work has been featured in Yahoo, BRIDES, Your Daily Dish, Newtheory Magazine, and others.
