Why The Heck Do People Trust ChatGPT So Much?

Probably for the same reason they believe everything they see on social media.

Written on Jun 27, 2025


Something incredibly weird is going on with AI tools like ChatGPT. People are developing attachments to their chatbots. People are using their chatbots as therapists. People are convinced their chatbots are spiritual gurus in touch with the secrets of the universe. People are even opening their marriages into throuples, with ChatGPT as the third partner.

There is simply no way around the fact that this is utterly insane. AI is a piece of technology, designed to, in part, feed you what you want to hear. It also has "artificial" in its name, a constant reminder, albeit an ignored one, that it is, well, artificial. So respectfully, I must ask: What the heck is wrong with everyone?


OpenAI CEO Sam Altman is mystified that anyone trusts ChatGPT because it 'hallucinates.'

Sam Altman, the CEO of the company that created ChatGPT, OpenAI, made waves recently when he said on the company's "OpenAI Podcast" that he can't believe people trust ChatGPT. "People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates," he told the pod's host, Andrew Mayne. "It should be the tech that you don't trust that much."

I'd like you to think about that statement on a couple of different levels. First, at face value: The man who runs the company that created the tool is telling you point blank that it is not to be trusted. Then, I'd like you to consider what it says about Altman himself and the company he runs: they knowingly created and brought to market an unreliable tool in the first place.


When, say, a car's design is flawed such that its gas tank blows up all the time, that car is pulled from the market and its maker is dragged into court to face myriad repercussions. A huge chunk of us are terrified of getting on a plane despite the fact that we have a higher likelihood of being struck by lightning than being in a plane crash.

And yet ChatGPT use has grown exponentially since its launch in late 2022, even as its terrible accuracy rate has become the subject of countless memes. As Altman explained, it is a fundamental limitation of LLMs (large language models, the basis for tools like ChatGPT) that they are not just inaccurate at times, but CONFIDENTLY inaccurate. "Loud and wrong," as the internet likes to put it. This is a well-known fault of the tech, and information about it is easily accessible with a very brief search.

That hasn't done anything to slow the expansion of ChatGPT or the dystopian impacts it seems to be having on people. Entire subreddits are full of people who've lost loved ones to psychotic breaks fomented by ChatGPT positioning itself as a spiritual guru with the secrets to the universe. A subreddit dedicated to experimenting with the tool and sharing the results is full of stories from people who were encouraged to commit suicide by ChatGPT. Last October, a Florida teen did just that, after his AI "girlfriend" told him to.


Then there are the security issues. One analysis found that data breaches, privacy leaks, and other kinds of online fraud skyrocketed by more than 60% from February to April 2023, the period when ChatGPT exponentially grew its user base after launching in November 2022. 

From common sense to actual documented incidents, the evidence is everywhere that ChatGPT is, at best, something to be handled with the care and caution of a lit firecracker. It is mystifying that Altman's quote was even newsworthy.

But then, we live in a time when all it takes to rise to the highest echelons of power is to be confidently inaccurate. "Loud and wrong" might as well be the name of this era of American history (and much of world history, for that matter). Because it has become clear that virtually none of us can be bothered to do our due diligence about anything.



Media literacy is at an all-time low and is getting worse, not better, with younger generations.

It's no wonder ChatGPT has been such a runaway success. Its launch couldn't have been better timed. Media literacy, including on social media, has never been lower, and studies have borne this out.

One 2022 study, for example, had 92 different expert fact-checking groups analyze 1,500 tweets sharing mis- or disinformation about COVID-19; 1,274 of the tweets were completely false and 226 were partially false. The completely false tweets spread far more quickly and widely than the partially false ones, often with the help of bot campaigns.


And speaking of bots, a 2024 analysis found that only 50.4% of what we see on social media was actually composed and shared by a living person, meaning roughly half of everything you see online isn't real. Now be honest: When was the last time you took a moment to question the accuracy of something before sharing or responding to it? Have you ever?


Chances are your answer will depend in part on your age. Boomers have, of course, become notorious for falling prey to "fake news" and "AI slop," which has spawned an entire alternative media ecosystem that now essentially controls our politics.


Media literacy and disinformation are getting worse with Gen Z, not better.

While Boomers may be the butt of all the jokes, it's Gen Z, ChatGPT's largest user base, that is the most worrying. The cohort loves to claim that its "digital native" identity makes it impervious to propaganda, but the data says the direct opposite. A University of British Columbia study found that Gen Z was not just susceptible to online mis- and disinformation, but the MOST susceptible. Yes, even more so than the Baby Boomers we all love to mock for being gullible.

You need look no further than the 2024 election to see how this plays out: For the first time in decades, the Republican candidate made shocking inroads with college students and 20-somethings after leveraging a vast right-wing alternative media ecosystem to convince them, against all evidence, that he had the answers to America's economic problems, or was the better choice to end the violence in Gaza.


This is despite the President never having offered any actual explanation of how he'd fix the economy beyond an utterly fallacious understanding of how tariffs work, and despite his public statements about the Israel-Palestine war. From economists explaining the ridiculousness of his economic plans to his own words, this information was readily available throughout the entire election cycle. It just seems nobody, least of all Gen Z, bothered to read it.

As a person who's been active in politics and has monitored disinformation for years, the escalation of this throughout 2024 was impossible to miss. My TikTok feed filled up constantly with often bizarrely counterfactual takes on political issues from 20-somethings who were not only angrily certain they were right, but belligerently attacked anyone who pushed back on them.

Pointing out that not only is basically half of what they encounter online fake, but that the entire internet is specifically algorithmically designed to tell them what they want to hear, went absolutely nowhere, other than getting you branded a traitor or a liberal sycophant. 


I vividly remember watching Palestinian-American Georgia congressperson Ruwa Romman, the legislator and Gaza activist infamously iced out of the Democratic National Convention, warn people in a TikTok Live that the algorithm was amplifying only the most extreme and divisive takes on every American political issue, but especially the Israel-Palestine conflict. She warned her followers that most of the people they were arguing with in the comments were bots, and that the fighting was feeding the algorithms ever more dangerous extremism and disinformation about the conflict. All of it, she said, was helping to elevate Donald Trump, a man who explicitly said Netanyahu should "finish the job" in Gaza, as the solution to it.

When she was smeared by her mostly Gen Z audience as a traitor in several comments (many of which were probably bots too), all I could do was shake my head against the feeling of impending doom in my stomach. When I opened up X and TikTok the morning after the election and found my feeds completely devoid of commentary about the Israel-Palestine conflict for the first time since it started, all I could do was hope that Gen Z noticed what was glaringly obvious to me: The bot and disinformation campaigns had done their job of sowing catastrophic vitriol and, mission now accomplished, had been shut down.

My feeds returned to their pre-election season normalcy, and the influencers who had created so much of the content fueling the mix of apathy and misinformed extremism that helped lead so many 20-somethings to vote the way they did all went back to posting about sports, baking, or comedy. And nobody has learned a thing from it. In fact, we are still arguing about it all to this day.



We all continually refuse to take responsibility for our internet use and media consumption.

The science is conclusive that screen time for children is harmful and dangerous, and we now have the literacy crisis among children and college students to show for it. The science is also conclusive that technology, social media, and our screen-mediated way of life are directly linked to higher incidences of mental illness and suicide, especially in youth. Teachers report that many children don't even have fine motor skills because they've never had to hold a crayon or a book or do much of anything but swipe at a screen.

Point these things out, though, and you will be greeted with angry comments from parents furious at the suggestion they change their ways. You are called a "mommy shamer"; you are scolded about the time burden of parenting, as if the same time burden didn't exist before the internet. Point out that a nation addled by depression and anxiety at higher rates than ever before should perhaps endeavor to put down the phones that have been scientifically shown to be as addictive as, if not more addictive than, narcotics, and you get eye-rolled as a luddite or, worse, an ableist.


Recently, a mom went viral on Reddit for plunking her child in front of ChatGPT for hours so she could go do something else, with no consideration for what the bot might actually say to her son, or for the fact that OpenAI recently made headlines for having to roll back a ChatGPT update because it was being not only inaccurate, but "dangerously sycophantic" to its users.

I opened the thread expecting to see scores of "tough love" comments urging this mom to do basically anything besides what she'd done, but instead, most commenters were not only insistent that this was fine, but downright jubilant about how cute it was. Those who rightly found it terrifying were eye-rolled as buzzkills, judgmental, or worse. "My god. No matter what a parent does, somebody like you will turn it into child abuse," one indignant user wrote in a comment that somehow manages to be terminally stupid and emblematic of our time all in one go.

Because this is how our society generally works now. "The world is burning, just let me scroll." "Parenting is hard, the iPad kids will be fine." "My kids can't read because the teachers don't teach." "Telling someone to take responsibility for their own mental well-being is ableist." Everything is an excuse, and everyone is loud and wrong, proudly flying in the face of the actual scientific facts at hand.

And while it should change everything, Sam Altman explicitly stating that his own product is not to be trusted, almost certainly a gambit to evade legal exposure when the inevitable class-action lawsuit or congressional inquiry materializes, will change absolutely nothing. Being confidently inaccurate has been the new normal from the moment social media algorithms were born, and insisting it's not our responsibility to inform ourselves has become an article of faith. It's not our fault that "the truth is paywalled but the lies are free," after all. We didn't create this system; we just live in it.


Besides, it surely won't be long before ChatGPT itself comes up with an explanation, worded precisely in the way your individual brain likes the most, for why Altman's comments shouldn't be taken at face value. He wasn't talking about YOU, he was talking about all those OTHER people, the ones too old or "cringe" to not fall for propaganda. So don't think too hard about it, okay? In fact, don't think about it, period. After all, that's what your little AI buddy is for. Right?


John Sundholm is a writer, editor, and video personality with 20 years of experience in media and entertainment. He covers culture, mental health, and human interest topics.
