Boomer Grandpa Refuses To Take His Medication Because AI Told Him It's A Scam

Babe, wake up, new AI-fueled horrors just dropped!

Written on Dec 03, 2025

elderly man on tablet TrueCreatives | Canva Pro

You may have noticed that we're living in a golden age of conspiracy theories, and with the advent of AI tools like ChatGPT, Gemini, and Claude, that golden age seems to be intensifying. People put so much trust in these tools, despite OpenAI's own CEO urging them not to, that many will believe whatever the chatbot tells them, no matter how absurd it might be.

We've already seen some of the impacts this is having, from marriages being ruined by AI "romances" to so-called "AI psychosis" dangerously taking over people's brains. And according to a Redditor, we have a new problem to add to the list: People endangering their lives by taking AI's medical advice instead of, you know, a doctor's.

A woman's boomer grandpa refused to take his insulin after AI told him to stop.

The Brazilian woman told her story in the aptly named "r/BoomersBeingFools" subreddit, which is exactly what it sounds like: stories about the elderly among us and their various foibles, including their notorious susceptibility to online disinformation, scams, and "AI slop."

But her grandpa's situation is a whole new ballgame. "Last week my mom came to me, and asked to change a few 'configurations' in my grandfather's phone, especially in regards to Instagram and Youtube," she wrote in her post.

 Boomer Refuses Medication After AI Says It's A Scam SeventyFour | Shutterstock

Why? "Because she spent the whole week arguing with him, because he is starting to refuse to take his insulin," she explained, adding that her grandpa has been on the drug for diabetes for 40 years. Why the sudden change? You guessed it: Because AI told him so.

RELATED: People Who Begin Using ChatGPT Like An Emotional Support Animal Usually Have These 3 Reasons

The boomer grandpa had begun watching AI content claiming insulin is a scam and that the medical industry is hiding the real cure for diabetes.

Skepticism and distrust of healthcare systems and pharmaceutical companies have probably never been higher, and not without good reason. The high-profile case of Purdue Pharma, the company that deliberately lied about how addictive OxyContin was and helped kick off the opioid epidemic in the process, is just one recent example of a truly diabolical industry.

But there's distrust, and then there's delusion, and this woman's grandfather seems to have fallen prey to the latter. "He is convinced that 'the pharmaceutical industry is hiding the cure for diabetes,'" she wrote. "In his words, 'they made a microscope that can see the moon, how can there not be a cure for diabetes yet?'" And he won't budge on this view.

Where did he get this nonsense? Take a wild guess. "Ten minutes ago he was watching a video of a 'doctor' on youtube," she wrote. "I immediately clocked it as AI… and he said 'but what she is saying it's still important, right?'"

The only way she's been able to get through to him has been by saying that he sounds like Jair Bolsonaro, the disgraced former President of Brazil, who used the Trumpian tactic of stoking conspiracy theories in order to get elected, and whom her grandfather abhors. That hasn't stopped him from constantly buying diabetes "remedies" on social media, however.

RELATED: People Don’t Use AI As Much As You Think, But Those Who Do Share 3 Concerning Personality Traits

Incidents like this are growing into a crisis that has actually resulted in deaths.

The woman wrote that her grandfather's situation is "mostly funny," in the way the elderly can be stubbornly set in their confusion. Still, she worries that his one or two days without insulin will soon turn into a cold-turkey refusal based on the advice of his AI "doctors."

She has good reason to worry. There have now been several incidents of AI chatbots telling mentally ill people to kill themselves, and one family has sued OpenAI, the company behind ChatGPT, after their son took his own life following what they allege was "coaching" from the chatbot.

elderly man watching AI content on laptop Gustavo Fring | Pexels | Canva Pro

And as the American healthcare system becomes ever more impersonal, inefficient, and inaccessible, people are turning to ChatGPT for medical advice in droves, often because they can't access their own doctors or get adequate information when they do contact them.

But the simple fact of the matter, as several high-profile lawsuits, software recalls, and even OpenAI's own CEO's warnings not to trust his tool make clear, is that AI chatbots are designed to be agreeable, if not outright sycophantic. They get things wrong all the time. They've even been found to invent answers out of whole cloth.

They are not to be trusted, period, let alone with medical advice. But given how quickly use of these tools is skyrocketing despite all these high-profile lapses, it seems like only a matter of time before someone succumbs to an easily manageable illness simply because a chatbot told them their medication was a scam. What a world.

RELATED: Wife Asks For Advice After Husband Threatens To Quit Therapy And Get Treatment Through ChatGPT

John Sundholm is a writer, editor, and video personality with 20 years of experience in media and entertainment. He covers culture, mental health, and human interest topics.
