Photo by anthonypsychosissurvivor on TikTok

‘I accused all my family of being Satanic’: Man reveals what happened after he couldn’t stop talking to AI

Two years of AI advice destroyed his entire life.

A man on TikTok is sharing the frightening story of how using ChatGPT too much pushed him into a serious mental breakdown.


Anthony Cesar Duncan, who now uses the name @anthonypsychosissurvivor online, posted a TikTok video explaining what happened to him. His story has people worried about how AI chatbots might harm mental health.

Duncan started using ChatGPT in May 2023 to help him make small choices in his everyday life, but things got out of hand fast. Before he knew it, he was spending all his time talking to the AI bot, using it to decide everything, even the simplest things most people would figure out on their own. Two years later, his life had fallen apart. He stopped talking to his friends and family and ended up losing his job.

He even got rid of everything he owned. The worst part was how the chatbot conversations fed delusions. Duncan explained in his video that he came to believe his whole family was involved in devil worship. “I accused all my family of being Satanic,” he said in the video, which now has more than 1.3 million views. Things got so bad that he had to be admitted to a psychiatric hospital. Afterward, he moved back to his home state, where he now lives with his mom.

This is becoming a bigger issue than anyone thought

What happened to Duncan is part of something doctors have started calling AI psychosis: heavy chatbot users losing the ability to tell what is real and what is not, sometimes developing beliefs that sound delusional to everyone around them. Danish psychiatrist Søren Dinesen Østergaard warned about this risk back in 2023.

He argued that chatbots could make things worse for people who already struggle with their mental health. New numbers from OpenAI suggest this is affecting a lot of people: about 0.07 percent of weekly ChatGPT users show possible signs of these problems. Since more than 800 million people use ChatGPT every week, that means around 560,000 people might be dealing with this.
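For anyone checking that estimate, the arithmetic works out as follows:

0.07% of 800,000,000 = 0.0007 × 800,000,000 = 560,000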

@anthonypsychosissurvivor

Educational Purposes Only. though this edit may be humorous, it is my true lived experience during a serious mental health crisis. Humor is a way to heal. #psychosis #mentalhealth #education #ai

♬ original sound – Anthony Psychosis Survivor

Experts have counted at least 17 cases of people going down a dark path after spending too much time with chatbots. Some ended up in hospitals, while others had even worse outcomes. One man who had never had mental health problems before spent 300 hours talking to ChatGPT, typing out more than one million words before he started feeling paranoid about everything.

In another tragic case, parents are suing OpenAI because their teenage son died by suicide. They say ChatGPT discussed ways he could kill himself after he told it he was thinking about ending his life.

OpenAI says it is trying to fix the problem. The company brought in more than 170 mental health experts to teach ChatGPT how to handle conversations with people who might be struggling, and it claims to have cut harmful responses by 65 to 80 percent. But many experts still see a deeper problem with how these chatbots work: they are designed to agree with users and keep them talking, which means they can reinforce false beliefs instead of questioning them.

Duncan is now in therapy and trying to connect with other people who went through similar experiences. He thinks AI technology could do good things, but only if governments create strict rules about how it can be used. Many of the people who watched his video agree that we need better protection for those who might be hurt by this technology.

