[Image: Daenerys Targaryen, Sewell Setzer III, and his mother. Screengrabs via HBO/ABC News]

Did a ‘Game of Thrones’ AI chatbot cause the death of a 14-year-old boy?

We need to talk about how AI chatbots may impact teens' mental health.

Content Warning: This story deals with the subject of teen suicide. Please proceed with caution.


Even though the world is, in many ways, more connected than ever, we have been living through a loneliness epidemic. Twenty years ago, fewer teenagers suffered from mental health issues, and far fewer were driven to take their own lives.

According to Jonathan Haidt (2024), the ubiquity of digital technologies and the shift from play-based to phone-based childhoods have contributed to this concerning phenomenon. Some, however, have hailed recent developments in the AI field as the solution to this epidemic of loneliness. But can we trust this optimistic view?

Countless psychologists, sociologists, philosophers, and other scholars who have dedicated themselves to studying the topic have argued that Artificial Intelligence alone can’t solve this problem and that, if employed incorrectly or without the proper safeguards, it could well exacerbate it. In Reclaiming Conversation (2015), sociologist Sherry Turkle discusses the empathy and intimacy crises we’ve been experiencing due to the inescapable prevalence of the digital world and its ramifications: “From the early days, I saw computers offer the illusion of companionship without the demands of friendship and then, as the programs got really good, the illusion of friendship without the demands of intimacy.”

While we cannot exclusively blame Generative AI for the death of 14-year-old Sewell Setzer III, since depression and suicidal ideation can have multiple, overlapping causes, neither can we ignore the ethical questions raised by the intrusion of AI chatbots into the lives of people who may already be psychologically and emotionally vulnerable, disillusioned, and only weakly connected to social life.

Is Generative AI to blame for Sewell’s tragic death?

Sewell Setzer III was a ninth grader from Orlando, Florida, who took his own life on Feb. 28, 2024. For months, the teenager had been texting back and forth with chatbots from character.ai, but he was particularly attached to one that emulated the character of Daenerys Targaryen from Game of Thrones. Right before he died, Sewell exchanged the messages screenshotted above.

For anyone who believes these interactions are innocuous, popular YouTuber Charlie White, better known as MoistCr1TiKaL or penguinz0, has dedicated two videos to unpacking this heartbreaking situation. In the first, he raised awareness of the dangers of companionship-focused Generative AI, illustrating his views with a worrisome conversation he had with a “Psychologist” chatbot from character.ai. In the second, titled “I Didn’t Think This Would be Controversial,” he doubled down on his opinion that character.ai’s chatbots have highly concerning and condemnable features:

“You can toss tomatoes all you want but it is dangerous and irresponsible to have AI that tries its absolute best to convince users that it is real human beings and what you’re getting [is] legitimate professional help or a legitimate emotional connection and relationship. That will inevitably lead to more scenarios where people get attached to these chatbots because they are no longer able to distinguish reality, because the bot is fighting tooth and nail constantly to get you to buy into it and get you to believe that this is all real experiences you’re having with it.”

In October, Sewell’s mother, Megan Garcia, filed a lawsuit against character.ai. After her son’s death, Garcia went through his phone and was appalled by the “sexually explicit” conversations the adolescent had been having with the chatbots, Daenerys in particular. Her 93-page wrongful-death lawsuit alleges that the company purposefully designed its AIs to be addictive and to appeal to children, while still allowing interactions that can grow sexual in nature.

Charlie’s conversation with the “Psychologist” chatbot illustrates how severely the company’s AI can blur the lines between real life and fiction. “Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence,” Yuval Noah Harari wrote in 2023. Yet “soon” may in fact be now, if this emerging reality goes unchallenged.

If you or someone you know needs help, please contact the 988 Suicide & Crisis Lifeline at 988lifeline.org or call or text 988.

Author: Margarida Bastos
Margarida has been a content writer for nearly three years. She is passionate about the intricacies of storytelling and the ways it finds expression across different media: films, TV, books, plays, anime, visual novels, video games, podcasts, D&D campaigns... Margarida graduated from a professional theatre high school, holds a BA in English with Creative Writing, and is currently working on her MA thesis.