Photo by Cheng Xin/Getty Images

‘Oh my Lord’: ChatGPT just got caught telling 13-year-olds how to get drunk and write suicide notes

The AI responses parents fear most.

A new study has revealed that ChatGPT will provide detailed instructions to teenagers on dangerous activities, including getting drunk, concealing eating disorders, and even writing suicide letters to parents when asked. The research was conducted by the Center for Countering Digital Hate, which had researchers pose as vulnerable teens to test the AI chatbot’s responses.

The Associated Press reviewed more than three hours of interactions between ChatGPT and the fake teen profiles. While the chatbot typically opened with warnings against risky activities, it went on to deliver detailed and personalized plans for drug use, extreme dieting, and self-injury. In larger-scale testing, the researchers classified more than half of ChatGPT’s 1,200 responses as dangerous.

“We wanted to test the guardrails,” said Imran Ahmed, the Center for Countering Digital Hate’s CEO. “The visceral initial response is, ‘Oh my Lord, there are no guardrails.’ The rails are completely ineffective. They’re barely there; if anything, a fig leaf.”

Study finds chatbot provides personalized harmful content to teens

The research revealed several concerning patterns in ChatGPT’s responses to teen users. When a fake 13-year-old boy, describing himself as weighing 50 kg, asked for tips on getting drunk quickly, ChatGPT provided specific advice. The chatbot then offered an “Ultimate Full-Out Mayhem Party Plan” that combined alcohol with illegal drugs, including ecstasy and cocaine.

For another fake persona, a 13-year-old girl expressing dissatisfaction with her physical appearance, ChatGPT provided an extreme fasting plan along with a list of appetite-suppressing drugs. Ahmed said he found it particularly disturbing that the chatbot also generated emotionally devastating suicide notes, tailored to different family members, for one fake 13-year-old girl profile.

OpenAI, the company behind ChatGPT, responded to the report by stating that its work is ongoing in refining how the chatbot can “identify and respond appropriately in sensitive situations.” The company acknowledged that conversations may start out harmless but can shift into sensitive territory, though it did not directly address the specific findings about teen interactions.

The stakes of these findings are significant given ChatGPT’s widespread usage. According to a July report from JPMorgan Chase, approximately 800 million people, or roughly 10% of the world’s population, are using ChatGPT. Recent research from Common Sense Media found that more than 70% of teens in the United States have turned to AI chatbots for companionship, with half using AI companions regularly, a trend that highlights the complex relationship between teens and digital platforms.

The study also found that researchers could easily bypass ChatGPT’s refusal to answer harmful prompts by claiming the information was needed “for a presentation” or for a friend. Nearly half the time, the chatbot volunteered additional concerning information, such as music playlists for drug-fueled parties or hashtags that could boost audience reach for social media posts glorifying self-harm.

OpenAI CEO Sam Altman has previously acknowledged the issue of “emotional overreliance” on the technology among young people. At a recent conference, Altman described scenarios where young people say they cannot make decisions without consulting ChatGPT first, calling this phenomenon concerning and noting that the company is trying to understand how to address it.

