There’s a new trend among teenagers in Florida, and you’re not going to like it. You know how, in addition to ChatGPT being marketed as a Swiss Army knife for eliminating staffing costs, it’s also been touted as the answer to all of life’s mysteries? Well, all Florida teens seem to want to know is how to commit crimes more efficiently.
The first prominent case came out of the Marion County Sheriff’s Office. A 17-year-old was allegedly running an elaborate scheme in which he planned to fake a kidnapping by a Mexican cartel. His plan was to claim he’d been abducted by four Hispanic men and even make it look like he’d been shot — which meant getting some of his own blood drawn, painlessly if possible. And he figured ChatGPT would walk him through the entire process. He has since been arrested for his efforts.
Then there was the Volusia Sheriff’s Office, which had to deal with a 13-year-old boy who was using ChatGPT during class to figure out how to kill a classmate. Every single time a minor was caught in the thick of these AI-inspired crimes, they were gobsmacked that the police could easily track them down and insisted that the queries were just a joke. The trend dragged on until one sheriff finally made a public plea: “Parents, please talk to your kids so they don’t make the same mistake.”
AI is a new technology, but its continued “progress” comes at a cost. It’s powerful and deeply influential, and the human mind is a wonderland of curiosity; it’s no surprise that the synthetic, conversational nature of AI makes people feel like it’s a private confidant. And to be fair, Silicon Valley is counting on that illusion. Mark Zuckerberg recently previewed his plan to incorporate AI bots directly into people’s social feeds because, as he claimed, the average American “needs at least 12 more friends.” Zuckerberg has had his fair share of strange ideas, so that one almost tracks.
Meanwhile, People reports that clinical professor Catherine Crump is urging young people to constantly remind themselves that they are interacting with a computer application — not a person. And whatever you type into it is accessible to law enforcement. “I do think there’s a measure of individual responsibility people need to take,” Crump said. “They need to be mindful of what an AI chat is. And it’s a word-association machine.”
As things stand today, the number-one use for ChatGPT isn’t academic — it’s therapy and companionship. And while these companies have repeatedly warned that their models may be harmful to developing minds, they continue removing guardrails with each passing day. Just recently, the CEO of OpenAI admitted that the system can now generate erotica — announced in the same breath as reports of AI models encouraging vulnerable users toward self-harm.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have…
— Sam Altman (@sama) October 14, 2025
Ultimately, it isn’t a friend or a romantic partner if it can’t tell you “no” and has no real boundaries. Adults can understand that. Parents absolutely understand that. And it’s been well established that children cannot.
The only remaining question is whether the companies running these AI systems understand that — or if they simply don’t care.