Professor drops chilling truth about why students are now sabotaging their own essays, and yes, it is to protect their academic careers – We Got This Covered
Image by jothamsutharson on Pixabay.

Students are writing badly on purpose to get around cheating detectors.

College students are now intentionally sabotaging their own work to avoid being flagged by AI detection tools. Dr. Sam Illingworth, a professor at Edinburgh Napier University in Scotland, recently shared his observations on Reddit, pointing out a disturbing pattern where students deliberately add typos and use bad grammar in their assignments.

It sounds counterintuitive, right? Well, it turns out they’re trying to fool the AI detectors. According to Newsweek, Illingworth noted that some students are even running their perfectly human-written papers through “AI humanizer” tools, all just to dodge those pesky false positives. 

He put it pretty starkly, saying, “We’ve created a system where competent writing is treated as suspicious.” This system makes students second-guess their own abilities and forces them into these weird strategies.

With AI checkers baked into many formal submission processes, students sometimes don't even get a chance to defend themselves

Getting falsely accused of using AI can have major consequences for students. Illingworth mentioned several instances where students were incorrectly penalized, compromising their studies. The big problem here is that AI detection systems just aren’t very good at what they do. As we rely on AI more, we are starting to see crazy stories about AI errors, whether it’s an AWS AI coding tool deleting code, or a child’s toy giving disturbing advice.

A 2023 study that looked at 14 different AI-detection systems found that none of them could even hit 80 percent accuracy. Researchers identified “serious limitations” and even characterized these systems as “unsuitable” for detecting AI cheating in classrooms. They concluded, “Our findings strongly suggest that the ‘easy solution’ for detection of AI-generated text does not (and maybe even could not) exist.”

An April 2023 study from Stanford University revealed that a shocking 61 percent of essays written by non-native English writers were flagged by seven different AI-detection tools, and an astonishing 97 percent got flagged by at least one. James Zou, a senior author of that study, warned that these detectors are simply too unreliable right now and need serious improvements and rigorous evaluation.

Illingworth’s biggest concern with these tools is their inherent bias. He explained that false positives disproportionately affect students based on their race, nationality, or first language. He called it “institutional prejudice, automated and given a confidence score.” He also admitted that he can’t reliably spot AI writing just by eye, and since the technology is so good now, “basing academic consequences on [eye detection] is dangerous.”

Despite this, Illingworth believes there is genuine potential for AI in education. In his view, this isn’t a discipline issue with students, but a rational adaptation to the tools available to them. He argues that it falls to educators to find uses for AI beyond detection, which he calls “a dead end” — for instance, teaching students to use it critically and ethically as a thinking partner or drafting tool.

This way, they don’t have to police something they aren’t equipped to understand.


Author
Jaymie Vaz
Jaymie Vaz is a freelance writer who likes to use words to explore all the things that fascinate her. You can usually find her doing unnecessarily deep dives into games, movies, or fantasy/Sci-fi novels. Or having rousing debates about how political and technological developments are causing cultural shifts around the world.