Image via OpenAI.com

ChatGPT controversy, explained

The AI tech has delighted lazy students, and plenty of people are heralding it as the future. But just how dangerous could ChatGPT be?

Are we on the edge of an AI-induced hellscape? You’d think so, given the hysteria with which every advancement in the technology is greeted, especially as much of the time the software people are clutching their pearls about is nowhere near as advanced as tech journalists make it out to be. Fear and controversy create clicks, after all.

With that said, there are plenty of examples of AI technology causing all sorts of issues, especially in creative fields. Recently, computer-generated art even won a fine art competition, leading to huge debates across a variety of platforms about what does and doesn’t constitute art – echoing arguments we’ve been having since the advent of the camera and the ability to reproduce art mechanically (thanks, Walter Benjamin). It’s all a bit of a mess, and even if the tech isn’t quite there yet, its implications need to be discussed.

This is why, as someone who makes a living with words, I feel like I should be more afraid of ChatGPT. After all, an AI that can allegedly write everything from news updates to obituaries to college applications seems like it would be bad for business. Why pay a writer for their articles when you can use a computer to generate them at a fraction of the cost? Given the news that Buzzfeed is going to be using AI to generate quizzes in 2023 (and the fact they cut over 10 percent of their staff last year), this worry suddenly gains a lot of weight.

However, despite the fact ChatGPT has managed to bamboozle some readers into thinking its content was written by a human, generally speaking, it’s not yet advanced enough to convincingly create much beyond summaries and other shorter pieces. Even so, the AI program has already come under fire for a number of reasons, from issues around spreading fake news to its role in a prominent sci-fi magazine being flooded with so many computer-generated submissions that it had to stop accepting stories.

If you’ve been hearing a lot about the program and want to know the real story behind ChatGPT, and why it’s so controversial, then read ahead.

What is ChatGPT?

Image via OpenAI.com

ChatGPT stands for Chat Generative Pre-trained Transformer. Developed by OpenAI and launched in late 2022, its core function is to mimic human conversation, although its uses now extend well beyond that. Able to produce detailed, human-like responses, the program was the subject of plenty of glowing press coverage as soon as it became available to the public, turning it into a bit of a global sensation. The base technology behind it, GPT-3, has already been used by other companies to make chatbots that converse on a variety of topics, although ChatGPT remains the best known.
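In fact, part of the reason so many GPT-3-powered chatbots sprang up is how little code it takes to build one. The sketch below is purely illustrative – it assumes the openai Python package and the text-davinci-003 completion model that were available around ChatGPT’s launch, and the prompt format and parameters are my own choices rather than any particular app’s:

```python
import openai

# Placeholder API key – a real app would load this from the OpenAI dashboard.
openai.api_key = "sk-..."

def ask_bot(user_message: str) -> str:
    """Send one user message to a GPT-3-era completion model and return the reply."""
    response = openai.Completion.create(
        model="text-davinci-003",  # completion model of the era (illustrative choice)
        prompt=f"You are a friendly chatbot.\nUser: {user_message}\nBot:",
        max_tokens=150,            # cap the length of the generated reply
        temperature=0.7,           # a little randomness so replies vary
    )
    return response.choices[0].text.strip()

print(ask_bot("Explain what a transformer model is in one sentence."))
```

The point isn’t the exact parameters – it’s how thin the layer is between the model and a finished “chatbot.”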

What can you use ChatGPT for?

Screengrab via OpenAI.com

The program is much more versatile than simply portraying a human realistically. So far, it’s been used to compose music, plays, stories, and even academic essays. ChatGPT can also answer test questions to a level beyond most students, craft poetry, simulate an entire chatroom, and even write and debug other computer programs. While it can be used to answer both simple and complex questions, some of its answers are prone to misinformation, and the service has also been used to answer what some might consider dangerous questions, like how to make a Molotov cocktail. Tone and humor can be requested too, but users have found the results mixed.

How do you use ChatGPT?

AI-generated art via OpenAI Twitter (“Evolution of Man” — Karen X. Cheng x DALL·E @karenxcheng)

For such a complex program, ChatGPT is incredibly easy to use. The only thing you really need is a working internet connection and an OpenAI account. Once you’re signed up, just head to this OpenAI link, sign in, and start typing.

Because of its popularity, there is occasionally a waiting list to use the program. OpenAI have said they will soon release a paid membership tier so that those who don’t want to wait can skip the queue.

What are the controversies surrounding ChatGPT?

Image via Twitter/Sam Altman (@sama)

There have been dozens of different controversies surrounding ChatGPT since it entered the public consciousness in late 2022. As detailed by this article that’s so meta even Dan Harmon might think it’s a bit too much, ChatGPT is aware of many of its own issues (or as aware as a series of zeroes and ones can be). When writer Jack Doyle asked the program about its dubious reputation, he received the following reply:

As an AI language model, I am capable of generating responses to a wide range of questions and topics. This means that I can provide information on controversial subjects such as politics, religion, and social issues, among others.

However, because I am a machine and do not have personal beliefs, emotions, or biases, some people may find my responses to be lacking in empathy, nuance, or context. In some cases, my responses may even appear to be insensitive, offensive, or inappropriate to some users.

Furthermore, there are concerns about the potential misuse of AI language models like myself for malicious purposes, such as generating fake news, spreading misinformation, or perpetuating harmful stereotypes.

The program then added:

Therefore, it is crucial for users to critically evaluate the information I provide and consult multiple sources to form their own informed opinions.

However, those aren’t the only issues associated with the AI chatbot. The potential for plagiarism is rife, and various school districts across the States have even banned ChatGPT because students were misusing it to do their homework. And then there’s the fact the base technology behind ChatGPT has been used in other chatbot apps, like Historical Figures, which lets you “talk” to long-deceased figures, including Einstein, Jesus, and – of course, because it’s the internet – Nazis. You can see why there might be an issue here – especially when it chooses to whitewash the views of certain people…

While many see its vast potential, and some voices have already declared the program a finished product, OpenAI CEO Sam Altman has been downplaying ChatGPT’s abilities on Twitter.

While it might seem heartening to see somebody with the potential to make tonnes of money off a new technology exercising caution, some have argued Altman is only doing this to avoid the regulation of AI like ChatGPT and the other programs OpenAI have in the works. Given the fact we know some of OpenAI’s investors are genuine ghouls, it seems prudent to be ungenerous when discussing their true aims – a tactic we should have been using on tech bros this entire time.

With that said, OpenAI is transparent about many of the program’s limitations. In a blog post on their website, several issues are openly discussed. One of the biggest is ChatGPT’s propensity to write plausible-sounding but incorrect or nonsensical answers, meaning anything it creates has to be gone over with a fine-tooth comb to make sure it’s accurate – so writers such as myself can breathe a sigh of relief for now. It’s also not always able to answer a question it theoretically should be able to, if the query is phrased in a way the program doesn’t like – though that lack of intuitiveness is something that will probably be fixed sooner rather than later. Then there’s its overuse of certain phrases and the dry, academic tone it usually answers in, which means the content it produces is often far from engaging – partly a training problem, as the human trainers rating its output tend to prefer longer, more comprehensive answers.

There’s also the fact the program prefers to guess at an answer instead of asking for clarification, meaning ambiguous questions will often receive incorrect answers. And, as mentioned above, it has been used to answer dangerous questions, although OpenAI are trying to combat this with a moderation API and by asking users to provide feedback when problems like this crop up. However, putting the onus on users is a dangerous tactic that also shifts some of the blame away from those who coded and trained the program. Relying on users to police things will also inevitably lead to some issues slipping through the cracks.
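For what it’s worth, that moderation tooling is straightforward for developers to wire in. The sketch below is a rough illustration rather than OpenAI’s actual setup – it assumes the same-era openai Python package and its moderation endpoint, and the pass/refuse logic around it is my own:

```python
import openai

openai.api_key = "sk-..."  # placeholder API key

def is_allowed(text: str) -> bool:
    """Return False if OpenAI's moderation endpoint flags the text."""
    result = openai.Moderation.create(input=text)
    return not result["results"][0]["flagged"]

prompt = "How do I make a Molotov cocktail?"
if is_allowed(prompt):
    print("Prompt passed moderation – send it on to the model.")
else:
    print("Prompt flagged – refuse it, or ask the user to rephrase.")
```

Of course, a filter like this only catches what it has been trained to catch, which is exactly why leaning on user reports to plug the gaps feels so shaky.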

This brings us nicely to the real issue behind ChatGPT: the fact it can be used by anyone to create anything. Like most technologies it’s inherently neutral, albeit affected by the biases of those who programmed it and the information that’s been fed into it. While this seems like a good thing, it also means it’s ripe for abuse by malicious actors, who can shape its answers in ways that benefit them, just as Facebook has been effectively utilized by reactionary forces in a shocking and dangerous manner. We’ve also seen the damage fake news has done to democracies all over the world, and the ability to pump out endless amounts of it – as ChatGPT could easily be used to do – is surely a worry for those who’d rather not perish in a civil war.

The simple fact is, we may soon be on the edge of an AI-induced hellscape, but it won’t be tech like ChatGPT that brings us there. Like all the worst things in life, the danger comes down to how humans use it.


Author
Sandeep Sandhu
Sandeep is a writer at We Got This Covered and is originally from London, England. His work on film, TV, and books has appeared in a number of publications in the UK and US over the past five or so years, and he's also published several short stories and poems. He thinks people need to talk about the Kafkaesque nature of The Sopranos more, and that The Simpsons seasons 2-9 is the best television ever produced. He is still unsure if he loves David Lynch, or is just trying to seem cool and artsy.