
How to keep cheese from sliding off pizza — Google’s new AI search melts under the pressure of the mystery

… and it ain’t the only deadly arrow in its quiver.

Google AI search. Photo via Getty Images / Remix by Apeksha Bagchi

Tech giants are scrambling to serve up the most impressive artificial intelligence, but it seems they forgot to check if their AI assistants have any common sense.


Following in the footsteps of Bing Search, Google recently launched its own AI Overviews feature as part of its generative search experience. The idea is simple: when you search for something, Google’s AI provides a brief overview or answer right at the top of the results page. This would not only save time but also position Google once again as a leader in innovative search solutions.

However, the execution has been less than stellar, turning what could be a powerful tool into something of an internet joke. 

One glaring example of this was when a user asked how to prevent cheese from sliding off a pizza and received the advice to use “non-toxic glue.”

I’m sorry, but if your pizza needs glue to keep the toppings on, you’ve got bigger problems than cheese slippage. As if that wasn’t bad enough, this genius recommendation was based on an 11-year-old Reddit post. Way to stay current, Google.

The issue isn’t limited to bizarre culinary tips. When asked which presidents went to UW-Madison, Google’s AI confidently listed “President Andrew Jackson” as a graduate… in 2005.

I guess Google’s AI missed the memo that Andrew Jackson died in 1845. Maybe it was too busy inventing new units of measurement, like the “kilotomato.” I can’t wait to see that one added to the metric system.

Moreover, when asked about the benefits of running with scissors, the AI declared it a great cardio exercise. I’m sure emergency room doctors everywhere would beg to differ.

Speaking of questionable recommendations, don’t even get me started on the AI’s incredibly insensitive response to a user searching for “I’m feeling depressed.” Google’s AI thought it would be a great idea to suggest that the user jump off a bridge. That’s not just inappropriate; it’s downright dangerous and callous.

Now, to be fair, the AI did clarify that this suggestion was based on a Reddit post. However, imagine if you went to a therapist and told them you were feeling depressed, and their response was, “Well, I saw on Reddit that jumping off a bridge might help.” You’d be horrified, and rightfully so. But that’s essentially what Google’s AI is doing here. By failing to filter out this dangerous content and instead presenting it as a viable option, Google’s AI is not only being insensitive but also irresponsible.

But why is this happening?

Well, it’s important to remember that Google’s AI Overviews feature is still in its experimental phase. The answer also lies in a little something called “AI hallucinations”: a phenomenon in which an AI starts making things up or “imagining” things that aren’t actually true. These AI hallucinations aren’t limited to Google, either. Microsoft’s Bing Chat has had its fair share of blunders, from making up fake news articles to expressing a desire to steal nuclear launch codes. But because Google runs the world’s largest search engine, its missteps are under a microscope. Its image-generating AI had earlier come under fire for spewing out biased and offensive responses.

The root of the problem is that these AI models are trained on vast amounts of online data, including social media posts, news articles, and even Reddit threads. And as we all know, the internet isn’t exactly a bastion of truth and accuracy. So when these AI assistants start spewing nonsense, it’s really just a reflection of the garbage they’ve been fed. Couple that with the immense pressure these tech companies are under to one-up each other in the AI race, and you get a recipe for disaster.
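To make that “garbage in, garbage out” point concrete, here’s a minimal toy sketch in Python. Everything in it is invented for illustration (the snippets, the scores, the trust weighting), and it bears no resemblance to Google’s actual ranking pipeline. It simply shows how a retriever that ranks purely on keyword relevance will happily crown an old joke as the top answer, while even a crude credibility weight buries it.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source: str
    relevance: float  # how well the snippet matches the query (0 to 1)
    trust: float      # rough credibility of the source (0 to 1)

# Entirely made-up corpus for a query like "cheese sliding off pizza"
CORPUS = [
    Snippet("Add some non-toxic glue to the sauce for extra tackiness.",
            "decade-old Reddit joke", relevance=0.95, trust=0.05),
    Snippet("Use less sauce and let the pizza rest so the cheese can set.",
            "food science site", relevance=0.90, trust=0.90),
]

def naive_answer(corpus):
    # Relevance only: the joke wins because it matches the query best.
    return max(corpus, key=lambda s: s.relevance)

def filtered_answer(corpus):
    # Same corpus, but credibility-weighted: the joke gets buried.
    return max(corpus, key=lambda s: s.relevance * s.trust)

print("Naive:   ", naive_answer(CORPUS).text)
print("Filtered:", filtered_answer(CORPUS).text)
```

Real systems are vastly more sophisticated than this, of course, but the failure mode is the same: if source quality never enters the scoring, the loudest match wins.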

These tech companies are so focused on winning the AI race that they’re forgetting about the actual humans who have to use their products. They’re too busy cackling over their algorithms while the rest of us are left to deal with the fallout of their creations. And with AI systems making their way into healthcare and finance, the risks of hallucinations only get more serious. As for keeping that cheese on the pizza — perhaps it’s best if we stick to more traditional methods.
