Google’s AI Overviews Gains Embarrassing Reputation Among Users

Reading time: 3 min

Google Search users have been getting weird and inaccurate information since the company rolled out its new AI Overviews feature last week. The update, powered by a custom Gemini model and launched in the United States on May 14, is meant to show users “helpful” summaries when they type a query into Google Search, but some results have been strikingly inaccurate and have gone viral on social media.

A user shared on X the results of their query “cheese not sticking to pizza”, for which AI Overviews suggested adding “about ⅛ cup of non-toxic glue to the sauce to give it more tackiness.” The post went viral, and users discovered that the generative AI had pulled the information from an 11-year-old Reddit post.

It has recently been confirmed that Google and OpenAI have partnerships with Reddit to feed their AI models, raising broader concerns about the information being spread at scale. Other users found that AI Overviews treated untrustworthy or parody sources as reliable, such as the satirical online publication The Onion. When someone asked how many rocks they should eat, AI Overviews recommended “at least one small rock per day.”

Hundreds of funny answers and memes have been shared over the past few days, damaging the company’s reputation for providing accurate, helpful information. The timing is unfortunate, considering Google recently announced its new AI model in hopes of competing with ChatGPT.

Google’s Response

Google published a document yesterday, “AI Overviews: About Last Week,” acknowledging the “odd” overviews and explaining why they happened.

“AI Overviews generally don’t ‘hallucinate’ or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available,” states the article.

The tech company explained that its experts had tested the new feature and accounted for common queries, but opening the service to millions of users in the United States produced novel and unexpected searches.

Google also seemed to place some of the blame on users for probing the AI’s limits. “We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” reads the announcement. It added that not all the viral images were real and that “there have been a large number of faked screenshots shared widely.”

Google acknowledged that the AI tool needs improvement and said that steps have already been taken, such as “limiting the inclusion of satire and humor content” and restricting the use of user-generated content in responses. It has also removed the feature for news and health topics “where freshness and factuality are important.”
