Google explains how AI Overviews operate, common issues, and how misinterpreted queries led to mistakes:

AI Overviews work very differently than chatbots and other LLM products that people may have tried out. They’re not simply generating an output based on training data. While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index. That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.)

In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

In a small number of cases, we have seen AI Overviews misinterpret language on webpages and present inaccurate information. We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies.

I’m glad Google addressed the recent debacle that has been flooding the media lately. Even though the media never forgets, something I’ve always respected Google for is owning a mistake. Google continues to explain how Google Search is always getting better, and recent updates focus on improving results for a wider range of queries, including new ones that haven’t been seen before. They’ve made improvements to identify nonsensical questions and limit unreliable sources like satire and user-generated content. Additionally, they’ve added safeguards to prevent AI Overviews from appearing for topics where accuracy is critical, such as news and health.

In the short term this may bring some understanding to the situation, but the dents in Google Search’s armor in the court of public opinion may take a while to get buffed out.

See the full statement on The Keyword