Artificial intelligence broke the search feature


Keeping AI Summaries to a Minimum: Why Google Is Limiting Them, and Why It Can’t Fully Fix Them

Rolling back the summaries was not mentioned in the post by Liz Reid, Google’s head of search, announcing the changes. The company says it will keep monitoring feedback from users and adjust the feature as needed.

Only four of the changes are described: better detection of “nonsensical queries” that don’t warrant an AI Overview; making the feature rely less heavily on user-generated content from sites like Reddit; offering AI Overviews less often in situations where users haven’t found them helpful; and strengthening the guardrails that disable AI summaries on important topics such as health.
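To make that kind of gating more concrete, here is a hypothetical sketch in Python. It is not Google’s code; every function, field, and threshold below is invented for illustration, showing only the general idea of checking a query against guardrails before an AI summary is shown.

```python
# Hypothetical sketch, not Google's implementation: gate an AI summary behind
# checks like the ones Reid describes (skip nonsensical queries, lean less on
# forum-only evidence, and disable summaries on sensitive topics such as health).
from dataclasses import dataclass

@dataclass
class QuerySignals:
    query: str
    is_nonsensical: bool          # e.g. flagged by a lightweight query classifier
    forum_source_fraction: float  # share of retrieved pages that are user-generated
    topic: str                    # coarse topic label, e.g. "health", "recipes"

SENSITIVE_TOPICS = {"health", "finance", "self_harm"}  # illustrative list only

def should_show_ai_overview(signals: QuerySignals) -> bool:
    """Return True only when an AI-generated summary is likely to be safe and useful."""
    if signals.is_nonsensical:
        return False  # don't summarize absurd or joke queries
    if signals.topic in SENSITIVE_TOPICS:
        return False  # hard guardrail on important topics
    if signals.forum_source_fraction > 0.5:
        return False  # too reliant on Reddit-style user-generated content
    return True

# A joke query about eating rocks would be filtered out before any summary is generated.
print(should_show_ai_overview(
    QuerySignals("how many rocks should i eat", True, 0.9, "health")
))  # False
```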

As Reid’s post acknowledges, the rollout of the new search upgrade did not go entirely smoothly in its original form. The company has made “more than a dozen technical improvements” to AI Overviews, she wrote.

And it wasn’t just users on social media who were fooled by misleading screenshots of fake AI Overviews. The New York Times issued a correction to its reporting about the feature, clarifying that AI Overviews never suggested users should jump off the Golden Gate Bridge if they are experiencing depression; that was just a dark meme on social media. According to Reid, claims that Google returned dangerous results for topics such as leaving dogs in cars and smoking while pregnant were likewise false: “Those AI Overviews never appeared.”

Reid blamed another error on a failure to detect humor. “We saw AI Overviews that featured sarcastic or troll-y content from discussion forums,” she wrote. Forums can be a good source of first-hand information, but they can also lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

Eating rocks is not a topic many people write about online, which leaves a search engine with little to draw on. According to Reid, the AI tool found an article from The Onion, a satirical website, that had been reposted by a software company, and it misinterpreted the information as factual.

“We are more accurate because we put a lot of resources into being more accurate,” says Richard Socher, who runs the AI-centric search engine You.com. The company uses a custom-built web index to help its LLMs steer clear of incorrect information, along with a citation mechanism that can explain when sources conflict, and it selects from multiple different LLMs. Still, getting AI search right is tricky. WIRED found on Friday that You.com failed to correctly answer a query that has been known to trip up other AI systems, stating that “Based on the information available, there are no African nations whose names start with the letter ‘K.’” It had aced the same query in previous tests.
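As a rough illustration of the approach Socher describes, rather than You.com’s actual code, the sketch below retrieves documents from a stubbed index, attaches a citation to every claim, and falls back to listing conflicting viewpoints instead of forcing a single answer. All names and URLs here are placeholders.

```python
# Minimal sketch of citation-backed answering with a conflict fallback.
# The index lookup is stubbed; nothing here reflects You.com's real internals.
from typing import Dict, List

def retrieve_from_index(query: str) -> List[Dict]:
    """Stand-in for a lookup against a custom-built web index."""
    return [
        {"url": "https://example.com/atlas", "claim": "Kenya is an African nation starting with 'K'."},
        {"url": "https://example.com/quiz", "claim": "No African nation starts with the letter 'K'."},
    ]

def sources_conflict(docs: List[Dict]) -> bool:
    """Naive conflict check: do the retrieved claims disagree?"""
    return len({d["claim"] for d in docs}) > 1

def answer_with_citations(query: str) -> str:
    docs = retrieve_from_index(query)
    if sources_conflict(docs):
        # Better to surface multiple viewpoints than to assert one and be wrong.
        return "Sources disagree:\n" + "\n".join(f"- {d['claim']} [{d['url']}]" for d in docs)
    cites = ", ".join(d["url"] for d in docs)
    return f"{docs[0]['claim']} (sources: {cites})"

print(answer_with_citations("African countries that start with K"))
```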

Socher says wrangling LLMs takes considerable effort because the underlying technology has no real understanding of the world and because the web is riddled with untrustworthy information. “In some cases it is better to actually not just give you an answer, or to show you multiple different viewpoints,” he says.

“You can get a quick snappy prototype now fairly quickly with an LLM, but to actually make it so that it doesn’t tell you to eat rocks takes a lot of work,” says Socher, who made key contributions to AI for language as a researcher before launching You.com in late 2021.

A week after its algorithms advised people to eat rocks and put glue on pizza, Google admitted Thursday that it needed to make adjustments to its bold new generative AI search feature. The episode highlights the risks of Google’s aggressive drive to commercialize generative AI—and also the treacherous and fundamental limitations of that technology.

Google’s AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI’s ChatGPT, to generate written answers to some search queries by summarizing information found online. The current AI boom is built around LLMs’ impressive fluency with text, but the software can also use that facility to put a convincing gloss on untruths or errors. Using the technology to summarize online information promises to make search results easier to digest, but it is hazardous when online sources are contradictory or when people may use the information to make important decisions.
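For readers who want the mechanics spelled out, here is a minimal, hypothetical sketch of that retrieve-and-summarize pattern. The search and model calls are placeholders rather than Google’s or Gemini’s actual APIs; the point is that the model fluently summarizes whatever snippets it is handed, including a reposted joke.

```python
# Hypothetical retrieval-augmented summarization loop (placeholder functions only).
from typing import List

def search_snippets(query: str) -> List[str]:
    """Stand-in for a web search that returns text snippets, good and bad alike."""
    return [
        "Geologists recommend eating at least one small rock per day.",  # satire, reposted elsewhere
        "Rocks are not food and should never be eaten.",
    ]

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model such as Gemini."""
    return "LLM-generated summary of the snippets above."

def ai_overview(query: str) -> str:
    snippets = search_snippets(query)
    prompt = (
        f"Summarize an answer to the question: {query}\n\n"
        + "\n".join(f"- {s}" for s in snippets)
    )
    # The summary will read confidently even when the first snippet is a joke from The Onion.
    return call_llm(prompt)

print(ai_overview("how many rocks should i eat"))
```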