How trustworthy is your search engine? AI chatbots and their impact on trust in search results
AI-generated answers could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic, given that AI chatbots make mistakes. Google’s Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And those in the field know that ChatGPT will confidently answer questions to which it does not know the answer, hallucinating plausible-sounding but false information.
Some in the industry wonder whether ChatGPT has changed perceptions of when it is acceptable, or ethical, to use machine learning to generate text and images.
A 2022 study¹ by a team based at the University of Florida in Gainesville found that for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.
If search bots make enough errors, they risk undermining users’ perception of search engines as neutral arbiters of truth.
Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with a list of links and leave them to decide which sources to trust. By contrast, it is rarely known what data an LLM was trained on — was it Encyclopaedia Britannica or a gossip blog?
“It’s completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Urman.
She has conducted as-yet unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience, known as ‘featured snippets’, in which an extract from a page that is deemed particularly relevant to the search appears above the link, and ‘knowledge panels’ — summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of people Urman surveyed deemed these features accurate, and around 70% thought they were objective.
Also last week, Microsoft integrated ChatGPT-based technology into Bing search results. Sarah Bird, Microsoft’s head of responsible AI, acknowledged that the bot could still “hallucinate” untrue information, but said that the technology had been made more reliable. In the days that followed, Bing claimed that running was invented in the 1700s and tried to convince one user that the current year was 2022.
The race to create large language models began around 2018, alongside a movement to make ethics a key part of the AI design process; that year, Google adopted AI ethics principles that it said would limit future projects. In the past year, Microsoft, Meta, and Nvidia have all released projects based on large language models. Such models carry heightened ethical risks: they can produce harmful speech, and they are more likely to make things up.
OpenAI’s process for releasing models has changed over the past few years. Executives said that the text generator GPT-2 was released in stages over months in 2019, owing to fears of misuse and concerns about its impact on society (a strategy some criticized as a publicity stunt). In 2020, the training process for its more powerful successor, GPT-3, was well documented in public, but less than two months later OpenAI began commercializing the technology through an API for developers. By November 2022, the ChatGPT release process included no technical paper or research publication, only a blog post, a demo, and soon a subscription plan.
A new study shows that Americans are concerned about the pace of technological change. Alexa, play Terminator 2: Judgment Day.
“This technology is incredible. I think it’s the future. But, at the same time, it’s like we’re opening Pandora’s box. We need safeguards to adopt it safely.”
“There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of — how do we teach people to write in a world with ChatGPT? We’ve taught people how to do math in a world with calculators. I believe we can survive that.”