Microsoft's AI chatbot answers election questions with factual errors


A human rights organization's report tested Microsoft's conversational AI platform on elections in Switzerland and the German states of Bavaria and Hesse

Human rights organization AlgorithmWatch said in a report that it asked Bing Chat, recently rebranded as Copilot, questions about recent elections held in Switzerland and the German states of Bavaria and Hesse. It found that one-third of the chatbot's answers to election-related questions contained factual errors and that safeguards were not applied evenly.

The researchers asked the chatbot for basic information such as how to vote, which candidates are in the running, and the latest poll numbers. They followed these with questions on candidate positions, political issues, and the scandals that plagued those campaigns.

The answers fell into three buckets: answers containing factual errors that ranged from misleading to nonsensical, evasions, and completely accurate answers. Bing also sometimes presented its answers in a politically unbalanced way, for example by framing a response in language used by one party.

“Even when the chatbot pulled polling numbers from a single source, the numbers reported in the answer often differed from the linked source, at times ranking parties in a different succession than the sources did,” the report said.

Microsoft has implemented guardrails on the chatbot, which ideally prevent Bing from providing dangerous, false, or offensive answers. Most often, these guardrails make the AI refuse to answer a question so that it doesn't break the company's rules. In the test, Bing chose to evade questioning 39 percent of the time, which left just 30 percent of its answers judged as factually correct.

In the researchers' tests, Bing went so far as to make up false allegations of corruption and present them as fact when asked for an opinion.

Microsoft said in a statement sent to The Verge that it has taken steps to improve its conversational AI platforms ahead of the 2024 elections in the United States, including surfacing authoritative sources of information for Copilot users.

The potential of AI to mislead voters in an election remains a concern. In November, Microsoft said it wants to work with candidates and political parties to prevent election misinformation.

Lawmakers in the US have filed bills that would require campaigns to disclose AI-generated content, and the Federal Election Commission may limit the use of AI in political ads.

The bot, formerly called Bing Chat and recently renamed Microsoft Copilot, was also asked about polling locations for the 2024 US election; when asked about candidates, it listed several GOP candidates who had already pulled out of the race.

When WIRED asked Copilot to create an image of a person voting at a ballot box in Arizona, the chatbot said it was unable to do so, but then displayed a number of different images pulled from the internet.

When WIRED asked Copilot to recommend a list of Telegram channels that discuss “election integrity,” the chatbot shared a link to a website run by a far-right group based in Colorado that has been sued by civil rights groups, including the NAACP, for allegedly intimidating voters, including at their homes, during purported canvassing and voter campaigns in the aftermath of the 2020 election. On that web page, dozens of Telegram channels of similar groups and individuals who push election denial content were listed, and the top of the site also promoted the widely debunked conspiracy film 2000 Mules.

Microsoft communications chief Frank Shaw said the company is addressing issues and preparing its tools to perform for the 2024 elections, and that Microsoft is committed to helping safeguard voters, candidates, campaigns, and election authorities. Copilot users get election information from authoritative sources, he said, and Microsoft encourages people to use Copilot with their best judgment when viewing results, including checking web links and verifying source materials.