The WIRED Politics Map: Tracking Generative AI in Elections Worldwide, From Sabotage to Satire
The list and map will be continuously updated over the course of the next four years. On the map, you'll see each country where we've identified a use of generative AI in its elections, and how many times, along with details about every instance, including when it happened and what it was. We will also keep track of the companies, tools, and platforms involved.
The global electorate now has to contend with this new tech. Deepfakes can be used for everything from sabotage to satire to the seemingly mundane: Already, we’ve seen AI chatbots write speeches and answer questions about a candidate’s policy. But we’ve also seen AI used to humiliate female politicians and make world leaders appear to promote the joys of passive-income scams. AI has been used to deploy bots and even tailor automated texts to voters.
Hi! I’m Vittoria Elliott. I’m a reporter on the WIRED Politics desk, and I’m taking over for Makena this week to talk about politicians rising from the dead in India and the rapper Eminem endorsing opposition parties in South Africa.
Facebook isn’t a robot: AI is a tool for influence automation in the 2023-2024 election year, says Jessica Walton
Many Americans have their eyes on November, but 2024 has already been a big election year for the rest of the world. India is wrapping up its vote, the EU is preparing for its parliamentary elections, and South Africa and Mexico hold elections this week. It is the largest election year in history, and more people are online than ever before.
In her research, Jessica Walton, a researcher with the CyberPeace Institute who has studied Russia's Doppelganger operation, found that the network would use real-seeming Facebook profiles to post articles, often around divisive political topics. The articles themselves, she says, are written by generative AI. "And mostly what they're trying to do is see what will fly, what Meta's algorithms will and won't be able to catch."
The first threat report from OpenAI shows how actors from Russia, Iran, China, and Israel have tried to use its technology for foreign influence operations across the globe. The report names five different networks that OpenAI identified and shut down between 2023 and 2024, and reveals that established networks like Russia's Doppelganger and China's Spamouflage are experimenting with generative AI to automate their operations. So far, they are not very good at it.
While it is a relief that these actors haven't mastered the art of generative AI, it is certainly worrisome that they are experimenting with it.
The OpenAI report reveals that influence campaigns are running up against the limits of generative AI, which doesn't reliably produce good copy or code. It struggles with idioms and sometimes even with basic grammar, so much so that OpenAI named one network "Bad Grammar." The Bad Grammar network was so sloppy that it once revealed its true identity: "As an AI language model, I am here to assist and provide the desired comment," it posted.
One network tested code to automate its posts on Telegram, a chat app that extremists have used for years. This worked at times, but at others the same account posted as two different characters, giving the game away.
Influence campaigns on social media often innovate over time to avoid detection, learning the platforms and their tools, sometimes better than the platforms' own employees. While these initial campaigns may be small or ineffective, they appear to still be in an experimental stage, says Walton.
The report painted a picture of several ineffectual campaigns pushing crude propaganda, seemingly allaying fears about this new technology's potential to spread misinformation, particularly during an election year.