
What do we know about how misinformation is spread online?

Nature: https://www.nature.com/articles/d41586-024-01618-z

How Much AI Is Used to Create Misinformation on Social Media? The Challenges of AI-Generated Misinformation and Propaganda

Can social-media firms do more? WhatsApp’s decentralized nature presents both opportunities and challenges for content distribution: although it enables the widespread dissemination of information, it also complicates efforts to regulate and moderate content. GenAI can create personalized, targeted content at large scale. Without effective moderation mechanisms, platforms risk becoming breeding grounds for misinformation and propaganda.

Nonetheless, as the technology matures and becomes more widely accessible, the need for vigilance and countermeasures will rise. GenAI can generate a large volume of images rapidly and at low cost, and it can customize content dynamically to resonate with specific cultural or social groups. Its use can also obscure the origin of content, making its creators difficult to trace. This combination of cost-efficiency and anonymity could make genAI a powerful tool for shaping public opinion.

Rapid infrastructure development, including the modernization of train stations, is a key policy goal for the Bharatiya Janata Party, which seeks to portray India as a swiftly modernizing economy to both domestic and international audiences. We have no direct evidence, however, indicating who created or disseminated these AI-generated images.

Efforts to target AI-generated misinformation can end up compromising digital privacy and security, which would be counterproductive. We firmly caution against proposals that might undermine end-to-end encryption on platforms such as WhatsApp.

Collaboratively evolving new regulatory frameworks, involving governments, the technology industry and academic bodies, can ensure that standards keep pace with AI advancements. The final piece of the puzzle is building technology that can detect AI-generated content at scale, a necessity in the age of synthetic media.

It is not known how much AI is used to create fake news, how often such content is shared on social media, or how it affects public opinion. These uncertainties underscore the challenges of tackling AI-driven misinformation and highlight the need for more robust research and monitoring to grasp the full implications of these technologies.

It is important to note that misinformation can easily be created through low-tech means, such as by misattributing an old, out-of-context image to a current event, or through the use of off-the-shelf photo-editing software. Thus, genAI might not change the nature of misinformation much.

Over the past 12 months, several high-profile examples of deepfakes produced by genAI have emerged. These include images of US Republican presidential candidate Donald Trump in the company of a group of African American supporters, an audio recording of a Slovak politician claiming to have rigged an upcoming election, and an image purporting to show an explosion outside the Pentagon, the headquarters of the US Department of Defense.

GenAI content on WhatsApp: first results from a large-scale study in India

Between August and October 2023, we collected a data set of around two million messages from mostly private WhatsApp groups in India, giving us, for the first time, a sense of what ordinary WhatsApp users were seeing and discussing. The topics varied broadly, encompassing religion, education, village life and other national and local matters.

We identified content that spread quickly through WhatsApp’s ‘forwarded many times’ tag. This tag is placed on content that has been forwarded through a chain of at least five hops from the original sender. Five hops means that a message could already have been distributed to a very large number of users, although WhatsApp does not disclose the exact number.
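To make the five-hop rule concrete, here is a minimal sketch in Python of how such hop-count tagging could work. The message structure, field names and the threshold constant are illustrative assumptions; WhatsApp’s actual implementation is not public.

```python
# Minimal sketch of hop-count tagging for forwarded messages.
# The data model and the threshold of five hops are assumptions for
# illustration; WhatsApp does not publish its implementation.

from dataclasses import dataclass, replace

FORWARD_TAG_THRESHOLD = 5  # hops from the original sender

@dataclass(frozen=True)
class Message:
    text: str
    hops: int = 0  # 0 = original, unforwarded message

    @property
    def forwarded_many_times(self) -> bool:
        return self.hops >= FORWARD_TAG_THRESHOLD

def forward(msg: Message) -> Message:
    """Return the copy a recipient sees after one more forward."""
    return replace(msg, hops=msg.hops + 1)

msg = Message("Breaking news ...")
for _ in range(5):
    msg = forward(msg)
print(msg.forwarded_many_times)  # True: tagged after five hops
```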

Of the 1,858 viral messages, fewer than two dozen contained instances of genAI-created content, a footprint of roughly 1%. This figure might be a slight undercount, and it does not include text messages, but the finding suggests a low prevalence of genAI-created content in India. That low prevalence seems to have persisted during the multi-phase general election that has just concluded in the country: we have not found any spike in the amount of genAI content used for political campaigning. GenAI might therefore not have as big an impact on election misinformation as was feared.
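For reference, the footprint reported above is simple arithmetic; the sketch below assumes a count of 20 genAI items (the text says only “fewer than two dozen”) purely to illustrate the calculation.

```python
# Illustrative prevalence calculation. The exact genAI count is not
# given ("fewer than two dozen"), so 20 is an assumed value.
viral_messages = 1858
genai_items = 20  # assumption for illustration

prevalence = genai_items / viral_messages
print(f"{prevalence:.1%}")  # -> 1.1%, i.e. a footprint of roughly 1%
```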

Nonetheless, these are early days for the technology, and our first findings showcase genAI’s power to manufacture compelling, culturally resonant visuals and narratives, extending beyond what conventional content creators might achieve.

One category of misleading content we identified relates to infrastructure projects. Visually plausible genAI images of a futuristic train station, allegedly depicting a new facility at Ayodhya, a city many Hindus believe to be the birthplace of the deity Lord Ram, managed to spread widely. The pictures showed a station with depictions of Lord Ram on its walls. The site has been a focus of religious tension since the demolition of a mosque there by Hindu nationalists in December 1992.

Our study points to the potential for personalized influence campaigns in the future, even though we see little evidence of them at the moment. Even content that resembles animation can be effective, as long as the imagery is hyper-idealized for viewers with sympathetic beliefs. This combination of emotional engagement and visual credibility can make for persuasive content, although further study is needed.

The first step in tackling misinformation must be for companies to engage more with researchers. Studies that have already been performed show that it is possible to collaborate ethically on data while ensuring people’s privacy. Taking steps against the spread of provable falsehoods does not curb freedom of speech, provided it is done transparently. If companies are not willing to share data, regulators should compel them to do so.

There is a similar opacity in another, even less well-studied part of the misinformation ecosystem: its funding. The web’s advert-funded model has boosted the production of misinformation. Automated advertising exchanges assign ad space on the basis of which sites people are visiting, and the sites get a cut if users click on the ads.
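As a rough illustration of the advert-funded model just described, the sketch below estimates a site’s cut of click revenue. All figures, including the click-through rate, the cost per click and the publisher’s revenue share, are hypothetical numbers chosen for illustration.

```python
# Toy model of the advert-funded revenue described above.
# All rates are hypothetical, not real exchange figures.

def publisher_revenue(page_views: int, click_through_rate: float,
                      cost_per_click: float, publisher_share: float) -> float:
    """Estimated payout to the site hosting the ads."""
    clicks = page_views * click_through_rate
    return clicks * cost_per_click * publisher_share

# A site with 1 million views, a 1% click-through rate, $0.50 per
# click, keeping 70% of click revenue after the exchange's cut:
print(publisher_revenue(1_000_000, 0.01, 0.50, 0.70))  # -> 3500.0
```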

“The Holocaust did happen. COVID-19 vaccines have saved millions of lives. There was no large-scale fraud in the 2020 US presidential election.” These are three statements of indisputable fact. Indisputable, and yet each is contested in some quarters of the Internet.

One of the articles in the recent issue of Nature was written by Ullrich Ecker, a cognitive scientist at the University of Western Australia in Perth. It is a crucial time to highlight this subject: with more than 60% of the world’s population online, false and misleading information is easy to find and spread, with consequences ranging from increased vaccine hesitancy to greater political polarization. In a year in which more than four billion people are expected to vote in major elections, sensitivities around misinformation are heightened.

This period included the attack on the US Capitol on 6 January 2021, after which Twitter deplatformed 70,000 users deemed to be trafficking in misinformation. The authors show that there was a marked drop in the sharing of misinformation afterwards. It is difficult to disentangle whether the ban itself made users less likely to share false stories of a stolen presidential election, or whether the Capitol violence changed behaviour on its own. Either way, the team writes, the circumstances constituted a natural experiment that shows how misinformation can be countered by social-media platforms enforcing their terms of use.

Such experiments are unlikely to be repeated. One of the study’s authors said that he and his colleagues were lucky to have collected the data before the attack, during a period when Twitter was still relatively open with its data. Since its takeover by entrepreneur Elon Musk, the platform, now rebranded X, has not only reduced content moderation and enforcement, but has also limited researchers’ access to its data.
