First Results from the Meta Research Collaboration on Facebook, Instagram, and the 2020 Election
The findings are in line with the idea that Facebook is only a small part of the media landscape and that most people’s belief systems are informed by a variety of sources. Facebook might have removed “stop the steal”-related content in 2020, for example, but election lies still ran rampant on Fox News, Newsmax, and other sources popular with conservatives. The rot in our democracy runs much deeper than what you find on Facebook; as I’ve said here before, you can’t solve fascism at the level of tech policy.
The researchers weren’t paid by Meta, but the company seemed pleased with the results, which were released today in four papers in Nature and Science. Nick Clegg, Meta’s president of global affairs, said in a statement that “the experimental findings add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization” or have “meaningful effects on” political views and behavior.
Despite past setbacks (more on those below), Meta and researchers have continued to explore new ways of working together. On Thursday, the first research to come out of this work was published.
Open Questions Beyond the 2020 Election Environment
“We don’t know what would have happened had we been able to do these studies over a period of a year or two years,” Guess said at a press briefing earlier this week. He noted that there is no way to account for the fact that many people have had Facebook and Instagram accounts for a decade or more, and that there is no way to know what the world would look like if social media had never existed.
“Surveys during and at the end of the experiments showed these differences did not translate into measurable effects on users’ attitudes,” Kupferschmidt writes. Participants did not differ from users in the control groups in how they felt about immigration, COVID-19 restrictions, or their trust in media and political institutions, and they were no more or less likely to vote in the 2020 election.
“I think there are unanswered questions about whether these effects would hold outside of the election environment, whether they would hold in an election where Donald Trump wasn’t one of the candidates,” says Michael Wagner, a professor of journalism and communication at the University of Wisconsin-Madison, who helped oversee Meta’s 2020 election project.
Meta’s Clegg also said that the research challenges “the now commonplace assertion that the ability to reshare content on social media drives polarization.”
Researchers weren’t quite so unequivocal. Resharing amplifies content from unreliable sources, one of the Science studies found. Another showed that the misinformation caught by the platform’s third-party fact-checkers is consumed overwhelmingly by conservatives, with no equivalent on the opposite side of the political aisle.
In principle, the company agreed to do that. But it has been a long road. The Cambridge Analytica data privacy scandal of 2018, which originated in an academic research partnership, made Meta understandably anxious about sharing data with social scientists. A later partnership, Social Science One, took so long to produce data that its biggest backers quit before it produced anything of note. (It later turned out that Meta had accidentally provided researchers with bad data, effectively ruining the research in progress.)
In each of the experiments, the tweaks did change the kind of content users saw: removing reshared posts made people see far less political news and less news from untrustworthy sources, for instance, but more uncivil content. Replacing the ranking algorithm with a chronological feed led to users seeing more untrustworthy content, even as it cut hateful and intolerant content in the feed roughly in half. Users in the experiments also ended up spending much less time on the platforms than other users, suggesting the altered feeds had become less compelling.
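To make these experimental conditions concrete, here is a minimal sketch of the feed variants the studies describe. Everything in it is an illustrative assumption: the post fields, the engagement weights, and the like-minded cap stand in for Meta's proprietary ranking signals, which the papers don't disclose.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float           # seconds since epoch
    likes: int = 0
    reshares: int = 0
    is_reshare: bool = False   # reposted from outside the viewer's network
    like_minded: bool = False  # from an ideologically similar source

def ranked_feed(posts):
    """Baseline condition: order posts by a toy engagement score.
    (Real ranking systems weigh many more signals than this.)"""
    return sorted(posts, key=lambda p: p.likes + 3 * p.reshares, reverse=True)

def chronological_feed(posts):
    """Chronological condition: newest posts first, engagement ignored."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def no_reshares_feed(posts):
    """Reshare-removal condition: drop reshared posts, then rank as usual."""
    return ranked_feed([p for p in posts if not p.is_reshare])

def limited_like_minded_feed(posts, cap_fraction=0.3):
    """Like-minded-limit condition: cap how much of the feed can come
    from ideologically similar sources. The 0.3 cap is made up."""
    ranked = ranked_feed(posts)
    budget = int(cap_fraction * len(ranked))
    feed = []
    for post in ranked:
        if post.like_minded:
            if budget == 0:
                continue  # over the cap; skip this post
            budget -= 1
        feed.append(post)
    return feed
```

Even in this toy version, the experiments' logic is visible: each variant changes the mix of content shown before anyone measures downstream attitudes.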
By themselves, the findings fail to confirm the arguments of Meta’s worst critics, who hold that the company’s products have played a leading role in the polarization of the United States, putting democracy at risk. But neither do they suggest an obvious way to change the feed that would have a positive effect.
Science headlined its package on the studies “Wired to Split,” leading to this amazing detail from Horwitz: “Representatives of the publication said Meta and outside researchers had asked for a question mark to be added to the title to reflect uncertainty, but that the publication considers its presentation of the research to be fair.”
The good news is that more research is on the way: another 12 studies from the same project are still to come. In their totality, we should be able to draw stronger conclusions.
Collaborative research models like this one are also limited, Wagner argues, because outside researchers don’t fully grasp how complicated the data architecture and code at corporations such as Meta really are. “Simply put, researchers don’t know what they don’t know, and the incentives are not clear for industry partners to reveal everything they know about their platforms.”
The main problem, he wrote, is that the research was done on Meta’s terms rather than the scientists’. There are some good reasons for this — Facebook users have a right to privacy, and regulators will punish the company mightily if it is violated — but the trade-offs are real.
“Independence by permission,” in other words, isn’t all it’s cracked up to be. Wagner calls the arrangement a sign of things to come in the academy, one in which a for-profit corporation decides how generous to be with data and research opportunities for a small group of researchers. “Scholarship is not wholly independent when the data are held by for-profit corporations, nor is it independent when those same corporations can limit the nature of what is studied.”
Tucker believes the project will encourage further research, but warns that whether that research happens remains up to Meta and other social-media platforms. He is hopeful that society will take action to ensure this kind of work persists in the future.
In one study in Science, a group of 23,361 Facebook users and 21,373 Instagram users were randomly assigned to receive either their usual algorithmically ranked feeds or chronological feeds, in which the newest posts appear first. The theory was that a chronological feed would broaden the content that users saw.
Two other interventions, published in Science and Nature, also showed little effect: one limited “reshared” content — which comes from outside a user’s network but is reposted by a connection or a group to which the user belongs — and the other limited content from “like-minded” users and groups.
For Joshua Tucker, co-director of the Center for Social Media and Politics at New York University and a lead investigator on the project, the lesson is clear: some of the proposed solutions for reducing online echo chambers and improving social discourse would probably not have had much of an impact during the 2020 election. But Tucker acknowledges that there are limits to the inferences that can be drawn from the research.
The authors say that all of the data that were collected will be made available to other researchers. Nonetheless, some academics question the model, in part because it depends entirely on Meta’s willingness to participate.