Meta says you have to label your fakes or it might just pull them


Clegg Downplays the Risk of Fake Videos and Audio Overrunning Meta’s Platforms

Meta is testing the use of large language models trained on its Community Standards as an efficient triage mechanism for its tens of thousands of human reviewers. The company says the approach so far appears to be a precise and efficient way of ensuring that edge cases requiring human judgment are routed to those reviewers.

There are already plenty of examples of viral, AI-generated posts of politicians, but Clegg downplayed the chances of the phenomenon overrunning Meta’s platforms in an election year. “I think it’s really unlikely that you’re going to get a video or audio which is entirely synthetic of very significant political importance which we don’t get to see pretty quickly,” he said. “I just don’t think that’s the way that it’s going to play out.”

Meta is relying on users to fill the void. The company warned on Tuesday that it may penalize accounts that fail to disclose when they post realistic video or audio that has been digitally created or altered.

Meta has been working with groups like the Partnership on AI to build on existing content authenticity initiatives. Content Credentials, a system launched by Adobe, embeds content provenance information into an image’s metadata. Google’s SynthID watermark, originally applied to images, was recently extended to audio files.

Manipulated but Not AI-Generated: The Oversight Board’s Ruling on an Edited Biden Video

“For those who are worried about video, audio content being designed to materially deceive the public on a matter of political importance in the run-up to the election, we’re going to be pretty vigilant,” he said. “Do I think that there is a possibility that something may happen where, however quickly it’s detected or quickly labeled, nonetheless we’re somehow accused of having dropped the ball? Yeah, I think that is possible, if not likely.”

Not all misleading manipulated media is AI-generated, and some of it may slip past Meta’s new policies. A ruling released on Monday by Meta’s Oversight Board of independent experts, which reviews some moderation calls, upheld the company’s decision to leave up a video of President Joe Biden that had been edited to make it appear that he is inappropriately touching his granddaughter’s chest. But the board said that while the video, which was not AI-generated, didn’t violate Meta’s current policies, the company should revise and expand its rules for “manipulated media” to cover more than just AI-generated content.

That expands on Meta’s requirement, introduced in November, that political ads include a disclosure when they contain digitally generated or altered images, video or audio.

“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” Clegg said.

The labels will apply only to images from companies whose tools embed technical metadata and watermarks in their output. Images created with Meta’s own AI tools are already labeled “Imagined with AI.”

Still, there are gaps. Other image generators, including open-source models, may never incorporate these kinds of markers. Meta says it is working on automated tools that can detect AI-generated content even when it carries no such markers.

An image of the pope in a white puffer jacket went viral last year, prompting internet users to debate whether he was really that stylish. Fake images of former President Donald Trump being arrested caused similar confusion, even though the person who generated the images said they were made with artificial intelligence.

Meta, which owns Facebook, Instagram and Threads, said on Tuesday that it will start labeling images created with artificial intelligence in the coming months. The move comes as tech companies — both those that build AI software and those that host its outputs — face growing pressure to address the technology’s potential to mislead people.

The Fake Biden Robocall: AI Used to Mislead Voters in New Hampshire

Millions of people will be voting in high-profile elections around the world this year. Experts and regulators have warned that deepfakes — digitally manipulated media — could be used to amplify efforts to mislead, discourage and manipulate voters.

According to Hany Farid, a professor at the UC Berkeley School of Information, anyone intent on using generative AI maliciously will most likely turn to tools that don’t watermark their output or otherwise betray its synthetic nature. The creators of the fake robocall sent to some New Hampshire voters in President Joe Biden’s voice did not disclose its origins.