
Meta is cracking down on violent content in the wake of Hamas attacks

The Verge: https://www.theverge.com/2023/10/13/23915742/meta-israel-hamas-content-moderation-arabic-hebrew-speakers

Stopping Fake News and War Misinformation on Social Media: EU Commissioner Thierry Breton’s Letters to Meta and X

X said it formed a response group after news of the Hamas attack broke, and pointed to its existing policies against violent speech and violent entities. CEO Linda Yaccarino said the platform had responded to more than 80 takedown requests received in the EU “within required timelines,” but noted that it had not received any notices from Europol about illegal content on the service.

The EU’s response is still pending. Breton had previously warned X that noncompliance with the DSA could lead to an investigation and possible fines.

Yaccarino’s letter also emphasizes how X has been using Community Notes to combat misinformation on its platform, noting that over 700 unique notes relating to the attacks are being shown. But the volunteer-powered system is under a lot of strain, with some notes taking hours or even days to be approved, and other posts not being labeled at all.

“Up to you to demonstrate that you walk the talk,” Breton wrote to Musk. “My team remains at your disposal to ensure DSA compliance, which the EU will continue to enforce rigorously.”

The EU has warned top social media executives that their platforms risk being used to spread misinformation about the Israel-Hamas war, and that they could face severe financial penalties if they fail to act.

In a letter to Meta CEO Mark Zuckerberg, Breton gave the company 24 hours to come up with a plan to stop the spread of fake news and war-related misinformation on the platform.

Breton noted in his letter to Zuckerberg that regulators are focused not only on content related to the Israel-Hamas war, but also on falsehoods that could spread in connection with upcoming elections in Europe, including in Poland and the Netherlands.

“Because your platform is used extensively by children and teenagers, you have a particular obligation to protect them from violent content depicting hostage taking and other graphic videos, which are reported to be widely circulating on your platform without appropriate safeguards,” Breton wrote.

Since Hamas militants attacked Israel on Saturday morning, fabricated photos, video clips, and other bogus content purporting to portray the violence in the region have been wreaking havoc on social media platforms, making sorting fact from fiction a daunting task.

How the EU’s Digital Services Act Is Pushing Social Media Platforms to Change

Musk made alterations to the platform’s verification policies last year: anyone willing to pay a monthly fee can now receive a blue check mark, a badge previously reserved for credible news organizations and notable people. The paid-for “verification” mark also boosts the reach of posts, an arrangement that misinformation experts say has contributed to considerable chaos on the site.

The swift response reflects the EU’s Digital Services Act, one of the toughest online safety laws in the world. The law carries stiff fines for tech companies that violate its rules.

Under the law, social media platforms like Facebook, Instagram, Twitter and TikTok must quickly remove posts inciting violence, featuring manipulated media seen as propaganda, or containing any kind of hate speech, or be hit with financial penalties that far exceed what any U.S. authority has ever imposed: up to 6% of a company’s annual global revenue.

“Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact-checkers in the region to limit the spread of misinformation,” Meta said in a statement, adding that it would continue this work as the conflict unfolded.

The DSA also takes very seriously the risk of fake and manipulated images and facts generated with the intent to influence elections.

Meta removed seven times as many pieces of content on a daily basis in the three days immediately after the terrorist attacks in Israel as it did in the two months prior. The disclosure came as part of a blog post in which the social media company outlined its moderation efforts during the ongoing war in Israel.

Meta’s more recent track record on moderation has not been perfect, however. Members of its Trusted Partner program, which is supposed to allow expert organizations to raise concerns about content on Facebook and Instagram with the company, have complained of slow responses, and the company has faced criticism for shifting moderation policies surrounding the Russia-Ukraine war.

X’s outline of its moderation efforts does not mention which languages its response team speaks. The European Commission has since sent X a request for information under the Digital Services Act, citing the alleged spread of illegal content and disinformation, including terrorist and violent content and hate speech.
