
The Supreme Court is weighing a challenge to the social media platforms’ liability shield

The Verge: https://www.theverge.com/23435358/first-amendment-free-speech-midterm-elections-courts-hypocrisy

Gonzalez v. Google: the Paris attacks and YouTube’s targeted recommendations

And more could be coming: the Supreme Court is still mulling whether to hear several additional cases with implications for Section 230, while members of Congress have expressed renewed enthusiasm for rolling back the law’s protections for websites, and President Joe Biden has called for the same in a recent op-ed.

Many Section 230 cases have tragic circumstances. The family of Nohemi Gonzalez, who was killed along with 128 other people in the November 2015 Paris attacks, is suing Google, claiming that YouTube provided assistance to terrorists in violation of the Anti-Terrorism Act. At the heart of this dispute is not merely that YouTube hosted ISIS videos but, as the plaintiffs wrote in legal filings, YouTube’s targeted recommendations of ISIS videos. “Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos,” the plaintiffs wrote. In other words, YouTube allegedly showed ISIS videos to those more likely to be radicalized.

“Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” lawyers for the family argued in their petition seeking Supreme Court review.

Google has countered that Section 230 bars these claims because they would treat websites as the publishers of third-party content. “Publishers’ central function is curating and displaying content of interest to users. Petitioners’ contrary reading contravenes Section 230’s text, lacks a limiting principle and risks gutting this important statute.”

Platform moderation has become one of the few enforcement mechanisms for punishing bad behavior online. When bad behavior spirals into something worse, people look for someone to blame. And since no politician can blame the First Amendment, they blame Section 230 instead. The lesson: when politicians talk about regulating Big Tech, it is usually the First Amendment that is actually in danger.

These attacks on the First Amendment are already affecting some of the most vulnerable Americans, but they have far-reaching implications for everyone. The Texas and Florida laws aren’t even written carefully enough to ensure they only apply to “Big Tech.” According to some interpretations, the Texas law means that Wikipedia can’t take down edits that don’t meet its standards. If courts rule in favor of the Republicans, your inbox could get a lot messier.

The courts and lawmakers have turned the cultural backlash against Big Tech into glib sound bites and political warfare rather than grappling with technology’s effects on democracy. Scratch the surface of internet regulation and you find a mess of mutually exclusive demands. Some of the people most eager to dismantle the First Amendment won’t even admit that’s what they are doing.

Almost every American politician professes to love the First Amendment. Many of them also profess to hate another law: Section 230 of the Communications Decency Act. The clearer it becomes that they actually hate the First Amendment and are fine with Section 230, the funnier the whole spectacle gets.

Preserving first principles: what Section 230 actually says

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law was enacted in 1996, and courts have interpreted it broadly ever since. It means that websites, newspapers, gossip blogs, listserv operators, and other parties can’t be sued for hosting or reposting someone else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases (Cubby v. CompuServe and Stratton Oakmont v. Prodigy), but it’s been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of “objectionable” content.

It is weird to be in a position of arguing against limits on corporate power to control speech. Facebook, TikTok, Twitter, and other companies all play a huge role in public discourse and exercise a huge amount of influence over how Americans can connect with each other. It’s getting harder and harder to talk to other people in a way that’s not monitored and approved by an increasingly small number of companies.

But making false claims about pandemic science isn’t necessarily illegal, so repealing Section 230 wouldn’t suddenly force companies to remove misinformation; the First Amendment protects most scientific claims. Imagine if researchers and news outlets could be sued for publishing good-faith conclusions that were later proven incorrect, like early assertions that covid wasn’t airborne.

Removing Section 230 protections is a sneaky way for politicians to get around the First Amendment. Without 230, the cost of operating a social media site in the United States would skyrocket due to litigation. Sites could face lengthy lawsuits over legal content if they couldn’t invoke a straightforward 230 defense. And when it comes to dicier categories of speech, web platforms would be incentivized to remove posts that might be illegal (anything from unfavorable restaurant reviews to MeToo allegations) even if those posts would have ultimately prevailed in court. All of this would burn a lot of time and money, so it’s no wonder that when politicians gripe, the platforms respond.

How well do courts handle speech in the age of social media? Insights from the Alex Jones and Johnny Depp cases

It’s also not clear whether it matters. Alex Jones, found liable for defaming the families of Sandy Hook victims, declared corporate bankruptcy during the proceedings, tying up much of his money indefinitely and leaving the families struggling to chase it. He treated the court proceedings contemptuously and used them to hawk dubious health supplements to his followers. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior; if anything, it gave him another platform to proclaim himself a martyr.

Johnny Depp’s defamation case, by contrast, was brought against Amber Heard after she came forward describing herself as a victim of abuse. Heard lacked Jones’ social media savvy, and her case was not cut-and-dried. The trial turned into a public humiliation of Heard, thanks to the incentives of social media but also to the courts’ failure to reckon with how things like livestreams fed the media circus. Defamation claims can meaningfully hurt people who have a reputation to maintain, while the worst offenders are already beyond shame.

Up until this point, I’ve almost exclusively addressed Democratic and bipartisan proposals to reform Section 230 because those at least have some shred of substance to them.

Republican-proposed speech reforms are ludicrously, bizarrely bad. We’ve learned just how bad over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter were using it to ban some posts from conservative politicians, among countless other pieces of content.

The bans shouldn’t be allowed as long as the First Amendment stands: they are government regulation of speech. But the Fifth Circuit Court of Appeals threw a wrench in the works when it reinstated Texas’ law without explaining its reasoning. When the court later published its opinion, legal commentator Ken White called it the most angrily incoherent First Amendment decision he had ever read.

The Supreme Court temporarily blocked the Texas law, but its recent statements on speech haven’t been reassuring. We have some idea where Clarence Thomas stands, since he has gone out of his way to argue that a platform like Twitter could be treated as a public utility. (Leave aside that conservatives previously raged against the idea of treating ISPs like a public utility in order to regulate them; it will make your brain hurt.)

Thomas, along with two other conservative justices, voted against putting the law on hold, and some court watchers believe Elena Kagan’s vote to do the same was a protest against the shadow docket, where the ruling took place.

The Texas and Florida laws could only be defended by a useful idiot. Their rules are rigged so that political targets are punished more harshly than regular people. They attack the Big Tech platforms for their power while ignoring the fact that internet service providers control the chokepoints that allow access to those platforms. And a movement willing to blow up the entire copyright system just to punish Disney for stepping out of line is too intellectually bankrupt to save.

And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. On top of getting books pulled from schools and libraries, a Republican state delegate in Virginia dug up a rarely used obscenity law to try to stop Barnes & Noble from selling the graphic memoir Gender Queer and the young adult novel A Court of Mist and Fury, a suit that, in a victory for a functional American court system, was thrown out earlier this year. The panic over “grooming” affects all Americans, not just LGBTQ Americans. Even as Texas tries to stop Facebook from kicking off violent insurrectionists, it’s suing Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against “child erotica.”

There is a real tension here: read the First Amendment expansively enough, and almost all software code is speech, which would make online services nearly impossible to regulate. Section 230 has already been invoked by both Airbnb and Amazon to defend against claims over faulty physical goods and services, and it remains an open question how far the law should stretch for companies whose core business has less to do with speech than with software.

Balk’s Law, the writer Alex Balk’s maxim that everything you hate about the internet is actually everything you hate about people, is obviously an oversimplification. Platforms change us too, encouraging some kinds of posts and subjects over others. But still, the internet is humanity at scale, crammed into spaces owned by a few powerful companies, and humanity at scale can be really ugly. And when abuse is spread across many people, or no single person’s conduct rises to the level of a viable legal case, the legal system offers little recourse.

The Supreme Court has scheduled arguments for the two major internet moderation cases, Gonzalez v. Google and Twitter v. Taamneh, for February 21st and 22nd, 2023.

Editor’s Note: Katie Harbath is a fellow at the Bipartisan Policy Center and a former public policy director at Facebook. BPC accepts funding from some tech companies to support its work helping them get election information to their users. The views expressed in this commentary are the author’s own. Read more opinion articles on CNN.

When it comes to moderation of online content, social media firms have a responsibility to balance competing interests and differing views of the world.

When thinking about this problem, it’s important not to tackle it just by looking at what any piece of content says. A multi-pronged approach is needed, one that looks not only at the content itself but also at the behavior of people on the platform, at how much reach content should get, and at options that let users take more control over what they see in their feeds.

First of all, a platform has to ensure that everyone can exercise their right to free speech and safely express their opinions. That is why every platform has to moderate its content.

The era of harassment: how platforms manage the reach of content in their feeds and when they remove it. The case of Twitter

Some content, like child sexual abuse material, must be removed under the law. Other content is legal but unsavory, and advertisers don’t want their ads appearing next to it.

Nobody likes being harassed by an online mob. All that does is drive people away or silence them, and a platform where that happens isn’t a true free speech platform. A recent example is Twitter, where the former head of trust and safety fled his home due to threats he received following criticism directed at him. This is why platforms try to shut down harassment campaigns that users coordinate online.

Second, there are more options beyond leaving the content up or taking it down. Meta characterizes this as remove, reduce and inform; instead of taking potentially problematic, but not violating, content down, platforms can reduce the reach of that content and/or add informative labels to it to give a user more context.

This middle ground is necessary because many of the most engaging posts are borderline, meaning they go right up to the line of the rules without crossing it. Here a platform may not feel comfortable removing content such as clickbait but will still want to take some action, since some users and advertisers won’t want to see it.
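To make the tiered approach concrete, here is a minimal sketch of what a remove/reduce/inform decision could look like in code. The categories, thresholds, and signals are hypothetical illustrations, not any platform’s actual policy or system:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"  # take the post down entirely
    REDUCE = "reduce"  # leave it up, but rank it lower in feeds
    INFORM = "inform"  # leave it up, but attach a context label
    ALLOW = "allow"    # no intervention

@dataclass
class Post:
    text: str
    violates_law: bool       # e.g. illegal content: always comes down
    violates_policy: bool    # breaks the platform's own rules
    borderline_score: float  # 0.0-1.0, how close to the line it is

def moderate(post: Post) -> Action:
    """Tiered remove/reduce/inform decision with made-up thresholds."""
    if post.violates_law or post.violates_policy:
        return Action.REMOVE
    if post.borderline_score > 0.8:  # engaging but right up against the rules
        return Action.REDUCE
    if post.borderline_score > 0.5:  # questionable enough to warrant context
        return Action.INFORM
    return Action.ALLOW

# Example: borderline clickbait stays up but gets demoted.
print(moderate(Post("You won't BELIEVE this", False, False, 0.85)))  # Action.REDUCE
```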

Source: https://www.cnn.com/2022/12/21/opinions/twitter-files-content-moderation-harbath/index.html

Free speech is not free reach: shadow banning and the case for transparency

Some argue, as they did about one installment of the Twitter Files, that reducing a post’s reach is itself a scandal. But as the researcher Renée DiResta has put it, free speech does not mean free reach.

The third point is transparency: who is making these decisions, and how do they rank their priorities? Complaints about shadow banning are, at bottom, complaints about content having its reach quietly reduced without the user knowing.

Users are angry above all that they don’t know what happened, and platforms need to do more on this front. For instance, Instagram recently announced that people can check on their accounts whether they are eligible to be recommended to users who don’t follow them. Under its rules, accounts that share sexually explicit material, clickbait, or certain other types of content are not eligible to be recommended to non-followers.

Lastly, platforms can give users more control over the types of moderation they are comfortable with. The political scientist Francis Fukuyama has proposed one version of this, known as middleware: third-party tools that would let users decide what they see in their feeds, tuning the experience to whatever they need to feel safe online. Some platforms, such as Facebook, already let people switch their feed to a chronological one.
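A rough sketch of the middleware idea follows. Every name and interface here is a hypothetical illustration of the concept, not an API any platform actually exposes:

```python
from typing import Callable

# A middleware is a user-chosen function that filters or reorders the
# candidate posts the platform would otherwise rank on its own.
FeedMiddleware = Callable[[list[dict]], list[dict]]

def chronological(posts: list[dict]) -> list[dict]:
    # Newest first, ignoring engagement signals entirely.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def hide_topic(topic: str) -> FeedMiddleware:
    # Factory that builds a filter dropping posts mentioning a topic.
    def middleware(posts: list[dict]) -> list[dict]:
        return [p for p in posts if topic.lower() not in p["text"].lower()]
    return middleware

def build_feed(candidates: list[dict], stack: list[FeedMiddleware]) -> list[dict]:
    # The platform supplies candidates; the user's chosen stack shapes the feed.
    for mw in stack:
        candidates = mw(candidates)
    return candidates

# Example: a chronological feed that hides posts about a chosen topic.
feed = build_feed(
    [{"text": "New diet trend", "timestamp": 2}, {"text": "Hello world", "timestamp": 1}],
    stack=[hide_topic("diet"), chronological],
)
print(feed)  # only the "Hello world" post remains
```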

The issues of speech and safety are genuinely difficult to address, and our society’s norms around speech and online accountability are still evolving.

We will need to understand how platforms make these decisions in order to figure this out. Governments need to find the right ways to regulate platforms, and academic organizations outside of these platforms need to be willing to say how they would make some of these difficult calls.

Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.

The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.

Section 230 under attack: an executive order, the Supreme Court, and warnings from Reddit and the Anti-Defamation League

A 2020 executive order from then-President Donald Trump tried to enlist the FCC to narrow Section 230. It faced a number of legal and procedural problems, not least of which was that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.

The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.

With Section 230’s scope now before the courts, the US Supreme Court has an opportunity to dictate how far the law’s protections reach.

Tech critics have called for added legal exposure and accountability, arguing that the social media industry has grown up shielded from the courts and from the development of a body of case law. The Anti-Defamation League wrote in a Supreme Court brief that it is very irregular for a global industry that wields staggering influence to be protected from judicial inquiry.

For the tech giants, and even for many of Big Tech’s fiercest competitors, stripping those protections would be a bad thing, because it would undermine what has allowed the internet to flourish. It would potentially put many websites and users into unwitting and abrupt legal jeopardy, they say, and it would dramatically change how some websites operate in order to avoid liability.

“‘Recommendations’ are the very thing that make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. “It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity.”

People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”
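As a toy illustration of the mechanism Reddit’s brief describes, where aggregate user votes rather than an editor determine which posts gain prominence, consider the following sketch. The scoring is deliberately simplified and is not Reddit’s actual ranking algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Prominence comes entirely from user votes, not editorial choices.
        return self.upvotes - self.downvotes

def rank_front_page(posts: list[Post], slots: int = 25) -> list[Post]:
    # Highest net score first; everything else fades into obscurity.
    return sorted(posts, key=lambda p: p.score, reverse=True)[:slots]

posts = [Post("A", upvotes=120, downvotes=30), Post("B", upvotes=15, downvotes=40)]
front_page = rank_front_page(posts)  # "A" (score 90) ranks above "B" (score -25)
```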
