Section 230, Google, and the Nohemi Gonzalez Case: YouTube, Telecommunications, and Democracy in the Age of Terrorism
Section 230 was written in 1995 and passed in 1996. A review of the statute's history shows that its proponents and authors intended the law to promote a wide range of technologies for displaying, filtering, and prioritizing user content. That means eliminating Section 230 protections for targeted content or for particular kinds of personalization technology would require Congress to change the law.
The Section 230 case arises from tragic circumstances. Nohemi Gonzalez, a California State University student, was killed, along with 128 other people, in the November 2015 Paris terrorist attacks. The lawsuit, filed against Google, alleges that its subsidiary YouTube violated the Anti-Terrorism Act by providing substantial assistance to terrorists. At the heart of the dispute is not merely that YouTube hosted ISIS videos but, as the plaintiffs put it in their legal filings, YouTube's targeted recommendations of ISIS videos: the site allegedly recommended the videos to users its systems identified as likely to be interested in Islamic State content and already familiar with what it had to offer. In other words, YouTube allegedly showed ISIS videos to the people most likely to be radicalized by them.
Section 230 was a little-noticed part of the 1996 telecommunications law. The House of Representatives added it in response to two developments: a 1995 New York court ruling that held an online service liable for its users' posts because it had tried to moderate them, and the Senate's version of the telecommunications bill, which imposed penalties for the transmission of indecent content. Section 230 was hailed as an alternative to the Senate's approach to pornography, and, as a compromise, both provisions were included in the bill that President Bill Clinton signed into law. (The next year, the Supreme Court would strike down the Senate's portion as unconstitutional.)
The author, a former public policy director at Facebook, is a fellow at the Bipartisan Policy Center, which accepts funding from some tech companies, including Meta and Google, to support its work getting authoritative information about elections to users. The views expressed in this commentary are the author's own.
Every day, social media companies working on content moderation must balance many competing interests and different views of the world and make the best choice out of a range of terrible options.
People might have a right to say something, but they don't have a right to have everyone see it. Critics argue that prioritizing engagement as a key factor in deciding what people see in their feeds is unfair, because the content that provokes an emotional response is often what gets the most likes, comments and shares.
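To make that dynamic concrete, here is a minimal, hypothetical sketch of an engagement-weighted feed; the fields, weights and scoring function are invented for illustration and do not describe any platform's actual ranking system.

```python
# Hypothetical sketch (not any platform's real algorithm): ranking a feed by a
# simple engagement score, to illustrate why emotionally charged posts that
# attract likes, comments and shares tend to surface first.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Assumed weights: shares and comments count more than likes because they
    # signal stronger reactions; real systems use far more signals than this.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest engagement first: the dynamic described above, which rewards
    # content that provokes a reaction.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("a", "Measured policy analysis", likes=40, comments=2, shares=1),
    Post("b", "Outrage-bait hot take", likes=90, comments=60, shares=45),
])
print([p.author for p in feed])  # prints ['b', 'a']
```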
First, a platform needs to protect people's right to free expression, but every platform also has to moderate content; the two obligations cannot be separated.
Measuring the Reach of Online Mobs: Borderline Content and Social Media Harassment
Some content, like child pornography, must be removed under the law. And users and advertisers don't want to see content such as hate speech in their feeds.
Moreover, no one likes being harassed by an online mob; all that does is drive people away or silence them, which is not what a true free speech platform looks like. Twitter's former head of trust and safety had to leave his home because of threats he received after Musk criticized him. Other platforms are taking steps to shut down what they call brigading, in which groups of users coordinate to harass a target.
Second, there are more options than simply leaving content up or taking it down. Meta characterizes this as "remove, reduce and inform": instead of taking down potentially problematic but non-violating content, platforms can reduce that content's reach and/or add informative labels that give users more context.
This option matters because many of the most engaging posts are borderline, meaning they go right up to the line of the rules without crossing it. Platforms still want to act on such content because some users and advertisers may not want to see it; a rough sketch of this approach follows.
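As an illustration of the "remove, reduce and inform" framing and of how borderline content might be handled, the sketch below maps a piece of content to one or more of those actions; the inputs, threshold and function are assumptions made for clarity, not a description of Meta's actual systems.

```python
# Minimal sketch of the "remove, reduce, inform" idea described above. The
# inputs, threshold and actions are illustrative assumptions only.
from enum import Enum

class Action(Enum):
    REMOVE = "remove"   # violating content comes down
    REDUCE = "reduce"   # borderline content stays up but its reach is reduced
    INFORM = "inform"   # add a contextual label for the user
    NONE = "none"

def moderation_action(violates_policy: bool,
                      borderline_score: float,
                      needs_context: bool) -> list[Action]:
    actions = []
    if violates_policy:
        return [Action.REMOVE]
    if borderline_score > 0.7:   # assumed threshold for "close to the line"
        actions.append(Action.REDUCE)
    if needs_context:
        actions.append(Action.INFORM)
    return actions or [Action.NONE]

# A borderline post that also needs context gets downranked and labeled.
print(moderation_action(False, 0.85, True))
# prints [<Action.REDUCE: 'reduce'>, <Action.INFORM: 'inform'>]
```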
Source: https://www.cnn.com/2022/12/21/opinions/twitter-files-content-moderation-harbath/index.html
The Third Point: How Do We Know What We See, and What We Don't, on Facebook, Twitter, LinkedIn and Other Social Networks?
Some argue, as they did about one installment of the Twitter Files, that reducing content's reach is itself a scandal. Renée DiResta of the Stanford Internet Observatory has written that free speech does not mean free reach.
This leads to the third point: transparency. Who is making the decisions, and how are they ranking their priorities? "Shadow banning," a term content creators often use when they believe their posts are quietly being shown to fewer people, isn't just a matter of one person being upset.
Creators are upset that they don't know what is happening or what they did wrong, and platforms need to do more on this front. For instance, Instagram recently announced that people can check on their accounts whether they are eligible to be recommended to other users; under its rules, accounts that share sexually explicit material, clickbait and certain other types of content are not recommended.
Users can also be given more control over the kinds of moderation they want, an approach political scientist Francis Fukuyama calls "middleware." With middleware, people decide which types of content appear in their feeds and what they need in order to feel safe online. On some platforms, for example, users can already switch from a ranked feed to a chronological one.
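The sketch below illustrates, with invented names and interfaces, how such middleware might plug into a feed: the platform supplies the raw items, and the user chooses which ranking or filtering functions run on top.

```python
# A sketch of the "middleware" idea: the platform exposes the raw feed, and the
# user picks the ranking/filtering functions. Names and interfaces are
# invented for illustration, not taken from any real platform.
from dataclasses import dataclass
from datetime import datetime
from typing import Callable

@dataclass
class Item:
    text: str
    posted_at: datetime
    engagement: int
    labels: set[str]   # e.g. {"graphic"}, assumed to be assigned upstream

Middleware = Callable[[list[Item]], list[Item]]

def chronological(items: list[Item]) -> list[Item]:
    return sorted(items, key=lambda i: i.posted_at, reverse=True)

def most_engaging(items: list[Item]) -> list[Item]:
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def hide_graphic(items: list[Item]) -> list[Item]:
    # A user-chosen safety filter that can be layered on top of any ranking.
    return [i for i in items if "graphic" not in i.labels]

def build_feed(items: list[Item], pipeline: list[Middleware]) -> list[Item]:
    # Apply the user's chosen middleware steps in order.
    for step in pipeline:
        items = step(items)
    return items

# A user who wants a calmer experience might choose:
# build_feed(raw_items, [hide_graphic, chronological])
```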
Tackling the problem of speech and safety is very difficult. We are still in the middle of developing societal norms for what speech we are OK with online and how we hold people accountable for it.
Platforms have to change, and other countries have passed legislation trying to make that happen. Germany was the first country in Europe to make it mandatory for platforms with more than 2 million users to remove illegal hate speech within seven days of it being reported. In 2021, EU lawmakers set out a package of rules for Big Tech through the Digital Markets Act, which stops platforms from giving their own products preferential treatment. In 2022 there was further progress with the EU AI Act, which involved extensive consultation with civil society organizations so that concerns about technology and marginalized groups were adequately addressed, a working arrangement that campaigners in the UK have been calling for. And in Nigeria, the federal government issued a new internet code of practice to address misinformation and cyberbullying, including specific clauses to protect children from harmful content.
For the past ten years, the biggest companies in the tech industry have effectively been allowed to mark their own homework. They’ve protected their power through extensive lobbying while hiding behind the infamous tech industry adage, “Move fast and break things.”
The Online Safety Bill, which has been in the works for several years, places a duty of care on platforms themselves to monitor illegal content. It could also impose an obligation on platforms to restrict content that is technically legal but could be considered harmful, which would set a dangerous precedent for free speech and for the protection of marginalized groups.
There are no processes for defining what "significant harm" is or how platforms would have to measure it, as the Carnegie UK Trust has noted. Academics and other groups have raised the alarm over the bill's proposal to drop the previous Section 11 requirement that Ofcom should "encourage the development and use of technologies and systems for regulating access to [electronic] material." Others have raised concerns about the removal of clauses on education and future-proofing, which leaves the legislation reactive and ineffective, unable to account for harms caused by platforms that have not yet gained prominence.
In 2020 and 2021, surveys by YouGov and BT (along with the charity I run, Glitch) found that an estimated 1.8 million people had suffered threatening behavior online in the past year. The research also found that 26% of those targeted were members of the LGBTQ+ community and that 25% had experienced racist abuse online.