Nohemi Gonzalez: The Supreme Court Revisits Section 230, the Law Shielding Social Media From Liability for ISIS Terrorism
In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad who was the first person in her large family to graduate from college. This week, lawyers for her family and others are at the Supreme Court challenging a law enacted more than a quarter century ago, a law that shields social media companies from liability for what the families describe as internet companies' role in aiding and abetting terrorist attacks.
A win for Google wouldn't put it or the rest of the internet in the clear, though. Gonzalez almost certainly won't be the last Section 230 case, and even if this one is dismissed, Google attorney Lisa Blatt faced questions about whether Section 230 is still serving one of its original purposes: encouraging sites to moderate effectively without fear of being punished for it.
Lawyers for the family petitioned the Supreme Court to review the case, arguing that the videos users viewed on YouTube were central to ISIS recruitment outside of Syria and Iraq.
In the nearly 27 years since the United States Congress passed Section 230 of the Communications Decency Act, courts have broadly interpreted it to protect online communities from being held legally responsible for user content, laying the foundation for the business models of Facebook, Yelp, Glassdoor, Wikipedia, community bulletin boards, and so many other sites that rely on content they don’t create.
Section 230 was written in 1995 but does not mention targeting or personalization. Yet a review of the statute’s history reveals that its proponents and authors intended the law to promote a wide range of technologies to display, filter, and prioritize user content. This means that eliminating Section 230 protections for targeted content or types of personalized technology would require Congress to change the law.
The First Amendment Isn't Enough: The Internet, Social Media, and the Courts' Failure on Defamation
To put it bluntly, the First Amendment doesn’t work if the legal system doesn’t work. Arguing over the rare exceptions to free speech doesn’t matter if people can’t be held meaningfully accountable for serious violations, or if verdicts are vestigial afterthoughts in cases filed mostly for clout. Free speech law is useless if the courts don’t take it seriously.
Rather than seriously grappling with technology’s effects on democracy, many lawmakers and courts have channeled a cultural backlash against “Big Tech” into a series of glib sound bites and political warfare. Scratch the surface of supposedly “bipartisan” internet regulation, and you’ll find a mess of mutually exclusive demands fueled by reflexive outrage. Some of the people most vocal in their defense of the First Amendment are also the ones most willing to dismantle it.
Conservatives have been complaining for years that Section 230 allows social media platforms to suppress conservative opinions for political reasons.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Passed in 1996 and interpreted broadly by courts ever since, the law effectively means that web services (as well as newspapers, gossip blogs, listserv operators, and other parties) can’t be sued for hosting or reposting somebody else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases, but it’s been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of “objectionable” content.
The oft-neglected key here is illegal speech. Many well-deserved critiques of the internet and social media — that it helps spread false stories about covid or QAnon-style Satanic panics, that it lets huge crowds of people dogpile teachers or nurses with angry messages, or that it facilitates hate speech at a large scale — don’t actually involve illegal speech. There are a few cases still in flux, like the defamation lawsuits against Fox News for provably false and unsupported statements about voting machine manufacturers. But defamation is a difficult bar to meet. On the campaign trail, Joe Biden complained that Section 230 let Facebook host false information. As president, he has chastised social networks for allowing vaccine misinformation to spread, and Democratic senator Amy Klobuchar has suggested removing Section 230 protections for health misinformation.
Making false claims about science is often perfectly legal, so repealing Section 230 wouldn’t necessarily make companies remove misinformation. And there is good reason the First Amendment protects scientific claims: imagine if researchers and news outlets could be sued for publishing assumptions they later learned were wrong, like covid not being airborne.
Removing Section 230 protections is a sneaky way for politicians to get around the First Amendment. Without 230, the cost of operating a social media site in the United States would skyrocket due to litigation. Unable to invoke a straightforward 230 defense, sites could face protracted lawsuits over even unambiguously legal content. And even platforms that would ultimately win in court would preemptively remove posts that merely might be illegal, because fighting over them would consume a lot of time and money. All of this gives threats to repeal 230 real coercive power: when politicians gripe, the platforms respond.
Alex Jones, Amber Heard, and Republican-Proposed Speech Reforms: The Limits of Defamation Law
It’s not clear that even the verdicts against Alex Jones matter. The Sandy Hook families were left struggling to chase Jones’ money after he declared corporate bankruptcy during the proceedings. He treated the court proceedings contemptuously and used them to hawk dubious health supplements to his followers. Legal fees and damages have dented his finances, but the legal system hasn’t been able to change his behavior; it gave him another platform to declare himself a martyr.
Then there was Johnny Depp’s defamation suit against Amber Heard, who had identified herself as a victim of abuse, implicitly at Depp’s hands. Heard’s case was less cut-and-dried than Jones’, and she lacked Jones’ shamelessness and social media acumen. The case turned into a ritual public humiliation of Heard, fueled partly by the incentives of social media but also by courts’ utter failure to respond to the way that things like livestreams contributed to the media circus. Defamation claims can hurt people who have a reputation to protect, while the worst offenders are already beyond shame.
Senator Sheldon Whitehouse said at the hearing that he was prepared to bet a straight repeal of Section 230 would pass if the committee took a vote on it. “The problem, where we bog down, is that we want 230-plus. We want to repeal 230 and then have ‘XYZ.’ We don’t know what the ‘XYZ’ are.”
Republican-proposed speech reforms are not good, and we’ve learned just how bad over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter were using it to ban some posts from conservative politicians, among countless other pieces of content.
As it stands, the First Amendment should almost certainly render these bans unconstitutional. They are government speech regulations! But while an appeals court blocked Florida’s law, the Fifth Circuit Court of Appeals threw a wrench in the works with a bizarre decision to uphold Texas’ law without explanation. Months later, that court actually published its opinion, which legal commentator Ken White called “the most angrily incoherent First Amendment decision I think I’ve ever read.”
The Supreme Court temporarily blocked the Texas law, but its recent statements on speech haven’t been reassuring. It’s almost certain to take up either the Texas or Florida case, and the case will likely be heard by a court that includes Clarence Thomas, who’s gone out of his way to argue that the government should be able to treat Twitter like a public utility. Conservatives, remember, fought against treating internet service providers like public utilities precisely because that would have meant regulating them; this is one of many reasons the current speech wars will make your brain hurt.
Three conservative justices voted against putting the law on hold. (Liberal Justice Elena Kagan did, too, but some have interpreted her vote as a protest against the “shadow docket” where the ruling happened.)
The laws in Texas and Florida are hard to defend on their merits. They sacrifice basic consistency in favor of rules rigged to punish political targets: they attack “Big Tech” platforms for their power while conveniently ignoring the near-monopolies of other companies, like the internet service providers that control the chokepoints letting anyone access those platforms. And the movement behind them betrays its intellectual bankruptcy with proposals like blowing up the copyright system just to punish Disney for stepping out of line.
And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. A Republican state delegate in Virginia used an obscenity law to try to stop Barnes & Noble from selling two books, one of them Gender Queer, a graphic memoir about gender identity. The moral panic over “grooming” puts LGBTQ Americans particularly at risk. And even as Texas tries to stop Facebook from kicking off violent insurrectionists, it’s suing Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against “child erotica.”
But once again, there’s a real and meaningful tradeoff here: if you take the First Amendment at its broadest possible reading, virtually all software code is speech, leaving software-based services impossible to regulate. Airbnb and Amazon have both used Section 230 to defend against claims of providing faulty physical goods and services, an approach that hasn’t always worked but that remains open for companies whose core services have little to do with speech, just software.
Balk’s Law (“everything you hate about the Internet is actually everything you hate about people”) is not entirely true. Internet platforms change us by rewarding certain kinds of posts, subjects, and linguistic quirks. But still, the internet is humanity at scale, crammed into spaces owned by a few powerful companies, and humanity at scale can be ugly. Vicious abuse can come from one person, or it can be spread out into a campaign of threats, lies, or terrorism involving thousands of different people, no one of whom individually amounts to a viable legal case.
The Supreme Court will hear arguments in two internet moderation cases this week: Gonzalez on February 21st and Taamneh on February 22nd.
Tech companies involved in the litigation cite the statute to argue that they shouldn’t have to face lawsuits accusing them of aiding terrorists by hosting or recommending terrorist material.
The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.
How Far Can Section 230 Go After the Congressional Deadlock? Reddit, Google, and the FCC on Limiting Recommendations
Trump’s executive order directing the FCC to reinterpret Section 230 faced a number of legal and procedural problems, not least of which was the fact that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.
There’s bipartisan hatred for Section 230, even if the two parties can’t agree on what should replace it.
That deadlock has shifted much of the momentum for changing Section 230 to the courts, and the US Supreme Court now has an opportunity to dictate how far the law goes.
A lawyer representing the plaintiffs challenging the law repeatedly failed, for instance, to offer substantial limiting principles for an argument that could trigger a deluge of lawsuits against powerful sites such as Google or Twitter and threaten the very survival of smaller sites. At the same time, some justices seemed skeptical of the suggestion that the sky is falling.
For the tech giants, and even for many of Big Tech’s fiercest competitors, a ruling that narrows Section 230 would be a bad thing, because it would undermine what has allowed the internet to flourish. It would potentially put many websites and users into unwitting and abrupt legal jeopardy, they say, and it would dramatically change how some websites operate in order to avoid liability.
“‘Recommendations’ are the very thing that make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. Users determine which posts get prominence and which fade into obscurity by upvoting and downvoting.
People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”
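To make the voting mechanics the brief describes concrete, here is a minimal sketch of score-based ranking. It is purely illustrative and assumes nothing about Reddit's actual code; the names and numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net community approval: every upvote is, in effect,
        # one user "recommending" the post.
        return self.upvotes - self.downvotes

def rank(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts gain prominence; the rest fade into obscurity.
    return sorted(posts, key=lambda p: p.score, reverse=True)

front_page = rank([
    Post("Helpful community guide", upvotes=412, downvotes=8),
    Post("Off-topic spam", upvotes=3, downvotes=97),
])
print([p.title for p in front_page])
```

In a scheme like this, the "recommendation" is nothing more than the aggregate of ordinary users' votes, which is why the moderators argue that liability for recommending content would reach users as well as platforms.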
The facts in the Twitter and Google cases are similar, even if they pose different legal questions. If the court finds that Twitter is not responsible under the ATA, the cases could be resolved without any need to consider Section 230 at all.
The lawsuit argues that the law’s boundaries are undecided, despite Google’s assertion that it’s protected by Section 230. “[Section 230] does not contain specific language regarding recommendations, and does not provide a distinct legal standard governing recommendations,” they said in yesterday’s legal filing. They’re asking the Supreme Court to find that some recommendation systems are a kind of direct publication, as are some pieces of metadata, including hyperlinks generated for an uploaded video and notifications alerting people to that video. By extension, they hope that could make services liable for promoting terrorist content.
Introducing liability for algorithmic decision-making raises many unanswered questions. Should Google be punished for returning search results that link to defamation or a terrorist video, even if it’s responding to a direct search for them? And conversely, is a hypothetical website in the clear if it writes an algorithm deliberately designed around being “in cahoots with ISIS,” as Justice Sonia Sotomayor put it? While it (somewhat surprisingly) didn’t come up in today’s arguments, at least one ruling has found that a site’s design can make it actively discriminatory, regardless of whether the result involves information filled out by users.
Twitter v. Taamneh, meanwhile, will be a test of Twitter’s legal strategy under its new owner, Elon Musk. The suit concerns a separate Islamic State attack in Turkey, but like Gonzalez, it turns on whether Twitter provided material aid to terrorists. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled against Google.
Justice Amy Coney Barrett questioned whether liability could attach to a retweet of a link to a terrorist video. The justices also asked whether content generated by artificial intelligence should be treated differently from algorithmic sorting, since in that case the actual data is being provided by the platform. Justice Brett Kavanaugh worried about the consequences of any broad decision in the case. It could, he said, “crash the digital economy,” and “lawsuits will be nonstop.”
Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.
Social media companies make more money the longer users stay online, he says, and one way to keep them there is to use recommendation technology to surface more content.
He claims social media company executives knew the dangers of what they were doing. In 2016, he says, they met with high government officials who told them of the dangers posed by ISIS videos, and how they were used for recruitment, propaganda, fundraising, and planning.
Those officials, he says, included “the attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff.”
Social Media and the Second Supreme Court Case Over Recommendations: The Industry Responds
“We don’t believe it’s okay for extremists to post on our products or platforms, and that’s why we invest in human review and detection technology to make sure that doesn’t happen,” says Google general counsel Halimah DeLaine Prado.
Prado acknowledges that social media companies today are a lot different from the social media companies of 1996. But if the law is to change to reflect that, she argues, the change should come from Congress, not the courts.
There are many “strange bedfellows” among the tech company allies in this week’s cases. Groups ranging from the conservative Chamber of Commerce to the libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.
The Biden administration has a different position. Columbia law professor Timothy Wu summarizes the administration’s position this way: “It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230.”
The idea is that grouping content together and sorting through billions of pieces of data for search engines would remain protected, but actively recommending content that furthers illegal conduct would not be.
If the Supreme Court were to adopt that position, it would upend the economic model of today’s social media companies, and the tech industry says there is no easy way to draw a line between aggregating and recommending.
At the very least, these companies would likely find themselves defending their conduct in court far more often. But filing suit is one thing; getting over the hurdle of showing enough evidence to justify a trial is another, and the Supreme Court has made that hurdle very difficult to jump. On Wednesday, the court hears a second case about exactly that problem.
For nearly three hours on Tuesday, the nine justices peppered attorneys representing Google, the US government and the family of Nohemi Gonzalez, an American student killed in a 2015 ISIS attack, with questions about how the court could design a ruling that exposes harmful content recommendations to liability while still protecting innocuous ones.
Eric Schnapper argued that a ruling for Gonzalez wouldn’t have far-reaching effects because most of the resulting suits would likely be tossed out.
Later, in a line of questioning with US Deputy Solicitor General Malcolm Stewart, Justice Elena Kagan warned that narrowing Section 230 could lead to a wave of lawsuits, even if many of them would eventually be thrown out.
“You are creating a world of lawsuits,” Kagan said. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit.”
Stewart didn’t dispute that there could be lots of lawsuits, but he argued they wouldn’t have any chance of prevailing, especially once the court articulated its standard.
Multiple justices pushed Schnapper to clarify how the court should treat recommendation algorithms if the same algorithm that promotes an ISIS video to someone interested in terrorism might be just as likely to recommend a pilaf recipe to someone interested in cooking.
Schnapper attempted multiple explanations, including a hypothetical about different kinds of videos on the internet, but many of the justices lost track of what he was trying to say.
Roberts added: “It may be significant if the algorithm is the same across … the different subject matters, because then they don’t have a focused algorithm with respect to terrorist activities… Then it might be harder for you to say that there’s selection involved for which you can be held responsible.”
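Roberts’ distinction is easier to see in code. Below is a minimal, hypothetical sketch of a content-neutral recommender: the ranking logic never looks at what a video is about, only at how well its tags match a user’s interests, so the same function serves the cooking enthusiast and the person seeking extremist content. The data and names are illustrative assumptions, not any real platform’s system:

```python
# Hypothetical content-neutral recommender: the same ranking code is applied
# across all subject matters, which is the scenario Roberts described.

def similarity(user_interests: set[str], video_tags: set[str]) -> float:
    # Jaccard similarity: overlap between the user's interests and the video's tags.
    if not user_interests or not video_tags:
        return 0.0
    return len(user_interests & video_tags) / len(user_interests | video_tags)

def recommend(user_interests: set[str], catalog: dict[str, set[str]]) -> str:
    # Pick whichever video best matches the user, with no special handling of any topic.
    return max(catalog, key=lambda title: similarity(user_interests, catalog[title]))

catalog = {
    "Perfect rice pilaf": {"cooking", "recipes", "rice"},
    "Recruitment clip": {"extremism", "recruitment"},
}

print(recommend({"cooking", "rice"}, catalog))  # -> "Perfect rice pilaf"
print(recommend({"extremism"}, catalog))        # -> "Recruitment clip"
```

Under Roberts’ reasoning, the fact that the function above is identical across topics is what makes it harder to say the platform “selected” terrorist content in particular; a version hard-coded to favor the extremist tags would be the “focused algorithm” he distinguished.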
Barrett raised the issue again in a question for Justice Department lawyer Stewart. She asked: “So the logic of your position, I think, is that retweets or likes or ‘check this out’ for users, the logic of your position would be that 230 would not protect in that situation either. Correct?”
Stewart said there was a distinction between an individual user making a conscious decision to amplify content and an algorithm making choices on a systemic basis. But he did not provide a clear answer about how changes to Section 230 could affect individual users.
“People have focused on the [Antiterrorism Act] because that’s the one point that’s at issue here,” Chief Justice John Roberts noted, but if tech companies no longer had Section 230 immunity, there would be many more defamation suits.
Justice Samuel Alito posed a scenario for Schnapper in which a restaurant’s competitor created a video making false claims that the restaurant violated the health code, and YouTube refused to take the video down despite knowing it was defamatory.
Kagan seized on Alito’s hypothetical later in the hearing, asking what would happen if a platform recommended the competitor’s false video and called it the greatest video of all time, but didn’t repeat anything about the content of the video itself.
Lawyer Lisa Blatt, defending Google, argued that the 1996 federal law at issue in this case was aimed at shielding internet platforms from exactly these kinds of lawsuits.
Justice Ketanji Brown Jackson pushed back, suggesting that the fear “that the internet never would have gotten off the ground if everybody would have sued” was not what Congress was concerned about at the time it enacted this statute.
The brief written by Oregon Democratic Senator Ron Wyden and former California Republican Representative Chris Cox explained that Section 230 was intended to shelter the internet from lawsuits over how websites manage their platforms.
Social Media vs. Search Engines: What Happens If Google Loses? The Justices Weigh the Future of the Internet
In questions for Google’s foes, Justice Sonia Sotomayor suggested that platforms could face liability if they created a search engine that was discriminatory. She put forth an example of a dating site that wouldn’t match individuals of different races. Justice Barrett returned to the hypothetical in her questioning of Blatt.
Multiple justices, even ones not sympathetic to the tech foes’ arguments, suggested that Google and its allies were playing Chicken Little in the stark warnings they gave the court about how a ruling against Google would transform the internet.
Alito asked Blatt whether Google would collapse and the internet be destroyed if YouTube were potentially liable for posting and refusing to take down videos that it knows are defamatory and false.
Barrett asked Schnapper whether, if the plaintiffs lose the Twitter case being argued the next day, the court would even have to reach Section 230, and whether they could lose on that ground alone.
Nine justices set out Tuesday to determine what the future of the internet would look like if the Supreme Court were to narrow the scope of a law that some believe created the age of modern social media.
That hesitancy, coupled with the fact that the justices were wading for the first time into new territory, suggests the court, in the case at hand, is not likely to issue a sweeping decision with unknown ramifications in one of the most closely watched disputes of the term.
The Antiterrorism Act of 1990 authorizes lawsuits by US nationals injured by an act of international terrorism.
Gonzalez v. Google: A Landmark Case Over Section 230 of the Communications Decency Act of 1996, Revisited
Oral arguments wound through a maze of issues, raising concerns about artificial intelligence, endorsements, and even restaurant reviews. But at the end of the day, the justices seemed deeply frustrated with the scope of the arguments before them and unclear about the road ahead.
Justice Samuel Alito said he was completely confused by the argument Schnapper was making. Justice Ketanji Brown Jackson said she was thoroughly confused. “I’m still confused,” Justice Clarence Thomas said halfway through arguments.
Justice Elena Kagan even suggested that Congress should step in. “We’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” she said to laughter.
Chief Justice John Roberts tried an analogy to a bookseller: recommending certain information, he suggested, is like a bookseller sending a reader to a table of books with related content.
Supreme Court Justice Elena Kagan made the wryly self-deprecating comment early in oral arguments for Gonzalez v. Google, a potential landmark case covering Section 230 of the Communications Decency Act of 1996. The remark was a nod to many people’s worst fears about the case. Gonzalez could overturn core legal protections for the internet, but it will be decided by a court that has shown an appetite for reversing precedent and reexamining longstanding speech law.
Today’s hearing focused heavily on “thumbnails,” a term Gonzalez family attorney Eric Schnapper defined as a combination of a user-provided image and a YouTube-generated web address for the video. Several justices were not convinced that a recommendation sorting system and URL creation should cost YouTube its Section 230 protections, and Kagan and others suggested the thumbnail issue would go away if videos were simply renamed or accompanied by more user-provided information.
“Not Like the Nine Greatest Experts on the Internet”: Justice Elena Kagan and Her Colleagues on the Bench
On the other side are some of the most valuable companies in the world, including Facebook, Twitter, and many smaller companies that together represent a large portion of the US economy.
Justice Elena Kagan seemed to sum up the countervailing winds when discussing how the EU deals with these issues, including levying a huge fine against Google. But, she noted, that fine was not levied by a court.
Gesturing to her colleagues on the bench, Kagan added, “You know, these are not like the nine greatest experts on the internet,” a comment followed by laughter in the courtroom.
That said, the justices tried their best, repeatedly searching for a line between what internet providers may permissibly do in organizing content on their platforms and what could expose them to liability.
Lawyer Eric Schnapper, representing the family of Nohemi Gonzalez, the young woman killed in Paris, said the algorithms are the same, but when it comes to ISIS videos, the result is that companies are encouraging illegal conduct covered by the Federal Antiterrorism Act—a law that bars material aid to terrorist groups.
What’s happening on YouTube, he said, goes beyond simply hosting videos: “I type in a video and they’re sending me a catalogue of thumbnail images” recommending more.
Source: https://www.npr.org/2023/02/21/1158628409/supreme-court-section-230-arguments
“Topic Headings, Up Next, Trending Now”: Google’s Defense of How Platforms Sort and Search for Users
“The basic features of topic headings, up next, trending now . . . we would say are core, inherent,” she said. “They’re no different than expressing what is implicit in any publishing.”
Blatt argued that if the court were to strip protection from the ways platforms search for and sort content for users, the result would be very different from what Congress intended when it granted platforms immunity.
While the justices indicated that it might be better for Congress to take on the task of modifying the 1996 law, several also fired pointed shots across the bow, hinting at limited patience with internet platforms. More cases are expected next term.