Twitter v. Taamneh as a Case Study in Counterterrorism on the U.S. Internet
The Trump administration tried to turn some of those criticisms into concrete policy, which would have had significant consequences had it succeeded. In 2020, for example, the Justice Department released a legislative proposal for changes to Section 230 that would create an eligibility test for websites seeking the law’s protections, and the White House directed the Federal Communications Commission to interpret Section 230 more narrowly.
The two cases could fundamentally change how platforms are allowed to recommend content, particularly material produced by terrorist organizations. Both stem from lawsuits claiming that YouTube, Twitter, and other platforms provided support for Islamic State attacks by failing to remove — and, in some cases, recommending — accounts and posts by terrorists. The first case asks whether Section 230 of the Communications Decency Act shields web services from liability for illegal content they host and recommend. Twitter v. Taamneh covers a distinct but related question: whether these services are providing unlawful material support if they fail to kick terrorists out.
The family’s petition asked the Supreme Court to review the case, arguing that the videos on YouTube were the central way in which the group enlisted support and recruits outside of Syria and Iraq.
The First Amendment and the Courts: How Does Section 230 Protect Online Speech?
Lower courts have interpreted Section 230 uniformly: under the law, social media companies are immune from civil damages over most material that appears on their platforms. The law was also designed to encourage social media companies to remove material that is obscene, lewd, violent, harassing, or otherwise objectionable.
The justices asked repeatedly about the extent to which “targeted recommendations” turn social media platforms from neutral public spaces into publishers of potentially harmful content. The development and deployment of these content-targeting systems, the argument goes, transformed companies from passive platforms into active ones.
To put it bluntly, the First Amendment doesn’t work if the legal system doesn’t work. Arguing over the rare exceptions to free speech doesn’t matter if people can’t be meaningfully censured for serious violations or if verdicts are vestigial afterthoughts in cases filed mostly for clout. And it’s especially useless if the courts themselves won’t take it seriously.
The crux of the case is whether tech companies should be held liable for the harmful content their users post on their platforms, something from which they are shielded under Section 230, part of the 1996 Telecommunications Act, a piece of legislation meant to increase competition in the communications industry. That protection has shielded platforms with enormous reach and influence from being held responsible for harms caused by extremist content. But it is also a fundamental underpinning of free speech online.
Some proposals to change the law at least make their aims clear. Legal experts like Danielle Citron have proposed fixing specific problems created by Section 230, like its de facto protections for small sites that solicit nonconsensual pornography or other illegal content. There are serious criticisms of these approaches, but they’re honest attempts to address real legal tradeoffs.
The First Amendment, Section 230, and Unlawful Speech: What the Law Actually Covers
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The law was passed in 1996, and courts have interpreted it expansively since then. Under that interpretation, websites, newspapers, gossip sites, listserv operators, and other parties can’t be sued for hosting or reposting someone else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases, but it’s been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of “objectionable” content.
It is easier to hurt people with illegal and legal speech today than it has ever been. But the issue is far bigger and more complicated than encouraging more people to sue Facebook — because, in fact, the legal system has become part of the problem.
But making false claims about pandemic science isn’t necessarily illegal, so repealing Section 230 wouldn’t suddenly make companies remove misinformation. There is a good reason the First Amendment protects scientific claims: imagine if news outlets could be sued for publishing good-faith claims that were later proved wrong, like early assertions that covid wasn’t airborne.
Removing Section 230 protections is a sneaky way for politicians to get around the First Amendment. Without 230, the cost of operating a social media site in the United States would skyrocket due to litigation. Even sites with a straightforward 230 defense could face lengthy lawsuits over legal content. And when it comes to dicier categories of speech, web platforms would be incentivized to remove posts that might be illegal — anything from unfavorable restaurant reviews to MeToo allegations — even if the platforms would ultimately have prevailed in court. All of this would waste a great deal of time and money. It is no wonder platform operators do their best to keep 230 alive: when politicians gripe, the platforms respond.
Alex Jones, Johnny Depp, and the State of Defamation Law
It’s also not clear whether any of it matters. Jones declared corporate bankruptcy during the proceedings, tying up much of his money indefinitely and leaving the Sandy Hook families struggling to collect. He used the court proceedings to market health supplements to his followers. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior. If anything, he has used it as a platform to declare himself a martyr.
Johnny Depp’s defamation case against Amber Heard, who had said she was a victim of abuse, was less cut-and-dried than Jones’, but Heard lacked Jones’ shamelessness and social media acumen. The case turned into a ritual public humiliation of Heard — fueled partly by the incentives of social media but also by courts’ utter failure to respond to the way that things like livestreams contributed to the media circus. Defamation claims can meaningfully hurt people who have a reputation to maintain, while the worst offenders are already beyond shame.
Senator Sheldon Whitehouse said he would bet that if the Senate Judiciary Committee took a vote on repealing Section 230, it would clear the committee with every vote. The problem, he said, is that everyone wants “230-plus”: repeal 230 and then add “XYZ,” and members do not agree on what the XYZ should be.
Republican proposals for speech reform are terrible, too. We got a sense of just how bad over the past year, after Republican legislatures in Texas and Florida passed bills that effectively banned moderation on social media.
As it stands, the First Amendment should almost certainly render these bans unconstitutional: they are government regulations of speech. While an appeals court blocked Florida’s law, the Fifth Circuit Court of Appeals upheld Texas’, in an opinion that legal commentator Ken White called the most incoherent First Amendment decision he had ever read.
The Supreme Court temporarily blocked the Texas law, but its recent record on speech has not been reassuring. It’s almost certain to take up either the Texas or the Florida case, which would be heard by a court that includes Clarence Thomas, who has gone out of his way to argue that the government should be able to treat Twitter like a public utility. (If you remember conservatives raging against treating internet service providers like public utilities, yes, the irony will make your brain hurt.)
Three conservative justices, including Thomas, voted against putting the law on hold; some think Elena Kagan’s vote alongside them was a protest against the shadow docket on which the ruling took place.
Only a useful idiot could support the Texas and Florida laws. The rules are rigged to punish political targets at the expense of basic consistency. They attack “Big Tech” platforms for their power while conveniently ignoring the near-monopolies of other companies, like the internet service providers that control the chokepoints letting anyone access those platforms. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from speech laws because of its spending power in Florida, then subsequently proposed blowing up the entire copyright system to punish the company for stepping out of line.
And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. A Republican state delegate in Virginia invoked an obscenity law to try to stop Barnes & Noble from selling Gender Queer, a graphic memoir about gender identity and sexuality. And the disingenuous panic over “grooming” doesn’t only affect LGBTQ Americans: even as Texas tries to stop Facebook from kicking off violent insurrectionists, it’s suing Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against “child erotica.”
But once again, there’s a real and meaningful tradeoff here: if you take the First Amendment at its broadest possible reading, virtually all software code is speech, leaving software-based services impossible to regulate. Both Airbnb and Amazon have used Section 230 to defend against claims of providing faulty physical goods and services, an approach that hasn’t always worked but that remains open to companies whose core services have little to do with speech, just software.
Balk’s Law is obviously an oversimplification. Internet platforms change us — they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. But still, the internet is humanity at scale, crammed into spaces owned by a few powerful companies. And it turns out humanity at scale can be unbelievably ugly. Vicious abuse that would make a viable legal case coming from a single person becomes nearly impossible to litigate when it is spread across a campaign of threats and lies involving thousands of different people.
The Supreme Court Steps In: Two Cases Reshaping Online Speech and the Moderation of Terrorist Content
The Supreme Court is set to hear back-to-back oral arguments this week in two cases that could significantly reshape online speech and content moderation.
Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.
The law’s central provision states that websites and their users cannot be treated as the publishers or speakers of other people’s content. In plain English, that means legal responsibility for a piece of content rests with the person who created it, not with the platforms on which it is shared.
The executive order faced legal problems of its own, among them that the FCC is not part of the judicial branch that interprets the law, and that it is an independent agency that does not take direction from the White House.
The result is bipartisan hatred of Section 230, even if the two parties do not agree on what to do about it.
The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.
The Supreme Court will take up tech platforms’ legal exposure for user content one day after the justices debated whether search giant Google and its subsidiary YouTube can be held liable for the way YouTube organizes content from the terrorist group Islamic State.
For the tech giants, and even for many of Big Tech’s fiercest competitors, such a ruling would be a bad thing, they say: it would undermine what has allowed the internet to flourish, potentially put many websites and users into unwitting and abrupt legal jeopardy, and dramatically change how some websites operate in order to avoid liability.
The Cases Against Google and Twitter for Abetting an International Terrorist Group
According to a brief filed by Reddit and several of its volunteer moderators, recommendations are the most important component of a vibrant community: “It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity.”
A legal regime that exposed users to lawsuits for recommending posts that turned out to be libelous, the brief argues, would cause people to stop using Reddit and moderators to stop volunteering.
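To make that mechanism concrete, here is a minimal sketch of vote-driven ranking in Python. The Post type, scoring rule, and function name are illustrative assumptions for this article, not Reddit’s actual ranking code:

```python
# Hypothetical sketch of community-vote ranking; not Reddit's real algorithm.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int


def rank_posts(posts: list[Post]) -> list[Post]:
    # Posts rise or fall purely on the community's net votes:
    # no platform employee decides which post gains prominence.
    return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)


front_page = rank_posts([
    Post("A helpful explainer", upvotes=120, downvotes=4),
    Post("An off-topic rant", upvotes=8, downvotes=57),
])
for post in front_page:
    print(post.title, post.upvotes - post.downvotes)
```

The legal question is whether surfacing posts this way, by users’ votes or by an algorithm trained on them, counts as the platform itself “recommending” content.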
The second case, Twitter v. Taamneh, will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.
The allegation seeks to carve content recommendations out of Section 230’s protections, which could expose tech platforms to more liability for how they run their services.
The Biden administration has weighed in on the case. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended,” but that those protections do not extend to the company’s own speech, as distinct from the speech of others.
Judging by their questions during Wednesday’s arguments, the justices appeared to be leaning toward Twitter’s defense: that even though the terror group used the social network, the company was not thereby giving aid to a specific act of terror.
A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month delayed a decision on whether to hear those cases, asking instead for the Biden administration to submit its views.
Twitter v. Taamneh, meanwhile, will be a test of Twitter’s legal performance under its new owner, Elon Musk. The case involves a separate Islamic State attack, in Turkey, but the same underlying issue as Gonzalez: whether the platform provided material aid to terrorists. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled unfavorably for Google.
Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.
Most of the money is now made through advertising, and social media companies earn more the longer users stay online.
Social media companies, he will argue, should no longer be able to have their cake and eat it too: they must be held accountable for failing to quickly deploy the resources and technology they could have used to keep violent content off their platforms.
“Not the Nine Greatest Experts on the Internet”: The Supreme Court and the Social Media Industry
Schnapper will argue that the companies knew terrorists were using their services; officials including the White House chief of staff, the FBI director, and the attorney general said so publicly. “Those government officials . . . told them exactly that,” he says.
Google’s general counsel, Halimah DeLaine Prado, says there is no place for extremists on any of the company’s products or platforms.
Prado notes that today’s social media companies are not the same as those of 1996, and argues that if the law is to change, it should be Congress, not the courts, that changes it.
There are many strange bedfellows among the tech company allies. Groups ranging from the conservative Chamber of Commerce to the libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.
But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration’s position this way: “It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230.”
Presenting content together and sorting through billions of pieces of data for search engines is OK, in other words, but actively recommending illegal content is not.
If the Supreme Court were to adopt that position, it would threaten the economic model of today’s social media companies; the tech industry has no easy way to distinguish recommendations from aggregation.
And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things, and the Supreme Court has made that hurdle much harder to clear. The second case the court hears this week, on Wednesday, deals with just that problem.
There was laughter in the courtroom on February 21 when Justice Elena Kagan remarked that the Supreme Court is hardly an authority on the subject: “We are not, like, the nine greatest experts on the internet.”
One prominent example of this supposedly “biased” enforcement is Facebook’s 2018 decision to ban Alex Jones, the host of the right-wing Infowars website who was later hit with $1.5 billion in damages for defaming and harassing the families of the victims of the Sandy Hook mass shooting.
Editor’s Note: The author, a former ambassador, heads the Coalition for a Safer Web, which works to develop technologies and policies to expedite the de-platforming of hate and extremist content on social media. The views expressed in this commentary are his own. View more opinion at CNN.
In other words, the platforms are treated as benign providers of digital space, with limited liability exposure for whatever customers decide to upload onto that space. The thinking at the time was that fledgling internet companies might otherwise face financial ruin from the many lawsuits they would likely face for publishing defamatory content.
Advertisers have to rely on platforms to remove offensive content, or else hope that watchdog groups or community members flag items they wouldn’t want their brands associated with. Meanwhile, days or even months can go by before a platform deletes offending accounts. In other words, advertisers have no guarantee that they aren’t inadvertently sponsoring extremists.