SCOTUS Speaks in Part and Stays Silent in Part on Extremist Content and Platform Liability

Platforms are in the clear for general liability, but the court appears wary on specifics

Photo by Mat Napo on Unsplash

The long-awaited decisions in Gonzalez v. Google and Twitter v. Taamneh have just been released by the Supreme Court. Both ask whether a platform can be held liable for recommending extremist or violent content that inspires terrorist attacks.1 Technologists have been watching these cases closely to hear the court’s opinion on a law you may have heard of called Section 230. This is a law that, in effect, shields platforms from liability for speech posted by their users, so long as the platform makes a good-faith effort to “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

These two cases were both based on the plaintiffs’ theory that Google and Twitter should be held liable for recommending extremist content, thereby radicalizing individuals who went on to commit acts of mass violence. In the words of the court,

Plaintiffs…allege that for several years the companies have knowingly allowed ISIS and its supporters to use their platforms and “recommendation” algorithms as tools for recruiting, fundraising, and spreading propaganda; plaintiffs further allege that these companies have, in the process, profited from the advertisements placed on ISIS’ tweets, posts, and videos.

The theory is that Google, Facebook, and Twitter aided ISIS by failing to remove its content and by permitting revenue sharing through advertising on some ISIS accounts, and that this amounted to substantial “aiding and abetting” under the Justice Against Sponsors of Terrorism Act. That statute allows an individual to sue not only the person who committed the violent act but also any party that substantially assisted them. In other words, are Google, Facebook, and Twitter secondarily liable for these acts of mass violence?

To “Abet” or Not to “Abet”? That Is the Question

SCOTUS leans on an older case, Halberstam v. Welch, in which a victim’s estate sued an attacker’s live-in partner for aiding and abetting the victim’s murder. This is primarily because the same theory of secondary liability applies. That case provides a three-part test for secondary liability:

  1. The harming party must have committed a wrongful act that harmed the victim;
  2. The alleged accomplice must have been aware of the illegal activity at the time aid was provided; and
  3. The aid was “knowingly and substantially” provided to the harming party.

The “substantialness” of the assistance depends on the nature of the act, how much assistance was provided, whether the alleged accomplice was present during the act, their relationship to the offending individual, their state of mind, and the duration of the assistance. In applying this test here, the court muses on the extent to which a party can be held secondarily liable for another’s actions.

Applied to this case, the parties dispute whether the “aiding and abetting” must be tied to the specific act or can run to the enterprise generally. In other words, can a party be held liable for aiding the organization as a whole, or must the aid be tied to the specific attack that injured the plaintiffs?

The court agrees with the plaintiffs insofar as the first two elements of the test are satisfied: a violent act occurred, and there was awareness of the illegal activity when the aid was provided. But the court disagrees that aid was “knowingly” and “substantially” provided. Here the court reasons that the platforms are generally available to everyone, recommend content in the same way to every user regardless of the type of content, and provided no meaningful support through “words of encouragement” or logistical help. In discussing the algorithms, the court says,

As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.
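To put the court’s characterization in engineering terms: a recommender is “agnostic” when it ranks items purely on predicted engagement for a given user, without regard to what the content says. The sketch below is a minimal, purely illustrative example of that idea; the `Item`, `predicted_engagement`, and `recommend` names are hypothetical and do not describe how Google’s or Twitter’s systems actually work.

```python
# Illustrative sketch of a "content-agnostic" recommender: it ranks items
# only by a predicted engagement score for a given user, without inspecting
# what the content actually says. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    # Hypothetical per-user engagement score from some upstream model.
    predicted_engagement: float


def recommend(candidates: list[Item], k: int = 10) -> list[Item]:
    """Return the top-k items by predicted engagement, regardless of content."""
    return sorted(candidates, key=lambda item: item.predicted_engagement, reverse=True)[:k]


if __name__ == "__main__":
    feed = [
        Item("cooking-video", 0.72),
        Item("news-clip", 0.41),
        Item("extremist-upload", 0.65),  # ranked by the same rule as everything else
    ]
    for item in recommend(feed, k=2):
        print(item.item_id, item.predicted_engagement)
```

The point of the sketch is that nothing in the ranking rule treats one kind of content differently from another, which is the property the court relies on when it calls the defendants’ assistance “passive.”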

One of the primary reasons the court does not give credence to the plaintiffs’ theory is that it “rests so heavily on defendants’ failure to act.” For a liability claim built on inaction to exist, there must be some showing that a duty of care existed in the first place.

So Where Does This Place Section 230?

Frankly, we don’t know. The court, in its separate opinion in Gonzalez v. Google, declines to address the Section 230 question because the underlying aiding-and-abetting claim is flawed. That is rather frustrating for technologists who were looking for some clarity on the court’s view of how far platform immunity from liability extends. What is interesting is that the court agreed the platforms knew about the illegal activity happening on their systems. Where the plaintiffs fell short was in showing that the platforms specifically helped this particular attack.

That element stands out to me: the court seems torn between saying that platforms know about the illegal content and acknowledging that billions of hours of content flow through them at any given time. The court also sidelined the question of Google sharing advertising revenue with ISIS because the plaintiffs did not describe the amount of support in enough detail, stating that “it thus could be the case that Google approved only one ISIS-related video and shared only $50 with someone affiliated with ISIS; the complaint simply does not say, nor does it give any other reason to view Google’s revenue sharing as substantial assistance.”

What Are Tech Platforms Taking Away From This?

While this is a “win” for the platforms, I would be very wary if I were on the platform side today. The fact that SCOTUS effectively waved the awareness element of the secondary-liability test through should set off warning bells. While the platforms were not held liable, because the plaintiffs could not show the support was substantial with respect to the specific act of violence that occurred, the court laid out a number of theories that could heighten the substantialness of that support.

For features as a whole, the court seems willing to accept the argument that, as long as those features are available to everyone, platforms can be treated as “bystanders” with no particular duty. However, for features available only to a more limited subset of users, there may be more room for future claims. What would happen if a repeat of the Christchurch shooter used a platform’s live functionality? What if the company had received thousands of reports flagging the problematic content? What if the recommended video specifically called for an attack on a particular location? These are questions that would keep me up at night if I were in their shoes.


  1. As an aside, I use the term “terrorist” in this post only when reflecting the language the court uses and the language of 18 USC 2331(1). The term has been applied primarily, and inconsistently, to violence committed by Muslims. As a Muslim myself, I opt for alternative terminology that does not carry the same reductive rhetorical impact on a community.