The World Wide Web was only a few years old when Congress passed the Telecommunications Act of 1996,1 which was designed to encourage the development of new technologies and promote competition among those fledgling industries. Certain portions of the Act aimed at regulating indecency and obscenity on the internet included a provision explicitly protecting online content providers from being held responsible for third-party speech. That protection has helped the internet flourish into the omnipresent role it now plays across business, culture, and society.

GONZALEZ V. GOOGLE LLC

Technology has come a long way since 1996, and the United States Supreme Court is currently grappling with how that protection applies in a world of search engines and social media. In Gonzalez v. Google LLC,2 the Supreme Court examines Section 230(c)(1) of the Telecommunications Act, which states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.3

Gonzalez involves the parents of a terrorist attack victim (the Petitioners), who seek to hold Google accountable for promoting, on its YouTube platform, videos made by the terrorist group that claimed responsibility for the attack. The Petitioners accuse Google of aiding and abetting terrorism by spreading the group’s message through the YouTube algorithms that recommend content to users.

The key threshold question in Gonzalez is whether the protection afforded by Section 230(c)(1) applies, which boils down to whether a platform’s algorithmically recommended content constitutes a form of speech separate and apart from the substance of content disseminated by unaffiliated third parties on that platform.

The Petitioners claim that YouTube’s algorithm creates speech in the complex and proprietary way it selects the videos and other content recommended to YouTube users. In this way, YouTube is not merely relaying the speech of “another information content provider”—meaning the protections of Section 230(c)(1) should not apply and courts should be able to consider the question of platform liability. In contrast, Google contends Section 230(c)(1) means that nothing posted on any website—whether social media posts, search engine results, or any other forum or medium allowing third-party content—can impute liability to the host website for the consequences attributable to that content.

Google argues that nothing about the algorithm’s recommendations should be considered speech because YouTube applies the same algorithm to all types of videos—a “neutral application” of the algorithm, a term Justice Clarence Thomas suggested during oral argument.4 According to Google, without this protection from liability for what its algorithm puts in front of users, any website with third-party content would be forced to choose between (a) allowing fully unmoderated content, up to and including “filth, violence, [and] hate speech”5; or (b) heavily moderating its content to remove anything that could lead to liability—for both the third-party creator and the host website.

The Petitioners reject Google’s argument and contend that “neutrality” is irrelevant to whether online platforms are immune from liability under Section 230(c)(1). They claim that applying the same algorithm to all users does not change the fact that the algorithm is Google’s own design and creation, making the resulting selection of recommended content a form of speech outside the scope of that law.
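
To make the competing positions concrete, consider a deliberately simplified, hypothetical Python sketch of a “neutrally applied” recommendation function. It bears no relation to YouTube’s actual, proprietary system; the names, signals, and weights below are invented for illustration only.

```python
# Purely hypothetical sketch -- not YouTube's actual (proprietary) system.
# A "neutrally applied" recommender uses one scoring formula for every
# video; the subject matter of a video is never inspected.

from dataclasses import dataclass

@dataclass
class Video:
    title: str                 # what the video is about (never examined below)
    avg_watch_minutes: float   # engagement signal
    similarity: float          # 0-1 match to this user's viewing history

def score(video: Video) -> float:
    # The same weights apply to every video, whatever its content.
    # This uniformity is what "neutral application" refers to.
    return 0.7 * video.similarity + 0.3 * video.avg_watch_minutes

def recommend(videos: list[Video], top_n: int = 3) -> list[Video]:
    # Rank all candidate videos by the single shared formula.
    return sorted(videos, key=score, reverse=True)[:top_n]
```

In these terms, Google’s position is that because one uniform formula ranks every video without examining its subject matter, the ranked output is merely the transmission of third-party content. The Petitioners’ response is that the platform still chose the signals and the weights, so the resulting selection is the platform’s own creation, and therefore its own speech.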

At oral argument on February 21, 2023, counsel for the Petitioners suggested the Supreme Court should broadly decline to apply the protections of Section 230(c)(1) to any third-party content selected in any way by the host website, including on social media platforms and in search engine results. Justice Sonia Sotomayor suggested that holding search engines accountable for dangerous third-party content recommended to users specifically looking for that content may be “going to the extreme,” demonstrating the justices’ possible reluctance to create such sweeping liability.6 On the other side of the argument, counsel for Google asserted that any algorithm or method of selecting recommended content should be protected under Section 230(c)(1)—even if the algorithm is designed to promote dangerous conduct—as long as the same algorithm is applied evenly to all users. Justice Elena Kagan wondered aloud whether Section 230 “should . . . really be taken to go that far?”7

FUTURE IMPLICATIONS

Gonzalez has opened the door to important questions, with wide-reaching implications, about the relationship between a manufacturer or content creator and the online platforms hosting or promoting its content. There is potential exposure for an online platform that takes further steps to direct content to the user—a third-party marketplace, for example, where the platform actively controls the products it hosts, the information it chooses to provide to consumers, and the types of algorithms it implements.

Can an online marketplace be liable when it bases recommendations not only on the consumer’s search history and activity, but also on the content that the consumer’s friends or family are viewing? If the marketplace’s algorithm advertises two products to be used together in an unsafe manner—contrary to the manufacturer’s warnings—who is liable to the consumer who is harmed by using those products together? If that consumer sues the manufacturer, can the manufacturer recover against the marketplace that gave the consumer the awful idea? Simply put, at what point does a hosting platform bear responsibility for what it shows its users?

The world has changed dramatically since Section 230(c)(1) was passed in 1996, and this issue will become only more important as the amount of time people spend online continues to increase. It is reported that the “average U.K. adult spends 16 hours per week on Facebook and nine hours on YouTube” and “most Americans look at their phones up to 352 times daily to check email, texts, Facebook, Instagram, and other sites and apps.”8 Meanwhile, social media platforms have demonstrated a propensity to prioritize self-interest in how they craft their user-content algorithms.

In 2021, the disclosure of Facebook’s internal documents revealed that, after a modification to the website’s algorithm three years earlier, data scientists reported “unhealthy side effects” that rewarded publishers and political parties for “reorienting their posts towards outrage and sensationalism.”9 Through its algorithm, a website like Facebook can “sculpt” the information landscape according to its business priorities.10 The Supreme Court may be poised to reshape the responsibilities (and liabilities) that come with that power.

At a time when the internet has never been more of an intertwined and ubiquitous presence in our lives, the Supreme Court’s interpretation of Section 230(c)(1) in Gonzalez has the potential to forever change the online landscape.

Attorneys at Hahn Loeser will be closely monitoring this case (a decision is expected in June 2023) and other developments in this area of law.

For more information or a consultation regarding your dispute, please contact Sarah Dunkley (sdunkley@hahnlaw.com) or Jack Battaglia (jbattaglia@hahnlaw.com), or reach either of them by phone at 312-637-3000.

1. Telecommunications Act of 1996, Pub. L. No. 104-104, 110 Stat. 56.

2. No. 21-1333 (cert. granted Oct. 3, 2022).

3. 47 U.S.C. § 230(c)(1).

4. Tr. of Oral Arg. 6-7.

5. Id. at 125.

6. Id. at 37-39.

7. Id. at 127-32.

8. Waltower, S., How Much Time Do Americans Spend Online and Texting?, Business News Daily (updated Feb. 21, 2023) (accessed at https://www.businessnewsdaily.com/4718-weekly-online-social-media-time.html).

9. Hagey, K. & Horwitz, J., Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead., The Wall Street Journal (Sept. 15, 2021) (accessed at https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=article_inline) (Facebook’s CEO Mark Zuckerberg “resisted some of the proposed fixes . . . because he was worried they might hurt the company’s other objective—making users engage more with Facebook”).

10. Oremus, W., et al., How Facebook Shapes Your Feed, Washington Post (Oct. 26, 2021) (accessed at https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/).