- Transition from an analog to a digital public sphere, with speech and associational rights regulated by companies; virality over veracity in online discourse; tensions between quantity and quality of information; implications for democracy
- Business model concerns, including new conceptions of monopoly and market power of digital platforms, as well as government efforts to promote market competition (e.g., antitrust regulation)
- Technology behind efforts to regulate speech in online communities, including content moderation practices, frontiers/innovations in speech regulation
- Comparative analysis of how global platforms operate in diverse communities with different speech traditions and politics
Content moderation is a really hard problem.
- Obvious to us in tech, but not to ordinary citizens
- What makes speech valuable? What about technology that changes that?
- The rise of private superpowers not beholden to the First Amendment: Facebook's and Twitter's policies on expression are more consequential than France's.
- There are more options than a binary show/don't show when it comes to content moderation
    - Subject content to fact-checking, attach warning messages, lower its reach through visibility filtering, impose user timeouts, etc. (see the sketch after this list)
- Whether content exists is different from whether content is seen
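A minimal Python sketch of this graded action space (the action names, inputs, and decision ladder are hypothetical illustrations, not any platform's actual policy engine):

```python
from enum import Enum, auto

class Action(Enum):
    """Hypothetical spectrum of moderation interventions, least to most severe."""
    SHOW = auto()          # leave content untouched
    LABEL = auto()         # attach a fact-check or warning message
    REDUCE_REACH = auto()  # visibility filtering: keep it up, rank it down
    TIMEOUT_USER = auto()  # temporarily restrict the author from posting
    REMOVE = auto()        # take the content down entirely

def moderate(violates_policy: bool, disputed: bool, borderline: bool) -> Action:
    """Illustrative decision ladder: removal and visibility are separate levers."""
    if violates_policy:
        return Action.REMOVE
    if disputed:
        return Action.LABEL          # content exists but carries a warning
    if borderline:
        return Action.REDUCE_REACH   # content exists but fewer people see it
    return Action.SHOW
```

The design point mirrors the note above: because removal and visibility are separate levers, whether content exists and whether it is seen can be decided independently.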
Salesforce has taken unspecified action to stop the RNC from sending messages that could incite violence. How far up or down the stack should content moderation go? Telecom companies? They are already expected to act as content moderators with regard to spam and robocalls…
“Awful but lawful” must be codified, as “lawful” alone is a very low bar for content moderation.
Community standards on social media have unprecedented scope.
When you can't define the science behind the “art” of moderation, how do you move forward?
We’ve seen three approaches to platform restrictions (see the sketch after this list):
- Banning an individual from a platform (e.g., Trump banned from Twitter, FB)
- Removing an app from the App Store or Google Play Store (e.g., Parler app removed from both stores)
- Suspending hosting for an app/social network (e.g., AWS cutting Parler’s cloud hosting)
…Are these approaches equally valid? If not, why not?
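To make the “layers of the stack” idea concrete, here is an illustrative Python sketch built from the three examples above; the layer names and structure are assumptions for illustration, not an established taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str        # hypothetical label for a level of the stack
    gatekeeper: str  # who can deny service at this level
    example: str     # example drawn from the notes above

STACK = [
    Layer("account",        "the platform itself",     "Trump banned from Twitter/Facebook"),
    Layer("distribution",   "app stores",              "Parler removed from the App Store and Google Play"),
    Layer("infrastructure", "cloud/hosting providers", "AWS cutting Parler's hosting"),
]

# Moving down the stack, each decision is blunter: an account ban removes one
# speaker, an app-store removal blocks new installs, and a hosting cut takes
# the entire community offline at once.
for layer in STACK:
    print(f"{layer.name:>14}: {layer.gatekeeper} ({layer.example})")
```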
Deplatforming Parler was a deplatforming of a platform. Apple asked Parler to provide a content moderation plan within 24 hours before kicking them off. Will Apple ask that of others? Is Apple now a content moderation reviewer?
Should we think of AWS as similar to or different from the major social media platforms with respect to the power/right they should have to deny access to their services?
How should the app store or cloud service provider determine whether or not to allow an app? Should it be based on the app’s stated Terms of Service or how the app is actually being used?
International human rights standards on incitement differ from existing content moderation policies… and international human rights standards value freedom of expression.
Will fringe audiences radicalize further as large public forums online disallow their conversations?
In a polarized society, people will look to the individuals with the most power and influence and thus the greatest ability to cause harm and further division.
Is it justifiable for a platform to handle accounts of public officials differently than those of the general public? Should this policy be different in democracies than it is in non-democracies? How so?
- Political officials who have an incentive to stay in power despite not being democratically elected should be held to the same standards as citizens, without special treatment (fact-checking, rule-breaking, etc.)
Should Twitter join Facebook’s review board? Mark shouldn’t decide things alone, and neither should Jack.
Does a review board remove accountability from the company itself? Is that a good thing? The review board’s incentive structure differs from the company’s business needs. The review board is an appeals court, not a decision-making body. Is public transparency in the board — its members, its budget, its process — even more important than what the board does?
What are decisions that courts should make vs. decisions that companies should make?
Historically, the deliberations and documents behind major decisions have been internal, which means there is a lack of public transparency and oversight.
- Mill, John Stuart. All Minus One: John Stuart Mill’s Ideas on Free Speech, Illustrated. Edited by Richard V. Reeves et al., Heterodox Academy, 2018.
- Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” John Perry Barlow Library, Electronic Frontier Foundation, 8 Feb. 1996.
- “Permanent Suspension of @RealDonaldTrump.” Twitter, Inc., 8 Jan. 2021.
- Nicas, Jack, and Davey Alba. “Amazon, Apple and Google Cut Off Parler, an App That Drew Trump Supporters.” The New York Times, 10 Jan. 2021.
- Klonick, Kate, and Sue Halpern. “Inside the Making of Facebook's Supreme Court.” The New Yorker, 12 Feb. 2021.
- Douek, Evelyn. “The Rise of Content Cartels.” Knight First Amendment Institute, Columbia University, 11 Feb. 2020.
- Newton, Casey. “Facebook Calls Australia's Bluff.” Platformer, 18 Feb. 2021.
- Mchangama, Jacob. “Rushing to Judgment: Examining Government Mandated Content Moderation.” Lawfare, 6 Feb. 2021.
- Kantrowitz, Alex. “Glenn Greenwald on Substack, Content Moderation, and Joe Rogan.” Audio version. Big Technology — OneZero, 3 Feb. 2021.
- Barrett, Paul M., and J. Grant Sims. “False Accusation: The Unfounded Claim That Social Media Companies Censor Conservatives.” NYU Stern Center for Business and Human Rights, New York University, 1 Feb. 2021.
- Roose, Kevin, et al. “Facebook Struggles to Balance Civility and Growth.” The New York Times, 24 Nov. 2020.
- Kaye, David. Speech Police: The Global Struggle to Govern the Internet. Columbia Global Reports, 2019.
- Thompson, Ben. “A Framework for Moderation.” Stratechery by Ben Thompson, 25 July 2020.
- van Mill, David. “Freedom of Speech.” Stanford Encyclopedia of Philosophy, Stanford University, 1 May 2017.
- Lyons, Kim. “Peloton Appears to Have Removed QAnon-Related Hashtags from Its Platform.” The Verge, 10 Oct. 2020.