[This is an excerpt from James Grimmelmann’s paper “The virtues of moderation” published in the Yale Journal of Law and Technology (2015). The relevance to our webzine should be obvious… The excerpt presented below was pieced together by SciSal editor Dan Tippens and is reproduced here with permission from the author. Part II will appear in a couple of days.]
If you’ve never seen the image known as “goatse,” trust me — you don’t want to. But if you have, you understand why it was such a disaster when this notoriously disgusting photograph showed up on the website of the Los Angeles Times on June 19, 2005. It wasn’t a hack. The newspaper had invited its readers to post whatever they wanted. One of them posted a gaping anus. It had started off innocently enough. Inspired by Wikipedia, the Times launched a “wikitorial,” an editorial that any of the paper’s readers could edit. At first, readers fought over its position: should it be for or against the Iraq War? Then one boiled the argument down to its essence — “Fuck USA” — touching off an edit war of increasingly rapid and radically incompatible changes. By the second day, trolls were posting hardcore pornography, designed to shock and disgust. The Times pulled the plug entirely in less than forty-eight hours. What had started with “Rewrite the editorial yourself” ended with the admission that “a few readers were flooding the site with inappropriate material.”
The wikitorial debacle has the air of a parable: the Los Angeles Times hung a “KICK ME” sign on its website, and of course it got kicked. Open up an online community, and of course you’ll bring out the spammers, the vandals, and the trolls. That’s just how people act on the Internet. But consider this: the Times’ model, Wikipedia, is going into its thirteenth year. It is the sixth most-visited website on the Internet. And despite being a website “that anyone can edit,” it remains almost entirely goatse-free. Anarchy on the Internet is not inevitable. Spaces can and do flourish where people collaborate and where all are welcome. What, then, separates the Wikipedias from the wikitorials? Why do some communities thrive while others become ghost towns?
The difference is moderation. Just as town meetings and debates have moderators who keep the discussion civil and productive, healthy online communities have moderators who facilitate communication. A community’s moderators can promote posts or hide them, honor posters or shame them, recruit users or ban them. Their decisions influence what is seen, what is valued, what is said. When they do their job right, they create the conditions under which cooperation is possible. Wikipedia, for all its faults, is moderated in a way that supports an active community of mostly productive editors. The Los Angeles Times, for all its good intentions, moderated the wikitorial in a way that provided few useful defenses against vandals. Wikipedia’s moderation keeps its house in order; the Times gave arsonists the run of the place.
The Problem of Moderation
By “moderation,” I mean the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse. Our object of study is an online community. A community can be as small as the handful of people on a private mailing list or as large as the Internet itself. Communities can overlap, as anyone on both Twitter and Facebook knows. Communities can also nest: the comments section at Instapundit is a meaningful community, and so is the conservative blogosphere. There is little point in being overly precise about any given community’s boundaries, so long as we can identify three things: the community’s members, the content they share with each other, and the infrastructure they use to share it.
The Internet as a whole is both an agglomeration of numerous communities and a sprawling, loosely knit community in its own right. Its moderation includes both the moderation within its constituent communities and moderation that cannot easily be attributed to any of them. Thus, even though it is not particularly helpful to talk about Google as a community in its own right, it and other search engines play an important role in the overall moderation of the Web.
Members can wear different hats: there are owners of the infrastructure, moderators of the community, and authors and readers of content. For example, on YouTube, Google owns the infrastructure; video uploaders are authors; video viewers are readers; and the moderators include everyone who clicks to flag an inappropriate video, the algorithms that collate user reports, and the unlucky YouTube employees who manually review flagged videos. Owners occupy a privileged position because their control over infrastructure gives them unappealable control over the community’s software-based rules. This control lets owners decide who can moderate and how. Moderators, in turn, shape the flow of content from authors to readers. Of course, members can wear multiple hats. “NO SPOILERS!” is both content and a gently chiding act of moderation.
Members have varied motivations. Authors want their messages to be seen; readers with diverse tastes seek content of interest to them. Moderators, like authors, want to promote the spread of content they care about. All of them can derive personal fulfillment and a sense of belonging from participation. Because the same person could be an author, reader, moderator, and owner, these motivations interrelate. Thus, for example, users connect their computers to peer-to-peer networks to download files they want, but in the process they make files on their computers available to other users. They are willing to act as owners supplying infrastructure because of the value they receive as readers receiving content. Similarly, participants on a discussion forum may shoulder some of the work of moderation by flagging unwanted posts for deletion because they enjoy being part of a thriving community. Divergent motivations become important only when there is a clear separation of roles (e.g., paid professional moderators) or when a community is torn between participants with incompatible goals (e.g., amateur and professional photographers).
From these individual motivations, we can derive goals for moderation overall. Broadly speaking, moderation has three goals. First, a well-moderated community will be productive: it will generate and distribute valuable information goods. Some of these information goods are valuable in themselves (Welcome to Night Vale fan fiction), others because they facilitate transactions (Freecycle listings), and others because they are part of socially important systems (political discussions). Productivity is the greatest common divisor of moderation goals, the one that everyone can agree on. Members share in the gains from productivity as authors and readers. Society gains, too, when valuable information spreads beyond the community — a classic example of a positive spillover.
Second, moderation can increase access to online communities. Openness is partly about efficiency: more members can make the community more productive. But openness also has moral consequences: cutting people off from a community cuts them off from the knowledge the community produces. Openness exists along a spectrum. A wiki usable by anyone on the Internet is more open than a wiki open to anyone on a school’s network, which is in turn more open than a password-protected wiki open only to the graduate students of the geology department. An important aspect of openness is democracy — participation in moderation and in setting moderation policy. Again, part of the justification is instrumental: broad participation can help make moderation more effective. But it can also be important in itself for members to have a voice in making moderation decisions. Democratic moderation is online self-governance.
Third, a well-moderated community will have low costs: it will do its work while making as few demands as possible on the infrastructure and on participants. Costs here include the obvious computational ones — servers, hard drives, network connections, electricity, etc. — but also the work required of participants, such as flagging a post for removal, removing a flagged post, or appealing an incorrectly removed post. Individual decisions may be small, but they add up quickly. Yahoo saved one million dollars per year in customer support costs by substantially automating its moderation system for Yahoo Answers. These virtues are incommensurable. Different moderation techniques inevitably trade off among them. Excluding the heaviest users, for example, hurts productivity and openness while also reducing costs. Even productivity and cost, both efficiency concerns, have distributional components: two members may agree that a burden is worth bearing but disagree on who should bear it.
The interface between infrastructure and information is vulnerable to some predictable forms of strategic behavior, including spam, harassment, and other familiar pathologies of online life. These are the abuses against which moderation must guard. Moderation need not prevent them entirely — and probably cannot without killing the commons — but it must keep them within acceptable bounds without driving up the costs of moderation itself to unacceptable levels. The abuses fall into four broad categories: congestion, cacophony, abuse, and manipulation. The first pair of problems involves overuse. Each participant’s contribution of content makes demands both on the infrastructure and on other participants. At the infrastructure level, overuse causes congestion, which makes it harder for any information to get through and can cause the infrastructure to stagger and fail. At the content level, overuse causes cacophony, which makes it harder for participants to find what they want. In trademark terms, they must incur search costs to sort through the information available to them.
Both congestion and cacophony are problems of prioritization: bad content crowds out good, to the private benefit of the content’s promoters but at an overall cost to the community. The difference is that in congestion, the resource constraint is the infrastructure’s capacity, whereas in cacophony, the constraint is participants’ attention. Spam is the classic example of overuse causing both congestion and cacophony. A denial-of-service attack is an attempt to create congestion for its own sake.
Like abuse, manipulation is distinctively a problem of information exchange: it is possible whenever some information can be deleted entirely, or when participants can exploit each other’s cognitive limits. A classic pathological case of manipulation is the edit war, in which wiki users with conflicting ideologies engage in a wasteful conflict to make a page reflect their point of view. The difference is that, in abuse, the content itself is the problem, while in manipulation, worthwhile content is handled in a way that harms the community. The dueling pro- and anti-war edits to the Los Angeles Times wikitorial were manipulation; the pornography that followed was abuse.
Grimmelmann, James. The Virtues of Moderation. 17 Yale J.L. & Tech. 42 (2015).