The virtues of moderation, part II


[This is an excerpt from James Grimmelmann’s paper “The virtues of moderation” published in the Yale Journal of Law and Technology (2015). The relevance to our webzine should be obvious. The excerpt presented below was pieced together by SciSal editor Dan Tippens and is reproduced here with permission from the author. Part I was published a couple of days earlier.]

Now that we have seen the problems that moderation faces, we can discuss how it solves them. We have already met the basic definitions: a community of members who use shared infrastructure to exchange content. The members have roles: as owners of infrastructure, as authors and readers of content, and as moderators. These are the nouns in the grammar of moderation.

Techniques (Verbs)

The real study of moderation begins with the verbs of moderation — the basic actions that moderators can take to affect the dynamics of a community. There are four: excluding, pricing, organizing, and norm-setting [in this post, we will only focus on excluding, organizing, and norm-setting].

Excluding

Exclusion is fundamental in property theory because of its simplicity. Rather than attempt to calibrate specific uses, one simply excludes outsiders from all uses. In an online community, exclusion deprives the community of the contributions that those who are excluded could have made. But that loss can be justified when exclusion inhibits strategic behavior. It can be used against any form of strategic behavior by targeting those users who engage in that behavior — for example, to reduce congestion by excluding known spammers. The processes used to decide who will be excluded can fall anywhere along the spectrum from highly precise to absurdly crude. Mark Lemley individually vets each subscriber to the CyberProfs mailing list; for a time, Facebook was available only to users with a .edu email address. At any level of precision, a particularly important decision is whether the default is inclusion or exclusion. A default of inclusion gives everyone, well intentioned or not, at least one bite at the apple. Exclusion can also be applied independently to different roles. It is common, for example, to let anyone read a discussion board but to allow only registered users to post.
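[To make these design choices concrete, here is a minimal sketch in Python; it is an illustration added for this excerpt, not anything from Grimmelmann's paper, and the class and method names are invented. It composes a default of inclusion or exclusion with targeted blacklisting and role-specific checks, so that anyone may read but only registered or vetted users may post.]

```python
from enum import Enum, auto

class Role(Enum):
    READ = auto()
    POST = auto()

class Gatekeeper:
    """Toy model of exclusion: a default policy plus per-role checks."""

    def __init__(self, default_include: bool):
        self.default_include = default_include  # is the default inclusion or exclusion?
        self.blacklist: set[str] = set()   # known bad actors (e.g., spammers)
        self.whitelist: set[str] = set()   # individually vetted members
        self.registered: set[str] = set()  # users who have signed up

    def may(self, user: str, role: Role) -> bool:
        if user in self.blacklist:
            return False                    # targeted exclusion, under any default
        if role is Role.READ:
            return True                     # anyone may read...
        # ...but posting is gated, and the gate depends on the default:
        if self.default_include:
            return user in self.registered  # one bite at the apple for any registrant
        return user in self.whitelist       # exclusion by default: vetting required

# Usage: a board with a default of inclusion.
board = Gatekeeper(default_include=True)
board.registered.add("alice")
assert board.may("stranger", Role.READ) and not board.may("stranger", Role.POST)
```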

Organizing

Organization shapes the flow of content from authors to readers. It is the verb of moderation that most takes advantage of the informational capabilities of computers. Categorizing messages on a bulletin board by topic is organization. So is searching them by keyword, counting the number of messages, or deleting off-topic messages. These are all ways of remixing authors’ contributions to give readers a more satisfying experience. It is helpful to think of organizing techniques as being built up from several basic operations:

• Deletion is the removal of content. A bulletin board administrator who excises off-topic and profanity-laden posts is engaged in deletion.

• Editing is the alteration of content. It ranges from correcting typos to changing the very essence of a post. At the limit, editing is deletion plus authorship: the moderator rejects an author’s reality and substitutes her own.

• Annotation is the addition of information about content. eBay’s feedback system annotates buyers and sellers; Facebook’s Likes annotate posts and comments; Amazon’s user-written reviews and lists are annotations that have crossed the line and become content in their own right.

• Synthesis is the transformative combination of pieces of content. Wikipedia is the ultimate example of synthesis. There, small and heterogeneous changes by individual users are synthesized into entire encyclopedia entries. On a smaller scale, an online poll synthesizes individual votes into totals.

• Filtering is deletion’s non-destructive cousin: the content is still there, but readers see a specialized subset of it. A search engine filters; so does a blog’s list of the ten most recent comments. At the limit, filtering asymptotically approaches deletion: the ten-thousandth search result might as well not exist.

• Formatting is the styling of content for presentation to readers. Good typography improves readability; sensible ordering and grouping of images makes it possible to scan through them quickly.

Like the other verbs, organization is itself costly but can reduce strategic behavior.

Organization directly attacks cacophony by helping readers see only the content they want. At the same time, organization indirectly reduces cacophony by reducing the incentives for authors to create low-value content that readers don’t want and will never see. Only deletion directly attacks congestion, but all forms of organization have the same indirect effect of reducing the incentive to spam. On the other hand, organization can be a tool for manipulation in the hands of self-interested moderators. Think, for example, of a Judean People’s Front sympathizer deleting every mention of the People’s Front of Judea on a Roman forum [2]. Finally, depending on how it is used, organization can either greatly amplify or greatly inhibit abuse: compare a gossip site that deletes crude sexual comments with one that invites them.
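[The non-destructive character of filtering, in contrast to deletion, can also be sketched briefly; again this is an invented illustration, not the paper's, and the data model is hypothetical. The underlying store keeps everything, while readers see only a ranked slice.]

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Comment:
    author: str
    text: str
    posted_at: datetime
    deleted: bool = False  # deletion removes content from readers' view outright

def recent_comments(store: list[Comment], limit: int = 10) -> list[Comment]:
    """Filtering: the full store is untouched; readers see only a slice.

    The ten-thousandth result "might as well not exist", yet it is still there.
    """
    visible = sorted((c for c in store if not c.deleted),
                     key=lambda c: c.posted_at, reverse=True)
    return visible[:limit]
```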

Norm-Setting

Moderation’s biggest challenge and most important mission is to create strong shared norms among participants. Norms can target every form of strategic behavior. For example, if every author refrains from personal attacks, there is no further personal-attack problem to be solved. Beneficial norms, however, cannot simply be set by fiat. By definition, they are an emergent property of social interactions. Moderators have limited power over group norms. Most of the levers they can pull will only nudge norms in one direction or another, possibly unpredictably. Good norm-setting is a classic example of know-how. There are heuristics, but knowing whether to chastise an uncivil user publicly or privately is not a decision that can be made in the abstract. Blogger Jason Kottke summed up the challenges of norm-setting with characteristic verve:

“Punishing the offenders and erasing the graffiti is the easy part . . . [F]ostering ‘a culture that encourages both personal expression and constructive conversation’ is much more difficult. Really fucking hard, in fact . . . it requires near constant vigilance. If I opened up comments on everything on kottke.org, I could easily employ someone for 8-10 hours per week to keep things clean, facilitate constructive conversation, coaxing troublemakers into becoming productive members of the community, etc. Both MetaFilter and Flickr have dedicated staff to perform such duties . . . I imagine other community sites do as well. If you’ve been ignoring all of the uncivility on your site for the past 2 years, it’s going to be difficult to clean it up. The social patterns of your community’s participants, once set down, are difficult to modify in a significant way.”

Some communities depend on shared norms. Discussion groups, for example, are acutely sensitive to group norms. It only takes a few determined spammers or trolls to bring a discussion to a screeching halt. But other communities can prosper even when some norms are widely flouted. Spammers and trolls still abound on the Internet, but they have not yet managed to ruin it for everyone. Google may not be able to make spammers clean up their act, but it can hide their antics. The difference illustrates the two roles that the other verbs of moderation can play. Sometimes, they keep order directly, in the face of bad behavior; at other times, they keep order indirectly, by encouraging good behavior. That is, the other three verbs are both substitutes for and sources of norms, and communities vary in the balance they strike between these two roles.

Moderators can influence norms directly by articulating them. They can do this either in general, with codes of conduct and other broad statements of rules, or in specific cases by praising good behavior and criticizing bad. The difference is the difference between “Don’t post images containing nudity” and “This post has been deleted because it contained nudity.” Note, however, that stating a norm does not automatically promote it. There is empirical evidence that, in some circumstances, expressing a norm about user behavior can induce exactly the opposite response. Moderators can also influence norms indirectly, through the other verbs. A list of “new and noteworthy posts” doesn’t just help users find good posts through organization, it also educates them in what makes a post good in the first place. Put another way, moderators can use the other three verbs not just to regulate but also to nudge.

The flip side of this point, though, is that any time a moderator uses one of the other verbs, she nudges participants’ norms, whether she intends to or not. For example, excluding a well-known commenter can reduce participants’ sense of trust in a moderator, even if the exclusion is justified. Experienced moderators evaluate every design decision in terms of its effects on community norms. A few particularly important ways to promote good norms reflect the accumulated wisdom of community managers.

By far the most significant is fostering a sense of shared identity that reinforces participants’ sense of belonging and their commitment to the good of the community. Another is the initiation of new participants, who must be taught the community’s expectations at the same time as they are made to feel welcome. Highlighting good behavior and hiding bad behavior reinforce participants’ sense that good behavior is prevalent while also teaching them what to do. Relatedly, designers frequently worry about how to balance competitive and cooperative impulses. Competition can spur users to individual effort at the cost of social cohesion, and different communities strike the balance differently.

Distinctions (Adverbs)

Transparently / Secretly 

Every moderation decision has some observable consequences, but some are more observable than others. Transparent moderation makes explicit and public what the moderators have done and why, revealing what the overall moderation policies are and how they apply in each specific case. Secret moderation hides the details. This distinction is really a spectrum: moderation could be transparent about the what but not the why, or transparent only some of the time. Generally speaking, transparency takes additional work to implement, just as having judges give reasoned explanations of their decisions increases the judicial workload. It is easier to be secretive about some kinds of moderation than others. Someone who is excluded from a community will generally be able to tell that they are being denied access, although it is sometimes possible to disguise the fact that it is deliberate.

Organization has the most room for secrecy. Search users don’t know what pages Google hides from them; Facebook users may not realize that the News Feed is only a partial list of posts from friends. Conversely, secret norms are close to an oxymoron: norms must be known to be effective. The choice between transparency and secrecy in exclusion, pricing, and organization can have indirect effects on norms. On the one hand, transparency enhances legitimacy, providing community support for moderation; on the other hand, secrecy raises fears of censorship and oppression.
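[A short invented sketch of the same removal performed transparently or secretly may help; nothing here comes from the paper. Transparent moderation publishes the what and the why, and pays the extra bookkeeping cost of doing so, while secret moderation simply makes content vanish.]

```python
from dataclasses import dataclass, field

@dataclass
class ModerationState:
    hidden: set[int] = field(default_factory=set)        # the "what"
    public_log: list[str] = field(default_factory=list)  # the "why", if published

def remove_post(state: ModerationState, post_id: int, reason: str,
                transparent: bool) -> str | None:
    state.hidden.add(post_id)
    if transparent:
        # Transparency costs extra work: a reason must be written and published.
        notice = f"Post {post_id} removed: {reason}"  # shown in place of the post
        state.public_log.append(notice)
        return notice
    return None  # secret: readers never learn the post existed
```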

Centrally / Distributedly 

Moderation decisions can be made either centrally by a single moderator whose decision affects the entire community, or by multiple distributed moderators whose individual decisions affect only part of the community. For the most part, the consequences are as one would expect, and track the usual legal debates about hierarchy and federalism. Centralized moderation provides consistency. Distributed moderation promotes diversity.

Centralized moderation offers the ability to stop unwanted content and participants by creating a single checkpoint through which all must pass: a spammer kicked off of Facebook will not bother anyone else on Facebook. But chokepoints are also single points of failure: a spammer who gets through on Facebook can bother a lot of people. In comparison, distributed moderation offers more robustness and defense in depth.

Centralized moderation offers a clear focal point for policy-making. If you don’t like my post, you know where to complain. Distributed moderation permits those with ideological differences to agree to disagree: if you don’t want to read my weblog, no one is putting it in front of you. In a sense, the choice between centralized and distributed exclusion is the choice between a single community and many. But norms, by their nature, cannot be fully centralized. The power to adopt, shape, or reject them is always in the hands of members. The larger a community, the more competing voices and normative focal points it is likely to have.

Identity

The final community characteristic is the distinction between identity and anonymity. At one extreme, participants in an online community could be completely identified, bringing with them a complete biography of their online and offline lives. At the other, they could be completely anonymous. Compare Google+, which launched with a strict, stringently enforced, and much-criticized “real names” policy, with 4chan, where “most posts . . . are disconnected from any identity.”

There are many gradations in between. Participants could have identities that mostly match their offline lives, but in which the details are potentially questionable, as in an online dating service where participants sometimes lie about their height. They could have rich and persistent but avowedly fictitious identities, as in virtual worlds where they play the same avatar thirty hours a week for years. They could have stable but thin identities, as on a discussion board that uses pseudonyms and keeps users’ real names and email addresses secret. They could have thin identities purely as a matter of convention, as in some blogs’ comment sections, where a commenter can pick a fresh display name with each comment.

Participants could even have one level of identifiability at the infrastructure level (supply a valid email address to sign up) but a different level at the content layer (that email address is hidden from other participants). Whatever its nature, the most important role of identity is creating stable reputations through time so that others can link past behavior to a present identity. All four verbs of moderation can tap into identity. Exclusion absolutely depends on it; without identity, the distinction between “outsiders” and “insiders” collapses. You can identify the unwanted outsiders and blacklist them or identify the wanted insiders and whitelist them, but both versions require some notion of identity. As anyone who has moderated a blog’s comments can testify, this is a huge problem for communities that are open to new and unknown members from the Internet at large. There is often no way to tell that a “new” commenter is actually an old and well-known miscreant, back from a ban for another round of malice.
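[The layered identity described above, and the returning-miscreant problem, can be sketched as follows; the code is illustrative only, with invented names. The infrastructure layer holds an email address, the content layer exposes only a pseudonym, and a ban is only as strong as the identifier it keys on.]

```python
from dataclasses import dataclass

@dataclass
class Account:
    email: str      # infrastructure layer: required at sign-up, hidden from others
    pseudonym: str  # content layer: the only identity other participants see

class Community:
    def __init__(self) -> None:
        self.accounts: dict[str, Account] = {}  # keyed by pseudonym
        self.banned_emails: set[str] = set()

    def join(self, email: str, pseudonym: str) -> bool:
        # Exclusion depends on identity: without a stable identifier,
        # outsiders and insiders are indistinguishable.
        if email in self.banned_emails:
            return False
        self.accounts[pseudonym] = Account(email, pseudonym)
        return True

    def ban(self, pseudonym: str) -> None:
        account = self.accounts.pop(pseudonym, None)
        if account is not None:
            self.banned_emails.add(account.email)
        # A banned miscreant with a fresh email address looks like a
        # brand-new member: the ban is only as strong as the identifier.
```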

It is well known that identifiability plays a significant role in setting social norms. Persistent reputations make it possible for participants to build credibility as respected elders within the community. They make it possible to hold participants accountable for their actions, enabling the effective monitoring and graduated sanctions beloved by commons scholars. By contrast, anonymity enables consequence-free norm violation and can undermine the appearance of reciprocity among real human beings. But stronger identity is not always better. Sometimes it creates a badge for misbehavior: a leaderboard is an invitation to fame-seeking cheaters.
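[The “graduated sanctions” mentioned here are straightforward to sketch; the thresholds below are invented purely for illustration. The point is that persistent identity lets a community escalate rather than treat every violation as a first offense.]

```python
def sanction(violations: int) -> str:
    """Escalate with a participant's persistent record of violations.

    Thresholds are illustrative; real communities tune them case by case.
    """
    if violations <= 1:
        return "public warning"  # articulate the norm
    if violations <= 3:
        return "24-hour mute"    # a temporary, reversible cost
    return "ban"                 # exclusion as a last resort
```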

Making participants more anonymous (for example, by resetting a server) can drive trolls away because it deprives them of the opportunity to make a (bad) name for themselves. Paradoxically, both identity and its opposite — anonymity — can be expensive to establish. Externally produced identity requires participants to prove facts about themselves, which can cost both time and money. It also requires owners and moderators to be prepared to check these assertions, which is also costly. Internally produced reputation systems require participants to take the time to learn about, comment on, and rate each other.

An important question for online communities is who controls these socially constructed identities: users themselves, the community, or infrastructure owners. Anonymity might seem cheaper, but genuinely effacing participants’ identities requires some significant effort — deleting log files, stripping out snooping software, and taking action against participants who “out” one another’s offline identities.

Finally, identity can be the enemy of privacy, for good and for bad. Divulging information about oneself is itself a cost. Privacy is virtually a precondition for some kinds of speech. Some conversations simply cannot take place in public. This phenomenon can be good: think of therapeutic conversations on a discussion board for adult victims of childhood abuse. It can also be bad: think of virulently misogynistic and racist conversations on a law student board. Two forms of abuse are characteristically tied to the misuse of identity. Impersonation — the hijacking of another’s identity — requires that participants have recognizable identities to hijack. And sock puppetry — creating fake personas to create the false appearance of support for a position — requires that the community recognize personas as distinct participants in the first place. Both become possible when a community accepts claims of identity that it is not capable of properly validating.

_____

[2] Monty Python’s Life of Brian: “What have the Romans ever done for us?”

8 thoughts on “The virtues of moderation, part II”

  1. The whole idea of norms in an online forum is slightly extraordinary. Given that online fora are mostly open to anyone, and anonymous in practice even when not in theory, the fact that norms can develop is remarkable. Perhaps it’s just me, but I think I always assumed that norms were a kind of negative restriction, endured by community members because they had very limited choices, and needed to have a community.

    But online, those things aren’t true. There is no need for anyone to be a part of any online community; and there are very few restrictions on people’s choices. No-one needs to endure norms which seem restrictive to them.

    So it must be the case that where norms exist in an online community, they are in fact agreeable (at least the vast majority of the time) to all members. In some ways that makes them less like norms, and more like a kind of identity. It also implies that norms cannot be imposed (without running the risk of dispersing the community and starting again with new people); they can only be built.


  2. Fostering ‘a culture that encourages both personal expression and constructive conversation’ is much more difficult

    Indeed.

    “A culture that encourages both personal expression and constructive conversation” happens when the group develops a shared set of norms. It is not an accident but is the result of the group learning from the editors/authors and in part from each other. The question is how one promotes this learning process.

    Moderation alone cannot do this since it is a blunt binary instrument that can harm as much as it helps. I suggest that four essential elements must be present to develop a culture of personal expression and constructive conversation in forums such as Scientia Salon.

    1) stimulating essays that contain at least some provocative thinking. The honey that draws in the bees;
    2) mild moderation to restrain egregious abuses. The filter that keeps out the wasps;
    3) guidelines to good conversational practice. The route map. See the 12 Commandments of Commentary as an example;
    4) responsive authors and Editors. The tour guides.

    My own observations lead me to believe that, while all four elements are essential, responsive authors and editors do the most to develop effective communities. That is because, by responding to commentators, they make them feel valued. This is an important reason why people comment in the first place: they feel they have something useful/valuable to say and they want recognition of that.

    The nature of the response can:
    1. give affirmation/recognition;
    2. encourage thoughtful responses by examining them even when not agreeing with their conclusions;
    3. discourage less useful responses;
    4. guide the commentator to deeper insights;
    5. simply ignore plainly unhelpful comments; that too is a form of guidance, and one that is quickly noted.

    This brings up the issue of partisanship in the way authors/editors respond. They, like the rest of us, have points of view. The author has expressed his point of view in his essay, which should be enough. I suggest that while the authors may defend their point of view, the editors, when responding, should put aside their own and adopt an even-handed approach, treating all points of view as worthy of attention. It is not their job to win arguments.

    I believe that having responsive editors/authors who also adopt an even-handed approach does the most to develop the stated aim of “a culture that encourages both personal expression and constructive conversation”. The way they respond communicates the norms and reinforces them.

    Now it is time to give credit. Massimo is particularly good in the way he takes excerpts from comments and replies to them. Now we need the authors to do more of the same. I also think that Dan-T could become more involved in this process. Moreover I recommend that Aravis should be roped in to take advantage of his pedagogical skills and supplement this editorial guidance function.

    And it is time for some mild criticism. When performing this function of responsiveness I believe that the Editors should be less partisan. I don’t believe they should be winning arguments.


  3. Interesting thoughts, Labnut. Thanks for sharing. Perhaps the editors will discuss ways we can be more effective in our responses to comments (though I think Massimo and Aravis are the most relevant people here).


  4. Labnut has obviously given these matters a great deal of thought and he makes some valid points (as does Phil, though I think he is using the word ‘norms’ in an idiosyncratic way).

    But my impression (please correct me if I am wrong) is that things are not working out on this site as planned. As labnut suggests, authors often fail to comment, for example.

    The meta-honey (to extend labnut’s metaphor) is the audience (size as well as quality) because a big audience attracts not only quality commenters but also quality authors.

    SciSal started off with some big name authors and with a comment section more active than it now is. There were sometimes hundreds of comments per essay and, though certain commenters tended to dominate the discussion, there was still a wider variety of commenters than we seem to have now.

    Most importantly, I have the sense that the wider (silent) audience was both bigger and more varied back then. But I don’t have access to the data, so I’m really just guessing.

    As I indicated previously, I favour a slightly more open approach — e.g. filtering (or deleting) only blatantly offensive or obviously crankish comments.

    My point here, however, is that this sort of discussion may be a bit self-indulgent and ultimately futile if the site is failing to hold on to old readers or attract large numbers of new ones.


  5. Mark,

    Just in terms of facts: our audience has actually grown, in terms of number of readers, and it is as (geographically, at least) varied as always. Getting authors engaged has always been a challenge, and I’m doing my best. Readers seem to be doing very well in the comments section anyway. The fact that there are fewer comments is an obvious outcome of the 5-5-500 rule, more than of moderation policies (we actually reject very few comments, and very often by the same 2-3 repeat offenders…). Overall, I’m happy, but of course that’s my assessment.


  6. Aravis,
    …catering a wedding, for friends

    A philosopher catering for a wedding!
    Food for thought?
    A marriage of minds?
    Occam’s razor says you can’t have your cake and eat it (except in the multiverse).
    Here is a suitable cake design – http://bit.ly/1GRgIqi
    Presumably a professorial caterer is one step better than a professional caterer?

    Mark,
    …given these matters a great deal of thought

    I think you have just given us a pithy definition of philosophy – giving matters a great deal of thought.


  7. Couple of notes.

    First, to Mark English, actually going back to the first essay. One thing you said there reminded me of John Horgan’s “The End of Science” book. It’s a good read.

    Next, to both you and Massimo on comments.

    1. Quality, not quantity, right?
    2. Just three weeks ago, the Harris piece (of course, he’s a lightning rod anyway, and rather than wrestle with a few other commenters I skipped it) got more than 100 comments. Three weeks before that, part two of Block got 70+. And Massimo’s well-placed kick in the shins for both “movement” skepticism and atheism got more than 200. The piece on causation had more than 100 comments and the one on chimp rights nearly 100. I think it looks lower at times because two-part essays get comments split between the two parts.
    This particular piece is a good example of that. It’s primarily a set of extended bullet points off the first essay, going in depth about particular issues, tools, and styles of moderation.
    3. As for the paucity of comments: who was reading this particular post yesterday, anyway? Not a slam on this post, just placing it in the context of a MUCH bigger news reality in the States.

    Labnut: you have to remember that the editors here, primarily Massimo, but sometimes Dan, along with Aravis, are also authors here. At least on pieces where they’re the authors, they should have as much right to “advocacy” in comments as anybody else. And Massimo, as publisher as well, to analogize to the newspaper world, should have a right to advocacy on anything, as part of explaining why a particular piece is on this site.

    It is, after all, his site.

    ==

    As for the piece?

    First, on the tools. I think good moderation should make clear in what order such tools will generally be used. That, in turn, connects with norm-setting; spelling out the style of moderation usually has some effect on the content, and focus, of comments. As for clarity, transparency, etc., I think it’s helpful if they are as public/open as possible, and, when a particular norm is questioned, if the response tends that way too.

    To riff on a comment I made on the first essay, that’s why a lot of people don’t like Facebook. Changes to its terms of use appear confusing, and designed for monetary ends. Beyond that, the great difference in filtration of nudity/sex vs. violence has never really been explained, other than perhaps as an “assumed background” from FB being American-based, and that difference being one of America in general vs. the rest of the “Western” world.

    Liked by 1 person
