Inventive Ways to Control Trolls

To keep the peace on the ever-expanding Stack Exchange Network of online communities, owners Joel Spolsky and Jeff Atwood introduced the timed suspension of disruptive users’ accounts. Over time, the transparency of the timed-suspension process occasionally proved inefficient, as discussions kept arising over the merits of particular suspensions. This led the communities’ administrators to investigate other ways of moderating problematic users.

What they found were three fantastically devious secret ways to effectively control trolls and other abusive users in online communities: the hellban, the slowban, and the errorban.

A hellbanned user is invisible to all other users, but crucially, not to themselves. From their perspective, they are participating normally in the community but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost. It’s a clever way of enforcing the “don’t feed the troll” rule in the community. When nothing they post ever gets a response, a hellbanned user is likely to get bored or frustrated and leave. I believe it, too; if I learned anything from reading The Great Brain as a child, it’s that the silent treatment is the cruelest punishment of them all. […]

(There is one additional form of hellbanning that I feel compelled to mention because it is particularly cruel – when hellbanned users can see only themselves and other hellbanned users. Brrr. I’m pretty sure Dante wrote a chapter about that, somewhere.)
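
Both flavors of hellban can be sketched in a few lines. The following is a hypothetical illustration only (invented usernames and data structures, not Stack Exchange’s actual implementation); the `mutual` flag models the crueler variant in which the banned see only themselves and each other:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

HELLBANNED = {"troll42", "troll99"}  # hypothetical set of hellbanned usernames

def visible_posts(posts, viewer, mutual=False):
    """Filter a thread down to what this viewer should see."""
    if mutual and viewer in HELLBANNED:
        # Crueler variant: the banned see only themselves and each other.
        return [p for p in posts if p.author in HELLBANNED]
    # Standard hellban: a banned author's posts are visible only to that author.
    return [p for p in posts if p.author not in HELLBANNED or p.author == viewer]

thread = [Post("alice", "Great post!"), Post("troll42", "u all suck")]

# Normal users never see the hellbanned user's posts...
assert [p.author for p in visible_posts(thread, "alice")] == ["alice"]
# ...while the hellbanned user sees the thread as if nothing happened.
assert [p.author for p in visible_posts(thread, "troll42")] == ["alice", "troll42"]
# In the crueler variant, a fellow hellbanned user sees only the banned.
assert [p.author for p in visible_posts(thread, "troll99", mutual=True)] == ["troll42"]
```

The key design point is that the filter runs at read time, so the banned user’s own writes succeed normally and nothing in their experience betrays the ban.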

A slowbanned user has delays forcibly introduced into every page they visit. From their perspective, your site has just gotten terribly, horribly slow. And stays that way. They can hardly disrupt the community when they’re struggling to get web pages to load. There’s science behind this one, too: research from Google and Amazon shows that every page-load delay directly reduces participation. Get slow enough, for long enough, and a slowbanned user is likely to seek out greener and speedier pastures elsewhere on the internet.
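
As a rough sketch of how a slowban might be wired in, here is hypothetical WSGI middleware (the user-lookup hook, ban list, and delay value are all invented; a real site would key the delay off the authenticated account and likely randomize it so it is harder to diagnose):

```python
import time

SLOWBANNED = {"troll42"}  # hypothetical set of slowbanned usernames
DELAY_SECONDS = 2.0       # arbitrary; a real site might ramp this up over time

class SlowbanMiddleware:
    def __init__(self, app, get_user):
        self.app = app
        self.get_user = get_user  # callable: environ -> username or None

    def __call__(self, environ, start_response):
        if self.get_user(environ) in SLOWBANNED:
            time.sleep(DELAY_SECONDS)  # every single page crawls for this user
        return self.app(environ, start_response)
```

Because the delay is injected before the wrapped application runs, every page the banned user touches is uniformly slow, which looks exactly like a struggling server rather than a punishment.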

An errorbanned user has errors inserted at random into pages they visit. You might consider this a more severe extension of slowbanning – instead of pages loading slowly, they might not load at all, return cryptic HTTP errors, return the wrong page altogether, fail to load key dependencies like JavaScript and images and CSS, and so forth. I’m sure your devious little brains can imagine dozens of ways things could go “wrong” for an errorbanned user. This one is a bit more esoteric, but it isn’t theoretical: an implementation already exists in the form of the Drupal Misery module.
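
In the same hypothetical middleware style, an errorban might look like the sketch below (the names, error mix, and rate are invented for illustration; the Drupal Misery module is the real-world implementation, with its own configurable set of miseries):

```python
import random

ERRORBANNED = {"troll42"}  # hypothetical set of errorbanned usernames
ERROR_RATE = 0.4           # fraction of requests that "go wrong"

class ErrorbanMiddleware:
    def __init__(self, app, get_user, rng=random.random):
        self.app = app
        self.get_user = get_user  # callable: environ -> username or None
        self.rng = rng            # injectable for deterministic testing

    def __call__(self, environ, start_response):
        if self.get_user(environ) in ERRORBANNED and self.rng() < ERROR_RATE:
            # Serve a plausible-looking failure instead of the real page.
            status = random.choice(["500 Internal Server Error",
                                    "502 Bad Gateway",
                                    "404 Not Found"])
            start_response(status, [("Content-Type", "text/plain")])
            return [b"Something went wrong. Please try again."]
        return self.app(environ, start_response)
```

Keeping the failure rate below 100% is part of the cruelty: intermittent errors are far harder to diagnose, or to prove deliberate, than a site that never loads at all.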


6 responses to “Inventive Ways to Control Trolls”

  1. These are *brilliant* – my mind is racing to think of other ideas. I wonder if you could do anything around removing anonymity, since that seems to be one of the reasons that many trolls get comfortable.

  2. I think these methods are great for managing trolls without overtly censoring their comments/activities.

    Clive Thompson discussed some other methods to reduce their impact, but I find his solutions not as elegant as these.

    Your idea of removing anonymity also goes to the heart of one solution suggested in the New York Times a while ago: persistent pseudonymity.

    However, I prefer your idea of removing the anonymity of trolls, rather than keeping an identity persistent across the Internet. I can imagine using IP geolocation in some way here, for example by making the displayed name change from “Anonytroll” to “Troll in the UK” to “Troll in Greater London” to “Troll in North London” to, finally, “Troll in Camden, London”.

    Nice…
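
The progressive de-anonymization idea in that comment can be made concrete with a toy sketch (entirely hypothetical: the lookup table and usernames are invented, and a real site would resolve IP addresses to locations with a GeoIP database rather than a hard-coded dict):

```python
# Hypothetical geolocation results, coarsest to finest, for a
# documentation-range IP address. Invented data for illustration.
GEO = {"203.0.113.7": ["the UK", "Greater London", "North London", "Camden, London"]}

def display_name(ip, strikes):
    """More strikes -> a more geographically specific 'Troll in ...' label."""
    places = GEO.get(ip)
    if not places or strikes == 0:
        return "Anonytroll"
    return "Troll in " + places[min(strikes, len(places)) - 1]

assert display_name("203.0.113.7", 0) == "Anonytroll"
assert display_name("203.0.113.7", 1) == "Troll in the UK"
assert display_name("203.0.113.7", 4) == "Troll in Camden, London"
```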

  3. Oh nice.. disemvowelling! I like the idea of the comment remaining but changed somehow, so that the rest of the community gets a sense of what’s okay and what’s not.

    It’s interesting to think about how you can enforce non-anonymity in increasing measures. The geo-IP idea definitely works, but you could in theory require people to give away more and more details about themselves, or to prove somehow that those details are true (e.g. “this is a recent photo of me, here’s a newspaper”), or to use a Facebook login that has over 20 friends, etc. You could require that they make a verified email publicly available, or disclose where they work. Not sure if there are any legal constraints on this, but I don’t see why there would be. Amazing to think that The Huffington Post needs 25 full-time staff to comb through 35,000 comments a day; no reason why they couldn’t also be involved in enforcing and validating the anonymity disclosure.

  4. Hey Lloyd one other thing – was reading last night about the #MyTramExperience video and reading some of the comments on The Periscope Post article about it.

    Until a moment ago I would have said that using Facebook comments – and so removing some anonymity – is a great way to stop hateful comments.

    But some of these comments prove me wrong! I forgot about the people who are *proud* of their extreme views. I guess that’s less about trolling and more about what kind of free speech you’re willing to allow on your platform.

  5. A good idea but by no means a new one. Philip Greenspun describes doing this back in the 90s –

    http://philip.greenspun.com/panda/case-studies.html (scroll down to find case 4: apparently they didn’t have internal anchors in the 90s 😉)

  6. Ah, thanks for that, Daniel. I should have guessed that Greenspun had done something similar in his time…

    If you’re interested in reading more on the history of these methods of controlling trolls, I also suggest having a look at this MetaFilter discussion on the topic.

    It looks like at least one of these methods originated on the Citadel BBS system in the mid-1980s. Amazing, really.