
The Longest Now


Soft, distributed review of public spaces: Making Twitter safe
Monday October 27th 2014, 2:56 pm
Filed under: %a la mod, ideonomy, knowledge, popular demand, wikipedia

Successful communities have learned a few things about how to maintain healthy public spaces. We could use a handbook of effective practices for community designers. It is a mark of the youth of interpublic spaces that spaces such as Twitter and Instagram [not to mention niche spaces like Wikipedia, and platforms like WordPress] rarely have architects dedicated to designing and refining this aspect of their structure, toolchains, and workflows.

Some say that ‘overly’ public spaces enable widespread abuse and harassment. But the “publicness” of large digital spaces can make them more welcoming in some ways than physical ones – where it is harder to remove graffiti or eggs from homes or buildings – and niche ones – where clique formation and systemic bias can dominate. For instance, here are a few ‘soft’ (reversible, auditable, post-hoc) tools that let a mixed ecosystem review and maintain its own areas in a broad public space:

Allow participants to change the visibility of comments: let each person control what they see, and promote or flag it for others.

  • Allow blacklists and whitelists, in a way that lets people block out harassers or keywords entirely if they wish. Make it easy to see what has been hidden.
  • Ratings (both average and variance) and tags for abuse or controversy allow for locally flexible display. Some simple models make this hard to game.
  • Allow things to be incrementally hidden from view. Group feedback is more useful when the result is a spectrum; a sketch of such per-viewer filtering follows this list.
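To make the spectrum idea concrete, here is a minimal sketch of per-viewer filtering in Python. All of the names and cutoffs (ViewerPrefs, visibility, the rating thresholds) are illustrative assumptions, not any platform’s actual API:

    from dataclasses import dataclass, field
    from enum import Enum

    class Display(Enum):
        """Incremental visibility: a spectrum, not a binary show/hide."""
        SHOWN = "shown"          # rendered normally
        COLLAPSED = "collapsed"  # folded behind a "show anyway" click
        HIDDEN = "hidden"        # dropped from this viewer's default view

    @dataclass
    class ViewerPrefs:
        """One viewer's controls: blocklist, keyword mutes, rating cutoffs."""
        blocked_authors: set = field(default_factory=set)
        muted_keywords: set = field(default_factory=set)
        hide_below: float = -2.0         # mean rating below this: hidden
        collapse_below: float = -0.5     # mean rating below this: collapsed
        controversy_cutoff: float = 4.0  # high rating variance: collapsed

    def visibility(comment: dict, prefs: ViewerPrefs) -> Display:
        """Decide how a comment appears for this viewer only.

        Nothing is deleted: the comment and its ratings are untouched,
        so every decision is reversible and auditable after the fact.
        """
        if comment["author"] in prefs.blocked_authors:
            return Display.HIDDEN
        text = comment["text"].lower()
        if any(kw in text for kw in prefs.muted_keywords):
            return Display.HIDDEN
        if comment["mean_rating"] < prefs.hide_below:
            return Display.HIDDEN
        if (comment["mean_rating"] < prefs.collapse_below
                or comment["rating_variance"] > prefs.controversy_cutoff):
            return Display.COLLAPSED
        return Display.SHOWN

Because the filter runs at display time against each viewer’s own preferences, hiding something for one person hides nothing for anyone else, and what has been hidden (and why) stays easy to inspect.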

Increase the efficiency ratio of moderation and distribute it: automate review, filter and slow down abuse.

  • Tag contributors by their level of community investment. Many who spam or harass try to cloak themselves in new or fake identities.
  • Maintain automated tools to catch and limit abusive input. There’s a spectrum of response, sketched after this list: from letting only the poster and moderators see the input (cocooning), to tagging and not showing by default (thresholding), to simply tagging as suspect (flagging).
  • Make these and other tags available to the community to use in their own preferences and review tools.
  • For dedicated abuse: hook into penalties that make it more costly for those committed to spoofing the system.
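One way to read the cocooning/thresholding/flagging spectrum is as a triage function over an abuse score, weighted by how invested the contributor is. Again a minimal sketch with invented names and placeholder cutoffs (triage, the score source, and the investment test are all assumptions):

    from enum import Enum

    class Response(Enum):
        """The spectrum of automated responses described above."""
        NONE = "none"            # publish normally
        FLAG = "flag"            # show, but tag as suspect for review
        THRESHOLD = "threshold"  # tag, and hide from the default view
        COCOON = "cocoon"        # visible only to poster and moderators

    def triage(abuse_score: float, account_age_days: int,
               accepted_posts: int) -> Response:
        """Map an abuse score onto the response spectrum.

        abuse_score in [0, 1] would come from whatever filter the site
        runs (keyword rules, a trained classifier, rate limits); the
        cutoffs here are placeholders. Accounts with little community
        investment face stricter cutoffs, since spammers and harassers
        often cloak themselves in fresh identities.
        """
        invested = account_age_days > 30 and accepted_posts > 10
        scale = 1.0 if invested else 0.5  # halve every cutoff for new accounts
        if abuse_score > 0.9 * scale:
            return Response.COCOON
        if abuse_score > 0.6 * scale:
            return Response.THRESHOLD
        if abuse_score > 0.3 * scale:
            return Response.FLAG
        return Response.NONE

Penalties for dedicated abusers would hang off the COCOON branch; the point of scaling by investment is that spoofing the system then requires building, and risking, a real contribution history.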

You can’t make everyone safe all of the time, but you can dial down behavior that is socially unwelcome (to any significant subgroup) by a couple of orders of magnitude.  Of course these ideas are simple and only go so far.  For instance, in a society at civil war, where each half is literally threatened by the sober political and practical discussions of the other, public speech may simply not be safe.








