Recently, Facebook has been accused of actively censoring the accounts of conservative bloggers. As might be expected, Facebook posters from the opposite end of the social and political spectrum have reported liberal censorship as well. Perhaps the problem isn’t a systematic political bias, but instead overzealous application of censorship defined by Facebook’s community standards. Individual interpretation of proscribed content categories may lead to erring on the side of “protection” of users rather than protection of free speech.
Diane Sori, a blogger for Patriot Factor, reports that she has repeatedly been blocked from posting. As an experiment, The Examiner attempted to post some content, and was warned to slow down before being blocked for two days. According to the website FacebookCensorship.com, Facebook has been actively censoring conservative content for some time now, while leaving left-wing and liberal content untouched, even when it could reasonably be deemed offensive.
As a counterpoint to the discussion of conservative censorship, Liberal Lamp Post presents examples of censorship of liberal posts, specifically the blocking of links to a liberal guide to Republican talking points and other material, with blocks lasting for 15 days. Commenters on the site go on to note that apolitical content, including animal rescue groups, the charity Oxfam, and issues outside the US, has also been blocked under the banner of anti-spamming.
In October last year, Facebook came under fire for censoring an anti-Obama meme posted by the account Special Operations Speaks (SOS). While Facebook is known to have an automatic spam detection filter, it also has a staff of human moderators who manually check content for anything deemed offensive or inappropriate. The deletion of the anti-Obama meme was done by one of these moderators in accordance with Facebook’s policy. Facebook subsequently reversed the decision and apologized for it. Because of these decisions and reversals, many people feel the policies are incomprehensible and/or inconsistently applied. For instance, Facebook has prohibited photos of breastfeeding mothers and of drunk people sleeping with things drawn onto their faces, but not crushed heads, excessive blood, or humorously offensive content.
In December 2012, Richard Gage, the founder of an organization known as Architects & Engineers for 9/11 Truth, found that his page had been taken down along with the pages of several of his peers. A reporter for the alternative news website Infowars, Darrin McBreen, has also had his page removed, having been told by Facebook that he “should be careful about making political statements” and that “Facebook is about building relationships not a platform for your political viewpoint.”
Facebook supposedly instituted the community standard policies in order “to balance the needs and interests of a global population,” and to protect its users from spam, hate speech, and abuse. This is a reasonable position given how quickly the user experience would degenerate if automatic spammers and abusive trolls were allowed to run amok on the network. The problem, of course, is that no organization can be completely neutral and that what constitutes offensive content is always subjective. Attempting to police the content of users who question the truth of 9/11, criticize Barack Obama, or spin Republican talking points certainly seems misguided, even if it is not politically motivated.
Used as a political tool, Facebook could be incredibly powerful. In the 2010 and 2012 elections in the United States, Facebook allowed users to tell their friends when they voted. According to Facebook’s research, this may have increased turnout by as much as 2.2%. But as Harvard University Professor Jonathan Zittrain has pointed out, Facebook could use this power to try to influence elections: what if it showed the voting message only to people it thought belonged to one party? To be clear, Facebook hasn’t done such a thing. However, this thought experiment demonstrates the risks if Facebook is not even-handed in its content removal policies. Couldn’t skewing the content removed (and the content that remains) influence the political leanings of users in the same way an “I voted” message would?
Going beyond systematic political censorship, is it appropriate for Facebook to impose any censorship filtered through the sensibilities of its moderators? It is difficult for individuals to maintain total objectivity in controversial areas once they are authorized to judge posts against a vague policy that simply cannot provide rules for the consistent treatment of every possibility. Personal bias is likely to creep into moderators’ interpretation of the already lengthy community standards. Moreover, in order to keep operating costs low, Facebook moderators are given only half a second to look at each page. As a result, they may miss controversial content or make mistakes when determining whether content is “appropriate.”
Facebook could avoid this problem by taking a more hands-off approach to potentially offensive content. Facebook has chosen to implement a comprehensive policy, outlawing anything it deems violent or threatening, hate speech, bullying, spam, pornography, or fraud, as well as anything that violates copyright or encourages self-harm. Twitter, by contrast, has chosen a far more liberal policy: it allows almost everything except pornography, copyright infringement, threats, and impersonation of someone in a way that is meant to be misleading. Twitter does have a policy that allows it to remove content following a government request, but it is not obligated to comply, and it has already refused to do so several times. Rather than banning a user or deleting “offensive” content, Twitter instead suggests that users simply block accounts they find offensive. This seems a sensible option: it preserves the so-called offender’s right to free speech and allows each user to make a personal decision about what is and is not acceptable.
Jean-Loup Richet, Special Herdict Contributor