Social Media Censorship

Recently, Facebook has been accused of actively censoring the accounts of conservative bloggers. As might be expected, Facebook posters from the opposite end of the social and political spectrum have reported liberal censorship as well. Perhaps the problem isn’t a systematic political bias but overzealous enforcement of Facebook’s community standards. Individual interpretation of the proscribed content categories may lead moderators to err on the side of “protecting” users rather than protecting free speech.

Diane Sori, a blogger for Patriot Factor, reports that she has repeatedly been blocked from posting. As an experiment, The Examiner attempted to post some content and was warned to slow down before being blocked for two days. According to the website FacebookCensorship.com, Facebook has actively been censoring conservative content for some time now, while leaving left-wing and liberal content untouched, even when it could reasonably be deemed offensive.

As a counterpoint to the discussion of conservative censorship, Liberal Lamp Post presents examples of censorship of liberal posts, specifically the blocking of links to a liberal guide to Republican talking points and other material, with blocks lasting for 15 days. Commenters on the site note that apolitical content, including posts about animal rescue, the charity Oxfam, and issues outside the US, has also been blocked under the banner of anti-spam enforcement.

In October last year, Facebook came under fire for censoring an anti-Obama meme posted by the account Special Operations Speaks (SOS). While Facebook is known to have an automatic spam detection filter, it also has a staff of human moderators who manually check content for anything deemed offensive or inappropriate. One of these moderators deleted the anti-Obama meme in accordance with Facebook’s policy. Facebook subsequently reversed the decision and apologized for it. Because of these decisions and reversals, many people feel the policies are incomprehensible and/or inconsistently applied. For instance, Facebook has prohibited photos of breastfeeding mothers and of drunk people sleeping with things drawn on their faces, but not crushed heads, excessive blood, or humorously offensive content.
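Facebook’s actual system is not public, but the division of labor described above, an automated first pass plus human judgment on the rest, is a common moderation pattern. A minimal sketch of such a pipeline, with entirely hypothetical rules and names, might look like this:

```python
# Toy two-stage moderation pipeline: an automatic spam filter runs first,
# then a human moderator's judgment is applied to remaining content.
# All rules and names here are hypothetical illustrations, not Facebook's
# (non-public) internals.

SPAM_MARKERS = {"click here", "free money", "miracle cure"}

def automated_filter(post: str) -> bool:
    """Crude first pass: flag posts containing known spam markers."""
    text = post.lower()
    return any(marker in text for marker in SPAM_MARKERS)

def human_review(post: str) -> bool:
    """Stand-in for a moderator's subjective judgment call; this is
    where individual readings of a vague policy creep in."""
    return "offensive" in post.lower()  # placeholder heuristic

def moderate(post: str) -> str:
    # Stage 1: automatic spam detection.
    if automated_filter(post):
        return "removed: spam"
    # Stage 2: a human checks for "offensive" content against policy.
    if human_review(post):
        return "removed: community standards"
    return "published"

print(moderate("Click here for free money!"))  # removed: spam
```

The inconsistencies the post describes live in the second stage: two moderators applying the same vague policy to the same post can easily reach different results.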

In December 2012, Richard Gage, the founder of an organization known as Architects & Engineers for 9/11 Truth, found that his page had been taken down along with the pages of several of his peers. A reporter for the alternative news website Infowars, Darrin McBreen, has also had his page removed, having been told by Facebook that he “should be careful about making political statements” and that “Facebook is about building relationships not a platform for your political viewpoint.”

Facebook supposedly instituted its community standards policy in order “to balance the needs and interests of a global population” and to protect its users from spam, hate speech, and abuse. This is a reasonable position given how quickly the user experience would degenerate if automatic spammers and abusive trolls were allowed to run amok on the network. The problem, of course, is that no organization can be completely neutral and that what constitutes offensive content is always subjective. Attempting to police the content of users who question the truth of 9/11, criticize Barack Obama, or spin Republican talking points certainly seems misguided, even if it is not politically motivated.

Used as a political tool, Facebook could be incredibly powerful. In the 2010 and 2012 elections in the United States, Facebook allowed users to tell their friends when they voted. According to Facebook’s research, this may have increased turnout by as much as 2.2%. But as Harvard University Professor Jonathan Zittrain has pointed out, Facebook could use this power to try to influence elections: what if it showed the voting message only to people it believed belonged to one party? To be clear, Facebook has done no such thing. But the thought experiment demonstrates the risks if Facebook is not even-handed in its content removal policies. Couldn’t skewing the content removed (and the content that remains) influence the political leanings of users in the same way an “I voted” message would?

Going beyond systematic political censorship, is it appropriate for Facebook to impose any censorship filtered through the sensibilities of its moderators? It is difficult for individuals to remain objective about controversial content once they are authorized to judge posts against a vague policy that cannot possibly provide consistent rules for every case. Personal bias is likely to creep into moderators’ interpretation of the already lengthy community standards. Moreover, to keep operating costs low, Facebook moderators are given only half a second to look at each page. As a result, they may miss controversial content or make mistakes when determining whether content is ‘appropriate’.

Facebook could avoid this problem by taking a more hands-off approach to potentially offensive content. While Facebook has chosen to implement a comprehensive policy, outlawing anything it deems violent or threatening, hate speech, bullying, spam, pornography, fraud, content that violates copyright, or content that encourages self-harm, Twitter has chosen a far more liberal policy. Twitter allows almost everything except pornography, copyright infringement, threats, and deliberately misleading impersonation. Twitter does have a policy that allows it to remove content following a government request, but it is not obligated to comply and has already refused several times. Rather than banning a user or deleting “offensive” content, Twitter instead suggests that users simply block those they find offensive. This seems a sensible option: it preserves the so-called offender’s right to free speech and allows each user to make a personal decision about what is and is not acceptable.

Jean-Loup Richet, Special Herdict Contributor

Executive order on cyberthreat information sharing has implications for online speech

After touching on cybersecurity in last month’s State of the Union, President Obama signed an executive order to promote increased information sharing about cyberthreats between government agencies and private corporations. The executive order directs government agencies to produce timely unclassified reports on cyberthreats for Congress and to facilitate the sharing of classified cyberthreat information with private companies that manage critical infrastructure.

While the order describes an expanded mode of information sharing from the government to private companies, it does not explicitly promote increased information sharing from private companies to the government. According to Wired, the order gives a nod to privacy concerns by “referenc[ing] established safeguards, such as the Fair Information Practice Principles” for data that private companies share with the government and calls for an assessment of the civil liberties implications of information-sharing programs. The executive order does not grant any exceptions to existing privacy law for private corporations, meaning that they are no more likely to share information with the government than they were previously. In this sense, the order is more sensitive to privacy and surveillance concerns than the roundly criticized CISPA bill, which was reintroduced in the House of Representatives last week and grants broad exemptions from privacy laws to companies that share cyberthreat data with the government.

The Verge worries that the definitions of “cyberthreat” and “critical infrastructure” as used in the executive order might be too broad. The White House has clarified that cyberthreats include “web site defacement, espionage, theft of intellectual property, denial of service attacks, and destructive malware.” Hence, “last month’s apparent hacking and defacement of MIT’s website in honor of late internet activist Aaron Swartz could be considered a ‘cyber threat’.” However, this seems like a faulty conclusion. The order addresses itself to information sharing about classified cyberthreats to critical infrastructure. MIT’s website hardly qualifies as “critical infrastructure,” which the order specifies as “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” Moreover, the hacking of MIT’s website was probably not a classified matter.

More broadly speaking, although acts of public protest or free speech may fall under categories like “web site defacement” or “denial of service attacks” and hence constitute “cyberthreats,” that alone isn’t enough to instigate information sharing under the executive order; the order is concerned with government sharing of classified information about cyberthreats to critical infrastructure. The information sharing program is voluntary: eligible private companies can opt-in to receive information from the government. It also seems that for now, companies retain discretion on how to act on received information, meaning that the government can’t coerce companies to act in a particular way.

The executive order itself only provides for one-way (from government to corporations) information sharing in the context of critical infrastructure, so the potential for harm to civil liberties is certainly mitigated. On the other hand, increased provision of information from the government to private corporations could itself pressure corporations toward actions they might not otherwise have taken. For instance, government notice of a speech act (like the MIT hack) as a “cyberthreat” might strongly influence a private company to censor or delete content simply because the “cyberthreat” label is so loaded. Furthermore, the order isn’t the end of the road, and it may open the gates to legislation less protective of privacy and free speech. Laws governing information sharing practices are still in flux, and invasive bills like CISPA are still being pushed forward.

Building a More Transparent Web: Twitter and Herdict

On Monday Twitter released their latest transparency report covering the period from July 1, 2012 to December 31, 2012.  Their report shows a slight increase in user information requests, a larger increase in content removal requests, and a slight decrease in copyright notices.

One thing we are proud of at Herdict is that Twitter’s transparency report also includes data from Herdict. The data we contributed is our crowdsourced accessibility data for the sites in Twitter’s queue on Herdict. The data for the five countries with the most inaccessibility reports for these sites is below:

We are thrilled to be able to support Twitter’s transparency efforts.  As Twitter has become a more central element of how we use the Internet (from following celebrities to organizing anti-government protests), it has also become a more frequent target of censorship, filtering, and other actions designed to make it harder to access.  (Of course, sometimes it goes down for innocuous reasons, too.)  It can be hard for a company like Twitter to know if an ISP in any given country is blocking their service.  By using Herdict’s crowdsourced platform to collect data about Twitter’s accessibility, Twitter can better understand where they are being blocked in real time.  And because Herdict data is public, searchable, and sortable, we all benefit from greater information about the extent and nature of various web blockages — for Twitter and the other 26,000+ domains in our database.
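As a rough sketch of what consuming this kind of crowdsourced feed might look like, a monitoring script could tally inaccessibility reports per country. Note that the endpoint URL and JSON field names below are our own illustrative assumptions, not Herdict’s documented API:

```python
# Hypothetical sketch: tallying crowdsourced "inaccessible" reports
# per country. The endpoint URL and JSON fields are assumptions for
# illustration, not Herdict's documented API.
import json
from collections import Counter
from urllib.request import urlopen

REPORTS_URL = "https://www.herdict.org/api/reports?site=twitter.com"  # hypothetical

def inaccessible_by_country(url: str = REPORTS_URL) -> Counter:
    """Count 'inaccessible' reports per country from a JSON feed of
    reports shaped like {"country": "IR", "status": "inaccessible"}."""
    with urlopen(url) as resp:
        reports = json.load(resp)
    return Counter(
        r["country"] for r in reports if r.get("status") == "inaccessible"
    )

# The five countries with the most inaccessibility reports:
# print(inaccessible_by_country().most_common(5))
```

A tally like this is exactly how a service could spot, in near real time, that reports of inaccessibility are spiking in one country while the site remains reachable everywhere else.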

With so many online services becoming critical to our day-to-day lives, transparency about those services is also becoming critical. For example, users of Skype are rightly demanding information about the confidentiality of conversations conducted over Skype. Real-time information about where and when a site or service may be inaccessible is just one piece of the bigger transparency picture, but it is an important piece.

We hope others will find ways to use Herdict data, either for improving transparency or for research.  We have built a variety of tools to make it easier to use Herdict data or to contribute to it.  Organizations can create queues of important sites, and we’ve built (and are continuing to improve) APIs for both filing reports and accessing our real-time data.  And we are always happy to work with others on new ways to use Herdict data or to make it better.
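For instance, a reporting client built on the filing API might look something like the sketch below. The endpoint and payload shape are illustrative assumptions on our part, so consult the actual API documentation before building on them:

```python
# Hypothetical sketch of filing an accessibility report over HTTP.
# The endpoint and payload fields are illustrative assumptions only;
# see Herdict's API documentation for the real interface.
import json
from urllib.request import Request, urlopen

def file_report(site: str, country: str, accessible: bool) -> int:
    """POST a single accessibility report; returns the HTTP status code."""
    payload = json.dumps({
        "site": site,
        "country": country,
        "status": "accessible" if accessible else "inaccessible",
    }).encode("utf-8")
    req = Request(
        "https://www.herdict.org/api/report",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return resp.status

# Example: report that twitter.com appears blocked from Iran.
# file_report("twitter.com", "IR", accessible=False)
```

The value of a simple interface like this is that each individual report is cheap to file, while the aggregate, thousands of reports across countries and ISPs, is what makes blockages visible.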
