Facebook Content Deletion: Anti-Spam or Censorship?

Facebook’s policy on censorship is back in the spotlight this month after its removal of an image containing anti-Obama sentiments from the Special Operations Speaks PAC’s (SOS) Facebook page. The image, created in the style of a meme, refers to the Navy SEALs’ killing of Osama bin Laden under President Barack Obama’s orders. It then alleges that Obama denied the SEALs any backup when they were being overwhelmed by forces in Benghazi. Four Americans were killed in the attack on the consulate: US Ambassador to Libya Chris Stevens; two security contractors, former Navy SEALs Tyrone Woods and Glen Doherty, who were both working for the CIA; and State Department information officer Sean Smith. During the current election campaign, the Obama administration has faced questions regarding its lack of awareness concerning embassy security.

Larry Ward, president of Political Media, Inc. (the media company that handles SOS postings) and administrator of the SOS Facebook page, posted the image himself. The image received approximately 30,000 shares and 24,000 likes within 24 hours of being posted, before Facebook deleted it. According to Ward, Facebook sent him a message telling him that he had violated its Statement of Rights and Responsibilities. He copied the message, pasted it onto the original image, and re-posted the result to Facebook, adding a link to Facebook’s ‘feedback comment’ inbox so page visitors could complain about the censorship. After several hours, that image was also removed, and the SOS account was suspended for twenty-four hours.

In a press release from SOS, Larry Bailey claims that the meme was removed because Obama supporters were offended and worried about the power of its influence so close to the presidential election. He believes Facebook’s staff has a liberal bias and that staff members deleted both images in an attempt to “quietly squelch opposition to what is a clear leadership failure that resulted in the tragic deaths of some of our nation’s heroes.” For its part, Facebook denies this, says both removals were mistakes, and has apologized and promised not to remove the image again. SOS has since reposted the image on its page, where it remains.

This is not the first time that Facebook has seemingly censored users for unclear or inconsistent reasons. One of the best-known examples came in June 2011, when Facebook temporarily disabled film critic Roger Ebert’s Facebook page for apparent “abusive content.” The page contained images related to the development of Ebert’s prosthetic jaw, though they were not medically graphic. After Facebook investigated the removal, it emerged that a number of Facebook users had flagged his page as “abusive” after taking exception to an unrelated tweet Ebert sent regarding the death of Jackass cast member Ryan Dunn.
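The mass-flagging pathway at work in Ebert’s case is easy to caricature. Below is a minimal, hypothetical Python sketch assuming a simple count-of-distinct-reporters threshold; the threshold, names, and data structures are invented for illustration and do not describe Facebook’s actual pipeline, which is not public.

    from collections import defaultdict

    # Hypothetical threshold: a real platform would weight reports by
    # reporter reputation, content type, and history, not a bare count.
    REPORT_THRESHOLD = 50

    # Maps each post to the set of distinct users who have flagged it.
    reports = defaultdict(set)

    def hide_pending_review(post_id):
        # A real pipeline would enqueue the post for human review
        # rather than deleting it outright.
        print(f"Post {post_id} hidden pending review")

    def flag_as_abusive(post_id, reporter_id):
        """Record a report; hide the post once enough distinct users flag it."""
        reports[post_id].add(reporter_id)
        if len(reports[post_id]) >= REPORT_THRESHOLD:
            hide_pending_review(post_id)

The weakness of any bare threshold is plain: a large or coordinated group, like the users who flagged Ebert’s page, can push content over it whether or not the content actually violates policy.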

Facebook’s anti-spam measures can be used to block content when a group of Facebook users intentionally reports it as abusive. However, Facebook also has automatic filters that can erroneously remove content. When Robert Scoble attempted to comment on Max Woolf’s post about the current blogging scene, he received a message stating: “This comment seems irrelevant or inappropriate and can’t be posted. To avoid having comments blocked, please make sure they contribute to the post in a positive way.” Scoble asked on his Facebook page, “Wow, does Facebook do sentiment analysis on comments and keeps you from posting negative comments?” and pointed out that his comment, reposted on his Google+ page, was inoffensive and simply supported Woolf’s view on the matter.

Facebook informed Scoble that his comment had triggered the anti-spam measures, and many users subsequently posted on his page trying to replicate the error message: some found they were blocked for saying a specific thing, while others saying the same thing were not blocked.

Facebook told TechCrunch that the problem was due to the length of both Scoble’s comment and the thread he was posting on. Long, popular threads attract spammers because of the exposure they offer. Because Woolf’s thread was long, Scoble’s comment included “@” links, and Scoble was not “friends” with Woolf, Facebook’s automatic filters treated his comment with more suspicion. Facebook released an official statement admitting the mistake and promising adjustments to its classifier system.
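Facebook did not disclose how these signals are combined, but its explanation reads like an additive risk score. As a rough illustration only, here is a hypothetical Python sketch; the weights, the blocking threshold, and the function names are all invented and do not reflect Facebook’s real classifier.

    # Purely illustrative spam-risk scoring based on the signals Facebook
    # cited to TechCrunch. Weights and threshold are made up for this post.

    def spam_risk(comment_text, thread_comment_count, author_is_friend):
        score = 0.0
        if thread_comment_count > 100:   # long, popular threads attract spammers
            score += 0.3
        if "@" in comment_text:          # "@" links are a common spam marker
            score += 0.3
        if not author_is_friend:         # strangers draw more suspicion
            score += 0.2
        if len(comment_text) > 500:      # unusually long comments raise the score
            score += 0.2
        return score

    def should_block(comment_text, thread_comment_count, author_is_friend):
        return spam_risk(comment_text, thread_comment_count,
                         author_is_friend) >= 0.7

On this toy model, a comment like Scoble’s, long, containing “@” links, and posted by a non-friend on a long thread, trips every signal at once, which matches Facebook’s account even though the comment itself was benign.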

Scoble, in his response to the Facebook PR team, suggested that the problem lay more with the wording of Facebook’s message, which seemed to pass judgment on the nature of his comment rather than simply stating that the comment appeared to be spam.

Such mistakes are rare – Facebook manages millions of messages – and when users trigger anti-spam filters, they usually deserve it. But in the case of SOS’ Facebook page, a member of Facebook’s review staff – not an automatic spam filter – deleted the post twice. What policies do these reviewers follow? What behaviours are they filtering, and how? Are there controls to ensure that their decisions are politically neutral? Such questions are part of what can only become a more heated debate over the extent and means of speech regulation.

Jean-Loup Richet, Special Herdict Contributor


7 Comments to “Facebook Content Deletion: Anti-Spam or Censorship?”

  1. Netizen Report: Baku Edition – Consent of the Networked:

    […] of Rights and Responsibilities which forbids tagging people without their consent. Herdict.org analyzes the challenges for social networking companies of enforcing anti-spam mechanisms without inflicting […]

  2. phot9397:

    Facebook improperly violates the rights of its users, because it alone decides whether to accept or reject any information.

  3. EMT:

    I find it hard to believe Facebook has the time and resources to seek out content and censor it. It seems much more reasonable that popular posts, or the comments within them, end up getting flagged as spam; Facebook’s software then kicks in and goes to work.

    I think these instances are similar to when Google search gives suggestions that are off-color or somewhat offensive. Google isn’t creating the “auto-suggestions”; they are just results based on previous searches.

  4. сглобяеми къщи:

    Is there any place we can see the original picture? Thanks.

  5. HB:

    Hi,

    I don’t understand how people can do something like that…
    Facebook did a great job with that!

    Btw, great post!

  6. B.:

    I understand the anti-spam thing, but censorship is just unnecessary.

  7. BRI:

    “Such mistakes are rare – Facebook manages millions of messages” – Thanks for the reminder; we tend to focus on the one issue and forget that it’s quite literally one in a million.