UK Filtering Proposal: An Analysis

In October of this year, David Cameron proposed a censorship scheme that would make it harder to access pornography in the UK. Under the plan, the UK’s four biggest ISPs — BT, TalkTalk, Virgin, and Sky — would automatically block access to pornographic sites for anyone with an existing broadband contract unless they opted out. Ostensibly, users would be given a choice, but many would be reluctant to exercise it out of embarrassment or fear of recrimination.

The proposal paves the way for increased governmental regulation of the Internet in the UK. Although Cameron suggested censorship would apply only to pornographic sites (and supposedly only to counter the increasing sexualisation of children), the reality is that there is no reliable way to filter out only pornographic sites. The result would be over-inclusive blocking: many legitimate sites would be blocked simply for containing related terms. The sites affected would include those that discuss sexuality, sexual health, and safe sex — topics that are essential to education, public safety, and public health.
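The over-blocking problem is easy to demonstrate. Below is a minimal sketch of a keyword filter — the blocklist and example pages are hypothetical, and real ISP filters are more elaborate, but they share the same failure mode: a sexual-health page triggers the filter just as surely as pornography does.

```python
# Naive keyword blocklist; the terms and example pages are illustrative,
# not any ISP's actual rules.
BLOCKLIST = {"porn", "sex", "adult"}

def is_blocked(page_text: str) -> bool:
    """Block a page if any word contains a blocklisted term."""
    words = page_text.lower().split()
    return any(term in word for word in words for term in BLOCKLIST)

# A sexual-health information page is blocked alongside actual pornography:
print(is_blocked("Advice on safe sex and sexual health for teenagers"))  # True
print(is_blocked("Watch free porn videos here"))                         # True
print(is_blocked("Recipes for winter soups"))                            # False
```

Narrowing the blocklist simply flips the failure the other way: pornographic sites that avoid the listed terms slip through, which is why purely lexical filtering cannot be made reliable.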

Glenys Roberts, writing in the UK paper the Daily Mail, heartily agreed with the proposal in an article entitled ‘We need protection from more than just porn on the internet.’ She argued that adults, not just children, need protection from ‘sudden unexpected exposure to internet porn’ because ‘there is nothing quite so upsetting as coming upon pornographic images you do not want to see’. In fact, the article went even further, suggesting that all vaguely ‘upsetting’ imagery should be excised from the Internet, including ‘people in unnatural positions doing unimaginable things to each other,’ ‘beautiful girls in come-hither postures,’ ‘laboratory animals undergoing horrific experiments,’ ‘images posted by the anti-fur brigade or those against the production of foie gras or cruelty to dogs in China,’ and even ‘anything supernatural designed to terrify me out of my wits’.

On the contrary, I would suggest that for most people there are far more upsetting things than coming across the occasional porn site. In fact, even imagery that is intentionally upsetting serves an important speech function. Certain upsetting images draw our attention to human and animal abuses all over the world, helping to rally support for ending these abuses. And these supposedly ‘terrifying’ supernatural images and tales are simply part of all the weird and wonderful ways in which human beings choose to express themselves. I wonder if Glenys Roberts would also prefer to have all of the ‘upsetting’ imagery taken out of news reports? Would she prefer we never had the opportunity to see anything at all which remotely threatens the stultifying status quo?

The most dangerous part of Cameron’s proposal, of course, is that it makes censorship the default position. Once blocking is the default, many other sites could be covertly and automatically censored, without prompting the furor that overt censorship unrelated to child protection would provoke.

Cameron’s proposal is unlikely to be implemented as is. Recent reports indicate that the four biggest ISPs in the UK have refused to agree to it. While they will give new subscribers the option to block all pornography, existing contracts will remain unchanged, and blocking will be an active choice rather than a default position, requiring new subscribers to opt into censorship. Few people will even face that choice: only 5% of broadband customers ever change providers, so very few enter into new contracts.

It seems as though Cameron’s attempt to regulate UK Internet access has been thwarted. Those who were worried about their children’s access to pornography will simply have to use parental controls, as they always have done. Glenys Roberts and all those who are ‘upset’ by the strange things on the Internet will simply have to restrain themselves from searching for them. However, with leaks from December’s World Conference on International Telecommunications suggesting that many world leaders view freedom of expression on the Internet as a problem, and with the US participating in the dubious and non-transparent Trans-Pacific Partnership negotiations, that freedom seems to be in imminent danger.

With David Cameron having reportedly tried to stop rioters from communicating via social media during the London riots of 2011 (a plan that, once it became public, was compared to the actions of Egyptian President Hosni Mubarak), it is clear that Cameron sees the freedom of the Internet, and social media in particular, as a threat. Although the UK sided with the US in refusing to sign the revised International Telecommunication Regulations (ITRs) proposed at the WCIT last week, this only indicates that the UK government understands that such a move would be extremely unpopular. It also makes abundantly clear that as long as the opposition remains strong, censorship can be fought and the freedom of the Internet defended.

Jean-Loup Richet, Special Herdict Contributor

Does WCIT-12 Represent a Real Threat to Internet Freedom?

With the World Conference on International Telecommunications (WCIT-12) being held from 3 to 14 December, some people are raising concerns over how much control governments will gain over Internet censorship. The International Telecommunication Union (ITU), the United Nations agency convening the conference in Dubai, administers the International Telecommunication Regulations (ITRs), and the aim of the conference is to review and revise those regulations.

The ITU summarizes the ITRs (signed by 178 countries) as setting out general principles that facilitate the “free flow of information around the world, promoting affordable and equitable access for all and laying the foundation for ongoing innovation and market growth.” The ITRs “serve as the binding global treaty outlining the principles which govern the way international voice, data and video traffic is handled, and which lay the foundation for ongoing innovation and market growth.”

The ITRs were last negotiated in 1988, and according to the International Telecommunication Union, “there is broad consensus that the text now needs to be updated to reflect the dramatically different information and communication technology (ICT) landscape of the 21st century.” Considering how different that landscape now is, it’s clear that this update is long overdue. The problem is that many people view any attempt to change or regulate the Internet as a threat. It has even been suggested that WCIT-12 may herald “the end of the Internet” itself, or that the days of communicative freedom we have enjoyed are over.

The ITU has emphasized the importance of communication as a human right, but that has not stopped Google and others from raising the alarm about the potential outcome of the conference. In its background brief for the meeting, the ITU quotes the International Covenant on Civil and Political Rights, adopted by the UN General Assembly in 1966 to give binding legal force to principles of the Universal Declaration of Human Rights. The covenant declares that “a free, uncensored and unhindered press or other media is essential in any society to ensure freedom of opinion and expression,” and that “the public also has a corresponding right to receive media output.” However, the ITU also notes potential restrictions on these rights: “In its Article 19, the treaty also makes clear that restrictions on communication can only be imposed according to law and if they are necessary in order to ‘respect the rights or reputations of others,’ or to protect national security, public order, or public health or morals.”

These potential restrictions are what led Google to issue a call to arms, alerting Internet users to the stakes of the meeting and inviting them to take action to keep the Internet “free and open.” Google has called WCIT-12 a “closed-door meeting” that some governments want to use “to increase censorship and regulate the Internet.”

Google understands the risks of government censorship. In its semi-annual Transparency Report, released last month, Google revealed that requests from governments worldwide to remove search results and other content spiked more than 70% in the first half of 2012, with 1,791 requests to remove 17,746 pieces of content through June alone. The United States was second on the list, with 273 requests, up from 187 during the last six months of 2011. Only Turkey had more: “just over 500 requests in the first half of 2012 to remove content from the internet, a 45% rise from the previous six months.” Google also reported a 15% increase in government surveillance requests for user data.

Information Week has fanned the flames of fear, suggesting that an expanded ITU mandate alongside revisions to the ITRs could undermine communication as a human right: if the UN extends the ITU’s purview to “include ISPs and the Internet-based exchange of information in general,” Larry Seltzer (BYTE Editorial Director) posits that it “would allow foreign government-owned Internet providers to charge extra for international traffic and allow for more price controls.” In other words, users may be forced onto more restricted domestic networks if they cannot afford access to more open international platforms.

Not everyone, however, thinks the UN poses a serious threat to Internet freedom. At TechCrunch, Frederic Lardinois argues that the perceived threat stems mostly from misunderstanding of the nature of the ITRs, and that the conference will likely focus on issues such as taxation, interoperability, and how to provide broadband access in developing countries.

At Lawfare, national security expert Jack Goldsmith points out that the conference is unlikely to result in any major changes to the Internet and that it is more important for what it represents. After all, changes to the ITRs will be made only by consensus, meaning that every nation has the power of veto. Considering how differently nations such as the US and China regard the issue of Internet censorship, it’s very unlikely that they will be able to come to a consensus. Moreover, the ITU itself has no power to enforce the ITRs, meaning that they do not involve a loss of sovereign rights to the ITU or any other UN body.

These weak rules, however, do not render the conference completely useless. According to Goldsmith, “the ITRs might enhance domestic regulatory power in those nations by providing political or legal cover or support for such regulation.” It is not the ITU itself that we have to fear, but the national governments likely to use WCIT-12 as cover for their own censorious aims.

Whether or not Google is right in its assertion remains to be seen; whatever the outcome of the conference, it is clear that interference with Internet freedom will not be taken lightly.

Jean-Loup Richet, Special Herdict Contributor


Facebook Content Deletion: Anti-Spam or Censorship?

Facebook’s policy on censorship is back in the spotlight this month after its removal of an image containing anti-Obama sentiments from the Special Operations Speaks PAC’s (SOS) Facebook page.  The image, created in the style of a meme, refers to the Navy SEALs’ killing of Osama bin Laden under President Barack Obama’s orders.  It then alleges that Obama denied the SEALs any backup when they were being overwhelmed by attackers in Benghazi.  Four Americans were killed in the attack on the Consulate, including the US Ambassador to Libya, Chris Stevens; two security contractors (former Navy SEALs Tyrone Woods and Glen Doherty), both working for the CIA; and a fourth Embassy staff member.  During the current election campaign, the Obama administration has faced questions about its handling of Embassy security.

Larry Ward, the president of Political Media, Inc. (the media company that handles SOS postings) and the administrator of the SOS Facebook page, posted the image himself.  The image received approximately 30,000 shares and 24,000 likes within 24 hours, before Facebook deleted it.  According to Ward, Facebook sent him a message telling him that he had violated Facebook’s Statement of Rights and Responsibilities.  He pasted that message onto the original image, re-posted it to Facebook, and added a link to Facebook’s ‘feedback comment’ inbox so that page visitors could complain about the censorship.  After several hours, that image was also removed and the SOS account was suspended for twenty-four hours.

In a press release from SOS, Larry Bailey claims that the meme was removed because Obama supporters were offended and worried about its influence so close to the Presidential election.  He believes that Facebook’s staff has a liberal bias and that staff members deleted both images in an attempt to “quietly squelch opposition to what is a clear leadership failure that resulted in the tragic deaths of some of our nation’s heroes.” For its part, Facebook denies this, says both removals were mistakes, and has apologized and promised not to remove the image again. SOS has since reposted the image on its page, where it remains.

This is not the first time that Facebook has seemingly censored users for unclear or inconsistent reasons.  One of the most famous examples came in June 2011, when Facebook temporarily disabled film critic Roger Ebert’s page for apparent “abusive content.”  The page contained images related to the development of Ebert’s prosthetic jaw, though they were not medically graphic.  After Facebook investigated the removal, it emerged that a number of users had flagged his page as “abusive” after taking exception to an unrelated tweet Ebert sent regarding the death of Jackass cast member Ryan Dunn.

Facebook’s anti-spam measures can be used to block content when a group of Facebook users intentionally report content as abusive.  However, Facebook also has automatic filters that can erroneously remove content. When Robert Scoble attempted to comment on Max Woolfe’s post about the current blogging scene, he received a message stating: “This comment seems irrelevant or inappropriate and can’t be posted. To avoid having comments blocked, please make sure they contribute to the post in a positive way.”  Scoble asked on his Facebook page, “Wow, does Facebook do sentiment analysis on comments and keeps you from posting negative comments?” and pointed out that his comment, reposted on his Google+ page, was inoffensive and just supported Woolfe’s view on the matter.

Facebook informed Scoble that his comment had triggered its anti-spam measures, and many users subsequently posted on his page trying to replicate the error message: some were blocked for posting a particular phrase while others posting the same phrase were not.

Facebook told TechCrunch that the problem was due to the length of both Scoble’s comment and the thread he was posting on. Long, popular threads attract spammers because of the exposure they offer. Because Woolfe’s thread was long, Scoble’s comment included “@” links, and Scoble is not “friends” with Woolfe, Facebook’s automatic filters were more suspicious of his comment.  Facebook released an official statement admitting the mistake and promising adjustments to its classifier system.
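Facebook has not published its classifier, but the signals it described to TechCrunch can be illustrated with a toy scoring heuristic. Everything below is hypothetical — the weights, threshold, and function names are invented for illustration and are not Facebook’s actual system:

```python
# Toy spam heuristic; all weights and the threshold are hypothetical.
def spam_score(comment: str, thread_length: int, mentions: int,
               author_is_friend: bool) -> float:
    """Return a suspicion score; higher means more spam-like."""
    score = 0.0
    if len(comment) > 500:    # unusually long comments look promotional
        score += 1.0
    if thread_length > 100:   # long, popular threads attract spammers
        score += 1.0
    score += 0.5 * mentions   # "@" links are a classic spam signal
    if not author_is_friend:  # strangers get less benefit of the doubt
        score += 1.0
    return score

BLOCK_THRESHOLD = 2.0  # hypothetical cut-off

# Scoble's situation: a long comment with "@" links, on a long thread,
# from someone who was not "friends" with the poster.
print(spam_score("a" * 600, thread_length=150, mentions=2,
                 author_is_friend=False) >= BLOCK_THRESHOLD)  # True
```

Each signal is innocuous on its own; it is their combination that pushes an ordinary comment over the line — exactly the kind of false positive Facebook’s statement acknowledged.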

Scoble, in his response to the Facebook PR team, argued that the real problem lay in the wording of Facebook’s message, which seemed to pass judgment on the content of his comment rather than simply stating that it appeared to be spam.

Such mistakes are rare (Facebook handles millions of messages), and when users trigger anti-spam filters, they usually deserve it. But in the case of the SOS Facebook page, a human moderator, not an automatic spam filter, deleted the post twice. What policies are these moderators following? What behaviours are they filtering, and how? Are there controls to ensure that their decisions are politically neutral?  Such questions are part of what can only become a more heated debate over the extent and means of speech regulation.

Jean-Loup Richet, Special Herdict Contributor
