The fourth day of the fourth month of the year can be written “4/04.” When you visit a website that doesn’t load, you may see the error code 404. So it is appropriate that this day should be an opportunity to reflect on all of those pages that people can’t reach due to censorship and blocking, and we applaud EFF’s 404 Day: A Day of Action Against Censorship in Libraries.
Often when we think of censorship, we think of state-directed censorship, of the kind that we’ve seen in Turkey over the past couple of weeks. But not all censorship is on such a grand scale. In the United States, public libraries and schools often add filtering software to their public computers. At its best, this software keeps lewd and harmful content off shared computer resources. But all too often it also blocks important educational content about bullying, sexual health, and sexual orientation. And for many people, libraries’ and schools’ computing resources are their only way to access the Internet.
404 Day aims to raise awareness about these important issues.
One way to contribute to 404 Day is to report to Herdict sites that are blocked in your public schools or libraries. Herdict is a platform for aggregating crowdsourced reports about website inaccessibility, and the tool can be used to aggregate reports from these public locations. You can help report sites in several ways. For example, while using a school or library computer you can:
1) Report a specific site that is blocked using the field on the Herdict homepage.
2) Test some educational sites about bullying, sexual health, and sexual orientation, and report them using this list.
3) Test some of the other sites in the Herdict reporter.
If you do file a report, be sure to mark it as from a school or library by clicking on “add details to report,” as shown below.
This week, Microsoft’s Bing search engine was the subject of accusations that it was exporting Chinese censorship to the rest of the world. A few days later, it still isn’t entirely clear what is happening with Bing. GreatFire.org (a Herdict partner) continues to assert that Bing is censoring Chinese-language search results even for users outside of China, while Microsoft asserts that differing search results are due to the different algorithms applied to Chinese- and English-language searches, not censorship. But what this whole kerfuffle demonstrates is how much harder it is getting to identify censorship: the more companies like Google and Microsoft use opaque, proprietary algorithms to reshape the web they present to us, the less we understand about the results we see.
GreatFire’s blog post provides a good analysis of what they believe is occurring. Putting aside the causes, GreatFire has observed the following behaviors:
- Both international and Chinese versions of Bing are now displaying a partial or generic censorship notice for certain searches when viewed in the United States.
- Searching for FreeWeibo on Chinese Bing in both China and the US triggers a censorship notice.
- Searching for FreeWeibo on Chinese Bing in both China and the US returns the FreeWeibo Facebook page but not a link to FreeWeibo itself.
- Searches for identical terms on Chinese language Google return more foreign media links than searches on Chinese language Bing, which returns more state media links.
GreatFire suggests all of these behaviors are evidence of Microsoft extending the censorship for Chinese results in China to Chinese results regardless of a Bing user’s location. Microsoft offers a more benign explanation:
The reason results are different for Chinese and English queries however, is because searches in different languages are fundamentally different queries. A result may show lower in one language versus another for a variety of reasons, such as fewer users choosing that link in English results compared to users who searched in another language.
Microsoft’s algorithm is a black box (as is that of every search engine), meaning it is nearly impossible for us to determine the truth. There was a time when search results were fairly comparable; I could compare two sets of results and infer censorship if links were missing from one and not the other. But as search results are increasingly customized by geography, language, and user, this is becoming harder and harder to do. I can’t even compare search results with my officemate five feet away, let alone with someone halfway around the world. Thus we are left guessing at whether these discrepancies in Bing’s results are censorship or legitimate differences in methodology.
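The old-style comparison described above amounts to a simple set difference. Here is a minimal sketch of that idea, using entirely made-up result lists (these are not real Bing or Herdict data, just placeholders for illustration):

```python
# Hypothetical result sets for the same query from two vantage points.
results_outside = {"freeweibo.com", "facebook.com/freeweibo", "en.wikipedia.org"}
results_inside = {"facebook.com/freeweibo", "en.wikipedia.org"}

# Links present in one list but missing from the other are candidates
# for filtering -- the inference only works when results are comparable.
missing_inside = sorted(results_outside - results_inside)
print(missing_inside)  # ['freeweibo.com']
```

Personalization breaks this method precisely because the two sets are no longer expected to match, so a difference proves nothing on its own.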
One frightening possibility is that the algorithms Microsoft has created could unintentionally reinforce Chinese censorship. Search engines strive to show users the most relevant and helpful results. Showing links to sites that users can’t access is, for most people, neither relevant nor helpful. If most Chinese-language Bing users are in China, the Chinese-language algorithm might begin to disfavor search results for sites that are blocked in China. And if the algorithm is very sensitive, it may eventually stop showing links to censored sites altogether. In theory this could happen without any active or intentional removal of these sites on the part of Microsoft employees; Bing’s artificial intelligence could simply “learn” that almost no one using Chinese-language Bing is clicking the links that GreatFire believes are censored.
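To make the feedback loop concrete, here is a toy simulation of a click-driven ranker. Nothing here reflects Bing’s actual system; the site names, scores, and learning rate are invented for illustration. The point is only that a ranker rewarding click-through can drive a never-clicked (because unreachable) site toward zero with no one deciding to remove it:

```python
# Toy model: each site's score is nudged toward its observed click rate.
# A site blocked for most of the user base is never clicked, so its
# score decays round after round, even though no human delisted it.

LEARNING_RATE = 0.5  # arbitrary; controls how fast scores adapt

def update(scores, clicked):
    """Move each score a step toward 1.0 if clicked this round, else toward 0.0."""
    for site in scores:
        observed = 1.0 if site in clicked else 0.0
        scores[site] += LEARNING_RATE * (observed - scores[site])

scores = {"blocked-site.example": 1.0, "reachable-site.example": 1.0}

# Simulate ten rounds in which users (mostly behind the block) can only
# ever click the reachable site.
for _ in range(10):
    update(scores, clicked={"reachable-site.example"})

print(scores)  # the blocked site's score has decayed to nearly zero
```

Under these assumptions the blocked site’s score halves every round, so after ten rounds it is below 0.001 while the reachable site stays at 1.0; the censorship is “learned,” not ordered.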
Thus the problem isn’t just that it is getting harder to track this kind of censorship (although it is), but that it is getting harder to define censorship. Is it censorship if Microsoft isn’t showing links in Chinese-language Bing that most of its Chinese-language users can’t even access? Is it censorship if this is the result of an AI learning algorithm that is trying to serve up the results that would be most relevant?
It may turn out that GreatFire is correct and this is active censorship. And if that is the case then Microsoft will have to explain how their behavior is consistent with their obligations under the Global Network Initiative. But if this is something else, Microsoft (as well as other search engines) should begin to think about how they can be more transparent about their search algorithms. In the absence of transparency, blaming an opaque search algorithm will become an easy excuse for all kinds of behavior that threatens an open Internet.
Last month, the governing Australian Coalition made a bizarre and confusing reversal of its plans to impose Internet censorship on both mobile broadband carriers and home ISPs. The Coalition announced its Online Safety Policy just 72 hours before polls opened for voting in the federal election. The Coalition won the election, but instead of pursuing the policy, it quickly and surprisingly disavowed the plans.
Just before the election, the Coalition announced its plans for a new opt-out internet filter, which would be similar to the filter recently imposed in the UK. Under the plan, mobile internet providers would be forced to filter adult content until users proved their age, while home ISPs would also apply filtering as a default “unless the customer specifies otherwise.” The policy document states that “the Coalition does not support heavy-handed regulation of the internet,” but many, including the Australian Pirate Party’s New South Wales candidate Brendan Molloy, thought the plan was indeed heavy-handed:
Opt-out filtering treats everyone like a child by default, and puts those who choose to opt-out from the Government-chosen list of acceptable websites on a list of deemed ‘undesirables’ that can be later abused. This is a reprehensible policy and we will fight it to the death.
The policy was also unpopular online, with some voters stating they had decided not to vote for the Coalition as a result.
But almost as soon as the policy was announced, the Coalition reversed course and disavowed their own plan. Malcolm Turnbull, party communications spokesperson, released a statement to say that the document was misleading:
The policy which was issued today was poorly worded and incorrectly indicated that the Coalition supported an ‘opt-out’ system of internet filtering for both mobile and fixed-line services. That is not our policy and never has been.
The statement went on to say that their policy was intended only to encourage carriers to make software available for parents, and to encourage parents to take responsibility for their children’s activities online. But this explanation rings hollow. After all, the disavowal came only a few hours after Turnbull himself had promoted the policy on triple j’s Hack, and after Liberal Coalition MP Paul Fletcher had clearly stated that their intentions as a party were “[to] work with the industry to arrive at an arrangement where the default is that there is a filter in the home device, the home network, that is very similar to the filters that are available today.” It seems unlikely that this document was created in error; as Greens senator Scott Ludlam pointed out, it was a “complex policy document” which has “clearly had a lot of work go into it.”
So if the policy wasn’t a mistake, what was it? It has been suggested that introducing the policy a mere 72 hours before the election was an attempt to ‘sneak’ it past the voters. If so, the attempt clearly failed: within hours, a storm of social media comments had drawn voters’ attention to the policy and its flaws. Although that attention didn’t alter the expected outcome of the election, it may well explain the speed of the reversal. A policy that perhaps once would have been able to slip under the radar was caught in the spotlight of social media and defeated.
It remains to be seen whether the policy will re-emerge in future, but for now the Australian people are free to install internet filtering software as they please, and protect their children as they feel necessary.
Special Herdict Contributor