
Why Does Beijing Censor?

The article by Gary King, Jennifer Pan and Margaret Roberts about the prevailing modes of Chinese Government censorship offers an interesting example of the unintended effects of using social computing to achieve the ends of commissarial control.

According to King, Pan and Roberts, a multi-level, multi-stage censorship system operates across Chinese cyberspace, run by agents working directly for the Chinese Government or by private entities subject to policy directives and penalties for noncompliance.

Firstly, Chinese cyberspace is “enclosed” by the Great Firewall, which, whilst defeatable by modern VPN systems, works adequately to filter out most content from outside China that the authorities deem undesirable. A corollary of this state of affairs is the exclusion of many internet services and platforms that are dominant in the West, and in their place the proliferation of Chinese alternatives more amenable to State control. Some Western corporate leaders have in fact predicted the eventual decoupling of the global internet into a Chinese-inspired one and a Western-dominated counterweight. Essentially, a Great Internet Schism.

Secondly, filtering tools are embedded in the application layers of the national internet to block specific keywords and trigger-words for dissent or other undesirable content (a toy sketch of such a filter appears below, after this enumeration).

Thirdly, internet service providers and content platform operators (including social media networks) employ censors to manually redact posts, or remove them altogether, if they are deemed incompatible with proper speech behaviour.

Lastly, the Government itself employs an even larger swarm of operatives, many of them low-ranking members of the Communist Party, to sanitise the web. At the time of the article’s publication in 2013, the authors’ estimate of the number of these censors was about 2.5 million, of which about 10% to 12% were government workers. Since the majority of these censors are private employees, it is no wonder that censorship costs could amount to as much as 30% of total running costs for a social media business. The study found that about 13% of all social media posts are censored (lower than an earlier Carnegie Mellon study that reported a figure of 16% for the biggest networks).
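
To make the keyword-filtering layer concrete, here is a minimal sketch in Python of how such a blocklist check might operate. The terms and the matching logic are my own invented placeholders, not a description of any actual system; real filters reportedly also catch homophones, abbreviations and text embedded in images, none of which this toy handles:

```python
# A toy sketch of application-layer keyword filtering, for
# illustration only. The blocklist entries are hypothetical.

BLOCKLIST = {"banned-term-a", "banned-term-b"}  # invented placeholder terms

def is_blocked(post_text: str) -> bool:
    """Return True if the post contains any blocklisted keyword."""
    normalised = post_text.lower()
    return any(term in normalised for term in BLOCKLIST)

print(is_blocked("a post mentioning banned-term-a"))  # True: withheld
print(is_blocked("an innocuous post"))                # False: published
```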

King et al. then proceed to examine the primary logic behind the Chinese censorship regime. Per their assessment of the literature, two broad theories of the Chinese authorities’ motivation stand out: a) that their massive investment in online censorship aims to suppress criticism of the regime and of state policy, and b) that it aims to disrupt organised, unauthorised “collective action”, whether or not such action involves anti-regime sentiment.

King et al. vote emphatically for the latter goal as by far the better supported of the two, according to their detailed analysis of content excised from the 1,382 websites they investigated.

On top of this theoretical stance, the authors believe that analysing the selective emphasis of China’s censors on particular types of content, and their conscious neglect of others, enables a deeper and clearer view into the minds of the authorities regarding their priorities among different citizen expressions of perceived actual and potential sedition.

These are all highly provocative and intriguing claims, and by sheer dint of meticulous inventory work, they offer a very useful starting point for formulating a coherent view of a managed political system’s posture to the speech rights and attitudes of citizens.

I nonetheless have several reservations about the arguments in the article, and in this post I shall detail a few.

Firstly, it is a widely known practice of social media and “massive online content platforms” to “feed” users content: a delivery method whereby the most “popular” and/or “relevant” content appears most prominently in the user’s view or “timeline”.
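
For illustration, a minimal sketch of what such popularity-weighted “feeding” might look like. The scoring formula, weights and field names are invented assumptions, not any platform’s actual algorithm:

```python
# A toy engagement-ranked "feed": order candidate posts by a simple
# popularity score. Weights and recency decay are arbitrary choices.

def feed(posts: list[dict], limit: int = 10) -> list[dict]:
    def score(post: dict) -> float:
        # Shares weighted above likes; dividing by age keeps
        # fresh posts ahead of stale viral ones.
        return (post["likes"] + 2 * post["shares"]) / (1 + post["age_hours"])
    return sorted(posts, key=score, reverse=True)[:limit]

timeline = feed([
    {"id": 1, "likes": 10,  "shares": 1,  "age_hours": 1},   # score 6.0
    {"id": 2, "likes": 500, "shares": 80, "age_hours": 48},  # score ~13.5
])
print([p["id"] for p in timeline])  # [2, 1]: the viral post leads the timeline
```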

King et al. do not discuss the impact of this “algorithmic artefact” at all. Yet the coordination effect of such algorithms is likely to be highly relevant to identifying which speech forms, expressions and posts are most likely to pose a risk to the social order, whether we accept the thesis that disruptive collective action is the top-of-mind focus of the Chinese government or instead treat the authorities’ concerns about criticism of the State and its officials as at least as important as posts associated with collective action.

Algorithmic signals about the spread of particular memes, evolving sentiment, and the subtext of perceived disinformation and misinformation (regardless of any ideological assessment of the “mobilizational potency” of the content being shared) offer human censors a computer-assisted, real-time vista. Such a vista can mirror the “highly efficient” and “military precision” features of the coordinated censorship effort that King et al. attribute to administrative clarity about which particular expressions of dissent the authorities perceive, as a matter of policy, to be dangerous to the stability of the political system.

Such algorithmic artefacts can also simulate the efficiency of consensus-building amongst administrative units at the different levels of the censorship regime that King et al. report. With data analytics, sentiment that is starting to “get out of control” should be trackable as an objective matter, making consensus far more mechanical than the article’s assumption of efficient and sophisticated inter-departmental deliberation implies.
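
A minimal sketch of such an objective “out of control” signal, assuming nothing more than per-interval mention counts for a topic. The window size and spike threshold are invented parameters:

```python
# A toy trend detector: flag a topic when its mention velocity
# jumps past a multiple of its recent baseline. All parameters
# are illustrative assumptions.

from collections import deque

class TrendDetector:
    def __init__(self, window: int = 60, spike_ratio: float = 3.0):
        self.spike_ratio = spike_ratio        # growth factor that trips the flag
        self.counts = deque(maxlen=window)    # recent per-tick mention counts

    def observe(self, mentions_this_tick: int) -> bool:
        """Record one tick's count; return True if it spikes past baseline."""
        baseline = (sum(self.counts) / len(self.counts)) if self.counts else 0.0
        self.counts.append(mentions_this_tick)
        return baseline > 0 and mentions_this_tick > self.spike_ratio * baseline

detector = TrendDetector()
for count in [5, 6, 4, 5, 30]:   # the last tick is a sixfold jump
    if detector.observe(count):
        print("topic flagged for human censors")
```

The point is that a threshold crossing like this is mechanical: no inter-departmental deliberation is needed to agree that a flag has fired.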

My second point is a variation on the first. Algorithmic personalisation of “newsfeeds”, together with the tracking of post popularity and resonance (including endorsement gestures such as hashtags, emojis, “likes” and “shares”, or similar social media behavioural grammar), enables the selection of “dangerous” posts irrespective of whether their specific content is favourable or unfavourable to the regime. The decision would rest less on content and incitement potential and more on a simple calculation of traction, especially where the context concerns an issue the Authorities consider “unapproved for discussion”.
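
A minimal sketch of content-agnostic traction flagging under this hypothesis. The weights, threshold and topic labels are invented for illustration; note that no sentiment analysis appears anywhere, which is precisely the point:

```python
# A toy "traction" flagger: score posts by endorsement volume and
# flag high-reach posts on unapproved topics, regardless of whether
# the post praises or criticises anything. All values are invented.

UNAPPROVED_TOPICS = {"topic-x"}  # hypothetical topic labels

def traction_score(likes: int, shares: int, comments: int) -> float:
    # Shares weighted highest: they propagate the post to new audiences.
    return 1.0 * likes + 3.0 * shares + 2.0 * comments

def should_flag(post: dict, threshold: float = 500.0) -> bool:
    """Flag on reach, not sentiment: endorsement volume is what matters."""
    score = traction_score(post["likes"], post["shares"], post["comments"])
    return score >= threshold and post["topic"] in UNAPPROVED_TOPICS

post = {"topic": "topic-x", "likes": 120, "shares": 90, "comments": 60}
print(should_flag(post))  # True: 120 + 270 + 120 = 510 >= 500
```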

This alternative hypothesis would be compatible both with the theory that the State frowns equally on criticism of its policies and leaders and on expressions with mobilizational power, and with the theory that the State almost exclusively targets posts with incitement potential (i.e. the King et al. theory). The state, on this hypothesis, essentially targets posts that are likely to be seen and endorsed by many people and thus to develop into a narrative that could get out of control. Whether such “seed posts” are critical or supportive of the government matters little in the larger scheme of things; what matters is their tendency to prolong discussion and exacerbate tension, under the pretext of debate, around topics perceived to be unfavourable to the government’s interests or unworthy of social amplification.

That last sentence is a bridge to my final point. Since the King et al. article was written, the dynamic nature of real-time censoring has been in evidence on several occasions in Chinese cyberspace. Banned “trigger words” can be updated in real time to fragment and splinter debate considered unapproved or unworthy, as was the case during the discussion around President Xi’s abolition of term limits. The rather expansive range of targeted sentiments over that period thoroughly discredits the notion that Chinese censors target only speech capable of triggering imminent mobilisation for open dissent. In fact, an increasing tendency to remove entire accounts that generate “bad content” suggests a “sanitisation” rather than a “tempering” approach to online speech censorship.
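
A minimal sketch of a trigger-word list that censors can update in real time as users coin evasive variants. All terms here are invented placeholders, not actual censored phrases:

```python
# A toy hot-updatable blocklist: new evasive spellings observed in
# the wild can be pushed into the filter without restarting anything.

class LiveBlocklist:
    def __init__(self, seed_terms):
        self.terms = set(seed_terms)

    def add_variants(self, new_terms):
        """Push newly observed evasive spellings/homophones into the filter."""
        self.terms.update(new_terms)

    def censor(self, text: str) -> bool:
        return any(term in text.lower() for term in self.terms)

blocklist = LiveBlocklist({"term-limits"})
print(blocklist.censor("a post about term-limits"))   # True
print(blocklist.censor("a post about t3rm l1mits"))   # False: variant not yet listed
blocklist.add_variants({"t3rm l1mits"})               # censors react in real time
print(blocklist.censor("a post about t3rm l1mits"))   # True
```

The cat-and-mouse quality of this loop is what fragments a debate: each round of variants buys discussants a little time before the list catches up.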

The biggest dent in the “collective action only” theory is, however, even more straightforward: the increasing focus on overseas critics whose views are far from likely to serve as fodder for organised dissent. Considering that many overseas critics tend to mock State policy rather than threaten it in any serious way, the suppression of their presence in Chinese cyberspace aligns quite faithfully with a general trend that views lampooning or vulgarisation by artists and other creative types very poorly: not because such expression can spark “collective action”, but because it represents the threat of a slow corrosion of respect for official authority, a major Confucian anathema.

In short, China has always seen the policing of cyberspace as nothing more than an extension of the public order regime in physical spaces. Behaviour in cyberspace requires regulation because there are publicly approved standards of conduct. Online debates, discussions and narratives that are unlikely to promote constructive “cognitive conduct” are just as corrosive to the state’s organising mission as “physical” antisocial conduct like petty corruption or prostitution.

This is, after all, the avowed aim of the Central Propaganda Department, and against it no norm of the sanctity of intellectual privacy can prevail.

 
