
Hidden Code & Warped Logic


Many Americans who have not been paying close attention to the ongoing debate about the interaction between privacy and cryptography would be surprised to learn that the settled law in the US today is that a Judge can order a person to remember that which they have forgotten.

Furthermore, should such a person elect not to obey the order they can be detained indefinitely at the pleasure of the Judge, on the basis that the “keys are in their own pocket”.

But this is precisely where the United States finds itself today following the refusal[1] of the United States Third Circuit[2] to review a District Court’s decision to uphold a contempt order against Francis Rawls[3], a former Policeman charged with possession of child pornography in 2015, for claiming inability to recall the passwords to external drives suspected to contain encrypted pornographic images of minors. With further appeal to the US Supreme Court virtually impossible[4], this “power to compel” opinion has indeed become binding precedent.

Whilst the opinion of the Appeal Court Judges is couched in refined judicial reasoning on various nuanced arguments, such as whether the “right against self-incrimination” no longer applies when it is a “foregone conclusion” that producing the information sought is a mere formality because the Government already knows that which it seeks to find, the end effect remains the same: the Government now has power over memory[5].

The implications of this power penetrate deep into the very notion of “privacy of thought” itself, raising novel questions about the positioning of cryptography in our emerging understanding of risks to that notion.

“Privacy of thought” is not as hazy a concept as it may sound at first. There is a long and rich literature on the First Amendment dimensions of “freedom of conscience”, as well as on derivatives of Fourth and Fourteenth Amendment protections, connected with many of the issues raised by the US Government’s intent to compel or restrain various mental activities, and how it may seek to override the barriers placed in its way by cryptography.

Among these discussions are those that center on the use of “truth serums” of various kinds, from the electronic (like polygraphs) to the chemical (like sodium pentothal). If a government can compel a citizen to “remember a password” because the government believes that they are lying when they say they no longer remember it, then the question of whether a truth serum might be justified is illuminated greatly by looking at the current, legitimised, alternative: indefinite incarceration on grounds of contempt. Some might contend that in such circumstances a truth serum might be the humane option.

Then there is the issue of context. Alan Westin, writing in the Columbia Law Review in 1966, discussed extensively the differential acceptability of “truth serum” type technologies for intelligence and national security purposes versus for law enforcement purposes.[6] In his view, a truth serum, even when its false positive and false negative rates are as high as 25%, may still be warranted in the national security and counter-espionage contexts. More fascinatingly, he left open the question of whether higher effectiveness and a considerable lowering of the error rate and degree of intrusiveness might lower the barrier for the use of such technologies in routine law enforcement. This is an unresolved question even today.

In terms of legal practicality, the starting point is the thinness of the law on mental coercion. There is a tendency in some legal circles to assert the definitiveness of Townsend v. Sain[7], the only US Supreme Court case where the purported use of truth serum chemicals featured strongly in the arguments backing a successful appeal. The truth, however, is that Townsend is primarily a Habeas Corpus case, with somewhat inconclusive extensions to the self-incrimination and related rights more central to the issue of “mental inviolability”, which lies at the heart of the “privacy of thought” doctrine.

At the core of this issue is the growing ubiquity of technologies such as strong encryption and their capacity to expand the domain of thought beyond the strict walls of the mind and more indissolubly bind thought with its reproduction.

When a person can encrypt most of their communications and expressions, the true meaning and intent of those communications become as hard to decipher as the contents of their mind. When the civil power senses this new phenomenon, it experiences the temptation to abandon the pretence of restraint against mental coercion and assert more openly, even if also somewhat obliquely, that which it has never explicitly repudiated.

It is easy to dismiss the Rawls case because we are dealing with “child pornography” here, a through-and-through heinous crime. One might argue that such situations outrage our collective conscience to the point where narrow exceptions to the “normal order” should not be exaggerated as presenting a systemic threat. But this perspective stems from a misunderstanding of the deeper issue.

The elements of crime always turn on intent, on mental orientation, something that Courts – and, to a lesser extent, law enforcers – approach via approximation. The ease and coherence of that approximation is the entire pillar supporting the integrity of the justice system.

Let’s go back to “child pornography”, a form of deviance universally damned without room for equivocation. In 2014, a UK man was convicted for possessing manga and anime illustrations of children in states and poses considered “sexually indecent” by the Court.[8] The man continued to protest his innocence on the basis that the images were “art”. Complicating this situation even further, such images would indeed be considered “artistic” and exempt from criminal proceedings in Japan, where they originate.[9]

The determination, therefore, whether nude, prepubescent cherubs and peasant girls in Renaissance art[10] should be treated differently from sexually expressive Japanese anime characters, as they often are in the West, may hinge on various constructions of mental orientations attempted by a Court in the rarefied setting of a courtroom.

Whether the owner is a decadent artist, a creepy amateur hobbyist, or an “evil” criminal deserving of a custodial sentence to deter others of his ilk is a question that turns on sometimes arbitrary explorations of intent, sentiment and other mental gestures. Verbal testimony is rarely seen as sufficient.

Investigators have thus tended to require unrestrained access to computing devices and the leeway to reconstruct bits of data and information into something approaching a replica of the “mental guilt state” of the individual under scrutiny. Until strong encryption became so accessible, this “forensic mind reading” task was greatly aided by the government’s vast digital inquisitorial resources, at least in comparison with the individual, no matter how skilled or wealthy that individual might be.

Now that such assumptions no longer hold, the United States and many other powerful governments find themselves in a quandary similar to the one confronted by the Catholic Holy Office and its Inquisitions as their hunt for heresy evolved from suppressing clearly identified denominations and creeds to “decoding” gnostic and mystic texts in order to uncover “hidden meanings” that could convict the suspect regardless of their open expressions and confessions of orthodox faith.

In one particularly laughable episode, Marguerite Porete was burned at the stake in the High Middle Ages for refusing to give the Holy Office in Paris the interpretation of her “Mirror of Simple Souls” that the prying inquisitors believed was deferential enough to purge her of the sin of heresy. Many freemasons and others of like ilk suffered similar fates because of this problem of “decoding” the “ciphertext”.

The striking parallel with contemporary times is in the fact that when the quest for truth is expanded beyond law enforcement to the intelligence and national security contexts, the situation is compounded by a higher order problem. Whilst the standards for accuracy may be lowered, à la Westin, the consequences of doing so may be graver, since some inaccuracies can produce disastrous false negatives[11].

Another less nuanced way of making the same point is saying that there is a very high risk of powerful governments and non-state actors using decoys and obfuscations in what has come to be known as “deniable encryption”.[12] A crude obfuscation scheme is one where the passkey is multi-nested such that only a part of the original message, or even a decoy, is disclosed upon naïve decryption. The “decoytext” may nevertheless be meaningful to a recipient with full context of their origination or, alternatively, may be relied upon to reconstruct the intended message.
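
As a purely illustrative sketch of the decoy idea, and not a description of any published deniable encryption scheme, the following Python snippet (using the `cryptography` package’s Fernet primitive) packages a decoy plaintext and a real plaintext under two independent keys; which message is “disclosed” depends entirely on which key the holder chooses, or is compelled, to surrender:

```python
from cryptography.fernet import Fernet, InvalidToken

# Two independent keys: a decoy ("duress") key and the real key.
decoy_key = Fernet.generate_key()
real_key = Fernet.generate_key()

# The container holds two ciphertexts side by side; nothing in the
# container itself indicates which one is the "real" message.
container = [
    Fernet(decoy_key).encrypt(b"grocery list: eggs, milk, bread"),
    Fernet(real_key).encrypt(b"the actual sensitive message"),
]

def reveal(key: bytes) -> bytes:
    """Return whichever plaintext the supplied key happens to open."""
    f = Fernet(key)
    for blob in container:
        try:
            return f.decrypt(blob)
        except InvalidToken:
            continue
    raise ValueError("this key opens none of the ciphertexts")

print(reveal(decoy_key))  # b'grocery list: eggs, milk, bread'
```

A compelled disclosure of the decoy key therefore yields a perfectly well-formed “plaintext” that says nothing about whether a second, undisclosed message exists.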

When it is thus no longer possible to assert with high conviction, for instance in a court setting, that a decrypted “plaintext” is indeed the original text that was encrypted, we find that we are back to where we started: the truth resides only in the minds of its owners.

What happens when such technologies become as widespread among the civilian population as naïve, strong, encryption is becoming today? How would courts determine that the truth has not been forthcoming when the issue is no longer as simple as “frustrating the court’s search warrant by refusing to remember a password”? How would contempt powers help when what is at stake is: “what is truth”? When the “truth serum” alternative yields discredited outputs?

The fascinating thing is that this is a non-problem. The court system has for millennia resigned itself to its modest role of adjudicating among approximations of truth in a world where mental gates cannot be reliably breached. Justice and law enforcement – and dare we add, national security and intelligence gathering – have been reconciled to this reality for eons. And there is nothing perverse about that. We live in a world that strives for equilibrium among competing values and emerging compromises.

What is perverse is the newfound zeal for “total truth”, a height that is unattainable without a totalitarian descent into a regime of cerebral surveillance at will.


[1] Opinion accessible from the worldwide web at: https://cdn.arstechnica.net/wp-content/uploads/2017/03/rawlsopinion.pdf (last retrieved: 2nd November 2018)

[2] One of the country’s thirteen courts of appellate jurisdiction.

[3] See the District Court’s ruling here: https://regmedia.co.uk/2017/08/30/rawls.pdf (retrieved from the www on 2nd November 2018).

[4] The Supreme Court of the United States has complete discretion in the exercise of its appellate authority over lower courts, and in recent years fewer than 2% of review requests have been met.

[5] The clearly inadequate legal counsel and representation received by Rawls is not relevant in this discussion.

[6] Westin, A. (1966). Science, Privacy, and Freedom: Issues and Proposals for the 1970’s. Part II: Balancing the Conflicting Demands of Privacy, Disclosure, and Surveillance. Columbia Law Review, 66(7), 1205-1253. doi:10.2307/1120983

[7] For summaries and links to the text, see: https://caselaw.findlaw.com/us-supreme-court/372/293.html (retrieved from the www on 2nd November, 2018).

[8] See: https://www.gazettelive.co.uk/news/teesside-news/anime-fan-convicted-over-illegal-7958896 (retrieved from the www on 2nd November 2018)

[9] See, for instance, https://www.nytimes.com/2014/06/19/world/asia/japan-bans-possession-of-child-pornography-after-years-of-pressure.html?_r=0 (retrieved from the www on 2nd November 2018).

[10] Such as those triumphantly celebrated in collections such as this one: https://www.royalacademy.org.uk/exhibition/the-renaissance-nude

[11] Consider the revelations that US Intelligence Agencies routinely misread the Al Qaeda threat prior to 9/11 despite copious hints and explicit reports.

[12] For additional information on obfuscation-encryption schemes, see for instance: https://eprint.iacr.org/1996/002.ps, https://eprint.iacr.org/2011/046 and https://eprint.iacr.org/2013/454.pdf.

Averting a Tragedy of the Crypto-Commons


In the annals of cryptography’s transcendence from the obscurity of geekdom to the central place it occupies in today’s privacy and human security discourse, Whitfield Diffie’s intellectual clash in the late 70s with the NSA, which was intent on limiting the spread of strong encryption it could not break, has attained the heights of legend.

But the more dramatic episodes in the crypto epics belong more aptly to the saga of Lucifer, IBM’s first major foray into commercial cryptography and the earnest efforts in the early 1970s by Horst Feistel, who had joined IBM after growing disillusioned at the NSA, to portray Lucifer as something much grander than a mere prop for banking IT security.

It is perhaps also worthy of note that it was at IBM, and within the same rarefied circles spawned by the Watson Center, that Diffie met Martin Hellman, his now equally famous co-conspirator against the 56-bit data encryption standard (DES) that the NSA induced IBM to foist on the emerging world of network computing.

In Feistel’s 1973 paper on cryptography and “computer privacy”, published well before the more celebrated Diffie-Hellman monograph, he not only outlined the vision for 128-bit encryption, something considerably more robust than the NSA-preferred 48-bit ciphers, but also exhibited a prescience about modern-day concerns about individual and, more vitally, group “data bank privacy” that is truly remarkable.

It would seem that Feistel’s understanding of Lucifer’s true possibility was, at least during the early development phase, connected with some of the conclusions he drew in that paper: “it would be surprising if cryptography, the traditional means of ensuring confidentiality in communications, could not provide privacy for a community of databank users.”

Feistel’s strong interest in the cryptographic prospects of networking security and the privacy needs and rights of user groups cuts a direct line to today’s complex transactional and interoperable frameworks, starting with a modified Lucifer’s application to banking ATM configurations and the presaging of future e-commerce platforms by its successors.

In a cloud computing world, some of the faint echoes of Feistel’s work on Lucifer have started to ring louder again. His concerns that networking magnifies the risks of privacy breaches are now standard fare in most analyses of the need for “pervasive cryptography” and “encryption by default”, both positions that he clearly articulated half a century ago.

It is not surprising, then, that the same arguments made by state security and law enforcement authorities in that era, concerning the risk posed by ubiquitous strong encryption to legal surveillance for crime prevention, appear to have been resurrected en masse as debate over the role of cryptology in safeguarding privacy has picked up again.

This is sad because in the intervening period since the days of Lucifer so much has happened to warrant a more nuanced view of the risks addressed through, but also complicated by, ubiquitous encryption.

The crowning glory of the Diffie-Hellman turn in cryptography is undoubtedly the solving of the key distribution problem through the path shown by Public Key Infrastructure (PKI) design. The core logic of PKI has, however, always been driven by a chain or web of unilateral and bilateral functions. One person encrypts, and only they or a second nominated person can decrypt. In solving the problem of how two parties can securely exchange information without fear of interception by an eavesdropping third party, a new risk was introduced: the complete repudiation of third-party rights in the transaction, however legitimate.
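
A minimal sketch of that bilateral logic, assuming the Python `cryptography` package purely for illustration: a message encrypted to a nominated recipient’s public key can be recovered only with that recipient’s private key, and no third party, however legitimate its claim, is contemplated anywhere in the construction.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The nominated recipient generates a keypair; the private half never leaves them.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt to the recipient...
ciphertext = recipient_public.encrypt(b"for the named recipient only", oaep)

# ...but only the recipient's private key can reverse the operation.
assert recipient_private.decrypt(ciphertext, oaep) == b"for the named recipient only"
```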

In a polycentric computing world, there are some important limitations to this model, which go beyond the usual discussions about private key compromise, revocation, complexity, and certificate authority integrity. There are concerns about justified access to medical records in the event of an emergency, parenting obligations towards minors, cryptoviral extortion, and the data considerations involved in executing the digital estate of deceased persons as a function of probate law. When cross-border issues arise in any of these contexts, confusion multiplies. A veritable tragedy of the crypto-commons, a situation where everyone, in pursuing their best privacy interests, enfeebles our collective respect for privacy, may lie just across the horizon.

Yet, not only are such discussions often overshadowed by the narrower concerns and perspectives of enterprise actors, their civil implications have also been hijacked by the focus on governments’, particularly the US Government’s, desire to insert backdoors into encryption or limit the spread of strong encryption.

The US is, of course, far from being the only major country with curious export/import restrictions on encryption products or “forced decryption” rules and regulations; many of the world’s sophisticated governments have similar or even more stringent precepts.

The US however gets a lot of attention because of its completely outsized position in the global app economy. Its disproportionate influence on digital commerce means that until September 2016, many of the world’s startups were in violation of its cumbersome laws requiring advance registration before products containing cryptographic tools could move in and out of the US. That is essentially every app today; and with the major app stores largely controlled by US-owned firms but used for distribution by tech companies around the world, the notions of what constituted an “export” or an “import”, perceptions developed when software was still predominantly sold on disks, were becoming strained to breaking point.

Cloud computing and the digital distribution of software thus interacted with ubiquitous encryption in ways designed to frustrate law enforcers, a situation thrust into the limelight, but also much obscured, by the FBI-Apple iPhone forced unlocking dispute. It is fascinating how such an interesting incident raised so few of the profound issues precipitated by polycentric ubiquitous (PU) encryption.

The real, deep, problem posed by the current model of heavily interoperable digital applications hosted remotely, used collaboratively across multiple domains and geographies, and intimately intertwined with all manner of everyday services that have only recently been digitised, is that a “trust management” problem is still being treated as an “information security” one. Even worse, this dissonance has been fully globalised.

The Wassenaar Arrangement, an international soft law regime promulgated in the Netherlands by 42 states, has a few loose provisions in its infosecurity section attempting to deal with the trans-frontier issues.

It would surprise some commentators to find out that within this club of advanced cyberpowers (although missing China, Israel and emerging cyber powers such as Brazil), the US posture towards strong encryption proliferation is actually dovish, which is saying a lot about state attitudes globally.  More importantly, Wassenaar is far from offering any framework for addressing the broader transnational civil and social complications arising from PU encryption culture.

Attempting to address this complex, multifaceted, problem starts with recognising the “distributed” and “disaggregated” nature of the strong encryption – third party rights conundrum. In many ways, blockchain is a crude attempt to address this puzzle, but more in favour of transparency than of privacy. It is sad that Ralph Merkle’s ideas, co-opted for this agenda, contain at their root much more than has been realised in the crypto-trust space.

The key to unknotting the conundrum, I think, is “civil sortition”, a new institution that enables the formation of trusteeship rings and groups as a form of “trust collateral” in the deployment of strong encryption. This is precisely what “key escrows” are not.

Key escrows began life as a discredited “compromise” pushed by some law enforcement authorities in response to the tech industry’s denouncement of their attempts to push breakable encryption and/or backdoors in ciphers, all in the name of denying criminals the safe haven of totally impenetrable communications; they never really matured into anything technically significant. They have been criticised as being open to abuse because their implementation often requires reliance on legacy social institutions highly susceptible to establishment manipulation.

Civil sortition offers a radical approach whereby a pseudorandom chain of network participants (human and parahuman) receives fragments of a “scrambler/descrambler” keypad. Whilst the real-world identities of the participants are obscured, they are uniquely identified and securely and consistently reachable so long as they remain active in the network. Any member of the network can request decryption of any encrypted message, backed by a pre-programmed escalation matrix. The number of lots required for descrambling or permanent scrambling, the sequencing of lots, and the weighting of lots must all fit into this matrix. For some data types, influence over matrix design is heavily weighted towards the data contributors; for others, less so.
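
The “lots” mechanism sketched above behaves much like threshold secret sharing. As a hedged toy illustration only, and not a specification of civil sortition itself, the following Python snippet splits a key-like secret into seven lots such that any four suffice to recombine it (a Shamir-style construction over a prime field):

```python
import random  # toy randomness; a real scheme would use a CSPRNG

PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the toy secret

def split_secret(secret: int, n_lots: int, threshold: int):
    """Split `secret` into n_lots shares; any `threshold` of them recombine it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [
        (x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
        for x in range(1, n_lots + 1)
    ]

def recombine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

scrambler_key = 123456789
lots = split_secret(scrambler_key, n_lots=7, threshold=4)
assert recombine(random.sample(lots, 4)) == scrambler_key  # any 4 lots suffice
```

Sequencing and weighting of lots, as described in the escalation matrix, would sit on top of a primitive of this kind rather than replace it.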

In this manner, whenever strong third party rights to data arise in any context, the communal groups affected cannot be held hostage to the whims of the nominal controller of such data. The extent to which government agencies genuinely reflect strong and legitimate third party rights to any particular data would then become a matter for network adjudication without allowing for oppressive determination by arbitrary privilege.

As data repositories continue to intermingle, and the demand for stronger forms of privacy protection grows in tandem, only radical new data access regimes can reconcile all legitimate rights, and thus avert any prospective tragedy of the crypto-commons.

“Sovereign Privacy” & Why China’s Social Credit System Won’t Work As Supposed


The Chinese Government’s plans to introduce a “social credit” scheme to rate and rank the behaviour and conduct of its citizens far beyond their financial circumstances (the current focus of Western-created “credit scoring” systems) have predictably rattled observers.

One journalist summed up the situation starkly:

“The Communist Party’s plan is for every one of its 1.4 billion citizens to be at the whim of a dystopian social credit system, and it’s on track to be fully operational by the year 2020.”[i]

Many of the discussions have followed similar lines, focusing on the harrowing implications of such an intrusive state-run machine for individual freedoms and the right to privacy.

What has been less investigated is the essential structure of the social algorithms required to achieve the objectives of the Chinese government, and in particular the tensions between technical efficiency and political economy when mass surveillance is devolved to machine power and incorporated into social-behavioural systems in the presence of capitalist incentives.

Few treatments of the notion of “sovereign privacy” give it any respect. Yet, there are framings of the “state secrecy” question that go beyond mere necessity (especially in such contexts as “law enforcement” and “national security”).

LSE’s Andrew Murray provides an interesting angle in his brief 2011 take on transparency:

“All bodies corporate (be they private or public) are in fact organisms made up of thousands, or even tens of thousands, of decision makers; individuals who collectively form the ‘brain’ of the organisation. The problem is that individuals need space to make decisions free from scrutiny, or else they are likely to make a rushed or panicked decision.”[ii]

When viewed as a “hive” of personnel insecurities, biases, errors, stereotypes, ambitions and proclivities, Central Governments emerge out of the monolithic pyramid we tend to envisage atop the panopticon of general surveillance and descend onto a more examinable stage, where their foibles and miscalculations and misdiagnoses can also receive useful attention.

Because the Communist Party’s 90 million members are an integral part of its overall structural integrity, its social management policies rely greatly on their ability to participate and contribute.[iii]

Many of the 20 million people who work in the 49,000-plus state enterprises, especially from middle management upwards, are fully paid-up members of the party. Some estimates put the percentage of the country’s 2 million press and online censors who belong to the party at 90%. Last year, the last barrier between the Party and command at all levels of state paramilitary and security institutions was removed, bringing an even larger number of non-career security commissars into both operational and oversight positions.

Such broad-based participation in the “social management strategy” might at first sight appear to favour the decentralised nature of social credit-based control. The only problem with that view is that the strategists behind the scheme see it in purgatory terms:

“The main problems that exist include: a credit investigation system that covers all of society has not yet been formed, credit records of the members of society are gravely flawed, incentive mechanisms to encourage keeping trust and punishments for breaking trust are incomplete, trust-keeping is insufficiently reward, the costs of breaking trust tend to be low; credit services markets are not developed, service systems are immature, there are no norms for service activities, the credibility of service bodies is insufficient, and the mechanisms to protect the rights and interests of credit information subjects are flawed; the social consciousness of sincerity and credit levels tend to be low, and a social atmosphere in which agreements are honoured and trust are honestly kept has not yet been shaped, especially grave production safety accidents, food and drug security incidents happen from time to time, commercial swindles, production and sales of counterfeit products, tax evasion, fraudulent financial claims, academic impropriety and other such phenomena cannot be stopped in spite of repeated bans, there is still a certain difference between the extent of sincerity in government affairs and judicial credibility, and the expectations of the popular masses.”[iv]

The goal is as much about moral self-policing as it is about social control. Self-policing inevitably induces low-intensity and highly diffuse factionalism and clique politics.

Chinese observers certainly understand the critical factor of power-play in these circumstances, as is obvious from the following passage by PhD student Samantha Hoffman:

“The first is the struggle for power within the Party. The Party members in charge of day-to-day implementation of social management are also responsible to the Party.  As the systems were being enabled in the early 2000s, these agencies had a large amount of relatively unregulated power. The age-old problem of an authoritarian system is that security services require substantial power in order to secure the leadership’s authority. The same resources enabling management of the Party-society relationship can be abused by Party members and used against other within the Party (War on the Rocks, July 18, 2016). This appears to be the case with Zhou Yongkang, Bo Xilai, and others ahead of the 18th Party Congress. The problem will not disappear in a Leninist system, which not subject to external checks and balances. And it is why ensuring loyalty is a major part of the management of the party side of “state security”.[v]

But Hoffman and many like her misconstrue the implications of fragmented trust for social credit based control.

Complex social algorithms over time start to amplify signals that their makers do not fully understand and cannot control in advance. We have seen this many times with even much simpler systems like Facebook, Twitter and Instagram, whose operators have extremely narrow objectives: maximising attention retention to attract advertisers.

In a system designed to compel conformance to ideal criteria and yet dependent on large numbers of participants to shape those criteria, deviance can easily become more prominent when algorithms start to reinforce once-latent patterns. Whether it is preening on Facebook or bullying on Twitter, there is a fundamental logic in all simple systems trying to mould complex behaviours, and this logic tends to accentuate deviancy because algorithms are signal-searching.

This is where the “sovereign privacy” point comes in. A state like China seeks inscrutability. It also seeks harmony of purpose. Social algorithms tend to want to surface hidden patterns and concentrate attention. A time-lag renders algorithm-tweaking for specified ends in advance highly unreliable. Very often, the operator is relying on surfaced trends to manage responses. The danger of rampant “leaking” of intention and officially inadmissible trends rises exponentially as the nodes in the system – financial, political, social, economic, psychological etc – increase. The “transparency” that results from the inadvertent disrobing of the intents of millions of Chinese state actors does not have to be the kind that simply forces the withdrawal of official propaganda positions. It can also be the kind that reveals which steps they are taking to regain control of the social management system.

The problem is somewhat philosophical. Right now, membership in the Communist Party and public conformance with the creed is non-revelatory. Integrating multiple “real behaviour” nodes together to compel “sincerity”, as is the official goal of the program, could immediately endanger the status of tens of millions of until-that-moment perfectly loyal cadres and enforcers of moral loyalty. The proper political economy response, at least in the transition stages, is to flatten the sensitivity of the algorithms. Doing so, however, removes the efficiency which alone makes the algorithms more effective than the current “manual” social conformity management system.

Unfortunately, such efficiency would render redundant large swathes of the current order. Which in turn means that lower levels of the control pyramid have very little incentive to provide complete data. The effect of highly clumpy data exacerbates algorithmic divergence from other aspects of social reality (in the same way that Twitter fuels political partisanship in America as opposed to merely reporting it) and prompts “re-interpretations” of the results churned out by the system. Over time, the system itself begins to need heavily manual policing. The super-elite start to distrust it. Paranoia about the actions of their technocratic underlings grows in tandem, along with dark fears about a “Frankenstein revolt”.

At the core of all of this is the simple reality that any system that can realistically achieve mass deprivation of privacy will threaten sovereign privacy as well, and would thus not be allowed to attain that level of intrusion by the powers that be.

Notes:

[i] “China’s ‘social credit’ system is a real-life ‘Black Mirror’ nightmare”. Megan Palin. 19th September 2018

[ii] Andrew D Murray. 2011. “Transparency, Scrutiny and Responsiveness: Fashioning a Private Space within the Information Society”. The Political Quarterly.

[iii] See: Yanjie Bian, Xiaoling Shu and John R. Logan. 2001. “Communist Party Membership and Regime Dynamics in China.” Social Forces, Vol. 79, No. 3, pp. 805-841.

[iv] “State Council Notice concerning Issuance of the Planning Outline for the Construction of a Social Credit System (2014-2020)”. GF No. (2014)21.

[v] Samantha Hoffman. 2017. “Managing the State: Social Credit, Surveillance and the CCP’s Plan for China”. China Brief, Volume 17, Issue 11.

The Fidelity of Errors: Why Biometrics Fail in African Elections


It is trite knowledge that all biometric systems rely on probabilistic matching, and that every successful biometric authentication of an individual is merely a statement about the likelihood of their being who they are expected to be.

What is more interesting is the determination of the factors that go into establishing what the reasonable margin of error should be when processing such a match, or, more accurately, when estimating these probabilities.

Officially speaking, the error tolerance margin should be determined by empirical testing based on data about the false match rate and the false non-match rate; in essence: the interplay between the “false negatives” and “false positives” record obtained from the field.

The practice in many situations, however, flouts this normative standard, with the effect of leaving calibration determination to the competing forces of commercial logic and political expediency. And nowhere more so than in the context of sophisticated African elections.

To fully illustrate this point, it bears clarifying, at a very high level, some key technical concepts.

Some errors are intrinsic to the probabilistic nature of biometric technology itself, but many others are non-inherent or extrinsic, and these sources of defect include factors that range from the technical to the sociological, such as:

  • Topological instabilities in matching as a result of source object (such as a finger) positioning in relation to the measurement device
  • Poor scanning of biometric attributes
  • Storage and retrieval quality
  • Personnel and manpower inadequacies
  • Environment and ambience (for example, lighting and aural ambience can affect voice and visual biometric authentication in varying ways)

Extrinsic sources of error are typically institutional, and thus literally impossible to eliminate without wholesale interventions in the design and user environment typically considered out of scope when constructing even the most complex biometric-enabled systems.

Intrinsic sources of error are far more complex in their provenance and, not surprisingly, very difficult to adequately explain in a post of this length.

In general though, these errors emanate from the statistical procedures used to reduce ratios, measurements and correlations compiled from the physical imaging or recording of phenotypical (or, in the near future, genotypical) features of designated source objects associated with the target individual or subject of interest. We might refer to this type of error as representing a level of “systematic risk” indispensably present in the use of the underlying concepts of biometry themselves as a means of precise differentiation of sociobiological humans.

The common effect of both types of error is however straightforward: the need for large-scale biometric-enabled programs and systems to anticipate considerable errors in the margin of performance.

The anticipated risk that results from this error-certainty bifurcates into a probability that the machine shall include individuals who should be excluded from authentication, and an alternative probability that the machine shall exclude individuals who should have been included.

At the intersection of these two sets of expectations lies the acceptable threshold, which, of course, moves up or down depending on which of the two main probabilities is of the strongest concern, a determination strictly dependent on institutional context and the stakes of failure.
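
A toy Python sketch of that trade-off, using invented score distributions rather than field data, shows how moving the threshold trades the false match rate (FMR) against the false non-match rate (FNMR):

```python
import numpy as np

def error_rates(genuine_scores, impostor_scores, threshold):
    """FNMR: genuine comparisons rejected; FMR: impostor comparisons accepted."""
    fnmr = np.mean(np.asarray(genuine_scores) < threshold)
    fmr = np.mean(np.asarray(impostor_scores) >= threshold)
    return fmr, fnmr

# Simulated matcher scores standing in for empirical field data.
rng = np.random.default_rng(0)
genuine = rng.normal(0.75, 0.10, 10_000)   # rightful owners of the template
impostor = rng.normal(0.40, 0.10, 10_000)  # everyone else

for t in (0.45, 0.55, 0.65):
    fmr, fnmr = error_rates(genuine, impostor, t)
    print(f"threshold={t:.2f}  FMR={fmr:.4f}  FNMR={fnmr:.4f}")

# Lowering the threshold shrinks FNMR (fewer genuine voters turned away)
# but inflates FMR (more wrongful matches) -- the calibration dilemma above.
```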

In the case of elections, the strongest tension is between, on the one hand, delegitimising results by permitting a certain margin of the very fraud that biometric systems are usually introduced to stem, and, on the other hand, disenfranchising electors whose right to vote and influence the selection of their leaders many constitutions insist should be treated as sacrosanct.

Empirical testing in the user environment prior to the design of the biometric instruments would be the obvious sensible prerequisite to the calibration of the thresholds for rejection and inclusion. Yet, across Africa, it is almost always performed halfheartedly, with the frequent result that basic challenges stemming from occupational (some types of work deface fingerprints faster than others, for example), environmental (higher amounts of dust, sweat, etc.), infrastructural and other factors quickly ricochet into higher-than-expected rates of false negatives.

Because false negatives are more “visible” (they exclude and therefore antagonise), the feedback loop against false non-matches is far stronger than the one against false matches, which in the field can only be surfaced by mystery shopping, a prospect highly infeasible in the normal, politically charged, election scenario.

False positives do show up in the end though, when despite biometric authentication and biometrically sanitised electoral registers, overvoting incidents, notionally impossible in the presence of the technology, are recorded. But this being a delayed effect, and likely only discoverable in the event of a disputed and judicially scrutinised outcome, the incentive to set threshold rates low enough to minimise false non-match rates and, as a corollary, raise the prevalence of false match rates, is much stronger.

Consequently, in the two African countries where the most sophisticated biometric systems have so far been deployed in recent general elections – Ghana and Kenya – an interesting trend was observed. Whereas in previous elections, biometric exclusion incidents were numerous and resulted in serious altercations at polling stations, more recent elections saw very few of these types of incidents. Yet, following the decision of Opposition parties to challenge the elections in the law courts, alarming evidence emerged of outcomes that should theoretically have been barred by the deployment of biometric apparatus. Incidents such as over-voting.

Why is testing so poorly done? Primarily because most of the vendors behind these solutions tend to be foreign systems integrators with minimal exposure to the sociological context of the processes they are expected to model, and also because commercial considerations often militate against a serious examination of system design. Much too often, contracting procedures are shrouded in technical secrecy and undue backroom backscratching, with claims of proprietary standards and jostling for advantage by politically connected rent-seekers.

With cost per voter in the most sophisticated African electoral systems now among the highest worldwide, many observers are beginning to wake up to the urgent need for outcomes-driven reforms of the technical safeguards of universal suffrage on the continent.

Why Does Beijing Censor?


The article by Gary King, Jennifer Pan and Margaret Roberts about the prevailing modes of Chinese Government censorship offers an interesting example of the unintended effects of the use of social computing to achieve the ends of commissarial control.

According to King, Pan and Roberts, a multi-level and multi-stage censorship system is operated across Chinese cyberspace by agents working directly for the Chinese Government or private entities subject to directives on policy, and penalties for noncompliance.

Firstly, Chinese cyberspace is “enclosed” by the Great Firewall, which, whilst defeatable by modern VPN systems, works adequately to filter out most content from outside China that the authorities deem undesirable. A corollary of this state of affairs is the exclusion of many internet services and platforms that are dominant in the West, and in their place the proliferation of Chinese alternatives more amenable to State control. Some Western corporate leaders have in fact predicted the eventual decoupling of the global internet into a Chinese-inspired one and a Western-dominated counterweight. Essentially, a Great Internet Schism.

Secondly, filtering tools are embedded in the application layers of the national internet to block specific keywords and trigger-words for dissent or other undesirable content.

Thirdly, internet service providers and content platform operators (including social media networks) employ censors to manually redact posts or remove them altogether if they are deemed to be incompatible with proper speech behaviour.

Lastly, the Government itself employs an even larger swarm of operatives, many of them low-ranking members of the Communist Party, to sanitise the web. At the time of the article’s publication in 2013, the authors’ estimate of the number of these censors was about 2.5 million, of which about 10% to 12% were government workers. Since the majority of these censors are private employees, it is no wonder that censorship costs could amount to as much as 30% of total running costs for a social media business. The study found that about 13% of all social media posts are censored (lower than an earlier Carnegie Mellon study that reported a figure of 16% for the biggest networks).

King et al then proceed to examine the primary logic behind the Chinese censorship regime. Per their assessment of the literature, two broad theories of the motivation of the Chinese authorities stand out: a) that the massive investment in online censorship aims to suppress criticism of the regime and of state policy, and b) that it aims to disrupt organised, unauthorised “collective action”, whether or not such actions involve anti-regime sentiment.

King et al vote emphatically for the latter goal as by far the better supported of the two, according to their detailed analysis of content excised from the 1,382 websites they investigated.

On top of this theoretical stance, the authors believe that analysing the selective emphasis of China’s censors on particular types of content, and conscious neglect of others, enables a deeper and clearer view into the mind of the authorities regarding their priorities among different citizen-expressions of perceived actual and potential sedition.

These are all highly provocative and intriguing claims, and by sheer dint of meticulous inventory work, they offer a very useful starting point for formulating a coherent view of a managed political system’s posture to the speech rights and attitudes of citizens.

I have several reservations about the arguments in the article, nonetheless, and in this post I shall detail a few.

Firstly, it is a widely known feature of social media and “massive online content platforms” to practise “feeding”, a content delivery method whereby the most “popular” and/or “relevant” content appears most prominently in the view/“timeline” of the user.

King et al do not discuss the impact of this “algorithmic artefact” at all. Yet, the coordination effect of such algorithms is likely to be highly relevant to the process of identifying which speech forms, expressions and posts are most likely to pose any kind of risk to the social order, whether we accept the thesis that disruptive collective action is the top-of-mind focus of the Chinese government or, instead, choose to weigh any concerns on the part of the authorities about criticism of the State and its officials as being of, at least, equal importance to posts associated with collective action.

Algorithmic patterns about the spread of particular memes, evolving sentiment, and the subtext of perceived disinformation and misinformation (regardless of ideological perceptions of the “mobilizational potency” of the content being shared) offer human censors a computer-assisted, real-time vista. That vista can mirror the “highly efficient” and “military precision” features of the coordinated censorship effort that King et al attribute to administrative clarity about which particular expressions of dissent the authorities might perceive, as a matter of policy, to be dangerous to the stability of the political system.

Such algorithmic artefacts can also simulate the efficiency of consensus-building amongst administrative units at different levels of the censorship regime that King et al report. By using data analytics, the spread of sentiment starting to “get out of control” should be trackable as an objective matter, making consensus much more mechanical than is implied by the article’s assumption of efficient and sophisticated inter-departmental deliberation.

My second point is a variation on the first. Algorithmic personalisation of “newsfeeds” and tracking of post/expression popularity and resonance (including the tracking of endorsement gestures, such as hashtags, emojis, “like” and “share”, or similar social media behavioural grammar) enables a selection of “dangerous” posts irrespective of whether their specific content is favourable or unfavourable to the regime. The decision would be based less on their content and incitement potential and more on a simple calculation of their traction, especially where the context concerns any issue considered “unapproved for discussion” by the Authorities.
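
To make the hypothesis concrete, here is a deliberately simplified Python sketch of traction-based flagging; the fields, weights and threshold are illustrative assumptions of mine, not a description of any actual censorship pipeline:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    shares_last_hour: int
    comments_last_hour: int
    on_unapproved_topic: bool
    pro_government: bool  # deliberately never consulted below

def flag_for_review(posts, velocity_threshold=500):
    """Flag posts purely on traction within unapproved topics,
    ignoring whether the content praises or criticises the state."""
    flagged = []
    for p in posts:
        velocity = p.shares_last_hour + 2 * p.comments_last_hour
        if p.on_unapproved_topic and velocity >= velocity_threshold:
            flagged.append((p.post_id, velocity))
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```

Under such a rule, a fervently pro-government post on a forbidden topic is flagged just as readily as a dissident one, which is exactly what the hypothesis predicts.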

This alternative hypothesis would be compatible with both the theory that the State frowns equally on criticism of its policies and leaders and on statements/expressions with mobilizational power, on the one hand, and the theory that the State almost exclusively targets posts with incitement potential, on the other (i.e. the King et al theory). The state, according to this hypothesis, essentially targets posts that are likely to be seen and endorsed by many people and thus develop into a narrative likely to get out of control. Whether or not such “seed posts” are critical or supportive of the government is not very important in the larger scheme of things since what matters is their tendency to prolong discussion and exacerbate tension, under the pretext of debate, in connection with discussions perceived to be unfavourable to the government’s interests or unapproved as worthy for social amplification.

That last sentence is a bridge to my final point. Since the King et al article was written, the dynamic nature of real-time censoring has been in evidence on several occasions in Chinese cyberspace. It has been noted that banned “trigger words” can be updated in real-time to fragment and splinter debate considered unapproved or unworthy, as was the case during the discussion around President Xi’s term-limit abolition policy. The rather expansive range of target-sentiments over the period completely discredits the notion that only speech capable of triggering imminent mobilisation for open dissent is targeted by Chinese censors. In fact, an increasing tendency to completely remove accounts that generate “bad content” suggests a “sanitisation” rather than a “tempering” approach to online speech censorship.

The biggest dent in the “collective action only” theory is, however, even more straightforward: the increasing focus on overseas critics whose views are far from likely to serve as fodder for organised dissent. Considering that quite a number of overseas critics tend to mock State policy rather than threaten it in any serious way, the suppression of their presence in Chinese cyberspace aligns quite faithfully with a general trend that views lampooning or vulgarisation by artists and other creative types very poorly, not because they can spark “collective action” but because they represent the threat of a slow corrosion of respect for official authority, a major Confucian anathema.

In short, China has always seen the policing of cyberspace as nothing more than an extension of the public order regime in physical spaces. Behaviour in cyberspace requires regulation because there are publicly approved standards of conduct. Online debates, discussions, and narratives that are unlikely to promote constructive “cognitive conduct” are just as corrosive to the state’s organising mission as is “physical” antisocial conduct like petty corruption or prostitution.

This is after all the avowed aim of the Central Propaganda Department, and against this no norm of the sanctity of intellectual privacy can prevail.


Pigeon Drones vs Falcon Drones


The Paul Allen-backed Zipline Corporation has since 2016 operated a commercial drone service in Rwanda that reportedly transports 20% of the blood needed for emergency transfusions in that country.

Zipline is now working to expand its services to as many African countries as possible. Tanzania and Ghana are next in line to patronise the company’s services.

At first glance, a transport drone service shouldn’t raise any major privacy issues. Zipline’s drone-enabled life-saving missions, their uplifting character notwithstanding, are not all that removed, conceptually and operationally, from the several widely discussed models of drone-enabled commerce around the world.

Amazon, Google, 7-11, Walmart and many major corporations around the world, particularly in the US, have all launched major R&D efforts to make package delivery by drones a ubiquitous and mundane reality by the close of this decade.

The implications of this trend for supply chain management, precision transportation, routing optimisation and various other major operational functions of modern business are said to be legion. Zipline for instance claims to have reduced delivery cycle time from 4 hours to 15 minutes on blood transport trips. In fact, one of the earliest backers of the Zipline effort in Rwanda was the UPS Foundation, the charitable arm of the American package delivery giant. Government agencies, humanitarian relief organisations and civil networks are all expected to get into the game.

Unlike surveillance and anti-personnel drones in the service of state security and military entities, civilian applications in the transport, commerce and logistical domains are rarely scrutinised for their privacy impacts.

It is perhaps not too surprising, then, that despite a thorough search for evidence that Zipline’s operations in Rwanda and elsewhere are governed by a definite privacy policy, I have been unable to find any. But there is no point singling out Zipline. No drone-commerce initiative that has received attention of late has a publicly accessible privacy policy. And there is virtually no serious scholarly work on the subject.

The reason is obviously that unless “surveillance” or other privacy-touching processes can be observed in the form to which our analytical tools have grown accustomed, concerns about privacy rarely surface. This is the “form-factor fixation” dimension of privacy appreciation and analysis.

And, yet, Zipline’s service is dripping with all manner of privacy ramifications, which will only grow as its operations grow in scale and sophistication, and particularly if it is to be truly successful in contributing substantially to health outcomes.

To be able to deliver the right types of blood to the right patients in the right locations, whether on a routine or emergency basis, information about the patient must be input into Zipline’s electronic ordering, routing, and coordination platforms. To optimise results tracking, minimise errors, and enhance accountability for quality of service, delivery performance data needs to be linked to individual patient outcomes. At any rate, certain portions of a patient’s medical records would be critical in any emergency intervention.
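
One hedged way to picture the design question this raises is a data model that keeps delivery logistics apart from directly identifying patient data; the record types below are hypothetical, not Zipline’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DeliveryOrder:
    order_id: str
    blood_type: str          # clinically necessary for the flight itself
    destination_clinic: str
    requested_at: datetime
    patient_ref: str         # pseudonymous token, resolvable only inside
                             # the requesting clinic's own record system

@dataclass
class DeliveryOutcome:
    order_id: str
    delivered_at: datetime
    outcome_code: str        # reported back by clinic staff, keyed to the
                             # order rather than to the patient

# Delivery performance can be audited by joining orders to outcomes on
# order_id, without the drone operator ever holding names, diagnoses or
# other directly identifying health records.
```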

How is this different for a normal ambulance or other legacy transport system though? This: the fact of remote operation. Remote operation introduces important elements regarding the personnel dimension that can alter standard privacy arrangements. The persons operating the drone need not be actual health responders. They do not have to be operating from a controlled health facility, and they are not subject to the various measures which today are designed to be tied to the specific physical coordinates of health infrastructure units. To the extent that Zipline is a private company whose technicians and contractors operate outside the legacy health system, and, more importantly, because of the remote capabilities wielded by these contractors, the design of information-intensive processes to enhance quality control and accountability also risks the exposure of sensitive personal health information to unauthorised access.

A major point to bear in mind is that “form-factor fixation” often leads to a compartmentalisation of new technologies that has no true grounding in reality. Whether a drone is a “surveillance platform” or a “transport vehicle” is purely a question of mode of behaviour, not of set function, and the answer to that question can change from context to context for the same system.

Hence, a drone transporting sensitive medical products into contexts of heightened vulnerability for individuals or groups of individuals, regardless of its nominal designation, metamorphoses into a creature far more concerning than a drone designed to drop crates of soda into suburban courtyards.

Privacy policy design for novel and emerging services, whether for drone deliveries or 3D printing of biotech products, should always proceed with an emphasis on behavioural dynamics, not structural form.
