The Fidelity of Errors: Why Biometrics Fail in African Elections

It is common knowledge that all biometric systems rely on probabilistic matching, and that every successful biometric authentication is merely a statement about the likelihood that an individual is who they are expected to be.

What is more interesting is the determination of the factors that go into establishing what the reasonable margin of error should be when processing such a match, or, more accurately, when estimating these probabilities.

Officially speaking, the error tolerance margin should be determined by empirical testing based on data about the false match rate and the false non-match rate; in essence: the interplay between the “false negatives” and “false positives” record obtained from the field.
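As a rough sketch of the normative standard just described, the two rates can be estimated from labelled comparison scores collected in field testing. Everything below is hypothetical for illustration: the scores, the 0.5 threshold, and the function name are not drawn from any actual deployment.

```python
def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """Estimate error rates at a given decision threshold.

    A comparison is accepted when its similarity score >= threshold.
    FNMR (false non-match rate): fraction of genuine, same-person
    comparisons wrongly rejected ("false negatives").
    FMR (false match rate): fraction of impostor, different-person
    comparisons wrongly accepted ("false positives").
    """
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fmr, fnmr

# Hypothetical similarity scores in [0, 1] from a field trial.
genuine = [0.92, 0.85, 0.40, 0.77, 0.95, 0.63]   # same-person comparisons
impostor = [0.10, 0.55, 0.30, 0.05, 0.48, 0.22]  # different-person comparisons

fmr, fnmr = fmr_fnmr(genuine, impostor, threshold=0.5)
print(f"FMR={fmr:.2f}  FNMR={fnmr:.2f}")
```

In practice these rates would be estimated from thousands of comparisons gathered in the actual user environment, not a toy list, but the arithmetic is the same.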

The practice in many situations, however, flouts this normative standard, leaving calibration to the competing forces of commercial logic and political expediency. And nowhere more so than in the context of sophisticated African elections.

To fully illustrate this point, it bears clarifying, at a very high level, some key technical concepts.

Some errors are intrinsic to the probabilistic nature of biometric technology itself, but many others are non-inherent or extrinsic, and these sources of defect include factors that range from the technical to the sociological, such as:

  • Topological instabilities in matching as a result of source object (such as a finger) positioning in relation to the measurement device
  • Poor scanning of biometric attributes
  • Storage and retrieval quality
  • Personnel and manpower inadequacies
  • Environment and ambience (for example, lighting and aural ambience can affect voice and visual biometric authentication in varying ways)

Extrinsic sources of error are typically institutional, and thus effectively impossible to eliminate without wholesale interventions in the design and user environment that are typically considered out of scope when constructing even the most complex biometric-enabled systems.

Intrinsic sources of error are far more complex in their provenance and, not surprisingly, very difficult to adequately explain in a post of this length.

In general though, these errors emanate from the statistical procedures used to reduce ratios, measurements and correlations compiled from the physical imaging or recording of phenotypical (or, in the near future, genotypical) features of designated source objects associated with the target individual or subject of interest. We might refer to this type of error as representing a level of “systematic risk” indispensably present in the use of the underlying concepts of biometry themselves as a means of precise differentiation of sociobiological humans.

The common effect of both types of error is however straightforward: large-scale biometric-enabled programs and systems must anticipate a considerable margin of error in performance.

The anticipated risk that results from this certainty of error bifurcates into the probability that the machine will accept individuals who should have been rejected, and the alternative probability that the machine will reject individuals who should have been accepted.

At the intersection of these two sets of expectations lies the acceptable threshold, which, of course, moves up or down depending on which of the two probabilities is of greater concern, a determination strictly dependent on institutional context and the stakes of failure.
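The tradeoff at this intersection can be sketched by sweeping a decision threshold over hypothetical comparison scores. Nothing here is drawn from a real deployment: the scores, the candidate thresholds, and the function name are all illustrative assumptions.

```python
def sweep(genuine, impostor, thresholds):
    """Return (threshold, FMR, FNMR) triples for each candidate threshold.

    Raising the threshold makes matching stricter: the false match rate
    (wrongful acceptances) falls while the false non-match rate (wrongful
    rejections) rises. The "acceptable" threshold sits wherever those two
    concerns balance for the institution in question.
    """
    rows = []
    for t in thresholds:
        fnmr = sum(s < t for s in genuine) / len(genuine)
        fmr = sum(s >= t for s in impostor) / len(impostor)
        rows.append((t, fmr, fnmr))
    return rows

# Hypothetical similarity scores in [0, 1].
genuine = [0.92, 0.85, 0.40, 0.77, 0.95, 0.63]
impostor = [0.10, 0.55, 0.30, 0.05, 0.48, 0.22]

for t, fmr, fnmr in sweep(genuine, impostor, [0.2, 0.5, 0.8]):
    print(f"threshold={t:.1f}  FMR={fmr:.2f}  FNMR={fnmr:.2f}")
```

With these toy scores the two rates cross near the middle of the range; an administrator worried chiefly about visible exclusions at polling stations would push the threshold lower, accepting a higher false match rate in exchange, which is precisely the dynamic discussed below.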

In the case of elections, the strongest tension is between delegitimising results by permitting a certain margin of the very fraud that biometric systems are usually introduced to stem, on the one hand, and, on the other, disenfranchising electors whose right to vote and influence the selection of their leaders many constitutions insist should be treated as sacrosanct.

Empirical testing in the user environment prior to the design of the biometric instruments would be the obvious prerequisite to calibrating the thresholds for rejection and acceptance. Yet, across Africa, such testing is almost always performed halfheartedly, with the result that basic occupational (some types of work deface fingerprints faster than others, for example), environmental (higher amounts of dust, sweat, etc.) and infrastructural challenges quickly ricochet into higher-than-expected rates of false negatives.

Because false negatives are more “visible” (they exclude and therefore antagonise), the feedback loop against false non-match rates is far stronger than that against false match rates, which in the field can only be surfaced by mystery shopping, a prospect highly infeasible in the normal, politically charged election scenario.

False positives do show up in the end though, when despite biometric authentication and biometrically sanitised electoral registers, overvoting incidents, notionally impossible in the presence of the technology, are recorded. But this being a delayed effect, and likely only discoverable in the event of a disputed and judicially scrutinised outcome, the incentive to set threshold rates low enough to minimise false non-match rates and, as a corollary, raise the prevalence of false match rates, is much stronger.

Consequently, in the two African countries where the most sophisticated biometric systems have so far been deployed in recent general elections – Ghana and Kenya – an interesting trend was observed. Whereas in previous elections biometric exclusion incidents were numerous and resulted in serious altercations at polling stations, more recent elections saw very few of these types of incidents. Yet, following the decision of Opposition parties to challenge the elections in the law courts, alarming evidence emerged of outcomes, such as over-voting, that should theoretically have been barred by the deployment of biometric apparatus.

Why is testing so poorly done? Primarily because most of the vendors behind these solutions tend to be foreign systems integrators with minimal exposure to the sociological context of the processes they are expected to model, and also because commercial considerations often militate against a serious examination of system design. Much too often, contracting procedures are shrouded in technical secrecy and undue backroom backscratching, with claims of proprietary standards and jostling for advantage by politically connected rent-seekers.

With cost per voter in the most sophisticated African electoral systems now among the highest worldwide, many observers are beginning to wake up to the urgent need for outcomes-driven reforms of the technical safeguards of universal suffrage on the continent.
