
Accountability in government code: the need for due process

A case of bad QA?

In the Terminator movie series, an artificially intelligent defense network concludes that the greatest threat to world peace is human beings and proceeds to launch preemptive worldwide nuclear strikes. This Oedipal fear that our silicon-and-code children might one day overthrow us – a favorite among dystopian science fiction authors – may be closer at hand than we’d like to think. Predator drones have yet to evolve into Schwarzenegger clones, but our fears about the inner workings of Diebold voting machines are real and understandable. Thus Professor Danielle Citron’s forthcoming article, “Technological Due Process,” provides a timely examination of an urgent problem challenging our computer-dependent society.

Professor Citron, of the University of Maryland School of Law, has written the opening chapter to an important new field of legal study, one hinted at in Lawrence Lessig’s “Code is Law” formulation and which I had dubbed “Law as Code,” but which until now has not received direct scholarly investigation. Her article, forthcoming in the Washington University Law Review, identifies the need for “technological due process” in software that executes government policies – from public benefits to no-fly lists.

In brief, Professor Citron describes how software code increasingly executes our public laws. Decision support systems, she convincingly argues, quickly become decision making systems. And invariably, the vagaries of the legislative and administrative processes leave large gaps in the specifics of how a given law should be executed. Without firmer guidance from proper governmental bodies, the programmers charged with translating legal code into software code essentially wind up creating law to fill the gaps. (I describe this as “shoving analog pegs into digital slots”). From a procedural – even a Constitutional – perspective, this is a grievously inappropriate delegation of governmental functions to the private sector, not unlike the hiring of Blackwater mercenaries to achieve military objectives. Professor Citron finds, therefore, the need for “technological due process”: safeguards to ensure that software is literally up to code.

In exploring what I’d described as “Law as Code,” and now significantly better informed by the invaluable analysis of “Technological Due Process,” this fall the Berkman Center started seeking out areas of public law at risk of being improperly executed by unaccountable software code. In my work in legal aid I’d already identified the distribution of federal food stamps as one such area. Here in Massachusetts, the BEACON software system is riddled with errors and misinterpretations of the law. In New York State, Federal District Court Judge Rakoff had similarly invalidated software allocating that state’s food stamps (445 F.Supp.2d 400). Rather than going after broken code, we’re much more keen on identifying code not yet in place, where identifying and implementing some best practices can go much farther than litigating for change in a system already starved for resources.

“Technological Due Process” points the way to new vistas of research. Perhaps most important among them are questions about how to bring democratic accountability back into the system. From my perspective, at least two paths present themselves. The first lies in the realm of classic administrative law. There is no question in my mind that many instances of government software meet the definition of agency rules: they are rules of execution, prospective in nature, applied uniformly. Meeting that standard is significant: it would subject such code to rigorous requirements such as public participation in their creation (notice-and-comment) and judicial review in their actual application.

But as James Grimmelmann, now a professor at New York Law, pointed out when I’d first suggested this approach to the CyberScholars this past spring, many lawyers and scholars perceive ad law as broken. Furthermore, perhaps software is sui generis and deserves its own method of review tailored to its unique nature.

Berkman clinical student and JOLT editor Ryan Trinkle has proven invaluable in guiding our search for review methods native to software. As both a student and practitioner of the coding arts, Ryan pointed out that software quality assurance (QA) might provide some excellent models for how to achieve the due process goals of ad law. We ultimately struck upon one potentially elegant merger of QA and notice-and-comment: obligate vendors to include testing suites with their software, and allow the public to submit specific cases for those test suites to calculate. In the case of food stamp software, for example, advocates for domestic violence survivors might submit a battery of both mainstream and “outside case” scenarios and evaluate the results for accuracy. (The New York food stamp software had tripped up specifically on a category of undocumented immigrants who were also domestic violence victims.) The proof of the software, then, would be in its results — and tested publicly before deployment.
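To make the proposal concrete, here is a minimal sketch of what a publicly submitted test suite might look like. Everything in it is hypothetical: the `food_stamp_eligible` function, its income threshold, and the household fields are invented for illustration and are not drawn from any real statute or vendor system.

```python
# Hypothetical sketch: a public test suite run against
# benefits-eligibility software before deployment.
# All rules and names here are illustrative, not real law.

def food_stamp_eligible(household):
    """Toy stand-in for the vendor's eligibility code."""
    income_limit = 1300 * household["size"]  # invented threshold
    if household["monthly_income"] > income_limit:
        return False
    # An "outside case": domestic violence survivors qualify
    # regardless of documentation status.
    if household.get("dv_survivor"):
        return True
    return household.get("documented", True)

# Publicly submitted scenarios: each pairs an input household
# with the result advocates believe the law requires.
public_test_cases = [
    ({"size": 3, "monthly_income": 2000, "documented": True}, True),
    ({"size": 1, "monthly_income": 5000, "documented": True}, False),
    # The kind of scenario that tripped up New York's system:
    ({"size": 2, "monthly_income": 1500, "documented": False,
      "dv_survivor": True}, True),
]

def run_suite(cases):
    """Return the scenarios where the software disagrees with the law."""
    return [(h, want) for h, want in cases
            if food_stamp_eligible(h) != want]

failures = run_suite(public_test_cases)
print(f"{len(failures)} failing scenario(s)")
```

The design point is that advocates need not read the vendor's source code: they only submit input scenarios and legally required outcomes, and any mismatch surfaces before the system touches a real caseload.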

Prof. David Super, a colleague of Prof. Citron’s and visiting this year at Harvard, has also suggested avenues for federal change, namely, the Office of Management and Budget under its “Management” mandate. OMB issues circulars governing the quality of products and services purchased by the federal government, including software (§277), but most of that material pertains to accounting, with the exception of §277.18. We might consider contacting the Office of Information and Regulatory Affairs (OIRA), whose mission is ensuring good government, and making the dual argument that the software the government currently procures is (a) bad, and (b) burdensome to maintain.

Ultimately we predict that legal and software code will merge as semantic computing becomes more powerful. Indeed, perhaps “legalese” will evolve into an even more technical, even self-executing, language. One way or another this evolution will also call for more specially-trained lawyers who can bridge the two faces of code.

But until then, I think our best hope for democratic accountability over the software that increasingly shapes public life will be robust testing suites as Ryan suggests and further research down the paths that Prof. Citron has broken open for us. And, perhaps, never giving one of our artificially intelligent agents a nuke.
