Cybersecurity legislative and policy proposals have had to grapple with when (if ever) firms ought to be held liable for breaches, hacks, and other network intrusions. Current approaches tend to focus on the data that spills when bad things happen: if it’s sensitive, then firms are in trouble; if not personally identifiable, then it’s fine; if encrypted, then simply no liability. This approach is a little bit strange, by which I mean daft: it uses the sensitivity of the information as a proxy for both harm (how bad will the consequences be?) and precautions (surely firms will protect more sensitive information more rigorously?).
I propose a different model. We should condition liability – via tort, or data breach statute, or even trade secret misappropriation – based upon how the intruders gained access. Let’s take two canonical examples. One exemplifies the problem of low-hanging fruit – or, put another way, the trampling of the idiots. Sony PlayStation Network (Sony is a living model for how not to deal with cybersecurity) apparently failed to patch a simple, widely known vulnerability in its database server (an SQL injection flaw, for the cognoscenti). Arthur the dog would have patched that vulnerability, and he is a dog who is continually surprised to learn that farts are causally connected to his own butt. On the other hand, Stuxnet and Flame depended upon zero-day vulnerabilities: there is, by definition, no patch available to defend against these attacks. They are like the Crane Kick from “The Karate Kid”: if do right, no can defense.
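For readers curious what an SQL injection actually looks like, here is a minimal sketch in Python. The table, data, and login function are hypothetical illustrations (not Sony’s actual code): the point is that the vulnerable version splices attacker input directly into the query string, while the fix – a parameterized query – has been standard practice for decades, which is what makes this the low-hanging fruit of the piece.

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, pw):
    # DANGER: attacker-controlled input is pasted straight into the SQL.
    query = f"SELECT * FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, pw):
    # Parameterized query: the driver treats input as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND pw = ?"
    return conn.execute(query, (name, pw)).fetchone() is not None

# Classic payload: close the quote, then comment out the password check.
payload = "alice' --"
print(login_vulnerable(payload, "wrong"))  # True: attacker walks in
print(login_safe(payload, "wrong"))        # False: attack fails
```

The injected string turns the vulnerable query into `... WHERE name = 'alice' --' AND pw = '...'`, and everything after `--` is an SQL comment. Preventing this requires no heroics, just the second function instead of the first.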
So why would we measure liability based on data rather than precautions? The latter is a classic tort move: we look at whether the defensive measures taken were reasonable, rather than whether the harm that resulted is large. I would suggest a similar calculus for cybersecurity (ironic, in light of software’s effective immunity from tort liability): if you get pwned via something you could easily have patched, then you’re liable for every harm a plaintiff can plausibly allege. In fact, I’m perfectly happy with overdeterrence here: it’s fine with me if you get hit for every harm a creative lawyer can think of. But if your firm gets hit by a zero-day attack against your Oracle database, you’re not liable. (There are some interesting issues here about who can best insure against this residual risk; I’m assuming that companies are not the best bearers of it.)
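The proposed calculus reduces to a simple decision rule. A toy sketch (the vector labels are mine, for illustration only, and plainly not a legal taxonomy):

```python
def liable(attack_vector: str) -> bool:
    """Toy model of the proposal: liability turns on how the intruders
    got in, not on what data spilled. Labels are illustrative."""
    # Low-hanging fruit: defenses any reasonable firm could have taken.
    known_and_patchable = {"sql_injection", "unpatched_known_cve",
                           "default_credentials"}
    if attack_vector in known_and_patchable:
        return True   # liable for every harm a plaintiff can allege
    if attack_vector == "zero_day":
        return False  # Crane Kick: no can defense, so no liability
    # Everything else needs the more nuanced precautions analysis below.
    raise ValueError("requires case-by-case analysis of precautions")

print(liable("sql_injection"))  # True
print(liable("zero_day"))       # False
```

The hard part, of course, is the `ValueError` branch: the cases that fall between the trivially patchable and the truly unforeseeable, which the next paragraph takes up.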
This leaves some hard questions: what about firms whose employees open e-mails loaded with zero-day exploit code? We might need a more sophisticated analysis of precautions. How good was your desktop A/V? Did you segment your network? Did you separate your data to make it harder to identify or exploit?
To take up one obvious objection: this scheme requires some forensics. One must determine why a breach occurred in order to fix liability. But firms do this analysis already; they have to figure out how someone broke in. We can design rules to protect secrets, such as the details of network defenses, and any litigation is likely to take place months if not years after the fact. I think it’s unlikely that firms will be able to effectively game the system to show that intrusions resulted from impossible attacks rather than someone jiggling doorknobs to find unlocked ones. And we could play with default rules to deal with this problem: companies could be liable for breaches unless they could show that attackers exploited unknown weaknesses. If we’re worried about fakery, we could require that firms prove their case to a disinterested third party, such as Veracode or FireEye – companies with no incentive to cut weak organizations a break. Or we could set up immunity for firms that follow best practices: encrypt your data, patch known vulnerabilities in your installed software base, provide for resilience and recovery, and you’re safe.
I think we should differentiate liability for cybersecurity problems based on how the attackers broke in. Were you defeated by the Crane Kick? If so, then you get sympathy, but not liability. But if it turns out that you left the front door unlocked, then you’re going to have to pay the freight. We can’t expect miracles from IT companies, but it makes sense to require them to do the easy things.