Responsible Cyberwar

What counts as war and what doesn’t? This seems to be the central question in political science regarding cyberconflict. From the perspective of governments, the question is more like “how much can we get away with?” As far as I know, espionage has typically not been treated as an act of war, although the unlucky spies didn’t fare too well regardless. Espionage often involves infiltrating the enemy (and friends!), and if and when those spies get caught, the target country gets to decide what to do with them. In this way there’s accountability. In areas like traditional (e.g. terrestrial radio) signals intelligence, it would be quite odd to view listening in as an aggressive act. The hacking of computer systems to gain intelligence seems different from all of these examples; there’s infiltration but no human culprit.

As for destructive hacking, I see no significant difference between it and dropping bombs or destroying paper-based information. Just because you use a computer to do it doesn’t change the fact that you made something happen in the physical world. Cyberattacks can be a lot more specific in what they destroy; in the case of Stuxnet and the Iranian centrifuges, the bug broke only the machines. The traditional version of that attack would probably have involved several thousand pounds of bunker-busting bombs and left a giant smoking hole in the ground. It would also have killed people, which would probably have been seen as a far bigger affront.

However, cyberattacks are not inherently narrow in their scope of destruction; they can miss their intended targets, or the cyberarms can fall into the wrong hands. The WannaCry attacks are an excellent example of this: NSA tools were leaked and used by criminals to write malware that shut down hospitals all over the world.* Thinking about it now, this seems very analogous to the Obama-era “Fast and Furious” operation that resulted in the arming of Mexican drug cartels (of course that wasn’t the point; everything just went terribly wrong). The big problem with cyberarms is that if they are leaked they can be copied and adjusted to attack entirely different targets. Once in the wild, it’s nearly impossible to stop their spread.

What I’m trying to get at here is that cyberattacks aren’t really about war; they’re about creating weapons that will inevitably get into the wild, where they can be distributed without limitation. Anyone who thinks they can control cyberweapons and simultaneously use them is delusional. Lucas Kello notes that the costs of cyberattacks include losing precious zero-day vulnerabilities, because those get patched once an attack reveals their use. But the cost is higher than that; until they are patched, those zero-days can be used by other countries, criminal gangs, or bored teenagers. That’s a really big problem. A responsible government would surely not want such a thing to happen. Nobody would drop re-usable bombs, because that would be really stupid.

A responsible government has an alternative option: instead of setting malware-weapons loose, it could notify developers of those same vulnerabilities. Any country with an active “cyber-attack” squad already has most of this in place. They already know how to find bugs; the only remaining step is to tell someone who can do something about it.

But what about irresponsible governments? They will certainly use cyberweapons if they have them, so won’t those bugs get out into the wild anyway?

Not if they’re already patched.

*In case I’m wrong and there really was no direct link between the leaked NSA tools and WannaCry, such a scenario could still happen, just as easily as a gun-running sting operation could end up arming the cartels without catching their leaders.
