Alan Friedman – 9/21/2010


Alan Friedman, fellow at the Center for Research on Computation and Society at Harvard (CRCS), is inaugurating a new tradition at the Berkman Center: a fellows seminar. For years, the community of Berkman Fellows has met for discussion on Tuesday afternoons. This year, we’re trying something a bit new. Each Tuesday, one or more of the fellows will offer a “seminar” on a topic, loosely connected to a central theme: internet exceptionalism. So Alan’s talk addresses both his topic of choice – the dynamics of cybersecurity policy discussions – and a larger question: are issues of security different in the world of the internet than in a pre-internet world?

Alan gets the honor of leading off our seminar in part because he’s about to leave Berkman and begin work at a Washington DC policy center. As such, these issues are near and dear to his heart. He begins by warning us that he’s going to give “a short, general talk on cybersecurity designed to persuade people not to listen to short, general talks on cybersecurity.” His concern – we do a disservice to the complex ideas behind cybersecurity by melding multiple issues into one. He acknowledges that there are key computer security and information security issues that need addressing, but the focus for his provocation is on skepticism about the current framings of cybersecurity.

Showing us the cover image from the July 1, 2010 issue of the Economist, which focused on cybersecurity, Alan attempts to unpack a four-page article in that newspaper. The article covers over a dozen topics, including critical infrastructure, military strategy, cybercrime, financial fraud, economic espionage, espionage between states, and issues of global governance. These topics are so broad that it’s impossible for an individual to be knowledgeable on all of these fronts – in an academic context, each could be a full academic specialty.

It’s important to unpack the discourse of cybersecurity because framing matters – how you talk about a problem affects how you think about it and how you try to address it. One common frame for cybersecurity is national security. If we frame these issues in terms of national security, we conclude that they are hugely important and that price is no object. If we address them as criminal justice issues, we focus on getting the bad guy. Alan offers the frame of “identity theft” as an example. If we see identity theft as theft, we focus on deterring the thief or on letting victims defend themselves. If we think of it in terms of “impersonation”, the responsibility might shift onto the processor and away from the “victim”. Alan suggests that we might consider frameworks aside from national security or criminal justice: pollution, public safety, or the cost of doing business.

Cyber policy can now touch on virtually everything, Alan tells us:
– National interest – other states have visions for how they deal with the digital world. Do we need an overarching national vision?
– International governance – people in the US tend not to be excited about the realization that these systems extend beyond our national borders and that decisions of other states may shape what we can do
– Legal issues – do computers and IT change our underlying assumptions about law?
– Freedom of Expression
– Critical infrastructure

For the purposes of today’s discussion, Alan asks to focus on issues of national security and of crime. National security has become a hot topic in cyber policy circles. The forthcoming US Cyber Command is a political compromise that allows the Department of Defense to engage in cyber activities without duplicating the National Security Agency’s deep competence in this field. But some of the rhetoric is getting downright strange – the Department of the Navy has declared that every one of its 75,000 employees is now a “cyberwarrior”, whatever that might mean.

In the context of cyber policy and national security, Alan suggests we consider some key policy considerations:

Reachable states: In defense planning, there are certainly people employed to think about the possibility of a full-on war with China. As such, there are now people thinking about full-on cyberwar with China. Alan suggests we need to do “conditional probability” – before we consider full-on war or cyberwar with China, we should ask what has already transpired. It may be ludicrous to consider the possibility that our adversary would knock out the electrical grid in the Northeast during winter because, at that level of conflict, we might already be exchanging nuclear arms.

Proportional response: While this is a new battlefield, and while there are almost certainly dangers in terms of intercepting communications and espionage, it’s possible that we can use the old language of proportional response. In the days of the cold war, we’d see a sub come too close to our shores and we’d put a few more bombers on the tarmac, which would be visible on the next satellite photo. This is the language of how states interact, and we should bring this into discussion of proportional response in terms of cybersecurity.

Cyberterrorism, or “state versus non-state actors”: Alan offers the story of a town in New Jersey that sought – and received – $10,000 in anti-terrorism funding to protect a gumball machine, a local attraction. In the wake of 9/11, we noted that people could attack critical infrastructure, and we protected it as if an attack were certain. We should explore how our digital infrastructures work and map their resilience, but it may not make sense to protect every system as if it will be attacked and as if its failure would be critical.

The development of a national cyberwar strategy focuses on issues of deterrence. This is an attempt to map this new space onto a previously understood model. In the offline world, we know that bad actors can hide in other states, and since we can bomb those other states, we hold states responsible for the actions of their citizens and those they harbor. Following this logic, Richard Clarke suggests we hold nations responsible for every bit that transits their borders… and suggests the US take the lead in this space. This has serious implications for how we handle identity in a digital age – it points to the need for a hierarchical model that maps individuals back onto states.

Moving to cybercrime, Alan offers some observations about what’s new and what’s old in this space. Cybercrime is a visceral issue – theft through the computer screen feels scarier because it could have happened because I failed to patch my software. And it’s scary because companies are trying to scare us – McAfee claims $1 trillion in annual damages from cybercrime… a figure larger than the total output of the global IT industry, and one that would represent 8% of the global economy.

He suggests that most cybercrime fits into five general, and well-understood, categories:
– Hijacked resources
– Authenticator fraud
– IP theft
– Illicit content
– Scams

The specific attacks and the ways these map to these categories are complicated, as are the vulnerabilities that make these attacks possible, and the organizations responsible for protecting us from these attacks. But much of this territory is understood. What’s genuinely new? Possibly the victimization of children. Possibly fraud, which now scales much more easily. But Alan recommends we focus on industrial espionage and cases where identity fraud can cause critical failures.

In summary, Alan is arguing that when we talk about cybersecurity, we’re talking about a huge bundle of issues, perhaps too huge a bundle. It behooves us to ask “Why is cyber now on the agenda?”, “Can and should we co-opt this attention to promote our own agendas?” and “What’s genuinely new and important here?”

Hal Roberts wonders if we know what security actually means in an online context. He notes that McAfee recently released the alarming statistic that 60% of people are victims of cybercrime. If that seems insane, it’s because McAfee includes anyone who reports being infected with a computer virus as being a cybercrime victim… which suggests that 40% of the people who’ve been infected with viruses don’t know they were affected. If you’re the “victim” of “crime”, and you didn’t notice, does it matter?

Ethan Zuckerman asks whether this tendency to lump everything under cybersecurity is a moment in time that will pass, giving way to saner discourse – can’t we just wait for everyone to figure this out and handle the issue a bit more sanely?

Alan explains that, if we don’t fight the current frames around cybersecurity, the best case scenario is that the US government spends billions of dollars badly. (He cites a deal between HP and the Navy where the Navy paid HP $2 billion so HP would tell it what it had done in building the Navy’s information architecture, as that architecture was the property of HP.) In the worst case scenario, changes made in the name of cybersecurity might add strong authentication and identity mechanisms to the internet and damage the current openness and generativity.

Ethan asks whether the big difference between the internet and the real world in security terms is that identity is so difficult to establish online. If we can’t identify who’s responsible for an attack – whether it came from a state or an individual, which state it came from – how do we retaliate? And without retaliation, is there deterrence? If the issue is identity, doesn’t that point to a solution that makes identity much less fluid on the internet… a solution many of us don’t want because it has terrible implications for privacy and freedom of speech?

Alan mentions that identity matters in terms of certain types of cyberattacks (the attacks on Estonia, for instance, where – citing an unnamed academic – he argues the attacks were a cyberriot, not a cyberwar) and in terms of phishing. Deep identity solutions might offer some protection there. But cyberwarfare isn’t just about attribution – it’s about defense. We need to understand our vulnerability to targeted attacks. If you really want to take out the east coast power grid, he argues, you might need eight riflemen shooting insulators… but they need to know which insulators to shoot. What’s scary in cybersecurity is how carefully targeted some of the attacks we’re starting to see are – espionage attacks that focus on specific deputy secretaries in the State Department before critical negotiations. He warns, “If you’re negotiating an international contract, there’s a decent chance your counterparty knows your reserve price. It’s going to affect negotiations, and might mean that secrets live in people, not in networks.”

David Weinberger notes that US discourse over conflict no longer includes discussions of peace – we’re a long way away from the 1970s and 80s, when “peace studies” was a central part of curricula about conflict. Now people are addressing cyber-insecurity and cyberwar. Do we have a vision for cyberpeace?

Alan offers that cyberpeace might be systems functioning as we expect them to. Perhaps cyberpeace was when the internet was young and innocent, prior to the Morris Worm. Now we might think in terms of pollution – the “background radiation of risk” that comes from spam, identity theft, phishing, DDoS. That might be a more appropriate frame than war and peace – we’ve had 25 years of IT-fueled growth, and now we may need to deal with some of the pollution that industry has generated.

Wendy Seltzer draws out the pollution frame, suggesting that we consider problems in terms of ones with local bad effects (polluting a local water source) and those that have systemic effects (carbon dioxide emissions leading to global warming). “Are there systemic-level cybersecurity problems that we need to address, without trying to make everyone perfectly safe?”

Hal Roberts suggests that Jonathan Zittrain‘s concept of generativity might be a vision of peace – it’s what we want to promote when the internet works well. The cosmopolitan vision of Global Voices could be another vision of what we want from an internet at peace. We handle the bad stuff via social insurance systems – systems that spread the cost of bad action over the many who benefit from being protected – and celebrate the good stuff.

Ethan tells a story from a conference at Princeton’s Center for Information Technology Policy. At a discussion of these issues, at least four camps were represented: a national security camp, a cybercrime camp, a human rights camp (which argued that activists are often targeted by their own states through IT systems), and a network administrator camp. The latter camp argued, “Sure, the internet is broken. But it seems to work pretty well nevertheless.” Weinberger quotes Tim Berners-Lee as saying “the web will always be a little broken.” Wendy Seltzer contributes, “Any system that can’t be misused isn’t worth using,” a saying so pithy it might need to go on Berkman’s coat of arms.

Charlie Nesson asks Joseph Reagle, who’s recently published a book on Wikipedia, what cybersecurity looks like from the perspective of a community like Wikipedia, citing the community as an exemplar of “internet peace”. Reagle notes that Wikipedia could be vulnerable to DDoS and is certainly affected by vandalism… but he wonders if we have meaningful enough definitions of war, peace and crime to be able to discuss these ideas. Some things are so massive in scale – an electromagnetic pulse attack on our information systems, for instance – that it’s a mistake to believe we can think through all the consequences.

Charlie pushes forward, wondering whether the model of Wikipedia – which Joseph’s book asserts is a community based on good faith at its core – could offer instructions to build other communities of good faith. Joseph suggests that “trust involves baring your throat”, and that Wikipedia depends on having faith in your fellow contributors and on a wealth of subtle factors. “These systems are delicate, much like internet security at large.” Game theory suggests that the system as a whole could fail if certain thresholds are crossed and contingent cooperation no longer leads to positive community behavior. Joseph worries that the way Alan is framing these issues – asserting that there may be no vision of cyberpeace – contributes to a world where mutual cyberarmament is inevitable. This, in turn, is aided by a cyber-industrial complex that benefits from a militarized cyberspace.

Doc Searls invokes Scott Bradner, one of the “greybeards” responsible for international internet governance and Harvard’s chief security officer, who describes the difference between bellheads (telco techies) and netheads as a religious difference. Bellheads believe in the importance of carrier-grade service, “six nines” (99.9999% uptime), and central control, as opposed to the nethead values of “rough consensus and running code”. Carriers believe (still) that the internet doesn’t work. Perhaps this means that networked systems are inherently somewhat insecure. Peaceful, perhaps, but not secure.

Brad Abruzzi suggests that, in the wake of 9/11, Americans began thinking about all the real-world vulnerabilities to terrorist attack that we might worry about. Could reservoirs be poisoned? How secure was our food supply? What if terrorists targeted grocery stores? Eventually, many of us came to the conclusion that it’s impressive and surprising that, as vulnerable as we are, we aren’t attacked all that often. Cyberspace may be scarier, because attacks could come from one person anywhere in the world… but we need to contextualize our fear and realize that we’re vulnerable on many fronts.

Alan concludes by observing that a cybersecurity and cyberwar paradigm means we need to consider not just the flows of information on networks but the risks associated with those flows. He points out that there’s a treaty being proposed, pushed by Russia, that would prohibit cyberwarfare. That treaty is unlikely to be acceptable to the US, and may be designed to push the US into a corner, painting the US as the aggressor in this space. The EU’s approach to these problems may be quite different from the US approach, because European concerns about privacy carry more weight. These issues are real, in play, contentious and unlikely to be settled any time soon.
