Archive for the 'Uncategorized' Category

⚐ Cyber Dialogue 2014 – Working group on “Cyber war & international diplomacy issues”


I am just back from the CyberDialogue conference, an event presented by the Canada Centre for Global Security Studies at the Munk School of Global Affairs, University of Toronto, that convenes actors « from government, civil society, academia and private enterprise to participate in a series of facilitated public plenary conversations and working groups around cyberspace security and governance ». The conference was frankly awesome – I’ve always been a fan of the Citizen Lab, and they deserve major applause for putting together a very productive and unique conference. They did a truly impressive job of convening actors with very different backgrounds and points of view – something that is much needed in that space, as conversations unfortunately tend to happen within echo chambers of like-minded experts.

I was lucky to be among an impressive group of folks for a working group on cyber war and diplomacy. We had a whole day to talk through differences in perspectives, to identify priorities and fault lines and to come up with a short statement of things we could agree on – then point to things we couldn’t reach consensus on.

Below are three points we managed to agree on. It reads a bit mild, because it had to be common ground and consensual, yet it’s interesting because it stresses the necessity of addressing cyber conflict norms in times of peace.

“1. There exists a need to establish viable norms for state behavior in peacetime regarding cyber security.  Much attention has been dedicated to the threshold for cyber action regarding the Law of Armed Conflict, but less about the steady state beneath armed conflict.

2. Attention should be paid to the development of humanitarian principles for cybersecurity.  For example, are Computer Emergency/Incident Response Teams (CERTs) off limits for targeting?  Can International Committee of the Red Cross principles – neutral, impartial, and independent – be applied to the first responders of cyberspace?

3. International activity on cybersecurity, such as the UN Group of Government Experts on cyber and other multi-national initiatives, should include mechanisms for inclusionary participation and distributed responsibility beyond government and industry.”

Many thanks to our group moderator, Chris Bronk from Rice University, for compiling these remarks. I am pasting the framing notes of the working group below, along with the conference topic for this year. To keep groups small enough to function, the initial working group topic had been separated into “Surveillance and accountability” and “Cyber war and international diplomacy issues”. On the one hand, that clearly made us more effective; on the other hand, I would have loved to finally see a conversation on how cyberwar institutions and legal frameworks feed into the dynamics of governmental surveillance. The « domestic » surveillance conversation is often held separately from the « cyberwar » international one, which obscures how deeply they are linked.

More on this later: if more details of the discussion (under Chatham House rules though) are posted, I’ll share links. In the meantime:

Conference website

The 2013 Cyber Dialogue video gives a clear idea of what the conference feels like:

The Citizen Lab website is filled with great research

Bits of conference wisdom can be found on the Twitter hashtag

Bonus Twitter feed

✎ WORKING GROUP TOPIC: From Surveillance to Cyber War: What are the Limits and Impacts?

Moderators: Jan Kleijssen (Council of Europe) and Chris Bronk (Rice University)

Description: The Snowden revelations have touched off widespread criticism and alarm over government-organized mass surveillance and computer network exploitation and attacks. Yet even liberal democratic governments require well-equipped law enforcement, intelligence, and armed forces to enforce the law and secure themselves from threats abroad. The world can be a nasty place, and we have to live in it. Both mass and targeted surveillance, including computer network exploitation and attacks, are likely going to be a part of that world for the foreseeable future. What are the proper limits and safeguards of lawful intercept? Do we need new forms of oversight and accountability? How do we reconcile the seemingly conflicting missions of agencies charged to protect domestic critical infrastructure from attack while developing ways to compromise networks abroad? Is there an arms race in cyberspace? How do we control it? Can we develop norms to limit global cyber espionage?

CONFERENCE TOPIC: After Snowden, Whither Internet Freedom?

A recent stream of documents leaked by former NSA contractor Edward Snowden has shed light on an otherwise highly secretive world of cyber surveillance. Among the revelations — which include details on mass domestic intercepts and covert efforts to shape and weaken global encryption standards — perhaps the most important for the future of global cyberspace are those concerning the way the U.S. government compelled the secret cooperation of American telecommunications, Internet, and social media companies with signals intelligence programs.

For American citizens, the NSA story has touched off soul-searching discussions about the legality of mass surveillance programs, whether they violate the Fourth and Fifth Amendments of the U.S. Constitution, and whether proper oversight and accountability exist to protect American citizens’ rights. But for the rest of the world, they lay bare an enormous “homefield advantage” enjoyed by the United States — a function of the fact that AT&T, Verizon, Google, Facebook, Twitter, Yahoo!, and many other brand name giants are headquartered in the United States.

Prior to the Snowden revelations, global governance of cyberspace was already at a breaking point. The vast majority of Internet users — now and into the future — are coming from the world’s global South, from regions like Africa, Asia, Latin America, and the Middle East. Of the six billion mobile phones on the planet, four billion of them are already located in the developing world. Notably, many of the fastest rates of connectivity to cyberspace are among the world’s most fragile states and/or autocratic regimes, or in countries where religion plays a major role in public life. Meanwhile, countries like Russia, China, Saudi Arabia, Indonesia, India, and others have been pushing for greater sovereign controls in cyberspace. While a US-led alliance of countries, known as the Freedom Online Coalition, was able to resist these pressures at the Dubai ITU summit and other forums like it, the Snowden revelations will certainly call into question the sincerity of this coalition. Already some world leaders, such as Brazil’s President Rousseff, have argued for a reordering of governance of global cyberspace away from U.S. controls.

For the fourth annual Cyber Dialogue, we are inviting a selected group of participants to address the question, “After Snowden, Whither Internet Freedom?” What are the likely reactions to the Snowden revelations going to be among countries of the global South? How will the Freedom Online Coalition respond? What is the future of the “multi-stakeholder” model of Internet governance? Does the “Internet Freedom” agenda still carry any legitimacy? What do we know about “other NSA’s” out there? What are the likely implications for rights, security, and openness in cyberspace of post-Snowden nationalization efforts, like those of Brazil’s?

As in previous Cyber Dialogues, participants will be drawn from a cross-section of government (including law enforcement, defence, and intelligence), the private sector, and civil society. In order to canvass worldwide reaction to the Snowden revelations, this year’s Cyber Dialogue will include an emphasis on thought leaders from the global South, including Africa, Asia, Latin America, and the Middle East.

⚐ Talk at Columbia: “Augmented Humanity, Drones, Self-Driving Cars, Furbys and Robotic Politics: Freedom and Security in the Robotics Age”


This lunch talk was an introduction to robotics policy issues for SIPA: I bundled the different robotics policy issues into “phases” that also correspond to the chronological evolution of these concerns. For each phase, I chose an object that focused policy attention: the “Furby phase”, the “Self-driving car phase”, the “Drone phase” and the “Transhumanist phase”.

Written notes to be posted soon!

Abstract – As we surround ourselves with robots, autonomous or not, from the ground to the sky, we are facing policy questions we thought pertained to the realm of science-fiction. We build drones for both war and investigative journalism, plan to put self-driving cars on the road and design social robots to care for elders: what are the implications for our freedom and security? What are the social, ethical and policy questions we must address? For instance, how does the current debate about privacy, data collection and surveillance play out in an age in which we surrender more and more of our autonomy to machines?



◓ Cryptivism: Voluntary Botnet Bitcoin Mining Fundraising?


A theoretical question on my mind: has anyone ever tried putting a voluntary botnet to work to mine crypto-currencies for philanthropic fundraising purposes?

Botnets mining bitcoins (botcoins!) is no new idea. Usually, though, these are not voluntary botnets, and they mine for profit, which makes it a criminal activity. See ZeroAccess for instance. Or ESEA, the gaming company that got caught mining behind its users’ backs (“serious gamers like ESEA’s customers made excellent soldiers for a botnet army: Gaming machines have powerful graphical processing units that are pretty good at bitcoin mining”). They got sued for it in the States, which gives us a nice peek into a legal discussion around non-voluntary botnet bitcoin mining.

Bitcoin mining by a voluntary botnet for for-profit purposes also seems to have been tried, but in more or less shady ways. See security researchers Brian Krebs and Xylitol on FeodalCash, which promised to make your computer work in the botnet and give you a share of what has been mined:

“Dear slave masters, check your wallets you should have received your shares now.
We are glad you’re working with us.
Regards, FeodalCash”

If you trust an organization enough that you would join their voluntary botnet, which would be like saying “Hey, I trust you, here is a little bit of my computer power, we can do a lot together”, then theoretically this organization could mine its way through a successful fundraising campaign.

I wonder how many groups have access to voluntary botnets though: botnets that people have willingly joined. I can think of Anonymous’ Low Orbit Ion Cannon (LOIC), but I’m sure there are many smaller initiatives, like Computer Science labs whose students would have formed voluntary botnets (and who could mine for pizza?).

I also wonder how profitable the operation would be. Bitcoin entrepreneur friends suggested that, in the current setting, putting a botnet of one million computers to work on the basis that they would mine when not otherwise used by their owners would bring in about $50,000 a week. Research on botnets mining bitcoins (see this paper for instance) suggests that other sorts of cryptocurrencies would be more profitable to bot-mine. It seems very hard to model returns predictably.
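The back-of-envelope arithmetic behind such estimates is easy to sketch: expected revenue is the botnet’s share of the network’s total hashrate, multiplied by the coins minted per week and the coin’s price. Here is a minimal Python sketch of that calculation; the function name and all parameter values are illustrative assumptions of mine, not measured figures:

```python
def weekly_mining_revenue_usd(
    machines: int,
    hashrate_per_machine: float,  # hashes/second each volunteer contributes
    network_hashrate: float,      # total network hashes/second (excluding the botnet)
    block_reward: float,          # coins minted per block
    blocks_per_week: float,       # ~1008 for Bitcoin's 10-minute block target
    coin_price_usd: float,
) -> float:
    """Expected weekly revenue: the botnet's fractional share of total
    hashpower times the total value of coins minted per week."""
    botnet_hashrate = machines * hashrate_per_machine
    share = botnet_hashrate / (network_hashrate + botnet_hashrate)
    return share * blocks_per_week * block_reward * coin_price_usd
```

The model makes the difficulty visible: the result is dominated by the network hashrate and the coin price, both of which move constantly (and difficulty adjusts as more miners join), which is exactly why returns are so hard to predict.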

There are a couple of other challenges on the NGO’s side: for instance, it is not always easy to accept bitcoin donations. See EFF’s complicated bitcoin donation story. That being said, there is an impressive list of organizations that accept bitcoin donations.

So there are a few challenges along the way, but it would be an interesting case of useful clicktivism (or cryptivism?)…


✏︎ A perspective on cyberwar for the BBC


January 29th, 2014

Tara McKelvey from the BBC, whom I met when she joined us at the Drone Conference last October to moderate a panel on “Life Under Drones”, called me today with a question about cyber threats. Her great article, “Hackers, spies, threats and the US spies’ budget”, is online here.

“Are cyber threats overblown?” is a common question: it is the one we are forced to debate at each passing of a budget or review of threats. The usual narrative goes: the US needs to protect itself against a “cyber Pearl Harbor” or a “cyber 9/11”. A quick Google search will point you to a myriad of articles debating whether these threats are a myth (one I really like here, from Henry Farrell) or a dire priority for us to address. I’m grateful to have been consulted on this point.

My work on cyberwar also operates in a slightly different dimension. Cyber threats are indeed real. Experts are right, therefore, to attempt to evaluate these threats with greater precision, and to budget the corresponding security spending. However, I’m looking at cyberwar from a different angle: not primarily as a threat (be it overblown or imminent) but as an ideological framework that is shaping both our institutional and legal reality and our public debate.


◓ Google & DeepMind: Society, too, must ask ethical questions


Google just bought a new Artificial Intelligence firm, DeepMind. Not a surprising move, but every step Google takes in the Robots / AI direction makes the need to consider the ethical and legal implications of these activities more urgent.

In a nutshell, DeepMind is a Singularity-inspired (see co-founder Shane Legg’s talk at the 2010 Singularity Summit), London-based AI firm. This is reported to be a talent acquisition.

Founded by neuroscientists, DeepMind’s goal is to create computers that can function as human brains. Legg sees this happening within decades – of course, in the process of making intelligent machines, they also wonder about what exactly intelligence is, and how to measure it (see Legg’s paper here).

In AI lingo, this is called “strong AI”. The following short description stresses that strong AI is about replicating human intelligence in general, not solving specific problems (like: how can Google’s search engine give you better ads based on what it already knows about you from your emails?):

Strong AI is a hypothetical artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that could successfully perform any intellectual task that a human being can.  It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as “artificial general intelligence” or as the ability to perform “general intelligent action.” Strong AI is associated with traits such as consciousness, sentience, sapience and self-awareness observed in living beings.

Some references emphasize a distinction between strong AI and “applied AI”: the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.


Why does this matter? DeepMind’s three top talents will join ranks with other brilliant AI inventors at Google, including Singularity pioneer Ray Kurzweil, who joined in 2012 as Engineering Director. Google has a secret lab on campus that deals with moonshot ideas, Google X, which already gave us the self-driving Google car. Google’s Andy Rubin has carte blanche to create a robotics revolution. Google bought a firm that builds robots and that used to be primarily funded by grants from the US military (via DARPA): Boston Dynamics. Regina Dugan, DARPA’s former Director, also joined in 2012. Google also bought a firm that builds cute humanoid robots, Meka. Since their acquisition, their website just says: “We have been acquired by Google and are busy building the robot revolution.” These are a couple of items on a much longer list.

Are all of these people working together to build a giant Skynet-like organization? Probably not. Are these completely unrelated acquisitions from Google executives seeking to invest in tomorrow’s exponential businesses? Even if Google has a track record of having people work in silos, it’s tough to assume that there won’t be synergies in these domains.

Google’s intent with these acquisitions rather seems to mimic DARPA’s core purpose: “To work in vigorous pursuit of [one] mission: making the pivotal early technology investments that ultimately create and prevent decisive surprise.” (April 2013 DARPA letter from the Office of the Director). Except that with Google, it is not about preventing surprises “for U.S. National Security” but for Google’s business. That makes things quite different.

It doesn’t have to go wrong, but the move raises legitimate ethical and legal concerns. This new ecosystem that Google is building, bringing together the best minds in robotics and AI and providing them enough budgetary leeway to make all fantasies come true, more or less behind closed and opaque doors, deserves an open debate.

The most interesting comment on Google’s acquisition of DeepMind is that DeepMind has reportedly asked for an Ethical Board to be set up within Google in order to evaluate how Google could and should work on AI. In 2011, DeepMind’s Shane Legg was already evaluating the “current level of awareness of possible risks from AI” as “too low”. He warned: “it could well be a double edged sword: by the time the mainstream research community starts to worry about this issue, we might be risking some kind of arms race if large companies and/or governments start to secretly panic. That would likely be bad.”

It is good news that Google is taking steps to set up an Ethical Board to think about these questions, but society should also take the hint. Google’s behavior, its leaders’ declarations and its recent acquisitions tell us: it is time for society, too, to ramp up its ethical and legal thinking on these questions.


* Jan. 29th update – More in that direction: The Verge reported that as Google was selling Motorola to Lenovo, its “Advanced Technology and Projects” division of about 100 people, led by Dugan, will remain at Google and join the Android teams.

Photo credit –…


◐ Robots Conference @ Columbia University Saltzman Institute for War and Peace Studies


On Dec. 9th, I gave a talk at Columbia’s School of International and Public Affairs (SIPA) on Freedom and Security in the Robotics Age.

In this intro to Robotics Policy issues, I presented four ‘phases’ of public policy concerns in robotics, each illustrated by an object that embodied that wave of concerns: first Furbys, then Self-Driving Cars, then Drones and finally… Cyborgs. My talk was moderated by Captain Shawn Lonergan (US Army); below are links to the event and an abstract, but I should be able to publish a write-up of the intervention soon.

Abstract – As we surround ourselves with robots, autonomous or not, from the ground to the sky, we are facing policy questions we thought pertained to the realm of science-fiction. We build drones for both war and investigative journalism, plan to put self-driving cars on the road and design social robots to care for elders: what are the implications for our freedom and security? What are the social, ethical and policy questions we must address? For instance, how does the current debate about privacy, data collection and surveillance play out in an age in which we surrender more and more of our autonomy to machines?

Links –…

◒ A Reality Check on Cyberspace: Punks, War and Ideologies


Commentary invited by editors of Scientific American

What Is War in the Digital Realm? A Reality Check on the Meaning of “Cyberspace”

By Camille François | November 26, 2013

Credit: Wikimedia Commons/NASA

Cyber is everywhere: in political speeches, in newspapers, at dinner conversations. There’s cyberwar and cybersex and cybercafés (they still exist, I promise), and there’s the U.S. Cyber Command. Once in a while, there is a new surge of articles arguing that the word “cyber” is vague and dated, and that we should just get rid of it in favor of more precise terminology.

That is wishful thinking: we might lack clear definitions of the cyber prefix, but for whatever reason cyber seems here to stay, which is why we should take a moment to explore what meanings and ideologies we have been infusing into this word, to better inform our debates about technology.

Cyber’s most popular namechild is certainly cyberspace (always cited and never defined), and it has been with us for more than 30 years. It’s time for a short review of its origins, its many variations and what’s hiding behind the term.

Cyberspace was a term brought to us by literature, and its trajectory traveled through poetry, academic analysis, politics and ideologies. It is now pervasively used by anyone who wishes to discuss security and democracy in a networked society. The stakes are crucially important. Using vague, misunderstood and meaningless language tools to articulate these debates hinders our ability to think critically about technology, something we can’t afford when we should be having informed debates about our expectations on surveillance, privacy or freedom of speech.

Origins in Cyberpunk
Given the lack of clear definition, it’s not surprising that the Wikipedia entry on cyberspace offers a very abstract explanation: that cyberspace is “the idea of interconnectedness of human beings through computers and telecommunications, without regard to physical geography.”

“Cyberspace” was popularized by novelist William Gibson, father of the literary genre known as cyberpunk. He didn’t mean to forge a political concept, though, and he later noted that his word was “evocative and essentially meaningless.”

In his 1982 short story Burning Chrome, the word cyberspace makes its first appearance as the name of a machine: the “workaday Ono-Sendai VII, the ‘Cyberspace Seven.’”

In his 1984 novel Neuromancer, it becomes more than a computer’s pet name and is described in more conceptual terms:

“Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts…. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data.”

Read it one more time. This sentence has all the seeds of topics still being discussed today; it holds all the complexity of how cyberspace could be interpreted: Is it a thing, handled by its operators? Is it a space, an abstraction uniting minds across all nations? Is it a place, organized in clusters, or is it a political ideology, these hallucinations that take strength in consensus? Let’s explore all of the above.

Cyberspace is not a thing
You can’t “fix” cyberspace, and it doesn’t sound right to talk about The Cyberspace – cyberspace is not a thing. You can’t really use the term “cyberspace” to replace “Internet”: the first is more abstract than the technology described by the second. And since we’re here, “Internet” is not a thing either: it’s a set of protocols, a technology that enables computers to talk to each other.

Could cyberspace be a “space”?
What it could be, though, is a metaphorical space emerging from the technology. “Cyberspace” could describe the abstract space in which the conversations of people using the Internet happen. It could be the name of the theoretical online salon, the public square that one can access in a couple of clicks, even though that opens questions like: whose space is it, who rules it, who is excluded from it, and are all the people who think they are in cyberspace truly in the same salon? This understanding is a fun conceptual alley to explore, and its road is paved with great academic research.

Could cyberspace be a “place”?
Now, what’s the difference between a space and a place? A space is much more of an abstract and moving concept than a place; a place is more structured, has rules, people, frontiers. A place is closer to the idea of a territory.

Europe can be analyzed as a space—its people share some sort of common history and principles, but when its frontiers and ideology are discussed, they evolve with various political projects. Tracing its borders, defining its rulers, declaring its principles, institutionalizing power in it, and making it a territory (the European Union) becomes a political act.

Calling a space a “place” is making a political statement; it imprints an ideology on it. This is why “cyberwar” is an ideological turn.

Can Cyberspace be at war?
The cyberwar rhetoric turns the abstraction of cyberspace into a new zone of combat—and aligns it with land, sea, air and space. Most of the definition problems around cybersecurity and cyberwar have to do with their first five letters: if you can’t define cyber, what are you going to secure? What are you declaring war on?

In 1996, a RAND report, The Advent of Netwar, explained that we must protect “The Net”, and that for such a task offense would be our best defense. These elements are deep at the core of the cyberwar rationale.

Cyberwar is the political ideology that proposes new principles for the space, new actors to rule it. Cyberwar is an ideology that hides behind the discourse of reality: there are, indeed, very real cyber-attacks, and there are security concerns for critical infrastructures connected to the network, but what does it mean to declare war on cyber? Cyberwar paints a metaphorical space as the subject of threats; it depicts cyberspace as a proper place in which power has to be deployed and conquered.

Cyberspace as an ideology
It may seem that “space” versus “place” is a minor distinction. But this small change in perspective marks an ideological turn. “Cyberspace”, by that standard, could also be seen as the first ideology to take the network society as a battleground.

By the mid-1990s, the word “cyberspace” had transformed from a vague poetical and literary concept into a concrete political utopia. John Perry Barlow’s Declaration of Independence of Cyberspace (1996) captures the significance of this shift. As Barlow writes: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

In this declaration, “placehood” is an aspiration made clear. Cyberspace is a “home” for the mind, where some will “gather.” Cyberspace is an alternative to state power, a place to build to escape the authorities and rules of the state. It is a political utopia, both etymologically (οὐ-τόπος, “no-place” in Greek) and philosophically (“an imagined place in which everything is perfect”, writes the Oxford English Dictionary). Six years before that text, in 1990, Barlow had co-founded the Electronic Frontier Foundation, the first advocacy group for digital rights, which outlined this framework by calling itself the “first line of defense” of cyberspace’s “frontier”: very geographical vocabulary.

Cyberspace vs. cyberwar
Today, making cyberspace a harbor free of states’ influence seems like a lost battle. The unfolding debate on state surveillance, Internet censorship and the many other manifestations of state power exercising sovereignty over the network make that very idea sound foolish, or outdated.

It wasn’t back then. How did it look at the time? What was the Internet in 1996, and what did the space for conversation it enabled look like?

In its early days, the Internet created a forum for like-minded intellectuals across privileged parts of the wired world, an infrastructure mainly used to share academic material. When developed in 1989, the World Wide Web (a specific application of Internet protocols enabling people to view and navigate pages in a browser) was the solution Tim Berners-Lee envisioned for sharing CERN’s research papers to strengthen academic collaboration with other institutes. Publishing this research on paper had proved very expensive because it needed to be constantly updated. If we take the development and adoption of TCP/IP as a landmark for the birth of the Internet infrastructure, the 1996 Internet was a 14-year-old teenager. The Web was a six-year-old child.

At this time, there was not yet anything critical to steal or protect on the Internet. No real-world political battle was fought there yet—cyberspace, as a political project, still stood a chance. There was limited incentive for states to truly deploy power there, even if the intent had always been considered.

In March 1995, Time Magazine’s cover story Welcome To Cyberspace describes the new trends of the Net, at risk of “turning into a shopping mall”, but still concludes: “At this point, however, cyberspace is less about commerce than about community. The technology has unleashed a great rush of direct, person-to-person communications, organized not in the top-down, one-to-many structure of traditional media but in a many-to-many model that may – just may – be a vehicle for revolutionary change. In a world already too divided against itself – rich against poor, producer against consumer – cyberspace offers the nearest thing to a level playing field.”

This is not what the Internet looks like today. It changed a lot with its growth and democratization. There is plenty to steal and plenty to protect. People’s credit-card numbers, terrorists’ emails, nuclear plant and air-traffic control systems: they are all connected to the Internet. And if you look at the ways in which states use the Internet for political advantage – as a tool of espionage, as a way of winning hearts and minds, or as a tool of war against other states – it becomes clear that cyberspace has been unable to realize itself as a bastion against state encroachment.

That, of course, is truly disappointing for those who hoped the technology would provide a safe harbor from state power. Yet, as history has taught us many times, we must set our principles against the ideologies that rise against what we believe in. If cyberspace is colonized by war, there is one essential question: what does cyberpeace look like?


◐ View original post on Scientific American’s website here:

◓ MozFest Keynote – Freedom in our Information Society


On Oct. 26th, I gave a keynote at the 2013 edition of Mozilla Festival, in London – Mark Surman, Mozilla’s wonderful Executive Director, had invited me to talk about Privacy.

The video is available here (and there on YouTube, but the original link is always better :’)), and my preparatory notes for the keynote are below:



It’s always great to hear Internet pioneers talk about the web they initially built, and how it evolved. I can’t do that. I’m not an Internet pioneer. I’m more a part of the generation that has been labelled “Digital Natives”. It’s an odd situation to be in, notably because two things are often said about digital natives:

1 – It’s a generation of people who grew up with the Web – the assumption that comes with this is: they must know how it works.

2 – For them, the Web is some sort of very stable environment – it has always been there for them, and they will always be able to count on it. For younger people today it’s even more so – the Web now relates to their first girlfriend, and they’ve probably never done homework without the Web.

So it is true that I am part of one of the first generations to truly live in the Web. I think of the amount of personal information that has transited there – all my emails, my searches, and all of the things I rely on the Web for.

The rest though, the two assumptions, is bullshit.

The whole “Digital Natives must know how it works” thing is an illusion – the Web comes more and more pre-packaged, and people see fewer and fewer incentives to look inside the box. That’s not good. That’s the path towards being increasingly dependent on mechanisms we don’t understand. That’s not the path towards freedom. That’s one of the many reasons why I admire the Mozillians’ constant efforts to open the box and show how the Web works, to enable people to keep on building it.

The whole “The Web has and will always be there for you as it is” thing doesn’t make much sense either. It changes too much. It evolves too much. For most people, today’s Web has little in common with what Tim Berners-Lee built in the 1990s, and even little in common with what it looked like 5 years ago – it’s what Anil Dash was describing this morning with the idea of “The Web we lost”.

To talk about this Web that has been lost, I would like us to read some 1996 political poetry. In 1996, John Perry Barlow, lyricist for the Grateful Dead and co-founder of the EFF, wrote “The Declaration of Independence of Cyberspace”. It’s a beautiful text opposing the users to the States and to Industry’s giants, and it goes: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” And then Barlow goes: “I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

How does this text read today? The least we can say is “optimistic”, “hopeful”, and “not exactly accurate”. Today, “Cyberspace” looks more like a power struggle, one that very much involves both the States and the Industry. Somehow, we can understand why, as it’s also not the same “Cyberspace”. Today on the Web there is more to protect, more to steal, more to sell – more at stake.


Sometimes, we wish we could get a glimpse at the future to see how that power struggle is going to play out. A couple months ago, in June, when Edward Snowden leaked the NSA documents, we got better than a glimpse at the future. We got a glimpse at the present. So what did we see?

We saw a Web in which power was very, very centralized. We saw that people are handing more and more of their online lives to fewer and fewer actors.

To illustrate that, let’s play a little mind game together. If we manage to make a sentence that refers to only three actors, yet dramatically affects billions of citizens and creates a global crisis, then we can say we have an over-centralization problem. So how about: “The NSA has been collecting and storing quite a lot of data from Google and Facebook”?

The other thing we saw is that the legal framework supposed to set democratic boundaries on the collection and analysis of all this data – for instance, the law that obliged the US Government to justify the means it employed in that domain – was formulated in very broad terms and was secretly interpreted. That seems to have created quite a bit of confusion.

Of course, we don’t see a malicious intent behind these programs. But is this a reason not to worry? Of course not. Intent has never been the democratic criterion – we don’t judge by intent only, we care about the means.

Our plan cannot be: “We will technically, legally and politically allow the creation of a massive, stored, durable and searchable database of everything all these people have ever said on the Web – and then hope no one comes to use this weapon against our freedoms.”

Instead of malicious intents and scary political projects, we see confusion. Confusion gives us this important opportunity to have a dialogue about freedom in the information society. There are many signs of the confusion, in the US and internationally, and I could start with two of them:

When the leaks broke in June 2013 and the NSA programs began being discussed in the press, a couple of the US Senators who had voted for the laws providing legal legitimacy for these programs stood up and said: “I never voted for such a system!”. That’s worrisome. That speaks to their inability – and truly, our inability as a society – to consider the whole puzzle. As we say here: these people need a view source on democracy.

What has been revealed about Tor is also an interesting example of that confusion. Tor is software that people can use to anonymize their traffic on the Internet. For that reason, it is used by many peace activists within dictatorships that monitor and censor Internet use – this is the main reason why 60% of Tor’s funding comes from the US Department of State. Yet, as we have learned from the Guardian, the NSA circulated presentations nicely titled “Tor Stinks”, and spends a lot of effort trying to find vulnerabilities to break into that system. (And then complains about Mozilla effortlessly fixing, through software updates, the vulnerabilities it found. Fun story.) This is confusing.


So, where does all of that put us? At a crossroads.

We got a glimpse of the present and it is our duty to say: “That does not look like the future we want!”. This does not look like a Net respectful of the principles we hold as sacred.

Often people say that democracy is like plumbing: you only care about plumbing when there are bad smells. Democracy in our information society doesn’t smell so good right now, and we need some plumbing.

Mozillians, we are the great and heroic plumbers of the Information Society (and the architects, and the painters, and the evangelists, and the priests, if plumber sounds bad to you) – what I mean is that this community is in good shape to solve these problems, well equipped with our knowledge, our motivation and our values.

Which problems? We talk about Privacy, but truly, it’s about freedom, about democracy. What’s this thing we call “Privacy”?

Privacy is a fundamental right. It is in many constitutions, in the Universal Declaration of Human Rights of 1948, and in the European Convention on Human Rights. We, citizens of the 21st century, inherited the idea that humanity needed to hold privacy as sacred for peace to be ensured. Why, how, when, are all fascinating questions that I encourage us to pursue.

But today, in 2013, in the Information Society, privacy gains a new kind of importance. Privacy is what enables a lot of other fundamental freedoms: in the Information Society, there is no Freedom of Press, no Freedom of Association, of assembly, no freedom to protest, without Privacy.

It’s tough to think about Privacy; it seems very abstract.


I like Eben Moglen’s definition of Privacy because it makes the stakes more clear. Moglen says: “There are three components to privacy”.

It’s secrecy, anonymity, autonomy. And when we consider each of them individually, we understand better why we care, and better what we fear.

Secrecy is simply the ability for two people to communicate without others being party to their communications.
That’s part of what creeps us out (and Angela Merkel) when we hear about the NSA programs and we say: “Stop watching us!”.

Anonymity is the ability to communicate without the source or the recipient being known. It’s not abstract at all for journalists and activists, who say they can’t do their work without such a possibility.

Autonomy is the ability to control who knows what about you and holds data about you. This morning, someone said to me: “I am alarmed when I think someone has all this data on me, they might know more about myself than I do, I don’t want anyone to use this data to change my behavior, and it creeps me out to think they might.” That’s an autonomy issue. Autonomy is also what ensures free will and self determination in our societies.

So these are the pipes we need to fix.


Fixing the pipes takes a constructive debate, and also takes many people taking small steps in the right direction. All these steps, we are taking them together by engaging in at least three types of action. We build technological solutions. These solutions can be plugins that reveal what’s in the box and teach us about what is going on, they can be encryption tools made easier, they can be anything.

Technological fixes preserve our own freedoms, and those of the people who need them the most – the journalists, the activists, for instance. This is why they are so crucially important. This is also why they are not sufficient: they would leave behind all those who lack the incentive or knowledge to use them to protect themselves.

To achieve “freedom for all” rather than “freedom for those who care”, we will need to engage in the policy dialogue. In doing so, we should oppose the idea that we are facing “new technologies” bringing “new problems” upon this world, and feel empowered by the old wisdoms and age-old principles of the peace builders to whom we owe the freedoms we now stand to protect. An example: in the past, people and their Governments have successfully stood up to say “It’s not because the States and the Industry have new technical abilities that the armies of this world should weaponize them” – “because we can” is not a sufficient rationale for authorizing Governments to engage the full powers of these technologies. That is somewhat how we achieved nuclear non-proliferation. Phrasing what peace feels like, and designing a framework enabling it, is the debate we must engage with – again.

The last piece I want to talk about today is a crucially important one – it’s education. It’s our duty to explain and frame all these issues for the generations to come. This morning, someone told me: “I have a nine-year-old using Facebook and Gmail. What do I tell her so she understands the stakes behind what she does on a daily basis?” Providing answers to that question is fundamental.

People rightly fear that it will be complicated. Privacy is technical, legal, political, and it operates on so many scales: it’s an everyday-life matter, it’s a State matter, it’s even an international matter! But framing, teaching and explaining these types of challenges are things we have proven able to tackle!

Today, very few nine-year-olds can talk about the international legal framework for sustainable development.

But almost all of them know about pollution. They know “it’s bad”, and they know some personal fixes: maybe they even recycle. We can achieve the same understanding of how freedom needs to be protected in the information society, abstract as it is – we can achieve it because it is crucially important.

Let’s make some plumbing, let’s build and teach the web we want, let’s build Cyberpeace. I was part of the first generation to live in the Web and I would like to be part of the first generation to see it at peace. 

◒ Moving Blogs


Dear friends, I recognize that I’m terrible at blogging and have decided to make an effort to at least gather the updates about what I am doing at the Berkman in the same place. Welcome to the new blog! I will move stuff from there to here, and keep on writing here as things come.

PS: Not ready to do a Twitter effort yet, but that may come later (or never).
