
The Path Towards Centralization of Internet Governance Under the UN

PART 2 OF A 3 PART SERIES
Essay by Anonymous

This essay is the second of a three-part series (1,3). It focuses on the steps of a possible roadmap for centralizing Internet governance under the UN.

As presented in the first essay, the course of Internet governance may be following the same incremental steps that international strategists follow when wishing to establish a permanent body with authority to deal with a given area. The steps as applied to recent moves for Internet governance under a UN umbrella are detailed below.

1. Launch Study

Normally friction comes as in-the-know people with interests (“stakeholders”) weigh in on policy decisions at the national level; then there is the added chafing that comes when different countries’ governments come together to try to hash out policies. But bring those in-the-know stakeholders from different countries together at the international level, and the possibilities for opinion clashes are endless.

Hence, it was predictable that the first phase of the World Summit on the Information Society (WSIS) would unleash cacophony in convening thousands of people with varied interests in Geneva (December 2003). Anticipating the mêlée, a negotiating group on Internet governance drafted language for the WSIS Declaration of Principles and Plan of Action. They proposed a Working Group on Internet Governance (WGIG) to prepare for the second phase of WSIS, to be held in Tunis (November 2005). Faced with hordes of people clamoring for attention to their myriad concerns, what government decision-maker could disagree with such a proposal?

The WGIG was thus created with a mandate to define (a) what should be conceived of as Internet governance, (b) what the public policy issues were that were relevant to this area, and (c) what the respective roles and responsibilities of stakeholders should be.

As any seasoned international negotiator knows, setting up a study group is the first step toward agreeing on new rules. Anyone not wanting new rules lost this battle.

2. Be Inclusive

In convening the WGIG, organizers conducted open consultations so as to enable any interested participant to contribute, with these inputs then feeding into work by an expert group. The process went far in enabling transparency and public participation.

If the ultimate goal was to establish a permanent UN agency to deal with Internet governance, the inclusive mode was brilliant. By bringing NGOs, businesses, and academic institutions into a setting normally reserved for governments, organizers could elevate these non-state actors and dilute the power of governments – especially those governments wielding outsized influence over Internet policy. Psychologically, non-state actors were on par with governments in this international process.

In the meantime, the WGIG process was setting the precedent for what would be accepted as fulfilling the call in the Declaration of Principles that “the international management of the Internet should be multilateral, transparent and democratic, with the full involvement of governments, the private sector, civil society and international organizations…” In particular, the open participation would cause some people to view the centralized, UN process as affording direct representation and thereby being more democratic than the traditional mode of governments representing their publics at the international level. Of course, others might contend that stakeholders participating in WGIG consultations did not represent the public, but by and large people with this critical view stayed away – meaning that for the most part the voices present were those feeling empowered by the new mode.

3. Avoid Conclusions

For a negotiator wanting a study process to turn into something more prolonged and eventually permanent, an important way to avoid ringing alarm bells is to go slowly. After all, an initial study that produces revolutionary recommendations is sure to draw attention and attract opposition. Findings that are inconclusive are much more palatable and do not appear to be biased toward any particular agenda.

One way to avoid producing substantial findings is to limit the amount of time spent developing them. Perhaps this is why roughly half of the time allotted for WGIG work was “wasted” with the procedural matter of composing an expert group that would be politically acceptable.

Another technique is to put forward a mixed bag of options that satisfies nobody.

In the case of WGIG, the final report accomplished this beautifully, particularly the four models proposed for “Global Public Policy and Oversight”.

4. Receive Commission

The inconclusive WGIG report left it to government negotiators preparing for the second phase of WSIS to haggle over a path forward. With roughly 19,000 people traveling to Tunis to champion a multitude of concerns at that meeting, governments had to produce some form of agreed text so as to stay in a position of leadership. After last-minute horse trading, governments endorsed the Tunis Agenda for the Information Society.

In this document, governments extracted elements of the WGIG report and agreed to two notable Internet governance processes: First, they agreed to have the UN set up an Internet Governance Forum (IGF), mandating this forum, among other matters, to: “[d]iscuss public policy issues related to key elements of Internet Governance…”; “[i]dentify emerging issues…”; “[d]iscuss, inter alia, issues relating to critical Internet resources”; and “[h]elp to find solutions to the issues arising from the use and misuse of the Internet…” Second, they agreed for the UN to lead a process of “enhanced cooperation” whereby the UN would promote improved ties among entities dealing with international Net policy. The same officials who had chaired the negotiating group that proposed WGIG and who had led WGIG would now continue to shepherd these processes.

5. Stay Mainstream

If the end goal is to establish a permanent, centralized body for Internet governance under the UN, a key strategy at this stage is to show demand for central coordination and to demonstrate this capability in a mild way, as a facilitator.

In the forum process, the key is thus to focus on non-contentious issues that stakeholders say require international cooperation. The IGF has done so first by convening consultations and receiving written inputs to hear what stakeholders’ concerns are, and then by selecting problems nobody can deny (e.g., spam and online child pornography) for concerted efforts. The IGF then facilitates dialogue by offering a forum where stakeholders in a position to tackle issues collectively can meet.

In addition to starting with non-contentious issues, it makes sense to categorize work under banners that nobody can disagree with politically, such as the themes of IGF meetings in Athens (2006) and Rio de Janeiro (2007): “Access”, “Diversity”, “Openness”, “Security”, and “Critical Internet Resources”. Such banners help to avoid conflict concerning values (e.g., human rights) and allay qualms on the part of governments that fear the process will force them to make changes. After all, at this point governments are still the decision-makers in the international system.

It is also strategic to continue with the arrangement that dilutes the power of those who stand to lose, i.e. governments. As long as non-state actors are present, they will issue calls for equal air time and wish to be on the same footing as governments. All that the organizers need to do is to create space for the non-governmental stakeholders.

(Of course, sometimes these stakeholders can stand a little prodding. By way of illustration, the first “Dynamic Coalition” on its surface seemed to spring up spontaneously as a result of a multi-stakeholder panel on privacy at the first IGF meeting in 2006. While this and later Dynamic Coalitions are billed as something that stakeholders came up with, in fact it was an official from a government that favors a restructuring of Internet governance who planted the idea. These Dynamic Coalitions in turn have become the recognized vehicles for civil society input to IGF meetings. In their meetings between IGF meetings, they are helping to give institutional form to the IGF.)

Turning to the process of “enhanced cooperation,” it is strategically important to take a somewhat passive approach initially, since being too proactive could alert people to a trend toward institutionalization and backfire. Instead, what is required at this juncture is to appear nonchalant. By inviting agencies involved in Internet governance to initiate their own cooperation and to report on it, the process leaders can avoid criticism for trying to promote centralization. Most likely, these agencies will prove reluctant to respond, for they are not subject to this enhanced cooperation process and do not wish to create the impression that they are. They have no real way to object, however, for an organized, collective response can be converted into a sign of success in the process’ ability to promote cooperation; meanwhile, simply remaining silent does not alert others to the inappropriateness of the enhanced cooperation process’ telling them what to do. They are stymied.

6. Celebrate Harmony

It is foreseeable that the IGF will use its five-year mandate not to establish any sort of ranking of concerns, but rather to show that there is a mish-mash of concerns, and that so far the process has shown constructive engagement among all stakeholders.

Stakeholder groups will still be responding to the call for participation and coming together to clamor for attention. Their own processes will have become more streamlined and formalized. Not only will they help to give the impression that there is demand for a permanent body in which people can talk, but also they will serve to show that the IGF has materialized into a veritable structure. As suggested above, the meetings-between-meetings by self-organized, multi-stakeholder groups like Dynamic Coalitions represent a sort of institutionalization of the IGF.

Even voices of dissent can be translated into expressions of support, cited as evidence that the process is inclusive. Once the IGF has demonstrated its capacity to serve as an inclusive forum where all views may be expressed and heard, who will be able to argue against it? Making the case for a permanent body just got easier.

7. Salvage Cooperation

With reference to the process of enhanced cooperation, the agencies involved in Internet policy are unlikely to fall in line with the dictates of a process that has no authority over them. Hence, they will appear recalcitrant. It will then be logical and acceptable for the light-handed coordinators of the process to assume more pro-active positions.

8. Become Established

Under the bicycle theory of negotiations (keep moving forward or fall over), certain government delegates advocate keeping the momentum going and establishing a permanent body to continue the forum’s process of dialogue in an institutionalized fashion. (They may do so, for example, at a follow-up, WSIS-style meeting.) As for enhanced cooperation, the pro-centralization group may call attention to the fact that agencies dealing with the Internet have been slow to exchange information, suggesting that decision-makers should give central organizers a greater role in prompting cooperation. If made permanent, the scope of such mandates will likely expand as the Internet becomes more and more pervasive in the information society.

9. Expand Authority

Once established as a permanent body to host discussions and coordinate agencies, the institution can begin to exercise more authority. This movement appears natural as the institution becomes indispensable in serving as a central point for information exchange. Early efforts will entail convening meetings, coordinating among agencies, and conducting consultations.

Having set a precedent for exercising authority, the body takes on additional functions incrementally. In this way, the institution will morph from being a place for information exchange, to being a place where decisions of consequence are taken. Leaders may justify this expansion of functions as fitting under the body’s coordination role: after all, it is only natural that coordination would begin to affect the actual workings of Internet policy. To strengthen their position, these leaders can highlight improvements in agencies’ operations due to the cooperation.

After a period, organizers may see to it that proposals are put forward to expand the institution’s official authority and to formalize what it has been practicing. So, for example, instead of just discussing public policy issues related to Internet Governance, members can set policy at meetings hosted by the institution; this would be a logical juncture to add other competencies as well, such as the management of critical Internet resources and dispute resolution.

TO FOLLOW
The last essay in this three-part series will discuss reasons for concern and suggest that participation in the process may nonetheless be the best way forward given those reservations.

The Path Towards Centralization of Internet Governance Under the UN

PART 1 OF A 3 PART SERIES
Essay by Anonymous

This essay is the first of a three-part series (2, 3). It focuses on the steps of a possible roadmap for centralizing Internet governance under the UN.

INTRODUCTION

As part of the Tunis Agenda for the Information Society that resulted from the United Nations (UN) World Summit on the Information Society (WSIS), governments agreed to set in motion an Internet Governance Forum (IGF), mandating it, among other tasks, to: “Discuss public policy issues related to key elements of Internet Governance…”; “Identify emerging issues…”; “Discuss, inter alia, issues relating to critical Internet resources”; and “Help to find solutions to the issues arising from the use and misuse of the Internet…”

People familiar with this version of international Internet governance* primarily fall into two camps: gung-ho-ers and nay-sayers. There is a third group as well, who share some characteristics of both camps.

In the gung-ho group are people who are excited about the prospect of Internet public policy taking place under the United Nations (UN) umbrella. This group includes: many individuals from non-governmental organizations (NGOs) who enjoy having their voices heard in international discussions on Internet governance; some government officials who embrace the idea of shared control over the Internet; and assorted academics who see a new field of study emerging and relish being at the forefront.

In the nay-sayer camp are people who believe that there is much hype, but little substance, in the talk of international Internet governance. This group includes: various technologists who see the distributed approach to Internet control as natural and who shun restrictive regulation; some government officials who believe a single government can and should go it alone; select academics who see governments as still operating quite independently when it comes to steering the Internet; and many business people who view the whole discussion as a lot of hot air with little chance for substantive impact.

There is a third camp who see the UN process as pointing to UN control over the Internet and do not accept the legitimacy of this campaign. This group views the UN’s treatment of Internet governance as falsely lending the appearance of being ad hoc and auspiciously adaptive; to this group the activity seems more akin to an orchestrated, top-down plan that amounts to a roadmap for UN takeover of Internet governance. Whether this group is supportive of centralized Internet governance is not the point here – rather, the issue is that they disagree with the process because they see it as a sham.

This series of essays is written from the point of view of a person in the third camp. The essays tell how, despite its semblance of spontaneity, the UN’s Internet governance activity actually bears the markings of a well-mapped out plan: a plan for establishing a permanent, international body to oversee global Internet policy – in other words, centralized Internet governance.

The compilation suggests that the UN course so far can be seen as pursuing the same incremental steps that international strategists follow when wishing to establish a permanent body with authority to deal with a given area. The box below outlines these steps.

Steps for establishing a permanent, international body with authority

1. Launch Study – Suggest the creation of a study group to figure out how best to treat issues; this group should report back after a set time.

2. Be Inclusive – Open discussion in a way that elevates likely supporters and dilutes the power of those who stand to lose.

3. Avoid Conclusions – To seem innocuous and disarm those fearing change, limit results with an inconclusive final report; make it clear that more work is necessary.

4. Receive Commission – Set in motion processes to (a) facilitate dialogue, and (b) promote information exchange (“cooperation”) among relevant agencies.

5. Stay Mainstream – Initiate work on non-contentious issues. Be perceived as a facilitator responding to demand, not a driver pushing centralization.

6. Celebrate Harmony – At the end of the time period set for discussion, call attention to achievements in bringing groups together and navigating through rough terrain; show how the process has materialized.

7. Salvage Cooperation – Use the lack of response in the cooperation process to draw attention to the need for leadership.

8. Become Established – Watch collaborators in a decision-making group successfully advocate keeping the momentum going by establishing a permanent body to continue the process in an institutionalized fashion.

9. Expand Authority – Assume additional functions to expand authority over time.

TO FOLLOW

The second essay in this series describes the application of these steps in the context of UN Internet governance as some would have it. The third essay highlights some reasons for concern and suggests that participation in the process may nonetheless be the best way forward given those reservations.


* The term “Internet governance” deserves an explanation. Because the UN’s Working Group on Internet Governance defined the term for the purposes of UN discussions, and because the UN’s moves in this area are a main concern here, this paper uses their wide definition: “Internet governance is the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.” This broad definition is interesting because it means that UN work on Internet governance is not limited to Internet addressing and routing (e.g., matters handled by the Internet Corporation for Assigned Names and Numbers, or ICANN) or matters relating strictly to the seven layers of the Internet’s infrastructure (dealt with by groups like the Internet Engineering Task Force, or IETF); rather, the definition allows UN work on Internet governance to expand outward and encapsulate anything relating to the evolution or use of the Internet.

FORWARD WITH FIBER: An Infrastructure Investment Plan for the New Administration

Essay by Doc Searls

In 1803, Thomas Jefferson presided over the country’s first economic stimulus package: the Louisiana Purchase. For a sum of $23 million and change, the U.S. doubled its territory and became a world power.

Wouldn’t it be cool to do a deal like that today?

We can, through infrastructure investment — not just in roads, electrical service, and water systems, but in fiber-optic connections to nearly every home and office. Nothing could do more for the economy while costing less.

The future is digital and connected. It follows that maximizing connectivity and network capacity will also maximize economic growth.

We can’t see the potential for that growth as long as we’re blinded by phone and cable company offerings, which treat the Internet as the third act in a “triple play.” We can’t see it as long as we look to Congress and the FCC to protect what little Internet we have, rather than to ditch the regulatory harness called telecom and open the digital frontier.

We can’t see the digital light as long as the future remains darkened by analog’s long shadow. Not surprisingly, that shadow is darkest over telephony and television.

Even though most home phones are now digital, we still “dial” to connect and get billed by the minute. And while analog cell phones are gone, even “smart” digital phones are locked up by phone companies and their phone-making partners.

Next February all over-the-air television broadcasting in the U.S. will go digital, matching cable and satellite TV distribution systems that have been digital for years. Yet we still watch “programs” on “channels,” just like we started doing in 1950.

But dawn is breaking.

On the telephony front, Apple’s iPhone has become a highly generative platform for countless purposes. In just three months, the number of applications has grown from a handful to more than five thousand. Though the iPhone is still a proprietary platform, it demonstrates how telephony is one among an infinitude of useful purposes, all facilitated by a digital device that can slip into a pocket or a purse.

On the television front, couch potato farming is being marginalized by more active forms of video engagement. Millions of consumers are now also producers, creating and distributing files watched by millions of other users on their laptops, iPhones, Blackberries and flat screens.

And on both fronts, new devices based on open source technologies demonstrate how easy it is to scaffold and build innovative new products and services that make money and expand the scope of civilization.

All of this is happening on the vast digital matrix we call the Internet.

The “backbones” of that matrix are fiber-optic connections. For a lucky few million U.S. households, that fiber matrix extends to them as well.

Our apartment near Boston is served by a strand of fiber from Verizon FiOS. That fiber uses a technology called GPON (Gigabit-capable Passive Optical Network), which can provide throughputs of up to 2.5 Gb/s downstream and 1.2 Gb/s upstream. While I am pleased with the 20 Mb/s symmetrical Internet service I’m getting for about $60/month, I’m also aware that this is a tiny fraction (roughly one percent) of my actual bandwidth, most of which is cordoned off for television, which our family rarely watches.

How many other uses can that connection support? Think about the business possibilities here. Think production, not just consumption.

“Triple play” (telephone, TV, Internet) is a legacy offering made worse by usage restrictions and prohibitive pricing for “business” uses — a captive-market shakedown racket that was modeled by Ma Bell and that prevents far more business than it enables.

It’s time for the carriers to start thinking outside their old monopoly boxes. It’s time to wake up and smell the capacity — especially when they’re the ones brewing it.

Amazon and Google provide huge clues to the possibilities here. In The Big Switch, Nicholas Carr calls Google Apps and Amazon Web Services examples of “utility computing.” Why shouldn’t phone and cable utilities get into that game too? These carriers have huge advantages over Google and Amazon: reduced latency over short-haul connections, local offices to fill with equipment and customer support, skilled installers and maintenance personnel, and existing relationships with millions of customers. They also have the luxury of choice: they can compete with these “cloud” companies, or partner with them.

They could also partner with their own customers, among which are the very people who are revolutionizing the entertainment industry by producing their own music and movies. High-quality audio production got cheap years ago, and already flat-screen owners are discovering that the best video they can watch comes from their own HD camcorders. But the most interesting demand is coming from producers of cinema-quality video (what we call “movies”), who are making use of cheap top-quality shooting and production gear. Check out what’s being done with the Red camera and the RedUser community. This virtual Hollywood is distributed all over the world. Its choke points right now are in that last mile, and in the need for low-latency render farms, among other offsite services. Who is in the best position to help these creators out?

The leverage we’re talking about is incalculably high because the sum capacity of fiber has oceanic dimensions. One optical fiber is the width of a human hair. A typical fiber trunk fits an 864-fiber cable inside a 1.5-inch conduit. With dense wavelength-division multiplexing, each fiber can carry 10 gigabits of data per second on each of many wavelengths, which works out to roughly 1.6 terabits per second per fiber. Here’s how David Isenberg puts that into perspective:

If all 6.5 billion people on earth had a telephone, and if they were all off-hook, generating 64 kilobits a second, and all those conversations were routed to this cable, there would be 100 fibers still dark.
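
For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python. The only inputs are the figures above; the assumption that roughly 1.6 Tb/s per fiber comes from about 160 wavelengths of 10 Gb/s each is mine, offered for illustration, not a claim from Isenberg.

    # A rough sketch of the capacity arithmetic. The 160-wavelength DWDM
    # multiplier behind the ~1.6 Tb/s per-fiber figure is an assumption
    # of this sketch, used only to make the numbers explicit.
    fibers_per_cable = 864
    per_fiber_tbps = 1.6                      # ~160 wavelengths x 10 Gb/s
    cable_tbps = fibers_per_cable * per_fiber_tbps
    print(f"one 864-fiber cable: ~{cable_tbps:,.0f} Tb/s")      # ~1,382 Tb/s

    phones = 6.5e9                            # Isenberg: everyone on Earth, off-hook
    voice_tbps = phones * 64e3 / 1e12         # 64 kb/s per call
    print(f"all simultaneous calls: ~{voice_tbps:,.0f} Tb/s")   # ~416 Tb/s
    # Even with every phone call on the planet routed onto it, most of the
    # cable's capacity would remain unused.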

Bringing fiber to homes and offices costs between $1,000 and $7,000 per “drop.” Those costs are in the same range as home entertainment systems, which begin depreciating to worthlessness immediately. Meanwhile, fiber’s value increases with every new connection you can make through it, and every new application you can run on it. And the costs of fiber, conduit, and installation are coming down while quality goes up. (This photo set provides some visual examples.)

We’ve all heard reports about how the U.S. has been falling behind other countries in broadband deployment. This is a red herring. The term “broadband” has many meanings, only one of which is data transmission rate. Worse, it’s generally associated with telecommunications, a category of business that has been subject to strait-jacket regulation going back to the Communications Act of 1934 and beyond. As Richard Bennett puts it,

The Internet is indeed the most light-regulated network going, and it’s the only one in a constant state of improvement. Inappropriate regulation – treating the Internet like a telecom network – is the only way to put an end to that cycle.

We need to exit the conceptual space called telecom and see raw connectivity and maximized capacity as the Wild West.

Given the hundreds of billions of dollars flushed into rescue and stimulus boondoggles, what’s a few hundred billion more for something that creates immeasurable support for the economy, far beyond the foreseeable future?

David Isenberg tells me a good sum to invest is $300 billion. That would be for “every home passed with more density than about four per road mile and a 50% take rate.” The numbers matter less than the intention, which is to open a vast new marketplace where American business can thrive and everybody can participate and benefit.
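
As a rough sanity check (a sketch of my own, not part of Isenberg’s estimate), the $300 billion figure squares with the per-drop costs quoted earlier. The figure of roughly 115 million U.S. households is an outside assumption used only for scale.

    # Rough consistency check: how many homes does $300 billion connect at
    # the $1,000-$7,000 per-drop costs cited above? The ~115 million U.S.
    # household count is an outside estimate, not a figure from the essay.
    budget = 300e9
    us_households = 115e6
    for cost_per_drop in (1_000, 7_000):
        homes = budget / cost_per_drop
        share = homes / us_households
        print(f"${cost_per_drop:>5,}/drop -> {homes/1e6:5.0f} million homes "
              f"(~{min(share, 1):.0%} of U.S. households)")
    # Even at the expensive end of the range, $300 billion reaches tens of
    # millions of homes -- the order of magnitude needed for a nationwide
    # build-out at a 50% take rate.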

Our vision needs to transcend more than telecom. It needs to transcend the Net as we know it today. We need to keep the best of what it already gives us, and open our minds to how much more it can be.

Bob Frankston has often reminded us that (for him at least) the Internet began as a “class project” and a “prototype.” What matters instead, he says, is connectivity extending from home networking to the whole world — and the emergence in its midst of a “bit commons” that involves and supports everybody who contributes to it.

That commons needs room. Only fiber can provide that, as a base level of infrastructure that goes all the way to every possible edge. From there we can fill in the rest with copper and wireless (which is far less scarce than spectrum-mindedness has led us to believe).

We need public-private cooperation and partnership, and that must extend to individuals and small contractors as well. Networking is something we already do at home. We should be able to stretch that out to our neighborhoods and beyond.

This also can’t be a government project alone, although it needs to involve government at state and local levels as well — not just because those actors might be in the best position to do the work in some cases, but because they often impose regulatory bottlenecks too. We need to make it as easy as possible to drape cabling on poles, trench for conduit, and get the job done without snarls of red tape. We need to keep the goal foremost in mind, and to invest, incentivize and regulate accordingly.

If we fiber up everything we can, with minimum restrictions on use, the economic upsides are limitless.

In a few days we’ll elect a new president. We can help that president do what Jefferson did two centuries ago: invest in a vast new frontier.

So let’s talk about how we can do that. Two requests for responses. One is to leave old arguments — about Net Neutrality, bandwidth hogs, and who is to blame for what — outside the door. The other is to make constructive and realistic suggestions about what this new administration can do in just one area of infrastructure investment: expanding connectivity and network capacity in ways that open innovation and growth opportunity for everybody.

Doc Searls is a Fellow at the Berkman Center for Internet and Society at Harvard Law School and Senior Editor of Linux Journal. He is also co-author (with fellow Berkman Fellow David Weinberger and others) of The Cluetrain Manifesto, and one of the world’s best-known and most widely read bloggers. His work as a journalist, speaker and advocate of the Internet led to a Google-O’Reilly Open Source Award for Best Communicator in 2005. He blogs here.

Vote Suppression in a Digital Age

Essay by Tova Andrea Wang

A version of this piece was published in the Miami Herald on October 19, 2008. It is based on a recently released report, “Deceptive Practices 2.0: Legal and Policy Responses,” written by Common Cause, the Lawyers’ Committee for Civil Rights Under Law, and The Century Foundation.*

In the last several election cycles, “deceptive practices” have been used to suppress voting and skew election results. Usually targeted at minorities and in minority neighborhoods, such activities intentionally disseminate false or misleading information that ultimately disenfranchises potential voters. Historically, deceptive practices have usually taken the form of flyers posted in a particular neighborhood; more recently “robocalls” have targeted voters. Now, we must prepare for deceptive practices 2.0: false information disseminated via the Internet, email, and other new media.

In the past, the worst deceptive practices have relied upon flyers distributed in minority communities. This was rampant in 2004. In Milwaukee, Wisconsin, fliers supposedly from the “Milwaukee Black Voters League” flooded minority neighborhoods claiming wild inaccuracies: “If you’ve already voted in any election this year, you can’t vote in the presidential election; If anybody in your family has ever been found guilty of anything, you can’t vote in the presidential election; If you violate any of these laws, you can get ten years in prison and your children will get taken away from you.” In Pennsylvania, a letter “informed” voters that Republicans would vote on November 2 and Democrats would vote on November 3—the day after the election. Similar fliers were distributed at Ross Park Mall in Allegheny County. In Ohio, a memo on counterfeit Board of Elections letterhead warned voters that anyone registered by the NAACP, ACT, the Kerry campaign, or their local Congressional campaign was disqualified and wouldn’t be able to vote until the next election.

More recently, robocalls have been the weapon of choice. According to the National Network for Election Reform, during elections in Virginia, Colorado and New Mexico, registered voters received calls claiming that their registration had been canceled and they would be arrested if they tried to vote.

In the context of the 2008 Presidential election, the crucial question is: How might such activities be adapted to cyberspace?

Emails can appear to come from reputable sources (a campaign, an election office, a political party, or a nonprofit organization) but actually contain false information about the voting process (date, time, location, rules). Partisan mischief-makers with a bit of technological knowledge could spoof the official sites of secretaries of state, voting rights organizations, or local election boards and further disrupt the voting process. Cyber-tactics such as pharming could be used to redirect users from an official site to a bogus one.

Such activities have already begun: emails have been going around with misinformation about whether voters can wear campaign clothing to the polling place, how straight ticket voting works in Texas, and what the voter ID rules in Florida are.

We have reason to want to flag these online potentialities because of what has gone on in the campaign context as well. For example, a series of fake campaign websites materialized during the primaries, including FredThomsonForum.com, RudyGiulianiForum.com, and MittRomneyforum.com (now all 404s). Phony campaign websites have also duped people into making campaign donations that actually go into someone’s pocket; in 2004, phishers set up a fictitious website purporting to be for the Democrats that stole the user’s credit card number, and another site that had users call a for-fee 1-900 number.** This year, an Internet site was set up offering to register people to vote for $9.95, a process that is free.***

Emails with false information were routinely created in the context of the presidential campaign, with Barack Obama as the most prominent target of cyber-attacks. Several disturbingly-titled emails circulated widely, including “Who Is Barack Obama?” and “Can a good Muslim become a good American?” Email smear campaigns are especially difficult to restrain because their sources are not easily identified and the messages can continue to circulate indefinitely.

In response to viral rumors, the Obama campaign established a link on its campaign website to address them. On this site the campaign provided concise responses to each summarized smear headline. From here the reader could delve into each rebuttal for a more detailed – and accurate – assessment of the facts.

In addition, the campaign utilized the viral nature of the web by encouraging supporters to send their own emails to their networks to debunk rumors. The campaign website provided an email address to which anyone could forward suspicious-looking emails so that they could be addressed immediately.

There are also several websites dedicated to the debunking of political and other types of myths both on and offline. Such sites include FactCheck.org, PolitiFact.com, and Snopes.com. BreaktheChain.org is dedicated to setting straight email chain rumors spread through forwarded messages.

Perhaps there are some lessons here that elections officials, the media, and the voting rights community can adopt as deceptive practices invade cyberspace. For example, elections officials should use websites to provide detailed, accurate information; they must advertise the existence of these sites widely. They can use mainstream media to inform people of their rights and to advise them not to be taken in by any emails they may receive about the process. They must also be in a position to quickly and loudly debunk false online rumors through the web and the mainstream media, as well as through the networks of voting rights and community organizations.

Moreover, bloggers and other online journalists can play a role by quickly spotting malicious campaigns and exposing them. A new independent website like the ones described above could be created, or one of these websites could take on the job of combating voting misinformation.

There may be some technology tools that we can use in the future to combat these challenges to our voting system. But for now, it is as it has always been: the best way to fight bad information will be by drowning it out with good information.

Tova Andrea Wang is Vice President of Research at Common Cause and a nationally known expert on election reform and political participation. Prior to joining Common Cause she was Democracy Fellow at The Century Foundation, where she directed the Post-2004 Election Reform Working Group.

References

*This report was written by Common Cause, in partnership with The Century Foundation and The Lawyers Committee for Civil Rights Under Law, with the tremendous pro bono assistance of the law firms Ropes & Gray and Morrison & Foerster. It details the ways in which misinformation may be electronically disseminated, state and federal laws that can be used to deter and punish such acts, and a series of recommendations for elections officials, voters, voting rights groups, and the press.

**Oliver Friedrichs, “Cybercrimes and Politics,” in Crimeware, Markus Jakobsson and Zulfikar Ramzan, eds., Symantec Press, 2008.

*** Erik Larsen, “Clerk Warns of Internet Deception,” Asbury Park Press, July 29, 2008.

Is the lack of web link and search engine accountability the elephant in the room of online reputation?

Essay by Chris Dellarocas.
Continue the conversation on online reputation with Judith Donath

The majority of debate on online reputation and free speech has focused on questions that relate to content authorship and hosting (see for example, this book and related discussion here, here and here). There has been far less discussion about the responsibilities of those who link to harmful content as well as about the accountability of search engines, whose page ranking algorithms – themselves based on counting links – largely determine the extent of such content’s impact. In this essay I argue that page ranking algorithms and people’s linking decisions are at least as important components of online reputation formation as content itself and deserve to be made more visible and, perhaps, more accountable.

Links constitute the true currency of reputation on the web. Even the most malicious online content will remain largely unnoticed unless others choose to link to it. Links are all the more important since search engines, the ultimate arbiters of online relevance, use a page’s link counts as the primary determinant of that page’s ranking within a set of search results.

Linking to a piece of content constitutes a judgment on the part of the linker that this information is worth noticing. By the same token, a search engine’s choice to employ a link-based page ranking algorithm constitutes a judgment that link counts are a fair method of determining web content’s merit of being read.

Although linking is as deliberate and consequential an action as authoring, our social norms and legal structures have paid much less attention to it. Many web users who would never dream of posting certain types of content have far fewer qualms about linking to them. In cases of defamation, only the original creator of the content bears legal responsibility. Section 230 of the Communications Decency Act of 1996 provides almost blanket immunity to the people who helped make this content visible by linking to it. Similarly, no responsibility is borne by the search engines, whose algorithms chose to list the content near the top of search results and greatly contributed to its negative impact.

Viewed from this perspective, the current lack of accountability with respect to linking and page ranking constitutes an important shortcoming of our fledgling reputation economy. On the one hand it encourages irresponsible and sometimes malicious behavior. On the other hand it misses a great opportunity to turn the millions of web users into more intelligent and responsible information gatekeepers.

Let me be upfront in that I am not advocating more litigation. I believe that a lot can be accomplished through education and implementation of the right incentives into the technical architecture of the web.

The first step is education. Most people do not fully realize the implications and responsibilities that come from their choice to link to a piece of online content. Even fewer people fully grasp the way in which web links, stripped of their original context and aggregated en masse, affect the decisions of page ranking algorithms. For example, a blogger who links to a racist article from inside a posting in which she strongly condemns it is at the same time boosting that article’s PageRank, improving its visibility on search engines and exacerbating its negative impact. Fully grasping the consequences of an individual’s linking decisions is the first step towards using this powerful staple of our networked society with responsibility.
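
To make the mechanics concrete, here is a toy PageRank-style calculation with hypothetical page names (a simplified sketch of the general link-counting idea, not Google’s actual algorithm). The ranking sees only that a link exists, so a condemning link and an endorsing link raise the target page’s score identically.

    # Toy link graph: two posts link to the same article, one to condemn it,
    # one to endorse it. The ranking below cannot tell the difference.
    links = {
        "condemning-post": ["offensive-article"],
        "endorsing-post":  ["offensive-article"],
        "offensive-article": [],
        "bystander-page":  ["condemning-post"],
    }

    def simple_pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages   # dangling pages spread rank evenly
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    print(simple_pagerank(links))
    # "offensive-article" collects rank from both linkers, whatever their intent.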

The second step is implementing incentives for responsible linking and page-ranking into the architecture of the web. For example, one can envision a set of mechanisms that keep track of the linking actions of websites (and, to the extent possible, individuals), and, on the basis of such actions, assign to them a publicly visible score that roughly translates to their “quality of judgment”. Linking to content that proves to be beneficial increases the score; linking to content that proves to be harmful decreases it.
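
Here is a minimal sketch of what such a mechanism might look like. The site names, the +1/-1 weighting, and the judge_content() interface are illustrative assumptions of mine; how content comes to be labeled beneficial or harmful is deliberately left as an input, since that judgment is the genuinely hard part (as discussed below).

    # Hypothetical "quality of judgment" ledger for linkers. Linking to content
    # later judged beneficial raises a site's score; linking to content later
    # judged harmful lowers it. Not an existing system.
    from collections import defaultdict

    linkers_of = defaultdict(set)        # content URL -> sites that linked to it
    judgment_score = defaultdict(float)  # site -> publicly visible score

    def record_link(site, content_url):
        linkers_of[content_url].add(site)

    def judge_content(content_url, beneficial, weight=1.0):
        # Once content is deemed beneficial or harmful (by whatever process),
        # propagate that judgment back to every site that linked to it.
        delta = weight if beneficial else -weight
        for site in linkers_of[content_url]:
            judgment_score[site] += delta

    record_link("blog-a.example", "http://example.org/helpful-guide")
    record_link("blog-b.example", "http://example.org/libelous-post")
    judge_content("http://example.org/helpful-guide", beneficial=True)
    judge_content("http://example.org/libelous-post", beneficial=False)
    print(dict(judgment_score))   # {'blog-a.example': 1.0, 'blog-b.example': -1.0}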

In a reputation economy, a person’s quality of judgment is as valuable and important a trait as a person’s reputation on any other dimension. In small communities people who spread false rumors quickly acquire a reputation for bad judgment and become ostracized or irrelevant. On the other hand, people who exhibit good judgment grow in esteem and are welcome everywhere. We need to build a similar set of checks and balances for the web.

Search engines must be subject to similar scrutiny. Their choices of page ranking algorithms are deliberate and, therefore, accountable. Plus they have very real consequences. It is my hope that public measurement of a search engine’s “quality of judgment” will induce the creation of more responsible algorithms. At the minimum, it will alert users that these all-powerful gatekeepers of reality are not infallible.

Implementing these ideas will not be easy. There are several difficult challenges for which there are no easy answers. Here are just a few: Who gets to decide what content is beneficial and what is harmful? In limited cases (for example, content that has been proven to be libelous in court) making such judgments with a fair degree of objectivity is feasible; in the majority of cases, however, such decisions will be subjective. How should one take into consideration the context of a link? For example, when a blogger lambasting a libelous posting ends up boosting its visibility on search engines, is this an instance of poor judgment on behalf of the blogger or a failure of page ranking algorithms to properly take the context of the link into consideration? Who should bear the responsibility (or get the credit) for anonymous links posted as comments on eponymous blogs?

Despite their difficulty, these are challenges that we cannot ignore. In our networked society, linking and page ranking carry just as much weight as authoring. All three need to be exercised with caution and responsibility. Similarly, any discussion of free speech and online reputation must focus on all three.

Chris Dellarocas is an Associate Professor of Information Systems and Director of the Center for Complexity in Business at the Robert H. Smith School of Business of the University of Maryland. His research examines the implications of consumer-generated content and social web technologies for business and society. His work on online reputation formation has received international recognition and has been quoted in, among other places, CNN Headline News, The New York Times, The Wall Street Journal, Business Week, Washington Post and the Financial Times. He is an inventor with three patents and a board member of several Web 2.0 companies.

Is reputation obsolete?

Essay by Judith Donath.
Continue the conversation on online reputation with Chris Dellarocas.

In the past, most conversations were ephemeral: spoken words quickly slipped into the past, resurrected only if a listener later repeated them from memory. Today, many discussions and transactions live on indefinitely. Online conversations are often permanently archived and events in the face-to-face world are frequently recorded. We photograph each other at events both significant and mundane, and upload the images to public media-feeds. Records of our travel times, purchases, health conditions, phone calls, and more exist in vast corporate and government databases. Today, I often no longer have to rely on someone else’s account of your past behavior: I can see for myself.

In a world in which all action is recorded, is there still need for reputation information? If I can see the events of the past for myself, is getting other people’s potentially biased and self-serving opinions about it worth anything? Or, has reputation become obsolete?

In some cases, the answer is yes.

For example, when buying something on eBay, if a seller’s reputation is poor I’ll go elsewhere. However, some ratings are falsely low because of retaliation, or inaccurately high due to fear of retaliation. The reported opinions of others are therefore dubious.* What I really want to know are the facts about the seller’s past transactions: Were the items sent on time? Were they in good condition? The ideal would be if UPS had a service where they photographed an item to be shipped, wrapped it themselves, and posted the photo and shipping info; I could then look up a verified record of the seller’s actions (though items whose condition is not readily apparent from a photo – a laptop, for instance – would need a more extensive evaluation). In other words, I’m more interested in history here than in reputation.

Yet even when the facts of an event are clear, interpretations of them can be important. A politician’s publicly broadcast speech is subsequently argued over by journalists, bloggers, and taxicab drivers; different communities interpret the same words and gestures in vastly different ways. In academia, the committee members evaluating a professor for promotion can read his C.V. and publications, but they also rely heavily on letters from colleagues assessing the significance of the candidate’s work.

Reputation is central to community formation and cooperation (Emler 2001; Gluckman 1963; Hardin 2003). Through discussion about others’ actions, people establish and learn about the community’s standards. Reputation is the core of rewards and sanctioning – it amplifies the benefits of behaving well and the costs of misbehavior. If I work with someone who turns out to be lazy and dishonest and I tell my friends about it, they are spared a similar bad experience. Having access to reputation information is a big benefit of community membership: insiders know who to trust and how to act toward each other, while strangers do not get the benefit of others’ past experiences. Our ability to share reputation information makes society possible (Dunbar 1996).

In light of this, it would seem that the answer to the question “Is reputation obsolete?” is “No”.

Yet reputation is subject to manipulation, for various reasons. People use it to influence opinion to advance their own causes, to maliciously harm someone, or to curry favor by providing entertaining or seemingly confidential material. We need to understand what circumstances make reputation reliable.

Reputation information exchanged within close-knit communities is more reliable, and members learn when assessments are biased. A colleague recently mentioned that she would never trust another recommendation letter from Professor X again – she’d seen too many in which he claimed that different students were “the top scholar I’ve known”. In overzealously promoting the careers of his students, Professor X acquired a poor reputation for inflated praising. Most letter writers temper the desire to over-enthusiastically praise in order to remain credible in the eyes of their peers, realizing that this close-knit community assesses the assessors. Without community ties, reputation is generally less useful. On public rating sites such as eBay, where no community binds the rater and the reader of ratings, there is no check on reliability and the ratings function primarily as a social exchange between the rater and subject (David & Pinch 2006).

So, is reputation obsolete in an increasingly archival world? The answer, it appears, is “sometimes”. When the immediate facts are primary, we should make use of the vast amount of archived material available. But when situations are ambiguous, when there are conflicting versions of events or codes of behavior, and when developing a shared culture is important (Merry 1997), reputation and the communicative, community-building process of creating it is far from obsolete.

Online, new factors affect the balance between reputation and history. One big issue is “portable identity”: if I spend countless hours on a site being a gracious and well-informed companion, shouldn’t I be able to take that personal history and reputation with me to another site? Many would argue yes, that “you own your own words”. (More controversial is the question of whether you *must* take your history with you: people prefer to port only positive pasts.) But porting reputation is a different matter. Your reputation is information about you, but it is not by you. If you own your own words, then your reputation is owned not by you, but by the people who talk about you. Furthermore, it is a subjective judgment made in a specific context that may not translate well into another. History is portable in ways that reputation is not.

An online site can encourage reliance on history by making search easy and by providing visualizations of patterns within its archive. Or it can encourage the use of reputation by providing both public and private communication channels, as well as feedback about the value of the reputation information people have provided. In technologically mediated societies, evaluating the relative merits of history and reputation is especially important, for the habits of such communities are shaped by deliberate design.

*People do use these ratings and they affect price (Resnick et al. 2006) but this is in the absence of better information.

Judith Donath is a Berkman Faculty Fellow and the director of the Sociable Media Group at the MIT Media Lab. Her work focuses on the social side of computing, synthesizing knowledge from fields such as graphic design, urban studies and cognitive science to build innovative interfaces for online communities and virtual identities. She is known internationally for pioneering research in social visualization, interface design, and computer mediated interaction.

References

David, Shay and Trevor John Pinch. 2006. Six Degrees of Reputation: The Use and Abuse of Online Review and Recommendation Systems. First Monday 11, no. 3.

Dunbar, Robin I. M. 1996. Grooming, Gossip, and the Evolution of Language. Cambridge, MA: Harvard University Press.

Emler, Nicholas. 2001. Gossiping. In The New Handbook of Language and Social Psychology, ed. W. P. Robinson and H. Giles, 317–338. New York: Wiley.

Gluckman, Max. 1963. Gossip and Scandal (Papers in Honor of Melville J. Herskovits). Current Anthropology 4, no. 3: 307-316.

Hardin, Russell. 2003. Gaming trust. In Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research, ed. Elinor Ostrom and James Walker, 80–101. New York: Russell Sage Foundation.

Merry, Sally Engle. 1997. Rethinking gossip and scandal. In Reputation: Studies in the Voluntary Elicitation of Good Conduct, ed. Daniel B. Klein, 47–74. Ann Arbor: University of Michigan Press.

Resnick, P., R. Zeckhauser, J. Swanson, and K. Lockwood. 2006. The value of reputation on eBay: A controlled experiment. Experimental Economics 9, no. 2: 79–101.

Why Politics and Institutions (Still) Matter for ICT4D

Essay by Aaron Shaw, a reply to Ken Banks

Ken Banks’ provocative contribution to the Publius Project, “One Missed Call,” boldly urges the ICT for Development (ICT4D) community to look beyond bureaucracy-heavy, top-down solutions to global poverty and inequality. In a similar spirit, my response to Ken’s piece will take the form of a question, critique, and complementary challenge to the ICT4D community that runs somewhat afoul of the Easterly-Schumacher-inspired vision he offered.

Ken echoes William Easterly’s disdain for bureaucratic, large-scale approaches to global poverty, calling instead for the adoption of small, techno-centric solutions based on principles of Human-Driven Design and deployment by “grassroots” NGOs. Like Easterly, he encourages us to bet on the ingenuity of small-time entrepreneurs to break the world’s persistent cycles of poverty. If we identify these entrepreneurs, the theory goes, we can eliminate poverty without the immense waste and inefficiency that plague so-called “Big-D” development projects.

While both Easterly and Banks present compelling, attractive claims, they leave a key question unanswered: how can ICT4D advocates effectively confront the systemic and structural aspects of poverty or inequality within this framework?

Easterly’s argument takes for granted that well-positioned innovators can overcome institutional constraints at the regional, national and global levels. Indeed, his arguments in The White Man’s Burden closely resemble the work of free-market ideologues Milton Friedman and Friedrich Hayek insofar as he objects to all forms of developmental “planning” as fundamentally misguided. Empirical research in Development Studies contradicts this position, suggesting that the ability of grassroots NGOs and others to deploy technological solutions effectively is overdetermined by the institutional environment within which they act (for a recent example, see Ha-Joon Chang’s Bad Samaritans: The Myth of Free Trade and the Secret History of Capitalism). Of course, to adapt Margaret Mead’s much-abused phrase, I do not doubt that a small group of committed citizens can change the world. And yet, such changes are bound to be fleeting in the absence of broader interventions.

The problem, as I see it, stems from the fact that Easterly’s proposition is free-market economics with a friendly face – compassionate conservatism in the truest sense of the phrase. Embracing Easterly’s vision entails a radical denial that broad political, economic, and cultural structures determine developmental outcomes in any way. The history of global development since World War II offers numerous grounds on which to reject this claim. First of all, the emergence of the United States as a superpower influenced the creation of the World Bank and the International Monetary Fund, the primary institutional frameworks within which development projects took place (until recently). Secondly, the concomitant dissemination of U.S. culture, values, and products has also shaped the ideals and aspirations through which people across the world understand what it means to be “developed.”

As a result, we cannot talk about “development” without referring to the broad political, economic and cultural currents that defined the late 20th century and the processes of globalization. All contemporary development projects operate in the institutional space defined by this history – and in many cases it is the space itself, rather than any individual bureaucracy or top-down vision of change, that determines what is and what is not possible for the poor and middle income populations of the Global South.

Contemporary global development paradigms (ICT4D among them) bear the traits of their organizational and philosophical predecessors. The Millennium Development Goals represent a continuation of the Big-D development schemes of the 1950s and 1960s, when gigantic multilateral institutions like the United Nations dictated the terms on which the world’s poor would modernize. Similarly, the small-d development ideal proffered by Easterly and others places great faith in the ability of unregulated markets and small-scale entrepreneurs to bring widespread economic growth “from the bottom up.” This represents a scaled-down version of the so-called Washington Consensus of the 1980s and ’90s that saw the dismantling of social welfare systems and the deregulation of financial markets around the world. The results of such “structural adjustment” were catastrophic for the poor, as local elites and multinational corporations extracted spectacular profits at the expense of less-empowered populations.

Both approaches – the big-D and the small-d – are stained by fundamental shortcomings that no amount of revisionism can wash away. On their own, neither will bring about sustainable, widespread improvements in the quality of life in the world’s chronically poor and unstable regions.

As a result, I challenge the ICT4D community to confront the contradictions of these competing paradigms of poverty and inequality alleviation.

At a practical level, we cannot simply abandon participation in (or engagement with) large national and multilateral political institutions. Access to fantastic gadgets and services will mean little in the long run without a corresponding framework to support sustainable improvements in “human capabilities.” Likewise (and here I agree completely with Ken), the best-intentioned multilateral efforts will fail unless they are grounded in the sort of modular, experimental approach embodied in Schumacher’s “small is beautiful” ideal.

Therefore, the ICT4D community (along with fellow travelers like myself) must find ways to split the difference between the Big-D and the small-d. We must reach out to the small grassroots NGOs and innovators at the same time as we pursue less glamorous forms of political transformation and institution-building. We must design brilliant, appropriate gadgets and cultivate strong, accountable institutions. Together, these digital and social technologies will enable more people around the world to thrive, facilitating access to knowledge, networks, sanitation, water, and healthcare.

The need for broad political engagement has rarely been more apparent than in the present context. The collapse of the World Trade Organization’s Doha round of negotiations and the current global financial crisis provide textbook examples of institutional failures that grassroots intervention alone will not resolve. The lack of consensus at Doha reveals the extent to which existing global governance institutions have failed to meet the needs of low and middle income countries. Meanwhile, the implosion of the housing and credit markets in the United States has illustrated the risks of insufficient coordination between government and the private sector in the face of an obvious, long-standing threat to the collective interests of society. In the absence of sustainable solutions to these overlapping problems, rampant inequalities will likely reproduce and spread, leading to further financial and political instabilities.

In this setting, ICT4D advocates cannot afford to turn their backs on global institutions as critical mechanisms for achieving lasting techno-social change. Of course, analyzing and participating in big bureaucracies such as national states, multilateral governance forums, and international standards committees entails a distasteful degree of compliance with abusive forms of power. In this regard, Easterly’s claim that we must be wary of the tendency for these organizations to deliver corruption and inappropriate technologies is on target.

Nevertheless, if we want to avoid “missing the call” for technologies that have the potential to facilitate enhanced access, equality, and prosperity, such political and institutional engagement is more necessary than ever.

Aaron Shaw is a Research Fellow with the Cooperation Research Group at the Berkman Center and a Ph.D. student in the Sociology Department at the University of California, Berkeley. He is currently involved in a large-scale study of commons-based peer production online as well as a project examining collaborative practices in the U.S. political blogosphere. His previous research examines the politics of information and development in Brazil, where he has conducted fieldwork and interviews during the past two years. His other interests include the networked public sphere, global governance, the knowledge-based economy, and social theory.

One Missed Call?

Refocusing our attention on the social mobile long tail

Essay By Ken Banks, Founder, kiwanja.net

In “The White Man’s Burden – Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good”, William Easterly’s frustration at large-scale, top-down, bureaucracy-ridden development projects runs to an impressive 384 pages. While Easterly dedicates most of his book to markets, economics and the mechanics of international development itself, he talks little of information and communication technology (ICT). The index carries no reference to ‘computers’, ‘ICT’ or even plain old ‘technology’.

But there is an entry for ‘cell phones’.

E. F. Schumacher, a fellow economist and the man widely recognized as the father of the appropriate technology movement, spent a little more time in his books studying technology issues. His seminal 1973 book – “Small is Beautiful – A Study of Economics as if People Mattered” – reacted to the imposition of alien development concepts on Third World countries, and he warned early of the dangers and difficulties of advocating the same technological practices in entirely different societies and environments. Although his earlier work focused more on agri-technology and large-scale infrastructure projects (dam building was a favorite ‘intervention’ at the time), his theories could easily have been applied to ICTs – as they were in later years.

Things have come a long way since 1973. For a start, many of us now have mobile phones, the most rapidly adopted technology in history. In what amounts to little more than the blink of an eye, mobiles have given us a glimpse of their potential to help us solve some of the most pressing problems of our time. With evidence mounting, I have one question: If mobiles truly are as revolutionary and empowering as they appear to be – particularly in the lives of some of the poorest members of society – then do we have a moral duty, in the ICT for Development (ICT4D) community at least, to see that they fulfill that potential?

You see, I’m a little worried. If we draw parallels between the concerns of Easterly and Schumacher and apply them to the application of mobile phones as a tool for social and economic development, there’s a danger that the development community may end up repeating the same mistakes of the past. We have a golden opportunity here that we can’t afford to miss.

But miss it we may. Since 2003 I’ve been working exclusively in the mobile space, and I’ve come to my own conclusions about where we need to be focusing more of our attention if we’re to take advantage of the opportunity ahead of us. Don’t get me wrong – we do need to be looking at the bigger picture – but there’s not room at the top for all of us. I, for one, am more than happy to be working at the bottom. Not only do I find grassroots NGOs particularly lean and efficient (often with the scarcest of funding and resources), but they also tend to get less bogged down with procedure, politics and egos, and are often able to react far more quickly to changing environments than their larger counterparts. Being local, they also tend to have much greater context for their environments, and in activism terms they’re more likely to be able to operate under the radar of dictatorial regimes, meaning they can often engage a local and national populace in ways where larger organizations might struggle.

So, waving my grassroots NGO flag, I see a central problem of focus in the mobile applications space. Let me explain. If we take the “Long Tail” concept first talked about by Chris Anderson and apply it to the mobile space, we get something like this. I call it “Social Mobile’s Long Tail”.

[Figure: Social Mobile’s Long Tail – kiwanja.net]

What it demonstrates is that our tendency to aim for sexy, large-scale, top-down, capital- and time-intensive mobile solutions simply results in the creation of tools which only the larger, more resource-rich NGOs are able to adopt and afford. Having worked with grassroots NGOs for over 15 years, I strongly believe that we need to seriously refocus some of our attention there to avoid developing our own NGO “digital divide”. To do this we need to think about low-end, simple, appropriate mobile technology solutions which are easy to obtain, affordable, require as little technical expertise as possible, and are easy to copy and replicate. This is something I regularly write about, and it’s a challenge I’m more than happy to throw down to the developer community.
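
To make the long-tail intuition concrete, here is a minimal sketch of my own – not a kiwanja.net model – which assumes, purely for illustration, that NGO resources follow a Zipf-like power law across a hypothetical population of 10,000 organisations:

```python
# Illustrative only: assumes NGO resources follow a Zipf-like power law.
# The population size and exponent are hypothetical placeholders.

def zipf_shares(n_orgs, exponent=1.0):
    """Return each organisation's share of total resources under a power law."""
    weights = [1.0 / (rank ** exponent) for rank in range(1, n_orgs + 1)]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    shares = zipf_shares(10_000)      # hypothetical population of 10,000 NGOs
    head = sum(shares[:100])          # the 100 best-resourced organisations
    print(f"The top 1% of NGOs hold roughly {head:.0%} of the resources; "
          f"the other 9,900 share the rest.")
```

Under those assumptions, roughly half of all resources sit with one percent of organisations – which is exactly why tools priced and designed for the head of the curve rarely reach the long tail of small NGOs.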

Another key problem emerges as a symptom of the first. Because larger international development agencies, by their very nature, tend to pre-occupy themselves with the bigger issues, they often inadvertently neglect the simple, easier-to-fix problems (the “low-hanging fruit”, as some people like to call it). The Millennium Development Goals (MDGs) are good examples of the kinds of targets which are far easier to miss than hit.

In mobile terms, using the technology to enhance basic communications is a classic “low-hanging fruit”. After all, that’s what mobile phones do, and communication is fundamental to all NGO activities, particularly those working in the kinds of infrastructure-challenged environments often found in the developing world. Despite this, there are few tools available that take advantage of one of the most prolific mobile communication channels available to grassroots NGOs – the text message (or SMS).

Much of my own work with FrontlineSMS has sought to solve this fundamental problem, and in places such as Malawi – where a student, my software, a laptop and one hundred recycled mobile phones have helped revolutionize healthcare delivery to 250,000 rural Malawians – the benefits are loud and clear. In other countries, where the activities of international aid organizations may be challenged or restricted by oppressive, dictatorial regimes, grassroots NGOs often manage to maintain operations and provide the only voice for the people. In Zimbabwe, Kubatana.net have been using FrontlineSMS extensively to engage a population starved not only of jobs, a meaningful currency and a functioning democracy, but also of news and information. In Afghanistan, an international NGO is using FrontlineSMS to provide security alerts to their staff and fieldworkers. The software is seen as a crucial tool in helping keep people safe in one of the world’s most volatile environments. With a little will, what can be done in Zimbabwe and Afghanistan can be done anywhere where similar oppression exists.
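
FrontlineSMS itself is the system described above, but the general mechanism this class of laptop-plus-phone deployments relies on – driving a locally attached GSM phone or modem over a serial link with standard AT commands – can be sketched in a few lines. The snippet below is my own illustration of that generic technique (using the pyserial library; the port name and phone number are hypothetical placeholders), not FrontlineSMS source code:

```python
# A minimal sketch of sending one SMS through a locally attached GSM phone or
# modem using standard AT commands. Illustrative only; not FrontlineSMS code.
# The serial port and phone number below are hypothetical placeholders.
import time
import serial  # pyserial


def send_sms(port, recipient, message):
    modem = serial.Serial(port, baudrate=9600, timeout=5)
    try:
        modem.write(b"AT\r")                        # check the modem responds
        time.sleep(0.5)
        modem.write(b"AT+CMGF=1\r")                 # switch to SMS text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{recipient}"\r'.encode())
        time.sleep(0.5)
        modem.write(message.encode() + b"\x1a")     # Ctrl-Z terminates the message
        time.sleep(3)
        return modem.read(modem.in_waiting or 1)    # raw reply, e.g. b"+CMGS: 12"
    finally:
        modem.close()


if __name__ == "__main__":
    print(send_sms("/dev/ttyUSB0", "+265000000000", "Clinic visit confirmed for Friday"))
```

The appeal of keeping the stack this thin is exactly the point above: a recycled handset, a cable and a laptop are enough for a grassroots NGO to run its own messaging hub without depending on anyone else’s infrastructure.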

In cases such as these – and there are many more – we need to stop simply talking about “what works” and start to get “what works” into the hands of the NGOs that need it the most. That’s a challenge that I’m happy to throw down to the ICT4D community. There’s only a certain amount of talking we can do.

There are, of course, many issues and challenges – some technical, some cultural, others economic and others geographical. The good news is that few are insurmountable, and we can remove many of them by simply empowering the very people we’re seeking to help. The emergence of home-grown developer communities in an increasing number of African countries, for example, presents the greatest opportunity yet to unlock the social change potential of mobile technology. Small-scale, realistic, achievable, replicable, bottom-up development of the kind championed by Easterly and Schumacher may hardly be revolutionary, but what would be revolutionary is acknowledging the mistakes of the past and making a co-ordinated effort to avoid repeating them.

I spent the best part of my university years critiquing the efforts of those who went before me. Countless others have done the same. Looking to the future, how favourably will the students and academics of tomorrow reflect on our efforts? If the next thirty years aren’t to read like the last then we need to re-think our approach, and re-think it now. The clock is ticking.

Ken Banks, Founder of kiwanja.net , devotes himself to the application of mobile technology for positive social and environmental change in the developing world, and has spent the last 15 years working on projects in Africa. Recently, his research resulted in the development of FrontlineSMS, a field communication system designed to empower grassroots non-profit organisations. He graduated from Sussex University with honours in Social Anthropology with Development Studies and currently divides his time between Cambridge (UK) and Stanford University in California on a MacArthur Foundation-funded Fellowship. Ken was awarded a Reuters Digital Vision Fellowship in 2006, and named a Pop!Tech Social Innovation Fellow in 2008. He is a close observer of a process he calls the “grassroots mobile revolution.”

The Right to Privacy. Again.

Essay by Dembitz

A response to John Clippinger’s On Technology, Security, Personhood And Privacy: An Appeal

Continue the conversation with Beau Brendler, Michael Barrett, and David Clark.

Our identities in the online world are as real and as significant as our identities in the physical world. Our friendships are formed through the personal details we share on the pages of Facebook and the IMs and emails that traverse the ether. Governments and businesses make decisions about us based on the bits of information that they have acquired through our recorded moments in these digital worlds. Our reputations are everywhere affected by the increasing power of our digital identities.

The risks that we face to our physical bodies are less severe today than in the past. Indeed, the advances of the medical sciences have cured many diseases and extended our lifespans. Both technology and law have improved the safety of travel, food, and consumer products.

But our digital identities face increasing danger. Identity theft continues at a rapid pace due to the proliferation of persistent identifiers, poor security measures, and sloppy business practices. Digital files are left unencrypted. Old laptops and hard drives, filled with personal information, are available for sale. Security breaches of our personal information are frequent; credit monitoring services provide little protection.

Intrusions into our physical space pale in comparison to intrusions into our private lives in digital space. A camera’s capture of an embarrassing moment passes as quickly as a bid for a Pez dispenser on eBay. But the recorded imagery broadcast on YouTube finds its way to servers around the globe.

The choice that we face, as the polity of this new digital world, is as old as political philosophy itself: shall we address these challenges in isolation or shall we decide to address them together, respecting that individuals should retain the freedom to pursue their own definitions of the good?

What is the outcome if we leave each person to defend their own identity? Life in such a digital world will be nasty, brutish, and short. Our identities will be everywhere under assault. The techniques that a person develops to safeguard his or her persona need not be respected by our adversaries or even our governments. All manner of intrusion, surveillance, observation, and analysis will be brought to bear on the most intimate facts of our private lives. Such observation will routinely occur in secret and without accountability. In such a world, the unpopular, the disliked, and the unfamiliar will remain under constant scrutiny. Already we subject the communications of non-citizens to greater examination and deny them the full rights provided by our courts. The technologies of surveillance fall disproportionately upon those who appear different.

Today our identities are increasingly commodified. Our personas are sold from one business to another without consent or compensation. Employers, landlords, insurers, and government agencies know our incomes, our neighbors, the names of our children, and even our political preferences. Facebook widgets collect far more data than is necessary. Without united action, this trend will continue. How long before others sell our digital personas on eBay to the highest bidder? What law today prevents such conduct?

It is tempting to say that a shared solution will undermine freedom or to imagine that individuals armed with keys of sufficient length will be able to defend their rights without the assistance of others. But by now we’ve had several years of testing this strategy. It has greatly favored the interests of large governments and large businesses and has left individuals with decreased rights and increased risks. It has resulted in the disclosure of intimate facts under fraudulent and deceptive circumstances.

To say that we must now engage the work of constructing real safeguards for identity in the digital world is not to say that such a project will be easy. There will be conflict and controversy. There will also be powerful opposition from vested interests that seek to maintain and to extend their powers. They will offer funding for research centers, paid fellowships, and consulting opportunities to quiet criticisms.

But we should never doubt that we can respect private life even as we value the opportunity the Internet provides to engage in a public life far more open than the one our parents imagined. Our ability to negotiate the boundary between the public sphere and the private sphere is a measure of our freedom and is essential to the future of our democratic society.

Between a problem and a solution is the time when people come together and begin a discussion. That moment is now. The future lies ahead.

This essay has been published anonymously, under the pseudonym Dembitz.

How I Learned to Stop Worrying about New Media Literacy and Love the Internet

Essay by Evgeny Morozov

A response to Dan Gillmor’s Principles of a New Media Literacy

Continue the conversation with Daisy Pignetti.

While it offers a useful general perspective on the future of media literacy, Dan Gillmor’s essay doesn’t fully answer some of the most fundamental questions about the relationship between education, media, and democracy. Let me sketch just a few of them:

1. Can we do anything to provide for better media literacy and more transparency in the digital age?
2. Should we actually do anything about it?
3. How exactly do we go about it, if, indeed, we could and should?

Gillmor’s answer to the first two questions is an implicit but unqualified “yes.” He broaches the third one – arguably the most important of the three – only in very general terms: “We are doing a poor job of ensuring that consumers and producers of media in a Digital Age are equipped for the tasks,” he says, and I agree; but how exactly could we do this job better?

The word “we,” in fact, recurs in almost every paragraph of this essay (not surprisingly, Gillmor’s best-selling book is called We the Media). But who is this “we”? Is it academia? Is it funders? Is it policy makers? Gillmor never says so explicitly, but I assume that it’s some combination of the three. If that’s the case, I am quite skeptical that “we” are in fact capable of doing anything about these problems or, on some issues, that we should even try.

Take the issue of misinformation. The problems Gillmor alludes to aren’t new; we’ve known for ages that power impinges on media, that journalists are fallible creatures and that what we read in the media is often not true. On a cognitive level, we have already learned how to differentiate between the reliable and the unreliable; few of us confuse the New York Post and the New York Times.

The yellow press in Britain, for example, repeatedly manufactures dubious truths – and yet the country still enjoys one of the healthiest democracies in the world. Are the Brits really less informed than, say, the Germans because of their yellow press? How does it happen then that the best journalism in the world – as practiced by the Financial Times, the Guardian or the Economist – also comes from Britain? Perhaps the relationship between media, democracy, and the public is a bit more complex than Gillmor wants us to believe.

Gillmor is certainly right that the blogosphere is no longer the pristine land that it used to be. Corporations, PR agencies, and extremists are taking to it in droves for precisely the same reasons that activists, journalists, and pundits did a few years earlier: blogs are often anonymous, they help with quick mobilization and offer a much cheaper way of campaigning. But didn’t the same thing happen with traditional media—television, for example – a few decades ago? Are we really facing genuinely new problems?

Also, wouldn’t they soon become non-issues as, thanks to the Internet, consumers now have terabytes of reference data at their fingertips and can verify any fact in moments? Wouldn’t new systems for determining one’s online reputation – be that Google’s PageRank, the number of Diggs or some other metric – help address the credibility gap? We are only beginning to scratch the surface here, as even Google itself is barely a decade old.
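
For readers unfamiliar with how such link-based reputation metrics work, here is a minimal sketch of the power-iteration idea behind PageRank – an illustration of the general algorithm over a made-up link graph, not Google’s implementation:

```python
# A toy illustration of the link-based ranking idea behind PageRank: a page (or
# blog) earns reputation to the extent that other reputable pages link to it.
# Not Google's implementation; the link graph below is made up.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {                               # hypothetical blogs linking to one another
        "blogA": ["blogB", "blogC"],
        "blogB": ["blogC"],
        "blogC": ["blogA"],
        "spamblog": ["blogA"],                # links out, but nothing links to it
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

In this toy graph the unlinked “spamblog” ends up with the lowest score, which is the intuition behind treating such metrics as a rough proxy for credibility.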

Gillmor’s diagnosis of the issue is largely correct – “we are doing a poor job” (and we always were), but his prescription – “let’s just do more of it”—seems inappropriate. I don’t believe that the Internet would suddenly change our academic, philanthropic, or policy-making institutions, which, armed with their long-term, large-scale and systematic hammers, see everything as a nail. Quite the opposite: the Internet is only making it harder for them to deal with issues that they couldn’t handle even in the pre-Internet age. Adding a qualifier like “digital” or “cyber” to the names of their programs is never enough.

Such an over-reliance on conventional solutions looks awkward in today’s vibrant era of instantaneous self-publishing and micro-change. Media literacy in the digital age is an ever-elusive and always-moving target; we desperately need flexible, ad-hoc and decentralized ways of zooming in on it on a daily—not yearly—basis. It seems that Gillmor’s proposed solution—developing and updating new media literacy curricula—would be ineffective for one simple reason: this would usually take months if not years and make the solution outdated before it even hits the printing press.

Instead, what we – those who belong to some of these institutions – should do is switch from thinking in this anonymous “We”-mode to thinking in a very concrete “I”-mode, where individuals, armed with a panoply of tools like Google or Wikipedia, would play the leading role in solving transparency and literacy problems. We shouldn’t underestimate the great changes that such a switch from “We”-mode to “I”-mode could bring. It may just be that there is no other choice; the era of Web 2.0 has proved that today almost anyone can build effective, cheap and beautiful web projects—anyone but most governments, NGOs and policy-making institutions.

So where Gillmor sees “us” not doing a good job of equipping “them” for “a Digital Age”, I see something different. I see “us” not doing a good job of equipping ourselves for this age, not learning fast enough from “them”—from the great bottom-up examples like Wikipedia with their own community-based governance and organically changing reputation systems.

Why do we assume that 15-year-olds – who spend most of their time reading anonymous blogs and browsing fake Facebook profiles – would be as naïve as their parents when dealing with other media? These “digital natives” know much better than we do how easy it is to produce (not just consume) and manipulate information; this automatically makes many of them immune to the trickiest PR stunts. Isn’t there a lot that “they” can teach “us” instead?

Gillmor’s alarmist tone may just reflect the panic and unease that “we” – academics, funders, journalists – feel about losing our capacity to influence things. I am not sure this is a bad thing, since our influence – particularly on such soft and intangible issues as media literacy – often carried our own biases and outdated thinking, and was not always effective. That today’s citizens find their own ways to educate themselves and build their own platforms for increasing transparency should be a cause for celebration, not regret.

Evgeny Morozov is the founder and publisher of Polymeme.com. He has written for The Economist, The International Herald Tribune, Le Monde, Business Week, openDemocracy, RealClearPolitics, and other media. Evgeny is on the sub-board of the Information Program at the Open Society Institute. Previously he was Director of New Media at Transitions Online, a Prague-based media development NGO.