
Archive for the 'internet governance' Category

“Born Digital” and “Digital Natives” Project Presented at OECD-Canada Foresight Forum


Here in Ottawa, I had the pleasure of speaking at the OECD Technology Foresight Forum of the Information, Computer and Communications Policy Committee (ICCP) on the participative web – a forum aimed at contributing to the OECD Ministerial Meeting “The Future of the Internet Economy” that will take place in Seoul, Korea, in June 2008.

My remarks (what follows is a summary; a full transcript is available as well) were based on our joint and ongoing Harvard–St. Gallen research project on Digital Natives and included some of the points my colleague and friend John Palfrey and I are making in our forthcoming book “Born Digital” (Basic Books, 2008).

I started with the observation that increased participation is one of the features at the very core of the lives of many Digital Natives. Since most of the speakers at the Forum were putting emphasis on creative expression (like making mash-ups, contributing to Wikipedia, or writing a blog), I tried to make the point that participation needs to be framed in a broad way and includes not only “semiotic democracy”, but also increased social participation (cyberspace is a social space, as Charlie Nesson has argued for years), increased opportunities for economic participation (young digital entrepreneurs), and new forms of political expression and activism.

Second, I argued that the challenges associated with the participative web go far beyond intellectual property rights and competition law issues – two of the dominant themes of the past years as well as at the Forum itself. I gave a brief overview of the three clusters we’re currently working on in the context of the Digital Natives project:

  • How does the participatory web change the very notion of identity, privacy, and security of Digital Natives?
  • What are its implications for creative expression by Digital Natives and the business of digital creativity?
  • How do Digital Natives navigate the participative web, and what are the challenges they face from an information standpoint (e.g. how to find relevant information, how to assess the quality of online information)?

The third argument, in essence, was that there is no (longer a) simple answer to the question “Who rules the Net?”. We argue in our book (and elsewhere) that the challenges we face can only be addressed if all stakeholders – Digital Natives themselves, peers, parents, teachers, coaches, companies, software providers, regulators, etc. – work together and make respective contributions. Given the purpose of the Forum, my remarks focused on the role of one particular stakeholder: governments.

While this is still research in progress, it seems plain to us that governments may play a very important role in one of the clusters mentioned above, but only a limited one in another. What is much needed, therefore, is a case-by-case analysis. I briefly illustrated the different roles of governments in areas such as

  • online identity (currently no obvious need for government intervention, but “interoperability” among ID platforms on the “watch-list”);
  • information privacy (important role of government, probably less in the form of additional laws than in better implementation and enforcement as well as international coordination and standard-setting);
  • creativity and business of creativity (use power of market forces and bottom-up approaches in the first place, but role of governments at the margins, e.g. using leeway when legislating about DRM or law reform regarding limitations and exceptions to copyright law);
  • information quality and overload (only limited role of governments, e.g. by providing quality minima and/or digital service publique; emphasis on education, learning, media & information literacy programs for kids).

Based on these remarks, we identified some trends (e.g. multiple stakeholders shape our kids’ future online experiences, which creates the need for collaboration and coordination) and closed with some observations about the OECD’s role in such an environment, proposing four functions: awareness raising and agenda setting; knowledge creation (“think tank”); international coordination among various stakeholders; alternative forms of regulation, incl. best practice guides and recommendations.

Berkman Fellow Shenja van der Graaf also spoke at the Forum (transcripts here), and Miriam Simun presented our research project at a stand.

Today and tomorrow, the OECD delegates are discussing the take-aways of the Forum behind closed doors. Given the broad range of issues covered at the Forum, it will be interesting to see which items finally end up on the agenda of the Ministerial Conference (IPR, intermediary liability, and privacy are likely candidates).

Social Signaling Theory and Cyberspace


Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and Berkman Center Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory that was developed in evolutionary biology and has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book, in German.)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable. And what are the mechanisms that we have developed to ensure “honesty” of signals? It is obvious that these questions are extremely relevant from an Internet governance perspective – especially (but not only) vis-à-vis the enormous scale of online fraud and identity theft that occurs in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in what contexts we have an interest in signal reliability and honest signaling, respectively, and where not. This question is somewhat counterintuitive because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations, e.g., that I as a male player can engage in a role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in very diverse contexts such as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, where not?

My second point: Once we have defined contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches to increase reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals might play an increasingly important role (e.g. richness of virtual items produced by a player as a signal for the user’s skills, wealth and available time.)
However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – lowering the recipient’s policing/monitoring costs.) In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit-equilibrium of a dishonest signaler. A few examples to illustrate this point:

  • Technology/code/design: Increasing punishment costs for the sender by building efficient reputation systems based on persistent digital identities; use of aggregation and syndication tools to collect and “pool” experiences among many users to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals, where reliability is often easier to assess; see also here, p. 9.)
  • Market-based approach: Certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increase signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g. they might signal “I can afford this” as opposed to the desired signal of “I’m willing to pay for the service because I have honest intentions.”]
  • Law-based approach: Law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules such as requiring the online provider of goods to provide test results, product specifications, financial statements, etc.; warranties and liability rules; trademark laws in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.
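To make the cost/benefit-equilibrium framing above more concrete, here is a minimal sketch in Python (all names and numbers are hypothetical illustrations of mine, not drawn from Judith’s or John’s work or from the signaling literature) of how a platform can push the expected payoff of dishonest conventional signaling below zero by raising production costs, detection probability, or punishment:

# Minimal sketch of the cost/benefit calculus behind conventional-signal
# reliability; all parameters are hypothetical and purely illustrative.
from dataclasses import dataclass

@dataclass
class SignalingContext:
    benefit: float          # payoff if the recipient accepts the signal
    production_cost: float  # cost of producing the (possibly dishonest) signal
    detection_prob: float   # probability that a dishonest signal is detected
    punishment: float       # reputation/legal penalty if detected

def expected_payoff_dishonest(ctx: SignalingContext) -> float:
    """Expected payoff of sending a dishonest conventional signal."""
    return ((1 - ctx.detection_prob) * ctx.benefit
            - ctx.detection_prob * ctx.punishment
            - ctx.production_cost)

# A platform can change the equilibrium by raising production costs
# (e.g. registration fees), detection probability (reputation systems,
# pooled user reports), or punishment (persistent digital identities).
weak_platform = SignalingContext(benefit=10, production_cost=0.5,
                                 detection_prob=0.05, punishment=2)
strong_platform = SignalingContext(benefit=10, production_cost=3,
                                   detection_prob=0.6, punishment=15)

print(expected_payoff_dishonest(weak_platform))    # 8.9: cheating pays
print(expected_payoff_dishonest(strong_platform))  # -8.0: cheating does not

Each of the regulatory modes listed above can be read as turning one of these knobs.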

Finally, a third point in brief: Who will make the decisions about the degrees of required signal reliability in cyberspace? Who will make the choice among different reliability-enhancing mechanisms outlined above? Is it the platform designer, the Linden Labs of this world? If yes, what is their legitimacy to make such design choices? Are the users in power by voting with their feet – assuming that we’ll see the emergence of competition among different governance regimes as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What’s the role of governments, of law and regulation?

As always, comments appreciated.

Week in Review: IP and Behavioral Science, Records Management, and Internet Governance


IP & Behavioral Science: The P2P-Example

So far, I’ve had an interesting week. It started on Tuesday in Munich, where I attended a workshop on Intellectual Property Law and Behavioral Sciences, organized by the Gruter Institute and the Max Planck Institute for Intellectual Property. Scholars from both sides of the Atlantic discussed the promise of a behavioral science approach to IP law. I spoke about neuroscience and copyright law in the digital age, asking to what extent neuroscience might help us gain a better understanding of some of the most interesting copyright-related phenomena we’ve observed in cyberspace. Building upon earlier research conducted at the Berkman Center, I focused on the p2p file-sharing phenomenon. In the presentation, I tried to provide and map possible explanations of the file-sharing puzzle (why does it happen, why is it large-scale, why is it persistent?), using an extended version of Lessig’s four-modes-of-regulation framework in which Lessig’s “Dot” (i.e. the individual exposed to the four constraints on behavior) is replaced by a brain. My basic argument is that easy-to-use technology, market conditions, perceived illegitimacy of copyright norms and enforcement problems, and social norms overriding legal norms (among other factors) are necessary, but not sufficient, conditions to explain the emergence, scale, and persistence of the p2p file-sharing phenomenon. Other factors have to be taken into account too, including social signaling, trust, and reciprocity (see Ernst Fehr’s research) – elements that are implemented at the platform level through Charismatic Code. The resulting practices of sharing might be bolstered by, and correspond with, emotionally preferable states of mind. Since negative emotions are associated with free riding (defection triggers anger in others; defectors expect others to be angry with them), they might provide an incentive to share despite the lack of direct punishment on p2p networks. Positive emotions, by contrast, might result from cooperation/sharing: fMRI scans show that mutual cooperation is associated with activation in brain areas linked with reward processing (cf. Rilling, Gutman et al.)
The p2p case, in my view, nicely illustrates the promise (as well as the problems!) of an interdisciplinary research approach to IP law and policy – beyond law and economics.
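As a purely illustrative aside (my own toy model, not part of the Munich presentation, with invented numbers), the “necessary but not sufficient” argument can be sketched as a simple utility comparison: the classic constraints alone leave sharing roughly a wash for a hypothetical user, while an added emotional/reciprocity reward tips the balance toward sharing.

# Toy model of the decision to share on a p2p network; all magnitudes are
# invented and serve only to illustrate the argument in the text above.
def share_utility(expected_sanction: float, effort: float,
                  norm_pressure: float, emotional_reward: float) -> float:
    """Net utility of sharing a file; a positive value means the user shares."""
    return norm_pressure + emotional_reward - expected_sanction - effort

# Classic constraints only (no emotional/reciprocity reward): roughly a wash.
print(share_utility(expected_sanction=0.6, effort=0.5,
                    norm_pressure=1.0, emotional_reward=0.0))   # about -0.1

# Adding the warm-glow/reciprocity term tips the same user toward sharing.
print(share_utility(expected_sanction=0.6, effort=0.5,
                    norm_pressure=1.0, emotional_reward=1.5))   # about 1.4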

Records Management: Local Laws vs. Global Infrastructure and Policies

Yesterday, I went to New York City to attend a series of interesting meetings as part of a consulting job for a multinational Swiss company. Together with the Swiss project leader and U.S. colleagues, we continued a discussion on global records and information management strategies. Once again, I was particularly intrigued by the complexity and granularity of the interactions between legal and regulatory frameworks for information and records management on the one hand and IT infrastructure issues on the other – with regard to both the physical and the logical layer. From an information law perspective, it is particularly interesting to study how regulators and legislators have been influenced by particular cases (e.g. Enron) as well as by technological developments (e.g. storage media and techniques). The legal and regulatory responses are far from coherent even within a single jurisdiction (in the U.S., for instance, different approaches have been taken to paper records, electronic documents, and email retention) and vary, not surprisingly, significantly among jurisdictions. For a global company, this heterogeneity and occasional inconsistency of rules and regulations presents a tough challenge when it seeks to develop a global information and records management system as well as globally applicable corporate policies (e.g. on email management and retention). The complexity of designing and implementing such systems and policies increases further because each approach has different ramifications in areas such as litigation (buzzword: e-discovery) that have to be taken into account in an iterative decision-making process.
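As a rough, purely hypothetical illustration of the design problem (the retention periods below are invented and are of course not legal advice), a global records-management policy can be modeled as a per-jurisdiction retention schedule that is resolved to the most demanding applicable rule:

# Hypothetical sketch: reconcile heterogeneous local retention rules by
# keeping a per-jurisdiction schedule and applying the longest period.
RETENTION_YEARS = {
    # (jurisdiction, record_type): minimum retention in years (invented)
    ("US", "email"): 3,
    ("US", "financial_record"): 7,
    ("CH", "email"): 5,
    ("CH", "financial_record"): 10,
}

def global_retention(record_type: str, jurisdictions: list[str]) -> int:
    """Longest retention period required across the listed jurisdictions."""
    return max(RETENTION_YEARS.get((j, record_type), 0) for j in jurisdictions)

# Under this invented schedule, a company operating in both jurisdictions
# would retain financial records for 10 years globally.
print(global_retention("financial_record", ["US", "CH"]))  # 10

Real rules, of course, differ not only in numbers but also in scope and in their ramifications for areas such as e-discovery, which is what makes a single global policy so difficult.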

Internet Governance: Mapping a Diverse Diversity Debate

Right now, I’m waiting in Zurich for a delayed flight to London, from where I will be traveling to Oxford in order to attend a workshop on Internet Governance hosted by the Oxford Internet Institute. The workshop seeks to clarify the issues that will be addressed at the first Internet Governance Forum (IGF) meeting, which takes place in Athens later this year. I have drafted a position paper on the diversity issue. The paper maps the diverse diversity debate and summarizes some of the key challenges faced by the IGF. The conclusion of the short paper (I will post a full PDF version later on) reads as follows:

An initial analysis of the contributions to the first IGF meeting confirms the impression that the “diversity” debate includes a broad range of topics. This position paper has outlined the contours of a framework that might be helpful to map the various issues addressed in the respective contributions. The challenges faced by the IGF, however, go far beyond analysis and categorization. First, the many items on the diversity agenda have to be prioritized. Second, the IGF — like other policy-makers (or “-shapers”) in cyberspace — faces the challenge of synchronizing technological innovation and market development with regulatory evolution if it chooses to set diversity as an item on the regulators’ agenda. Third, the IGF needs to decide on the approaches, institutions, and structures that are apt to deal with the complex components (and the interactions among them) of a diverse information environment. In this context, the promise and limits of a laissez-faire approach to diversity need to be assessed as well. Fourth, the IGF faces the challenge of facilitating discourse among stakeholders from various cultural, societal, economic and legal backgrounds. A look at the history of (national) debates about diversity in electronic media in general and content diversity in particular suggests that these cultural differences will make any implementation efforts at the international level particularly tough.

Against this backdrop, the IGF would be well-advised to focus on specific and clearly defined issues (e.g., the IDN issue), while gaining a deeper understanding — and raising awareness — of the interplay among the many elements that are crucial for building and maintaining a diverse digitally networked information environment.

Tomorrow, finally, back to St. Gallen.

EU Parliament Calls For Code of Conduct For Internet Intermediaries Doing Biz In Repressive Countries


With the usual time-lag, the debate about Internet censorship in repressive countries such as China and the role of Internet intermediaries such as Google, Microsoft, and Yahoo! has now arrived in Europe. The EU Parliament confirms what many of us have argued for months: that the problem of online censorship is neither exclusively a problem of U.S.-based companies nor only about China.

The recent resolution on freedom of expression on the Internet by the European Parliament starts with references to previous resolutions on human rights and freedom of the press, including the WSIS principles, as well as international law (Universal Declaration of Human Rights) and opens with the European-style statement that restrictions on online speech “should only exist in cases of using the Internet for illegal activities, such as incitement to hatred, violence and racism, totalitarian propaganda and children’s access to pornography or their sexual exploitation.”

Later, the resolution lists some of the speech-repressive regimes, including China, Belarus, Burma, Cuba, Iran, Libya, Maldives, Nepal, North Korea, Uzbekistan, Saudi Arabia, Syria, Tunisia, Turkmenistan and Vietnam. The resolution then makes explicit references to U.S.-based companies by recognizing that the “…Chinese government has successfully persuaded companies such as Yahoo, Google and Microsoft to facilitate the censorship of their services in the Chinese internet market” and “notes that other governments have required means for censorship from other companies.” European companies come into play with regard to the sale of equipment to repressive governments, stating that

“… equipment and technologies supplied by Western companies such as CISCO Systems, Telecom Italia, Wanadoo, a subsidiary of France Telecom have been used by governments for the purpose of censoring the Internet preventing freedom of expression.” (emphasis added.)

The resolution, declaratory in nature, in what is probably one of its most significant parts calls on the European Commission and the Council “to draw up a voluntary code of conduct that would put limits on the activities of companies in repressive countries.” The policy document also stresses the broader responsibility of companies providing Internet services such as search, chat, or publishing to ensure that users’ rights are respected. Hopefully, the Commission and the Council will recognize that several initiatives aimed at drafting such codes of conduct are underway on both sides of the Atlantic (I have myself been involved in some of these processes, including this one) and will engage in conversations with the various groups involved in these processes. In any event, it will be interesting to see how the Commission and the Council approach this tricky issue, and to what extent, for instance, they will include privacy statements in such a set of principles – a crucial aspect that, interestingly enough, has not been explicitly addressed in the Parliament’s resolution.

The resolution also calls on the Council and Commission “when considering its assistance programmes to third countries to take into account the need for unrestricted access by their citizens.” Further coverage here.

Update: On the “European Union’s schizophrenic approach to freedom of expression”, read here (thanks, Ian.)

JP/JZ Mash-up: Live from OII SDP


John Palfrey runs a session today at the Oxford Internet Institute’s Summer Doctoral Program, presenting and discussing Jonathan Zittrain’s paper on Internet Generativity (a.k.a. Z-theory). John starts by mapping the evolution of cyberlaw and policy discourses leading up to the Z-theory.

  • ’82: e-2-e arguments in system design (Saltzer, Reed, Clark paper) – a technical argument
  • Fast forward to ’96: Internet no longer a medium of academics, geeks, etc.
  • In 1996, two strong political arguments emerged. 1) John P. Barlow at the WEF in Davos: Declaration of the Independence of Cyberspace: governments have no place in cyberspace; out of reach. 2) David Johnson/David Post, Law & Borders: similar argument, framed differently – a libertarian view of government; the claim is more descriptive than Barlow’s. (The Internet is different.)
  • Lessig’s “Code”: Johnson & Post are wrong, Barlow too. Internet is not unregulable. It’s regulated all the time. Four means of regulation, incl. law (“east coast code”), code (“west coast code”), social norms, and markets. (originally three, added markets.) Interplay among the forces (indirect regulation). Framework of four modes of regulation a.k.a. New Chicago School.
  • 2002/03: Rise of the wisdom of the crowd. Yochai Benkler. 1) “Hourglass architecture” arguments: different layers of the Internet. It’s not only about the regulation of dots (cf. Lessig’s illustration of the four modes of regulation), it’s also about the question how it is regulated at the different layer (physical, logical, content) = refining Lessig & e-2-e principle; 2) Coase’s Penguin. Nature of firm has changed (OSS); emergence of third mode of production (commons-based peer production); non-compensated works.
    • Emergence of these forms of interaction is a reason not to regulate.
    • Means of regulation: the crowd itself could become a regulatory force, beyond the individual/social norms mode.
  • Here, Zittrain comes into play. Z-theory: Four key claims. 2 descriptive, 2 normative arguments:
    • Extraordinary security threats exist (so far, the focus of regulators has been on different things, e.g. porn): the threat of a “digital 9/11”; e-2-e network design leaves the network open and makes it vulnerable. Viruses (worms, etc.) could wipe out everything.
    • Response to that real security threat: “code” in the form of a lock-down of the PC.
      • TiVo-ization of the PC/Internet (other example: mobile phones come out of the box and are not programmable)
    • What to do about it? So far: leave the net alone (the e-2-e argument). However, we need a better argument for what the response should be – the argument of generativity. What we care about is not the e-2-e principle but systems that are generative (e.g. the MS operating system, on which you can run an .exe file). Positive principle: if it’s generative, it’s good.
    • Way to get there: Think of new solutions that build upon Benkler’s second argument: wisdom of the crowd (“5th mode of regulation”). Peer production of governance.
      • E.g. stopbadware.org
      • Challenges (e.g.): what does it mean for institutional design and institution building? Implications of the approach: privacy concerns (see JZ’s paper)

Now discussion. Job well done, JP, as always.

Power of Search Engines: Some Highlights of Berlin Workshop


I’ve spent the past two days here in Berlin, attending an expert workshop on the rising power of search engines organized by Professor Marcel Machill and hosted by the Friedrich Ebert Stiftung, and a public conference on the same topic.

I much enjoyed yesterday’s presentations by a terrific group of scholars and practitioners from various countries and with different backgrounds, ranging from informatics, journalism, economics, and education to law and policy. The extended abstracts of the presentations are available here. I presented my recent paper on search engine law and policy. Among the workshop’s highlights (small selection only):

* Wolfgang Schulz and Thomas Held (Hans Bredow Institute, Univ. of Hamburg) discussed the differences between search-based filtering in China versus search engine content regulation in Germany. In essence, Schulz and Held argued that procedural safeguards (including independent review), transparency, and the requirement that legal filtering presupposes that the respective piece of content is “immediately and directly harmful” make the German system radically different from the Chinese censorship regime.

* Dag Elgesem (Univ. of Bergen, Department of Information Science) made an interesting argument about how we (as scholars) perceive users as online searchers. While the shift from passive consumers to active users has been debated mainly in the context of the creation/production of information, knowledge, and entertainment (one of my favorite topics, as many of you know), Dag argues that online searchers, too, have become “active users” in Benkler’s sense. In contrast, Dag argued, much of our search engine policy discussion has assumed a rather passive user who just types in a search term and uses whatever he gets in response to the query. Evidently, the underlying conception of users in their role as online searchers is important because it shapes the analysis of whether regulatory interventions are necessary (e.g. with regard to transparency, market power, and the “Meinungsmacht” of search engines).

* Boris Rotenberg (DG Joint Research Center, European Commission, Sevilla) linked, in an interesting way, the search engine user’s privacy – as an expression of informational autonomy – with the user’s freedom of expression and information. He argues, in essence, that the increased use of personal data by search engine operators in the course of their attempts to personalize search might have a negative impact on freedom of information in at least three regards. First, extensive use of personal data may lead to user-side filtering (the Republic.com scenario). Second, it might produce chilling effects by restricting “curious searches”. Third, personalization tends to create strong ties to a particular (personalized) search engine, making it harder for users to switch to alternative engines (the “stickiness” argument).

* Benjamin Peters (Columbia University) used the Mohammed cartoon controversy to explore three questions: (1) To what extent do search engines eliminate the role of traditional editors? (2) Do algorithms have any sort of built-in ethics? (Benjamin’s answer, based on David Weinberger’s notion of links as acts of generosity: yes, they do.) (3) What are the elements of a “search engine democracy”?

* Dirk Lewandowski (Department of Information Science, Heinrich-Heine Univ.) provided a framework for assessing a search engine’s quality. He argues that the traditional measure “precision” – as part of retrieval quality – is not a particularly useful criterion for evaluating and comparing search engines’ quality, because the major search engines produce almost the same scores on the precision scale (as Dirk has demonstrated empirically); a short sketch of the precision measure follows these highlights. Dirk’s current empirical research focuses on a search engine’s index quality, incl. elements such as reach (e.g. geographic reach), size of the index, and freshness/frequency of updates.

* Nadine Schmidt-Maenz (Univ. of Karlsruhe, Institute for Decision Theory and Management Science) presented the tentative results of an empirical long-term study on search queries. Nadine and her team have automatically observed and analyzed the live tickers of three different search engines and clustered over 29 million search terms. The results are fascinating, and the idea of topic detection, tracking, and – even more interestingly – topic prediction (!) is highly relevant for the search engine industry, both from a technological and a business perspective. From a different angle, we also discussed the potential impact of reliable topic forecasting on agenda-setting and journalism.

* Ben Edelman (Department of Economics, Harvard Univ.) demonstrated empirically that search engines are at least in part responsible for the spread of spyware, viruses, pop-up ads, and spam, but that they have taken only limited steps to avoid sending users to hostile websites. He also offered potential solutions to these problems, including safety labeling of individual search results by the search engine providers and changes in the legal framework (liability rules) to create the right incentive structure for search engine operators to contribute to overall web safety.
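For readers unfamiliar with the “precision” measure Dirk refers to, here is a minimal sketch with made-up documents and relevance judgments; it also shows how two engines can tie on precision while still differing on the index-quality dimensions he emphasizes.

# Minimal sketch of precision@k: the share of the top-k results judged
# relevant. The documents and relevance judgments below are made up.
def precision_at_k(results: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved results that are relevant."""
    return sum(1 for doc in results[:k] if doc in relevant) / k

relevant_docs = {"d1", "d2", "d5"}
engine_a = ["d1", "d3", "d2", "d4", "d5"]
engine_b = ["d2", "d1", "d4", "d5", "d6"]

# Both engines score 0.6 on precision@5, even though they may differ in
# index reach, size, or freshness -- which is Lewandowski's point.
print(precision_at_k(engine_a, relevant_docs, 5))  # 0.6
print(precision_at_k(engine_b, relevant_docs, 5))  # 0.6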

Lots of food for thought. What I’d like to explore in greater detail is Dag’s argument that users as online searchers, too, have become highly (inter-)active – probably not only as active information retrievers, but increasingly also as active producers of information while engaged in search activities (e.g. by reporting on search experiences, contributing to social search networks, etc.)

YJoLT-Paper on Search Engine Regulation


The Yale Journal of Law and Technology just published my article on search engine regulation. Here’s the extended abstract:

The use of search engines has become almost as important as e-mail as a primary online activity. Arguably, search engines are among the most important gatekeepers in today’s digitally networked environment. Thus, it does not come as a surprise that the evolution of search technology and the diffusion of search engines have been accompanied by a series of conflicts among stakeholders such as search operators, content creators, consumers/users, activists, and governments. This paper outlines the history of the technological evolution of search engines and explores the responses of the U.S. legal system to the search engine phenomenon in terms of both litigation and legislative action. The analysis reveals an emerging “law of search engines.” As the various conflicts over online search intensify, heterogeneous policy debates have arisen concerning what forms this emerging law should ultimately take. This paper offers a typology of the respective policy debates, sets out a number of challenges facing policy-makers in formulating search engine regulation, and concludes by offering a series of normative principles which should guide policy-makers in this endeavor.

As always, comments are welcome.

In the same volume, see also Eric Goldman’s Search Engine Bias and the Demise of Search Engine Utopianism.

New OECD Reports on Digital Media Policy


Two new documents from the OECD on digital media policy. The first report is the official summary of the OECD – Italy MIT Conference on the Future Digital Economy: Digital Content, Access and Distribution (see Terry Fisher’s main conclusions and the interesting policy items at the end – monopoly of search engines, DRM, user-created content).

The second report is an OECD study on Digital Broadband Content: Digital Content Strategies and Policies. As a complement to the above conference, this study identifies and discusses six groups of business and public policy issues and illustrates them with existing and potential digital content strategies and policies.

Some Highlights of Yale’s A2K Conference


Our colleagues and friends at the Information Society Project at Yale Law School have organized a landmark conference on Access to Knowledge (A2K), taking place this weekend at Yale Law School, which brings together leading thinkers and activists on A2K policy from North and South and aims at generating concrete research agendas and policy solutions for the next decade. The impressive program, with close to 20 plenary sessions and workshops, is available here. Also check the resources page and the conference wiki (with session notes).

Here are some of Friday’s and yesterday’s conference highlights in newsflash-format:

  • Jack Balkin’s framework outlining core themes of the A2K discourse. The three main elements of a theory of A2K: (1) A2K is a demand of justice; (2) A2K is an issue of economic development as well as an issue of individual participation and human liberty; (3) A2K is about IP, but it is also about far more than that. Balkin’s speech is posted here.
  • Joel Mokyr’s lecture on three core questions of A2K: (a) Access to what kind of knowledge (propositional vs. prescriptive)? (b) Access by how many users? Direct or indirect access? (question of access intermediaries and the control of their quality) (c) Access at what costs? (Does a piece of knowledge that I need exist? If yes, where; who has it? How to get it? Verification of its trustworthiness.)
  • Yochai Benkler’s fast-paced presentation on the idea of A2K as a response to 4 long-term trends (decolonization -> increased integration; rapid industrialization -> information/knowledge economy; mass media monopolies -> networked society; communism and other -isms -> human dignity), the reasons why we should care about it (justice and freedom), the sources of the A2K movement as a response to these 4 long-term trends (incl. access to medicine, the internet freedom movement, information commons, FOSS, the human genome project, spectrum commons, open access publications, digital libraries, …), and the current moment of opportunity in areas such as the regulation of information production and telecommunications policy.
  • Eric von Hippel’s discussion of norm-based IP systems and a recent study on the cultural norms shared among Michelin-starred French chefs that regulate – as a substitute for copyright law – how they protect ownership of their recipes.
  • Keith Maskus’ lecture on the interplay between trade liberalization and increased IP protection of technologies, with an overview of econometric studies regarding key IPR claims in this zone (transparent and enforceable IP regimes do seem to encourage increases in IT investments and associated export growth, both at the aggregate and the micro level; however, the claim is conditional, i.e., it holds in middle-income countries, but there is no evidence for low-income developing countries).
  • Eli Noam’s talk on the evolution of firms from the pre-industrial age to today’s digitally networked environment, in which organizations are increasingly defined by information. More on the McLuhanization of the firm here.
  • Suzanne Scotchmer’s presentation on the design of incentive systems to manage possible conflicts among incentive goals such as the promotion of R&D, the promotion of its use, and trade policy goals. Scotchmer’s lecture was based on her book Innovation and Incentives.
  • Michael Geist’s overview of the current controversies surrounding the idea of a two-tiered Internet – hot topics include, among others, VoIP, content control, traffic shaping, public vs. private internet, and website premium – and his discussion of the core policy questions (Is legal protection from Internet tiering required? Is tiering needed for network building and management? Is it a North–South issue?)
  • Susan Crawford’s discussion of the different perspectives of the Bellheads versus the Netheads and the clash of these world views in the Net neutrality debate. Susan’s key arguments are further discussed in this paper.
  • Pam Samuelson’s lecture on the history of the WIPO Internet Treaties, the battles surrounding the DMCA and the EUCD, the fight against database protection in the U.S., and the lessons we can learn from these earlier tussles with regard to the A2K movement (first of all, don’t be polemic – engage in thorough research). [Update: excellent notes of Pam’s lecture taken by Susan Crawford.]
  • Jamie Love’s action points for the A2K movement, including the following (see here): (1) Stop, resist or modify the setting of bad norms; (2) change, regulate, and resist bad business practices; (3) create new modes of production (commercial and non-commercial) of knowledge goods; (4) create global frameworks and norms that promote A2K.
  • Natali Helberger’s discussion of the proposed French provision on interoperability (Art. 7 of the IP Act) as an expression of cultural policy and national interests.

Basic Design Principles for Anti-Circumvention Legislation (Draft)


Over the past few weeks I’ve been working, among other things, on a paper on third-layer protection of digital content, i.e., anti-circumvention legislation in the spirit of Art. 11 WCT and Art. 18 WPPT and its counterparts in regional or national legislation (e.g. Art. 6/8 EUCD and Sec. 1201 DMCA). The 50+ page, single-spaced paper is very much research in progress. It is based on prior research and takes as its baseline that many countries have already enacted legislation or will soon legislate on TPM in order to comply either with international obligations under WIPO or with international free trade agreements involving a party with powerful content industries, such as the U.S. Thus, I argue that the immediate question before us is no longer whether the second and third layers of protection of digital works are appropriate or viable (personally, I’m convinced that they are not, but that’s another story. BTW, initial reactions to my draft paper by friends suggest that I should use stronger language and make a clear normative statement in this regard. I’m not sure whether a more radical approach will contribute to the project’s goal, but I will reconsider it.) Rather, at this stage, attention should be drawn to the alternative design choices that remain for countries that face the challenge of drafting or revisiting a legal regime aimed at protecting TPM.

Consequently, the purpose of the working paper (drafted in the context of a consulting job for a government in the Middle East) is to identify different legislative and regulatory approaches and to discuss them in the light of previous experiences with TPM legislation in the U.S. and in Europe. Ultimately, the paper seeks to formulate basic design (or best practice) principles and to sketch the contours of a model law that aims to foster innovation in the digitally networked environment and to minimize frequently observed spillover effects of TPM legislation.

The paper is divided into three parts. In the first Part, I provide a brief overview of international and national legal frameworks that protect technological measures by banning the circumvention of TPM. The second Part of the paper discusses three particularly important as well as generally contested elements of anti-circumvention legislation—i.e., subject matter and scope; exemption interface; sanctions and remedies—and analyzes in greater detail some of the differences among jurisdictions in order to identify alternative approaches or what we may call “design choices.” The third Part provides a brief summary of what commentators have identified as core areas of concern with this type of legislation. Based on the findings of Part II and the preceding section, basic design principles will be suggested. The final section paints in broad strokes a model law with discussion issues and some guiding principles that might be helpful to policy makers who face the challenge of crafting anti-circumvention legislation.

Today, I’d like to share with you some thoughts at the most abstract level of the paper. Against the backdrop of the analysis in the first two Parts, I tried to formulate five basic design principles for legislators that face the challenge of implementing the WIPO Internet Treaties’ anti-circumvention provisions. These principles are further specified in the final part of the paper, which provides the rough outline of a model law. The relevant section reads as follows:

“Part II of the paper and the previous section have analyzed, inter alia, what approaches to TPM legislation have been taken and what consequences (intended as well as unintended) certain design choices might have. For the reasons discussed in Part II.C., it is not feasible to provide detailed substantive guidance as to what an anti-circumvention framework should look like without knowing the specifics of the legislative, judicial, cultural, economic, and political environment of the implementing country. However, it is possible, based on the analysis in this paper, to suggest three basic subject-matter design principles that should be taken into account by policy makers when drafting and enacting anti-circumvention laws:

  • Principle 1: Get the terminology right, i.e. provide precise, clear, and unambiguous definitions of key concepts and terms such as “technological (protection) measures,” “effective” TPM, “acts of circumvention,” etc. The analysis of existing anti-circumvention laws in different jurisdictions across continents suggests that legislators, by and large, have done a poor job of defining the core terms of anti-circumvention law. Although it is true that laws often use abstract terms that require interpretation, it is striking how many vague concepts and ambiguous terms have been identified within the context of TPM legislation. The EUCD, as it has been transposed into the laws of the EU Member States, is particularly illustrative of this point since it leaves it up to the national courts and, ultimately, to the European Court of Justice to define some of the basic terms used in the respective pieces of legislation. In particular, legislators should avoid merely “copying and pasting” provisions as set out by international treaties or other sources of norms without making deliberative choices about the concepts and terms that are used.
  • Principle 2: Recite traditional limitations and exceptions to copyright in the context of anti-circumvention provisions. The review of exception regimes under various legal frameworks as well as the overview of initial experiences with anti-circumvention legislation in the U.S. and in Europe has suggested that anti-circumvention provisions tend to change the carefully balanced allocation of rights and limitations previously embodied in the respective national copyright laws. Particularly significant shifts can be observed in areas such as research (including reverse engineering), teaching, and traditional user privileges such as fair use or the “right” to make private copies. Apparently, not all of these shifts have been intended or anticipated by policy makers. Thus, it is crucial to carefully design the exception framework applicable to TPM, provide appropriate mechanisms for the effective enforcement of exceptions, analyze the interplay of the exception regime with the other core elements of the anti-circumvention framework, and conduct an in-depth impact analysis.
  • Principle 3: Use discretion with regard to sanctions and remedies and adhere to the principle of proportionality. International legal frameworks provide some degrees of flexibility in drafting civil and criminal penalties. Implementing countries should carefully consider the available design choices under the applicable framework, thereby following the principle of proportionality. Among the usual options to be considered are limitations on criminal and civil liability for non-profit institutions such as libraries, archives, and educational institutions, flexible sanctions for innocent infringers, and limitations on sanctions for legitimate purposes such as scientific research and teaching. Again, the interplay among the liability provisions and the other elements of the framework, including scope and exceptions, must be equilibrated.

The review of various controversies—both in practice and theory—surrounding the implementation and application of anti-circumvention frameworks suggests, as noted above, that both the intended effects (e.g. on piracy) as well as the unintended consequences of third layer protection of copyright (e.g. on competition, innovation, etc.) remain uncertain and contested. In this situation of uncertainty and in light of anecdotal evidence suggesting spillover-effects, policy-makers are well-advised to complement the three principles outlined above by two more general principles.

  • Principle 4: Incorporate procedures and tools that permit the monitoring and review of the effects of the anti-circumvention provisions on core values of a given society. Given the degrees of uncertainty mentioned above, it is crucial to establish mechanisms that enable policy makers and stakeholders to systematically identify and assess the effects of TPM and corresponding legislation and, thus, to incorporate what we might call the ability to learn and improve based on “law in action.” Such processes and tools might include legislative, administrative, or academic review and might focus, among others, on the core zones of concern outlined above with special attention to the exception regime.
  • Principle 5: Set the default rule in such a way that the proponents of a more protective anti-circumvention regime bear the burden of proof. As noted, experiences with anti-circumvention legislation so far have not (or, at best, only partly) been aligned with its raison d’être. Instead, attention has been drawn to unintended consequences. This situation requires that the proponents advocating in favor of a more protective regime (i.e., a regime that increases, along the spectrum set by international obligations, the constraints on a user’s behavior) provide evidence why additional protections for TPM—e.g. in the form of broader scope, narrower exceptions, more severe penalties, or the like—are necessary.”

Comments welcome.
