
Archive for the 'policy' Category

Basic Design Principles for Anti-Circumvention Legislation (Draft)


Over the past few weeks I’ve been working, among other things, on a paper on third-layer protection of digital content, i.e., anti-circumvention legislation in the spirit of Art. 11 WCT and Art. 18 WPPT and its counterparts in regional and national legislation (e.g., Art. 6/8 EUCD and Sec. 1201 DMCA). The 50+ page, single-spaced paper is very much research in progress. It builds on prior research and takes as its baseline that many countries have already enacted legislation on TPM or will soon do so in order to comply either with international obligations under the WIPO treaties or with free trade agreements involving a party with powerful content industries, such as the U.S. Thus, I argue that the immediate question before us is no longer whether the second and third layers of protection of digital works are appropriate or viable (personally, I’m convinced that they are not, but that’s another story. BTW, initial reactions to my draft paper by friends suggest that I should use stronger language and make a clear normative statement in this regard. I’m not sure whether a more radical approach will contribute to the project’s goal, but I will reconsider it.) Rather, at this stage, attention should be drawn to the alternative design choices that remain open to countries facing the challenge of drafting or revisiting a legal regime aimed at protecting TPM.

Consequently, the purpose of the working paper (drafted in the context of a consulting job for a government in the Middle East) is to identify different legislative and regulatory approaches and to discuss them in the light of previous experiences with TPM legislation in the U.S. and in Europe. Ultimately, the paper seeks to formulate basic design (or best practice) principles and to sketch the contours of a model law that aims to foster innovation in the digitally networked environment and minimize the frequently observed spillover effects of TPM legislation.

The paper is divided into three parts. In the first Part, I provide a brief overview of international and national legal frameworks that protect technological measures by banning the circumvention of TPM. The second Part of the paper discusses three particularly important as well as generally contested elements of anti-circumvention legislation—i.e., subject matter and scope; exemption interface; sanctions and remedies—and analyzes in greater detail some of the differences among jurisdictions in order to identify alternative approaches or what we may call “design choices.” The third Part provides a brief summary of what commentators have identified as core areas of concern with this type of legislation. Based on the findings of Part II and the preceding section, basic design principles will be suggested. The final section paints in broad strokes a model law with discussion issues and some guiding principles that might be helpful to policy makers who face the challenge of crafting anti-circumvention legislation.

Today, I’d like to share with you some thoughts at the most abstract level of the paper. Against the backdrop of the analysis in the first two Parts of the paper, I tried to formulate five basic design principles for legislators who face the challenge of implementing the WIPO Internet Treaties’ anti-circumvention provisions. These principles are further specified in the final part of the paper, which provides the rough outline of a model law. The relevant section reads as follows:

“Part II of the paper and the previous section have analyzed, inter alia, what approaches to TPM legislation have been taken and what consequences (intended as well as unintended) certain design choices might have. For the reasons discussed in Part II.C., it is not feasible to provide detailed substantive guidance as to what an anti-circumvention framework should look like without knowing the specifics of the legislative, judicial, cultural, economic, and political environment of the implementing country. However, it is possible, based on the analysis in this paper, to suggest three basic subject-matter design principles that should be taken into account by policy makers when drafting and enacting anti-circumvention laws:

  • Principle 1: Get the terminology right, i.e., provide precise, clear, and unambiguous definitions of key concepts and terms such as “technological (protection) measures,” “effective” TPM, “acts of circumvention,” etc. The analysis of existing anti-circumvention laws in different jurisdictions across continents suggests that legislators, by and large, have done a poor job of defining the core terms of anti-circumvention law. Although it is true that laws often use abstract terms that require interpretation, it is striking how many vague concepts and ambiguous terms have been identified within the context of TPM legislation. The EUCD, as it has been transposed into the laws of the EU Member States, is particularly illustrative of this point since it leaves it up to the national courts and, ultimately, to the European Court of Justice to define some of the basic terms used in the respective pieces of legislation. In particular, legislators should avoid merely “copying and pasting” provisions as set out by international treaties or other sources of norms without making deliberate choices about the concepts and terms that are used.
  • Principle 2: Recite traditional limitations and exceptions to copyright in the context of anti-circumvention provisions. The review of exception regimes under various legal frameworks as well as the overview of initial experiences with anti-circumvention legislation in the U.S. and in Europe has suggested that anti-circumvention provisions tend to change the carefully balanced allocation of rights and limitations previously embodied in the respective national copyright laws. Particularly significant shifts can be observed in areas such as research (including reverse engineering), teaching, and traditional user privileges such as fair use or the “right” to make private copies. Apparently, not all of these shifts have been intended or anticipated by policy makers. Thus, it is crucial to carefully design the exception framework applicable to TPM, provide appropriate mechanisms for the effective enforcement of exceptions, analyze the interplay of the exception regime with the other core elements of the anti-circumvention framework, and conduct an in-depth impact analysis.
  • Principle 3: Use discretion with regard to sanctions and remedies and adhere to the principle of proportionality. International legal frameworks provide some degree of flexibility in drafting civil and criminal penalties. Implementing countries should carefully consider the available design choices under the applicable framework, thereby following the principle of proportionality. Among the usual options to be considered are limitations on criminal and civil liability for non-profit institutions such as libraries, archives, and educational institutions, flexible sanctions for innocent infringers, and limitations on sanctions for legitimate purposes such as scientific research and teaching. Again, the interplay among the liability provisions and the other elements of the framework, including scope and exceptions, must be carefully balanced.

The review of various controversies—both in practice and in theory—surrounding the implementation and application of anti-circumvention frameworks suggests, as noted above, that both the intended effects (e.g. on piracy) and the unintended consequences of third-layer protection of copyright (e.g. on competition, innovation, etc.) remain uncertain and contested. In this situation of uncertainty, and in light of anecdotal evidence suggesting spillover effects, policy makers are well-advised to complement the three principles outlined above with two more general principles.

  • Principle 4: Incorporate procedures and tools that permit the monitoring and review of the effects of the anti-circumvention provisions on core values of a given society. Given the degrees of uncertainty mentioned above, it is crucial to establish mechanisms that enable policy makers and stakeholders to systematically identify and assess the effects of TPM and the corresponding legislation and, thus, to incorporate what we might call the ability to learn and improve based on “law in action.” Such processes and tools might include legislative, administrative, or academic review and might focus, among other things, on the core areas of concern outlined above, with special attention to the exception regime.
  • Principle 5: Set the default rule in such a way that the proponents of a more protective anti-circumvention regime bear the burden of proof. As noted, experiences with anti-circumvention legislation so far have not (or at best, only partly) been aligned with its raison d’être. Instead, attention has been drawn to unintended consequences. This situation requires that the proponents advocating in favor of a more protective regime (i.e., a regime that increases, along the spectrum set by international obligations, the constraints on a user’s behavior) must provide evidence why additional protections for TPM—e.g., in the form of broader scope, narrower exceptions, more severe penalties, or the like—are necessary.”

Comments welcome.

Global Online Freedom Act of 2006: Evil is in the Details


I’ve just read Rep. Chris Smith’s discussion draft of a “Global Online Freedom Act of 2006,” which has been made available online on Rebecca MacKinnon’s blog. Rebecca nicely summarizes the key points of the draft. From the legal scholar’s rather than the activist’s viewpoint, however, some of the draft bill’s nitty-gritty details are equally interesting. Among the important definitions is certainly the term “legitimate foreign law enforcement purposes,” which appears, for instance, in the definition of substantial restrictions on Internet freedom and in sec. 206 on the integrity of user identifying information. According to the draft bill, the term “legitimate foreign law enforcement purposes” means

“for purposes of enforcement, investigation, or prosecution by a foreign official based on a publicly promulgated law of reasonable specificity that proximately relates to the protection or promotion of health, safety, or morals of the citizens of that jurisdiction.”

And the next paragraph clarifies that

“the control, suppression, or punishment of peaceful expression of political or religious opinion does not constitute a legitimate foreign law enforcement purpose.” [Emphasis added.]

While the first part of the definition makes a lot of sense, the second part is more problematic to the extent that it suggests, at least at first glance, a de facto export of U.S. free speech standards to the rest of the world. Although recent Internet rulings by U.S. courts have suggested an expansion of the standard under which U.S. courts will assert jurisdiction over free speech disputes that arise in foreign jurisdictions, it has been my and others’ impression that U.S. courts are (still?) reluctant to export free speech protections globally (see, e.g., the 9th Circuit Court of Appeals’ recent Yahoo! ruling).

Indeed, it would be interesting to see how the above-mentioned definition would relate to French legislation prohibiting certain forms of hate speech, or to German regulations banning certain forms of expression—blacklists, by the way, which are also incorporated by European subsidiaries of U.S.-based search engines and content hosting services.

While the intention of the draft bill is certainly a legitimate one and while some of the draft provisions (e.g. on international fora, a code of conduct, etc.) deserve support, the evil—as usual—is in the details. Given its vague definitions, the draft bill (should it become law) may well produce spillover effects by restricting the business practices of U.S. Internet intermediaries even in democratic countries that happen (for legitimate, often historical reasons) not to share the U.S.’ extensive free speech values.

Addendum: Some comments on the draft bill from the investor’s perspective here. Note, however, that the draft bill also covers foreign subsidiaries of U.S. businesses to the extent that the latter control the voting shares or other equities of the foreign subsidiary or authorize, direct, control, or participate in acts carried out by the subsidiary that are prohibited by the Act.

Information Ethics: U.S. Hearing, but Global Responsibility


Today, the US House of Representatives’ International Relations Subcommittee on Africa, Global Human Rights and International Operations, and the Subcommittee on Asia and the Pacific are holding an open hearing on the question of whether the Internet in China is a Tool for Freedom or Suppression. My colleague Professor John Palfrey, among the foremost Internet law & policy experts, has prepared an excellent written testimony. In his testimony, John summarizes the basic ethical dilemmas for U.S. corporations such as Google, Microsoft, Yahoo and others who have decided to do business in countries like China with extensive filtering and surveillance regimes. John also raises the question of the extent to which a code of conduct for Internet intermediaries could guide these businesses and give them a base of support for resisting abusive surveillance and filtering requests, as well as the role academia could play in developing such a set of principles.

I’m delighted that our Research Center at the University of St. Gallen in Switzerland is part of the research initiative mentioned in John’s testimony that is aimed at contributing to the development of ethical standards for Internet intermediaries. Over the past few years, a team of our researchers has explored the emergence, functionality, and enforcement of standards that seek to regulate the behavior of information intermediaries. It is my hope that this research, in one way or another, can contribute to the initiative announced today. Although the ethical issues in cyberspace are in several regards structurally different from those emerging in offline settings, I argue that we can benefit from prior experiences with and research on ethics for international businesses in general and information ethics in particular.

So far, the heated debate about the ethics of globally operating Internet intermediaries has been a debate about the practices of large and influential U.S. companies. On this side of the Atlantic, however, we should not make the mistake of thinking that the hard questions Palfrey and other experts will be discussing today before the above-mentioned Committees are “U.S.-made” problems. Rather, the concern, challenge, and project – designing business activities that respect and foster human rights in a globalized economy with local laws and policies, including restrictive or even repressive regulatory regimes – are truly international in nature, especially in today’s information society. Viewed from that angle, it is almost surprising that we haven’t seen more constructive European contributions to this discourse. We should not forget that European Internet & IT companies, too, face tough ethical challenges in countries such as China. In that sense, the difficult but open and transparent conversations in the U.S. are, in my view, an excellent model for Europe with its long-standing human rights tradition.

Update: Rebecca MacKinnon does a great, high-speed job of summarizing the written and oral testimonies. See especially her summary of and comments on the statements by Cisco, Yahoo!, Google, and Microsoft.

Identity 2.0: Privacy as Code and Policy


Later today, I will be traveling “back home” to Cambridge, MA, where I will be attending an invitation-only workshop on user-centric identity and commerce hosted by the Berkman Center at Harvard Law School and organized by Berkman Fellow John Clippinger. In preparation for a panel on identity and privacy at this workshop, I have written a discussion paper. Here are the main points:

1. User-centric approaches to online identity management such as Identity 2.0 have several advantages compared to previous attempts—commonly referred to as Privacy Enhancing Technologies (PET)—aimed at regulating the flow of personal information through Code. Three achievements are particularly noteworthy: First, Identity 2.0-like approaches mirror the social phenomenon that privacy must be understood as an aggregation of an individual’s choices along a spectrum between the poles “complete anonymity” and “complete identification.” In other words, Identity 2.0 reflects, inter alia, the granular nature of offline privacy and replicates it at the design level of the digitally networked environment. Second, user profiles containing personal information (as elements of identity profiles) that have been created under the regime of previous PETs are often not “portable” across services and applications. Profiles based on concepts such as Identity 2.0, by contrast, are user-centric and, in that sense, universal in their use. Third, Identity 2.0 seeks to provide a set of profiles that enable an individual user to have parallel identities and make situative choices about the flow of personal data in the context of (commercial) interactions.
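To make the notions of parallel identities and granular, situative disclosure a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and reflects my own simplification rather than any actual Identity 2.0 protocol or specification; the class names, the idea of named “personas,” and the disclose() method are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One of a user's parallel identities, e.g. for shopping or for a health forum."""
    name: str
    claims: dict  # attribute -> value, e.g. {"age_over_18": True}

@dataclass
class UserIdentity:
    """User-centric identity: the user holds all personas and decides, per
    interaction, which claims to release to which relying party."""
    personas: dict = field(default_factory=dict)

    def add_persona(self, persona: Persona) -> None:
        self.personas[persona.name] = persona

    def disclose(self, persona_name: str, requested: list) -> dict:
        """Release only those requested claims that the chosen persona actually
        contains; anything else is simply not disclosed."""
        persona = self.personas[persona_name]
        return {k: v for k, v in persona.claims.items() if k in requested}

# The same user shows different faces to different services.
me = UserIdentity()
me.add_persona(Persona("shopping", {"nickname": "u123", "age_over_18": True}))
me.add_persona(Persona("health_forum", {"nickname": "anon_owl"}))

print(me.disclose("shopping", ["age_over_18", "email"]))         # {'age_over_18': True}
print(me.disclose("health_forum", ["nickname", "age_over_18"]))  # {'nickname': 'anon_owl'}
```

The point of the sketch is only that disclosure decisions are granular and made by the user at the moment of the interaction, which is what distinguishes this design from earlier PETs.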

2. Consequently, user-centric identity systems have the potential to eliminate some of the basic weaknesses of previous incarnations of identity and privacy management technologies. From a privacy perspective, however, a series of important questions and problems remain to be addressed. First, it is striking that user-centric identity and privacy concepts like Identity 2.0 seek to restore an individual’s control over personal data through the medium “choice,” thereby following a property rights approach to privacy. The designers’ choice is remarkable because the majority of analyses suggest that the privacy crisis in cyberspace, by and large, is the product of extensive data collecting, processing, and aggregating practices by commercial entities vis-à-vis the individual user. In other words, Identity 2.0 concepts are regulating—via Code—the behavior of the sender of personal information (user) rather than targeting the source of the problem, i.e., the informational behavior of the recipients (commercial entities). Viewed from that angle, the approach taken by Identity 2.0 is in tension with some of the basic principles of data protection, which seek to avoid the use of personal information by the recipient and to establish restrictive requirements on the collection, storage, and usage of personal data while leaving an individual user’s informational behavior unregulated. Although counterintuitive, a user-centric approach to identity and privacy management might therefore result in less user autonomy—understood as the freedom to communicate about oneself—when compared to a traditional data protection approach that aims to regulate the informational practices of the data collectors. This tension between identity architecture and fundamental data protection principles might become more explicit in jurisdictions outside of the U.S.

3. The second persistent challenge results from yet another design choice. The starting point is the observation that user-centric identity and privacy schemes are built upon what might be called the “consent approach,” an approach that ultimately suggests user choice as the solution to online identity and privacy problems. Indeed, the emerging generation of identity management and privacy enhancing technology aims to provide the tools to make (and express) choices. However, experiences with previous choice-based mechanisms and standards (like P3P) seem to suggest that the promise of this approach is fairly limited. Even the most sophisticated architecture cannot counter power asymmetries between individual users and the Amazons, eBays, Googles, etc. of this world. From such a pragmatic perspective, it remains doubtful to what extent real choices are available to the user. Or, as Herbert Burkert pointed out in the context of PET, “… the data subject is [usually] asked to choose between giving consent and losing advantages, privileges, rights, or benefits, some of which may be essential to the subject in a given situation.” Further, economic incentives, which may motivate people to give away personal information in return for free services such as email accounts, content management sites, social networks, etc., might be particularly strong in the online environment and have a limiting effect on the freedom to choose, especially in situations where users (e.g. due to financial constraints) are forced to rely on such deals. Finally, user acceptance of consent-based tools depends heavily on the ease of use of those instruments, as P3P and similar initiatives have illustrated. Given the number of stakeholders, interests, and standards involved, it remains to be seen whether the apparently complex web of identity providers, identity mechanisms, privacy profiles, etc. will in fact be manageable over one easy-to-use interface, as leading designers have envisioned.
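As a toy illustration of the consent approach discussed above, the following sketch shows how a choice-based mechanism of the P3P kind is, in principle, supposed to work: the service declares what it does with data, and the user agent compares that declaration against the user’s stated preferences. This is a deliberate simplification with made-up policy fields, not actual P3P or APPEL syntax.

```python
# A site declares its data practices; the user agent flags mismatches with
# the user's preferences. Note the asymmetry: the tool can only warn, and
# the user is still left with a take-it-or-leave-it decision.
site_policy = {
    "purposes": {"order_processing", "marketing"},
    "retention": "indefinite",
    "third_party_sharing": True,
}

user_preferences = {
    "allowed_purposes": {"order_processing"},
    "max_retention": "1_year",
    "allow_third_party_sharing": False,
}

def mismatches(policy: dict, prefs: dict) -> list:
    problems = []
    if not policy["purposes"] <= prefs["allowed_purposes"]:
        problems.append("data used for purposes beyond what the user allows")
    if policy["retention"] == "indefinite" and prefs["max_retention"] != "indefinite":
        problems.append("data retained longer than the user accepts")
    if policy["third_party_sharing"] and not prefs["allow_third_party_sharing"]:
        problems.append("data shared with third parties")
    return problems

print(mismatches(site_policy, user_preferences))
# ['data used for purposes beyond what the user allows',
#  'data retained longer than the user accepts',
#  'data shared with third parties']
```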

4. The observation that user-centric concepts such as Identity 2.0 contain many different interacting elements and relations—and, thus, add technological and social complexity to the Net—leads to the third conceptual challenge. Consent and choice in the privacy context mean informed consent and choice, respectively. It has been observed with regard to much less complex designs of privacy enhancing technologies that data subjects “cannot know how much they should know without fully understanding the system and its interconnection with other systems.” (H. Burkert) In other words, informed consent by users requires transparency for users, but transparency usually decreases in complex and highly technical environments. Someone with a non-technical background who seeks to understand how the emerging protocols and governance models in the area of user-centric identity work, and what the differences among them are, will immediately recognize how difficult it will be to make truly informed choices among different identity providers and privacy management systems. The more individuals depend on complex user-centered technology to manage their online identities, the more desirable it seems from a policy perspective that users know about the underlying Code, its functionalities, and its risks. So far, it remains unclear whether it is a realistic scenario that someone will have access to this meta-information and will aggregate it for users.

5. The three challenges outlined above are not meant as an argument against the Identity 2.0 concept. Rather, the remarks are intended as a cautionary note—we should resist the temptation to overestimate the promise of user-centric and choice-based approaches in the context of privacy. In response to the above arguments, however, one might argue that the emerging user-centric approaches will not exclusively rely on Internet users who are educated enough (probably supported by some sort of “choice assistants”) to dynamically manage their multiple online identities and exchanges of personal information on the Net. Rather, according to this argument, identity and privacy policies developed and monitored by private parties would supplement the user-centric approach. Indeed, such a complementary approach addresses some of the concerns mentioned above. However, the experiences with self-regulation in the area of Internet privacy in the U.S. have been rather disillusioning, as several studies demonstrate. Viewed from that angle, it does not seem entirely clear why a similar approach should work well in the context of an Identity 2.0 environment.

6. The previous question leads us to another emerging problem in an Identity 2.0-like environment: the question of how to control the information practices of the identity providers themselves. The control issue is a particularly important one because it seems inevitable that the emergence of identity providers will be associated with an increased degree of centralization where personal information in the online environment is managed for the purpose of identity building. Again, the common line of argument currently suggests that self-regulation in the form of peer-auditing and/or reputation systems is an adequate solution to the problem. However, once more, a look back at the history of privacy regulation in cyberspace might trigger doubts as to whether an industry-controlled self-regulatory scheme will be effective enough to ensure fair information practices on the part of identity providers as the new and important players of the future Internet. Against this backdrop, it seems advisable to consider alternatives and to critically rethink the interaction between code and law and their respective contributions to an effective management of the identity and privacy challenges in cyberspace. This step may mark the beginning of a discussion on Identity 3.0.

Professor Fisher Presents Conclusions on OECD Digital Content Conference


Professor Terry Fisher has the difficult job, as the Day 1 Rapporteur, of presenting the OECD conference conclusions in 10 minutes. Here are the main points he made a few minutes ago:

A. Points of agreement (or at least substantial consensus)

(a) Descriptive level:
o We’re entering a participatory culture, active users, explosion of blogs; differences in web usage.

(b) Predictive level:
o Consensus that we’ll see a variety of applications that will flourish; the shift to biz models that include Internet distribution will have long-tail effects and increase diversity

(c) Level of aspiration:
o We should aim for a harmonized, global Internet – single, harmonized global approach (vs. competing legal/regulatory frameworks)
o Governments should stay out, but broad consensus on six areas where governmental intervention is desirable: (1) stimulating broadband; (2) fostering universal access (bridging the digital divide); (3) educating consumers; (4) engaging in consumer protection against fraud and spam; (5) fostering competition; (6) promoting IP to achieve an optimal balance
o We should attempt to achieve “biz model neutrality” (TF’s personal comment: appealing idea, but infeasible, there’s no way to achieve it.)

B. Points of disagreement

(a) Descriptive level
o Whether IP currently strikes the optimal balance (yes / middle ground / no – a spectrum of positions)

(b) Predictive level
o Which biz strategy will prevail: pay-per-view, subscription, or a free, advertising-based model?

(c) Level of aspiration:
o Network neutrality: required or not as a matter of policy
o TPM: Majority: yes, smaller group: no; intermediate group: only under certain conditions.
o Should governments be in the biz of interoperability?
o Using government power to move towards open doc format?
o Government intervention to achieve an Internet that is open vs. variations of a walled-garden net?

Marybeth Peters’ Statement at OECD


Here are the keywords I wrote down during Marybeth Peters’ (U.S. Register of Copyrights, United States Copyright Office) statement here in Rome, which she delivered in the context of the final policy roundtable aimed at identifying priority issues, tools, and policy challenges.

  • We must adjust our copyright laws to the digital environment. Copyright law has always responded to new technologies.
  • Must be an internationally coordinated response due to the global nature of the Net.
  • If copyright owners choose to use TPM, those TPM must be protected. Both copy & access controls.
  • Key questions to ask: Are there new rights that are required to protect creators? But also: Do we need new exceptions (e.g., for libraries)? Third, what are appropriate remedies (e.g., criminal penalties)?
  • Another important set of questions: Who is the infringer (primary vs. secondary)? This issue comes up in the P2P context (Kazaa, Grokster, etc.). Secondary liability must be considered at the international level.
  • Licensing issues: To be left to the marketplace; no government intervention required. Consumers know what they want. Strongly opposed to compulsory licensing (costly, ineffective). Instead: DRM and collective administration to solve the problem.

Boyle on EU Database Directive Review


Our London-based colleague and friend Ian Brown (Happy New Year, Ian!) points us to Jamie Boyle’s FT.com Op-Ed on the European Commission’s recent Database Directive impact analysis.

Burkert on the Changing Role of Data Protection in Our Society


My colleague Professor Herbert Burkert, President of our St. Gallen Research Center for Information Law and ISP Yale International Fellow, has just released a paper he presented at the CIAJ 2005 Annual Conference on Technology, Privacy and Justice in Toronto, Ontario. The paper is entitled “Changing Patterns – Supplementary Approaches to Improving Data Protection: A European Perspective” and identifies, analyzes, and evaluates several approaches aimed at improving data protection legislation. Burkert argues that current approaches – broken down into three schools of thought: the renovators, the reformists, and the engineers – are insufficient because they do not sufficiently address “the phenomenon that the deep changes of data protection’s role in our information societies do not result from administrations and private sector organizations applying data protection laws insufficiently or from applying insufficient data protection laws but from parliaments continuously restricting by special sector legislation what had been granted by the general data protection laws.” Vis-à-vis the new threat model, Burkert proposes a supplementary approach that relies on independent data protection agencies and addresses parliaments’ role in information law making more directly.

Unproven Economic Impact of the Sui Generis Right on Database Protection


Cedric Manara points us to an evaluation report (I was only able to open/download it by using Internet Explorer) on the EU Database Directive published by the DG Internal Market and Services. The evaluation focuses on whether the introduction of this right led to an increase in the European database industry’s rate of growth and in database production. According to the press release, it was conducted on the basis of an online survey addressed to the European database industry and of the Gale Directory of Databases, the largest existing database directory, which contains statistics indicating the growth of the global database industry since the 1970s.

The most interesting and important part of the report (p. 5) reads as follows:

“The economic impact of the “sui generis” right on database production is unproven. Introduced to stimulate the production of databases in Europe, the new instrument has had no proven impact on the production of databases. Data taken from the GDD (see Section 4.2.3) show that the EU database production in 2004 has fallen back to pre-Directive levels: the number of EU-based database “entries” into the GDD was 3095 in 2004 as compared to 3092 in 1998. In 2001, there were 4085 EU-based “entries” while in 2004 there were only 3095.
Is “sui generis” protection therefore necessary for a thriving database industry? The empirical evidence, at this stage, casts doubts on this necessity. The European publishing industry, which was consulted in a restricted online survey, however produced strong submissions arguing that “sui generis” protection was crucial to the continued success of their activities. In addition, most respondents to the on-line survey (see Section 4.2.2) believe that the “sui generis” right has brought about legal certainty, reduced the costs associated with the protection of databases, created more business opportunities and facilitated the marketing of databases.”
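To put the quoted GDD figures into perspective, here is the simple arithmetic behind the report’s claim, using only the numbers cited above:

```python
entries_1998, entries_2001, entries_2004 = 3092, 4085, 3095

# 2004 is essentially back at the 1998 (pre-Directive) level ...
print(entries_2004 - entries_1998)  # 3 additional entries over six years

# ... after a decline of roughly 24% from the 2001 peak.
print(round((entries_2001 - entries_2004) / entries_2001 * 100, 1))  # 24.2
```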

These findings, I believe, also have the potential to shape the U.S. debate on the question of database protection. For conclusions within the EU framework, see pp. 5-6 of the report.

Regulating Search? Call for a Second Look


Here is my second position paper (find the first one here) in preparation of the upcoming Regulating Search? conference at ISP Yale. It provides a rough arc of a paper I will write together with my friend and colleague Ivan Reidel. The Yale conference on search has led to great discussions on this side of the Atlantic. Thanks to the FIR team, esp. Herbert Burkert and James Thurman, Mike McGuire, and to Sacha Wunsch-Vincent for continuing debate.

Regulating Search? Call for a Second Look

1. The use of search engines has become almost as important as email as a primary online activity on any given day, according to a recent PEW survey. According to another survey, 87% of search engine users state that they have successful search experiences most of the time, while 68% of users say that search engines are a fair and unbiased source of information. This data, combined with the fact that the Internet, among very experienced users, ranks even higher than TV, radio, and newspapers as an important source of information, illustrates the enormous importance of search engines from a demand-side perspective, both in terms of actual information practices and with regard to users’ psychological acceptance.

2. The data also suggests that the transition from an analog/offline to a digital/online information environment has been accompanied by the emergence of new intermediaries. While traditional intermediaries between senders and receivers of information—most of them related to the production and dissemination of information (e.g. editorial boards, TV production centers, etc.)—have diminished, new ones such as search engines have entered the arena. Arguably, search engines have become the primary gatekeepers in the digitally networked environment. In fact, they can effectively control access to information by deciding about the listing of any given website in search results. But search engines not only shape the flow of digital information by controlling access; they also, at least indirectly, engage in the construction of messages and meaning by shaping the categories and concepts users use to search the Internet. In other words, search engines have the power to influence agenda setting.

3. The power of search engines in the digitally networked environment, with the corresponding misuse scenarios, is likely to attract increasing attention from policy makers and lawmakers. However, it is important to note that search engines are not unregulated under the current regime. Markets for search engines regulate their behavior, although the regulatory effects of competition might be relatively weak because the search engine market is rather concentrated and centralized; a recent global user survey suggests that Google’s global usage share has reached 57.2%. In addition, not all search engines use their own technology. Instead, they rely on other search providers for listings. However, search engines are also regulated by existing law and regulations, including consumer protection laws, copyright law, unfair competition laws, and—at the intersection of market-based regulation and law-based regulation—antitrust law or (in the European terminology) competition law.

4. Against this backdrop, the initial question for policymakers must concern the extent to which existing laws and regulations may feasibly address potential regulatory problems that emerge from search engines in the online environment. Only where existing legislation and regulation fails due to inadequacy, enforcement issues, or the like should new, specific, and narrowly tailored regulation be considered. In order to analyze existing laws and regulations with regard to their ability to manage problems associated with search engines, one might be well-advised to take a case-by-case approach, looking at each concrete problem or emerging regulatory issue (“scenario”) on the one hand and discussing the incumbent legal/regulatory mechanisms aimed at addressing conflicts of that sort on the other.

5. Antitrust law might serve as an illustration of such an approach. While the case law on unilateral refusals to deal is still one of the most problematic and contested areas in current antitrust analysis, the emergence of litigation applying this analytical framework to search engines seems very likely. Although most firms’ unilateral refusals to deal with other firms are generally regarded as legal, a firm’s refusal to deal with competitors can give rise to antitrust liability if such a firm possesses monopoly power and the refusal is part of a scheme designed to maintain or achieve further monopoly power. In the past, successful competitors like Aspen Skiing Co. and, more recently, Microsoft have been forced to collaborate with competitors and punished for actions that smaller companies could probably have gotten away with. In this sense, search engines might be the next arena where antitrust laws with regard to unilateral refusals to deal are tested. In addition to the scenario just described, the question arises as to whether search engines could be held liable for refusing to include particular businesses in their listings. Where a market giant such as Google has a “don’t be evil” policy and declines to feature certain sites in its PageRank results because it deems these sites to be “evil,” there is an issue of whether Google is essentially shutting that site provider out of the online market through the exercise of its own position in the market for information. Likewise, the refusal to include certain books in the Google Print project would present troubling censorship-like issues. It is also important to note that Google’s editorial discretion with regard to its PageRank results was deemed to be protected by the First Amendment in the SearchKing case.

6. In conclusion, this paper suggests a cautious approach to rapid legislation and regulation of search engines. It is one of the lessons learned that one should not overestimate the need for new law to deal with apparently new phenomena emerging from new technologies. Rather, policy- and lawmakers would be well-advised to carefully evaluate the extent to which general and existing laws may address regulatory problems related to search and which issues exactly call for additional, specific legislation.
