Archive for the 'privacy' Category

Identity 2.0: Privacy as Code and Policy


Later today, I will be traveling “back home” to Cambridge, MA, where I will be attending an invitation-only workshop on user-centric identity and commerce hosted by the Berkman Center at Harvard Law School and organized by Berkman Fellow John Clippinger. In preparation for a panel on identity and privacy at this workshop, I have written a discussion paper. Here are the main points:

1. User-centric approaches to online identity management such as Identity 2.0 have several advantages compared to previous attempts—commonly referred to as Privacy Enhancing Technologies (PET)—aimed at regulating the flow of personal information through Code. Three achievements are particularly noteworthy: First, Identity 2.0-like approaches mirror the social phenomenon that privacy must be understood as an aggregation of an individual’s choices along a spectrum between the poles “complete anonymity” and “complete identification.” In other words, Identity 2.0 reflects, inter alia, the granular nature of offline privacy and replicates it at the design level of the digitally networked environment. Second, user profiles containing personal information (as elements of identity profiles) that have been created under the regime of previous PETs are often not “portable” across services and applications. Profiles based on concepts such as Identity 2.0, by contrast, are user-centric and, in that sense, universal in their use. Third, Identity 2.0 seeks to provide a set of profiles that enable an individual user to have parallel identities and make situational choices about the flow of personal data in the context of (commercial) interactions.
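The idea of parallel identities and granular, per-interaction disclosure can be sketched in a few lines of Python. This is purely illustrative: all class and attribute names here are hypothetical and do not correspond to any actual Identity 2.0 protocol or API.

```python
class IdentityProfile:
    """One of a user's parallel identities, holding a chosen subset of attributes."""

    def __init__(self, label, attributes):
        self.label = label
        self.attributes = dict(attributes)

    def disclose(self, requested_fields):
        """Release only the attributes this profile actually contains and exposes."""
        return {k: v for k, v in self.attributes.items() if k in requested_fields}


class User:
    """A user-centric identity: profiles stay under the user's control and travel with the user."""

    def __init__(self):
        self.profiles = {}

    def add_profile(self, profile):
        self.profiles[profile.label] = profile

    def interact(self, profile_label, requested_fields):
        """Make a situational choice: pick which identity answers a given request."""
        return self.profiles[profile_label].disclose(requested_fields)


user = User()
user.add_profile(IdentityProfile("shopping", {"nickname": "jdoe", "email": "jdoe@example.org"}))
user.add_profile(IdentityProfile("anonymous", {"nickname": "guest"}))

# A merchant requests nickname and email; the user decides which identity responds.
print(user.interact("shopping", ["nickname", "email"]))
print(user.interact("anonymous", ["nickname", "email"]))
```

The point of the sketch is the spectrum described above: the same request yields full identification under one profile and near-anonymity under another, with the choice resting with the user rather than the data collector.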

2. Consequently, user-centric identity systems have the potential to eliminate some of the basic weaknesses of previous incarnations of identity and privacy management technologies. From a privacy perspective, however, a series of important questions and problems remain to be addressed. First, it is striking that user-centric identity and privacy concepts like Identity 2.0 seek to restore an individual’s control over personal data through the medium “choice,” thereby following a property rights approach to privacy. The designers’ choice is remarkable because the majority of analyses suggest that the privacy crisis in cyberspace, by and large, is the product of extensive data collecting, processing, and aggregating practices by commercial entities vis-à-vis the individual user. In other words, Identity 2.0 concepts are regulating—via Code—the behavior of the sender of personal information (user) rather than targeting the source of the problem, i.e. the informational behavior of the recipients (commercial entities.) Viewed from that angle, the approach taken by Identity 2.0 is in tension with some of the basic principles of data protection, which seek to avoid the use of personal information by the recipient and to establish restrictive requirements on the collection, storage, and usage of personal data while leaving an individual user’s informational behavior unregulated. Although counterintuitive, a user-centric approach to identity and privacy management might therefore result in less user autonomy—understood as the freedom to communicate about oneself—when compared to a traditional data protection approach that aims to regulate the informational practices of the data collectors. This tension between identity architecture and fundamental data protection principles might become more explicit in jurisdictions outside of the U.S.

3. The second persistent challenge results from yet another design choice. The starting point is the observation that user-centric identity and privacy schemes are built upon what might be called the “consent approach,” an approach that ultimately suggests user choice as the solution to online identity and privacy problems. Indeed, the emerging generation of identity management and privacy enhancing technology aims to provide the tools to make (and express) choices. However, experiences with previous choice-based mechanisms and standards (like P3P) seem to suggest that the promise of this approach is fairly limited. Even the most sophisticated architecture cannot counter power asymmetries between individual users and the Amazons, eBays, Googles, etc. of this world. From such a pragmatic perspective, it remains doubtful to what extent real choices are available to the user. Or, as Herbert Burkert pointed out in the context of PET, “… the data subject is [usually] asked to choose between giving consent and losing advantages, privileges, rights, or benefits, some of which may be essential to the subject in a given situation.” Further, economic incentives, which may motivate people to give away personal information in return for free services such as email accounts, content management sites, social networks, etc., might be particularly strong in the online environment and have a limiting effect on the freedom to choose, especially in situations where users (e.g. due to financial constraints) are forced to rely on such deals. Finally, the user acceptability of consent-based tools heavily depends on the ease-of-use of those instruments, as P3P and similar initiatives have illustrated. Given the number of stakeholders, interests, and standards involved, it remains to be seen whether the apparently complex web of identity providers, identity mechanisms, privacy profiles, etc. in fact will be manageable over one easy-to-use interface as has been envisioned by leading designers.

4. The observation that user-centric concepts such as Identity 2.0 contain many different interacting elements and relations—and, thus, add technological and social complexity to the Net—leads to the third conceptual challenge. Consent and choice in the privacy context means informed consent and choice, respectively. It has been observed with regard to much less complex designs of privacy enhancing technologies that data subjects “cannot know how much they should know without fully understanding the system and its interconnection with other systems.” (H. Burkert) In other words, informed consent by users requires transparency for users, but transparency usually decreases in complex and highly technical environments. Someone with a non-technical background who seeks to understand how the emerging protocols and governance models in the area of user-centric identity work and what the differences among them are will immediately recognize how difficult it will be to make truly informed choices among different identity providers and privacy management systems. The more individuals depend on complex user-centered technology in order to manage their online identities, the more desirable it seems from a policy perspective that users know about the underlying Code, the functionalities, and risks. So far, it remains unclear whether it is a realistic scenario that someone will have access to this meta-information and will aggregate it for users.

5. The three challenges outlined above are not meant as an argument against the Identity 2.0 concept. Rather, the remarks are intended as a cautionary note—we should resist the temptation to overestimate the promise of any user-centric and choice-based approach in the context of privacy. In response to the above arguments, however, one might argue that the emerging user-centric approaches will not exclusively rely on Internet users who are educated enough (probably supported by some sort of “choice assistants”) to dynamically manage their multiple online identities and exchanges of personal information on the Net. Rather, according to this argument, identity and privacy policies developed and monitored by private parties would supplement the user-centric approach. Indeed, such a complementary approach addresses some of the concerns mentioned above. However, the experiences with self-regulation in the area of Internet privacy in the U.S. have been rather disillusioning, as several studies demonstrate. Viewed from that angle, it does not seem entirely clear why a similar approach should work well in the context of an Identity 2.0 environment.

6. The previous question leads us to another emerging problem under an Identity 2.0-like environment: the question of how to control the information practices of the identity providers themselves. The control issue is a particularly important one because it seems inevitable that the emergence of identity providers will be associated with an increased degree of centralization where personal information in the online environment is managed for the purpose of identity building. Again, the common line of argument currently suggests that self-regulation in the form of peer-auditing and/or reputation systems is an adequate solution to the problem. However, once more a look back at the history of privacy regulation in cyberspace might trigger doubts as to whether an industry-controlled self-regulatory scheme will be adequately effective to ensure fair information practices on the part of identity providers as the new and important players of the future Internet. Against this backdrop, it seems advisable to consider alternatives and critically rethink the interaction between code and law and their respective contributions to an effective management of the identity and privacy challenges in cyberspace. This step may mark the beginning of a discussion on Identity 3.0.

Burkert on the Changing Role of Data Protection in Our Society


My colleague Professor Herbert Burkert, President of our St. Gallen Research Center for Information Law and ISP Yale International Fellow, has just released a paper he presented at the CIAJ 2005 Annual Conference on Technology, Privacy and Justice in Toronto, Ontario. The paper is entitled “Changing Patterns – Supplementary Approaches to Improving Data Protection: A European Perspective” and identifies, analyzes, and evaluates several approaches aimed at improving data protection legislation. Burkert argues that current approaches – broken down into three schools of thought: the renovators, the reformists, and the engineers – are insufficient, because they do not sufficiently address “the phenomenon that the deep changes of data protection’s role in our information societies do not result from administrations and private sector organizations applying data protection laws insufficiently or from applying insufficient data protection laws but from parliaments continuously restricting by special sector legislation what had been granted by the general data protection laws.” Vis-à-vis the new threat model, Burkert proposes a supplementary approach that relies on independent data protection agencies and addresses parliaments’ role in information law making more directly.

Google’s Alan Davidson on Areas of Special Concern


Alan Davidson, Washington Policy Counsel and head of Google’s new Washington DC government affairs office, made several interesting remarks in his panel statement. Among them: He identified the following two areas that are of special concern to search engine providers:

(1) Conceptual shift in speech regulation

  • Old approach (offline media): focused on publishers, readers
  • New & emerging generation of speech regulation: focus on deliverers – intermediaries are supposed to police the networks. Examples where this approach is currently up for discussion in D.C.: access to pharmaceutical products, blocking of gaming websites
  • Assessment: It’s not a good idea to target intermediaries. Due process/procedural problem: an intermediary, e.g., can’t tell whether or not a particular site featuring copyrighted content is a fair use; and by going after the intermediary you take the publisher out of the equation, who then can’t go to court to argue the case
  • Misguided, because search engines are only in the business of indexing existing content; they’re not editors (can’t be, given the scale.)

(2) Government access to information

  • Increasing pressures to provide personalized information (search history, etc.) to third parties
  • Best privacy policy doesn’t help if government wants information for national security reasons; standards really low; plus: search engines not allowed to inform users that info has been passed on to third parties.

Declaration on Human Rights and Internet


EDRI-gram provides an overview of the Declaration on Human Rights and the Rule of Law in the Information Society that has recently been adopted by the Council of Europe’s Committee of Ministers (see also press release.) The author of the EDRI-gram report concludes:

“… from a digital civil rights point of view, on close reading the declaration doesn’t offer any specific new rights to internet users when it comes to privacy, freedom of speech and access to knowledge. Though these rights and freedoms are all mentioned and reaffirmed repeatedly in the declaration, they are balanced against ‘challenges’ posed by the Internet, such as violation of intellectual property rights, access to illegal and harmful content and ‘circumstances that lead to the adoption of measures to curtail the exercise of human rights in the Information Society in the context of law enforcement or the fight against terrorism.’”

DRM and Consumer Acceptability


Our colleagues at the Institute for Information Law (IViR) at the University of Amsterdam released, as part of the INDICARE project, an interesting report on Digital Rights Management and Consumer Acceptability. It seeks to provide an overview of the state of the (European) discussion from a multi-disciplinary perspective, and analyzes social, legal, technical, and economic issues.

The report concludes that surprisingly little is known about consumers’ acceptance level of DRM, and what users’ expectations are regarding the use of digital content. The report, inter alia, calls for a better involvement of the consumer side and a joint dialogue between the market players.

The report will be updated. Three pointers to Berkman reports and papers in this context:

* re section 6.5 of the report on alternative business models, see also “Content and Control: Assessing the Impact of Policy Choices on Potential Online Business Models in the Music and Film Industries.”

* re section 4.2 on the EU-Copyright Directive, see also “Transposing the Copyright Directive: Legal Protection of Technological Measures in EU-Member States,” and the respective Berkman project website.

* re section 4.4 on interoperability, see John Palfrey, Holding Out for an Interoperable DRM Standard, in Christoph Beat Graber, Carlo Govoni, Michael Girsberger, and Mira Nenova (eds.), Digital Rights Management: The End of Collecting Societies? (Forthcoming, April 2005.)
