Archive for February 9th, 2006

Identity 2.0: Privacy as Code and Policy


Later today, I will be traveling “back home” to Cambridge, MA, where I will be attending an invitation-only workshop on user-centric identity and commerce hosted by the Berkman Center at Harvard Law School and organized by Berkman Fellow John Clippinger. In preparation for a panel on identity and privacy at this workshop, I have written a discussion paper. Here are the main points:

1. User-centric approaches to online identity management such as Identity 2.0 have several advantages over previous attempts—commonly referred to as Privacy Enhancing Technologies (PETs)—aimed at regulating the flow of personal information through Code. Three achievements are particularly noteworthy: First, Identity 2.0-like approaches mirror the social phenomenon that privacy must be understood as an aggregation of an individual’s choices along a spectrum between the poles of “complete anonymity” and “complete identification.” In other words, Identity 2.0 reflects, inter alia, the granular nature of offline privacy and replicates it at the design level of the digitally networked environment. Second, user profiles containing personal information (as elements of identity profiles) that have been created under the regime of previous PETs are often not “portable” across services and applications. Profiles based on concepts such as Identity 2.0, by contrast, are user-centric and, in that sense, universal in their use. Third, Identity 2.0 seeks to provide a set of profiles that enable an individual user to maintain parallel identities and make situational choices about the flow of personal data in the context of (commercial) interactions.

2. Consequently, user-centric identity systems have the potential to eliminate some of the basic weaknesses of previous incarnations of identity and privacy management technologies. From a privacy perspective, however, a series of important questions and problems remain to be addressed. First, it is striking that user-centric identity and privacy concepts like Identity 2.0 seek to restore an individual’s control over personal data through the medium of “choice,” thereby following a property rights approach to privacy. The designers’ choice is remarkable because the majority of analyses suggest that the privacy crisis in cyberspace is, by and large, the product of extensive data collecting, processing, and aggregating practices by commercial entities vis-à-vis the individual user. In other words, Identity 2.0 concepts regulate—via Code—the behavior of the sender of personal information (the user) rather than targeting the source of the problem, i.e., the informational behavior of the recipients (commercial entities). Viewed from that angle, the approach taken by Identity 2.0 is in tension with some of the basic principles of data protection, which seek to avoid the use of personal information by the recipient and to establish restrictive requirements on the collection, storage, and usage of personal data while leaving an individual user’s informational behavior unregulated. Although counterintuitive, a user-centric approach to identity and privacy management might therefore result in less user autonomy—understood as the freedom to communicate about oneself—when compared to a traditional data protection approach that aims to regulate the informational practices of the data collectors. This tension between identity architecture and fundamental data protection principles might become more explicit in jurisdictions outside of the U.S.

3. The second persistent challenge results from yet another design choice. The starting point is the observation that user-centric identity and privacy schemes are built upon what might be called the “consent approach,” an approach that ultimately offers user choice as the solution to online identity and privacy problems. Indeed, the emerging generation of identity management and privacy enhancing technology aims to provide the tools to make (and express) choices. However, experiences with previous choice-based mechanisms and standards (like P3P) suggest that the promise of this approach is fairly limited. Even the most sophisticated architecture cannot counter power asymmetries between individual users and the Amazons, eBays, Googles, etc. of this world. From such a pragmatic perspective, it remains doubtful to what extent real choices are available to the user. Or, as Herbert Burkert pointed out in the context of PETs, “… the data subject is [usually] asked to choose between giving consent and losing advantages, privileges, rights, or benefits, some of which may be essential to the subject in a given situation.” Further, the economic incentives that motivate people to give away personal information in return for free services such as email accounts, content management sites, social networks, etc. might be particularly strong in the online environment and have a limiting effect on the freedom to choose, especially in situations where users (e.g., due to financial constraints) are forced to rely on such deals. Finally, the user acceptance of consent-based tools heavily depends on the ease of use of those instruments, as P3P and similar initiatives have illustrated. Given the number of stakeholders, interests, and standards involved, it remains to be seen whether the apparently complex web of identity providers, identity mechanisms, privacy profiles, etc. will in fact be manageable through one easy-to-use interface, as leading designers have envisioned.

4. The observation that user-centric concepts such as Identity 2.0 contain many different interacting elements and relations—and thus add technological and social complexity to the Net—leads to the third conceptual challenge. Consent and choice in the privacy context mean informed consent and choice, respectively. It has been observed with regard to much less complex designs of privacy enhancing technologies that data subjects “cannot know how much they should know without fully understanding the system and its interconnection with other systems.” (H. Burkert) In other words, informed consent by users requires transparency for users, but transparency usually decreases in complex and highly technical environments. Someone with a non-technical background who seeks to understand how the emerging protocols and governance models in the area of user-centric identity work, and what the differences among them are, will immediately recognize how difficult it will be to make truly informed choices among different identity providers and privacy management systems. The more individuals depend on complex user-centered technology to manage their online identities, the more desirable it seems from a policy perspective that users know about the underlying Code, its functionalities, and its risks. So far, it remains unclear whether it is a realistic scenario that someone will have access to this meta-information and will aggregate it for users.

5. The three challenges outlined above are not meant as an argument against the Identity 2.0 concept. Rather, the remarks are intended as a cautionary note—we should resist the temptation to overestimate the promise of any user-centric, choice-based approach in the context of privacy. In response to the above arguments, however, one might argue that the emerging user-centric approaches will not rely exclusively on Internet users who are educated enough (probably supported by some sort of “choice assistants”) to dynamically manage their multiple online identities and exchanges of personal information on the Net. Rather, according to this argument, identity and privacy policies developed and monitored by private parties would supplement the user-centric approach. Indeed, such a complementary approach addresses some of the concerns mentioned above. However, as several studies demonstrate, the experience with self-regulation in the area of Internet privacy in the U.S. has been rather disillusioning. Viewed from that angle, it is not entirely clear why a similar approach should work well in the context of an Identity 2.0 environment.

6. The previous question leads us to another emerging problem in an Identity 2.0-like environment: the question of how to control the information practices of the identity providers themselves. The control issue is particularly important because it seems inevitable that the emergence of identity providers will bring an increased degree of centralization, with personal information in the online environment being managed for the purpose of identity building. Again, the common line of argument currently suggests that self-regulation in the form of peer auditing and/or reputation systems is an adequate solution to the problem. However, once more, a look back at the history of privacy regulation in cyberspace might trigger doubts as to whether an industry-controlled self-regulatory scheme will be effective enough to ensure fair information practices on the part of identity providers as the new and important players of the future Internet. Against this backdrop, it seems advisable to consider alternatives and to critically rethink the interaction between code and law and their respective contributions to an effective management of the identity and privacy challenges in cyberspace. This step may mark the beginning of a discussion on Identity 3.0.
