Archive for the 'digital ID' Category

Study Released: ICT Interoperability and eInnovation

Today in Washington, D.C., John Palfrey and I released a White Paper and three case studies on ICT Interoperability and eInnovation (project homepage here). The papers are the result of a joint project between Harvard’s Berkman Center and the Research Center for Information Law at St. Gallen, sponsored by Microsoft. Our research focused on three case studies in which the issues of interoperability and innovation are uppermost: digital rights management in online and offline music distribution models; various models of digital identity systems (how computing systems identify users in order to provide the correct level of access and security); and web services (in which computer applications or programs connect with each other over the Internet to provide specific services to customers).

The core finding is that increased levels of ICT interoperability generally foster innovation. But interoperability also contributes to other socially desirable outcomes. In our three case studies, we have studied its positive impact on consumer choice, ease of use, access to content, and diversity, among other things.

The investigation reached other, more nuanced conclusions:

  • Interoperability does not mean the same thing in every context and, as such, is not always good for everyone all the time. For example, if one wants completely secure software, then that software should probably have limited interoperability. In other words, there is no one-size-fits-all way to achieve interoperability in the ICT context.
  • Interoperability can be achieved by multiple means, including the licensing of intellectual property, product design, collaboration with partners, development of standards, and governmental intervention. The easiest way to make a product from one company work well with a product from another company, for instance, may be for the companies to cross-license their technologies. But in a different situation, another approach (collaboration or open standards) may be more effective and efficient.
  • The best path to interoperability depends greatly upon context and on which subsidiary goals matter most, such as prompting further innovation, providing consumer choice or ease of use, and spurring competition in the field.
  • The private sector generally should lead interoperability efforts. The public sector should stand by either to lend a supportive hand or to determine if its involvement is warranted.

In the White Paper, we propose a process constructed around a set of guidelines to help businesses and governments determine the best way to achieve interoperability in a given situation. This approach may have policy implications for governments.

  • Identify what the actual end goal or goals are. The goal is not interoperability per se, but rather something to which interoperability can lead, such as innovation or consumer choice.
  • Consider the facts of the situation. The key variables that should be considered include time, the maturity of the relevant technologies and markets, and user practices and norms.
  • In light of these goals and facts of the situation, consider possible options against the benchmarks proposed by the study: effectiveness, efficiency and flexibility.
  • Remain open to the possibility of one or more approaches to interoperability, which may also be combined with one another to accomplish interoperability that drives innovation.
  • In some instances, it may be possible to convene all relevant stakeholders to participate in a collaborative, open standards process. In other instances, the relevant facts may suggest that a single firm can drive innovation by offering to others the chance to collaborate through an open API, such as Facebook’s recent success in permitting third-party applications to run on its platform. But long-term sustainability may be an issue where a single firm makes an open API available according to a contract that it can change at any time.
  • In the vast majority of cases, the private sector can and does accomplish a high level of interoperability on its own. The state may help by playing a convening role, or even by mandating a standard on which there is widespread agreement within industry after a collaborative process. The state may need to play a role after the fact to ensure that market actors do not abuse their positions.

While many questions remain open and a lot of research needs to be done (including empirical studies!), we hope to have made a contribution to the ongoing interoperability debate. Huge thanks to the wonderful research teams on both sides of the Atlantic, especially Richard Staeuber, David Russcol, Daniel Haeusermann, and Sally Walkerman. Thanks also to the many advisors, contributors, and commentators on earlier drafts of our reports.

Second Berkman/St. Gallen Workshop on ICT Interoperability

Over the past two days, I had the pleasure of co-moderating, with my colleagues and friends Prof. John Palfrey and Colin Maclay, the second Berkman/St. Gallen Workshop on ICT Interoperability and eInnovation. While we received wonderful initial inputs at the first workshop in January, which took place in Weissbad, Switzerland, this time we had the opportunity to present our draft case studies and preliminary findings here in Cambridge. The invited group of 20 experts from various disciplines and industries provided detailed feedback on our drafts, covering important methodological questions as well as substantive issues in areas such as DRM interoperability, digital ID, and web services/mash-ups.

As at the January workshop, the discussion got heated while exploring the possible roles of governments regarding ICT interoperability. Government involvement may take many forms and can be roughly grouped into two categories: ex ante and ex post approaches. Ex post approaches include, for example, interventions based on general competition law (e.g. in cases of refusal to license a core technology by a dominant market player) or an adjustment of the IP regime (e.g. broadening existing reverse-engineering provisions). Ex ante strategies also cover a broad range of possible interventions, among them mandating standards (to start with the most intrusive), requiring the disclosure of interoperability information, imposing labeling/transparency requirements, using public procurement power, and fostering frameworks for cooperation between private actors.

There was broad consensus in the room that governmental interventions, especially in the form of intrusive ex ante interventions, should be a means of last resort. However, it was disputed what the relevant scenarios (market failures) in which governmental intervention is justified might look like. A complicating factor in this analysis is the rapidly changing technological environment, which makes it hard to predict whether market forces just need more time to address a particular interoperability problem or whether the market has failed to do so.

In the last session of the workshop, we discussed a draft chart suggesting the steps and issues that governments would have to take into consideration when making policy choices about ICT interoperability (according to our understanding of public policy, a government could also reach the conclusion not to intervene and to let the self-regulatory forces of the market take care of a particular issue). While details remain to be discussed, the majority of the participants seemed to agree that the following elements should be part of the chart:

  1. a precise description of the perceived interoperability problem (as specific as possible);
  2. clarification of the government’s responsibility regarding the perceived problem;
  3. an in-depth analysis of the problem (based on empirical data where available);
  4. an assessment of the need for intervention vis-à-vis dynamic market forces (incl. the “timing” issue);
  5. exploration of the full range of approaches available as portrayed, for example, in our case studies and reports (both self-regulatory and regulation-based approaches, including discussion of drawbacks/costs);
  6. a definition of the policy goal to be achieved (also for benchmarking purposes), e.g. increasing competition, fostering innovation, ensuring security, etc.

Discussion (and research!) to be continued over the weeks and months to come.

Social Signaling Theory and Cyberspace

Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and Berkman Center Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory that was developed in evolutionary biology and has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book [in German].)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable. And what are the mechanisms that we have developed to ensure “honesty” of signals? It is obvious that these questions are extremely relevant from an Internet governance perspective – especially (but not only) vis-à-vis the enormous scale of online fraud and identity theft that occurs in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in what contexts we have an interest in signal reliability and honest signaling, respectively, and where not. This question is somewhat counterintuitive because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations, e.g., that I as a male player can engage in a role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in very diverse contexts such as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, where not?

My second point: Once we have defined contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches to increase reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals might play an increasingly important role (e.g. richness of virtual items produced by a player as a signal for the user’s skills, wealth and available time.)
However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit equilibrium of a dishonest signaler. A few examples to illustrate this point (a toy sketch of the underlying cost/benefit calculus follows the list below):

  • Technology/code/design: Increasing punishment costs for the sender by building efficient reputation systems based on persistent digital identities; using aggregation and syndication tools to collect and “pool” experiences among many users in order to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals, where reliability is often easier to assess; see also here, p. 9).
  • Market-based approach: Certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increasing signal reliability (a variation on this: creating economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g. they might signal “I can afford this” rather than the desired “I’m willing to pay for the service because I have honest intentions.”]
  • Law-based approach: Law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules, such as requiring online providers of goods to provide test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]
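To make the cost/benefit reasoning behind these examples a bit more concrete, here is a minimal numerical sketch in Python. It is purely illustrative, not taken from the signaling literature or from our reports, and all function names, parameters, and numbers (reward, production_cost, detection_prob, penalty) are hypothetical placeholders for the levers mentioned above.

    # Toy model of the cost/benefit calculus of a dishonest signaler.
    # All parameters are hypothetical; this only illustrates the reasoning above.

    def expected_payoff_of_faking(reward, production_cost, detection_prob, penalty):
        """Expected payoff of sending a dishonest conventional signal."""
        return (1 - detection_prob) * reward - production_cost - detection_prob * penalty

    def signal_stays_reliable(reward, production_cost, detection_prob, penalty):
        """A conventional signal stays (roughly) honest when faking does not pay."""
        return expected_payoff_of_faking(reward, production_cost, detection_prob, penalty) <= 0

    # Baseline: cheap to fake, rarely policed -> faking pays, the signal is unreliable.
    print(signal_stays_reliable(reward=10, production_cost=1, detection_prob=0.05, penalty=5))   # False

    # Design/code lever: reputation systems and pooled experiences raise detection_prob.
    print(signal_stays_reliable(reward=10, production_cost=1, detection_prob=0.6, penalty=15))   # True

    # Market lever: registration or certification costs raise production_cost.
    print(signal_stays_reliable(reward=10, production_cost=12, detection_prob=0.05, penalty=5))  # True

The point of the sketch is simply that each of the four modes of regulation can be read as shifting one of these parameters until dishonest signaling no longer pays.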

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.

Finally, a third point in brief: Who will make the decisions about the degrees of signal reliability required in cyberspace? Who will make the choice among the different reliability-enhancing mechanisms outlined above? Is it the platform designer, the Linden Labs of this world? If yes, what is the source of their legitimacy to make such design choices? Are users in power because they can vote with their feet, assuming that we’ll see the emergence of competition among different governance regimes, as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What is the role of governments, of law and regulation?

As always, comments appreciated.

The Mobile Identity Challenge – Some Observations from the SFO mID-Workshop

I’m currently in wonderful San Francisco, attending the Berkman Center’s Mobile Identity workshop – a so-called “unconference” — led by my colleagues Doc Searls, Mary Rundle, and John Clippinger. We’ve had very interesting discussions so far, covering various topics ranging from Vendor Relationship Management to mobile identity in developing countries.

In the context of digital identity in general and user-centric identity management systems in particular, I’m especially interested in the question of the extent to which the issues related to mobile ID are distinct from the issues we’ve been exploring in the browser-based, traditionally wired desktop environment. Here’s my initial take on it:

Although mobile identity is best understood as part of the generic concept of digital identity, and despite the fact that identity as such has some degree of mobility by definition, I would argue that mobile (digital) identity has certain characteristics that might (or should) have an impact on the ways we frame and address the identity challenges in this increasingly important part of the digitally networked environment. These characteristics, by and large, may be mapped onto four layers (a small illustrative sketch follows the list below):

  • Hardware layer: First and most obviously, mobile devices are characterized by the fact that we carry them with us from location to location. This physical dimension of mobility has a series of implications for identity management, especially at the logical and content layers (see below), but also with regard to vulnerabilities such as theft and loss. In addition, the devices themselves have distinct characteristics – ranging from relatively small screens and keyboards to limited computing power, but also SIM cards, among other things – that might shape the design of identity management solutions.
  • Logical layer: One of the consequences of location-to-location mobility and multi-mode devices is that identity issues have to be managed in a heterogeneous wireless infrastructure environment, which includes multiple providers of different-generation cellular networks, public and private WiFi, Bluetooth, etc., that use different technologies and standards and operate under different incentive structures. This links back to last week’s discussion about ICT interoperability.
  • Content layer: The characteristics of mobile devices have ramifications at the content layer. Users of mobile devices are limited in what they can do with these devices. Arguably, mobile device users tend to carry out rather specific information requests, transactions, tasks, or the like – as opposed to open-ended, vague, and time-consuming “browsing” activities. This demand has been met on the supply side by application and service providers offering location-based and context-specific content to mobile phone users. This development, in turn, has increased the exchange of location data and contextual information between users/mobile devices and application/service providers. Obviously, the increased relevance of such data adds another dimension to the digital ID and privacy discussion.
  • Behavioral layer: The previous remarks also make clear that the different dimensions of mobility and the characteristics of mobile devices lead to different uses of mobile devices compared to desktop-like devices. The type and amount of personal information, for example, that is disclosed in a mobile setting is likely to be distinct from other online settings. Furthermore, portable devices are lost (or stolen) more often than non-portable devices. These “behavioral” characteristics might vary among cultural contexts – a fact that might add to the complexity of mobile identity management (Colin Maclay, for instance, pointed out that sharing cell phones is a common practice in low-income countries).
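To make the layer mapping above a bit more tangible, here is a small, purely illustrative Python sketch of how attributes from the four layers might be bundled into a single mobile identity context that an identity management or privacy analysis would have to cover. The class and field names are hypothetical and do not refer to any existing identity framework.

    # Illustrative only: a hypothetical bundle of mobile-identity attributes,
    # grouped by the four layers discussed above.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class MobileIdentityContext:
        # Hardware layer: the physical token that travels with the user.
        device_id: str                          # e.g. a device serial number
        sim_id: Optional[str] = None            # identifier stored on the SIM card
        # Logical layer: the network the identity assertion currently travels over.
        network_type: str = "unknown"           # e.g. "cellular", "public WiFi", "Bluetooth"
        network_operator: Optional[str] = None
        # Content layer: context-specific data exchanged with services.
        location: Optional[Tuple[float, float]] = None   # (lat, lon), if disclosed
        requested_service: Optional[str] = None          # e.g. "local search"
        # Behavioral layer: usage patterns that shape the privacy analysis.
        shared_device: bool = False             # is the device shared among several users?

    ctx = MobileIdentityContext(device_id="hypothetical-device-001",
                                network_type="public WiFi",
                                location=(37.77, -122.42),
                                requested_service="local search",
                                shared_device=True)
    print(ctx)

Even this minimal bundle carries far more context, and more privacy-sensitive context, than a typical desktop browser login, which is part of what makes mobile identity management distinct.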

Today, I got the sense that the technologists in the room have a better understanding of how to deal with the characteristics of mobile devices when it comes to digital identity management. At least it appears that the technologists have identified both the opportunities and the challenges associated with these features. I’m not sure, however, whether we lawyers and policy people in the room have fully understood the implications of the above-mentioned characteristics, among others, for identity management and privacy. What does seem plain is that many of the questions we’ve been discussing in the digital ID context get even more complicated as we move towards ubiquitous computing. (One final note in this context: I’m not sure whether we focused too much on mobile phones at this workshop – ID-relevant components of the mobile space such as RFID tags, for instance, remained largely unaddressed, at least in the sessions I attended.)

ICT Interoperability and Innovation – Berkman/St.Gallen Workshop

We have teamed up with the Berkman Center on an ambitious transatlantic research project on ICT interoperability and e-innovation. Today we hosted a first meeting to discuss some of our research hypotheses and initial findings. Professor John Palfrey describes the challenge as follows:

This workshop is one in a series of such small-group conversations intended both to foster discussion and to inform our own work in this area of interoperability and its relationship to innovation in the field that we study. This is among the hardest, most complex topics that I’ve ever taken up in a serious way.

As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful.

As expected, the diverse group of 20 experts had significant disagreements on many of the key issues, especially with regard to the role that governments may play in the ICT interoperability ecosystem, which Dr. Mira Burri Nenova (nccr trade regulation) earlier today characterized as a complex adaptive system. In the wrap-up session, switching from a substantive to a procedural approach, I tested the following tentative framework (to be refined in the weeks to come) that might be helpful to policy-makers dealing with ICT interoperability issues:

  1. In what area and context do we want to achieve interoperability? At what level and to what degree? To what purpose (policy goals such as innovation) and at what costs?
  2. What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to the market forces?
  3. If we decide to pursue a market-driven approach to achieve it, are there any specific areas of concern or problems that we – from a public policy perspective – still might want to address (e.g. disclosure rules aimed at ensuring transparency)?
  4. If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments to support private sector attempts aimed at achieving interoperability (e.g. promotion of development of industry standards)?
  5. If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?

As always, comments are welcome. Last, but not least, thanks to Richard Staeuber and Daniel Haeusermann for their excellent preparation of this workshop.
