
Towards A Best Practice Approach to Internet Filtering? Initial Thoughts After Release of Global ONI Survey


Today I had the great pleasure of celebrating the launch of the most comprehensive and rigorous study to date on state-mandated Internet filtering with my colleagues and friends from the Berkman Center and the OpenNet Initiative. It was an inspired and inspiring conference here at Oxford University, and after a long day of debate it seems plain to me that the filtering reports from 41 countries presented today will keep us busy for weeks and months to come.

The launch has received extensive coverage both in traditional media (see, e.g., the BBC) and in the blogosphere.

In the closing session, Professor John Palfrey, one of the principal investigators (check out his blog), was kind enough to put me on the spot and ask for my take-away points. Given the complexity of the information ecosystem, including its diverse filtering regimes, it seems hard to come up with any kind of conclusion at this early stage. However, among the trickiest problems we might want to think about is the question of whether we – as researchers – want to and should contribute to the development of some sort of best practice model of speech control on the Internet – a model aimed at “minimizing” the harm done to free speech values in a world where filtering and blocking are likely to continue to exist – or whether such an endeavor would be counterproductive under any circumstances, either because it would be immediately hijacked by governments to legitimize filtering or used by repressive regimes to make filtering more effective.

Having only a tentative answer to that question, we at the St. Gallen Research Center have started to brainstorm about ways in which various governance approaches to content filtering – focusing on filtering regimes in European countries and the U.S. – could be systematically mapped, analyzed, and compared. So far, we have come up with a set of six guiding questions (a rough sketch of how such a mapping might be recorded follows the list):

  1. Who is obliged or committed to block or filter content?
  2. How do the obliged actors become aware of the content that has to be blocked?
  3. Who determines what content has to be blocked, and how?
  4. What technical means (such as, e.g., IP blocking, URL filtering, etc.) are used?
  5. What are the procedural requirements and safeguards in the filtering process?
  6. Who sets the rules, under which conditions?
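
To make such a mapping concrete, here is a minimal, purely illustrative sketch in Python of how a country’s filtering regime could be recorded along the six dimensions and lined up for comparison across countries. The field names and the sample entry are my own assumptions for illustration, not part of any ONI or St. Gallen schema.

    from dataclasses import dataclass

    @dataclass
    class FilteringRegime:
        """One country's filtering regime, recorded along the six guiding questions (hypothetical schema)."""
        country: str
        obliged_actors: list          # 1. who must block or filter (e.g. access ISPs, search engines)
        notification_channel: str     # 2. how obliged actors learn what to block (e.g. official blacklist)
        content_decider: str          # 3. who determines what content is blocked, and how
        technical_means: list         # 4. e.g. IP blocking, URL filtering, DNS tampering
        procedural_safeguards: list   # 5. notice, appeal, judicial review, ...
        rule_maker: str               # 6. who sets the rules, under which conditions

    # Hypothetical sample entry, for illustration only.
    example = FilteringRegime(
        country="Examplestan",
        obliged_actors=["access ISPs"],
        notification_channel="confidential government blacklist",
        content_decider="ministry order, no published criteria",
        technical_means=["IP blocking", "URL filtering"],
        procedural_safeguards=[],
        rule_maker="executive decree",
    )

    def compare(regimes, dimension):
        """Line up one dimension across several regimes for side-by-side comparison."""
        return {r.country: getattr(r, dimension) for r in regimes}

    print(compare([example], "technical_means"))  # {'Examplestan': ['IP blocking', 'URL filtering']}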

The second issue we’re currently debating is how different filtering regimes can be evaluated, i.e., what the benchmarks for online speech control might look like. In this context, we’re considering the application of generic characteristics of good regulation – including criteria such as efficiency, due process, transparency, accountability, and expertise, among others – to online filtering regimes.

What are your thoughts on this idea as well as on the basic question of whether we should get involved in a best practice discussion – even (or especially) if we believe in the power of a marketplace of ideas? Comments, as always, most welcome.

Law and Emotion: Possible Impacts of a New Understanding of the Role of Emotion in Law


I had the great pleasure of lecturing at the 2nd Colloquium on Law of the Schweizerische Studienstiftung (Swiss Study Foundation), a foundation aimed at creating an interdisciplinary network among young high-potentials in Switzerland. Daniel Haeusermann was on the planning committee of yesterday’s event, so it may come as little surprise that the Colloquium’s topic was “Law and Emotion”.

It was a lot of fun to present some of my theses on this multi-faceted topic. First of all, the Colloquium’s participants were very well prepared and made the discussion really interesting. Second, my Univ. of St. Gallen colleague Prof. Thomas Geiser did a great job moderating the long day (room with no windows, wonderful sunshine outside…). Third, the Foundation invited a wonderful group of speakers, including Prof. Sandoz and retired Swiss Supreme Court Judge Franz Nyffeler. Last, but not least, it was the first time that I had the opportunity to speak at the same conference as my dad, Dr. Peter Gasser. He gave us a wonderful overview of the current state of psychological and neuroscientific research on emotions.

I started my presentation with the thesis all speakers seemed to agree upon: Research (as well as life experience) suggests that emotions are constitutive and important elements of almost any phenomenon with legal relevance. The emotional component is not limited to the facts of the case before the court, but also includes decision-making processes by prosecutors, judges, legislators, etc. In some instances, the legal system is conscious of the emotional element – and in some instances it even explicitly addresses emotional phenomena, both with regard to norms applicable to the facts of a case (take, e.g., mitigating circumstances in criminal law or emotional injury in tort law) and norms aimed at governing the legal decision-making process (e.g. the duty to recuse oneself in procedural law). In most cases, however, the legal system and its lawyers ignore the role of emotions and/or pretend to be “rational” (this perception of law might be particularly widespread among continental European lawyers).

Against this backdrop, I’m arguing that emotions – and research on emotions – play an important role at two levels, each level consisting of two elements: the analytical level with the elements “phenomenon” (stipulated facts, Sachverhalt) and “legal actors” (judges, attorneys, juries, etc.), and the design level with the components “norms applicable to the facts of the case,” and “norms governing the production of law” (e.g. procedure law). Here’s a rough sketch of the proposed framework:

  • Analytical level:
    • Phenomenon: Using the example of P2P filesharing, I tried to illustrate how a better understanding of the role of emotions (and that means: acknowledging emotions in the first place) makes us better observers and may lead to a deeper understanding of phenomena with legal relevance.
    • Legal actors: Inclusion of insights from research on emotions may make us better legal professionals and thus improve the legal system. I used research on prosecutors’ strong feelings of loyalty as an example.
  • Design level:
    • Norms applicable to facts: New findings about emotions might force us to re-consider existing distinctions and think about new ones. I used the example of adjudicative competence (Dusky standard) as an illustration of this point (see this paper).
    • Norms governing the production of law: New insights might lead to new mechanisms and fora that enable the system’s actors to express, display, channel, balance, … emotional and rational elements of reasoning in a structured and discursive way. Consider, for example, procedural “speed bumps” that would slow down legislation that is driven by fear – using the rapidly-enacted Patriot Act as a case in point.

In sum, law and emotion research and scholarship has an important agenda-setting function. The trickiest question, in my view, is to what extent we (as a society) want to include insights from the sciences of the mind. The heated debate about the existence of free will – triggered by new neuro-biological and neuro-psychological findings – nicely illustrates this normative challenge before us.

My personal view is that we should include as many insights from science as we can as far as the analytical level is concerned. In contrast, I would be much more careful about applying insights from emotion research at the level of norm design. Although it is important to gain a better understanding of emotions at the design level, we would probably be ill-advised to incorporate the latest insights from research on emotions without thoroughly discussing their normative implications on a case-by-case basis.

New OECD Must-Read: Policy Report On User-Created Content


The OECD has just released what – in my view – is the first thorough high-level policy report on user-created content. (Disclosure: I had the pleasure to comment on draft versions of the report.) From the introduction:

The concept of the ‘participative web’ is based on an Internet increasingly influenced by intelligent web services that empower the user to contribute to developing, rating, collaborating on and distributing Internet content and customising Internet applications. As the Internet is more embedded in people’s lives ‘users’ draw on new Internet applications to express themselves through ‘user-created content’ (UCC).

This study describes the rapid growth of UCC, its increasing role in worldwide communication and draws out implications for policy. Questions addressed include: What is user-created content? What are its key drivers, its scope and different forms? What are new value chains and business models? What are the extent and form of social, cultural and economic opportunities and impacts? What are associated challenges? Is there a government role and what form could it take?

No doubt, the latest OECD digital content report (see also earlier work in this context and my comments here) by Sacha Wunsch-Vincent and Graham Vickery of the OECD’s Directorate for Science, Technology and Industry is a must-read that provides plenty of “food for thought” – and probably for controversy as well, as one might assume.

Law, Economics, and Business of IPR in the Digital Age: St. Gallen Curriculum (with help from Berkman)


The University of St. Gallen was the first Swiss university to implement the principles and standards set forth in the so-called Bologna Declaration, which is aimed at harmonizing the European higher education system (more on the Bologna process here). As a result, the St. Gallen law school offers two Master programs for J.D. students: a Master of Arts in Legal Studies and a Master of Arts in Law and Economics.

Recently, I have been heavily involved in the law and economics program (I should mention that St. Gallen doesn’t follow the rather traditional approach to law and economics that is predominant among U.S. law schools; click here for a brief description of the St. Gallen interpretation of law and economics). Today is a special day for the program’s faculty and staff, because the first generation of students enters the final 10th semester of the Bologna-compatible Master program. Arguably, this 10th semester is rather unique as far as structure and content are concerned. Instead of providing the usual selection of courses for graduate students, we have designed what we call an “integrating semester” in which all students are required to take three (but only three) full-semester courses aimed at “integrating” the knowledge, skills, and methods they have acquired over the past few years. All three seminars – together worth 30 credits – are designed and taught by an interdisciplinary group of faculty members from the University of St. Gallen and beyond, including legal scholars, economists, business school profs, technologists, etc. The first seminar, led by Professors Peter Nobel, Thomas Berndt, Miriam Meckel, and Markus Ruffner, is entitled Law and Economics of Enterprises and deals with risk and risk management of multinational corporations. The second seminar, led by Professor Beat Schmid and me, concerns legal, economic, and business aspects of intellectual property rights in the digital age. Professors Hauser, Waldburger, and van Aaken, finally, are teaching the third seminar, entitled Law and Economics of Globalization, which addresses issues such as world market integration of low-income countries, foreign investments, global taxation, and regulation of multinational enterprises.

My seminar on the law and economics of IPR in the digital age starts with a discussion of basic concepts of economic analysis of intellectual property law and a stock-taking of the main IPR problems associated with the shift from an analog/offline to a digital/online environment. A module follows in which we will explore three key topics in greater detail: digital copyright, software and business method patents, and trademarks/domain names. Towards the end of the semester, we will then try to tie all the elements together and develop a cross-sectional framework for the economic analysis and assessment of IPR-related questions in the digitally networked environment. In this context, we will also be visiting the Swiss Federal Institute of Intellectual Property (responsible, among other things, for working on IP legislation in Switzerland), where we will discuss the promises and limits of economic analysis of IP law with the Institute’s senior legal advisor and senior economic advisors.

Clearly, we have a very ambitious semester ahead. I’m particularly thrilled that a wonderful group of colleagues from Europe and abroad is helping me do the heavy lifting (of course, my wonderful St. Gallen team is very involved, too, as usual). My colleague and friend John Palfrey, Clinical Professor of Law at Harvard Law School, the Berkman Center’s executive director, and member of the board of our St. Gallen Research Center for Information Law, will be discussing with us thorny digital copyright issues and future scenarios of digital media. Klaus Schubert, partner at WilmerHale Berlin, will be guiding us through the software patent and business method patent discussion. Last but not least, Professor Philippe Gillieron from the University of Lausanne will be speaking about trademark law in the digital age, focusing on domain name disputes.

All sessions are (hopefully) highly interactive. The students will contribute, among other things, discussion papers, term papers, and group presentations, and will participate in mock trials (one on Google’s recent copyright case in Europe), Oxford debates, and the like. Unfortunately, the Univ. of St. Gallen is still using a closed online teaching system called StudyNet, but if you’re interested in the syllabus, check it out here. Comments, thoughts, suggestions, etc. most welcome!

Entering Collaboration with Fudan University, Shanghai


I’m currently in Shanghai, the most vibrant city I’ve ever visited. In my role as the Academic Coordinator of the University of St. Gallen’s Executive Master of Business Law (MBL-HSG) Program, I’m thrilled to announce that we’ve entered into a collaboration with Fudan University here in Shanghai.

From September 3 to September 8, a group of 40 students of the MBL Program and 30 alumni will be visiting Shanghai and studying at Fudan. Allen Chan, Managing Director of LGT Investment Management (Asia) Ltd. and Senior Financial Consultant to the President of Fudan University, together with an old high school friend of mine, Nathan Kaiser, partner at the Shanghai- and Taipei-based law office Wenfei, have helped me put together an interesting curriculum and a wonderful line-up of speakers. The goal of the Shanghai Module is to offer our students an introduction to “Law and Business in China”.

We will start with an introduction to Chinese culture and business culture, an overview of the Chinese economy (past and present), and an introduction to the Chinese legal system. We will then focus on certain hot topics at the intersection of law, business, and the economy, including WTO accession, IP law, arbitration and litigation, corporate law and corporate governance, and taxation, among others. Our lecturers will include professors from Fudan University’s School of Management and Law Faculty, partners at local and international law firms and accounting firms, as well as in-house counsel of multinational companies in China.

In addition to lectures and in-class discussions, we will also organize field trips and informal dinners with practitioners. We will be visiting the production facilities of Georg Fischer Automotive, a Swiss manufacturer with 1,800 employees in China and among the first companies with extensive (and successful) joint venture experience in the country. We are also working on a field trip to Intel, where the General Counsel Asia will give us his take on IP law issues.

Among our special guests are Professor Anna Wu, former Head of the Equal Opportunity Office in Hong Kong, William Frei, Consul General of Switzerland in Shanghai, and Dr. Hans J. Roth, Consul General of Switzerland in Hong Kong. Lecturers include Michelle Gon, Partner at Baker & McKenzie, Daniel Fink, the Georg Fischer Delegate of the Corporate CEO in China, Regula Hwang, Credit Suisse, and Intel’s Chen Gong. Professor Carl Baudenbacher, president of the EFTA Court in Luxembourg, will lead the Swiss delegation.

Managing Corporate Risks in an E-Environment


My colleague Daniel Haeusermann and I just released a new paper entitled “E-Compliance: Towards a Roadmap for Effective Risk Management.” In the article, which is largely based on consulting work we’ve been doing, we argue that the widespread use of digital communication technology on the part of business organizations leads to new types of challenges when it comes to the management of risks at the intersection of law, technology, and the marketplace. In order to effectively manage these challenges and associated risks in diverse areas such as security, privacy, consumer protection, IP, and content governance, we call for an integrated and comprehensive compliance concept in response to the structural and substantive peculiarities of the digital environment in which corporations – both in and outside the dot-com industry – operate today. See also this post. The conclusion section of the paper reads as follows:

Through significant efforts, the legal system has adjusted to the changes in the information and communications technology of daily corporate life—changes at the intersection of the market, technology, and law. Organizations must make adjustments on their part as well in order to deal with the consequences resulting from these changes in the legal system. The observation that led to this essay was that these adjustments represent a greater challenge than the already decreasing entropy surrounding concepts such as “e-commerce law” or “cyberlaw” would suggest. Our initial foray into the concept, characteristics, responsibilities and organizational guiding principles of e-Compliance confirms this observation.

E-Compliance, as discussed in this article, is confronted with the phenomenon of a close interconnection between law and technology, a prominent dynamization of the law, massive internationalization of issues and legal problems, as well as a strong increase in the significance of soft law. These characteristics, which in part may also apply to traditional areas of compliance such as financial market regulation, call in their interplay for the further development of compliance concepts as well as adaptation of the affected aspects of corporate organization. Due to the increasing amalgamation of corporate organizational nexus and ICT, the symbiotic relations between traditional compliance and e-Compliance will be increasingly amplified. The view that e-Compliance represents merely a single risk area among the many of compliance is therefore outdated in our opinion. E-Compliance is actually a multidimensional and multidisciplinary task, although there are certainly areas of law that are particularly affected by digitization (or also which particularly impact digitization) and therefore are of particular importance for the field of e-Compliance.

Thus, in conclusion, the authors do not posit a special “e-Sphere” within or without existing compliance departments. Rather, we argue for an integrated and comprehensive compliance concept that appropriately makes allowance for the structural and substantive peculiarities of e-Compliance as outlined in this essay and stays abreast with the pace of digitization.

Please contact Daniel or me if you have comments.

Ian Brown Comments On IIPA’s Copyright Recommendations


My colleague and friend Dr Ian Brown, co-leader of the EUCD best practice project (check out the wiki and the project report), has posted a great article written for the EDRI-gram on the International Intellectual Property Alliance’s (IIPA) recent recommendations to the US Trade Representative’s 2007 review of global copyright laws. Ian concludes:

It is not surprising that US companies lobby to change global laws that would increase their profits. On past performance, the US government is likely to take careful note of their recommendations. But European nations should robustly defend their right to shape copyright policy to meet the needs of their own citizens, and not just those of large copyright holders.

I hope the EUCD best practice project mentioned above and similar initiatives support European policy makers in identifying the leeway they have under the WIPO Internet Treaties and the EUCD when shaping their copyright and DRM frameworks.

Social Signaling Theory and Cyberspace


Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and Berkman Center Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory that was developed in evolutionary biology and has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book [in German].)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable. And what are the mechanisms that we have developed to ensure “honesty” of signals? It is obvious that these questions are extremely relevant from an Internet governance perspective – especially (but not only) vis-à-vis the enormous scale of online fraud and identity theft that occurs in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in what contexts we have an interest in signal reliability and honest signaling, respectively, and where not. This question is somewhat counterintuitive because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations, e.g., that I as a male player can engage in a role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in very diverse contexts such as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, where not?

My second point: Once we have defined contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches to increase reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals play an increasingly important role (e.g. the richness of virtual items produced by a player as a signal of the user’s skills, wealth, and available time).
However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit equilibrium of a dishonest signaler (a toy payoff sketch follows the examples below). A few examples to illustrate this point:

  • Technology/code/design: Increasing the punishment costs of the sender by building efficient reputation systems based on persistent digital identities; use of aggregation and syndication tools to collect and “pool” experiences among many users to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals, where reliability is often easier to assess; see also here, p. 9).
  • Market-based approach: Certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increasing signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g. they might signal “I can afford this” as opposed to the desired signal of “I’m willing to pay for the service because I have honest intentions.”]
  • Law-based approach: Law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules such as requiring the online provider of goods to provide test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]
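
To make the cost/benefit framing slightly more tangible, here is the toy payoff sketch mentioned above: a minimal, purely illustrative model in Python of a dishonest signaler’s expected payoff. The parameter names and numbers are my own assumptions, not drawn from the workshop or the signaling literature; the point is simply that each of Lessig’s four modes can be read as moving one of these parameters.

    def expected_payoff_dishonest(benefit, production_cost, detection_prob, penalty):
        """Toy expected payoff of sending a dishonest signal (illustrative only).

        Dishonest signaling stops paying off once the production cost plus the
        expected punishment (detection probability x penalty) exceeds the benefit.
        """
        return benefit - production_cost - detection_prob * penalty

    # Baseline: cheap dishonest signaling and weak policing -> fraud pays.
    print(expected_payoff_dishonest(benefit=100, production_cost=5,
                                    detection_prob=0.05, penalty=200))   # 85.0

    # Reputation systems and pooled user experiences raise detection_prob;
    # registration or certification raises production_cost; liability rules
    # raise the penalty -> the same dishonest signal no longer pays.
    print(expected_payoff_dishonest(benefit=100, production_cost=30,
                                    detection_prob=0.5, penalty=200))    # -30.0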

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.

Finally, a third point in brief: Who will make the decisions about the degrees of required signal reliability in cyberspace? Who will make the choice among different reliability-enhancing mechanisms outlined above? Is it the platform designer, the Linden Labs of this world? If yes, what is their legitimacy to make such design choices? Are the users in power by voting with their feet – assuming that we’ll see the emergence of competition among different governance regimes as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What’s the role of governments, of law and regulation?

As always, comments appreciated.

Haeusermann on the Laws of Virtual Worlds


My colleague and collaborator Daniel Markus Haeusermann has sketched the contours of what he calls a “tentative taxonomy of legal scholarship and virtual worlds” over at his Information Law Possum blog. He differentiates among four basic categories that might be subjects of inquiry: offline law as applied to legal issues of MMORPGs in our world; our law as applied to things that happen within virtual worlds; the law of virtual worlds; and the relation between the law of virtual worlds and our law. Read on here.

I should also mention that Daniel recently published a terrific law review article in the Aktuelle Juristische Praxis on the possibilities and limitations of a legal approach to the protection of emotions related to faith – using the example of legal disputes associated with the controversial Mohammed cartoons. I hope Daniel will soon provide an English summary of the article (which can be understood as a contribution to law & emotion scholarship) on his weblog. Update: The English summary is now available (thanks, Daniel).

The Mobile Identity Challenge – Some Observations from the SFO mID-Workshop


I’m currently in wonderful San Francisco, attending the Berkman Center’s Mobile Identity workshop – a so-called “unconference” — led by my colleagues Doc Searls, Mary Rundle, and John Clippinger. We’ve had very interesting discussions so far, covering various topics ranging from Vendor Relationship Management to mobile identity in developing countries.

In the context of digital identity in general and user-centric identity management systems in particular, I’m especially interested in the extent to which the issues related to mobile ID are distinct from the issues we’ve been exploring in the browser-based, traditionally wired desktop environment. Here’s my initial take on it:

Although mobile identity is best understood as part of the generic concept of digital identity, and despite the fact that identity as such has some degree of mobility by definition, I would argue that mobile (digital) identity has certain characteristics that might (or should) have an impact on the ways we frame and address the identity challenges in this increasingly important part of the digitally networked environment. These characteristics, by and large, may be mapped onto four layers (a rough illustrative sketch follows the list below).

  • Hardware layer: First and most obviously, mobile devices are characterized by the fact that we carry them with us – from location to location. This physical dimension of mobility has a series of implications for identity management, especially at the logical and content layers (see below), but also with regard to vulnerabilities such as theft and loss. In addition, the devices themselves have distinct characteristics – ranging from relatively small screens and keyboards to limited computing power, but also SIM cards, among other things – that might shape the design of identity management solutions.
  • Logical layer: One of the consequences of location-to-location mobility and multi-mode devices is that identity issues have to be managed in a heterogeneous wireless infrastructure environment, which includes multiple providers of different-generation cellular networks, public and private WiFi, Bluetooth, etc., that are using different technologies and standards, and are operating under different incentive structures. This links back to our last week’s discussion about ICT interoperability.
  • Content layer: The characteristics of mobile devices have ramifications at the content layer. Users of mobile devices are limited in what they can do with these devices. Arguably, mobile device users tend to carry out rather specific information requests, transactions, tasks, or the like – as opposed to open, vague and time-consuming “browsing” activities. This demand has been met on the supply-side with application and service providers offering location-based and context-specific content to mobile phone users. This development, in turn, has increased the exchange of location data and contextual information among user/mobile device and application/service providers. Obviously, the increased relevance of such data adds another dimension to the digital ID and privacy discussion.
  • Behavioral layer: The previous remarks also make clear that the different dimensions of mobility and the characteristics of mobile devices lead to different uses of mobile devices when compared to desktop-like devices. The type and amount of personal information, for example, that is disclosed in a mobile setting is likely to be distinct from other online settings. Furthermore, portable devices are lost (or stolen) more often than non-portable devices. These “behavioral” characteristics might vary among cultural contexts – a fact that might add to the complexity of mobile identity management (Colin Maclay, for instance, pointed out that sharing cell phones is a common practice in low-income countries).
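
As a way of making the four layers slightly more concrete, here is the rough sketch mentioned above: a small, purely illustrative Python model of layer-specific attributes an identity management system might have to reason about for a single mobile session. All field names and the sample rule are hypothetical assumptions of mine, not tied to any actual identity management specification discussed at the workshop.

    from dataclasses import dataclass

    @dataclass
    class MobileIdentityContext:
        """Hypothetical per-session context for a mobile identity decision (illustrative only)."""
        # Hardware layer: the physical device carried from location to location
        device_type: str               # e.g. "feature phone", "smartphone"
        sim_present: bool
        reported_lost_or_stolen: bool
        # Logical layer: the heterogeneous wireless infrastructure currently in use
        network: str                   # e.g. "3G operator A", "public WiFi", "Bluetooth"
        # Content layer: location- and context-specific data exchanged with providers
        location_shared: bool
        # Behavioral layer: usage patterns that differ from the desktop world
        device_shared_by_household: bool

    ctx = MobileIdentityContext(
        device_type="feature phone", sim_present=True, reported_lost_or_stolen=False,
        network="public WiFi", location_shared=True, device_shared_by_household=True,
    )

    # A policy engine might, for instance, lower the assurance level for a shared
    # device on an untrusted network (an illustrative rule, not a recommendation).
    assurance = "low" if (ctx.device_shared_by_household or ctx.network == "public WiFi") else "normal"
    print(assurance)  # low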

Today, I got the sense that the technologists in the room have a better understanding of how to deal with the characteristics of mobile devices when it comes to digital identity management. At least it appears that technologists have identified both the opportunities and challenges associated with these features. I’m not sure, however, whether we lawyers and policy people in the room have fully understood the implications of the above-mentioned characteristics, among others, with regard to identity management and privacy issues. It only seems plain that many of the questions we’ve been discussing in the digital ID context get even more complicated when we move towards ubiquitous computing. (One final note in this context: I’m not sure whether we focused too much on mobile phones at this workshop – ID-relevant components of the mobile space such as RFID tags, for instance, have remained largely unaddressed – at least in the sessions I attended.)
