
Archive for the 'digital institutions' Category

Information Quality and Reputation


Heavily influenced by the work of Jean Nicolas Druey and Herbert Burkert, among others, I’ve been working on information quality issues in various contexts for the past eight years or so. Today, I have the pleasure of attending the Yale Information Society Project’s conference on Reputation Economies in Cyberspace and of contributing to a panel on reputational quality and information quality. Essentially, I would like to share three observations based on previous research projects. The three points I will make later today are:

  1. From both a theoretical and an empirical viewpoint, information quality is a horse that is difficult to catch. As a complicating factor, information quality in the context of reputation systems is a meta-question: it concerns the quality of statements about the qualities of a person, service, advice, or the like. It is therefore important to be specific about the particular aspect of the quality challenge that is up for discussion in a given quality discourse. A taxonomy of quality problems in the context of online reputation might be a good first step. Such a taxonomy needs to conceptualize the informational quality of reputation as a composite of syntactic (data), semantic (meaning), and pragmatic (effects) factors; a schematic sketch follows this list. [We will present an initial draft of such a taxonomy at the conference.]
  2. While addressing specific quality issues, it’s important to consider the full range of possible approaches (“tools”) that are available. The role of market-based approaches (“pricing”, “incentives”) has already been explored in detail in the context of reputation systems. We also have a growing understanding about the social norms at work (research on online identity). As far as technology (“platform design”) is concerned, insights from social signaling theory might be a source of inspiration (e.g. conditions to foster honest signaling). Largely unexplored, by contrast, is the substantive (e.g. privacy) or procedural (e.g. due process) role that law may play in the context of a blended approach.
  3. Information quality conflicts can’t be avoided, only managed. Each “regulatory” approach mentioned above comes at a cost and has inherent (factual and/or normative) limitations. A general limitation is the contextual and subjective nature of the human information-processing and decision-making processes (e.g. buying a digital camera) in which the quality of statements about quality (reputation) plays a role. The case of teenagers might be illustrative, given our knowledge about the neurobiological state of development of the brain areas (prefrontal cortex) involved in information selection, interpretation, and evaluation. But the cognitive biases of adults, too, mark the limits of what can be achieved at the level of governance of reputation systems.
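To make the first point a bit more tangible, here is a minimal, purely illustrative sketch (in Python) of how such a taxonomy might be organized around the three factors named above. The class names and example entries are hypothetical placeholders of mine, not the draft we will present at the conference.

```python
from dataclasses import dataclass
from enum import Enum

class QualityDimension(Enum):
    """The three composite factors of informational quality named above."""
    SYNTACTIC = "data"      # integrity of the data itself
    SEMANTIC = "meaning"    # accuracy of what a statement asserts
    PRAGMATIC = "effects"   # consequences for the recipient's decision

@dataclass
class QualityIssue:
    """One entry in a taxonomy of reputation-quality problems (hypothetical)."""
    dimension: QualityDimension
    description: str
    example: str

# Hypothetical entries for illustration only.
taxonomy = [
    QualityIssue(QualityDimension.SYNTACTIC,
                 "corrupted or duplicated rating records",
                 "the same review counted twice by an aggregator"),
    QualityIssue(QualityDimension.SEMANTIC,
                 "ratings that misstate the underlying experience",
                 "a retaliatory one-star review of a flawless transaction"),
    QualityIssue(QualityDimension.PRAGMATIC,
                 "accurate ratings that still mislead a decision",
                 "a high average score hiding high variance for a camera buyer"),
]

for issue in taxonomy:
    print(f"[{issue.dimension.name}] {issue.description} (e.g. {issue.example})")
```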

Comments, as always, welcome.

Law, Behavior, and the Brain Conference


I’m currently on my way to far-away Olympic Valley, CA, where I have the great pleasure of attending the Gruter Institute for Law and Behavioral Research Conference on Law, Behavior, and the Brain. The conference, led by Monika Gruter Cheney, brings together a terrific interdisciplinary group of roughly 40 experts in areas such as evolutionary biology, neuroscience, behavioral economics, and – yes – a number of legal scholars. Over four days, we will explore topics such as “State of Play: Law, Behavioral Biology and Neuroscience,” “Rationality, Emotions and Moral Judgments in Humans and Other Species,” “Property and Economics,” and “Results in Neuroeconomics and Experimental Economics,” to list just a few sessions. I’m much looking forward to learning from all the conference contributors, including Paul Zak, Carl Bergstrom, Kevin McCabe, John Clippinger, Bruce Hay, Oliver Goodenough, Susan Bandes, Larry Frolik, Sara Beale, and Terry Maroney, among many others.

Here are the abstracts of my contributions to the conference:

1) Panel on Law & Emotions

A recent interdisciplinary conference in Switzerland was dedicated to law & emotion scholarship. In my brief presentation, I would like to answer the apparently trivial question asked by a conference participant: “Given that it isn’t much of a surprise that even judges, prosecutors, etc. have emotions, and that emotions therefore play a role in decision-making processes with legal relevance, what is really the contribution of law & emotion research and scholarship? What’s new about it?” I will try to answer this question in a systematic way, arguing that law & emotion research has (or might have) an impact on at least two levels, each consisting of two elements: the analytical level, with the elements “phenomenon (stipulated facts)” and “legal actors,” and the design level, with “norms applicable to the facts of the case” and “norms governing the production of law.” I will use a few stories – ranging from file-sharing to the U.S. Patriot Act – to illustrate these points.

2) Presentation on Digital Institutions / Social Signaling Theory

Social signals play an important role in defining social relations and structuring societies, both in the on- and offline worlds. In my presentation, I will focus on the role of social signaling in the digitally networked environment. More precisely, I will explore the promises and limitations of social signaling theory as applied to cyberspace, including digital institutions. In essence, I will address three questions: First, in what online contexts do we have an interest in signal reliability and honest signaling? Second, what regulatory strategies and approaches (using Lessig’s framework of four modes of regulation) can increase the reliability of social signals? And third, who will make the decisions about the degrees of required signal reliability in cyberspace?

3) New Insights into Property Panel

Last year, my presentation focused on a new generation of neuroscience-informed arguments aimed at explaining large-scale file-sharing over P2P networks. This year, my contribution to the property panel will focus not on explaining a presumably illegal activity, but on a socially desirable one: in my talk, entitled “Social economics of collaborative creativity,” I will provide a brief overview of the literature that seeks to explain why thousands of volunteers work together in loose-knit networks to peer-produce an online encyclopedia (Wikipedia), to come up with improved versions of an open-source web browser (e.g. Mozilla), or to create shared open content platforms, to name just three examples. The presentation ends with the outline of a research agenda.

Promises and Limits of a Law and Economics Approach to IPR in Cyberage


Over the past few weeks, our graduate students at the Univ. of St. Gallen have done quite some heavy lifting in the three courses that I described here. In my own course on the law and economics of intellectual property rights in the digital age, we’ve completed the second part of the course, which consisted of three modules dealing with digital copyright, software and business methods patents, and trademark/domain name disputes. We were very fortunate to have the support of three wonderful guest lecturers. Professor John Palfrey taught a terrific class on digital media law and policy (find his debriefing and putting-into-context here). Klaus Schubert, partner with WilmerHale, provided an excellent overview of the current state of software patenting in the EU, the U.S., and Japan, and made us think about the hard policy questions up for discussion. Last week, Professor Philippe Gillieron from the Univ. of Lausanne discussed with us the legal and economic aspects of domain name disputes and ways to solve them (the focus was on the UDRP – in my view a particularly interesting topic when analyzed through the lens of new institutional economics theory; see also here for variations on this theme).

In the last session before “flyout” week, Silke Ernst and I took a first cut at a synthesis aimed at tying together several of the core themes we’ve been discussing so far. At the core of the session was the question of the extent to which the law & economics approach can help us deal with the complex IPR questions that are triggered by the transition from an analog/offline to a digital/online information environment. The students contributed to the session by presenting their views on the promises of, and limits on, a law & economics approach to IPR in the digital age. Using the time traveling from Oxford back to Zurich, I have written up my recollection of the in-class discussion as follows (alternative interpretations, of course, encouraged and welcome) – starting with the argument that the law & economics approach to IPR serves at least two functions:

  • On the one hand, it provides a toolset that helps us frame, analyze, and evaluate some of the complex phenomena we observe in cyberspace (such as, for instance, large-scale file-sharing over P2P networks or user-created content), and enables us to gain a better understanding of the interaction between existing rules and norms and these phenomena. We might call this the “analytical function” of law & economics (this aspect comes close to – but is in my view not exactly identical with – what has traditionally been described as the “positive” strand of discussion in law & economics).
  • On the other hand, law & economics may guide us at the design level (again, this comes close to what has been termed “normative” law & economics; for reasons I don’t want to discuss here, I won’t work with this distinction in the present context). First, it can help us identify the need for law reform by showing that existing rules have a negative impact on social welfare. Here, the design function intersects with the previously mentioned analytical function. Second, law & economics provides a consistent framework with which to evaluate the impact of alternative means of regulation on the (economic) behavior of individuals and to compare the costs and benefits of different approaches aimed at solving a particular problem (a toy sketch of such a comparison follows this list).
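As a purely illustrative sketch of the design function’s cost-benefit comparison, consider ranking alternative regulatory options by their net effect on social welfare. The policy labels and all numbers below are invented for the sketch; they are not estimates of anything.

```python
# Invented policy alternatives and payoff figures, purely to illustrate the
# design function: rank alternatives by net social welfare (benefits - costs).
policies = {
    "status quo copyright regime": {"benefits": 100.0, "costs": 80.0},
    "shorter term plus levy":      {"benefits": 95.0,  "costs": 60.0},
    "strict DRM mandate":          {"benefits": 105.0, "costs": 110.0},
}

def net_welfare(p: dict) -> float:
    """Net social welfare of a policy option."""
    return p["benefits"] - p["costs"]

# Print the options from most to least attractive under this (toy) criterion.
for name, p in sorted(policies.items(), key=lambda kv: -net_welfare(kv[1])):
    print(f"{name}: net welfare {net_welfare(p):+.1f}")
```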

At a more granular level, we might identify the following promises and limitations of a law & economics approach with regard to the respective functionality:
Analytical function:

  • Promises: a coherent framework, a consistent and shared set of criteria, rational and quasi-objective analysis, …
  • Limitations: bounded rationality and areas of non-rational behavior, lack of transparency regarding underlying causalities, limited possibilities to quantify phenomena, lack of empirical data, …

Design function:

  • Promises: Cost-benefits analysis of alternative policy choices, taking into account perspectives of different actors in an ecosystem, at least ideal-type predictions based on models, …
  • Limitations: Complexity of real-life situations, non-economic perspectives, motives, and effects, non-economic values, …

We reached some sort of consensus that the law & economics approach indeed provides a great toolset for analyzing at least some of the trickiest IPR-related policy questions in cyberspace. However, the large majority also seemed to agree that some of the limitations of such an analysis become particularly visible in the digitally networked environment, with phenomena such as commons-based peer production of content driven by intrinsic motivations. Most of us also agreed that it would be dangerous to attempt to answer the IPR policy questions only against the backdrop of law & economics theory. Indeed, many of the decisions to be made in this space ultimately include choices about core values of our society that do not easily translate into the frameworks of law & economics, such as informational justice, equal access, participatory culture, or semiotic democracy.

I’m very much looking forward to continuing the discussion about the role of law and economics in the digital age with my colleagues, the teaching team, and – most importantly – with the wonderful group of students enrolled in this seminar.

Law, Economics, and Business of IPR in the Digital Age: St. Gallen Curriculum (with help from Berkman)


The University of St. Gallen was the first Swiss university to implement the principles and standards set forth in the so-called Bologna Declaration, aimed at harmonizing the European higher education system (more on the Bologna process here). As a result, the St. Gallen law school offers two Master programs for J.D. students: a Master of Arts in Legal Studies and a Master of Arts in Law and Economics.

Recently, I have been heavily involved in the law and economics program. (I should mention that St. Gallen doesn’t follow the rather traditional approach to law and economics that is predominant among U.S. law schools; click here for a brief description of the St. Gallen interpretation of law and economics.) Today is a special day for the program’s faculty and staff, because the first generation of students is entering the final, 10th semester of the Bologna-compatible Master program. Arguably, this 10th semester is rather unusual as far as structure and content are concerned. Instead of providing the usual selection of courses for graduate students, we have designed what we call an “integrating semester,” in which all students are required to take three (but only three) full-semester courses aimed at “integrating” the knowledge, skills, and methods they have acquired over the past few years. All three seminars – together worth 30 credits – are designed and taught by an interdisciplinary group of faculty members from the University of St. Gallen and beyond, including legal scholars, economists, business school profs, technologists, etc. The first seminar, led by Professors Peter Nobel, Thomas Berndt, Miriam Meckel, and Markus Ruffner, is entitled Law and Economics of Enterprises and deals with the risks and risk management of multinational corporations. The second seminar, led by Professor Beat Schmid and me, concerns the legal, economic, and business aspects of intellectual property rights in the digital age. Professors Hauser, Waldburger, and van Aaken, finally, teach the third seminar, entitled Law and Economics of Globalization, addressing issues such as the world market integration of low-income countries, foreign investment, global taxation, and the regulation of multinational enterprises.

My seminar on the law and economics of IPR in the digital age starts with a discussion of basic concepts of the economic analysis of intellectual property law and a stock-taking of the main IPR problems associated with the shift from an analog/offline to a digital/online environment. Then follows a module in which we explore three key topics in greater detail: digital copyright, software and business methods patents, and trademarks/domain names. Towards the end of the semester, we will try to tie all the elements together and develop a cross-sectional framework for the economic analysis and assessment of IPR-related questions in the digitally networked environment. In this context, we will also visit the Swiss Federal Institute of Intellectual Property (charged, among other things, with working on IP legislation in Switzerland), where we will discuss the promises and limits of economic analysis of IP law with the Institute’s senior legal advisor and senior economic advisors.

Clearly, we have a very ambitious semester ahead. I’m particularly thrilled that a wonderful group of colleagues from Europe and abroad is helping me do the heavy lifting (my wonderful St. Gallen team is, of course, very involved too, as usual). My colleague and friend John Palfrey, Clinical Professor of Law at Harvard Law School, the Berkman Center’s executive director, and member of the board of our St. Gallen Research Center for Information Law, will be discussing with us thorny digital copyright issues and future scenarios for digital media. Klaus Schubert, partner with WilmerHale Berlin, will be guiding us through the discussion of software and business methods patents. Last but not least, Professor Philippe Gillieron from the University of Lausanne will be speaking about trademark law in the digital age, focusing on domain name disputes.

All sessions are (hopefully) highly interactive. The students will contribute, among other things, discussion papers, term papers, and group presentations, and will participate in mock trials (one on Google’s recent copyright case in Europe), Oxford-style debates, and the like. Unfortunately, the Univ. of St. Gallen is still using a closed online teaching system called StudyNet, but if you’re interested in the syllabus, check it out here. Comments, thoughts, suggestions, etc. most welcome!

Social Signaling Theory and Cyberspace


Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and Berkman Center Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory that was developed in evolutionary biology and has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book [in German].)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable: what are the mechanisms we have developed to ensure the “honesty” of signals? These questions are obviously highly relevant from an Internet governance perspective – especially (but not only) given the enormous scale of online fraud and identity theft in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in which contexts we have an interest in signal reliability and honest signaling, and in which we do not. This question is somewhat counterintuitive, because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations – e.g., that I as a male player can engage in role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in contexts as diverse as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, and where not?

My second point: Once we have defined the contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches for increasing reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals could play an increasingly important role (e.g. the richness of virtual items produced by a player as a signal of the user’s skills, wealth, and available time).
However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – by lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shift the cost/benefit equilibrium of a dishonest signaler (a toy numerical sketch of this equilibrium follows the list below). A few examples to illustrate this point:

  • Technology/code/design: increasing the sender’s punishment costs by building efficient reputation systems based on persistent digital identities; using aggregation and syndication tools to collect and “pool” experiences among many users in order to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow searching for “common ground” signals, whose reliability is often easier to assess; see also here, p. 9).
  • Market-based approach: certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increasing signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g., they might signal “I can afford this” as opposed to the desired “I’m willing to pay for the service because I have honest intentions.”]
  • Law-based approach: law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules, such as requiring an online provider of goods to furnish test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]
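Here is the toy numerical sketch promised above: a dishonest signaler’s expected-payoff calculus, with each regulatory mode read as shifting one term of the equation. All parameter values are invented for illustration.

```python
def dishonest_signaling_pays(reward: float, production_cost: float,
                             detection_prob: float, punishment: float) -> bool:
    """Toy model: a rational sender signals dishonestly only if the
    expected payoff of doing so is positive."""
    expected_payoff = reward - production_cost - detection_prob * punishment
    return expected_payoff > 0

# Baseline: cheap fake identities and a weak reputation system (invented numbers).
print(dishonest_signaling_pays(reward=10, production_cost=1,
                               detection_prob=0.1, punishment=20))  # True

# Technology (persistent identities, pooled user reports) raises the odds of
# detection; law (liability, disclosure rules) raises the punishment.
print(dishonest_signaling_pays(reward=10, production_cost=1,
                               detection_prob=0.6, punishment=20))  # False
```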

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.

Finally, a third point in brief: Who will make the decisions about the degrees of required signal reliability in cyberspace? Who will choose among the different reliability-enhancing mechanisms outlined above? Is it the platform designers, the Linden Labs of this world? If so, what is their legitimacy to make such design choices? Are users in power because they can vote with their feet – assuming that we’ll see the emergence of competition among different governance regimes, as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What’s the role of governments, of law and regulation?

As always, comments appreciated.

Formation of Digital Institutions: Some Comments


I just returned from 02138, where I attended a working conference on digital institutions sponsored by the Berkman Center and the Gruter Institute and chaired by my colleagues Oliver Goodenough and John Clippinger. During two (rather intense) workshop days, an impressive line-up of panelists and discussants representing various backgrounds and areas of research – ranging from neuroeconomics and biology to virtual world development – shared their knowledge about “digital institutions” with one another.

I was asked to frame the theme of a panel with Colin Maclay, Mike Best, and Iqbal Quadir on the formation of institutions in developed and developing economies. I started with a series of questions to be discussed and issues to be considered. First, I offered some thoughts on terminology and key distinctions that I considered helpful for the discourse. Second, I briefly touched upon core elements that are likely to have an impact on the formation of digital institutions, emphasizing the importance of pre-existing institutional arrangements.

Here are my notes:

1) Introducing basic distinctions: What do we mean by the term “institution”?

  • Reviewing the literature in sociology, economics, and law, it is far from clear what the term means and how it differs from related terms such as “organizations” or “firms.” Generally, definitions of institutions include at least two elements: (a) institutions consist of a set of rules, plus (b) some sort of enforcement regime.
  • Within this broad definition, however, various types of institutions can be distinguished, for example:
    • internal institutions = community enforcement of rules (e.g. Wikipedia with its neutral point of view rule, enforced through community/peers; P2P file-sharing with strong norms re: sharing [“charismatic code” – cf. Strahilevitz])
    • external institutions = government enforcement of rules (e.g.: Intellectual property rights)
    • informal institutions = emerge without explicit agreements (e.g.: Blogger ethics, e.g. “Identify and link to sources whenever feasible.”)
    • formal institutions = based on deliberative formation processes (e.g.: ICANN)
  • Similarly, the formation of institutions varies:
    • spontaneous emergence (e.g.: online discussion groups)
    • authoritative formation (e.g.: Companies providing virtual world platforms are created by entrepreneurs (= authority))
  • Why do we (or should we) care about definitions?
    • Epistemological argument: the way we conceptualize “institutions” shapes the way we perceive (digital) institutions as a subject of our research
    • To define means to differentiate: in order to deepen our understanding of the formation of institutions, we need some degree of granularity. Different types of institutions, e.g., emerge and evolve in different ways
    • Putting together various pieces of knowledge about the formation of institutions from different areas of research, in order to use it in the context of digital institutions, also requires a relatively fine-grained picture of the concept “institutions”
  • However, traditional distinctions may not map cleanly onto the digitally networked environment. Example: virtual worlds (such as Second Life) are at the same time…
    • …internal institutions (avatars create & enforce their own rules) and external institutions (e.g. Linden Lab using EULAs, IP law, etc.)
    • …informal institutions (rules of behavior within the game emerge in part without explicit agreements) and formal institutions (e.g. the enactment of “in-world” laws/codes in deliberative processes)
  • In fact, a first research question is to what extent the distinctions of the analog/offline world translate into the digital/online world (a schematic encoding of the distinctions above follows this list).
    • Possible starting point: Institutional approach to “commons-based peer production” of content (social production beyond hierarchies and price signals; cf. Benkler)
  • For this panel, we use a broad and open terminology; a pragmatic approach. “Digital institution” here includes diverse online phenomena such as Wikipedia, Flickr, eBay, and Second Life… but also offline institutions operating in the digital environment (e.g. mobile phone networks)
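As promised above, here is a schematic, purely illustrative encoding of these distinctions, showing how a virtual world can straddle both axes at once. The categories come from the notes; the encoding itself is my own sketch.

```python
from dataclasses import dataclass
from enum import Enum

class Enforcement(Enum):
    INTERNAL = "community-enforced"    # e.g. Wikipedia's neutral point of view rule
    EXTERNAL = "government-enforced"   # e.g. intellectual property rights

class Formation(Enum):
    INFORMAL = "emerges without explicit agreement"  # e.g. blogger ethics
    FORMAL = "deliberative formation process"        # e.g. ICANN

@dataclass
class Institution:
    name: str
    enforcement: set  # a digital institution may combine several categories
    formation: set

second_life = Institution(
    name="Second Life",
    # avatars enforce their own rules AND Linden Lab relies on EULAs/IP law
    enforcement={Enforcement.INTERNAL, Enforcement.EXTERNAL},
    # emergent in-game norms AND deliberately enacted "in-world" codes
    formation={Formation.INFORMAL, Formation.FORMAL},
)

print(f"{second_life.name} spans {len(second_life.enforcement)} enforcement "
      f"and {len(second_life.formation)} formation categories at once")
```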

2) Formation of digital institutions: What are important factors that shape the formation of digital institutions?

At least three interrelated and interacting, but analytically distinct elements:

  1. Preexisting institutional arrangements, both “analog” and “digital”
  2. Technology (availability, pricing, development, …); physical and logical layer
  3. Content-related aspects (content “supply” and “demand”; content diversity, quality, etc.; but also the human factor: the ability to process information, knowledge, and entertainment, incl. level of education, literacy, etc.)

Focus on (1) = importance of preexisting institutional arrangements

  • “Analog” institutions have a huge impact on the formation of digital institutions; analog and digital are not clearly separable
    • Example 1: the legal system. Consider what is required to establish a virtual world such as Second Life [(1) establishing Linden Research, Inc.; (2) creating the platform – which requires everything from corporate law to contract law to IPR]
    • Example 2: the economic system. Consider the role of the financial system, e.g. micro-credits in low-income countries; cf. the Grameen village phone project
  • “Digital” institutional arrangements, too, have an impact on the formation of new digital institutions
    • Internet: esp. digital institutions at the logical layer of the Net providing services at the content layer (fundamental example: ICANN – the domain name system)
    • Offline, but digital: phone networks (incl. mobile) and other infrastructure as prerequisites for digital entrepreneurs (here, an intersection with the second element: technological development)
  • These few remarks illustrate the interdependency and complexity at both the theoretical and the practical level. Now: digging deeper in case-study mode (Iqbal, Mike, Colin) – as one way to deal with that complexity.

So far my remarks. After sessions with formal presentations covering issues such as trust and reciprocity, social signaling, stabilizing cooperation, dispute resolution, institution formation, virtual economics, and authentication, and after several hours of brainstorming, we came up with some sort of loosely joined research agenda as well as two or three more specific project ideas. In my perception, the multi-layered discussions centered on three core questions or perspectives: (1) What have we learned, and what can we learn, from existing and evolving digital institutions? (2) How could we build or use digital institutions to address or solve specific problems? (3) How can we change the offline environment to support the formation of digital institutions? The two days made clear that in each area we are only at the beginning of a long but exciting and promising conversation.

Professor Fisher Presents Conclusions on OECD Digital Content Conference


Professor Terry Fisher has the difficult job, as the Day 1 Rapporteur, of presenting the OECD conference conclusions in 10 minutes. Here are the main points he made a few minutes ago:

A. Points of agreement (or at least substantial consensus)

(a) Descriptive level:
  • We’re entering a participatory culture: active users, an explosion of blogs; differences in web usage.

(b) Predictive level:
  • Consensus that we’ll see a variety of applications flourish; the shift to business models that include Internet distribution will have long-tail effects and increase diversity.

(c) Level of aspiration:
  • We should aim for a harmonized, global Internet – a single, harmonized global approach (vs. competing legal/regulatory frameworks).
  • Governments should stay out, but there was broad consensus on six areas where governmental intervention is desirable: (1) stimulating broadband; (2) fostering universal access (bridging the digital divide); (3) educating consumers; (4) engaging in consumer protection against fraud and spam; (5) fostering competition; (6) promoting IP so as to achieve an optimal balance.
  • We should attempt to achieve “business model neutrality” (TF’s personal comment: an appealing idea, but infeasible; there’s no way to achieve it).

B. Points of disagreement

(a) Descriptive level:
  • Whether IP currently strikes the optimal balance (yes / middle ground / no – a spectrum of positions).

(b) Predictive level:
  • Which business strategy will prevail: pay-per-view, subscription, or the free, advertising-based model?

(c) Level of aspiration:
  • Network neutrality: required or not as a matter of policy?
  • TPM: majority: yes; smaller group: no; intermediate group: only under certain conditions.
  • Should governments be in the business of interoperability?
  • Using government power to move towards open document formats?
  • Government intervention to achieve an open Internet vs. variations of a walled-garden net?

Emergence of Digital Institutions

ø

I had the great pleasure of participating in the International Society for New Institutional Economics’ 9th Annual Conference, which took place in my favorite European city, Barcelona. Professor Oliver Goodenough from Vermont Law School organized and chaired an interesting panel on the design and function of digital institutions. It turned out that New Institutional Economists, by and large, have not yet explored this topic in as much detail as one would expect, and Oliver’s session was in fact the only one at the conference dealing with cyber-issues at all (which is a bit surprising given the breadth and depth of the conference – check the program). Oliver did a terrific job of framing the key questions we talked about – “we” also includes Berkman Fellow John Clippinger, who presented some striking theses about cyber-institutions born out of social networks and their interactions with institutions of the offline world. My own contribution is posted here. Thanks to Oliver, John Clippinger and, last but not least, John Palfrey for making the conversation possible.
