ICT Interoperability and Innovation – Berkman/St.Gallen Workshop

We have teamed up with the Berkman Center on an ambitious transatlantic research project on ICT interoperability and e-innovation. Today, we have been hosting a first meeting to discuss some of our research hypotheses and initial findings. Professor John Palfrey describes the challenge as follows:

This workshop is one in a series of such small-group conversations intended both to foster discussion and to inform our own work in this area of interoperability and its relationship to innovation in the field that we study. This is among the hardest, most complex topics that I’ve ever taken up in a serious way.

As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful.

As expected, the diverse group of 20 experts had significant disagreement on many of the key issues, especially with regard to the role that governments may play in the ICT interoperability ecosystem, which was characterized earlier today by Dr. Mira Burri Nenova, nccr trade regulation, as a complex adaptive system. In the wrap-up session, I was testing – switching from a substantive to a procedural approach – the following tentative framework (to be refined in the weeks to come) that might be helpful to policy-makers dealing with ICT interoperability issues:

  1. In what area and context do we want to achieve interoperability? At what level and to what degree? To what purpose (policy goals such as innovation) and at what costs?
  2. What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to the market forces?
  3. If we decide to pursue a market-driven approach to achieve it, are there any specific areas of concerns and problems, respectively, that we – from a public policy perspective – still might want to address (e.g. disclosure rules aimed at ensuring transparency)?
  4. If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments to support private sector attempts aimed at achieving interoperability (e.g. promotion of development of industry standards)?
  5. If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?

As always, comments are welcome. Last, but not least, thanks to Richard Staeuber and Daniel Haeusermann for their excellent preparation of this workshop.

Positive Economic Impact of Open Source Software on EU’s ICT Sector

The EU commission recently released an impressive 280+ pp. study on the economic impact of open source software on innovation and the competitiveness of the ICT sector in the EU. The report analyzes, among other things, FLOSS’ market share, and its direct and indirect economic impacts on innovation and growth. It also discusses trends and scenarios and formulates policy recommendations. Some of the findings that I find particularly interesting:

  • Almost two-thirds of FLOSS is written by individuals, while firms contribute about 15% and other institutions about 20%. The existing base of FLOSS software represents about 131,000 real person-years of effort.
  • Europe is the leading region as far as the number of globally collaborating FLOSS developers and global project leaders is concerned. Weighted by average income, India is the leading provider of FLOSS developers, followed by China.
  • The existing base of quality FLOSS applications would cost firms almost 12 billion Euros to reproduce internally. The code base has been doubling every 12-24 months. FLOSS potentially saves the industry over 36% in software R&D investment.
  • FLOSS is an important growth factor for the European economy. It encourages the creation of SMEs and jobs and is unlikely to cannibalize proprietary software jobs. The FLOSS-related share of the economy could reach 4% of European GDP by 2010.
  • Europe’s strengths regarding FLOSS are its strong community of active developers, its small firms, and its secondary software industry. In contrast, a generally low level of ICT investment and a relatively low rate of FLOSS adoption by large industry (compared to the U.S.) are among its weaknesses.
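The report’s headline numbers can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below takes the study’s figures (131,000 person-years, roughly 12 billion Euros, a 12-24 month doubling time) as inputs; the implied cost per person-year and the five-year growth factors are my own derivations for illustration, not numbers stated in the report.

```python
# Back-of-the-envelope check of the report's headline FLOSS figures.
# The derived cost per person-year is an illustration, not a figure
# taken from the study itself.

person_years = 131_000      # estimated effort embodied in existing FLOSS
replacement_cost = 12e9     # ~12 billion Euros to reproduce internally

cost_per_person_year = replacement_cost / person_years
print(f"Implied cost per person-year: ~{cost_per_person_year:,.0f} EUR")

# If the code base doubles every 12-24 months, effort grows exponentially:
for doubling_months in (12, 24):
    growth_5y = 2 ** (60 / doubling_months)  # growth factor over five years
    print(f"Doubling every {doubling_months} months -> "
          f"~{growth_5y:.1f}x code base in five years")
```

The implied figure of roughly 91,600 Euros per person-year is in the plausible range for fully loaded developer cost, which is one way to see that the report’s effort and replacement-cost estimates are mutually consistent.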

As to policy recommendations, the report suggests a focus on the correction of existing policies and practices that currently favor proprietary software. Among the recommendations: support FLOSS in pre-competitive research and standardization; encourage partnerships among large firms, SMEs and FLOSS communities; provide equitable tax treatment for FLOSS creators.

Instant Classic: Prof. Lessig’s Code v2.0

Professor Lawrence Lessig has just launched a partly peer-produced version 2 of his seminal book Code and Other Laws of Cyberspace, available online and as paperback, released under a CC Attribution-ShareAlike license. From the preface to the second edition:

“… The confidence of the Internet exceptionalists has waned. The idea–and even the desire–that the Internet would remain unregulated is gone. And thus, in accepting the invitation to update this book, I faced a difficult choice: whether to write a new book, or to update the old, to make it relevant and readable in a radically different time. I’ve done the latter. …”

Code v2.0 also includes an interesting section on the Z-theory, calling it “the missing piece in code v1.” No doubt, code v2.0 is an instant classic.

EUCD Best Practice Guide Released

We have just released our EUCD best practice guide. The report, sponsored by the Open Society Institute (OSI), provides a set of recommendations for transposing the EU Copyright Directive (EUCD) into the national copyright frameworks of accession states and candidate countries. The guide, which could also inform future law reform in existing member states and is related to stock-taking studies such as the Gowers Report (released yesterday) and the forthcoming official review of the EU copyright framework, is based on a peer-produced compilation and comparison of existing EUCD implementations across the EU.

The best practice guide takes a closer look at four clusters of legal issues typically associated with EUCD implementation. First, in a cross-sectional manner, it provides recommendations regarding the implementation of the EUCD’s anti-circumvention provisions (i.e., the legal protection of technological protection measures). Second, it suggests a series of principles in areas of copyright law that shape the ways in which we – as peers – can produce and distribute information. The third section deals with universal access issues, including teaching and research exceptions, exceptions for libraries, archives, and the like, and copyright exceptions for disabled people. Fourth, the document provides recommendations with regard to selected copyright provisions that have an impact on political and cultural participation.

Here is an overview of the recommendations we’ve made:

Anti-circumvention provisions

  • In order to avoid unintended consequences in general and spillover effects of anti-circumvention legislation in particular, (a) define the subject matter and scope of TPM as narrowly as possible; (b) choose a liberal approach to exceptions and limitations and make sure that beneficiaries of exceptions can enjoy them; and (c) take a minimalist approach to sanctions and remedies for the violation of anti-circumvention provisions.
  • Provide a definition of the circumstances (“minimum threshold”) under which TPM are considered to be “effective”.
  • To the extent possible, limit the scope of prohibited circumvention-relevant conduct to situations where circumventions would lead to actual infringement of copyright.
  • Immediately establish a mechanism for the enforcement of copyright exceptions vis-à-vis TPM and in the absence of voluntary measures by right-holders. Provide for an easily accessible and effective enforcement mechanism.
  • Incorporate a private copying right vis-à-vis TPM, analogous to traditional private copying exceptions, in order to foster access to information, knowledge, and entertainment.
  • Use discretion with regard to sanctions and penalties and adhere to the principle of proportionality. Consider limitations on criminal and civil liability for non-profit organizations such as libraries, archives, etc., flexible sanctions for innocent infringers, and limitations on sanctions for legitimate purposes such as research and teaching.

Peer collaboration & distribution

  • Provide for a broad private copying exception that is applicable to both analog/offline and digital/online works.
  • Use discretion with regard to sanctions and penalties imposed on illegal file-sharing (uploading) and adhere to the principle of proportionality. Consider limitations on criminal and civil liability for small-scale infringements.
  • Provide for a private copying exception that encompasses the act of downloading copyrighted material from the Internet, including from P2P file-sharing networks, regardless of the lawfulness of the master copy or the distribution platform.

Universal Access

  • Provide a broad teaching exception that not only covers materials for face-to-face use in the classroom of educational facilities, but also the use of works at home for studying purposes. The preparation and post-processing of courses at educational institutions should be included as well.
  • Implementations should not (further) limit the scope of the teaching exception as stipulated in the EUCD. Instead, provide for open definitions of the limitations on exempted uses for teaching purposes.
  • Transpose the quotation exception by allowing quotations in multimedia works with an educational purpose or within instructions and textbooks for educational use.
  • Provide for an exception that allows publicly accessible libraries and archives as well as documentation centers to make copies of entire works for specific purposes, without respect to whether these institutions are part of an educational or scientific institution or of a museum.
  • Explicitly allow the reproduction of works on any medium in both digital and analog format.
  • Allow the sharing of out-of-print copies among beneficiaries if certain requirements are met (out-of-print clause).
  • Explicitly regulate the question of traditional as well as advanced forms of electronic document delivery by privileged institutions such as public libraries in the national copyright act.
  • Permit electronic forms of delivery (e.g. in graphic file format) of individual copies of articles in periodicals and parts of published works to patrons for private study and research for non-commercial purposes, regardless of whether the relevant material is available via an on-demand service or not.
  • Provide for a broad disability exception to both the rights to reproduction and communication to the public that might mention, but is not limited to, certain types of disabilities such as visual or hearing impairment.
  • Consider an exception or limitation for people with disabilities without requiring fair compensation.

Political & Cultural Participation

  • Provide for a current-event exception and prescribe the conditions under which the freedom of expression right trumps the author’s exclusive rights. Do not restrict the scope of the exception to traditional media, such as newspapers, television or radio.
  • The quotation right should allow diverse forms of quotations. It should encompass multimedia quotes as well as texts.
  • Allow private persons to disseminate public and political speeches over the internet.
  • Explicitly allow creative forms of political and cultural criticism. Use caricature, parody or pastiche as exemplary forms, but do not restrict the exception to these forms.

Download the full report for a detailed discussion, references to member state implementations, and case law examples.

Gowers Review of IP Released; EUCD Best Practice Guide Coming Soon

The long-awaited final report of the Gowers Review of the U.K.’s IP framework was released earlier today and is available online here. (Summary here.) Personally, I’m particularly interested in Gowers’ recommendations 8-13, which address copyright issues, and I’m thrilled to see significant similarities between the findings in the Gowers report and our own recommendations in the forthcoming EU Copyright Directive Best Practice Guide (to be released within the next couple of days; draft version here.) Stay tuned.

Caron’s Long Tail of Legal Scholarship

With my usual delay, I just read Paul Caron’s nice essay The Long Tail of Legal Scholarship that was recently posted on SSRN. Caron contrasts the findings of Tom Smith’s ongoing research project on citations of scholarly works in law review articles with his own analysis of SSRN downloads.

Smith’s citation analysis characterizes legal scholarship, in contrast to what long tail theory would suggest, as a hit-driven market (the top 0.5% of articles get 18% of all citations, the top 17% get 79% of all citations, and 40% of articles never get cited at all.)

Caron, in contrast, argues that the picture changes if one looks at consumption rather than the end use of legal scholarship. Using download counts from SSRN as an alternative measure, Caron demonstrates that the tail is getting much longer and is consistent with the long tail thesis: “… 97% of authors have had at least one download in the past year and 100% have had at least one download at some time.”
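The contrast between the two measures is easy to make concrete. The short sketch below computes the share of a total captured by the top slice of a ranked distribution; the citation counts here are synthetic and Zipf-like, invented purely to show what a hit-driven distribution looks like – the actual figures are Smith’s and Caron’s, not reproduced here.

```python
def top_share(counts, top_fraction):
    """Share of the total captured by the top `top_fraction` of items."""
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# Synthetic, Zipf-like "citation counts" for 10,000 articles
# (illustrative only; not Smith's actual data).
counts = [int(1000 / rank ** 1.2) for rank in range(1, 10_001)]

print(f"Top 0.5% share of citations: {top_share(counts, 0.005):.0%}")
print(f"Top 17% share of citations:  {top_share(counts, 0.17):.0%}")
```

Running the same `top_share` function over download counts instead of citation counts is, in effect, what Caron’s comparison does: the more evenly the counts are spread, the smaller the share captured by the head and the longer the tail.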

See also this post and chart.

How can Public Policy Encourage Innovation and Entrepreneurship?

The Rueschlikon Conference on Information Policy, chaired by Professor Viktor Mayer-Schoenberger, just released its latest conference report on Innovative Entrepreneurship and Public Policy. The report, authored by Kenneth Cukier, includes recommendations for what public policy can do to encourage innovation and entrepreneurship. The executive summary suggests five recommendations.

  • Entrepreneur: The Individual – Innovation starts with a “random walk” in “design space,” where ideas can be incubated and challenged. Investing in education is crucial, as is softening the consequences of failure.
  • Social Networks: The Group – The relationships among people, firms and nations help determine the degree of diversity they are exposed to, which influences inventiveness. Supporting the interactions across groups is essential.
  • Organizing R&D: Universities and Firms – A networked model based on connections, collaboration, flat hierarchies, modularity and constant “re-writing” is required. This enables groups to respond successfully to discontinuities.
  • Creating Clusters: Geographic Areas – Places where finance, technical talent, and legal, accounting and marketing support intermingle aid the innovation process. Yet clusters should ideally be technology-neutral, not reliant on one technical domain.
  • Public Policy: The Role of Government (Municipal, Regional, National) – Reengineering society for a networked economy requires resources, patience and ceding control. International cooperation with new stakeholders is imperative.

The full report with the title Hero with a Thousand Faces is available online.

FTC Hearing: DRM Interoperability

This morning, I had the pleasure and honour to speak – “as the European voice” – at the FTC hearing on “Protecting Consumers in the Next Tech-ade” (check out the official weblog for more information and summaries of the discussion.) I was asked to report about the legal and regulatory discussions on DRM in Europe and to focus on DRM interoperability in particular. The latter question is also part of an ongoing research collaboration between the Berkman Center at Harvard Law School and our St. Gallen Research Center for Information Law. The research project is aimed at exploring the interaction between interoperability and e-innovation, an important aspect that was only briefly mentioned at today’s hearing.
Here is the longer and slightly modified (links added) written version of my statement. For a more detailed discussion, check out the excellent paper “DRM Interoperability and Intellectual Property Policy in Europe” by Mikko Valimaki and Ville Oksanen.

Over the past few years, much of the legal/regulatory debate in Europe about DRM has focused on the legal protection of technological protection measures and its ramifications for the digital ecosystem, because EU member states have faced the challenge of transposing the rather vague EU Copyright Directive into their national laws and complying with the relevant anti-circumvention provisions of the WIPO Internet Treaties.

Introducing and harmonizing anti-circumvention laws across Europe has been a long and enormously controversial process. As far as DRM is concerned, three topics in particular have caused heated controversies:

  • DRM and its legal protection vis-à-vis traditional limitations on copyright such as the “right” (or privilege) to make copies for private purpose;
  • DRM and “fair compensation”;
  • DRM and interoperability.

Given our panel’s topic, please let me address the interoperability issue in greater detail – a topic that has gained much attention in the context of iTunes’ penetration of the European market, especially in France.

At the European level, no coherent DRM interoperability framework exists, although DRM interoperability has been identified as an emerging issue by the European Commission, which has established – among other things – a multi-stakeholder High Level Group on DRM that has also addressed DRM interoperability issues.

The lack of specific and EU-wide DRM interoperability provisions leaves us with three areas of law that address this issue more generally, both at the EU level as well as the level of EU member states. The areas are: copyright law, competition law, and consumer protection law.

Copyright Law

The EU Copyright Directive, mandating the legal protection of DRM systems, does not set forth rules on DRM interoperability. Recital 54 only mentions that DRM interoperability is something member states should encourage, but does not provide further guidance and seems to trust market forces. However, one might argue that the anti-circumvention framework itself allows the design of interoperable systems – e.g. a music player able to play songs encoded in different DRM standards – because it only outlaws trafficking in circumvention devices that are (inter alia) primarily designed and marketed for the circumvention of effective TPM. Along these lines, at least one Italian court has ruled – in one of the Bolzano rulings – that the use of modified chips aimed at restoring the full functionality of a Sony PlayStation (incl. its ability to read all discs from all markets despite region coding) is not illegal under the EUCD’s anti-circumvention provisions.

At the EU member state level, France has taken a much more proactive approach to DRM interoperability. A draft of the revised copyright law (implementing the EUCD) introduced an obligation for DRM providers to disclose interoperability information upon request, without compensation. This “lex iTunes” triggered strong reactions from the entertainment industry, and the final version of the law softened the original proposal. Current French law states that a regulatory authority mediates interoperability requests on a case-by-case basis. Under this regime, too, DRM providers can be forced (under certain conditions) to disclose interoperability information on non-discriminatory terms, but they now have the right to reasonable compensation in return.

Competition Law

The baseline is: Competition law in Europe may become relevant in cases where a company with a dominant market position refuses to license its DRM standard to its competitors. However, to date, there is no case law at the EU level where competition law has been applied to the DRM interoperability problem. But there are important cases (IMS Health and Magill, but also the antitrust actions against Microsoft) illustrating how competition law – at least in exceptional circumstances – can give the need for interoperability more weight than the IP claims of dominant players. In France, Virgin Media tried to use competition law as an instrument to enforce access to iTunes’ FairPlay system. The French competition authority, however, ruled in favour of iTunes, partly because it considered the market for portable music players to be sufficiently competitive (click here for more details).

Consumer Protection

From a consumer protection law perspective, three issues seem particularly noteworthy. First, the Norwegian Consumer Ombudsman has been very critical of Apple’s iTMS interoperability policy in response to a complaint by the consumer council. The Ombudsman argues that iTMS is using DRM and the corresponding terms of service to lock consumers into Apple’s proprietary system.

Second, a French court fined EMI Music France for selling CDs with DRM protection schemes that would not play on car radios and computers (check here and here). EMI violated consumer protection law because it did not appropriately inform consumers about these restrictions. The court obliged EMI to label its CDs with the text: “Attention – cannot be played on all players or car radios”.

Third, a recent proposal by the European Consumers’ Organisation suggests including DRM in the unfair contract terms directive. The idea is that consumer protection authorities should also be able to intervene against unfair consumer contract terms when the terms are “code-based” rather than “law-based”.

E-Compliance: Managing Risks at the Intersection of Law and ICT

Earlier today, I attended a conference organized by Oliver Arter and Florian S. Joerg on Internet and e-Commerce Law in Zurich. I was invited to speak about e-Compliance in general and the implications of e-Business on compliance and corporate organization in particular. E-Compliance can be understood as a set of institutional arrangements and processes aimed at managing the legal and regulatory risks resulting from the transition from an offline/analog to an online/digital corporate information environment. My colleague Daniel Haeusermann and I have come up with the following theses – intended as discussion points and “food for thought.”

The main thesis is that e-Compliance, in important regards, is qualitatively distinct from traditional compliance. We argue that four trends support this key thesis.

Law and digital technology are closely intertwined. These compliance-relevant interactions are bi-directional. Digital technology leads to legal problems that have not emerged in the paper world. Consider, for example, the use of email in a corporation as a partial replacement for oral communications and the set of legal problems associated with email usage and storage (ranging from data privacy/monitoring issues to e-Discovery exposure.) However, digital technology can also help to ensure a company’s compliance with the law. Software that can be used to enforce a “litigation hold” might be a good example in this context. At the organizational level, the suggested interplay between law and technology calls for close collaboration between lawyers and IT staff.

E-Compliance is risk management in a quicksilver environment and under conditions of legal uncertainty. The speed of ICT innovation has put the legal system under enormous pressure. The legal system’s answer, essentially, is either the application of existing rules (“old law”) to the new phenomena, or legal innovation (e.g. by formulating new rules or introducing new doctrines.) Typically, both processes create uncertainty, because the legal system is forced to synchronize its relatively slow adaptation processes with the speed of technological change. A nice illustration of this uncertainty-creating pace of change is the set of legal regimes that govern online intermediaries such as access providers, search engines, and hosting providers. Up to the year 2000, legislators around the world enacted laws (such as the CDA or the E-Commerce Directive) to limit the liability of online intermediaries, or to “immunize” them entirely. Only a few years later, we now face a global trend towards stronger regulation of online intermediaries, including a reconsideration of the respective liability regimes. From an organizational perspective, this increased speed of change requires that companies in the IT business (this includes, e.g., banks) establish “early warning systems” – for example in collaboration with academic partners – aimed at tracking trends and developments at the intersection of law, ICT, and markets.

Digitization in tandem with the emergence of electronic communication networks has internationalized (old and new) legal problems in an unprecedented way. The first driver of the internationalization of e-Compliance is straightforward: it’s the global medium “Internet” itself. The second source is related to the first one, but less obvious: In our view, the digitally networked environment creates a notion of proximity that leads to an increased relevance of foreign national law for corporations incorporated and/or operating in other jurisdictions. Good examples are cross-border e-Discovery cases, where U.S. plaintiffs seek to use American procedural and evidence law to access information stored in other jurisdictions, e.g. in Europe, usually without following the procedures set forth in the respective international treaties such as the Hague Convention on Evidence. It follows from this trend that successful e-Compliance must take a global perspective. In the case of multinational enterprises this requires, for instance, that the legal and compliance departments of the entities located in different countries collaborate closely on e-Compliance issues.

The rapid evolution of digital technologies on the one hand and the increased legal uncertainty with regard to the interpretation of old and new laws on the other hand further increase the relevance of industry self-regulation, for instance in the form of codes of conduct or best practice models. Again, the regulation of online intermediaries illustrates this trend. In Germany, for example, content regulation of online intermediaries such as search engines is largely based upon a self-regulatory approach. In light of this development, sustainable e-Compliance increasingly includes involvement in standard-setting bodies and industry best practice groups – both as an expression of “good corporate citizenship” and based on the acknowledgment that “soft law”, in turn, can improve a company’s e-Compliance with the increasingly complex network of legal, quasi-legal, pre-legal and ethical obligations.

Rational Choice Theory vs. Heuristics Approach: What are the Consequences for Disclosure Laws?

I just finished reading Heuristics and the Law (2006) edited by Gerd Gigerenzer and Christoph Engel. It’s an interesting collection of essays by legal scholars, psychologists, and economists, exploring the conceptual and practical power of the heuristics approach in law. Given my own research interest, I was particularly intrigued by Chris Guthrie’s contribution with the title “Law, Information, and Choice: Capitalizing on Heuristic Habits of Thought.”

In the article, Chris starts with the observation that American law has a long-standing tradition of fostering individual autonomy and choice by mandating the disclosure of information. Disclosure rules can be found in many areas of law, ranging from corporate law and product liability to gaming laws, etc. The underlying rational choice approach, according to Guthrie’s analysis, assumes that individuals will use all available information to “identify and evaluate all available options, assess and weight all of the salient attributes of each option, and then select the option they evaluate most favorably.” (id., p. 427). Guthrie contrasts these assumptions with (empirical) insights from heuristic-based approaches (the “fast and frugal heuristics program” and the “heuristics-and-biases program”), which suggest that individuals often make sound decisions by using limited information, and concludes that lawmakers should heed the lessons of these theories in order to foster autonomy and choice. More specifically, the author argues that lawmakers should not aim for full disclosure of information as rational choice theory would recommend, but should require limited disclosure by identifying the specific pieces of information to be disclosed, requiring the information to be presented in a manner designed to attract users’ attention and inform their understanding, and by imposing limitations on the amount of disclosed information.
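The contrast between the two decision models can be sketched in code. The example below compares a full weighted-additive rule (the rational-choice ideal of weighing every attribute of every option) with a Gigerenzer-style “take-the-best” heuristic, which consults cues one at a time in order of validity and stops as soon as one option remains. All option names, cue values, and weights are invented for illustration.

```python
def weighted_additive(options, weights):
    """Rational-choice ideal: score every option on every cue."""
    def score(cues):
        return sum(w * c for w, c in zip(weights, cues))
    return max(options, key=lambda name: score(options[name]))

def take_the_best(options, cue_order):
    """Fast-and-frugal heuristic: consult cues in order of validity,
    keep only the options that win on each cue, stop when one remains."""
    candidates = dict(options)
    for cue in cue_order:
        best = max(cues[cue] for cues in candidates.values())
        candidates = {n: c for n, c in candidates.items() if c[cue] == best}
        if len(candidates) == 1:
            break
    return next(iter(candidates))

# Hypothetical choice among three products; binary cues (1 = has the feature).
options = {
    "A": (1, 0, 1),   # cues: (safety record, price advantage, warranty)
    "B": (1, 1, 0),
    "C": (0, 1, 1),
}
weights = (0.6, 0.3, 0.1)   # hypothetical cue validities, best first

print(weighted_additive(options, weights))          # full computation
print(take_the_best(options, cue_order=(0, 1, 2)))  # stops after two cues
```

In this toy case both rules select option B, but take-the-best inspects only two of the three cues – the pattern from the empirical literature Guthrie draws on: limited information can yield the same decision as exhaustive evaluation.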

Clearly, Chris Guthrie’s empirical arguments support some of the observations I have made – inspired by my Doktorvater Prof. Dr. Jean Nicolas Druey – in the context of my information quality research on the one hand and earlier discussions of the information overload problem on the other hand. However, I’m not sure whether I agree with all of Guthrie’s conclusions, particularly once we move from an analog/offline to a digitally networked environment. The author himself acknowledges in the final paragraph – but leaves unanswered – the problem that information phenomena are highly context-specific and that information processing has an inherent subjective component to it. These characteristics have been identified as among the key challenges faced by a legal system aimed at regulating information (i.e. what we call information law on this side of the Atlantic), which by its own nature seeks to make general and abstract statements (“norms”) about informational phenomena. Against this backdrop, it is questionable to what extent the “content-presentation-amount” program suggested by Guthrie can balance this fundamental tension between the characteristics of informational phenomena and information law.

Besides the context-dependency and individuality of information processes, there are two further reasons why I’m not convinced that the author’s basic observation – according to which individuals make decisions based on limited information (although full information, in theory, would be available) – justifies the normative conclusion that lawmakers should limit the amount of information to be disclosed when drafting laws.

  • First, there are people who follow – at least in certain situations and sometimes because it’s required by professional ethics or even by duty of care standards – the “text-book”-style decision-making procedure as envisioned by the rational choice theory. These individuals, I would argue, might be worse off under a regime that is based on the approach that “less information is more information”. In short, I would argue that less information is not always more information.
  • Second, the aggregation of large amounts of information becomes much more efficient and effective once we operate in a digitally networked environment. Indeed, some of the “mathematical” work that is involved in preparing decisions under rational choice theory is increasingly done by services – ranging from peer-based recommendation systems (“wisdom of the crowds”) to more hierarchical expert systems – that are in the business of collecting and comparing information for their users. Here, I would argue that the quality of the information on which decisions can be based is likely to decrease if lawmakers impose qualitative and quantitative limits on the disclosure of information by the respective senders.

In any event, Chris Guthrie’s arguments deserve close attention and further consideration, although one might disagree with some of the conclusions.
