
Law, Behavior, and the Brain Conference


I’m currently on my way to far-away Olympic Valley, CA, where I have the great pleasure of attending the Gruter Institute for Law and Behavioral Research Conference on Law, Behavior, and the Brain. The conference, led by Monika Gruter Cheney, brings together a terrific interdisciplinary group of roughly 40 experts in areas such as evolutionary biology, neuroscience, behavioral economics, and – yes – also a number of legal scholars. Over four days, we will be exploring topics such as “State of Play: Law, Behavioral Biology and Neuroscience,” “Rationality, Emotions and Moral Judgments in Humans and Other Species,” “Property and Economics,” and “Results in Neuroeconomics and Experimental Economics,” to list just a few sessions. I’m very much looking forward to learning from all conference contributors, including Paul Zak, Carl Bergstrom, Kevin McCabe, John Clippinger, Bruce Hay, Oliver Goodenough, Susan Bandes, Larry Frolik, Sara Beale, and Terry Maroney, among many others.

Here are the abstracts of my contributions to the conference:

1) Panel on Law & Emotions

A recent interdisciplinary conference in Switzerland was dedicated to law & emotion scholarship. In my brief presentation, I would like to answer the apparently trivial question asked by a conference participant: “Given that it is hardly surprising that even judges, prosecutors, etc. have emotions, and that emotions therefore play a role in decision-making processes with legal relevance, what is really the contribution of law & emotion research and scholarship? What’s new about it?” I will try to answer this question in a systematic way, arguing that law & emotion research has (or might have) an impact on (at least) two levels, each consisting of two elements: the analytical level with the elements “phenomenon (stipulated facts)” and “legal actors,” and the design level with “norms applicable to the facts of the case” and “norms governing the production of law.” I will use a few stories – ranging from file-sharing to the U.S. Patriot Act – to illustrate these points.

2) Presentation on Digital Institutions / Social Signaling Theory

Social signals play an important role in defining social relations and structuring societies, both in the on- and offline world. In my presentation, I will focus on the role of social signaling in the digitally networked environment. More precisely, I will explore the promises and limitations of social signaling theory as applied to cyberspace, including digital institutions. In essence, I will address three questions: First, in what online contexts do we have an interest in signal reliability and honest signaling? Second, what are regulatory strategies and approaches (using Lessig’s framework of four modes of regulation) to increase the reliability of social signals? And third, who will make the decisions about the degrees of required signal reliability in cyberspace?

3) New Insights into Property Panel

My presentation last year focused on a new generation of neuroscience-informed arguments aimed at explaining large-scale file-sharing over P2P networks. This year, my contribution to the property panel will focus not on the explanation of a presumably illegal activity, but on a socially desirable one: In my talk entitled “Social economics of collaborative creativity,” I will provide a brief overview of the literature that seeks to explain why thousands of volunteers work together in loose-knit networks to peer-produce an online encyclopedia (Wikipedia), to come up with improved versions of an open-source web browser (e.g. Mozilla), or to create shared open content platforms, to name just three examples. The presentation ends with the outline of a research agenda.

Social Signaling Theory and Cyberspace


Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and the Berkman Center’s Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory that was developed in evolutionary biology and has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book, in German.)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable. What are the mechanisms we have developed to ensure the “honesty” of signals? These questions are obviously highly relevant from an Internet governance perspective – especially (but not only) given the enormous scale of online fraud and identity theft in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in what contexts we have an interest in signal reliability and honest signaling, and where we do not. This question is somewhat counterintuitive because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations – that I, as a male player, can engage in role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in contexts as diverse as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, and where not?

My second point: Once we have defined contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches to increase reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals play an increasingly important role (e.g. the richness of virtual items produced by a player as a signal of the user’s skills, wealth, and available time).
However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit equilibrium of a dishonest signaler. A few examples to illustrate this point:

  • Technology/code/design: Increasing the punishment costs of the sender by building efficient reputation systems based on persistent digital identities; using aggregation and syndication tools to collect and “pool” experiences among many users to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals, where reliability is often easier to assess; see also here, p. 9).
  • Market-based approach: Certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increase signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g. they might signal “I can afford this” rather than the desired “I’m willing to pay for the service because I have honest intentions.”]
  • Law-based approach: Law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules such as requiring the online provider of goods to provide test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]
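The cost/benefit logic running through these examples can be made concrete with a toy model – my own illustration, not something drawn from the signaling literature: a rational sender fakes a conventional signal only when the expected benefit exceeds the production cost plus the expected punishment (detection probability times penalty). Each of the four regulatory modes can then be read as shifting one of these parameters; all numbers below are made up.

```python
def dishonest_signaling_pays(benefit, production_cost, detection_prob, penalty):
    """Toy model: a rational sender fakes a conventional signal only if
    the expected payoff of doing so is positive."""
    expected_payoff = benefit - production_cost - detection_prob * penalty
    return expected_payoff > 0

# Baseline: a fake profile is cheap to produce and policing is weak,
# so dishonest signaling pays (100 - 5 - 0.1 * 50 = 90 > 0).
assert dishonest_signaling_pays(benefit=100, production_cost=5,
                                detection_prob=0.1, penalty=50)

# Code/design (e.g. a reputation system) raises the detection probability,
# and law raises the penalty: 100 - 5 - 0.6 * 200 = -25, so faking no
# longer pays.
assert not dishonest_signaling_pays(benefit=100, production_cost=5,
                                    detection_prob=0.6, penalty=200)
```

The same inequality covers the remaining modes: markets can raise the production cost (e.g. registration fees), and social norms can raise the penalty term (reputational sanctions).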

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.
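To illustrate one of the design-based tools mentioned above – reputation systems that pool many users’ experiences under persistent identities to lower each recipient’s policing costs – here is a minimal sketch; the class name, rating scale, and identities are hypothetical, purely for illustration:

```python
from collections import defaultdict

class ReputationPool:
    """Minimal sketch of a pooled reputation system tied to persistent
    identities (a hypothetical design, for illustration only)."""

    def __init__(self):
        # identity -> list of ratings in [0, 1] contributed by other users
        self.ratings = defaultdict(list)

    def report(self, identity, rating):
        """Any user can contribute an experience with `identity`."""
        self.ratings[identity].append(rating)

    def score(self, identity):
        """Aggregated signal; identities with no history get no score."""
        r = self.ratings[identity]
        return sum(r) / len(r) if r else None

pool = ReputationPool()
for rating in (1.0, 0.5, 0.75):      # three users pool their experiences
    pool.report("seller_42", rating)

assert pool.score("seller_42") == 0.75   # one lookup replaces three checks
assert pool.score("unknown") is None     # no pooled history, no signal
```

The point of the design is that a recipient consults one aggregated score instead of verifying the counterparty directly – policing costs are shared across all raters, while the persistent identity makes dishonest behavior costly to the sender’s future transactions.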

Finally, a third point in brief: Who will make the decisions about the degrees of required signal reliability in cyberspace? Who will choose among the different reliability-enhancing mechanisms outlined above? Is it the platform designer, the Linden Labs of this world? If so, what is their legitimacy to make such design choices? Are users in power by voting with their feet – assuming that we will see the emergence of competition among different governance regimes, as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What is the role of governments, of law and regulation?

As always, comments appreciated.
