
Digital Drafting with Oliver Goodenough

August 30th, 2010 by SY

Digital Drafting with Oliver Goodenough, via Vermont Law School…

Public’s loss of privacy translates into big profits for online companies

June 7th, 2010 by zeba

Unlike most aspects of the financial crisis, the Goldman Sachs debacle is fairly easy to understand: the company put its own interests ahead of its clients’ by encouraging them to invest in a mortgage product that the bank itself had bet would fail. But while politicians, the public, and the media take aim at the shenanigans of Wall Street, a similar breach of public trust is emanating from Silicon Valley amidst a wash of money and public ignorance. The currency these companies are manipulating, though, is worth billions: your personal information.

In April, Facebook launched Open Graph, its newest social platform that aims to turn the web into a more social experience by sharing user information and preferences with other sites.  Company founder Mark Zuckerberg heralded its launch as “the most transformative thing we’ve ever done for the web.” He’s right, but not for the reasons he says.

Facebook is chasing after Google’s dominance in the advertising market, which rewarded Google with $22.9 billion last year alone.  With personal information the increasingly valuable currency of the online marketplace, technology companies are scrambling to collect and share personal data with advertisers and other third parties.   In the process, your privacy is being compromised, with little debate or realization by society.

Over the past several years, as Facebook and other companies have launched new social tools and platforms, they have followed a fairly consistent rollout model: First, they automatically opt you into sharing your personal information. Then, they give you the option to opt-out but only after your privacy has been violated. After all, once information is out on the web, it is impossible to contain. The privacy controls you think you have are thus worthless.

In an effort to mitigate potential public backlash, both Facebook and Google are attempting to convince you that privacy is dead. Last December, Google CEO Eric Schmidt declared: “If you have something you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” A month later, Zuckerberg said privacy was “no longer a social norm.” But despite what the CEOs say, the public’s desire for privacy is far from dead. A recent study conducted by the University of California, Berkeley and the University of Pennsylvania found that 88 percent of people have refused to give out information to businesses because they felt it was too personal or unnecessary.

“Facebook and Google do not bear the cost of experimenting with our personal data – we do,” said John Clippinger, Co-Director of the Law Lab at Harvard University in a recent interview. And as research is beginning to illustrate, the consequences are immediate and oftentimes severe.  Danah Boyd, a social media researcher at Microsoft and fellow at the Berkman Center for Internet and Society at Harvard, recently relayed the story of a young girl and her mother who fled from an abusive father, only to have their location revealed as a consequence of Facebook’s ever-changing privacy policies. They, like most of us, couldn’t keep up with the company’s frequent and confusing privacy policy changes (at least six revisions in the past five years).  Even the experts, such as former Google employee and current White House Deputy Chief Technology Officer Andrew McLaughlin, have been burned by the myth of ‘control’ over personal information.  When Google launched Buzz, a feature that automatically shares the names of your most-emailed contacts, McLaughlin found his White House contact list blasted across the Internet within minutes.

Those who don’t mind sharing personal information with other companies should know that the practice can affect their friends simply by association. Two MIT students demonstrated last year that they could predict with 78 percent accuracy whether a Facebook user was gay based on the percentage of his friends who were gay.
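To see how little such an inference requires, here is a toy sketch in Python of friend-based inference in the spirit of that project; the threshold, data, and function name are invented for illustration, and the actual study used a more careful statistical model.

```python
# Toy sketch: classify a user from the composition of their friends list.
def predict_from_friends(friend_labels: list[bool], threshold: float = 0.2) -> bool:
    """Guess a hidden attribute from the fraction of friends who share it."""
    fraction = sum(friend_labels) / len(friend_labels)
    return fraction > threshold

# A user who discloses nothing can still be classified via their friends.
print(predict_from_friends([True, False, False, True, False]))  # -> True
```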

Aggregating and publicizing information can of course be beneficial to society. Google Flu Trends, for example, can estimate flu activity up to two weeks before the Centers for Disease Control and Prevention by compiling search queries about the flu. But the “public by default” policy pursued by both Facebook and Google over the last few years benefits companies, not society.

Without sensible regulatory oversight and an informed public, technology giants like Facebook and Google will continue to whittle away at the institution of privacy while raking in staggering profits. Goldman Sachs and Wall Street lobbied for deregulation, so that they could pursue wild-eyed schemes with less accountability. These companies asked us to trust them.  Let’s not make the same mistake twice.

Zeba Khan is an independent social media consultant and writer.

Facebook is Betting Against its Users, by John H. Clippinger

June 4th, 2010 by zeba

Note: This piece was originally published at the Huffington Post

Facebook is at it again: Here are our new privacy settings. Trust us, we will take care of you.

After its privacy practices were roundly criticized by the New York Times, a chorus of users, Silicon Valley insiders, and privacy advocates, Facebook doubles down on its privacy bet with Open Graph, a new service that makes it possible to share your profile and friends with any website, anywhere, anytime. Forget the prior privacy fiasco of the Beacon service — boy, do we have a deal for you! By clicking the “Like” button, you can let your world — and a zillion marketers — know what you like. It is “open,” so that means it should be good, as in “open source,” what the good guys do to make software a transparent public good. Right?

Hardly. This is a company that does not have your best interest at heart, despite what CEO Mark Zuckerberg said in his recent PR-laced op-ed. Not that sharing information and connecting with online “friends” is not a good thing. But the way Facebook does it has a price: your privacy. The Faustian bargain is simple: You relinquish any effective control over your personal information, and hence your digital life and identity, and in turn, you can do all these cool things.

Privacy-shmivacy. “Who cares?” says CEO Mark of Facebook and CEO Eric of Google. Trust us. To paraphrase Google’s Eric Schmidt, if you don’t trust us, then you were probably doing something you shouldn’t. We want all the information we can get from you because that is how we make our billions. Just let go and go with the flow. Don’t be anti-social. This is social media. Be social. SHARE.

But who can blame them? It is a simple matter of THE BUSINESS MODEL. This is how they make money — selling information about you to “advertisers.” Who would want to deny Silicon Valley entrepreneurs their natural right to make as much money as fast as they can?

But there really is a problem. It is the Goldman Sachs problem, so exquisitely played out in the morality tale of the company’s fateful congressional testimony last April. You couldn’t script it any better: the French “Fabulous Fab,” and a CEO and CFO so embedded in their bubble of self-interest and self-importance that they make Marie Antoinette look like a social worker.

It is the business model, stupid. The more we know about you, and the less you, our customer, know about us, the more money we make. We “short” our customers because we are betting that they aren’t savvy enough to protect their own interests; the client, the user, is not so much a customer as a “mark.” Goldman says it has “sophisticated” customers who know the risks. Yeah: state pension fund employees and fund managers who are paid a fraction of Goldman’s salaries and work with a fraction of its analytic resources, data, and street smarts.

The same asymmetry of information exists in the social media space. According to a Pew poll, 23% of Facebook users don’t even know about privacy settings — period. What about the Power Users of social media? Well, there is no better exemplar of the informed, seasoned, savvy Power User than Andrew McLaughlin, current Deputy Chief Technology Officer of the United States, former Head of Global Policy and Government Affairs at Google, Yale Law School grad, and former Senior Fellow at Harvard’s Berkman Center for Internet and Society. (Full disclosure: that is where I reside as well.) McLaughlin got himself in deep, deep water when he exposed his trusted Gmail account to Google’s “revolutionary” service, Buzz, designed to enable many of the same great things that Open Graph promised. Out leaked his contact list to the world at large, and more specifically, to eager Republican watchdogs such as RedState.com, who gleefully raised all sorts of embarrassing questions about whether McLaughlin was using his Gmail account for White House business and whether he was inappropriately communicating with his old Google colleagues. There were cries for full disclosure, invocations of the Freedom of Information Act, and comparisons to Dick Cheney’s protected list of White House visitors. If McLaughlin had it to do over again, I doubt he would have clicked on Buzz. But there are no do-overs — what is out is out. And that is the point of privacy leakages: when damage is done, it cannot be undone. And if an Andrew McLaughlin can be tricked and trapped, there really is no hope for the rest of us.

The issue is not with social media. Social media is great and here to stay. Moreover, when it goes mobile, it will only get more powerful and more useful. But it could also easily become Orwellian through the exploitation of personal information. It could become a means of total surveillance, beside which the costs and impacts of today’s breaches would be a trifle. Think medical information, DNA, all financial and commercial transactions, what you do, where you are, and whom you talk to every minute of the day.

The problem is that information marketing companies should not act like those banks and credit card companies that make money by tricking and trapping, by obfuscation, and by betting against their “customers” under the guise of acting in their interest. This is not to say that information marketing companies should not make money off of social media and customer data. They should. Indeed, by providing the proper safeguards, checks, and balances, more money can be made off of sensitive data, because it will be trusted and therefore more readily shared and relied upon.

What is needed is a kind of Glass-Steagall Act for the collection, use, storage, and sale of personal data: an analogue of the rule that prohibited banks entrusted to safeguard commercial accounts from also trading in those accounts. Fortunately, the FTC, the White House, the FCC, the GSA, and the DoD, along with several credit service providers, telecom carriers, and others, are showing more foresight in appreciating the importance of user control and the commercial value of trust and privacy than many financial service and social media companies. But even with their efforts, the technology, the market, and the money are moving faster than they are.

Now is the time to get the rules of the road right, so that it is possible both to protect and to share valuable and sensitive information. This does NOT require government micro-regulation, but it does require SOME thoughtful regulation: principles and architectures that create the right checks and balances, and incentives that reward a race to the top rather than one to the bottom. One can look to the new White House National Strategy for Secure Online Transactions, currently under development, as the beginning of a new privacy framework, one that thoughtfully tries to resolve the paradox of having both privacy and sharing, in a way that is cognizant of current technology trends and new business models, so as to advance rather than undermine the public interest.

FTC Roundtable Explores Online Privacy

April 22nd, 2010 by zeba

Can there be security and privacy online after the fact? That was the question posed at a March 17th public roundtable on consumer privacy sponsored by the Federal Trade Commission. The roundtable brought together academics, industry experts and government officials to discuss the challenges of building a secure and authenticated layer for the Internet on top of the original open and trust-based structure.

In her opening remarks, outgoing FTC Commissioner Pamela Jones Harbour cited the recent launch of Google Buzz and Facebook’s rollout of its new privacy settings as well as the 2007 release of Facebook Beacon as examples of irresponsible conduct by technology companies with respect to consumer privacy. In those instances, consumers were automatically signed up for the rollouts or launches and had to opt-out after the fact. “Unlike a lot of tech products, consumer privacy cannot be run in beta. Once data is shared, control is lost forever,” Harbour said.

In its early days, the Internet was used to facilitate communication among a small number of researchers at universities around the country. It was a small, known, and trusted environment. In the decades since, however, its nature has changed dramatically. An architectural layer was built on top, encompassing complex commercial enterprise, social networking, and search functionality. In time, a variety of popular services arose, many of which to this day encrypt only the initial log-in, leaving all subsequent data unencrypted. Experts say this practice exposes consumers to significant risk when they connect to popular cloud-based services over public wireless networks in coffee shops, airports, and other public areas: without encryption, hackers can easily intercept user data. As new technologies and business models are continuously developed, many experts are focused on how such privacy concerns, and the privacy challenges still to come, can be met.
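The mitigation the experts describe is straightforward in principle: encrypt the whole session, not just the login. As a minimal sketch, assuming a Python/Flask web service (Flask and the settings shown are my example, not something discussed at the roundtable), a site can refuse to carry any traffic over plain HTTP and mark its session cookie so it never leaks onto an unencrypted connection:

```python
# Minimal sketch: force every request onto TLS and keep the session cookie
# off plain HTTP, so a Wi-Fi eavesdropper never sees post-login traffic.
from flask import Flask, redirect, request

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,    # cookie is only ever sent over HTTPS
    SESSION_COOKIE_HTTPONLY=True,  # cookie is not readable by page scripts
)

@app.before_request
def force_tls():
    # Redirect unencrypted requests to their HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```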

One of the most significant issues in online security is the lack of an authentication layer within the architecture of the web. The current and cumbersome system of using usernames, passwords, and shared secrets is continuously threatened by the possibility of phishing and identity theft. “Personally identifying information can be constructed from non-identifying information,” said John Clippinger, co-director of the Law Lab at Harvard University’s Berkman Center for Internet & Society. “You have to have a user-centric, interoperable system that allows people to control information about themselves and have a chain of trust that can be traced back to the individual.”
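Clippinger’s “chain of trust that can be traced back to the individual” can be illustrated with public-key signatures. In the toy sketch below (using the third-party Python cryptography package; the claim format and names are invented), the user holds a private key and signs claims about themselves, so any site can verify a claim against the user’s public key rather than trusting an intermediary:

```python
# Toy sketch of a user-centric claim: the individual signs it, anyone verifies it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

user_key = Ed25519PrivateKey.generate()        # held only by the individual
claim = b'{"name": "alice", "over_18": true}'  # hypothetical self-asserted claim
signature = user_key.sign(claim)

# A relying website checks that the claim really came from the key holder;
# verify() raises InvalidSignature if the claim was forged or altered.
user_key.public_key().verify(signature, claim)
```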

The panel encouraged the use of protocols that have already been developed, such as SSL encryption, as a first step toward tackling current privacy issues. Looking toward the future, several panelists pointed to work on new types of authentication technology designed to make privacy usable. One such technology is the information card, which allows users to sign into hundreds of websites with a single card and no usernames or passwords. The underlying technology provides a different personal identifier to each website, ensuring that no correlatable identifier is shared across those sites. New kinds of identifiers such as the I-Card will give consumers more control over their digital identities, allowing them to decide what information is shared with other parties, and how much, while protecting their privacy.
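The per-site identifier idea is easy to see in miniature. In this hedged sketch (the key handling and names are invented; real information-card systems involve much more), each site’s identifier is derived from a secret kept on the user’s device plus the site’s domain, so two sites comparing databases see unrelated identifiers for the same person:

```python
# Derive a distinct, stable identifier per website from one user-held secret.
import hashlib
import hmac

master_secret = b"kept-only-on-the-users-device"  # hypothetical per-user secret

def site_identifier(site_domain: str) -> str:
    """Deterministic per-site ID that is infeasible to correlate across sites."""
    return hmac.new(master_secret, site_domain.encode(), hashlib.sha256).hexdigest()

print(site_identifier("shop.example"))   # differs from...
print(site_identifier("forum.example"))  # ...this one, though both map to one user
```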

Another topic of discussion was the lack of a clear directive from any regulatory body to technology companies on consumer privacy protocols. Some panelists felt that technology companies are learning harmful lessons from each other’s attempts to push the envelope, encouraging copycat behavior. With the emergence of business models based on aggregating information and making it available, correct business incentives and audit mechanisms will play increasingly important roles. “There’s great wealth and opportunity and things that could happen when you use this information effectively, so you don’t want to sequester it. But at the same time, you want to have governance principles that are enforced quickly, transparently, and effectively that grow with the technology,” Clippinger added. “Otherwise, it will get co-opted.”

The event was the last of three public roundtables sponsored by the FTC to explore the privacy challenges posed by technology and business practices that collect and use consumer data.

View the webcast here.

Zeba Khan is a social media consultant and writer.

Matt Dunne on Transforming the Last Mile State

April 22nd, 2010 by zeba

Being the least connected state in the country earned Vermont the nickname of the Last Mile State. But it is precisely because of this reputation that Matt Dunne thinks Vermont can be transformed into a leader in connectivity, broadband competition, and innovation.

Dunne, a former state senator and current candidate for Vermont governor, is also Head of Community Affairs for Google and argued in a recent talk at the Berkman Center that Vermont is uniquely positioned to be a place of experimentation in high speed internet competition and the innovation that could come with it.

Citing Vermont’s mountainous and heavily forested geography, Dunne explained the difficulty Vermont has historically faced in deploying connectivity over the airwaves. As a result, most broadband and cell phone service providers have largely ignored the state. The lack of connectivity has hurt Vermont’s ability to incubate startup entrepreneurs and has prevented established Vermont companies from attracting talent. In education, the state is experiencing a decline in its student population, making high-speed internet connections critical for sharing teachers across its widely dispersed, community-based school system. Connectivity also promises improved cost efficiency for the state at a time when it is suffering its largest deficit in history. “Connectivity is no longer only about getting ahead, it is about keeping up as well,” Dunne said.

However, Vermont also has several unique advantages that make it particularly amenable to experimenting with broadband deployment. “The notion of community benefit and working in service of it is ingrained in Vermont, and that notion transfers into an expectation of private entities to do the same.” In addition, Dunne points to Vermont’s triple-A bond rating, which would give the state the capacity and autonomy to experiment in a way other states cannot. He further argued that Vermont’s population of approximately 620,000 would make for a compelling but still controlled demonstration project in which proponents of broadband competition could show that broadband can be delivered at much lower cost.

As part of his campaign platform, Dunne is proposing that the state make a long-term investment, with bonded dollars as well as guarantees backed by its triple-A bond rating, to entice the private sector to come forward with a commitment, through a municipal, nonprofit, or for-profit mechanism, to deploy broadband connectivity throughout Vermont while allowing access to the broadband infrastructure at a reasonable and appropriate cost. “If you look at broadband in the broadest concept, agnostic to device, then you have opportunities to use a backbone and delivery system to not only deliver speed of information, VoIP, and other kinds of services to the home but also to be able with low aesthetic and environmental impact to deploy it anywhere along the roads where you have that pipe.”

Dunne also encourages exploring cost-saving ideas in broadband deployment, such as stringing fiber (“glass”) along the lower part of utility poles and deploying low-frequency spectrum. Low-frequency spectrum particularly interests Dunne because Vermont uses less of the old television spectrum than any other state. Given the geographic difficulty of transmitting across the state, Vermont built few television towers and as a result now has an excess of available spectrum, which Dunne says will mean less pushback from potential critics about using it for broadband experimentation. That reduced pushback, together with the resources and sense of community needed to appreciate the importance of making connectivity open, accessible, and competitive for the long term, will be key to Vermont’s ability to leapfrog from next to nothing in connectivity to next-generation technology.

To view the webcast and additional information, go to the Berkman Center’s Event page here.

Zeba Khan is a social media consultant and writer.

The Case for Evolvable Contract Spreadsheets, by John H. Clippinger

April 18th, 2010 by SY

A good contract is a fair contract. A contract is fair when it is willfully entered into with full and informed consent. Fairness is not just a matter of how the contract is drafted, but of how it is understood. Parties need to understand how their interests are served and need to feel confident that they will be protected under most reasonable circumstances. However, since all contracts are currently expressed as static documents, it can be difficult to capture in text the dynamic, evolving relationships that are triggered by potential future events. To account for such scenarios, contracts must enumerate various contingencies and remedies, often resulting in hundreds of pages of complex, technical prose that even legal professionals find daunting. Signatories, founders and entrepreneurs in particular, are often told to “sign on the dotted line” without actually reading, much less comprehending, the full import of the contract. Even relatively simple contracts, like a standard venture capital term sheet, are difficult to understand when they account for just standard business contingencies. This can hardly be considered full and informed consent.

Contracts written as static documents are inherently limited by their inability to capture and generalize the intent of the drafters. Intent by definition cannot be literal, because it represents a principle of how to act rather than a particular action. While such a principle or rule might be described in text, it cannot be implemented as text: text is inherently descriptive, whereas rules are procedural, that is, actions triggered by conditions; in short, algorithms.
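The distinction between describing a rule and implementing one can be made concrete. Here is a toy sketch in Python (the clause, event fields, and remedy are invented, not drawn from any actual contract) of a contract term expressed as a condition-action rule rather than as prose:

```python
# A contract clause as procedure: a trigger condition plus a remedy.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Clause:
    condition: Callable[[dict], bool]  # does this event trigger the clause?
    remedy: Callable[[dict], str]      # what the contract does when triggered

late_delivery = Clause(
    condition=lambda e: e["event"] == "delivery" and e["days_late"] > 0,
    remedy=lambda e: f"buyer credited {0.5 * e['days_late']:.1f}% of price",
)

event = {"event": "delivery", "days_late": 4}
if late_delivery.condition(event):
    print(late_delivery.remedy(event))  # -> buyer credited 2.0% of price
```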

Given these inherent limitations of textual contracts, might it not be possible to algorithmically represent and protect the interests of parties to a contract without knowing in advance what events or actions might impinge on it? Instead of writing a contract, it may be more effective, efficient, and fair to program a contract to act in a way that transparently, exhaustively, and fairly preserves the interests of the parties under circumstances not previously contemplated. Clearly, there will be novel circumstances that cannot be anticipated, and the algorithm would then have to be amended; but for the vast majority of cases, there could be empirical data to support the likelihood of different scenarios and to show, in a highly transparent manner, how the contract would handle them.

With this understanding, the Law Lab developed a kind of evolvable contract spreadsheet, using the Wilson Sonsini standard venture capital term sheet as a first use case. We focused on a venture capital term sheet because it is sufficiently complex to be representative and compelling, but sufficiently simple to be tractable. The evolvable contract spreadsheet allows users to explore in graphical terms the “what-if” implications of the different provisions and clauses in a term sheet that are typically negotiated between venture capitalists and entrepreneurs. Moreover, by making it possible to explore different funding and ownership structures (what the trade calls capitalization or “cap” tables), users get a fairly clean definition of the “fitness function” for the modeling. In this case, the fitness function encodes the preferences the different parties have over cap tables (e.g., percentages for different types of stock and their value at liquidation).
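As a hedged illustration of what such a fitness function might look like (the scoring rule, field names, and numbers below are invented for the example, not taken from the Law Lab’s actual model), one can score a candidate cap table by how far it deviates from each party’s preferred stake:

```python
# Score a cap table against each party's preferred ownership percentage.
def fitness(cap_table: dict, preferences: dict) -> float:
    """Higher is better: penalize deviation from each party's target stake."""
    score = 0.0
    for party, target_pct in preferences.items():
        score -= abs(cap_table.get(party, 0.0) - target_pct)
    return score

cap_table = {"founders": 55.0, "investors": 35.0, "option_pool": 10.0}
prefs = {"founders": 60.0, "investors": 30.0, "option_pool": 10.0}
print(fitness(cap_table, prefs))  # -> -10.0 (two parties are 5 points off)
```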

But what makes the evolvable contract spreadsheet highly novel is that it uses genetic algorithms to generate alternative combinations of term sheets to meet all the fitness conditions and preferences of the different parties. This contracting model allows us to demonstrate simply the implications of the various terms for all parties under different capitalization and business scenarios, making the contract far more transparent and comprehensible. Another key advantage is that by being automated, the drafting process becomes far more independent and inexpensive than traditional contracting.
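To make the genetic-algorithm idea concrete, here is a minimal, self-contained sketch; the three term-sheet parameters, the single merged preference target, and all constants are invented, and the actual system evolves richer term sheets against each party’s stated preferences. Candidate term sheets are bred by selection, crossover, and mutation until the population converges toward high-fitness combinations:

```python
# Minimal genetic-algorithm sketch: evolve term-sheet parameter vectors.
import random

PARAMS = ["founder_pct", "investor_pct", "liquidation_multiple"]
TARGET = {"founder_pct": 55.0, "investor_pct": 35.0, "liquidation_multiple": 1.0}

def fitness(sheet):
    # Higher is better: penalize deviation from the (merged) preferences.
    return -sum(abs(sheet[p] - TARGET[p]) for p in PARAMS)

def random_sheet():
    return {"founder_pct": random.uniform(30, 80),
            "investor_pct": random.uniform(10, 60),
            "liquidation_multiple": random.uniform(1, 3)}

def crossover(a, b):
    # Child inherits each term from one parent at random.
    return {p: random.choice((a[p], b[p])) for p in PARAMS}

def mutate(sheet, rate=0.2):
    # Occasionally nudge a term by up to 10% to explore nearby sheets.
    return {p: v * random.uniform(0.9, 1.1) if random.random() < rate else v
            for p, v in sheet.items()}

population = [random_sheet() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest sheets
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
print({p: round(v, 1) for p, v in best.items()})
```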

Robustness in biology refers to the ability of an organism to survive and replicate under a variety of adverse and diverse conditions. Similarly, robust fairness in a contract refers to a contract’s ability to be both transparent and durable in protecting the original intent of the contracting parties.

Evolvable contract spreadsheets could lead to more transparent, less costly, and more durable contracting. Given these new possibilities in modeling contracts and generating alternatives, the traditional approach to term sheets and to a wide variety of contracting may no longer be necessary or desirable.

The Promise of Digital LLC, by Oliver Goodenough

April 13th, 2010 by cnolan

The Internet is creating a new class of web-based, geographically dispersed entrepreneurs. Digital communication allows work, capital, and knowledge to come together in a virtual world that can let go of the old necessities of handshakes and paperwork. Until recently, however, the legal frameworks available for structuring these businesses had not kept pace. With the advent of Vermont’s virtual business laws, and particularly the digital LLC, there are now forms that allow the legal formalities of setting up and running a business to be migrated entirely into cyberspace.

The Law Lab at Harvard’s Berkman Center for Internet and Society has been active in developing software applications that can make full use of this flexibility, while preserving the protections and stability of a recognized organizational form. We are now releasing a demonstration version of a digital LLC platform aimed at entrepreneurial start-ups with a relatively small core group of owners who want to use the ease and flexibility of digital interactions to form and manage their business. Starting a company may never be the same again.

These developments are, in a sense, overdue. In the commercial world, many kinds of transactions are safely and routinely handled via the web, from buying books to trading energy to selling the contents of our garages. Internet banking applies digital controls to transactions with a high need for security – and it all works remarkably well. Given this environment, it seems absurd to still document meetings and record company decisions with the same paper-based means that were developed in an age of steam and telegrams. But that is the basic orientation of most business organization laws. Even a web-based service like LegalZoom sends you a physical minute book as the end result of setting up a new corporation online.

Why are the traditional options for business organizations ill-suited to the needs of such web-based businesses? Some of the limitations have nothing to do with the potential of the digital world. For instance, under U.S. law, traditional partnerships may not survive the death or departure of one of their members and do not afford their members limited liability. Traditional corporations provide greater permanence and are generally better able to raise capital, but they also impose cumbersome internal governance processes that are appropriate for companies with thousands of shareholders yet ill-suit the needs of a small, entrepreneurial ownership group. The limited liability company (LLC) form offers entrepreneurs much greater flexibility, streamlined governance, and pass-through taxation. However, even with these structural advantages, an LLC in its traditional form is still tied to paper for its formalities, and so cannot fully meet the needs of a web-based, geographically dispersed team of entrepreneurs.

In 2009, Vermont led the way by passing groundbreaking legislation that for the first time offered a legal framework for virtual companies. These changes enable three critical things. The first is digital interaction with the State, a step authorized in other states as well. Moving beyond this, however, Vermont has made two key additions: allowing digital originals of bylaws, operating agreements, and other primary documents, and permitting the full use of any “sequentially structured” digital communication in formal decision making. These steps permit a company’s formal interactions to move entirely into the digital sphere. The availability of software to carry out these functions will lower the barrier to entry for entrepreneurs worldwide who might not have, or be able to afford, legal counsel when starting a corporation or LLC, and will open up the possibility of a “Cambrian Explosion” of new digital firms and start-ups.

In order to catalyze this new business environment, the Law Lab has developed Digital LLC, a web-based software platform that allows entrepreneurs to form and manage an LLC completely online. Where existing online “set up a company” sites typically just have a user fill in the initial filings with the state, Digital LLC aims to be an interactive forum for the entire life of a digital LLC. The software provides tools for negotiating the LLC structure and building the two main documents that govern the management of an LLC: the Operating Agreement and the Articles of Organization. Once the business is set up, the software provides a framework for making and recording decisions and for identifying and resolving disputes as they occur. (Please see the videos here.) Through Digital LLC, businesses can be established and run entirely through internet communication.
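One way to picture “sequentially structured” decision records is as an append-only log in which each entry commits to its predecessor, so the order and content of company decisions can be verified after the fact. The sketch below is speculative (an illustration of the concept in Python, not the Digital LLC platform’s actual data model), with invented field names:

```python
# Append-only decision log: each entry's hash commits to the previous entry,
# so the sequence of company decisions is tamper-evident.
import hashlib
import json
import time

class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, member: str, decision: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"member": member, "decision": decision,
                "time": time.time(), "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

log = DecisionLog()
log.record("alice", "Adopt operating agreement v1")
log.record("bob", "Approve opening a bank account")
```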

As the digital business sector grows and matures, the Law Lab will expand its work to focus on new ways to reduce barriers to entry, provide governance safeguards and efficiency, reduce formalities, and augment paper-free and nearly lawyer-free administration. Our aim is to help the ideas-based entrepreneurs of the 21st century. Digital companies’ flexible, non-terrestrially based nature will help make them a natural governance form around which geographically dispersed innovators can coalesce, unlocking a whole new wave of firm creation and entrepreneurial activity.

A Reflection on Leslie Zebrowitz’s talk, by Judith Donath

March 5th, 2010 by cnolan

Leslie Zebrowitz’s talk brought her work, which deals with how deep-seated tendencies to overgeneralize distort our assessments of people, to bear on the issue of justice in the courtroom.

The premise of Zebrowitz’s work on overgeneralization is that the adaptive, useful reactions we have to features such as a baby’s cute face, the disfiguration caused by disease or genetic malady, or the familiarity of our family members, also distort our impression of people whose faces happen to have those characteristics.

It is essential for the survival of the species that we perceive babies as attractive, innocent, needing and deserving our care and protection. Babies’ faces are round, with little noses and chins, big eyes, and high foreheads. Some adults, through no action of their own but simply due to the genetics of their bone structure, happen to have faces with baby-like features. Adults with “baby faces” are perceived as having the qualities of a baby: naive, guileless, and so on. Adults with mature faces (strong square chin, heavy brow, lower forehead) are perceived to be more aggressive, but also more responsible and intelligent. Although these perceived qualities have no basis in the subject’s actions or character, arising instead from how bones, hair, and skin developed, they may strongly affect the observer’s impressions. These misapprehensions, which can result in costly errors for the perceiver, the subject, or both, persist because the evolutionary value of reacting with nurturance, acceptance, and patience to an actual baby outweighs the disadvantage of sometimes misperceiving an adult’s character from the look of a face.

There are times, however, when this misapprehension can be extremely costly. One of them is in the court system. Dr. Zebrowitz, with her colleagues and students, has looked at “baby-face” perceptions and other overgeneralizations and how they affect justice in court. They found that these effects do influence court outcomes; for example, babyfaced defendants are more likely to be found innocent in cases involving intentional actions, and more likely to be found guilty in cases involving negligence. Justice is not, in fact, blind. But these cases highlight why the ideal of justice is portrayed as blind or unbiased. Seeing the participants in the case introduces errors in person perception, errors which can be serious yet subtle and hard to address.

But can we make justice blind? What, ideally, should juries and judges be able to see of the defendant, plaintiff, or witnesses? Many feel that they need to see the accused in order to assess their words and gauge how believable they are. Yet if that judgment is based on distorted perceptions of character, then perhaps justice is better served by being blind, by not seeing the participants in a case, no matter how important seeing them feels.

Would we be better off with a purely audio court? This is an intriguing idea. Studies show that people are better at distinguishing truth from deception when only listening to audio recordings than when watching a video (with audio). Yet for other assessments, such as understanding the rapport and relationship between people, visual cues are more important. The issue is further complicated because the existence of cues to deception in a particular modality does not necessarily match the observer’s use of those cues – indeed, the problem Dr. Zebrowitz’s work addresses is people’s erroneous assessments.

These are not just hypothetical questions. Virtual courts are being developed. As we design the technology for these computer-mediated justice systems, we must think about the impact of different ways of representing people, and how those representations might affect our ability to assess honesty, character, and relationships. The first impulse might be to use the most richly detailed media available; but if one channel tends to make us less accurate in our perceptions, is its absence better?

Intriguingly, we can also investigate the possibility of excising only the problematic aspects of visual appearance. It can be very useful to see a person’s expressions and their gestures. A subject who sits calmly and attentively gives off a different impression than one who fidgets constantly and seems uninterested, distracted or annoyed, which is different again from the one who glares menacingly at the jury. These gestures and expressions, unlike the features that lead to overgeneralization problems, are not an artifact of genetic inheritance, but actions generated by the person, and are useful and relevant in assessing him or her. Could we create a court technology where all the key participants were given the same neutral appearance, but where gestures and expressions were still visible?

It is possible to use the actions of a person to drive the animation of an avatar. Would this be desirable in a future virtual court? It would eliminate much of the facial overgeneralization problem while retaining the voice, which is necessary for detecting deception. The voice would usually reveal gender, and sometimes race and socioeconomic position. Such an experiment raises very interesting questions about which aspects of a person’s physique are relevant to the court. For instance, we could make all the avatars equal in size. But in a case where much depends on knowing whether the plaintiff had reason to be scared of the defendant, seeing that the plaintiff is five feet tall and frail while the defendant is a hulking six-plus feet makes a difference. Here, basic physical shape would need to be conveyed.

This opens other questions. The underlying thought experiment is as follows: in the context of a trial, if we can control which aspects of a person’s physical appearance are introduced into the proceedings, what elements should we include, and what should we omit? If a defendant is very baby-faced and innocent-looking, and is charged with, say, fraud or deceptive sales practices, is it not relevant to see what he or she looks like, in order to understand how the plaintiffs could have been taken in?

Another issue is familiarity. If someone looks like you, or like many people you know, you will believe them to be more trustworthy. This is an artifact of your perception, not a fact based on any trait of theirs. Similarly, if someone resembles an individual about whom you have strong feelings, you are likely to transfer those feelings to that person. Thus, the defendant who has the luck of looking like a juror’s beloved uncle may be warmly defended by that juror. (See also Bailenson’s work showing that making another person look more like you, by blending photographs, makes them seem more trustworthy.)

Zebrowitz notes that the effect of facial features differs between determining guilt and sentencing: attractive individuals have been found to receive lighter sentences, though attractiveness did not affect whether they were found guilty. One could address this problem without high-tech solutions by maintaining face-to-face court proceedings but having a separate judge determine the sentence. The sentencing judge would be given a written summary of the case, with all the relevant information, but would not see or hear the participants, thus working from all, and only, the information deemed legally relevant to that decision.

What, ideally, would we see of others in a given situation in order to have all the information we need about them, without the information that leads us to make erroneous assumptions? The court scenario shows vividly, and with serious consequences, the effects of our overgeneralizations and other error-prone impression-formation processes – but the question has relevance far beyond the legal system. It bears on how we perceive politicians and how we decide whom to trust. It makes us think about which aspects of a person’s appearance are germane to a discussion.

A Reflection on Jeremy Bailenson’s talk, from Judith Donath

February 10th, 2010 by cnolan

A Reflection on Jeremy Bailenson’s talk, “Transformed Social Interaction in Virtual Reality.”

In virtual worlds, people appear in the guise of avatars. These graphical representations can closely resemble the user – but they can also be radically or subtly transformed. These transformations can be apparent to all inhabitants of the virtual world, or they can be tailored to individual perspectives. With a series of ingenious experiments, Jeremy Bailenson has been studying the social and psychological effects of transforming avatar behavior and appearance.

In one experiment, Bailenson and colleagues blended the face of a viewer with that of a presidential candidate. The blend was subtle enough that the viewer did not detect it, yet the new resemblance to the candidate was effective: candidates thus transformed were perceived to be more familiar—and therefore more desirable—than candidates who were not altered.

In another, an avatar that had been programmed to maintain constant eye gaze spoke with the subject. Such persistent scrutiny is almost unheard of in the real world – we typically look at the person we’re talking to only about 40% of the time while speaking, and about 70% of the time while listening. The intense gaze discomfited the subjects but was, at the same time, persuasive.

Other experiments focused on how one’s avatar affected one’s own behavior and perceptions. Subjects with attractive avatars felt and acted friendlier than did those who saw themselves portrayed by ugly ones. Such effects occurred even when only the subject saw the transformation: people negotiated harder and more successfully when they saw their own avatar as taller than another, even though their negotiating partner did not see the transformed height.

This work raises many ethical questions and forces us to articulate what, exactly, we mean by an “honest” representation – and when we actually want it.

During an election, candidates play different roles in front of different audiences. They may appear in plaid shirts and jeans to address a group of farmers, and jackets and ties for a dinner with corporate executives. They may even shift the cadences of their speech, e.g. adding a drawl in the South. Is this mimicry dishonest, or is it a reasonable way of expressing comradeship with the audience?

Mimicry is integral to our social interactions. In face to face conversation, we subconsciously express empathy and solidarity by mimicking each others’ verbal cadences and movements. This mirroring not only reflects the empathy between the parties, it also helps form it. However, this ordinarily subconscious and socially beneficial behavior can be deliberately exploited by someone who wants to seem amicably like-minded, but who actually has ulterior, if not predatory, motives.

One of Bailenson’s experiments showed that avatars programmed to mimic the subject’s gestures were more persuasive and well-liked than avatars using naturalistic but non-mimicking gestures. Is mimicry carried out via avatar simply an extension of the same social adaptability, or is it fundamentally different?

I would argue that the automatic simulation of mimicry is fundamentally different, even from the most deliberate and calculated of face-to-face imitations. The candidate who copies the clothes and cadences of his or her potential voters, or the empathy-faking listener, must at least pay close attention to the actions of the audience and experience what it is to act like them. When the mimicry is transposed to the virtual world, the person behind the avatar experiences no such affinity. The intimacy is purely illusory.

Yet before we relegate such socially smooth avatar behaviors to the category of inherently dishonest depictions, it is worth thinking about the alternative. If we are to have embodied online interactions – and the massive popularity of avatar-based places and games indicates they will be of growing importance – the avatars need to have some level of automatic behavior. If you want your avatar to move, you don’t want to laboriously animate each step of its gait; you want it to have a walking algorithm. And, arguably, if you want your avatar to be social, you don’t want to laboriously animate each nod and gesture; you want it to have social interaction algorithms. The question becomes: where do we want to draw the line? Where does an algorithm help make the avatar experience come alive, and where do we want the active engagement of the participants to control the behavior?

Part of what makes Bailenson’s research so thought-provoking is that in reacting to the prospect of automated persuasion, we are forced to confront our beliefs and practices around simulated empathy in our everyday life. From the cashier’s cheery “have a nice day” to the waiter’s praise of our discerning menu choices, we enjoy the warmth of virtual friendliness. Much of the vast “service industry” is built on imitation camaraderie, and we complain bitterly when it is absent. For society to function, much faking is needed.

Bailenson’s experiments also touch on the illusions inherent in our relationship with our self. People who saw their own avatar as taller than others did better in negotiations – even though only they saw the height differential. People who saw their own avatar as attractive were more confident and friendly. People who saw their avatar get visibly fatter when eating were more successful dieters. These have fascinating implications, both exciting and disturbing, for our increasingly simulated lives.

Judith Donath is a Berkman Faculty Fellow and was the founding director of the Sociable Media Group at the MIT Media Lab. She is leading the Berkman Center for Internet & Society’s Law Lab Spring 2010 Speaker Series: The Psychology and Economics of Trust and Honesty. Judith’s work focuses on the social side of computing, synthesizing knowledge from fields such as graphic design, urban studies and cognitive science to build innovative interfaces for online communities and virtual identities. She is known internationally for pioneering research in social visualization, interface design, and computer mediated interaction.

A Reflection on Stephen Kosslyn’s talk, from Judith Donath

February 1st, 2010 by cnolan

A Reflection on Stephen Kosslyn’s talk “Brain Bases of Deception: Why We Probably Will Never Have a Perfect Lie Detector”

The premise of lie detection is that there is some perceivable physical sign when someone is lying. We have many beliefs about what these signs may be. For instance, we may want someone to look us in the eye while recounting a suspect tale, because we believe that direct eye contact is difficult for liars. We are confident of our ability to spot a lie, but in practice it is difficult: we’re not nearly as good as we think we are (indeed, some studies show that many people do little better than chance). Being deceived is quite harmful, so this is a big problem.

People have long sought ways to determine who is lying. In the Middle Ages, suspects were put through ordeals, such as dipping an arm in boiling water; if they did not blister, they were considered innocent. In ancient China, suspects were made to chew dry rice and spit it out; if it remained dry, they were convicted.

While today we do not look for immunity from injury as a sign of innocence, modern polygraphs work on the same principle as chewing dried rice: they measure physical responses believed to accompany lying and to be impossible to suppress. The rice test sought to detect dry mouth, a sign of nervousness; today’s polygraphs typically measure heart rate, respiration, and how sweaty one’s hands are (galvanic skin response, or GSR).

Polygraphs are widely used in the intelligence community and in private companies, and they are well embedded in the popular imagination as a truth-telling mechanism. However, they are notoriously unreliable, and many jurisdictions now forbid or limit their use as evidence.

They are unreliable because they are based on side effects of the phenomenon they seek to measure. The investigator wants to know whether the subject is telling a lie. What the polygraph measures, however, are the physiological symptoms of emotions that may accompany lying: stress, nervousness, and fear. It does not measure the lying itself—that is, the creation of a false narrative—but how one responds to the act of lying: not the thought process, but the symptoms that accompany a state of aroused feeling.

When using these measurements, false positives are a clear possibility: some people respond nervously because of the situation. And there are numerous false negatives. If someone does not feel guilt or fear about lying – at the extreme, the most pathological of liars – they will appear on a polygraph test to be truthful. In addition, there are many techniques for fooling the polygraph, such as putting a nail in your shoe and pressing on it to experience pain with each answer, including ones where you are known to be telling the truth, in order to alter your response profile.

Looking at these attempts to measure physical corollaries of deception, Stephen Kosslyn and his colleagues asked: Why look at the side effects of lying? Why not go right to the source – what is the brain doing? What can we see in brain activity that enables us to distinguish lying from truth-telling?

One of the main themes running through Professor Kosslyn’s research is the idea that processes we think of as a “single activity” are, when you look at what the brain is doing, actually composed of multiple functions. For example, we think of identifying a thing – that’s a chair, that’s a bluejay – as a singular activity. But it turns out that identifying something at a general level (there’s a bird) uses a different part of the brain than identifying it at a more specific level (there’s a robin).

Lying, too, is a complex mental process, and different types of lies use different parts of the brain. For instance, Kosslyn and colleagues looked at the difference between spontaneous lies and lies that had been rehearsed, and found that the pattern of neural activity was quite different for each. Rehearsed lies, for example, elicit more activation in the right anterior frontal cortex, a part of the brain used in recalling episodic memories. And many other features distinguish different types of lies. There are lies about yourself and lies about other people. There are lies you think are justified and lies you think are wrong. All of these would not only have a different neural activation pattern than the corresponding truth; they might also all be distinct from one another. One of Kosslyn’s main conclusions is that a reliable neural-imaging lie detector is unlikely: too many factors go into lying for a recognizable signature of it to be feasible.

Significant individual differences further complicate the situation. Although a particular brain region may be activated in most people when they tell a certain kind of lie, the pattern is not universal. Differences may arise from how individuals perceive a certain kind of lie (indeed, even in neutral tasks such as bird recognition, there are significant individual differences resulting from differing levels of expertise: for me, recognizing a bird as a robin is pretty specific, but for a serious bird watcher, robin is a general category). Differences may also result from individual variation in brain function.

Kosslyn entitled his talk “Why We Probably Will Never Have a Perfect Lie Detector.” It’s a provocative title, in light of all the research now being done on finding the neural correlates of deception.

Is our understanding of the brain simply at an early stage, such that as it advances we will indeed be able to look into someone’s head and know if they are lying? There are people who are exceptional human lie detectors, able to distinguish truth-telling from deception at a remarkably high rate without the benefit of technological tools. What do they see in another’s demeanor that reveals the lie? And if the lie can be read from its physical manifestations, should it not be equally if not more possible to read it from the brain itself?

Or is the problem with the concept of “lie” itself? Our folk understanding of lies distinguishes between social lies and harmful lies. Research such as Kosslyn’s experiments further reveals the cognitive complexity behind deception.

Finally, scientists are usually confident – if not over-confident – that the problem they are pursuing with their research will be solved. But here, with the claim that “we probably never will have a perfect lie detector,” are we hearing a frank assessment of the difficulties facing a research agenda — or a note of hope? Social lies aside, deception is destructive: it is the tool of criminals and cheaters. But deception is also tied to privacy. If I can lie, I am in control of the contents of my mind. It is how we keep secrets. The perfect lie detector is the end of privacy.

Judith Donath is a Berkman Faculty Fellow and was the founding director of the Sociable Media Group at the MIT Media Lab. She is leading the Berkman Center for Internet & Society’s Law Lab Spring 2010 Speaker Series: The Psychology and Economics of Trust and Honesty. Judith’s work focuses on the social side of computing, synthesizing knowledge from fields such as graphic design, urban studies and cognitive science to build innovative interfaces for online communities and virtual identities. She is known internationally for pioneering research in social visualization, interface design, and computer mediated interaction.