Social Experiments: People vs Machines and In-lab vs Online

Social Experiments: People vs Machines

Recently, I attended a couple of talks on conducting social experiments. I found them both very interesting, for different reasons, and thought I would give you an overview in this post.

The first talk was at MIT. The Dertouzos Lecture was established after the death of MIT’s LCS Director Michael Dertouzos who, even though he left us early, left behind a great legacy. Given Dertouzos’s strong interest in the interdisciplinary nature of Computer Science, the choice of Prof. Michael Kearns of UPenn was particularly appropriate. Here is the abstract of Michael’s talk:

“What do the theory of computation, economics and related fields have to say about the emerging phenomena of crowd sourcing and social computing? Most successful applications of crowd sourcing to date have been on problems we might consider “embarrassingly parallelizable” from a computational perspective. But the power of the social computation approach is already evident, and the road cleared for applying it to more challenging problems. In part towards this goal, for a number of years we have been conducting controlled human-subject experiments in distributed social computation in networks with only limited and local communication. These experiments cast a number of traditional computational problems — including graph coloring, consensus, independent set, market equilibria, biased voting and network formation — as games of strategic interaction in which subjects have financial incentives to collectively “compute” global solutions. I will overview and summarize the many behavioral findings from this line of experimentation, and draw broad comparisons to some of the predictions made by the theory of computation and microeconomics.”

Michael is interested in exploring how effectively people can crowdsource solutions in the lab when presented with a variety of problems, from the computationally easy to the hard. Graph coloring is a hard problem for a computer (no efficient sequential or parallel algorithm is known; coloring a graph with the minimum number of colors is NP-hard). How well would 36 undergraduate students solve instances of graph coloring? Quite well, it turns out. See the video clip.
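For contrast with the human subjects, a machine typically attacks graph coloring with heuristics. This is not code from the talk, just a minimal greedy-coloring sketch: it is fast but can use more colors than the optimum, which is exactly what makes minimum coloring hard.

```python
# Greedy graph coloring: give each node the smallest color not already
# used by its colored neighbors. Fast and simple, but only a heuristic --
# it offers no guarantee of using the minimum number of colors.

def greedy_coloring(adjacency):
    """adjacency: dict mapping each node to a list of its neighbors."""
    colors = {}
    for node in adjacency:
        used = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# A 4-cycle is 2-colorable; greedy finds a valid 2-coloring here.
square = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
coloring = greedy_coloring(square)
assert all(coloring[u] != coloring[v] for u in square for v in square[u])
```

On unlucky node orderings the same procedure can waste colors, which is why the exact version of the problem stays hard even though each greedy step is trivial.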

Finding consensus (e.g., having all nodes in a graph choose the same color) is an easy problem for both sequential and parallel algorithms. Yet, when given a time limit, humans have trouble reaching consensus, as they cannot consistently settle on a successful strategy: some change colors often, trying to accommodate their neighbors; others stick stubbornly to their color, expecting others to follow them; yet others flip-flop a lot, giving up at the wrong moment. Experience does not seem to help: playing the game over and over seems to teach them little. See this video clip of 36 undergraduates trying to reach consensus on a graph composed of highly interconnected tribes.
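To see why consensus is so easy algorithmically, consider a simple deterministic rule (my illustration, not from the experiments): every node repeatedly adopts the smallest value in its neighborhood. On a connected graph this converges to a single shared value within diameter-many rounds, with only local communication.

```python
# Minimum-value consensus: each round, every node adopts the smallest
# value among itself and its neighbors. On a connected graph all nodes
# hold the global minimum after at most diameter-many rounds.

def min_consensus(adjacency, values):
    """adjacency: node -> list of neighbors; values: node -> initial value."""
    values = dict(values)
    for _ in range(len(adjacency)):  # diameter is at most the node count
        values = {
            node: min([values[node]] + [values[n] for n in adjacency[node]])
            for node in adjacency
        }
    return values

# A 3-node path: everyone converges to the smallest initial value, 1.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
final = min_consensus(path, {"a": 3, "b": 1, "c": 2})
assert len(set(final.values())) == 1
```

The contrast with the human subjects is the point: the machine rule never wavers or second-guesses, while the students' shifting strategies kept them from converging.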

The two video clips I recorded on my iPad during his talk are only a small teaser of the work Michael Kearns presented. If you are interested, you should take a closer look at his published papers.

Social experiments in the lab vs online

The second talk was at the Berkman Center for Internet and Society. Berkman Fellow Jerome Hergueux’s talk was entitled “The Promises of Web-based Social Experiments.” He is interested in exploring how closely the results of experiments conducted online match those conducted in the lab. Here is the abstract of his talk:

“The advent of the internet provides social scientists with a fantastic tool for conducting behavioral experiments online at a very large scale and at an affordable cost. It is surprising, however, how little research has leveraged the affordances of the internet to set up such social experiments so far. In this talk, Jerome Hergueux will introduce the audience to one of the first online platforms specifically designed for conducting interactive social experiments over the internet to date. He will present the preliminary results of a randomized experiment that compares behavioral measures of social preferences obtained both in a traditional University laboratory and online, with a focus on engaging the audience in a reflection about the specificities, limitations and promises of online experimental economics as a tool for social science research.”

Jerome and his colleagues at the University of Paris tried to re-create online, as closely as possible, the lab environment that social scientists have used for a long time. They recruited subjects from the very same pool and asked some of them to participate in experiments in a lab setting, while others participated in the very same experiments online. There were no interactions between the participants, though the ones in the lab could see who else had come for the experiments. What they found was that the results of the experiments differ! In particular, the online subjects seem to be significantly more social than those in the lab: more altruistic, showing higher trust, and less risk averse. While this is still preliminary work, it seems quite promising in giving us a better understanding of the transformation we undergo when we go online. You can watch Jerome Hergueux’s full talk on the Berkman Center’s site.

We still have a lot to learn about conducting social experiments, but these two talks are definitely helping in this direction.

 
