When the News Comes from Political Tweetbots

About the Author:

Eni Mustafaraj

Norma Wilentz Hess Fellow of Computer Science at Wellesley College

In July 2010, the blogger Andrew Breitbart wrote a long blog post featuring a short video excerpt from a speech by Shirley Sherrod. The post went viral, and Ms. Sherrod was forced to resign from her position at the U.S. Department of Agriculture. It was later revealed that the excerpt had been taken out of context and the accusations were false, but, alas, the damage was done. I’m recalling this story because, in a previous post on this blog, Tim Hwang raised the following issue in the context of astroturfing campaigns:

A deeper problem is one of assigning responsibility – even when revealed, one common issue is the difficulty of figuring out who exactly launched these campaigns in the first place.

However, even if we were to learn who launches astroturfing campaigns (and I’m going to give an example below), it might be too late to reverse their intended effect, if such campaigns intelligently target influential nodes in a social network and feed them information that seems genuine (for example, video and audio excerpts) on the eve of an important event (for example, election day), in the hope that it will be picked up and amplified by the 24-hour news cycle.

John Carney, a CNBC journalist, received one of the Twitter-bomb tweets described below and retweeted it, expressing his surprise, since political tweetbots were a novelty at the time.

What follows is the story of the first documented political Twitter-bomb, aimed at Martha Coakley, the attorney general of Massachusetts, who in January 2010 was running against Scott Brown for the seat left vacant by the late Senator Ted Kennedy. In the tweets we collected during the week before the election, the most frequent URL belonged to the website CoakleySaidIt.com, registered as a domain on January 15, 2010. It was a bare-bones website with three video/audio excerpts and a petition form, and it carried a copyright notice from the American Future Fund. On the same day, someone created 9 anonymous Twitter accounts within a 13-minute interval, with names such as CoakleySaidWhat, CoakleyAgainstU, CoakleyCatholic, etc. Later that day, these accounts sent 929 tweets within 137 minutes, each with a link to the website, directly targeting users who had tweeted about the Senate race earlier that day. Twitter’s spam filters worked properly and the fake accounts were banned, but the messages were retweeted 346 times, and we calculated that the network effect exposed the website link to almost 60,000 Twitter users.
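To make the arithmetic behind that reach estimate concrete, here is a minimal Python sketch of how such a number could be computed: the directly targeted accounts plus the combined followers of everyone who retweeted the link. This is not our actual analysis code; the function name, the toy follower counts, and the flat average audience per retweeter are illustrative placeholders, not real data.

```python
def estimate_reach(targeted_users, retweeters, follower_counts):
    """Estimate how many distinct accounts were exposed to the link.

    targeted_users: user ids @-mentioned directly by the bot accounts
    retweeters: user ids who retweeted one of the bot messages
    follower_counts: dict mapping user id -> number of followers
    """
    # Directly targeted accounts see the tweet in their mentions.
    direct = len(set(targeted_users))

    # Each retweet pushes the link to that user's followers. Summing
    # follower counts gives an upper bound, since audiences overlap.
    amplified = sum(follower_counts.get(user, 0) for user in set(retweeters))

    return direct, amplified


if __name__ == "__main__":
    # Toy numbers, purely for illustration.
    targeted = {f"user_{i}" for i in range(900)}   # accounts targeted by the 929 tweets
    retweeters = [f"rt_{i}" for i in range(346)]   # 346 retweets
    followers = {u: 170 for u in retweeters}       # assumed average audience per retweeter

    direct, amplified = estimate_reach(targeted, retweeters, followers)
    print(f"directly targeted: {direct}, amplified reach (upper bound): {amplified}")
```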

We don’t know for certain whether the American Future Fund launched this anti-Coakley Twitter campaign (they didn’t answer a Boston Globe request for comment), but it was their sponsored website being promoted in the tweets. This story shows how simple and cheap it is to reach a large audience while bypassing traditional media channels that might fact-check and provide context for what is being presented as proof. Experienced journalists (see the tweet in the image) can be quick to recognize what is happening, but will this always be the case? The techniques will grow more sophisticated, and several such campaigns could run at the same time.

So my question for the Berkman/MIT symposium participants is: can we imagine and develop together technology that augments the information spread through social media channels with the missing context, helping journalists and citizens tell the truth from the truthiness?