
Artificial Intelligence, your brain, and other things you cannot trust about politics


A few days ago the Center for Research on Computation and Society organized a workshop with the provocative title “Six Reasons Fake News is the End of the World as we Know It”. I call it provocative because whether “fake news” is a new thing or not has been discussed a lot lately. Not all of us agree on what it is, or how novel it is. Some point out that it is as old as newspapers; others see it as something that mainly appeared last year. Yet others doubt that it is even a phenomenon worth discussing, and argue that instead of fake news we should talk about specific categories such as false news, misinformation, disinformation, and propaganda.

Accepting the challenge, I gave a talk with an equally provocative title, I would like to believe: “Artificial Intelligence, your brain, and other things you cannot trust about politics”. You can follow my talk in the video below, but let me give you a list of the “things” that I discussed in the talk:


I hope you find it interesting and do your own thinking about what we can trust when it comes to politics. Importantly, we need to figure out how to solve the problems of online misinformation and propaganda that seem to be all around us these days.

Or, to learn how to live with them, which is what I think will happen.

Two rumors about the downing of a Russian warplane by Turkey


News of a Turkish airplane shooting down a Russian one over the Turkish-Syrian border has dominated the news and social media lately. We investigated the rumor within hours of its appearance (24 Nov. 2015), and you can see the results of the analysis here: http://twittertrails.wellesley.edu/~trails/stories/investigate.php?id=462776628

This was not the first time a rumor of this kind had emerged. About a month and a half earlier (10 Oct. 2015) an identical rumor had appeared. We investigated that rumor too, and you can see the results of our analysis here: http://twittertrails.wellesley.edu/~trails/stories/investigate.php?id=134661966

Russian jet downing rumors

As you can see, based on the crowd’s reaction to the rumors, TwitterTrails was able to determine that the October rumor was false while the November one was true. The false rumor did not spread much and drew a lot of skeptical tweets questioning its validity. The true rumor, on the other hand, spread much more widely, and in terms of skepticism it was undisputed.

Our understanding of the way the “wisdom of the crowd” works is that, when unbiased, emotionally cool observers see a rumor that seems suspicious, they usually react in one of two ways: They either do not retweet it, reducing its spread, or they may respond questioning the validity of the rumor, resulting in higher skepticism.
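The second reaction, questioning a rumor’s validity, is something one can look for in the text of replies and retweet comments. Below is a minimal sketch of how skeptical reactions might be flagged; the cue list and the function are illustrative assumptions for this post, not the actual TwitterTrails implementation, which works differently.

```python
# Toy skepticism detector: flag tweets whose text questions a rumor.
# The cue list below is an illustrative assumption, not TwitterTrails code.

SKEPTICAL_CUES = ("not true", "fake", "hoax", "debunked", "unverified", "false")

def is_skeptical(tweet_text: str) -> bool:
    """Return True if the tweet appears to question the rumor's validity."""
    text = tweet_text.lower()
    return any(cue in text for cue in SKEPTICAL_CUES)

tweets = [
    "BREAKING: jet shot down over the border",
    "This is fake, no confirmation anywhere",
    "Unverified report, wait for official sources",
]
skeptical = [t for t in tweets if is_skeptical(t)]  # flags the last two
```

In this toy example, the crowd’s second reaction (questioning) is visible in two of the three tweets; the first reaction (simply not retweeting) shows up only indirectly, as reduced spread.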

This is something we see often in the stories we investigate on TwitterTrails.

When plotting the true and false rumors (after they have been verified through journalists’ work), the following image emerges:

It is not a 100% separation, but one can see that the false rumors (marked by red triangles) show low spread and high skepticism, while the true ones show high spread and low skepticism. The picture is of course muddled in the lower corner: a rumor that does not attract much attention does not have the opportunity to benefit from the “wisdom of the crowd”, and thus cannot be classified by our system.


Note: This posting originally appeared on our TwitterTrails blog.

False rumors do not propagate like True ones


On Twitter, claims that receive higher skepticism and lower propagation scores are more likely to be false.
On the other hand, claims that receive lower skepticism and higher propagation scores are more likely to be true.

The above is a conjecture from a recent paper of ours entitled Investigating Rumor Propagation with TwitterTrails (currently under review). Feel free to take a look if you want more details about our system, but we will describe some of its highlights here.

As you may know if you have read our TwitterTrails blog before, we are developing a Web service that, starting from a tweet, a hashtag, or a set of keywords related to a story propagating on Twitter, investigates the story and automatically answers some basic questions about it. If you are not familiar with it, you may want to take a look at some of our earlier posts; or that can wait until you have read this one.

Recently we deployed twittertrails.com, a site containing the growing collection of stories and rumors that we investigate. Its front end looks like this:


This is the “condensed view”, which allocates one line per story, 20 stories per page. There are over 120 stories collected at this point. Clicking on a title brings you to the investigation page, with lots of details and visualizations about the story’s propagation, its originator, how it burst, who supports it and who refutes it.

Note that on the right side of the condensed view we automatically compute two metrics:

  • The propagation level of a story. This is a logarithmic scale of the h-index of a tweet collection that has currently 5 levels: Extensive, High, Moderate, Low and Insignificant.
  • The skepticism level of a story. This is the ratio of tweets negating the story to tweets not negating it. It has four levels: Undisputed, Hesitant, Dubious and Extremely doubtful.

The initial quote at the top of this post refers to these metrics.
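These two metrics can be sketched in a few lines of code. The h-index of a tweet collection is computed exactly like the bibliometric h-index, over retweet counts; the bucket cutoffs and ratio thresholds below are illustrative assumptions, since the post does not give the exact values TwitterTrails uses.

```python
# Sketch of the two TwitterTrails metrics described above.
# The level names come from the post; the numeric cutoffs are assumptions.

def h_index(retweet_counts):
    """Largest h such that h tweets each have at least h retweets."""
    counts = sorted(retweet_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def propagation_level(h):
    """Map the h-index onto logarithmically spaced buckets (cutoffs assumed)."""
    if h >= 100: return "Extensive"
    if h >= 30:  return "High"
    if h >= 10:  return "Moderate"
    if h >= 3:   return "Low"
    return "Insignificant"

def skepticism_level(negating, non_negating):
    """Ratio of negating to non-negating tweets, bucketed (cutoffs assumed)."""
    ratio = negating / max(non_negating, 1)
    if ratio < 0.05: return "Undisputed"
    if ratio < 0.15: return "Hesitant"
    if ratio < 0.5:  return "Dubious"
    return "Extremely doubtful"
```

For example, a collection whose top retweet counts are [10, 8, 5, 4, 3, 2, 1] has an h-index of 4 (four tweets with at least 4 retweets each), which the sketch above buckets as “Low” propagation.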

There is also a more detailed “main view” of TwitterTrails:


In the main view there are additional tools to select stories based on time of collection, particular tags, levels of propagation and skepticism, or keywords.

A few weeks ago we gave a presentation of TwitterTrails at the Computation and Journalism 2014 symposium at Columbia University in NYC. There is a video of our presentation that you can view if interested. In the presentation we noted that false rumors have a different pattern of propagation on Twitter than true rumors. Below is a graph that shows that difference.


The graph displays propagation levels vs skepticism levels, and the data points are colored depending on whether a rumor was true (blue), false (red) or something else (green) that cannot be categorized as true or false (e.g., reference to an event or a tweet collection based on a hashtag). The vast majority of the false rumors show insignificant to low propagation while at the same time their level of skepticism ranges from dubious to extremely doubtful.

This is remarkable, but it may not be too surprising. As we write in the paper, “Intuitively, this conjecture can be explained as an example of the power of crowdsourcing. Since ancient times philosophers have argued that people will not willingly do bad unless they are guided by irrational impulses, such as anger, fear, confusion or hatred. Therefore, the more people see some false information, the more likely it is that they will either raise an objection or simply decide not to repeat it further.

We make the conjecture specific to Twitter because it may not hold for every social network. In particular, we rely on the user interface promoting an objection to the same level as the false claim. Twitter’s interface does that: both the claim and its negation get the same amount of real estate in a user’s Twitter client. This is not true for Facebook, where a claim gets much greater exposure than a comment, and a comment may be hidden quickly by follow-up comments. So, on Facebook most people may miss an objection to a claim.”
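The conjecture can be read as a simple decision rule over the two levels: low propagation combined with high skepticism suggests a false rumor, and the reverse suggests a true one. The numeric ranking and thresholds below are an assumption for illustration; TwitterTrails itself reports the levels but does not output true/false labels.

```python
# The conjecture as a toy decision rule. The rank mapping and thresholds
# are illustrative assumptions, not part of the TwitterTrails system.

PROPAGATION_RANK = {"Insignificant": 0, "Low": 1, "Moderate": 2,
                    "High": 3, "Extensive": 4}
SKEPTICISM_RANK = {"Undisputed": 0, "Hesitant": 1,
                   "Dubious": 2, "Extremely doubtful": 3}

def likely_label(propagation, skepticism):
    """Guess a label from the conjecture; None when the crowd signal is weak."""
    p = PROPAGATION_RANK[propagation]
    s = SKEPTICISM_RANK[skepticism]
    if p <= 1 and s >= 2:
        return "likely false"
    if p >= 3 and s <= 1:
        return "likely true"
    return None  # the muddled lower corner: not enough crowd signal
```

The `None` branch corresponds to the muddled lower corner of the graph: stories that attracted too little attention for the wisdom of the crowd to kick in.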

Take a look at TwitterTrails.com and tell us what you think!
We would also be happy to run an investigation for you, if interested.

(This is a copy of a blog post on the blogs.wellesley.edu site.)

