Big Data – Midterm Review

Midterm review
Last week, we presented our goals and questions for the semester at the DPSI Midterm Review event. It was great to hear about the focus of other groups – some are grappling with very different questions than ours, while others are working on surprisingly similar things. We exchanged thoughts and ideas with other members of the community.
A puzzle to solve
To help illustrate what we are dealing with, we shared a few examples of de-identification problems and their implications:
(1) EdX and completion rates
When researchers began analyzing the completion rates of EdX courses, they noticed that the anonymized dataset produced very different statistics than the original dataset: the completion rate dropped significantly once the data was anonymized. Digging in, it became evident that many of the observations belonging to people who actually completed courses had been dropped from the anonymized dataset. The characteristics of a person who signed up for a course once and never returned are drastically different from those of a person who signed up, watched every lecture, and did every problem set. Because the latter carry so much identifying information, their observations were frequently dropped, even though these individuals were much more likely to finish a course. Analysis on the anonymized dataset was therefore useless.
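To make this concrete, here is a minimal Python sketch with invented numbers – not the actual EdX data or de-identification pipeline – of how suppressing highly identifiable records can bias a completion-rate estimate downward:

import random

random.seed(0)

# Hypothetical learners: roughly 5% are highly active, and highly active
# learners are far more likely to complete the course -- and far more
# identifiable from their interaction records.
learners = []
for _ in range(10000):
    active = random.random() < 0.05
    completed = random.random() < (0.8 if active else 0.01)
    learners.append({"active": active, "completed": completed})

def completion_rate(rows):
    return sum(r["completed"] for r in rows) / len(rows)

# Crude "de-identification": suppress the rare, identifiable records,
# roughly mimicking k-anonymity-style suppression of unusual observations.
released = [r for r in learners if not r["active"]]

print("original completion rate:", round(completion_rate(learners), 4))
print("released completion rate:", round(completion_rate(released), 4))
# The released rate collapses, because almost every completer sat in the
# suppressed, highly active group.

The numbers are made up, but the mechanism is the one described above: the records most worth protecting are exactly the ones that carry the signal.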
(2) Google Ads and user behavior
While interning at Google this summer, Olivia, a member of our Big Data group, ran into a peculiar problem. In her role as a data scientist, she was trying to understand whether people who saw Google Ads were more likely to run searches for ad-related queries. Since Olivia was an intern and was not allowed to see users' individual search information, she received a dataset in aggregated form, which summed up interactions by user. When she ran the analysis she saw some strange results – it seemed that people who saw ads were somehow less likely to perform ad-related queries. Finding the results suspicious, she raised her concerns, and her supervisor ran the analysis on the original dataset. The results were radically different and, as expected, showed that users who saw ads were much more likely to run ad-related queries. Why did this happen? Users who watched ads for a few seconds behaved very differently from users who watched ads for a few minutes, but that richness disappeared in aggregated form: you could no longer distinguish between a user who saw many ads for a second each and a user who saw a single ad for a minute. This drastically changed the results and rendered the aggregated dataset useless for this purpose.
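As a rough illustration, here is a small Python sketch with invented data (hypothetical users u1 and u2, not anything from Google) of how summing interactions per user erases exactly the distinction Olivia needed:

from collections import defaultdict

# Hypothetical impression log: u1 glances at 60 ads for ~1 second each,
# u2 watches a single ad for a full minute.
impressions = [("u1", 1)] * 60 + [("u2", 60)]

# The de-identified release sums interaction time per user.
totals = defaultdict(int)
for user, seconds in impressions:
    totals[user] += seconds

print(dict(totals))  # {'u1': 60, 'u2': 60}
# Both users look identical in aggregated form, even though their exposure
# patterns -- and likely their follow-up searches -- differ sharply. Losing
# that structure is enough to flip the measured relationship between seeing
# ads and running ad-related queries.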
 
What’s up next
We now officially have a de-identified dataset to work with, along with some documentation on how it was de-identified. The coders in the group will begin examining the data and experimenting with the code.
Our policy team continues to work on de-identification laws outside the education space (which FERPA covers). We are taking a look at HIPAA, which specifies de-identification requirements for medical information, as well as international laws (especially in privacy-conscious Europe).
