In January 1999, Scott McNealy, CEO of Sun Microsystems (now part of Oracle Corporation), announced that we should no longer be concerned with privacy, since consumers ‘have zero privacy anyway’ and should just ‘get over it.’ His argument – that in the era of information technology we have become unable to protect precisely what such technology relies on and delivers, namely information – has met the full spectrum of imaginable reactions, from outrage to enthusiastic endorsement. Many different cures have been proposed to treat at least the symptoms of the disease caused by the loss of privacy. Yet there is little disagreement concerning the diagnosis itself: privacy does not enjoy an enviable state of health. The recent emphasis on big data and their inescapable presence has only made the prognosis dimmer for the once cherished ‘right to be let alone’ – as Samuel D. Warren and Justice Louis D. Brandeis famously defined privacy back in 1890.
Such a deteriorating outlook should sound especially alarming in the fields of healthcare and medical research. In such domains, professional norms of medical confidentiality have long ensured sufficient levels of privacy protection, accountability, and trust. Yet we are told that this may no longer be the case: sensitive, personal, health-related information – just like any other type of information – now comes in electronic formats, which makes it much more accessible than before, and increasingly difficult to protect. Imagine the consequences this may have in the case of genomic data – arguably one of the most sensitive forms of personal information. Should such information fall into the wrong hands, we may face harsh consequences ranging from discrimination to stigmatization, loss of insurance, and worse. To enjoy the right to genomic privacy, one has to be able to exercise some meaningful amount of control over who gets access to her genetic data, be adequately shielded from harms of the sort just mentioned, and yet retain the possibility of deciphering what is written in her DNA for a variety of purposes – including, but not limited to, health-related ones. All this is undoubtedly demanding. All the more so now that we know that even apparently innocent and socially desirable uses, like genomic research employing anonymized DNA, are not immune to the threat of malicious re-identification.
In light of such considerations, one might be led to think that health privacy protection is a lost cause. In fact, one may go even further and argue that, all things considered, we shouldn’t worry too much about the decline of privacy. Having our sensitive data in a state of highly restricted accessibility, so the argument goes, prevents us from extracting medically valuable insight from those data and hinders medical discovery from which we may all benefit.
The trade-off between privacy and both individual and societal utility has been a matter of intense debate in bioethics in recent times, especially as far as genetic and genomic data are concerned. This kind of discussion has produced a rich catalogue of alternative models of consent in medical research – showing that, despite the grim news about the end of privacy, bioethicists and regulators have not (yet) given up on the task of adequately protecting the basic moral interests of research participants and patients.
Nevertheless, as technology pushes in the direction of ever greater transparency, we are often led to assume that people are ready to tolerate more exposure in exchange for other goods. This allegedly signals reduced privacy expectations on the part of patients, research subjects, and citizens in general. Indeed, large numbers of individuals appear to be comfortable sharing personal data, including health data, with a variety of organizations – both private and public. Whether they feel they have no other choice, or whether this is something they actually wish to do, it is worth asking what people’s privacy expectations around health data really are today. Do people still care about their privacy, even though they may openly disclose their data?
To answer this question, we studied the users of an online platform called OpenSNP. These are people who, having obtained their genome sequence from direct-to-consumer genetic testing companies (such as 23andMe, Ancestry, and Family Tree DNA), voluntarily decide to make their genetic data freely available on the Internet by uploading them to the OpenSNP platform. Importantly, in doing so they accept that there are no privacy protections whatsoever. The website makes this clear to users: it quotes McNealy’s statement that “There is zero privacy anyway, get over it,” and lists a series of possible harms associated with the use of the site. Our findings, recently published in a paper in PLOS ONE, show that OpenSNPers are animated by a variety of personal motives – e.g. wanting to learn about themselves, helping the advancement of medical science, improving the accuracy of genetic testing, or just the fun of playing around with data. When asked about their attitude towards privacy risks, most respondents showed concern for possible misuses of their data by employers (58.4%) and insurance companies (69.1%). However, only a minority of them deemed it likely that such fears would become reality (12.5% in the case of employers; 35.1% in the case of insurers).
Given the lack of IRB oversight and the absence of any access restriction, one could consider these users extreme advocates of open science. Or we may think of them as pioneers of the privacy-less world in which we will soon be living. Yet our respondents remain worried about the privacy-related harms they could suffer if employers and insurers gained access to their genomic data. While they believe that such risks are unlikely to materialize, they clearly think such harms warrant concern.
It is indeed the case that our data are more accessible than ever before and that, given the amount of personal information circulating over the Internet, we probably cannot expect or guarantee very high levels of confidentiality. Yes, we might actually have to ‘get over it.’ There might even be something to gain in this state of affairs – for both individuals and society as a whole. But we should not rush to unwarranted conclusions when we think about such crucial matters: data exposure does not entail renouncing the moral and legal importance of privacy. And even if adapting privacy protections to a new reality of widely accessible health data might appear daunting, such efforts will serve deeply felt moral interests.