Compulsory Genetic Testing for Refugees: No Thanks

By Gali Katznelson


DNA tests are not perfect and they can be vulnerable to manipulation. The UNHCR says genetic testing is an invasion of privacy. (Photo by andjic/Thinkstock)

Recent reports claim that Attorney General Jeff Sessions is considering using genetic testing to determine whether children who enter the country with adults are actually related to them.

The Daily Caller reported that Sessions suggested in a radio interview that the government might undertake genetic testing of refugees and migrants in an effort to prevent fraud and human trafficking.

This proposal is problematic not only because DNA testing is unreliable and vulnerable to hacking, but also because it is an invasion of privacy and flies in the face of guidelines from the United Nations’ refugee agency.

Continue reading

Prescription Monitoring Programs: HIPAA, Cybersecurity and Privacy

By Stephen P. Wood

Privacy, especially as it relates to healthcare and protecting sensitive medical information, is an important issue. The Health Insurance Portability and Accountability Act, better known as HIPAA, is a federal law that helps to safeguard personal medical information. This protection is afforded to individuals by the Privacy Rule, which dictates who can access an individual’s medical records, and the Security Rule, which ensures that electronic medical records are protected.

Access to someone’s healthcare records by a medical provider typically requires a direct health care-related relationship with the patient in question. For example, if you have a regular doctor, that doctor can access your medical records. Similarly, if you call your doctor’s office off-hours, the covering doctor, who may have no prior relationship with you, may similarly access these records. The same holds true if you go to the emergency department or see a specialist. No provider, however, should access protected information without a medical need.

Continue reading

DNA Donors Must Demand Stronger Privacy Protection

By Mason Marks and Tiffany Li

An earlier version of this article was published in STAT.

The National Institutes of Health wants your DNA, and the DNA of one million other Americans, for an ambitious project called All of Us. Its goal — to “uncover paths toward delivering precision medicine” — is a good one. But until it can safeguard participants’ sensitive genetic information, you should decline the invitation to join unless you fully understand and accept the risks.

DNA databases like All of Us could provide valuable medical breakthroughs such as identifying new disease risk factors and potential drug targets. But these benefits could come with a high price: increased risk to individuals’ genetic data privacy, something that current U.S. laws do not adequately protect. Continue reading

Facebook Should ‘First Do No Harm’ When Collecting Health Data

By Mason Marks

Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. I will explain how mining for EMD is analogous to the process of medical diagnosis performed by physicians, and how companies that engage in this activity may be practicing medicine without a license.

Last week, Facebook CEO Mark Zuckerberg testified before Congress about his company’s data collection practices. Many lawmakers who questioned him understood that Facebook collects consumer data and uses it to drive targeted ads. However, few Members of Congress seemed to understand that the value of data often lies not in the information itself, but in the inferences that can be drawn from it. There are numerous examples that illustrate how health information is inferred from the behavior of social media users: Last year Facebook announced its reliance on artificial intelligence to predict which users are at high risk for suicide; a leaked document revealed that Facebook identified teens feeling “anxious” and “hopeless;” and data scientists used Facebook messages and “likes” to predict whether users had substance use disorders. In 2016, researchers analyzed Instagram posts to predict whether users were depressed. In each of these examples, user data was analyzed to sort people into health-related categories.

Continue reading

The Opioid Crisis Requires Evidence-Based Solutions, Part I: How the President’s Commission on Combating Drug Addiction Misinterpreted Scientific Studies

By Mason Marks

The opioid crisis kills at least 91 Americans each day and has far-reaching social and economic consequences for us all. As lawmakers explore solutions to the problem, they should ensure that new regulations are based on scientific evidence and reason rather than emotion or political ideology. Though emotions should motivate the creation of policies and legislation, solutions to the opioid epidemic should be grounded in empirical observation rather than feelings of anger, fear, or disgust. Legislators must be unafraid to explore bold solutions to the crisis, and some measured risks should be taken. In this three-part series on evidence-backed solutions to the opioid crisis, I discuss proposals under consideration by the Trump Administration including recent recommendations of the President’s Commission on Combating Drug Addiction and the Opioid Crisis. Though the Commission made some justifiable proposals, it misinterpreted the conclusions of scientific studies and failed to consider evidence-based solutions used in other countries. This first part of the series focuses on the misinterpretation of scientific data.

Last year more than 64,000 Americans died of drug overdose, which is “now the leading cause of death” in people under 50. Opioids are responsible for most of these deaths. By comparison, the National Safety Council estimates about 40,000 Americans died in auto crashes last year, and the Centers for Disease Control reports that 38,000 people were killed by firearms. Unlike deaths due to cars and firearms, which have remained relatively stable over the past few years, opioid deaths have spiked abruptly. Between 2002 and 2015, U.S. opioid-related deaths nearly tripled (from about 12,000 deaths in 2002 to over 33,000 in 2015). Last year, synthetic opioids such as fentanyl contributed to over 20,000 deaths and accounted for the sharpest increase in opioid fatalities (See blue line in Fig. 1 below). Continue reading

The CVS/Aetna Deal: The Promise in Data Integration

By Wendy Netter Epstein

Earlier this month, CVS announced plans to buy Aetna—one of the nation’s largest health insurers—in a $69 billion deal.  Aetna and CVS pitched the deal to the public largely on the promise of controlling costs and improving efficiency in their operations, which they say will inure to the benefit of consumers.  The media coverage since the announcement has largely focused on these claims, and in particular, on the question of whether this vertical integration will ultimately lower health care costs for consumers—or increase them.  There are both skeptics and optimists.  A lot will turn on the effects of integrating Aetna’s insurance with CVS’s pharmacy benefit manager services.

But CVS and Aetna also flag another potential benefit that has garnered less media attention—the promise in combining their data.  CVS CEO Larry Merlo says that “[b]y integrating data across [their] enterprise assets and through the use of predictive analytics,” consumers (and patients) will be better off.  This claim merits more attention.  There are three key ways that Merlo might be right. Continue reading

Emergent Medical Data

By Mason Marks

In this brief essay, I describe a new type of medical information that is not protected by existing privacy laws. I call it Emergent Medical Data (EMD) because at first glance, it has no relationship to your health. Companies can derive EMD from your seemingly benign Facebook posts, a list of videos you watched on YouTube, a credit card purchase, or the contents of your e-mail. A person reading the raw data would be unaware that it conveys any health information. Machine learning algorithms must first massage the data before its health-related properties emerge.

Unlike medical information obtained by healthcare providers, which is protected by the Health Insurance Portability and Accountability Act (HIPAA), EMD receives little to no legal protection. A common rationale for maintaining health data privacy is that it promotes full transparency between patients and physicians. HIPAA assures patients that the sensitive conversations they have with their doctors will remain confidential. The penalties for breaching confidentiality can be steep. In 2016, the Department of Health and Human Services recorded over $20 million in fines resulting from HIPAA violations. When companies mine for EMD, they are not bound by HIPAA or subject to these penalties.

Continue reading

Privacy and Confidentiality: Bill of Health at Five Years and Beyond

In honor of the occasion of the Fifth Anniversary of Bill of Health, this post reflects on the past five years of what’s generally known as “privacy” with respect to health information.  The topic is really a giant topic area, covering a vast array of questions about the security and confidentiality of health information, the collection and use of health information for public health and research, commercialization and monetization of information, whether and why we care about health privacy, and much more.  Interestingly, Bill of Health has no categorizations for core concepts in this area:  privacy, confidentiality, security, health data, HIPAA, health information technology—the closest is a symposium on the re-identification of information, held in 2013.  Yet arguably these issues may have a significant impact on patients’ willingness to access care, risks they may face from data theft or misuse, assessment of the quality of care they receive, and the ability of public health to detect emergencies.

Over the past five years, Bill of Health has kept up a steady stream of commentary on privacy and privacy-related topics.  Here, I note just a few of the highlights (with apologies to those I might have missed—there were a lot!).  There have been important symposia:  a 2016 set of critical commentaries on the proposed revisions of the Common Rule governing research ethics and a 2013 symposium on re-identification attacks.  There have been reports on the privacy implications of recent or proposed legislation: the 21st Century Cures Act, the 2015 proposal for a Consumer Privacy Bill of Rights, and the proposed Workplace Wellness Bill’s implications for genetic information privacy.  Many comments have addressed big data in health care and the possible implications for privacy.  Other comments have been highly speculative, such as scoping out the territory of what it might mean for Amazon to get into the health care business. There have also been reports of research about privacy attitudes, such as the survey of participants in instruments for sharing genomic data online.  But there have been major gaps, too, such as a dearth of writing about the potential privacy implications of the precision medicine and million lives initiative and only a couple of short pieces about the problem of data security.

Here are a few quick sketches of the major current themes in health privacy and data use, that I hope writers and readers and researchers and most importantly policy makers will continue to monitor over the next five years (spoiler alert: I plan to keep writing about lots of them, and I hope others will too): Continue reading

Is There a Fourth Amendment Expectation of Privacy in Prescription Records? According to the Utah District Court, Maybe Not

It might come as a surprise to many in the United States that they may have no Fourth Amendment reasonable expectation of privacy in their physicians’ records when their physicians transfer these records to state agencies under state public health laws. Yet on July 27, the federal district court for the state of Utah said exactly this for records of controlled substance prescriptions—and perhaps for medical records more generally. (United States Department of Justice, Drug Enforcement Administration v. Utah Department of Commerce, 2017 WL 3189868 (D. Utah July 27)). Patients should know that their physicians are required by law to make reports of these prescriptions to state health departments, the court said. Because patients should know about these reports, they have no expectation of privacy in them as far as the Fourth Amendment is concerned.  And, so, warrantless searches by the Drug Enforcement Administration (DEA) are constitutionally permissible at least so far as the district of Utah is concerned.  Physicians are by law required to make many kinds of reports to state agencies: abuse, various infectious diseases, possible instances of bioterrorism, tumors, abortions, birth defects—and, in most states, controlled substance prescriptions.  The Utah court’s reasoning potentially throws into question the extent to which any of these reports may receive Fourth Amendment protection.

Continue reading

Sharing Data for 21st Century Cures – Two Steps Forward…

By Mary A. Majumder, Christi J. Guerrini, Juli M. Bollinger, Robert Cook-Deegan, and Amy L. McGuire

The 21st Century Cures Act was passed with support from both sides of the aisle (imagine that!) and signed into law by then-President Obama late last year. This ambitious legislation drives action in areas as diverse as drug and device regulation and response to the opioid epidemic. It also tackles the issue of how to make data more broadly available for research use and clinical purposes. In our recently published GIM article, “Sharing data under the 21st Century Cures Act,” we examine the Act’s potential to facilitate data-sharing, in line with a recent position statement of the American College of Medical Genetics and Genomics. We highlight a number of provisions of the Act that either explicitly advance data-sharing or promote policy developments that have the potential to advance it. For example, Section 2014 of the Act authorizes the Director of the National Institutes of Health to require award recipients to share data, and Section 4006 requires the Secretary of Health and Human Services to promote policies ensuring that patients have access to their electronic health information and are supported in sharing this information with others.

Just as relevant, the Act takes steps to reduce some major barriers to data sharing. An important feature of the Act, which has not been extensively publicized, is its incorporation of provisions from legislation originally proposed by Senators Elizabeth Warren and Mike Enzi to protect the identifiable, sensitive information of research subjects. Senator Warren, in particular, has been a vocal advocate of data sharing. Arguably, one of the biggest barriers to sharing is public concern about privacy. The relevant provisions address this concern chiefly via Certificates of Confidentiality. Among other things, the Act makes issuance of Certificates automatic for federally-funded research in which identifiable, sensitive information is collected and prohibits disclosure of identifiable, sensitive information by covered researchers, with only a few exceptions such as disclosure for purposes of other research. These protections became effective June 11, 2017. While NIH has signaled its awareness of the Act, it has not yet updated its Certificates of Confidentiality webpage. Continue reading

How should we organize consent to research biobanking in the hospital?

By Alena Buyx, MD PhD

Ever wondered what happens to the biological material you leave behind when you check out of the hospital? Nothing much, is the usual answer. However, the little bits of blood, tissue, and urine are potentially valuable for medical research; minuscule amounts may already allow sophisticated analyses, including genetic ones. Thus, in an approach termed ‘healthcare-embedded biobanking’, healthcare providers have started collections of leftover patient materials to create resources for future research.

However, unlike traditional research, healthcare-embedded biobanking is not done with a clear research question in mind. The materials are simply leftovers from diagnosis or treatment and, at the time of collection, the scientific projects in which they may eventually be used are entirely unclear.

This approach leads to an ethical conundrum. Established research ethics frameworks require that patients be asked for their consent and that they are given all the information they need to make an informed decision about whether to donate their material (and its associated data) or not. This includes, in particular, the research goals as well as the potential benefits and risks. However, this provision of information is not possible in healthcare-embedded biobanking: the risks and benefits can only be described in very broad terms, and the goals and timing of future research are usually unknown. Indeed, the materials may not even be used at all. Continue reading

The Problematic Patchwork of State Medical Marijuana Laws – New Research

By Abraham Gutman

The legal status of medical marijuana in the United States is unique. On one hand, the Controlled Substances Act of 1970 classifies marijuana as a Schedule I drug with no accepted medical use and high potential for abuse. On the other hand, as of February 1, 2017, 27 states and the District of Columbia have passed laws authorizing the use of medical marijuana. This discrepancy between federal and state regulation has led to a wide variation in the ways that medical marijuana is regulated on the state level.

In a study published today in Addiction, our team of researchers from the Temple University Center for Public Health Law Research and the RAND Drug Policy Research Center finds that state laws mimic some aspects of federal prescription drug and controlled substances laws, and regulatory strategies used for alcohol, tobacco and traditional medicines.

In the past, studies on medical marijuana laws have focused on the spillover effect of medical marijuana to recreational use and not on whether the laws are regulating marijuana effectively as a medicine. Using policy surveillance methods to analyze the state of medical marijuana laws and their variations across states, this study lays the groundwork for future research evaluating the implementation, impacts, and efficacy of these laws.

The study focuses on three domains of medical marijuana regulation that were in effect as of February 1, 2017: patient protections and requirements, product safety, and dispensary regulation.

Here’s some of what we found:

Continue reading

Will the Recent Workplace Wellness Bill Really Undermine Employee Health Privacy?

By Jessica L. Roberts

While the effort to repeal and replace the Affordable Care Act (ACA) has taken center stage, another health-related bill has been making its way through the House without nearly as much attention. On March 2, 2017, Representative Virginia Foxx (R-NC) introduced HR 1313 on behalf of herself and Representative Tim Walberg (R-MI). The bill would lift current legal restrictions on access to genetic and other health-related information. Specifically, HR 1313 targets provisions of the Americans with Disabilities Act (ADA) that prohibit employers from conducting unnecessary medical examinations and inquiries that do not relate to job performance; the Genetic Information Nondiscrimination Act’s (GINA) provisions proscribing employers from requesting, requiring or purchasing the genetic information of their employees; and GINA’s prohibition on group health insurance plans acquiring genetic information for underwriting purposes and prior to enrollment. The bill passed through the Committee on Education and the Workforce last Wednesday along strict party lines, with 22 Republicans supporting the proposed legislation and 17 Democrats opposing it.

Despite the public outcry against the bill, HR 1313 may not be as far-reaching as it initially appears. First, while advocates of genetic privacy fear the worst, both the ADA and GINA contain exceptions for wellness programs that already allow employers to access at least some employee health data. Second, even if HR 1313 passes, employees would still enjoy the ADA’s and GINA’s antidiscrimination protections.   HR 1313 could well give employers additional access to genetic and other health-related information about their employees but it is not a license to then use that information to discriminate.

Continue reading

New Book – Electronic Health Records and Medical Big Data: Law and Policy

Guest Post by author Sharona Hoffman

I am pleased to post that my new book, “Electronic Health Records and Medical Big Data: Law and Policy,” was recently published by Cambridge University Press.  The book enables readers to gain an in-depth understanding of electronic health record (EHR) systems, medical big data, and the regulations that govern them.  It is useful both as a primer for students and as a resource for knowledgeable professionals.

The transition from paper medical records to electronic health record (EHR) systems has had a dramatic impact on clinical care.  In addition, EHR systems enable the creation of “medical big data,” that is, very large electronic data resources that can be put to secondary, non-clinical uses, such as medical research, public health initiatives, quality improvement efforts, and other health-related endeavors.  This book provides thorough, interdisciplinary analysis of EHR systems and medical big data, offering a multitude of technical and legal insights. Continue reading

Genomic Testing, Reflective Equilibrium and the Right Not To Know

By Seán Finan

Almost any test can return incidental results. An incidental result is something demonstrated by the test but not an answer to the test’s original question. Trying on a new pair of trousers, for example, can tell you whether or not they fit. It can also return the incidental result that the holiday feasting hadn’t been as kind to your waistline as you had hoped. Incidental results in genetic testing can be even more alarming. Whether done for clinical or research purposes, genetic tests can reveal a range of mutations, markers and predispositions far beyond the range being tested for. As technology advances, it expands the breadth of possible results.

Incidental results can often impart life-changing information. Many can be a cause for dramatic but potentially life-saving medical intervention: the presence of BRCA1 and BRCA2 variants that indicate an increased risk of breast cancer, for example. Where incidental results suggest that a patient might have an increased risk of developing a condition in the distant future, that information might allow them to act immediately to mitigate that risk. Genetic testing might also reveal inherited or inheritable mutations that could be crucial information for a patient’s entire family. Even outside the realm of disease, a genetic test might reveal something that could have huge psychological or social ramifications for a patient: for example, a test might reveal true paternity. However, the potentially life-altering nature of some of these findings, in contexts where they are not being looked for or even expected, has led to questions about whether they should be revealed to the test subject at all.

Continue reading

CALL FOR ABSTRACTS, DUE 12/2! 2017 Annual Conference, “Transparency in Health and Health Care: Legal and Ethical Possibilities and Limits”


The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School is pleased to announce plans for our 2017 annual conference, entitled: Transparency in Health and Health Care: Legal and Ethical Possibilities and Limits.

Transparency is a relatively new concept to the world of health and health care, considering that just a few short decades ago we were still in the throes of a “doctor-knows-best” model. Today, however, transparency is found on almost every short list of solutions to a variety of health policy problems, ranging from conflicts of interest to rising drug costs to promoting efficient use of health care resources, and more. Doctors are now expected to be transparent about patient diagnoses and treatment options, hospitals are expected to be transparent about error rates, insurers about policy limitations, companies about prices, researchers about data, and policymakers about priorities and rationales for health policy intervention. But a number of important legal and ethical questions remain. For example, what exactly does transparency mean in the context of health, who has a responsibility to be transparent and to whom, what legal mechanisms are there to promote transparency, and what legal protections are needed for things like privacy, intellectual property, and the like?  More specifically, when can transparency improve health and health care, and when is it likely to be nothing more than a platitude?

This conference, and anticipated edited volume, will aim to: (1) identify the various thematic roles transparency has been called on to play in American health policy, and why it has emerged in these spaces; (2) understand when, where, how, and why transparency may be a useful policy tool in relation to health and health care, what it can realistically be expected to achieve, and when it is unlikely to be successful, including limits on how patients and consumers utilize information even when we have transparency; (3) assess the legal and ethical issues raised by transparency in health and health care, including obstacles and opportunities; (4) learn from comparative examples of transparency, both in other sectors and outside the United States.  In sum, we hope to reach better understandings of this health policy buzzword so that transparency can be utilized as a solution to pressing health policy issues where appropriate, while recognizing its true limitations.

Call for Abstracts

We welcome submissions on both the broad conceptual questions described above and more specific policy issues, including: Continue reading

Social Media Use in Research Recruitment: A New Guidance Document from Petrie-Flom and Harvard Catalyst

Imagine this scenario: you are a researcher conducting a clinical trial on a promising treatment for a rare but serious heart condition. Unfortunately, you are struggling to locate and enroll enough eligible participants and your study is at risk of not completing. Then you discover a Facebook support group for precisely the condition you are studying. The group is open: you do not need to be invited or to suffer from the condition to become a member—anyone can join. Here are the eligible participants you have been looking for!

But what are your obligations in approaching members of this group for recruitment? Would such recruitment be ethically advisable? Under what conditions? And what ethical norms apply when approaching sick and potentially vulnerable people for recruitment over social media? How should you (and the IRB) evaluate this type of activity from an ethical perspective?

Continue reading

FitBits Be Free: General Wellness Products Are Not (Generally) Medical Devices

By Nicolas Terry

The FDA has issued a final guidance on low risk wellness devices, and it is refreshingly clear. Rather than applying regulatory discretion as we have seen in the medical app space, the agency has made a broader decision (all usual caveats about non-binding guidances aside) not to even examine large swathes of wellness products to determine whether they are Section 201(h) devices. As such, this guidance more closely resembles the 2013 guidance that declared Personal Sound Amplification Products (PSAPs) not to be medical devices (aka hearing aids).

The FDA approach to defining excluded products breaks no new ground. First, they must be intended for only general wellness use and, second, present a low risk. As to the former, FDA has evolved its approach to referencing specific diseases or conditions. Make no such reference and your product will sail through as a general wellness product. Thus, claims to promote relaxation, to boost self-esteem, to manage sleep patterns, etc., are clearly exempt. On the other hand, the agency will clearly regulate products that claim to treat or diagnose specific conditions. Continue reading

Use of Estimated Data Should Require Informed Consent

Guest post by Donna M. Gitter, Zichlin School of Business, Baruch College, based on Professor Gitter’s presentation at the Petrie-Flom Center’s 2016 Annual Conference, “Big Data, Health Law, and Bioethics,” held May 6, 2016, at Harvard Law School.

Cross-posted from the Hastings Center’s Bioethics Forum.

The Icelandic biotech firm deCODE Genetics has pioneered a means of determining an individual’s susceptibility to various medical conditions with 99 percent accuracy by gathering information about that person’s relatives, including their medical and genealogical records. Of course, inferences have long been made about a person’s health by observing and gathering information about her relatives. What is unique about deCODE’s approach in Iceland is that the company uses the detailed genealogical records available in that country in order to estimate genotypes of close relatives of individuals who volunteered to participate in research, and extrapolates this information in order to make inferences about hundreds of thousands of living and deceased Icelanders who have not consented to participate in deCODE’s studies. DeCODE’s technique is particularly effective in Iceland, a small island nation that, due to its largely consanguineous population and detailed genealogical records, lends itself particularly well to genetic research.

While Iceland’s detailed genealogical records enable the widespread use of estimated data in Iceland, a large enough U.S. database could be used to make similar inferences about individuals here. While the U.S. lacks a national database similar to Iceland’s, private companies such as 23andme and Ancestry.com have created rough gene maps of several million people, and the National Institutes of Health plans to spend millions of dollars in the coming years sequencing full genome data on tens of thousands of people. These databases could allow the development of estimated data on countless U.S. citizens.

DeCODE plans to use its estimated data for an even bolder new study in Iceland. Having imputed the genotypes of close relatives of volunteers whose DNA had been fully catalogued, deCODE intends to collaborate with Iceland’s National Hospital to link these relatives, without their informed consent, to some of their hospital records, such as surgery codes and prescriptions. When the Icelandic Data Protection Authority (DPA) nixed deCODE’s initial plan, deCODE agreed that it would generate genetic imputations for those who have not consented only for a brief period, and then delete those imputations from the database. The only accessible data would be statistical results, which would not be traceable to individuals.

Are the individuals from whom estimated data is gathered entitled to informed consent, given that their data will be used for research, even if the data is putatively unidentifiable? In the U.S., consideration of this question must take into account not only the need for privacy enshrined in the federal law of informed consent, but also the right of autonomy, which empowers individuals to decline to participate in research. Although estimated DNA sequences, unlike directly measured sequences, are accurate at the group level rather than the individual level, individuals may nevertheless object to research participation for moral, ethical, and other reasons. A competing principle, however, is beneficence, and any impediment to deCODE’s use of its estimated data can represent a lost opportunity for the complex disease genetics community.

Continue reading

Legal Dimensions of Big Data in the Health and Life Sciences

By Timo Minssen

Please find below my welcome speech at last week’s mini-symposium on “Legal Dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics” at the University of Copenhagen (UCPH).  The event was organized by our Global Genes – Local Concerns project, with support from the UCPH Excellence Programme for Interdisciplinary Research.

The symposium, which was inspired by the wonderful recent PFC & Berkman Center Big Data conference, featured enlightening speeches by former PFC fellows Nicholson Price on incentives for the development of black box personalized medicine and Jeff Skopek on privacy issues. In addition, we were lucky to have Peter Yu speaking on “Big Data, Intellectual Property and Global Pandemics” and Michael J. Madison on “Big Data and Commons Challenges”. The presentations and recordings of the session will soon be made available on our Center’s webpage.

Thanks everybody for your dedication, inspiration, great presentations and an exciting panel discussion.

“Legal Dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics”

Continue reading