In this brief essay, I describe a new type of medical information that is not protected by existing privacy laws. I call it Emergent Medical Data (EMD) because at first glance, it has no relationship to your health. Companies can derive EMD from your seemingly benign Facebook posts, a list of videos you watched on YouTube, a credit card purchase, or the contents of your e-mail. A person reading the raw data would be unaware that it conveys any health information. Machine learning algorithms must first massage the data before its health-related properties emerge.
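To make the idea concrete, here is a deliberately toy Python sketch of how a classifier can surface a health signal that no human reader would see in the raw text. Every post, label, and score below is invented for illustration; this is not any company's actual pipeline.

```python
# Toy illustration of "emergent" health inference from benign text.
# All posts and labels are fabricated; a real pipeline would train on
# millions of examples, not six.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "craving something sweet again, third soda today",
    "so thirsty lately, refilling my water bottle constantly",
    "great hike this weekend, legs are sore",
    "new recipe night: roasted veggies and quinoa",
    "my feet have been tingling all week, weird",
    "movie marathon with the kids, popcorn everywhere",
]
# Hypothetical ground-truth flag (e.g., a later self-reported diagnosis).
has_condition = [1, 1, 0, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, has_condition)

# A new, seemingly benign post yields a health-risk score that is
# invisible to anyone reading the post itself.
print(model.predict_proba(["always parched and snacking at my desk"])[0, 1])
```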
Unlike medical information obtained by healthcare providers, which is protected by the Health Insurance Portability and Accountability Act (HIPAA), EMD receives little to no legal protection. A common rationale for maintaining health data privacy is that it promotes full transparency between patients and physicians. HIPAA assures patients that the sensitive conversations they have with their doctors will remain confidential. The penalties for breaching confidentiality can be steep. In 2016, the Department of Health and Human Services recorded over $20 million in fines resulting from HIPAA violations. When companies mine for EMD, they are not bound by HIPAA or subject to these penalties.
Pancreatic cancer is one of the deadliest diseases. The five-year survival rate of patients with the disease is only about 7%, in part because few observable symptoms appear early enough for effective treatment. As a result, by the time many patients are diagnosed, the prognosis is poor. One app, however, is attempting to change that. BiliScreen, developed by researchers at the University of Washington, is designed to help users identify pancreatic cancer early with an algorithm that analyzes selfies. Users take photos of themselves, and the app’s artificially intelligent algorithm detects slight discolorations in the skin and eyes associated with early pancreatic cancer.
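BiliScreen's actual computer-vision pipeline is more sophisticated than anything shown here and is not public as code, but a crude sketch conveys the underlying idea: elevated bilirubin tints the whites of the eyes yellow, which shows up as red-plus-green signal relative to blue. Everything below, including the patch coordinates and the cutoff, is invented for illustration.

```python
import numpy as np

def yellowness_score(selfie_rgb: np.ndarray, region: tuple) -> float:
    """Crude proxy for scleral yellowing: mean (R+G)/2 minus mean B over a
    hand-picked eye region. Real systems segment the sclera automatically
    and calibrate color against a physical reference; this sketch does not."""
    top, bottom, left, right = region
    patch = selfie_rgb[top:bottom, left:right].astype(float)
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    return float(((r + g) / 2 - b).mean())

# Synthetic 480x640 "selfie"; the eye patch is tinted yellowish.
img = np.full((480, 640, 3), 128, dtype=np.uint8)
img[200:230, 300:360] = [200, 190, 90]

score = yellowness_score(img, (200, 230, 300, 360))
THRESHOLD = 50.0  # invented cutoff, not a clinical value
print(f"score={score:.1f}:", "flag for follow-up" if score > THRESHOLD else "ok")
```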
Diagnostic apps like BiliScreen represent a huge step forward for preventive health care. Imagine a world in which the vast majority of chronic diseases are caught early because each of us has the power to screen ourselves on a regular basis. One of the big challenges for the modern primary care physician is convincing patients to get screened regularly for diseases that have relatively good prognoses when caught early.
I’ve written before about the possible impacts of artificial intelligence and algorithmic medicine, arguing that both medicine and law will have to adapt as machine-learning algorithms surpass physicians in their ability to diagnose and treat disease. Those pieces, however, primarily consider artificially intelligent algorithms licensed to and used by medical professionals in hospital or outpatient settings. They are about the relationship between a doctor and the sophisticated tools in her diagnostic toolbox — and about how relying on algorithms could decrease the pressure physicians feel to order unnecessary tests and procedures to avoid malpractice liability. They rest on an underlying assumption that the algorithms have already been evaluated and approved for use by the physician’s institution, and that the physician has experience using them. BiliScreen does not fit this mold — the algorithm is not a piece of medical equipment used by hospitals, but rather part of an app that could be downloaded and used by anyone with a smartphone. Accordingly, apps like BiliScreen fall into a category of “democratized” diagnostic algorithms. While this democratization has the potential to drastically improve preventive care, it also has the potential to undermine the financial sustainability of the U.S. health care system.
The FDA has issued a final guidance on low-risk wellness devices, and it is refreshingly clear. Rather than applying enforcement discretion as we have seen in the medical app space, the agency has made a broader decision (all usual caveats about non-binding guidances aside) not to even examine large swathes of wellness products to determine whether they are Section 201(h) devices. As such, this guidance more closely resembles the 2013 guidance that declared Personal Sound Amplification Products (PSAPs), in effect hearing aids by another name, not to be medical devices.
The FDA’s approach to defining excluded products breaks no new ground. First, they must be intended only for general wellness use and, second, they must present a low risk. As to the former, the touchstone is whether a product references specific diseases or conditions. Make no such reference and your product will sail through as a general wellness product. Thus, claims to promote relaxation, to boost self-esteem, to manage sleep patterns, and the like are clearly exempt. On the other hand, the agency will clearly regulate products that claim to treat or diagnose specific conditions.
The symposium, which was inspired by the wonderful recent PFC & Berkman Center Big Data conference, featured enlightening speeches by former PFC fellows Nicholson Price on incentives for the development of black-box personalized medicine and Jeff Skopek on privacy issues. In addition, we were lucky to have Peter Yu speaking on “Big Data, Intellectual Property and Global Pandemics” and Michael J. Madison on “Big Data and Commons Challenges”. The presentations and recordings of the session will soon be made available on our Center’s webpage.
Thanks to everybody for your dedication, inspiration, great presentations, and an exciting panel discussion.
“Legal Dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics”
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical.
The company is exploring creating online “support communities” that would connect Facebook users suffering from various ailments. . . . Recently, Facebook executives have come to realize that healthcare might work as a tool to increase engagement with the site. One catalyst: the unexpected success of Facebook’s “organ-donor status initiative,” introduced in 2012. The day that Facebook altered profile pages to allow members to specify their organ-donor status, 13,054 people registered to be organ donors online in the United States, a 21-fold increase over the daily average of 616 registrations . . . . Separately, Facebook product teams noticed that people with chronic ailments such as diabetes would search the social networking site for advice, said one former Facebook insider. In addition, the proliferation of patient networks such as PatientsLikeMe demonstrates that people are increasingly comfortable sharing symptoms and treatment experiences online. . . . Facebook may already have a few ideas to alleviate privacy concerns around its health initiatives. The company is considering rolling out its first health application quietly and under a different name, a source said.
Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information.
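As a rough illustration of the “building and validating” phases described above, the sketch below fits a simple risk model on one cohort and checks its discrimination on data it never saw. The features and outcome are synthetic stand-ins chosen for illustration, not a recommended clinical model.

```python
# Minimal sketch of the build-and-validate phases of the life cycle:
# fit on a training cohort, evaluate on a held-out one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
# Synthetic stand-ins for predictors (e.g., age, a lab value, prior visits).
X = rng.normal(size=(n, 3))
# Synthetic outcome (e.g., 30-day readmission), loosely tied to the features.
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```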
I will also have a related paper on mobile health coming out later this summer that I will blog about when it comes out…
For those interested in the FDA’s decision to regulate 23andMe’s direct-to-consumer genetic testing service, it is worth reading a recent comment in Nature by Robert Green and Nita Farahany. The piece raises two core objections to the FDA’s decision that deserve further attention.
One objection is that the FDA’s decision runs contrary to “the historical trend of patient empowerment that brought informed-consent laws, access to medical records and now direct access to electronic personal health data.” Green and Farahany suggest that 23andMe and manufacturers of other consumer medical products (such as mobile health apps) “democratize health care by enabling individuals to make choices that maximize their own health,” and that we must not “stifle all innovations that do not fit into the traditional model of physician-driven health care.”
While I agree with Green and Farahany that we should not be locked into physician-driven health care, I am not sure that the information provided by 23andMe and medical apps is unambiguously “patient-empowering” and “democratizing” (a framing of personalized medicine that pervades both marketing materials and academic journals).
It is estimated that 500,000 patients are discharged from U.S. hospitals against the recommendations of medical staff each year. This category of discharges, dubbed discharges against medical advice (DAMA), encompasses cases in which patients request to be discharged in spite of countervailing medical counsel to remain hospitalized. Despite safeguards that exist to ensure that patients are adequately informed and competent to make such decisions, these cases can be ethically challenging for practitioners who may struggle to balance their commitments to patient-centered care with their impulse to accomplish what is in their view best for a patient’s health.
Writing in the most recent issue of JAMA, Alfandre et al. contend that “the term [‘discharge against medical advice’] is an anachronism that has outlived its usefulness in an era of patient-centered care.” They argue that the concept and category of DAMA “sends the undesirable message that physicians discount patients’ values in clinical decision making. Accepting an informed patient’s values and preferences, even when they do not appear to coincide with commonly accepted notions of good decisions about health, is always part of patient-centered care.” The driving assumption here seems to be that if physicians genuinely include patients’ interests and values in their assessments, then the possibility of “discharge against medical advice” is ruled out ab initio, since any medical advice issued would necessarily encapsulate and reflect patients’ preferences. They therefore propose that “[f]or a profession accountable to the public and committed to patient-centered care, continued use of the discharged against medical advice designation is clinically and ethically problematic.”
While abandoning DAMA procedures may well augment patients’ sense of acceptance among medical providers and reduce deleterious effects on therapeutic relationships that may stem from having to sign DAMA forms, it leaves relatively unaddressed the broader question of how to mitigate health risks patients may experience following medically premature or unplanned discharge. Alfandre and Schumann’s robust interpretation of patient-centeredness also raises the question of how to handle situations in which patients refuse medically appropriate discharge. On this interpretation, can the ideal of patient-centered care be squared with concerns for optimizing the equity and efficiency of resource allocations more broadly?
These issues generate unprecedented opportunities for healthcare innovators and entrepreneurs to design solutions that can effectively address widening disparities between healthcare supply and demand, particularly within vulnerable and underserved areas.
Three days of hearings by a House of Representatives committee concluded yesterday with a pledge from an FDA official to finalize long-awaited guidance on the regulation of mobile medical applications “in coming weeks” and, at the latest, by the end of the FDA’s fiscal year (i.e., September 30).
The hearings, convened jointly by several subcommittees of the House Energy and Commerce Committee, were announced last week following a pointed letter to the FDA (pdf) from seven committee members on March 1st. In the letter, the Congressmen pressed the FDA for information on the agency’s mHealth regulatory timeline and on the proposed regulations’ implications for innovation and industry.
For years, and with increasing frequency, health care and information technology companies have touted the potential of mobile medical and health applications and technologies to improve the quality and delivery of health care. While the future of mobile health (frequently referred to as “mHealth”) is undoubtedly filled with promise, the legal and regulatory landscape in which mHealth technologies reside is only now beginning to take shape.
As mHealth developers, funders, and even users consider investing in the field, or in particular mHealth technologies, they should keep in mind the emergent and fluid nature of the mHealth regulatory landscape. Here, we outline the likely key players and discuss several recent and projected initiatives with respect to the oversight of mHealth technologies: