The symposium, which was inspired by the wonderful recent PFC & Berkman Center Big Data conference, featured enlightening speeches by former PFC fellows Nicholson Price on sufficient incentives for the development of personalized medicine and Jeff Skopek on privacy issues. In addition, we were lucky enough to have Peter Yu speaking on “Big Data, Intellectual Property and Global Pandemics” and Michael J. Madison on “Big Data and Commons”. The presentations and recordings of the session will soon be made available on our Center’s webpage.
Thanks to everybody for your dedication, inspiration, great presentations, and an exciting panel discussion.
“Legal Dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics”
By PI Timo Minssen
“Our goal is to create a European Open Science Cloud to make science more efficient and productive and let millions of researchers share and analyze research data in a trusted environment across technologies, disciplines and borders”.
– Carlos Moedas, EU Commissioner for Research, Science & Innovation
“The European Cloud Initiative will unlock the value of big data by providing world-class supercomputing capability, high-speed connectivity and leading-edge data and software services for science, industry and the public sector.”
– Günther H. Oettinger, Commissioner for the Digital Economy and Society
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical.
The company is exploring creating online “support communities” that would connect Facebook users suffering from various ailments. . . . Recently, Facebook executives have come to realize that healthcare might work as a tool to increase engagement with the site. One catalyst: the unexpected success of Facebook’s organ-donor status initiative, introduced in 2012. The day that Facebook altered profile pages to allow members to specify their organ donor status, 13,054 people registered to be organ donors online in the United States, a 21-fold increase over the daily average of 616 registrations . . . . Separately, Facebook product teams noticed that people with chronic ailments such as diabetes would search the social networking site for advice, said one former Facebook insider. In addition, the proliferation of patient networks such as PatientsLikeMe demonstrates that people are increasingly comfortable sharing symptoms and treatment experiences online. . . . Facebook may already have a few ideas to alleviate privacy concerns around its health initiatives. The company is considering rolling out its first health application quietly and under a different name, a source said.
Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information.
I will also have a related paper on mobile health coming out later this summer that I will blog about when it comes out…
For those interested in the FDA’s decision to regulate 23andMe’s direct-to-consumer genetic testing service, it is worth reading a recent comment in Nature by Robert Green and Nita Farahany. The piece raises two core objections to the FDA’s decision that deserve further attention.
One objection is that the FDA’s decision runs contrary to “the historical trend of patient empowerment that brought informed-consent laws, access to medical records and now direct access to electronic personal health data.” Green and Farahany suggest that 23andMe and manufacturers of other consumer medical products (such as mobile health apps) “democratize health care by enabling individuals to make choices that maximize their own health,” and that we must not “stifle all innovations that do not fit into the traditional model of physician-driven health care.”
While I agree with Green and Farahany that we should not be locked into physician-driven health care, I am not sure that the information provided by 23andMe and medical apps is unambiguously “patient-empowering” and “democratizing” (a framing of personalized medicine that pervades both marketing materials and academic journals).
It is estimated that 500,000 patients are discharged from U.S. hospitals against the recommendations of medical staff each year. This category of discharges, dubbed discharges against medical advice (DAMA), encompasses cases in which patients request to be discharged in spite of countervailing medical counsel to remain hospitalized. Despite safeguards that exist to ensure that patients are adequately informed and competent to make such decisions, these cases can be ethically challenging for practitioners who may struggle to balance their commitments to patient-centered care with their impulse to accomplish what is in their view best for a patient’s health.
Writing in the most recent issue of JAMA, Alfandre et al. contend that “the term [‘discharge against medical advice’] is an anachronism that has outlived its usefulness in an era of patient-centered care.” They argue that the concept and category of DAMA “sends the undesirable message that physicians discount patients’ values in clinical decision making. Accepting an informed patient’s values and preferences, even when they do not appear to coincide with commonly accepted notions of good decisions about health, is always part of patient-centered care.” The driving assumption here seems to be that if physicians genuinely include patients’ interests and values in their assessments, then the possibility of “discharge against medical advice” is ruled out ab initio, since any medical advice issued would necessarily encapsulate and reflect patients’ preferences. They therefore propose that “[f]or a profession accountable to the public and committed to patient-centered care, continued use of the discharged against medical advice designation is clinically and ethically problematic.”
While abandoning DAMA procedures may well augment patients’ sense of acceptance among medical providers and reduce deleterious effects on therapeutic relationships that may stem from having to sign DAMA forms, it leaves relatively unaddressed the broader question of how to mitigate health risks patients may experience following medically premature or unplanned discharge. Alfandre and Schumann’s robust interpretation of patient-centeredness also raises the question of how to handle situations in which patients refuse medically appropriate discharge. On this interpretation, can the ideal of patient-centered care be squared with concerns for optimizing the equity and efficiency of resource allocations more broadly?
These issues generate unprecedented opportunities for healthcare innovators and entrepreneurs to design solutions that can effectively address widening disparities between healthcare supply and demand, particularly within vulnerable and underserved areas.
Three days of hearings by a House of Representatives committee concluded yesterday with a pledge from an FDA official to finalize long-awaited guidance on the regulation of mobile medical applications “in coming weeks,” and at the latest by the end of the FDA’s fiscal year (i.e., September 30th).
The hearings, convened jointly by several subcommittees of the House Energy and Commerce Committee, were announced last week following a pointed letter to the FDA (pdf) from seven committee members on March 1st. In the letter, the Congressmen pressed the FDA for information on the agency’s mHealth regulatory timeline and the implications for innovation and industry of the proposed regulations.
For years, and with increasing frequency, health care and information technology companies have touted the potential of mobile medical and health applications and technologies to improve the quality and delivery of health care. While the future of mobile health (frequently referred to as “mHealth”) is undoubtedly filled with promise, the legal and regulatory landscape in which mHealth technologies reside is only now beginning to take shape.
As mHealth developers, funders, and even users consider investing in the field, or in particular mHealth technologies, they should keep in mind the emergent and fluid nature of the mHealth regulatory landscape. Here, we outline the likely key players and discuss several recent and projected initiatives with respect to the oversight of mHealth technologies: