In this next installment of today’s live-blogging of the conference (with all of the caveats about live-blogging mentioned by my colleagues, and my apologies for any errors or misrepresentations), we have Professors David Hyman (DH), Mark White (MW), and Andrea Freeman (AF) in a panel moderated by Glenn Cohen (GC) on the “Potential Problems and Limits of Nudges in Health Care”.
The panel began with DH, H. Ross & Helen Workman Chair in Law and Director of the Epstein Program in Health Law and Policy, University of Illinois College of Law, with a talk entitled “What Can PPACA (the Patient Protection and Affordable Care Act) Teach Us About Behavioral Law and Economics?” DH began with the observation that nudges often work quite well… “unless they don’t”. While many nudges are “sticky”, i.e. they influence behavior in the way they were intended, others are “slippery”, i.e. they fail to do so. His talk set out to illustrate this phenomenon and to pose two questions. The first was empirical: what makes a nudge sticky vs. slippery? The second was philosophical: is it meaningful to talk about a “failed nudge”, or when we do, do we really just mean failed marketing? He focused on an analysis of PPACA as a case study.
At least since the publication of the President’s Council on Bioethics’ 2003 report, “Beyond Therapy: Biotechnology and the Pursuit of Happiness”, there has been an ongoing debate about the ethics of using drugs to modify emotional memories. Rather than focus on the Hollywood-style total memory erasure featured in Eternal Sunshine of the Spotless Mind, many ground the debate in the molecular neuroscience of memory reconsolidation (for an excellent overview, see here). In the process of memory reconsolidation, a newly reactivated memory triggers certain molecular events that are necessary for it to return to long-term storage; during these events, the memory is temporarily susceptible to disruption by certain drugs, such as the beta-blocker propranolol. Further work with people with Post-Traumatic Stress Disorder (PTSD) suggests that using propranolol in this way does not erase a memory, but may blunt the reconsolidation of the memory’s negative emotional content. In the ethical discussion, most agree that 1) it should usually be acceptable to use drugs to modify memories in cases of PTSD where the emotional content of memories becomes debilitating, but 2) the use of memory-modifying drugs is usually morally problematic when the target is everyday unpleasant memories, disappointments, and rejections.
Existing debate has focused on intentional memory modification. But what about those who modify memories in these problematic ways unintentionally? Conspicuously under-discussed is the ethics of continuing to use drugs with potential memory-modifying properties for the treatment of other medical conditions. Propranolol, for example, is on the Department of Veterans Affairs (VA) National Formulary for treatment of patients with severe liver disease (liver cirrhosis). This (not-small) population of people, in theory, risks unintentionally (and pre-emptively) modifying memories every day!
I have written previously on this blog about morally modifying technologies (here and here), which by definition work no better than existing technologies but enable the side-stepping of a moral tension associated with those existing technologies. Generic pharmaceuticals are a particularly well-known and widely endorsed form of morally modifying technology: they have no therapeutic advantage over name-brand drugs, but by costing less they enable the sidestepping of some of the difficult moral issues involved in rationing healthcare. With the current public focus on limiting the rising cost of healthcare, moreover, there is increasing emphasis on the development and use of generics as a cost-saving measure. Jonathan J. Darrow has already written on this blog questioning whether we should celebrate increasing public endorsement of the development of these drugs that bring with them no new therapeutic benefit. But I would like to highlight in this post a different challenge to the responsible pursuit of a golden age of generics: bioequivalence.
To help keep the development costs of generics low, the FDA offers an abbreviated approval process that hinges on the generic being shown to be ‘bioequivalent’ to the name-brand drug (on top of requiring the generic to contain the same active chemical and to be taken by the same route and in the same dosage form) [See here and here]. Bioequivalence may sound reasonable, but many would be surprised to learn that it does not mean therapeutic equivalence.
This past Sunday, a group of researchers reported in Nature Medicine a preliminary technique that uses variation in blood levels of 10 fats to predict the likelihood that elderly individuals will develop mild cognitive impairment (MCI) or Alzheimer’s Disease in the following 2-3 years. The sample size was small and the results may not generalize beyond the narrow age range and demographics of the study group (i.e. the assay is far from ready for “prime time”), but the study is an important first step towards a lower-cost (vs. PET imaging) and less invasive (vs. spinal tap) predictive biomarker of cognitive decline*. Its publication has also triggered a flurry of discussion on possible ethical ramifications of this sort of blood biomarker. I will not attempt to address these ethical issues specifically here. Rather, I seek to highlight that how ethically troubling one views the technology to be may depend partly on the sort of knowledge one thinks these biomarkers reveal (applied epistemology at its best).
In my last blog post, I suggested that we consider incentivizing scientists and engineers to develop technologies that side-step ethical dilemmas entangling certain current technologies. I highlighted that these morally modifying technologies 1) neither resolve a moral debate nor do they take a side, 2) usually do not function empirically better than existing technology, and 3) make a moral dilemma less practically problematic by providing a technological work-around. I highlighted induced pluripotent stem cells, blood recirculators, and fixed-time ventilators as three examples of morally modifying technologies. But when is it a bad idea to encourage the development of morally modifying technologies?
In response to an excellent comment on that post by Joanna Sax, I would like to extend my initial description of technological solutions to moral problems to a discussion of their limits and the potential problems that might accompany them. I will begin here with the three externalities Joanna suggested and start a discussion on how they might be avoided.
When we consider our society’s tough moral questions, like whether it is acceptable to use embryonic stem cells for research and medicine, we often look towards governmental leaders, policy makers, lawyers, and ethicists to find solutions. But should we look more often towards engineers?
Morally modifying technologies represent an under-incentivized means through which scientists and engineers could help us disentangle our society’s most controversial moral issues. They have three key components. The first is that they neither resolve a moral debate (in this case, the acceptability of embryonic stem cells for research and medicine) nor comment on the validity of the reasons on each side of an issue; the moral questions raised are equally problematic before and after the invention of the technology. The second is that, even though the issue itself is unaffected, the importance of resolving it seems to diminish. That is, morally modifying technologies make a moral dilemma less practically problematic. The third is that the new technology often does not perform the desired function empirically better than existing technology (it might even perform worse), but does so in a morally less problematic way; that is, if it were not for the moral advantage, the technology might be thought of as redundant.
Recently in the New England Journal of Medicine, D.S. Jones described the history of a dangerous new technology whose detrimental health effects had clinicians very worried. That technology was the automobile. While the public health concerns ranged from inactivity to new maladies like “automobile knee”, by far the greatest concern was automobile accidents. Jones recounts that in 1912, accident mortality was such a big problem that a New York coroner’s clerk said “our streets are becoming as perilous as a battlefield”, and by 1957 the evaluation was not much better: Harvard researchers described accident mortality as a “mass disease of epidemic proportions”. Interestingly, Jones highlights that doctors viewed this epidemic not merely as a governmental problem, but as one carrying a moral imperative that doctors themselves play a role both in studying what factors lead to car crashes and (more controversially) in identifying high-risk drivers, thus contributing more directly to prevention.
Now, in many developing economies across the globe, an interesting twist on this story is emerging: while modern cars have long existed in these locations, only very recently has there been a massive expansion of well-paved roads. And along with new and improved transport routes come new risks to public health.
There are many ways to drive medicine forward. One is to work to remove economic, political, or geographic barriers to accessing care, and thus aid those whose suffering can be assuaged but currently is not. Another is to work to develop treatments for types of suffering poorly eased (or addressed) by current care. Both are important. Serious pursuit of the second strategy, however, requires the participation of industry; translating bench science into benefits for real people will usually require the manufacture of new medicines or devices, a function that universities and public institutions do not perform but industry does well.
But for those students, like me, currently training in MD-PhD programs in hopes of pursuing this goal of translational medicine, it is not at all clear what attitude we should take towards industry. On the one hand, the vision to move science from bench to bedside would seem best served by those clinician-scientists who do not see publication as the end result but are devoted to responsibly guiding their discoveries into the industrial setting and propelling them to patients. On the other hand, connections between industry and academia are often described categorically as “conflicts of interest” that must be disclosed and ideally divested. I will not attempt to comment here on the events that have led to a prima facie (pharma facie?) negative valence of academic-industrial connections; I was struck to hear, however, one of the panelists (an academic) on a recent panel discussion on translational medicine open with a slow and measured statement affirming her belief that collaborations between academics and industry can be a “good thing.” She then paused, as if to let the shock of the statement permeate the audience.
I recently saw someone walk into a signpost (amazingly, one that signaled ‘caution pedestrians’); judging by the angle and speed at which his body rebounded, I estimated that it probably really hurt. What I had witnessed was a danger of walking under the influence of a smartphone. Because this man lacked the ability to tweet while simultaneously attending to and processing the peripheral visual information that would enable him to avoid posts, the sidewalk was a dangerous place. If only there existed some way to enhance this cognitive ability, the sidewalks would be safer for multi-taskers (though less entertaining for bystanders).
In a public event on neurogaming held last Friday as part of the annual meeting of the International Society for Neuroethics, Adam Gazzaley from UCSF described a method that may lead to just the type of cognitive enhancement this man needed. In a recent paper published in Nature, his team showed that sustained training on a game called NeuroRacer can effectively enhance the ability of elderly individuals to attend to and process peripheral visual information. While this game has a way to go before it can improve pedestrian safety, it does raise interesting questions about the future of our regulations surrounding distracted driving, e.g., driving while texting. In many jurisdictions, we prohibit texting while driving, and a California court recently ruled to extend these regulations to prohibit certain instances of driving under the influence of smart phones (i.e. smart driving).
But if individuals were to train on a descendant of NeuroRacer and improve their ability to visually multitask, should we give them a permit to text while driving?
“Examining the intersection of law and health care, biotech & bioethics”
– the subtitle of the Bill of Health blog.
I approach this intersection like many of my fellow students: outfitted with the tools and spectacles of a specific discipline. Whether that discipline is health law, policy, medicine, engineering, philosophy, genetics, or cognitive science, none of us has had the ideal education that would enable not merely an approach to, but an inhabitation of, this intersection.
What would that ideal education be? To consider the ideal education for a citizen, Rousseau conducts an elaborate thought experiment giving that education to a fictional young boy named Emile (hence the title of the work: Emile, or On Education). Let us begin a similar experiment to consider the ideal education for someone to inhabit the intersection of law and health care, biotech & bioethics.
In general, the panel rightly pointed out practical limitations of these technologies. Panelist Nancy Kanwisher highlighted, for example, that research on lie-detection is done in a controlled, non-threatening environment from which we may be unable to generalize to criminal courts where the stakes are high.
While I was sympathetic to most of this discussion, I was puzzled by one point that the panel raised several times: the problematic nature of applying data based on a group of people to say something about an individual (e.g., this particular defendant). To present a simplified example: even if we could rigorously show a measurable difference in brain activity between a group of people who told a lie in the imager and a group of people who told the truth, we cannot conclude that an individual is lying simply because he shows an activity pattern similar to the liars’. The objection holds that since the justice system makes decisions about individuals, the use of group data is problematic.
To me, this categorical objection to group data seems a bit odd, and this is why: I can’t see how group data is conceptually different from ordinary circumstantial evidence.
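To make the parallel concrete, here is a minimal sketch of how group-level statistics bear on an individual case via Bayes’ rule, exactly as circumstantial evidence does. All of the numbers are hypothetical, chosen only for illustration; they are not drawn from any actual lie-detection study.

```python
# Hypothetical illustration: updating belief about one individual
# using group-level rates, via Bayes' rule. The rates below are
# invented for the example, not taken from real fMRI data.

def posterior_lying(prior, p_pattern_given_lie, p_pattern_given_truth):
    """P(this individual is lying | they show the 'liar' brain pattern)."""
    numerator = p_pattern_given_lie * prior
    denominator = numerator + p_pattern_given_truth * (1 - prior)
    return numerator / denominator

# Assumed values: a 50% prior; 80% of the group of liars showed the
# pattern; 10% of the group of truth-tellers also showed it.
p = posterior_lying(prior=0.5, p_pattern_given_lie=0.8, p_pattern_given_truth=0.1)
print(f"P(lying | pattern) = {p:.3f}")  # about 0.889
```

The same arithmetic applies to a fingerprint at the scene or a witness placing the defendant nearby: group-level base rates shift the probability for the individual without ever being conclusive on their own, which is precisely how courts already treat circumstantial evidence.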
The WHO Surgical Safety Checklist is unusual as a patient-safety intervention in that it has been widely promoted as universally effective, i.e. effective both in high-income and resource-limited settings; checklists are now used in approximately 1800 hospitals worldwide. In a paper recently published in BMJ Open, Aveling and colleagues report the results of a qualitative study on the implementation of the WHO checklist in two UK hospitals and two hospitals in resource-constrained settings in Africa. Their results suggest that the checklist is “no magic bullet”: if adopted without proper investment and adaptation to the context of the target hospital, the checklist not only may fail to replicate its benefits, but can actually impose its own unintended costs, especially in resource-limited settings. Though the study raises a number of interesting questions, given the nature of this blog, I am hoping that we might start a discussion about those in the domain of ethics and law.
For example, consider the following real case, which was reported in the BMJ paper:
“A patient admitted for cholecystectomy [surgical removal of the gallbladder] suffered hypoxic [oxygen deprivation-related] brain injury and died following surgery. Subsequently, two staff members (not the surgeon) were threatened with guns by the patient’s family, who said that the surgical team had killed the patient. The two staff members were later arrested and criminal charges brought against one of them. One of the questions asked during the police investigation was whether a pulse oximeter [i.e. a tool for measuring blood-oxygen levels] had been used. It had not: according to staff, no pulse oximeter was available for use, even though the checklist requiring use of this equipment was, officially, in use at the hospital.”
The staff members also went weeks without legal representation, because no clear policies had been established about who was responsible for providing that counsel.
The Petrie-Flom Center is pleased to welcome our 2013-2014 Student Fellows. During the coming year, each of the fellows will pursue independent research under the supervision of Center faculty and fellows. They will also be regular contributors at the Bill of Health on issues relating to their research.