This past week I attended the International Neuroethics Society’s (INS) annual conference in San Diego, California. Neuroethics is a multidisciplinary field that grapples with the implications of neuroscience for—and from—medicine, law, philosophy, and the social sciences. One of the many excellent panels brought together scholars from each of these four disciplines to discuss the diverse approaches to the field. The panel featured Paul Appelbaum, Professor of Psychiatry at Columbia University; Tom Buller, Chair of Philosophy at Illinois State University; Jennifer Chandler, Professor of Law at the University of Ottawa; and Ilina Singh, Professor of Neuroscience & Society at the University of Oxford.
The panel started by considering the importance of the “competing identities” present in the field of neuroethics. As moderator Eric Racine explained, right from the start, even the term ‘neuroethics’ suggests a tension. Consider the variety of research methodologies employed in the field. For instance, a scholar trained in philosophy might approach neuroscience from a conceptual and purely analytical basis, and yet a social scientist might research the same question by collecting empirical interview data. The interplay between empirical and theoretical work was a theme that defined the discussion.
A psychiatrist by training, Dr. Appelbaum spoke on the medical approach to the field. He argued that a focus on ethical issues in clinical psychiatry and neurology should be viewed as a part (but only a part) of neuroethics. Furthermore, medicine’s empirical approach to neuroethics is one (but not the only) way to advance thinking on neuroethical issues.
What should the future look like for brain-based pain measurement in the law? This is the question tackled by our concluding three contributors: Diane Hoffmann, Henry (“Hank”) T. Greely, and Frank Pasquale. Professors Hoffmann and Greely are among the founders of the fields of health law and law & biosciences. Both discuss parallels to the development of DNA evidence in court and the need for similar standards, practices, and ethical frameworks in the brain imaging area. Professor Pasquale is an innovative younger scholar who brings great theoretical depth, as well as technological savvy, to these fields. Their perspectives on the use of brain imaging in legal settings, particularly for pain measurement, illuminate different facets of this issue.
This post describes their provocative contributions, which stake out different visions but also reinforce one another. It also highlights the forthcoming conference-based book with Oxford University Press and introduces future directions for the use of brain imaging of pain in areas as diverse as the law of torture, the death penalty, drug policy, criminal law, and animal rights and suffering. Please read on!
At least since the publication of the President’s Council on Bioethics’ 2003 report, “Beyond Therapy: Biotechnology and the Pursuit of Happiness,” there has been an ongoing debate about the ethics of using drugs to modify emotional memories. Rather than focus on the Hollywood-style total memory erasure featured in Eternal Sunshine of the Spotless Mind, many ground the debate in the molecular neuroscience of memory reconsolidation (for an excellent overview, see here). In the process of memory reconsolidation, a newly reactivated memory triggers certain molecular events that are necessary for it to return to long-term storage; during these events, the memory is temporarily susceptible to disruption by certain drugs, such as the beta-blocker propranolol. Further work with people with Post-Traumatic Stress Disorder (PTSD) suggests that using propranolol in this way doesn’t erase a memory, but may blunt the reconsolidation of the memory’s negative emotional content. In the ethical discussion, most agree that 1) it should usually be acceptable to use drugs to modify memories in cases of PTSD where the emotional content of memories becomes debilitating, but 2) the use of memory-modifying drugs is usually morally problematic when the target is everyday unpleasant memories, disappointments, and rejections.
Existing debate has focused on intentional memory modification. But what about those who modify memories in these problematic ways unintentionally? Conspicuously under-discussed is the ethics of continuing to use drugs with potential memory-modifying properties for the treatment of other medical conditions. Propranolol, for example, is on the Department of Veterans Affairs (VA) National Formulary for treatment of patients with severe liver disease (liver cirrhosis). This (not-small) population of people, in theory, risks unintentionally (and pre-emptively) modifying memories every day!
This past Sunday, a group of researchers reported in the journal Nature Medicine a preliminary technique that uses variation in blood levels of 10 fats to predict the likelihood that elderly individuals will develop mild cognitive impairment (MCI) or Alzheimer’s disease in the following 2-3 years. The sample size was small and the results may not generalize beyond the narrow age range and demographics of the study group (i.e. the assay is far from ready for “prime time”), but the study is an important first step toward a lower-cost (vs PET imaging) and less invasive (vs spinal tap) predictive biomarker of cognitive decline*. Its publication has also triggered a flurry of discussion on possible ethical ramifications of this sort of blood biomarker. I will not attempt to address these ethical issues specifically here. Rather, I seek to highlight that how ethically troubling one finds the technology may depend partly on the sort of knowledge one thinks these biomarkers reveal (applied epistemology at its best).
I recently saw someone walk into a signpost (amazingly, one that signalled ‘caution pedestrians’); by the angle and magnitude that his body rebounded, I estimated that this probably really hurt. What I had witnessed was a danger of walking under the influence of a smart phone. Because this man lacked the ability to tweet and simultaneously attend to and process the peripheral visual information that would enable him to avoid posts, the sidewalk was a dangerous place. If only there existed some way to enhance this cognitive ability, the sidewalks would be safer for multi-taskers (though less entertaining for bystanders).
In a public event on neurogaming held last Friday as part of the annual meeting of the International Neuroethics Society, Adam Gazzaley from UCSF described a method that may lead to just the type of cognitive enhancement this man needed. In a recent paper published in Nature, his team showed that sustained training on a game called NeuroRacer can effectively enhance the ability of elderly individuals to attend to and process peripheral visual information. While this game has a way to go before it can improve pedestrian safety, it does raise interesting questions about the future of our regulations surrounding distracted driving, e.g., driving while texting. In many jurisdictions, we prohibit texting while driving, and a California court recently ruled to extend these regulations to prohibit certain instances of driving under the influence of smart phones (i.e. smart driving).
But if individuals were to train on a descendant of NeuroRacer and improve their ability to visually multitask, should we give them a permit to text while driving?
“Examining the intersection of law and health care, biotech & bioethics”
– the subtitle of the Bill of Health blog.
I approach this intersection like many of my fellow students: outfitted with the tools and spectacles of a specific discipline. Whether that discipline is health law, policy, medicine, engineering, philosophy, genetics, or cognitive science, none of us has had the ideal education that would enable us not merely to approach this intersection but to inhabit it.
What would that ideal education be? To consider the ideal education for a citizen, Rousseau conducts an elaborate thought experiment giving that education to a fictional young boy named Emile (hence the title of the work: Emile, or On Education). Let us begin a similar experiment to consider the ideal education for someone to inhabit the intersection of law and health care, biotech & bioethics.
In general, the panel rightly pointed out practical limitations of these technologies. Panelist Nancy Kanwisher highlighted, for example, that research on lie-detection is done in a controlled, non-threatening environment from which we may be unable to generalize to criminal courts where the stakes are high.
While I was sympathetic to most of this discussion, I was puzzled by one point that the panel raised several times: the problematic nature of applying data based on a group of people to say something about an individual (e.g., this particular defendant). To present a simplified example: even if we could rigorously show a measurable difference in brain activity between a group of people who told a lie in the imager and a group of people who told the truth, we cannot conclude that an individual is lying simply because he shows an activity pattern similar to that of the liars. Since the justice system makes decisions about individuals, the argument goes, the use of group data is problematic.
To me, this categorical objection to group data seems a bit odd, and here is why: I can’t see how group data is conceptually different from ordinary circumstantial evidence.