Glen and Mark have done their bit for benchmarking our field with another round of health law professor rankings. It is a largely thankless task, so thank you professors. Last year, I responded to their list with the observation that any count based on law review publication alone was problematic in assessing the contributions of those in our field whose scholarship is primarily empirical or aimed at the health world. I offered a suggestive “top scholars list” based on Google Scholar profiles. Using Google Scholar, which captures articles in all fields, plus books and gray literature, brought a number of different names into the top 20. Since Google Scholar depends on individuals to create and clean their profiles, my list missed a lot of top scholars without profiles (I am talking about you, Michelle Mello and George Annas, etc. etc.), but it was enough to suggest that some very productive and much-cited scholars were missed in the Hall-Cohen list.
By Barbara A. Spellman, Professor of Law and Professor of Psychology, University of Virginia School of Law
Journals and scientists should be BFFs. But currently they are frenemies. Or, in adult-speak:
Journals play an important role in ensuring that the scientific enterprise is sound. Their most obvious function is to publish science—good science, science that has been peer-reviewed by experts and is of interest to a journal’s readership. But in fulfilling that mission, journals may provide incentives to scientists that undermine the quality of published science and distort the scientific record.
Journal policies certainly contributed to the replication crisis. As businesses, publishers (appropriately) want to make money; to do so they need people to buy, read, and cite their journals. To make that happen, editors seek articles that are novel, that confirm some new hypothesis, and that have clear results. Scientists know that editors want articles with these qualities. Accordingly, scientists may (knowingly or not) bias the scientific process to produce that type of result.
November 6, 2017 5-7 PM
Hauser Hall, Room 104
Harvard Law School, 1575 Massachusetts Ave., Cambridge, MA
Download the Presentation: “Once Ticketed, Twice Shy? Specific Deterrence from Road Traffic Laws”
David M. Studdert is Professor of Medicine and Professor of Law at Stanford University. He is a leading expert in the fields of health law and empirical legal research. His scholarship explores how the legal system influences the health and well-being of populations. A prolific scholar, he has authored more than 150 articles and book chapters, and his work appears frequently in leading international medical, law, and health policy publications.
Before joining the Stanford faculty, Studdert was on the faculty at the University of Melbourne (2007-13) and the Harvard School of Public Health (2000-06). He has also worked as a policy analyst at the RAND Corporation, a policy advisor to the Minister for Health in Australia, and a practicing attorney.
Studdert has received the Alice S. Hersh New Investigator Award from AcademyHealth, the leading organization for health services and health policy research in the United States. He was awarded a Federation Fellowship (2006) and a Laureate Fellowship (2011) by the Australian Research Council. He holds a law degree from the University of Melbourne and a doctoral degree in health policy and public health from the Harvard School of Public Health.
October 16, 2017 5:00 PM
Hauser Hall, Room 104
Harvard Law School, 1575 Massachusetts Ave., Cambridge, MA
Presentation: “Cops, Docs, and Code: A Dialogue Between Big Data in Health Care and Predictive Policing” by I. Glenn Cohen & Harry S. Graver
This paper is not available for download. To request a copy in preparation for the workshop, please contact Jennifer Minnich at jminnich at law.harvard.edu.
I. Glenn Cohen is Professor of Law and Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.
Glenn’s current research projects relate to health information technologies, mobile health, reproduction/reproductive technology, research ethics, rationing in law and medicine, health policy, FDA law and to medical tourism – the travel of patients who are residents of one country, the “home country,” to another country, the “destination country,” for medical treatment. His past work has included projects on end of life decision-making, FDA regulation and commodification.
Paul Erwin, Associate Editor of the American Journal of Public Health, recently wrote about the establishment of a Sentinel Practitioner Surveillance System for Policy Change Impact, or what might be called “sentinel policy surveillance.” The network of twelve diverse health officers will be trying to identify and share instances of harmful impact from Trump administration policies.
Erwin is suitably circumspect about what such a network can do. It is, he writes, no replacement for research, and, indeed, it may report perceived or feared effects as often as real ones. Still, I found the idea intriguing to ruminate on. What follows are some scattered thoughts about the concept. I hope readers will add theirs. Mostly I am interested in how the practice fits with general policy surveillance and public health law research. Continue reading
By Brad Segal
When people fall acutely ill, they deserve a non-sleep deprived doctor—but they also deserve an adequately-trained doctor. There are only so many hours to the day, and so in medical education a resident’s need for self-care must be balanced against the need for maximum clinical exposure. Since 2003, when restrictions to resident duty hours were first enacted, there has been disagreement about how to best navigate the tension. Recently, the debate resurfaced when the Accreditation Council for Graduate Medical Education (ACGME) proposed a change to the policy governing resident duty hour limits. Perhaps the most surprising part of the announcement was that their proposal increased the time limit that interns (first year residents) can care for patients without sleep. The policy ACGME enacted in 2011 had capped interns at 16 hours on-call, and the proposal increases the limit to 28 hours.
In my prior post I raised arguments for and against the proposed changes to duty hour limits. Here I will unpack the conclusions and limitations of the best empirical evidence available to ACGME: the Flexibility in Duty Hour Requirements for Surgical Trainees (FIRST) Trial. Published in the New England Journal of Medicine (NEJM) in 2016, the FIRST Trial randomized 117 surgical residency programs nationwide to have either “standard” duty hour policies, which included the current 16-hour cap on interns, or “flexible” policies, which reflect the recent ACGME proposal. Data were collected from July 2014 to June 2015. The sister study involving medical residencies nationwide has regrettably not yet been published.
The FIRST Trial warrants close attention because, like a Rorschach test, different people see different things in the data. For instance, take the finding that neither group caused significantly more or less harm to patients, though shorter duty hours were associated with more handoffs of patient responsibility. Taken at face value, these results neither clearly bolster nor contradict the proposed duty hour changes; yet they are used both to support and to undermine the tentative changes to ACGME policy. The study’s first author told NPR that, “We believe the trial results say it’s safe to provide some flexibility in duty hours.” On the other hand, an editorial published in NEJM alongside the study argues that, “The FIRST Trial effectively debunks concerns that patients will suffer as a result of increased handoffs and breaks in the continuity of care.” Is there a right conclusion to draw from the study? Continue reading
By Brad Segal
Amidst a roller-coaster presidential campaign, on November 4th the Accreditation Council for Graduate Medical Education (ACGME) presented a plan to change resident duty hour limits. That the specifics have largely flown under the radar is perhaps unsurprising given the current news cycle. But the understated revision to “Resident Duty Hours in The Learning and Working Environment” is the latest twist in a relatively contentious issue within medical education (see 2016 NEJM op-ed vs. responses). The proposal is currently undergoing its requisite comment period until December 19. This week I’ll briefly lay out the history of duty hours to help explain the significance of ACGME’s proposal, and I will then go through general empirical arguments for and against such a change. My next post will examine how well these arguments hold in light of the most recent data available.
Today the physician’s training experience immediately following medical school is no longer the whir of dangerous sleep deprivation lampooned in The House of God. Amid mounting evidence that resident sleep deprivation caused medical errors, and under threat of federal legislation, in 2003 the ACGME first introduced national guidelines restricting resident work hours to 80 hours per week (averaged over 4 weeks), and capped residents at 30 hours of continuous in-house call. Then in 2009 the Institute of Medicine (IOM) released a 427-page report reviewing scientific evidence on resident work hours, sleep deprivation, and fatigue-related errors. The evidence overwhelmingly suggests that sleep deprivation significantly impairs most aspects of cognition. Hence the IOM ultimately recommended that residents not exceed 16 hours of continuous work before dedicated rest.
The ACGME subsequently modified duty hour guidelines in 2011 and limited first-year residents (‘interns’) to working 16-hour stretches. The reason ACGME’s most recent proposal is curious, though, is that it back-tracks on the 2011 intern duty-hour limits, raising their in-house cap to 28 hours. In response to this proposal a national advocacy group, Public Citizen, claimed it, “would expose residents, their patients and the general public to the risk of serious injury and death.” Continue reading
By Brad Segal
This past week I attended the International Neuroethics Society’s (INS) annual conference in San Diego, California. Neuroethics is a multidisciplinary field that grapples with the implications of neuroscience for—and from—medicine, law, philosophy, and the social sciences. One of the many excellent panels brought together scholars from each of these four disciplines to discuss the diverse approaches to the field. The panel featured Paul Appelbaum, Professor of Psychiatry at Columbia University; Tom Buller, Chair of Philosophy at Illinois State University; Jennifer Chandler, Professor of Law at the University of Ottawa; and Ilina Singh, Professor of Neuroscience & Society at the University of Oxford.
The panel started by considering the importance of the “competing identities” present in the field of neuroethics. As moderator Eric Racine explained, right from the start, even the term ‘neuroethics’ suggests a tension. Consider the variety of research methodologies employed in the field. For instance, a scholar trained in philosophy might approach neuroscience from a conceptual and purely analytical basis, and yet a social scientist might research the same question by collecting empirical interview data. The interplay between empirical and theoretical work was a theme that defined the discussion.
A psychiatrist by training, Dr. Appelbaum spoke on the medical approach to the field. He argued that a focus on ethical issues in clinical psychiatry and neurology should be viewed as a part (but only a part) of neuroethics. Furthermore, medicine’s empirical approach to neuroethics is one (but not the only) way to advance thinking on neuroethical issues. Continue reading
By Brad Segal
The surging opioid epidemic is a threat to the nation’s public health. This year the CDC reported that mortality from drug overdose reached an all-time high, with the annual death toll more than doubling since 2000. Yet in the backdrop of this epidemic, the country also faces ongoing shortages of a different sort–too few organs for transplantation. Every day, approximately 22 people die while waiting for an organ to become available. To some it is not a surprise–or at least not inconceivable–that the fastest-growing source of organ donors is being fueled by the national spike in drug overdoses. This first post will help delineate the scope and scale of the situation. My follow-up will discuss the ethical considerations and ramifications for public policy.
To start: the numbers. The Organ Procurement and Transplantation Network (OPTN) makes domestic transplant data publicly available online, currently extending from 1994 to September 30th, 2016. Two decades ago, 29 organ donors died from a drug overdose.* In just the first nine months of this year, that number has climbed to 888 donors. Even with a quarter of the calendar year left to be counted, 2016 has already surpassed the previous record, set in 2015 (Figure 1).
One might question whether this trend is an illusion–perhaps a rise in the incidence of donors who had overdosed merely reflects an increasing number of transplants overall. But the data suggest the opposite. Also plotted in Figure 1, the percentage of total organ donors who died from overdose (maroon diamonds, right-hand Y-axis) has not remained constant–instead, it has steadily increased. Two decades ago, overdose caused the deaths of 0.6% of all organ donors; this year, it is the cause of death among 12.0% of organ donors nationwide. The rising percentage means not only that more victims of drug overdose are donating organs, but also that the pool of organ donors is increasingly composed of such individuals. Continue reading
According to the Centers for Disease Control and Prevention, more than 6.4 million US children 4-17 years old have been diagnosed with attention-deficit/hyperactivity disorder (ADHD). The percentage of US children diagnosed with ADHD has increased by 3-5 percent per year since the 1990s. Relatedly, the percentage of children in this age group taking ADHD medication also has increased by about 7 percent per year from 2007-2008 to 2011-2012.
In response, some state Medicaid programs have implemented policies to manage the use of ADHD medications and guide physicians toward best practices for ADHD treatment in children. These policies include prescription medication prior authorization requirements that restrict approvals to patients above a certain age, or require additional provider involvement before approval for payment is granted.
In a new article published this afternoon in MMWR, CDC researchers compared Medicaid and employer-sponsored insurance (ESI) claims for “psychological services” (the procedure code category that includes behavior therapy) and ADHD medication among children aged 2–5 years receiving clinical care for ADHD.
The article references a newly released LawAtlas map that examines features of state Medicaid prior authorization policies that pertain to pediatric ADHD medication treatment, including applicable ages, medication types, and criteria for approval.
My last post was a summary of the NAM’s Recommendations on Mitochondrial Replacement Therapy (MRT). Now here is my take on the report. But keep in mind the report was just released and all I could give it was a quick read, so these are really more like initial impressions: Continue reading
By Benjamin E. Berkman, JD, MPH
While promising to eventually revolutionize medicine, the capacity to cheaply and quickly generate an individual’s entire genome has not been without controversy. Producing information on this scale seems to violate some of the accepted norms governing how to practice medicine, norms that evolved during the early years of genetic testing when a targeted paradigm dominated. One of these widely accepted norms was that an individual had a right not to know (“RNTK”) genetic information about him or herself. Prompted by evolving professional practice guidelines, the RNTK has become a highly controversial topic. The medical community and bioethicists are actively engaged in a contentious debate about the extent to which individual choice should play a role (if at all) in determining which clinically significant findings are returned.
In a recent paper published in Genetics in Medicine, my coauthors and I provide some data that illuminates this and other issues. Our survey of 800 IRB members and staff about their views on incidental findings demonstrates how malleable views on the RNTK can be. Respondents were first asked about the RNTK in the abstract: “Do research participants have a right not to know their own genetic information? In other words, would it be acceptable for them to choose not to receive any GIFs?” An overwhelming majority (96%) endorsed the right not-to-know. But when asked about a case where a specific patient has chosen not to receive clinically beneficial incidental findings, only 35% indicated that the individual’s RNTK should definitely be respected, and 28% said that they would probably honor the request not to know. Interestingly, the percentage of respondents who indicated that they do not support the RNTK increased from 2% at baseline to 26% when presented with the specific case. The percentage of people who are unsure similarly jumps, from 1% to 11%.
A recent study in JAMA by Dorner, Jacobs, and Sommers released some good and bad news about provider coverage under the Affordable Care Act (ACA). The study examined whether health plans offered on the federal marketplace in 34 states offered a sufficient number of physicians in nine specialties. For each plan, the authors searched for the number of providers covered under each specialty in each state’s most populous county. Plans without specialist physicians were labeled specialist-deficient plans. The good: roughly 90% of the plans covered more than five providers in each specialty. The bad: 19 plans were specialist-deficient, and 9 of the 34 states had at least one specialist-deficient plan. Endocrinology, psychiatry, and rheumatology were the most commonly excluded specialties.
Here’s where it gets ugly.
Excluding certain specialists from coverage can be a way for insurers to discriminate against individuals with certain conditions by effectively excluding them from their plans. By excluding rheumatologists, insurers may deter enrollment by individuals with rheumatoid arthritis; by excluding endocrinologists, insurers may deter enrollment by individuals with diabetes. Individuals with chronic conditions need to see specialists more frequently than healthier adults, and how easily a patient with chronic conditions can see a specialist can affect his or her health care outcomes.
The study adds to the growing body of empirical research showing that even after the ACA, insurers may be structuring their plans to potentially discriminate against individuals with significant chronic conditions. In January, Jacobs and Sommers published a study showing that some plans were discriminating against patients with HIV/AIDS through adverse tiering by placing all branded and generic HIV/AIDS drugs on the highest formulary tier. Another study found that 86% of plans place all medicines in at least one class on the highest cost-sharing tier. These studies show that despite being on a health plan, individuals with certain chronic conditions may still have trouble accessing essential treatments and services. Continue reading
As Michelle noted, the Notice of Proposed Rule Making (NPRM) on human subjects research is out after a long delay. For my (and many Bill of Health bloggers’) view about its predecessor ANPRM, you can check out our 2014 book, Human Subjects Research Regulation: Perspectives on the Future.
Here is HHS’s own summary of what has changed and what it thinks is most important:
The U.S. Department of Health and Human Services and fifteen other Federal Departments and Agencies have announced proposed revisions to modernize, strengthen, and make more effective the Federal Policy for the Protection of Human Subjects that was promulgated as a Common Rule in 1991. A Notice of Proposed Rulemaking (NPRM) was put on public display on September 2, 2015 by the Office of the Federal Register. The NPRM seeks comment on proposals to better protect human subjects involved in research, while facilitating valuable research and reducing burden, delay, and ambiguity for investigators. It is expected that the NPRM will be published in the Federal Register on September 8, 2015. There are plans to release several webinars that will explain the changes proposed in the NPRM, and a town hall meeting is planned to be held in Washington, D.C. in October. Continue reading
UPDATE: Plaintiffs have filed an appeal in the U.S. Court of Appeals for the Eleventh Circuit. Their brief is due on October 19.
The district court has granted summary judgment (opinion pdf) for all remaining defendants as to all of plaintiffs’ remaining claims in Looney v. Moore, the lawsuit arising out of the controversial SUPPORT trial, which I last discussed here. This therefore ends the lawsuit, pending possible appeal by the plaintiffs.
Plaintiff infants include two who were randomized to the low oxygen group and survived, but suffer from “neurological issues,” and one who was randomized to the high oxygen group who developed ROP, but not permanent vision loss. In their Fifth Amended Complaint (pdf), plaintiffs alleged negligence, lack of informed consent, breach of fiduciary duty, and product liability claims against, variously, individual IRB members, the P.I., and the pulse oximeter manufacturer. What unites all of these claims is the burden on plaintiffs to show (among other things) that their injuries were caused by their participation in the trial. Continue reading
By Amanda C. Pustilnik, Professor of Law, University of Maryland Carey School of Law; Faculty Member, Center for Law, Brain & Behavior, Massachusetts General Hospital
What should the future look like for brain-based pain measurement in the law? This is the question tackled by our concluding three contributors: Diane Hoffmann, Henry (“Hank”) T. Greely, and Frank Pasquale. Professors Hoffmann and Greely are among the founders of the fields of health law and law & biosciences. Both discuss parallels to the development of DNA evidence in court and the need for similar standards, practices, and ethical frameworks in the brain imaging area. Professor Pasquale is an innovative younger scholar who brings great theoretical depth, as well as technological savvy, to these fields. Their perspectives on the use of brain imaging in legal settings, particularly for pain measurement, illuminate different facets of this issue.
This post describes their provocative contributions – which stake out different visions but also reinforce each other. The post also highlights the forthcoming conference-based book with Oxford University Press and introduces future directions for the use of the brain imaging of pain – in areas as diverse as the law of torture, the death penalty, drug policy, criminal law, and animal rights and suffering. Please read on!
By Henry T. Greely, Edelman Johnson Professor of Law, Stanford Law School; Professor (by courtesy) of Genetics, Stanford Medical School; Director, Program in Neuroscience & Society, Stanford University
The recent meeting at Harvard on neuroimaging, pain, and the law demonstrated powerfully that the offering of neuroimaging as evidence of pain, in court and in administrative hearings, is growing closer. The science for identifying a likely pattern of neuroimaging results strongly associated with the subjective sensation of pain keeps improving. Two companies recently were founded to provide electro-encephalography (EEG) evidence of the existence of pain. And at least one neuroscientist has been providing expert testimony that a particular neuroimaging signal detected using functional magnetic resonance imaging (fMRI) is useful evidence of the existence of pain, as discussed recently in Nature.
If nothing more is done, neuroimaging evidence of pain will be offered, accepted, rejected, relied upon, and discounted in the normal, chaotic course of the law’s evolution. A “good” result, permitting appropriate use of some valid neuroimaging evidence and rejecting inappropriate use of other such evidence, might come about. Or it might not.
We can do better than this existing non-system. And the time to start planning a better approach is now. (Read on for more on how)
By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law
Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago, and my scholarly agenda has, since then, focused mainly on algorithms, automation, and health IT. But there is an important common thread: The unintended consequences of technology. With that in mind, I want to discuss a context where the measurement of pain (algometry?) might be further algorithmatized or systematized, and if so, who will be helped, who will be harmed, and what individual and social phenomena we may miss as we focus on new and compelling pictures.
Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clearly medically determinable cause might help deserving claimants win recognition of their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy grounds of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than it is a way of finding and helping those viewed as deserving.
Of Disability, Malingering, and Interpersonal Comparisons of Disutility (read on for more)
I have an op-ed with Christopher Chabris that appeared in this past Sunday’s New York Times. It focuses on one theme in my recent law review article on corporate experimentation: the A/B illusion. Despite the rather provocative headline that the Times gave it, our basic argument, made as clearly as we could in 800 words, is this: sometimes, it is more ethical to conduct a nonconsensual A/B experiment than to simply go with one’s intuition and impose A on everyone. Our contrary tendency to see experiments—but not untested innovations foisted on us by powerful people—as involving risk, uncertainty, and power asymmetries is what I call the A/B illusion in my law review article. Here is how the op-ed begins:
Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent? The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise. But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.
After the jump, some clarifications and further thoughts.