Amidst a roller-coaster presidential campaign, on November 4th the Accreditation Council for Graduate Medical Education (ACGME) presented a plan to change resident duty hour limits. That the specifics have largely flown under the radar is perhaps unsurprising given the current news cycle. But the understated revision to “Resident Duty Hours in The Learning and Working Environment” is the latest twist in a relatively contentious issue within medical education (see 2016 NEJM op-ed vs. responses). The proposal is currently undergoing its requisite comment period, which runs until December 19. This week I’ll briefly lay out the history of duty hours to help explain the significance of ACGME’s proposal, and I will then go through general empirical arguments for and against such a change. My next post will examine how well these arguments hold in light of the most recent data available.
Today the physician’s training experience immediately following medical school is no longer the whir of dangerous sleep deprivation lampooned in The House of God. Amid mounting evidence that resident sleep deprivation caused medical errors, and under threat of federal legislation, in 2003 the ACGME first introduced national guidelines restricting resident work hours to 80 hours per week (averaged over four weeks) and capping continuous in-house call at 30 hours. Then in 2009 the Institute of Medicine (IOM) released a 427-page report reviewing the scientific evidence on resident work hours, sleep deprivation, and fatigue-related errors. The evidence overwhelmingly suggested that sleep deprivation significantly impairs most aspects of cognition. Hence the IOM ultimately recommended that residents not exceed 16 hours of continuous work before dedicated rest.
The ACGME subsequently modified duty hour guidelines in 2011 and limited first-year residents (‘interns’) to working 16-hour stretches. The reason ACGME’s most recent proposal is curious, though, is that it backtracks on the 2011 intern duty-hour limits, raising their in-house cap to 28 hours. In response to this proposal, the national advocacy group Public Citizen claimed it “would expose residents, their patients and the general public to the risk of serious injury and death.”
This past week I attended the International Neuroethics Society’s (INS) annual conference in San Diego, California. Neuroethics is a multidisciplinary field that grapples with the implications of neuroscience for—and from—medicine, law, philosophy, and the social sciences. One of the many excellent panels brought together scholars from each of these four disciplines to discuss the diverse approaches to the field. The panel featured Paul Appelbaum, Professor of Psychiatry at Columbia University; Tom Buller, Chair of Philosophy at Illinois State University; Jennifer Chandler, Professor of Law at the University of Ottawa; and Ilina Singh, Professor of Neuroscience & Society at the University of Oxford.
The panel started by considering the importance of the “competing identities” present in the field of neuroethics. As moderator Eric Racine explained, right from the start, even the term ‘neuroethics’ suggests a tension. Consider the variety of research methodologies employed in the field. For instance, a scholar trained in philosophy might approach neuroscience from a conceptual and purely analytical basis, and yet a social scientist might research the same question by collecting empirical interview data. The interplay between empirical and theoretical work was a theme that defined the discussion.
A psychiatrist by training, Dr. Appelbaum spoke on the medical approach to the field. He argued that a focus on ethical issues in clinical psychiatry and neurology should be viewed as a part (but only a part) of neuroethics. Furthermore, medicine’s empirical approach to neuroethics is one (but not the only) way to advance thinking on neuroethical issues.
The surging opioid epidemic is a threat to the nation’s public health. This year the CDC reported that mortality from drug overdose reached an all-time high, with the annual death toll more than doubling since 2000. Yet in the backdrop of this epidemic, the country also faces an ongoing shortage of a different sort: too few organs for transplantation. Every day, approximately 22 people die while waiting for an organ to become available. To some it is not surprising, or at least not inconceivable, that the national spike in drug overdoses has become the fastest-growing source of organ donors. This first post will help delineate the scope and scale of the situation. My follow-up will discuss the ethical considerations and ramifications for public policy.
To start: the numbers. The Organ Procurement and Transplantation Network (OPTN) makes domestic transplant data publicly available online, with data currently extending from 1994 through September 30, 2016. Two decades ago, 29 organ donors died from a drug overdose.* In just the first nine months of this year, that number has climbed to 888 donors. Even with a quarter of the calendar year left to be counted, 2016 has already surpassed the previous record, set in 2015 (Figure 1).
One might question whether this trend is an illusion: perhaps a rise in the incidence of donors who had overdosed simply reflects an increasing number of transplants. But the data suggest otherwise. Also plotted in Figure 1, the percentage of total organ donors who died from overdose (maroon diamonds, right Y-axis) has not remained constant; instead, it has steadily increased. Two decades ago, overdose caused the deaths of 0.6% of all organ donors; this year, it is the cause of death for 12.0% of organ donors nationwide. The rising percentage means not only that more victims of drug overdose are donating organs, but that the pool of organ donors is increasingly composed of such individuals.
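The distinction between the count and the percentage is easy to check by hand. A minimal sketch of the calculation behind Figure 1’s maroon diamonds, using the two data points quoted above (the total-donor figures here are approximate back-calculations from the stated percentages, not official OPTN numbers):

```python
# Percentage of all organ donors whose cause of death was drug overdose,
# for the two years quoted in the post. Total-donor counts are
# approximate, back-derived from the stated percentages.
records = {
    1994: {"overdose_donors": 29, "total_donors": 4843},
    2016: {"overdose_donors": 888, "total_donors": 7400},  # Jan-Sep only
}

for year, r in records.items():
    pct = 100 * r["overdose_donors"] / r["total_donors"]
    print(f"{year}: {pct:.1f}% of donors died of overdose")
```

The count alone could rise simply because transplants rose overall; only the ratio shows that overdose victims make up a growing share of the donor pool.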
According to the Centers for Disease Control and Prevention, more than 6.4 million US children 4-17 years old have been diagnosed with attention-deficit/hyperactivity disorder (ADHD). The percentage of US children diagnosed with ADHD has increased by 3-5 percent per year since the 1990s. Relatedly, the percentage of children in this age group taking ADHD medication also has increased by about 7 percent per year from 2007-2008 to 2011-2012.
In response, some state Medicaid programs have implemented policies to manage the use of ADHD medications and guide physicians toward best practices for ADHD treatment in children. These policies include prescription medication prior authorization requirements that restrict approvals to patients above a certain age, or require additional provider involvement before approval for payment is granted.
In a new article published this afternoon in MMWR, CDC researchers compared Medicaid and employer-sponsored insurance (ESI) claims for “psychological services” (the procedure code category that includes behavior therapy) and ADHD medication among children aged 2–5 years receiving clinical care for ADHD.
The article references a newly released LawAtlas map that examines features of state Medicaid prior authorization policies that pertain to pediatric ADHD medication treatment, including applicable ages, medication types, and criteria for approval.
States with Medicaid programs that have a policy that requires prior authorization for ADHD medications prescribed to children younger than 28 years old.
My last post was a summary of the NAM’s Recommendations on Mitochondrial Replacement Therapy (MRT). Now here is my take on the report. But keep in mind the report was just released and all I could give it was a quick read, so these are really more like initial impressions:
While promising to eventually revolutionize medicine, the capacity to cheaply and quickly generate an individual’s entire genome has not been without controversy. Producing information on this scale seems to violate some of the accepted norms governing how to practice medicine, norms that evolved during the early years of genetic testing when a targeted paradigm dominated. One of these widely accepted norms was that an individual had a right not to know (“RNTK”) genetic information about him or herself. Prompted by evolving professional practice guidelines, the RNTK has become a highly controversial topic. The medical community and bioethicists are actively engaged in a contentious debate about the extent to which individual choice should play a role (if at all) in determining which clinically significant findings are returned.
In a recent paper published in Genetics in Medicine, my coauthors and I provide some data that illuminate this and other issues. Our survey of 800 IRB members and staff about their views on incidental findings demonstrates how malleable views on the RNTK can be. Respondents were first asked about the RNTK in the abstract: “Do research participants have a right not to know their own genetic information? In other words, would it be acceptable for them to choose not to receive any GIFs?” An overwhelming majority (96%) endorsed the right not to know. But when asked about a case in which a specific patient has chosen not to receive clinically beneficial incidental findings, only 35% indicated that the individual’s RNTK should definitely be respected, and 28% said that they would probably honor the request not to know. Interestingly, the percentage of respondents who indicated that they do not support the RNTK increased from 2% at baseline to 26% when they were presented with the specific case. The percentage of people who were unsure similarly jumped, from 1% to 11%.
A recent study in JAMA by Dorner, Jacobs, and Sommers released some good and bad news about provider coverage under the Affordable Care Act (ACA). The study examined whether health plans offered on the federal marketplace in 34 states offered a sufficient number of physicians in nine specialties. For each plan, the authors searched for the number of providers covered under each specialty in each state’s most populous county. Plans without specialist physicians were labeled specialist-deficient plans. The good: roughly 90% of the plans covered more than five providers in each specialty. The bad: 19 plans were specialist-deficient, and 9 of the 34 states had at least one specialist-deficient plan. Endocrinology, psychiatry, and rheumatology were the most commonly excluded specialties.
Here’s where it gets ugly.
Excluding certain specialists from coverage can be a way for insurers to discriminate against individuals with certain conditions by keeping them out of their plans. By excluding rheumatologists, insurers may deter individuals with rheumatoid arthritis from enrolling; by excluding endocrinologists, insurers may deter individuals with diabetes. Individuals with chronic conditions need to see specialists more frequently than healthier adults, and how easily a patient with a chronic condition can see a specialist can affect their health care outcomes.
The study adds to the growing body of empirical research showing that even after the ACA, insurers may be structuring their plans to potentially discriminate against individuals with significant chronic conditions. In January, Jacobs and Sommers published a study showing that some plans were discriminating against patients with HIV/AIDS through adverse tiering by placing all branded and generic HIV/AIDS drugs on the highest formulary tier. Another study found that 86% of plans place all medicines in at least one class on the highest cost-sharing tier. These studies show that despite having a health plan, individuals with certain chronic conditions may still have trouble accessing essential treatments and services.
Here is HHS’s own summary of what has changed and what it thinks is most important:
The U.S. Department of Health and Human Services and fifteen other Federal Departments and Agencies have announced proposed revisions to modernize, strengthen, and make more effective the Federal Policy for the Protection of Human Subjects that was promulgated as a Common Rule in 1991. A Notice of Proposed Rulemaking (NPRM) was put on public display on September 2, 2015 by the Office of the Federal Register. The NPRM seeks comment on proposals to better protect human subjects involved in research, while facilitating valuable research and reducing burden, delay, and ambiguity for investigators. It is expected that the NPRM will be published in the Federal Register on September 8, 2015. There are plans to release several webinars that will explain the changes proposed in the NPRM, and a town hall meeting is planned to be held in Washington, D.C. in October.
UPDATE: Plaintiffs have filed an appeal in the U.S. Court of Appeals for the Eleventh Circuit. Their brief is due on October 19.
The district court has granted summary judgment (opinion pdf) for all remaining defendants as to all of plaintiffs’ remaining claims in Looney v. Moore, the lawsuit arising out of the controversial SUPPORT trial, which I last discussed here. This therefore ends the lawsuit, pending possible appeal by the plaintiffs.
Plaintiff infants include two who were randomized to the low oxygen group and survived, but suffer from “neurological issues,” and one who was randomized to the high oxygen group who developed ROP, but not permanent vision loss. In their Fifth Amended Complaint (pdf), plaintiffs alleged negligence, lack of informed consent, breach of fiduciary duty, and product liability claims against, variously, individual IRB members, the P.I., and the pulse oximeter manufacturer. What unites all of these claims is the burden on plaintiffs to show (among other things) that their injuries were caused by their participation in the trial.
What should the future look like for brain-based pain measurement in the law? This is the question tackled by our concluding three contributors: Diane Hoffmann, Henry (“Hank”) T. Greely, and Frank Pasquale. Professors Hoffmann and Greely are among the founders of the fields of health law and law & biosciences. Both discuss parallels to the development of DNA evidence in court and the need for similar standards, practices, and ethical frameworks in the brain imaging area. Professor Pasquale is an innovative younger scholar who brings great theoretical depth, as well as technological savvy, to these fields. Their perspectives on the use of brain imaging in legal settings, particularly for pain measurement, illuminate different facets of this issue.
This post describes their provocative contributions – which stake out different visions but also reinforce each other. The post also highlights the forthcoming conference-based book with Oxford University Press and introduces future directions for the use of the brain imaging of pain – in areas as diverse as the law of torture, the death penalty, drug policy, criminal law, and animal rights and suffering. Please read on!
The recent meeting at Harvard on neuroimaging, pain, and the law demonstrated powerfully that the offering of neuroimaging as evidence of pain, in court and in administrative hearings, is growing closer. The science for identifying a likely pattern of neuroimaging results strongly associated with the subjective sensation of pain keeps improving. Two companies recently were founded to provide electroencephalography (EEG) evidence of the existence of pain. And at least one neuroscientist has been providing expert testimony that a particular neuroimaging signal detected using functional magnetic resonance imaging (fMRI) is useful evidence of the existence of pain, as discussed recently in Nature.
If nothing more is done, neuroimaging evidence of pain will be offered, accepted, rejected, relied upon, and discounted in the normal, chaotic course of the law’s evolution. A “good” result, permitting appropriate use of some valid neuroimaging evidence and rejecting inappropriate use of other such evidence, might come about. Or it might not.
We can do better than this existing non-system. And the time to start planning a better approach is now.
By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law
Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago, and my scholarly agenda has, since then, focused mainly on algorithms, automation, and health IT. But there is an important common thread: The unintended consequences of technology. With that in mind, I want to discuss a context where the measurement of pain (algometry?) might be further algorithmatized or systematized, and if so, who will be helped, who will be harmed, and what individual and social phenomena we may miss as we focus on new and compelling pictures.
Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clearly medically determinable cause might help deserving claimants win recognition of their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy grounds of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than it is a means of finding and helping those viewed as deserving.
Of Disability, Malingering, and Interpersonal Comparisons of Disutility
I have an op-ed with Christopher Chabris that appeared in this past Sunday’s New York Times. It focuses on one theme in my recent law review article on corporate experimentation: the A/B illusion. Despite the rather provocative headline that the Times gave it, our basic argument, made as clearly as we could in 800 words, is this: sometimes, it is more ethical to conduct a nonconsensual A/B experiment than to simply go with one’s intuition and impose A on everyone. Our contrary tendency to see experiments—but not untested innovations foisted on us by powerful people—as involving risk, uncertainty, and power asymmetries is what I call the A/B illusion in my law review article. Here is how the op-ed begins:
Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent? The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise. But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.
After the jump, some clarifications and further thoughts.
On May 21, along with my frequent co-author Eli Adashi, I published an op-ed in the New York Times raising some questions about FDA’s proposed guidance recommending a ban on taking blood from any man who has had sex with another man in the past year, or in other words imposing a one-year celibacy requirement on gay men if they want to donate blood. This built on our critique last July in JAMA, wherein we argued that FDA’s then-lifetime ban on gay men and MSM donating blood was out of step with science and the practice of our peer countries, as well as potentially unconstitutional.
Thanks to our work, and a concerted effort by public health, medical, and gay rights groups, FDA has finally moved off of that prior policy and recognized that it was unjustified and discriminatory.
Just to put this in context: it took more than 30 years to convince FDA that it was problematic to impose a lifetime blood-donation ban on any man who had ever had sex with another man, even if both men have repeatedly tested negative for HIV, while imposing only a one-year ban on people who had sex with individuals known to be HIV positive or with sex workers. FDA is appropriately a conservative agency, but on the issue of the lifetime ban its resistance to reconsidering went beyond conservatism to the point of lunacy. [By the way, to be clear, I *love* FDA. I represented the agency while at DOJ and have a new book coming out about FDA in the fall. You can think highly of an agency but still think it has a bad track record on a particular issue. This is critique, not hater-ade.]
Well, with that background, one should not be so quick to assume that a move to a one-year ban — a de facto lifetime ban for any gay man who is sexually active, even one who is monogamously married with children — is the best policy. To put it bluntly, FDA’s refusal to change the lifetime ban for such a long period makes me skeptical that we should accept a “just trust us” line on its new, still-restrictive policy.
The question we raised in our op-ed was whether FDA had adequately justified retaining a one-year ban in light of the evidence from places like South Africa (with a much shorter deferral period) and Italy (which does individualized risk assessment instead of stigmatizing all gay men as high risk for disease).
A remarkable new “sting” of the “diet research-media complex” was just revealed. It tells us little we didn’t already know and has potentially caused a fair amount of damage, spread across millions of people. It does, however, offer an opportunity to explore the importance of prospective group review of non-consensual human subjects research—and the limits of IRBs applying the Common Rule in serving that function in contexts like this.
Journalist John Bohannon, two German reporters, a doctor and a statistician recruited 16 German subjects through Facebook into a three-week randomized controlled trial of diet and weight loss. One-third were told to follow a low-carb diet, one-third were told to cut carbs but add 1.5 ounces of dark chocolate (about 230 calories) per day, and one-third served as control subjects and were told to make no changes to their current diet. They were all given questionnaires and blood tests in advance to ensure they didn’t have diabetes, eating disorders, or other conditions that would make the study dangerous for them, and these tests were repeated after the study. They were each paid 150 Euros (~$163) for their trouble.
But it turns out that Bohannon, the good doctor (who had written a book about dietary pseudoscience), and their colleagues were not at all interested in studying diet. Instead, they wanted to show how easy it is for bad science to be published and reported by the media. The design of the diet trial was deliberately poor: it involved only a handful of subjects, had a poor balance of age and of men and women, and so on. But through the magic of p-hacking, they managed to produce several statistically significant results: eating chocolate accelerates weight loss and leads to healthier cholesterol levels and increased well-being.
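The arithmetic behind this kind of p-hacking is simple: measure many loosely related outcomes (weight, cholesterol, sleep quality, well-being, and so on), test each at the conventional p < 0.05 threshold, and report whichever comes up “significant.” A minimal sketch, assuming independent tests (the function name is mine, and real outcomes are correlated, so this is only an approximation):

```python
# Chance of at least one spurious p < 0.05 "finding" when k independent
# outcomes are tested on data containing no real effect at all.
def familywise_false_positive_rate(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 18):
    rate = familywise_false_positive_rate(k)
    print(f"{k:2d} outcomes tested -> {rate:.0%} chance of a false positive")
```

With around 18 measured outcomes, the chance of at least one spurious significant result is roughly 60 percent, which is why a tiny, multi-endpoint trial like this one can almost be counted on to produce a headline-friendly “effect.”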
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical.
I’ve mentioned on this blog before that I had a past life as a nurse. Therefore, I wanted to call attention to an important new study that has just come out in JAMA: Salary Differences Between Male and Female Registered Nurses in the United States. The study found that “[m]ale RNs outearned female RNs across settings, specialties, and positions.” On average, male nurses make $5,150 more per year than female colleagues in similar positions. This salary gap affects 2.5 million female RNs.
Ongoing identification of nursing as “women’s work” and the presence of gender bias in nursing can affect male nurses in different, seemingly contradictory ways. On the one hand, the 2000 National Sample Survey of Registered Nurses found that men leave nursing at a higher rate in their first four years of practice. Some have attributed that attrition to the harmful effects of gender bias. On the other hand, it has been observed that–unlike women who enter male-dominated professions–male nurses who enter this female-dominated profession typically encounter structural advantages that tend to enhance their careers.
There is a need for more nurses. According to the Bureau of Labor Statistics’s Employment Projections, the RN workforce is expected to grow to 3.24 million in 2022. That is a 19% increase. Nursing is a context that highlights how gender stereotyping hurts everyone–men who encounter discrimination, women who earn less than their male counterparts, and patients who benefit most when nursing recruits and retains excellent people.
I personally found nursing to be very rewarding. I hope this study motivates employers to scrutinize their pay structures but also to appreciate and address the broader effects of gender bias on the profession.
For centuries, researchers have studied multiple aspects of women’s reproduction. Research tells us when women are more likely to become pregnant, when infertility kicks in, and even offers significant insights into the psychological dimensions of pregnancy and mothering, from the dopamine release associated with breastfeeding to the potential for postnatal depression after birth. Perhaps for this reason, lawmakers and courts tend to focus on women’s environment and conduct during pregnancy as the space to promote fetal health and well-being, with an eye toward healthy child development.
Has anything been missing? Until recently, very limited attention has focused on paternity. Decades-old studies linking paternal factors to mental health conditions such as schizophrenia are valuable, but sadly overlooked. And recent research linking older paternal age to autism is just beginning to gain attention. Adding to this discourse and carving out unique pathways for understanding paternity is Professor Wendy Goldberg of the University of California, Irvine.
In her book, Father Time: The Social Clock and the Timing of Fatherhood, she takes up overlooked phenomena involving fathering. For example, do men experience postnatal depression? It turns out that they do, and more: some expectant fathers experience neuroticism, and even jealousy. Goldberg studies different age groups to explain how the “social” clock for dads impacts their relationships with offspring and partners, as well as fathers’ mental health. The book adds to an important, growing literature.
A WSJ reporter just tipped me off to this news release by Facebook regarding the changes it has made in its research practices in response to public outrage over its emotional contagion experiment, published in PNAS. I had a brief window of time in which to respond with my comments, so these are rushed first reactions, but for what they’re worth, here’s what I told her (plus links and minus a couple of typos):
There’s a lot to like in this announcement. I’m delighted that, despite the backlash it received, Facebook will continue to publish at least some of their research in peer-reviewed journals and to post reprints of that research on their website, where everyone can benefit from it. It’s also encouraging that the company acknowledges the importance of user trust and that it has expressed a commitment to better communicate its research goals and results.
As for Facebook’s promise to subject future research to more extensive review by a wider and more senior group of people within the company, with an enhanced review process for research that concerns, say, minors or sensitive topics, it’s impossible to assess whether this is ethically good or bad without knowing a lot more about both the people who comprise the panel and their review process (including but not limited to Facebook’s policy on when, if ever, the default requirements of informed consent may be modified or waived). It’s tempting to conclude that more review is always better. But research ethics committees (IRBs) can and do make mistakes in both directions – by approving research that should not have gone forward and by unreasonably thwarting important research. Do Facebook’s law, privacy, and policy people have any training in research ethics? Is there any sort of appeal process for Facebook’s data scientists if the panel arbitrarily rejects their proposal? These questions are the tip of the iceberg of challenges that academic IRBs continue to face, and I fear that we are unthinkingly exporting an unhealthy system into the corporate world. Discussion is just beginning among academic scientists, corporate data scientists, and ethicists about the ethics of mass-scale digital experimentation (see, ahem, here and here). It’s theoretically possible, but unlikely, that in its new, but unclear, guidelines and review process Facebook has struck the optimal balance among the competing values and interests that this work involves.