My last post was a summary of the NAM’s Recommendations on Mitochondrial Replacement Therapy (MRT). Now here is my take on the report. But keep in mind the report was just released and all I could give it was a quick read, so these are really more like initial impressions.
By Benjamin E. Berkman, JD, MPH
While promising to eventually revolutionize medicine, the capacity to cheaply and quickly generate an individual’s entire genome has not been without controversy. Producing information on this scale seems to violate some of the accepted norms governing how to practice medicine, norms that evolved during the early years of genetic testing when a targeted paradigm dominated. One of these widely accepted norms was that an individual had a right not to know (“RNTK”) genetic information about himself or herself. Prompted by evolving professional practice guidelines, the RNTK has become a highly controversial topic. The medical community and bioethicists are actively engaged in a contentious debate about the extent to which individual choice should play a role (if at all) in determining which clinically significant findings are returned.
In a recent paper published in Genetics in Medicine, my coauthors and I provide some data that illuminate this and other issues. Our survey of 800 IRB members and staff about their views on incidental findings demonstrates how malleable views on the RNTK can be. Respondents were first asked about the RNTK in the abstract: “Do research participants have a right not to know their own genetic information? In other words, would it be acceptable for them to choose not to receive any GIFs?” An overwhelming majority (96%) endorsed the right not to know. But when asked about a case where a specific patient has chosen not to receive clinically beneficial incidental findings, only 35% indicated that the individual’s RNTK should definitely be respected, and 28% said that they would probably honor the request not to know. Interestingly, the percentage of respondents who indicated that they do not support the RNTK increased from 2% at baseline to 26% when presented with the specific case. The percentage of people who were unsure similarly jumped, from 1% to 11%.
A recent study in JAMA by Dorner, Jacobs, and Sommers released some good and bad news about provider coverage under the Affordable Care Act (ACA). The study examined whether health plans sold on the federal marketplace in 34 states offered a sufficient number of physicians in nine specialties. For each plan, the authors searched for the number of providers covered under each specialty in each state’s most populous county. Plans without specialist physicians were labeled specialist-deficient plans. The good: roughly 90% of the plans covered more than five providers in each specialty. The bad: 19 plans were specialist-deficient, and 9 of the 34 states had at least one specialist-deficient plan. Endocrinology, psychiatry, and rheumatology were the most commonly excluded specialties.
Here’s where it gets ugly.
Excluding certain specialists from coverage can be a way for insurers to discriminate against individuals with certain conditions by keeping them out of their plans. By excluding rheumatologists, insurers may deter individuals with rheumatoid arthritis from enrolling; by excluding endocrinologists, they may deter individuals with diabetes. Individuals with chronic conditions need to see specialists more frequently than healthier adults, and how easily a patient with a chronic condition can see a specialist can affect his or her health outcomes.
The study adds to the growing body of empirical research showing that even after the ACA, insurers may be structuring their plans to potentially discriminate against individuals with significant chronic conditions. In January, Jacobs and Sommers published a study showing that some plans were discriminating against patients with HIV/AIDS through adverse tiering by placing all branded and generic HIV/AIDS drugs on the highest formulary tier. Another study found that 86% of plans place all medicines in at least one class on the highest cost-sharing tier. These studies show that despite being enrolled in a health plan, individuals with certain chronic conditions may still have trouble accessing essential treatments and services.
As Michelle noted, the Notice of Proposed Rule Making (NPRM) on human subjects research is out after a long delay. For my (and many Bill of Health bloggers’) view about its predecessor ANPRM, you can check out our 2014 book, Human Subjects Research Regulation: Perspectives on the Future.
Here is HHS’s own summary of what has changed and what it thinks is most important:
The U.S. Department of Health and Human Services and fifteen other Federal Departments and Agencies have announced proposed revisions to modernize, strengthen, and make more effective the Federal Policy for the Protection of Human Subjects that was promulgated as a Common Rule in 1991. A Notice of Proposed Rulemaking (NPRM) was put on public display on September 2, 2015 by the Office of the Federal Register. The NPRM seeks comment on proposals to better protect human subjects involved in research, while facilitating valuable research and reducing burden, delay, and ambiguity for investigators. It is expected that the NPRM will be published in the Federal Register on September 8, 2015. There are plans to release several webinars that will explain the changes proposed in the NPRM, and a town hall meeting is planned to be held in Washington, D.C. in October.
UPDATE: Plaintiffs have filed an appeal in the U.S. Court of Appeals for the Eleventh Circuit. Their brief is due on October 19.
The district court has granted summary judgment (opinion pdf) for all remaining defendants as to all of plaintiffs’ remaining claims in Looney v. Moore, the lawsuit arising out of the controversial SUPPORT trial, which I last discussed here. This therefore ends the lawsuit, pending possible appeal by the plaintiffs.
Plaintiff infants include two who were randomized to the low oxygen group and survived, but suffer from “neurological issues,” and one who was randomized to the high oxygen group who developed ROP, but not permanent vision loss. In their Fifth Amended Complaint (pdf), plaintiffs alleged negligence, lack of informed consent, breach of fiduciary duty, and product liability claims against, variously, individual IRB members, the P.I., and the pulse oximeter manufacturer. What unites all of these claims is the burden on plaintiffs to show (among other things) that their injuries were caused by their participation in the trial.
By Amanda C. Pustilnik, Professor of Law, University of Maryland Carey School of Law; Faculty Member, Center for Law, Brain & Behavior, Massachusetts General Hospital
What should the future look like for brain-based pain measurement in the law? This is the question tackled by our concluding three contributors: Diane Hoffmann, Henry (“Hank”) T. Greely, and Frank Pasquale. Professors Hoffmann and Greely are among the founders of the fields of health law and law & biosciences. Both discuss parallels to the development of DNA evidence in court and the need for similar standards, practices, and ethical frameworks in the brain imaging area. Professor Pasquale is an innovative younger scholar who brings great theoretical depth, as well as technological savvy, to these fields. Their perspectives on the use of brain imaging in legal settings, particularly for pain measurement, illuminate different facets of this issue.
This post describes their provocative contributions – which stake out different visions but also reinforce each other. The post also highlights the forthcoming conference-based book with Oxford University Press and introduces future directions for the use of the brain imaging of pain – in areas as diverse as the law of torture, the death penalty, drug policy, criminal law, and animal rights and suffering. Please read on!
By Henry T. Greely, Edelman Johnson Professor of Law, Stanford Law School; Professor (by courtesy) of Genetics, Stanford Medical School; Director, Program in Neuroscience & Society, Stanford University
The recent meeting at Harvard on neuroimaging, pain, and the law demonstrated powerfully that the offering of neuroimaging as evidence of pain, in court and in administrative hearings, is growing closer. The science for identifying a likely pattern of neuroimaging results strongly associated with the subjective sensation of pain keeps improving. Two companies were recently founded to provide electroencephalography (EEG) evidence of the existence of pain. And at least one neuroscientist has been providing expert testimony that a particular neuroimaging signal detected using functional magnetic resonance imaging (fMRI) is useful evidence of the existence of pain, as discussed recently in Nature.
If nothing more is done, neuroimaging evidence of pain will be offered, accepted, rejected, relied upon, and discounted in the normal, chaotic course of the law’s evolution. A “good” result, permitting appropriate use of some valid neuroimaging evidence and rejecting inappropriate use of other such evidence, might come about. Or it might not.
We can do better than this existing non-system. And the time to start planning a better approach is now. (Read on for more on how)
By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law
Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago, and my scholarly agenda has since focused mainly on algorithms, automation, and health IT. But there is an important common thread: the unintended consequences of technology. With that in mind, I want to discuss a context where the measurement of pain (algometry?) might be further algorithmized or systematized, and to ask, if it is, who will be helped, who will be harmed, and what individual and social phenomena we may miss as we focus on new and compelling pictures.
Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clearly medically determinable cause might help deserving claimants win recognition for their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy grounds of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than a step toward finding and helping those viewed as deserving.
Of Disability, Malingering, and Interpersonal Comparisons of Disutility (read on for more)
I have an op-ed with Christopher Chabris that appeared in this past Sunday’s New York Times. It focuses on one theme in my recent law review article on corporate experimentation: the A/B illusion. Despite the rather provocative headline that the Times gave it, our basic argument, made as clearly as we could in 800 words, is this: sometimes, it is more ethical to conduct a nonconsensual A/B experiment than to simply go with one’s intuition and impose A on everyone. Our contrary tendency to see experiments—but not untested innovations foisted on us by powerful people—as involving risk, uncertainty, and power asymmetries is what I call the A/B illusion in my law review article. Here is how the op-ed begins:
Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent? The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise. But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.
After the jump, some clarifications and further thoughts.
On May 21, along with my frequent co-author Eli Adashi, I published an op-ed in the New York Times raising some questions about FDA’s proposed guidance recommending a ban on taking the blood of any man who has had sex with another man in the past year, or in other words imposing a one-year celibacy requirement on gay men if they want to donate blood. This built on our critique last July in JAMA, wherein we argued that FDA’s then-lifetime ban on blood donation by gay men and other MSM was out of step with science and the practice of our peer countries, as well as potentially unconstitutional.
Thanks to our work, and a concerted effort by public health, medical, and gay rights groups, FDA has finally moved off of that prior policy and recognized that it was unjustified and discriminatory.
Just to put this in context: it took more than 30 years to convince FDA that it was problematic to impose a lifetime blood-donation ban on any man who ever had sex with another man, even if both had repeatedly tested negative for HIV, while it imposed only a one-year ban on people who had sex with individuals known to be HIV positive or with sex workers. FDA is appropriately a conservative agency, but on the issue of the lifetime ban its unwillingness to listen and reconsider went beyond conservatism to the point of lunacy. [By the way, to be clear, I *love* FDA. I represented them while at the DOJ and have a new book coming out about FDA in the fall. You can think highly of an agency but think it has a bad track record on an issue. This is critique, not hater-aide.]
Well, with that background, one should not be so quick to assume that a move to a one-year ban — a de facto lifetime ban for any gay man who is sexually active, even one who is monogamously married with children — is the best policy. To put it bluntly, FDA’s refusal to change a lifetime ban for such a long period makes me skeptical that we should accept a “just trust us” line on its new restrictive policy.
The question we raised in our op-ed was whether FDA had adequately justified retaining a one-year ban in light of the evidence from places like South Africa (which uses a much shorter deferral period) and Italy (which does individualized risk assessment instead of stigmatizing all gay men as high risk for disease).
Here is what FDA said with my analysis in bold:
A remarkable new “sting” of the “diet research-media complex” was just revealed. It tells us little we didn’t already know and has potentially caused a fair amount of damage, spread across millions of people. It does, however, offer an opportunity to explore the importance of prospective group review of non-consensual human subjects research—and the limits of IRBs applying the Common Rule in serving that function in contexts like this.
Journalist John Bohannon, two German reporters, a doctor and a statistician recruited 16 German subjects through Facebook into a three-week randomized controlled trial of diet and weight loss. One-third were told to follow a low-carb diet, one-third were told to cut carbs but add 1.5 ounces of dark chocolate (about 230 calories) per day, and one-third served as control subjects and were told to make no changes to their current diet. They were all given questionnaires and blood tests in advance to ensure they didn’t have diabetes, eating disorders, or other conditions that would make the study dangerous for them, and these tests were repeated after the study. They were each paid 150 Euros (~$163) for their trouble.
But it turns out that Bohannon, the good doctor (who had written a book about dietary pseudoscience), and their colleagues were not at all interested in studying diet. Instead, they wanted to show how easy it is for bad science to be published and reported by the media. The design of the diet trial was deliberately poor. It involved only a handful of subjects, had a poor balance of age and of men and women, and so on. But, through the magic of p-hacking, they managed to produce several statistically significant results: eating chocolate accelerates weight loss and leads to healthier cholesterol levels and increased well-being.
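The “magic of p-hacking” here is largely the multiple-comparisons problem: measure enough outcomes in a tiny sample and something will cross p < 0.05 by chance alone. A minimal sketch of the arithmetic, in Python; the figure of 18 measured outcomes is an assumption used for illustration, not something stated above:

```python
import random


def prob_false_positive(n_outcomes, alpha=0.05):
    """Analytic chance that at least one of n independent outcome
    measures crosses the alpha threshold purely by luck."""
    return 1 - (1 - alpha) ** n_outcomes


def simulate(n_outcomes, alpha=0.05, trials=50_000, seed=1):
    """Monte Carlo check: under the null hypothesis, p-values are
    uniform on [0, 1); count trials where any outcome looks
    'significant'."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_outcomes))
        for _ in range(trials)
    )
    return hits / trials


if __name__ == "__main__":
    # With 18 outcomes (an assumed, illustrative number), a spurious
    # "chocolate works" finding is more likely than not.
    print(f"analytic:  {prob_false_positive(18):.3f}")   # 0.603
    print(f"simulated: {simulate(18):.3f}")
```

The point, of course, is that a trial with a handful of subjects and many measured endpoints is almost guaranteed to yield a headline-ready result unless the analysis corrects for multiple comparisons.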
I have a new law review article out, Two Cheers for Corporate Experimentation: The A/B Illusion and the Virtues of Data-Driven Innovation, arising out of last year’s terrific Silicon Flatirons annual tech/privacy conference at Colorado Law, the theme of which was “When Companies Study Their Customers.”
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical.
By Emily Largent
I’ve mentioned on this blog before that I had a past life as a nurse, so I wanted to call attention to an important new study that has just come out in JAMA: Salary Differences Between Male and Female Registered Nurses in the United States. The study found that “[m]ale RNs outearned female RNs across settings, specialties, and positions.” On average, male nurses make $5,150 more per year than female colleagues in similar positions. This salary gap affects 2.5 million female RNs.
There is speculation that a male nurse may be perceived as more expert simply because he is a man. This explanation is deeply ironic. Decades of legal barriers kept men out of the field, and historically, some nursing schools refused to admit men due to sex stereotypes that categorized caring as a feminine trait. The Supreme Court deemed this practice unconstitutional in Mississippi University for Women v. Hogan. While that decision came down in 1982, research suggests that men continue to face pervasive barriers in nursing school (e.g., hearing anti-male remarks from faculty).
Ongoing identification of nursing as “women’s work” and the presence of gender bias in nursing can affect male nurses in different, seemingly contradictory ways. On the one hand, the 2000 National Sample Survey of Registered Nurses found that men leave nursing at a higher rate than women during their first four years of practice. Some have attributed that attrition to the harmful effects of gender bias. On the other hand, it has been observed that–unlike women who enter male-dominated professions–male nurses who enter this female-dominated profession typically encounter structural advantages that tend to enhance their careers.
There is a need for more nurses. According to the Bureau of Labor Statistics’s Employment Projections, the RN workforce is expected to grow to 3.24 million by 2022, a 19% increase. Nursing is a context that highlights how gender stereotyping hurts everyone–men who encounter discrimination, women who earn less than their male counterparts, and patients, who benefit most when nursing recruits and retains excellent people.
I personally found nursing to be very rewarding. I hope this study motivates employers to scrutinize their pay structures but also to appreciate and address the broader effects of gender bias on the profession.
By Michele Goodwin
For centuries, researchers have studied multiple aspects of women’s reproduction. Research tells us when women are more likely to become pregnant, when infertility kicks in, and even offers significant insights into the psychological dimensions of pregnancy and mothering, from the dopamine release associated with breastfeeding to the potential for postnatal depression to occur after birth. Perhaps for this reason, lawmakers and courts tend to focus on women’s environment and conduct during pregnancy as the space to promote fetal health and well-being, with an eye toward healthy child development.
Has anything been missing? Until recently, very limited attention has focused on paternity. Decades-old studies linking paternal factors to mental health conditions such as schizophrenia are valuable, but sadly overlooked. And recent research linking advanced paternal age to autism is just beginning to gain attention. Adding to this discourse and carving out unique pathways for understanding paternity is Professor Wendy Goldberg at the University of California at Irvine.
In her book, Father Time: The Social Clock and the Timing of Fatherhood, she takes up overlooked phenomena involving fathering. For example, do men experience postnatal depression? It turns out that they do–and more. Some expectant fathers experience neuroticism, and even jealousy. Goldberg studies different age groups to explain how the “social” clock for dads affects their relationships with their offspring and partners, as well as fathers’ mental health. It adds to an important, growing literature.
A WSJ reporter just tipped me off to this news release by Facebook regarding the changes it has made in its research practices in response to public outrage about its emotional contagion experiment, published in PNAS. I had a brief window of time in which to respond with my comments, so these are rushed and a first reaction, but for what they’re worth, here’s what I told her (plus links and less a couple of typos):
There’s a lot to like in this announcement. I’m delighted that, despite the backlash it received, Facebook will continue to publish at least some of their research in peer-reviewed journals and to post reprints of that research on their website, where everyone can benefit from it. It’s also encouraging that the company acknowledges the importance of user trust and that it has expressed a commitment to better communicate its research goals and results.
As for Facebook’s promise to subject future research to more extensive review by a wider and more senior group of people within the company, with an enhanced review process for research that concerns, say, minors or sensitive topics, it’s impossible to assess whether this is ethically good or bad without knowing a lot more about both the people who comprise the panel and their review process (including but not limited to Facebook’s policy on when, if ever, the default requirements of informed consent may be modified or waived). It’s tempting to conclude that more review is always better. But research ethics committees (IRBs) can and do make mistakes in both directions – by approving research that should not have gone forward and by unreasonably thwarting important research. Do Facebook’s law, privacy, and policy people have any training in research ethics? Is there any sort of appeal process for Facebook’s data scientists if the panel arbitrarily rejects their proposal? These are the tip of the iceberg of challenges that the academic IRBs continue to face, and I fear that we are unthinkingly exporting an unhealthy system into the corporate world. Discussion is just beginning among academic scientists, corporate data scientists, and ethicists about the ethics of mass-scale digital experimentation (see, ahem, here and here). It’s theoretically possible, but unlikely, that in its new, but unclear, guidelines and review process Facebook has struck the optimal balance among the competing values and interests that this work involves.
Another stop on my fall Facebook/OKCupid tour: on October 10, I’ll be participating on a panel (previewed in the NYT here) on “Experimentation and Ethical Practice,” along with Harvard Law’s Jonathan Zittrain, Google chief economist Hal Varian, my fellow PersonalGenomes.org board member and start-up investor Esther Dyson, and my friend and Maryland Law prof Leslie Meltzer Henry.
The panel will be moderated by Sinan Aral of the MIT Sloan School of Management, who is also one of the organizers of a two-day Conference on Digital Experimentation (CODE), of which the panel is a part. The conference, which brings together academic researchers and data scientists from Google, Microsoft, and, yes, Facebook, may be of interest to some of our social scientist readers. (I’m told registration space is very limited, so “act soon,” as they say.) From the conference website:
The ability to rapidly deploy micro-level randomized experiments at population scale is, in our view, one of the most significant innovations in modern social science. As more and more social interactions, behaviors, decisions, opinions and transactions are digitized and mediated by online platforms, we can quickly answer nuanced causal questions about the role of social behavior in population-level outcomes such as health, voting, political mobilization, consumer demand, information sharing, product rating and opinion aggregation. When appropriately theorized and rigorously applied, randomized experiments are the gold standard of causal inference and a cornerstone of effective policy. But the scale and complexity of these experiments also create scientific and statistical challenges for design and inference. The purpose of the Conference on Digital Experimentation at MIT (CODE) is to bring together leading researchers conducting and analyzing large scale randomized experiments in digitally mediated social and economic environments, in various scientific disciplines including economics, computer science and sociology, in order to lay the foundation for ongoing relationships and to build a lasting multidisciplinary research community.
By Emily Largent
The Kaiser Family Foundation (KFF) recently conducted a survey of gay and bisexual men in the U.S. focusing on attitudes, knowledge, and experiences with HIV/AIDS. The survey results, released Thursday, can be found here. I was most interested in the finding that only a quarter of those surveyed know about PrEP (pre-exposure prophylaxis).
PrEP (brand name Truvada) is a combination of two medicines (tenofovir and emtricitabine) that, if taken consistently, has been shown to reduce the risk of HIV infection by up to 92% in people who are at high risk. The FDA approved an indication for the use of Truvada “in combination with safer sex practices for pre-exposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk” in 2012. The U.S. Public Health Service released the first comprehensive clinical practice guidelines in May of this year.
By Emily Largent
Although many lament that the ubiquity of smartphones has contributed to a recent decline in etiquette, a study published this week in Science suggests that smartphones’ ubiquity may make them a valuable–if surprising–tool for studying modern morality.
Most moral judgment experiments are lab-based and driven by hypotheticals. By contrast, this was a field experiment that focused on the moral judgments people make in their daily lives. The authors recruited 1,252 adults from the U.S. and Canada. Participants were contacted via text message five times each day over a three-day period. Each time, they were asked “whether they committed, were the target of, witnessed, or learned about a moral or immoral act within the past hour.” For each moral or immoral event, participants described via text what the event was about; provided situational context; and provided information about nine moral emotions (e.g., guilt and disgust). Political ideology and religiosity were assessed during an intake survey.
Participants reported a moral or immoral event on 28.9% of responses (n = 3,828). Moral and immoral events had similar overall frequencies. The authors found political ideology was reliably associated with the types of moral problems people identified. Liberals mentioned events related to Fairness/Unfairness, Liberty/Oppression, and Honesty/Dishonesty more frequently than did conservatives. By contrast, conservatives were more likely to mention events related to Loyalty/Disloyalty, Authority/Subversion, and Sanctity/Degradation.
In “Is it ethical to hire sherpas when climbing Mount Everest?,” a short piece out today in the British Medical Journal, I suggest that the question of whether it is ethical to pay sherpas to assume risks for the benefit of relatively affluent Western climbers is a variant of cases–common in medical ethics–where compensation and assumption of risk coincide. Consider offers of payment to research subjects, organ sales, and paid surrogacy. As a result, medical ethics can offer helpful frameworks for evaluating the acceptability of payment and, perhaps, suggest protections for sherpas as we look forward to the next climbing season on Everest.
I owe particular thanks to Nir Eyal, Harvard Medical School Center for Bioethics and Harvard School of Public Health Department of Global Health and Population; Richard Salisbury, University of Michigan (retired); and Paul Firth, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital.
Take a look and let me know what you think.