TOMORROW, 11/9! Book Launch: Specimen Science – Ethics and Policy Implications

Book Launch: Specimen Science: Ethics and Policy Implications
November 9, 2017 12:00 PM
Countway Library, Lahey Room
Harvard Medical School, Boston, MA

In September 2017, MIT Press will publish Specimen Science: Ethics and Policy Implications, co-edited by Holly Fernandez Lynch (outgoing Petrie-Flom Executive Director), Barbara Bierer, I. Glenn Cohen (Faculty Director), and Suzanne M. Rivera. This edited volume stems from a 2015 conference that brought together leading experts to address key ethical and policy issues raised by genetics and other research involving human biological materials, covering the entire trajectory from specimen source to new discovery. The conference was a collaboration between the Center for Child Health and Policy at Case Western Reserve University and University Hospitals Rainbow Babies & Children’s Hospital; the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School; the Multi-Regional Clinical Trials Center of Harvard and Brigham and Women’s Hospital; and Harvard Catalyst | The Harvard Clinical and Translational Science Center. It was supported by funding from the National Human Genome Research Institute and the Oswald DeN. Cammann Fund at Harvard University.

Continue reading


TODAY, 10/16 at 5 PM: Health Law Workshop with I. Glenn Cohen

October 16, 2017 5:00 PM
Hauser Hall, Room 104
Harvard Law School, 1575 Massachusetts Ave., Cambridge, MA

Presentation: “Cops, Docs, and Code: A Dialogue Between Big Data in Health Care and Predictive Policing” by I. Glenn Cohen & Harry S. Graver

This paper is not available for download. To request a copy in preparation for the workshop, please contact Jennifer Minnich at jminnich at law.harvard.edu.

I. Glenn Cohen is Professor of Law and Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

Glenn’s current research projects relate to health information technologies, mobile health, reproduction/reproductive technology, research ethics, rationing in law and medicine, health policy, FDA law, and medical tourism – the travel of patients who are residents of one country, the “home country,” to another country, the “destination country,” for medical treatment. His past work has included projects on end-of-life decision-making, FDA regulation, and commodification.


Emergent Medical Data

By Mason Marks

In this brief essay, I describe a new type of medical information that is not protected by existing privacy laws. I call it Emergent Medical Data (EMD) because at first glance, it has no relationship to your health. Companies can derive EMD from your seemingly benign Facebook posts, a list of videos you watched on YouTube, a credit card purchase, or the contents of your e-mail. A person reading the raw data would be unaware that it conveys any health information. Machine learning algorithms must first massage the data before its health-related properties emerge.
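
A minimal, entirely hypothetical sketch of the kind of inference involved: a weighted score over innocuous behavioral signals whose health meaning emerges only in aggregate. Every feature name and weight below is invented for illustration and reflects no real company’s model; real systems would use learned models over far richer data.

```python
# Hypothetical sketch: combining seemingly benign signals into a health inference.
# All feature names and weights are invented for illustration.

def emd_risk_score(features, weights):
    """Weighted sum of innocuous behavioral signals -> latent health score."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# None of these inputs is "medical" on its face.
weights = {
    "late_night_posts_per_week": 0.4,    # sleep-disruption proxy
    "antacid_purchases_per_month": 0.7,  # GI-symptom proxy
    "searches_for_fatigue": 0.9,         # symptom-adjacent queries
}

user = {
    "late_night_posts_per_week": 5,
    "antacid_purchases_per_month": 2,
    "searches_for_fatigue": 1,
}

print(round(emd_risk_score(user, weights), 2))  # 4.3
```

The point of the toy is structural: no single input reveals anything, but the aggregate functions as medical information, which is exactly why it escapes frameworks keyed to data collected in clinical settings.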

Unlike medical information obtained by healthcare providers, which is protected by the Health Insurance Portability and Accountability Act (HIPAA), EMD receives little to no legal protection. A common rationale for maintaining health data privacy is that it promotes full transparency between patients and physicians. HIPAA assures patients that the sensitive conversations they have with their doctors will remain confidential. The penalties for breaching confidentiality can be steep. In 2016, the Department of Health and Human Services recorded over $20 million in fines resulting from HIPAA violations. When companies mine for EMD, they are not bound by HIPAA or subject to these penalties.

Continue reading

Democratized Diagnostics: Why Medical Artificial Intelligence Needs Vetting

Pancreatic cancer is one of the deadliest illnesses out there.  The five-year survival rate of patients with the disease is only about 7%.  This is, in part, because few observable symptoms appear early enough for effective treatment.  As a result, by the time many patients are diagnosed the prognosis is poor.  There is an app, however, that is attempting to change that.  BiliScreen was developed by researchers at the University of Washington, and it is designed to help users identify pancreatic cancer early with an algorithm that analyzes selfies.  Users take photos of themselves, and the app’s artificially intelligent algorithm detects slight discolorations in the skin and eyes associated with early pancreatic cancer.
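
The underlying idea can be illustrated with a deliberately crude stand-in for that kind of color analysis (this is not BiliScreen’s actual algorithm): bilirubin buildup tints the sclera yellow, and yellowness can be approximated from pixel RGB values. The pixel values below are invented.

```python
# Toy jaundice heuristic: yellow pixels have high red and green relative to blue.
# A real system would segment the sclera, calibrate for lighting, and use a
# validated model; this only illustrates the color signal.

def yellowness(rgb):
    """Crude yellowness index in roughly [-1, 1]: mean of R and G minus B."""
    r, g, b = (c / 255.0 for c in rgb)
    return (r + g) / 2 - b

# Invented pixel samples: a white sclera vs. a yellow-tinged one.
white_sclera = (235, 235, 230)
tinged_sclera = (230, 210, 140)

print(yellowness(tinged_sclera) > yellowness(white_sclera))  # True
```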

Diagnostic apps like BiliScreen represent a huge step forward for preventive health care.  Imagine a world in which the vast majority of chronic diseases are caught early because each of us has the power to screen ourselves on a regular basis.  One of the big challenges for the modern primary care physician is convincing patients to get screened regularly for diseases that have relatively good prognoses when caught early.

I’ve written before about the possible impacts of artificial intelligence and algorithmic medicine, arguing that both medicine and law will have to adapt as machine-learning algorithms surpass physicians in their ability to diagnose and treat disease.  These pieces, however, primarily consider artificially intelligent algorithms licensed to and used by medical professionals in hospital or outpatient settings.  They are about the relationship between a doctor and the sophisticated tools in her diagnostic toolbox — and about how relying on algorithms could decrease the pressure physicians feel to order unnecessary tests and procedures to avoid malpractice liability.  There was an underlying assumption that these algorithms had already been evaluated and approved for use by the physician’s institution, and that the physician had experience using them.  BiliScreen does not fit this mold — the algorithm is not a piece of medical equipment used by hospitals, but rather part of an app that could be downloaded and used by anyone with a smartphone.  Accordingly, apps like BiliScreen fall into a category of “democratized” diagnostic algorithms. While this democratization has the potential to drastically improve preventive care, it also has the potential to undermine the financial sustainability of the U.S. health care system.

Continue reading

Voice Assistants, Health, and Ethical Design – Part II

By Cansu Canca

[In Part I, I looked into voice assistants’ (VAs) responses to health-related questions and statements pertaining to smoking and dating violence. Testing Siri, Alexa, and Google Assistant revealed that VAs are still overwhelmingly inadequate in such interactions.]

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? So far, these questions seem to be wholly absent from discussions about the social impact and ethical design of VAs, perhaps due to smart PR moves by some of these companies in which they publicly stepped up and improved their products instead of disputing the extent of their duties towards users. These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?

Continue reading

Voice Assistants, Health, and Ethical Design – Part I

By Cansu Canca

About a year ago, a study was published in JAMA evaluating voice assistants’ (VA) responses to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study shows that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents” (emphasis added).

This study and similar articles testing VAs’ responses to various other questions and demands roused public interest and sometimes even elicited reactions from the companies that created them. Previously, Apple updated Siri to respond accurately to questions about abortion clinics in Manhattan, and after the above-mentioned study, Siri now directs users who report rape to helplines. Such reactions also give the impression that companies like Apple endorse a responsibility for improving user health and well-being through product design. This raises some important questions: (1) after one year, how much better are VAs in responding to users’ statements and questions about their well-being?; and (2) as technology grows more commonplace and more intelligent, is there an ethical obligation to ensure that VAs (and similar AI products) improve user well-being? If there is, on whom does this responsibility fall?

Continue reading

Biobanks as Knowledge Institutions – Seminar 11/3 at the University of Copenhagen

Biobanks as Knowledge Institutions

“Global Genes – Local Concerns” Seminar with Prof. Michael Madison (University of Pittsburgh, U.S.)

Join us at the University of Copenhagen on November 3rd, 2017 to discuss the legal implications of “Biobanks as Knowledge Institutions” with Professor Michael Madison. 

Abstract

The presentation characterizes the material and immaterial attributes of biobanks as knowledge resources, and it frames the broader questions they pose as resource governance questions rather than as questions solely of law or of public policy. Biobanks are knowledge institutions. Professor Madison argues that despite the varied and diverse nature of biobanks today (indeed, precisely because of their diversity), their social and scientific importance dictates the need for a robust program of comparative research to identify shared features that contribute to their success (where they succeed) and features that likely contribute to problems or even failure. Both their importance and the associated governance challenges have only grown larger and more complex as biobanks meet the era of data science. In that regard, Professor Madison points to an emerging scholarly literature that focuses on governance challenges of material and data in biobank contexts, building on a knowledge commons governance framework. He concludes by suggesting directions for future work.

Continue reading

The 100th ‘The Week in Health Law’ Podcast

By Nicolas Terry and Frank Pasquale

Subscribe to TWIHL here!

This week, we celebrate Episode 100! Like Episode 1 from 2015, it’s just the two of us – revisiting topics from the first show, commenting on the current health policy landscape, and exploring past and present projects in health information law, privacy, data protection, and AI. Nic’s SSRN page is here, and Frank’s is here.

And we leave you with two of our recent public lectures: Nic Terry’s Rome Lecture (Appification to AI and Healthcare’s New Iron Triangle), and Frank Pasquale’s reflections on the political economy of health automation (inter alia).  Enjoy!

The Week in Health Law Podcast from Frank Pasquale and Nicolas Terry is a commuting-length discussion about some of the more thorny issues in Health Law & Policy. Subscribe at Apple Podcasts, listen at Stitcher Radio, Tunein, or Podbean, or search for The Week in Health Law in your favorite podcast app. Show notes and more are at TWIHL.com. If you have comments, an idea for a show, or a topic to discuss, you can find us on Twitter @nicolasterry @FrankPasquale @WeekInHealthLaw

Genomes on-line and the Health of Privacy

By Effy Vayena and Alessandro Blasimme

In January 1999, Scott McNealy, CEO of Sun Microsystems (now part of Oracle Corporation), announced that we should no longer be concerned with privacy, since consumers ‘have zero privacy anyway’ and should just ‘get over it.’ His argument – that in the era of information technology we have become unable to protect precisely what such technology relies on and delivers (information) – has met the full spectrum of imaginable reactions, from outrage to enthusiastic endorsement. Many different cures have been proposed to treat at least the symptoms of the disease caused by the loss of privacy. Yet there is little disagreement concerning the diagnosis itself: privacy does not enjoy an enviable state of health. Recent emphasis on big data and its inescapable presence has only made the prognosis dimmer for the once cherished ‘right to be let alone’ – as Samuel D. Warren and Justice Louis D. Brandeis famously defined privacy back in 1890.

Such a deteriorating outlook should sound especially alarming in the fields of healthcare and medical research. In such domains, professional norms of medical confidentiality have long ensured sufficient levels of privacy protection, accountability, and trust. Yet we are told that this may no longer be the case: sensitive, personal, health-related information – just like any other type of information – now comes in electronic formats, which makes it much more reachable than before, and increasingly difficult to protect. Imagine the consequences this may have in the case of genomic data – arguably one of the most sensitive forms of personal information. Should such information fall into the wrong hands, we may face harsh consequences ranging from discrimination to stigmatization, loss of insurance, and worse. To enjoy the right to genomic privacy, one has to be able to exercise some meaningful amount of control over who gets access to her genetic data, be adequately shielded from harms of the sort just mentioned, and yet retain the possibility of deciphering what’s written in her DNA for a variety of purposes – including, but not limited to, health-related ones. All this is undoubtedly demanding. All the more so now that we know how even apparently innocent and socially desirable uses, like genomic research employing anonymized DNA, are not immune from the threat of malicious re-identification.
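
The re-identification threat mentioned above can be sketched as a toy linkage attack: an “anonymized” dataset that retains quasi-identifiers (ZIP code, birth year, sex) is joined against a named public dataset. All records below are invented; well-known demonstrations have shown that attacks on this principle work at scale, since a few demographic attributes can uniquely identify most individuals.

```python
# Toy linkage attack (illustrative only, invented records): stripping names is
# not anonymization if quasi-identifiers survive and appear in a public dataset.

anonymized = [
    {"id": "A1", "zip": "02138", "birth_year": 1975, "sex": "F"},
    {"id": "A2", "zip": "02139", "birth_year": 1982, "sex": "M"},
]

public_registry = [
    {"name": "Jane Roe", "zip": "02138", "birth_year": 1975, "sex": "F"},
    {"name": "John Doe", "zip": "94105", "birth_year": 1990, "sex": "M"},
]

def reidentify(anon_rows, named_rows, keys=("zip", "birth_year", "sex")):
    """Join two datasets on shared quasi-identifiers."""
    matches = {}
    for a in anon_rows:
        for n in named_rows:
            if all(a[k] == n[k] for k in keys):
                matches[a["id"]] = n["name"]
    return matches

print(reidentify(anonymized, public_registry))  # {'A1': 'Jane Roe'}
```

Genomic data sharpens the threat, because DNA itself can serve as the linking key against any other dataset that contains it.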

In light of such considerations, one might be led to think that health privacy protection is a lost cause. In fact, one may go even further and argue that, all things considered, we shouldn’t worry too much about the decline of privacy. Having our sensitive data in a state of highly restricted accessibility, so the argument goes, prevents us from extracting medically valuable insight from those data and hinders medical discovery from which we may all benefit.

Continue reading

Petrie-Flom Center Welcomes New Executive Director!

We are thrilled to announce that Carmel Shachar, JD, MPH (HLS ’10, HSPH ’10) will join the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School as our next Executive Director. In this role, Carmel will be responsible for oversight of the Center’s sponsored research portfolio, event programming, fellowships, student engagement, development, and a range of other projects and collaborations.

“We are delighted that Carmel will be joining the Center,” said I. Glenn Cohen, Professor of Law and Faculty Director of the Petrie-Flom Center. “Throughout her career, Carmel has focused on designing, developing, and executing large health law and policy projects. This expertise and leadership will be a strong resource for the Center as it implements the vision for its second decade.”

Continue reading

DTC Genetic Risk Reports Back on Market

By Kayte Spector-Bagdady, JD, MBE & Michele Gornick, PhD, MA

On Thursday, April 6th, the FDA announced that it will allow the direct-to-consumer (DTC) genetic testing company 23andMe to market “Genetic Health Risk” (GHR) tests for 10 diseases or conditions including early-onset Alzheimer’s and Parkinson’s Diseases. This is in addition to 23andMe’s current offering of ancestry, wellness (e.g., lactose intolerance), trait (e.g., hair color), and autosomal recessive carrier screening (e.g., sickle cell anemia) test reports.

The decade since 23andMe entered the market has been a regulatory labyrinth of twists and turns. But what direction are we headed now?

The way we were

23andMe was a pioneer of the field, entering the DTC genetics market in 2007 with a product offering 13 health-related reports for $999. By December 2013, it was offering more than 250 reports, including carrier status, drug response, and over 100 GHRs. In response to a set of FDA Untitled Letters that went out in 2010, 23andMe filed for de novo 510(k) premarket clearance for some tests… but also concurrently marketed them in a national television and web campaign.

Continue reading

Drained Swamps and Quackery: Some Thoughts on Efficacy

By Seán Finan

“What makes drug development long and expensive is the need to prove, beyond statistical doubt, that your damn drug works”

Michael Gilman, Biotech Entrepreneur

2017 is going to be terrific. Tremendous, even. Things are going to change, big league.

The new President has promised fantastic reforms to the drug industry. He’s going to get the big players in the pharmaceutical industry around a table and negotiate huge price reductions. Of course, he’s not going to touch their bottom line. If anything, he’s going to improve it. Innovation is being choked by over-regulation and he’s going to remove burdensome FDA hurdles. But he has Executive Orders to give and walls to build, so he’s drafting in the very best people to help. We’re still waiting for those people to be officially named. Meanwhile, the media have had a month and a half of fun and speculation. The volume and variety of names being thrown around make it feel like a food fight at a Chinese buffet. One of those names is Peter Thiel.

Continue reading

Bill of Health Blog Symposium: How Patients Are Creating the Future of Medicine

We are pleased to host this symposium featuring commentary from participants in the University of Minnesota’s Consortium on Law and Values in Health, Environment & the Life Sciences event, “How Patients Are Creating Medicine’s Future: From Citizen Science to Precision Medicine.”  Below, Susan M. Wolf tees up the issues.  All posts in the series will be available here.

How Patients Are Creating the Future of Medicine: Roundtable at the University of Minnesota

By Susan M. Wolf, JD (Chair, Consortium on Law and Values in Health, Environment & the Life Sciences; McKnight Presidential Professor of Law, Medicine & Public Policy; Faegre Baker Daniels Professor of Law; Professor of Medicine, University of Minnesota)

Citizen science, the use of mobile phones and other wearables in research, patient-created medical inventions, and the major role of participant-patients in the “All of Us” Precision Medicine Initiative are just a few of the indicators that a major shift in biomedical research and innovation is under way. Increasingly, patients, families, and the public are in the driver’s seat, setting research priorities and the terms on which their data and biospecimens can be used. Pioneers such as Sharon Terry at Genetic Alliance and Matthew Might at NGLY1.org have been forging a pathway to genuine partnership linking patients and researchers. But the legal and ethical questions remain daunting. How should this research be overseen? Should the same rules apply as in more conventional, academically driven research? What limits should apply to parental use of unvalidated treatments on children affected by severe, rare disease? And should online patient communities be able to set their own rules for research?

In December 2016, the University of Minnesota’s Consortium on Law and Values in Health, Environment & the Life Sciences convened four thinkers with diverse academic and professional backgrounds to analyze these trends. This event, “How Patients Are Creating Medicine’s Future: From Citizen Science to Precision Medicine” was part of the Consortium’s Deinard Memorial Lecture Series on Law & Medicine, co-sponsored by the University’s Center for Bioethics and Joint Degree Program in Law, Science & Technology, with support from the Deinard family and law firm of Stinson Leonard Street. To see a video of the event, visit http://z.umn.edu/patientledvideo.

The four speakers offered diverse and provocative perspectives, each of which is highlighted in this series.

Citizen-Led Bioethics for the Age of Citizen Science: CRexit, BioEXIT, and Popular Bioethics Uprisings

By Barbara J. Evans, MS, PhD, JD, LLM (Alumnae College Professor of Law; Director, Center on Biotechnology & Law, University of Houston)

This post is part of a series on how patients are creating the future of medicine.  The introduction to the series is available here, and all posts in the series are available here.

The citizen science movement goes beyond merely letting people dabble in science projects. It involves giving regular people a voice in how science should be done. And citizen science calls for a new, citizen-led bioethics.

Twentieth-century bioethics was a top-down affair. Ethics experts and regulators set privacy and ethical standards to protect research subjects, who were portrayed as autonomous but too vulnerable and disorganized to protect themselves. The Common Rule’s informed consent right is basically an exit right: people can walk away from research if they dislike the study objectives or are uncomfortable with the privacy protections experts think are good for them. An exit right is not the same thing as having a voice with which to negotiate the purposes, terms, and conditions of research.

Continue reading

The Wearables Revolution: Personal Health Information as the Key to Precision Medicine

By Ernesto Ramirez, PhD (Director of Research & Development, Fitabase)

This post is part of a series on how patients are creating the future of medicine.  The introduction to the series is available here, and all posts in the series are available here.

Personal health data has historically been controlled by the healthcare industry. However, much has changed in the last decade. From wearable devices for tracking physical activity, to services that decode the personal microbiome, there has been an explosion of methods to collect and understand our personal health and health behavior. This explosion has created a new type of data that has the potential to transform our understanding of the deep interactions of health behaviors, exposure, and outcomes — data that is large-scale, longitudinal, real-time, and portable.

New devices, applications, and services are creating large amounts of data by providing methods for collecting information repeatedly over long periods of time. For example, I have tracked over 20 million steps since 2011 using a Fitbit activity tracker. Many of the new tools of personal health data are also connected to the Internet through Bluetooth communication with smartphones and tablets. This connectivity, while commonly used to update databases as devices sync, also provides an opportunity to view data about ourselves in real-time. Lastly, there is an increasing interest in making this data accessible through the use of application programming interfaces (APIs) that allow third parties to access and analyze data as it becomes available. Already we are seeing unique and useful tools being developed to bring consumer personal health data to bear in clinical settings, health research studies, and health improvement tools and services.
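
As a small illustration of what syncing plus an API makes possible, the sketch below aggregates timestamped step records (as a device sync might deliver them) into daily totals; the records are invented and no real vendor API is assumed.

```python
# Toy aggregation of synced wearable data: per-event step counts -> daily totals.
from collections import defaultdict
from datetime import datetime

def daily_step_totals(records):
    """Aggregate (ISO timestamp, steps) pairs into per-day totals."""
    totals = defaultdict(int)
    for timestamp, steps in records:
        day = datetime.fromisoformat(timestamp).date().isoformat()
        totals[day] += steps
    return dict(totals)

# Invented records, as a device sync might deliver them.
synced = [
    ("2017-03-01T08:15:00", 1200),
    ("2017-03-01T18:40:00", 4300),
    ("2017-03-02T09:05:00", 2500),
]

print(daily_step_totals(synced))  # {'2017-03-01': 5500, '2017-03-02': 2500}
```

The same pattern (timestamped events rolled up into longitudinal summaries) is what makes this data useful for research when pooled across many people.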

The availability of this type of personal health data is having a big impact. The examples provided by the #WeAreNotWaiting and #OpenAPS communities showcase the groundbreaking potential of portable, usable, personal data. It is transforming the quality of life for individuals living with type 1 diabetes. Through access to data from continuous glucose monitors and wireless control of insulin pumps, over 100 individuals have implemented their own version of an artificial pancreas. These pioneering individuals are at the forefront of a revolution using personal health data to take charge of care and customize treatment decisions.

Personal health data will play a major role in the future of precision medicine, healthcare, and health research. Sensors will continue to improve. New data streams will become available. More analytical tools will surface. There will be more support for portable and sharable data. The availability of large-scale, longitudinal, and real-time personal health data will improve not only the ability of individuals to understand their own health, but when pooled, may produce new insights about what works, for what people, under what conditions.

Patient-Driven Medical Innovations: Building a Precision Medicine Supply Chain for All

By Kingshuk K. Sinha, PhD (Department Chair and Mosaic Company-Jim Prokopanko Professor of Corporate Responsibility, Supply Chain and Operations Department, Carlson School of Management, University of Minnesota)

This post is part of a series on how patients are creating the future of medicine.  The introduction to the series is available here, and all posts in the series are available here.

While the promise and potential of precision medicine are clear, delivering on that promise and making precision medicine accessible to all patients will require clinical adoption and a reliable and responsible supply chain. We already know this is a big problem in pharmacogenomics technology; the science is advancing rapidly, but clinical adoption is lagging. While Big Data can be a powerful tool for health care – whether it be an individual’s whole genome or an online aggregation of information from many patients with a particular disease – building implementation pathways to analyze and use the data to support clinical decision making is crucial. All of the data in the world doesn’t mean much if we can’t ensure that the development of precision medicine is linked with the efficient, safe, and equitable delivery of precision medicine.

Effective implementation means addressing the stark realities of health disparities. Leveraging citizen science to develop and deliver precision medicine has the potential to reduce those disparities. Citizen science complements more traditional investigator-driven scientific research and engages amateur and non-professional scientists, including patients, patients’ families, and communities across socio-economic strata as well as country boundaries.

Continue reading

ACA Repeal and the End of Heroic Medicine

By Seán Finan

Last week, I saw Dr Atul Gawande speak at Health Action 2017. Healthcare advocates and activists sat around scribbling notes and clutching at their choice of whole-food, cold-pressed, green and caffeinated morning lifelines. Gawande speaks softly, lyrically and firmly; the perfect bedside manner for healthcare advocates in these early days of the Trump presidency. He calmly announced to the congregation that the age of heroic medicine is over. Fortunately, he continued, that’s a good thing.

Gawande’s remarks echoed a piece he published in the New Yorker. He writes that for thousands of years, humans fought injury, disease and death much like the ant fights the boot. Cures were a heady mixture of quackery, tradition and hope. Survival was largely determined by luck. Medical “emergencies” did not exist; only medical “catastrophes”. However, during the last century, antibiotics and vaccines routed infection, polio and measles. X-rays, MRIs and sophisticated lab tests gave doctors a new depth of understanding. New surgical methods and practices put doctors in a cage match with Death and increasingly, doctors came out with bloody knuckles and a title belt. Gradually, doctors became heroes and miracles became the expectation and the norm. This changed the way we view healthcare. Gawande writes, “it was like discovering that water could put out fire. We built our health-care system, accordingly, to deploy firefighters.”

But the age of heroic medicine is over. Dramatic, emergency interventions are still an important part of the system. However, Gawande insists that the heavy emphasis on flashy, heroic work is misplaced. Much more important is “incremental medicine” and the role of the overworked and underappreciated primary care physician.

Continue reading

Artificial Intelligence and Medical Liability (Part II)

By Shailin Thomas

Recently, I wrote about the rise of artificial intelligence in medical decision-making and its potential impacts on medical malpractice. I posited that, by decreasing the degree of discretion physicians exercise in diagnosis and treatment, medical algorithms could reduce the viability of negligence claims against health care providers.

It’s easy to see why artificial intelligence could impact the ways in which medical malpractice traditionally applies to physician decision-making, but it’s unclear who should be responsible when a patient is hurt by a medical decision made with an algorithm. Should the companies that create these algorithms be liable? They did, after all, produce the product that led to the patient’s injury. While intuitively appealing, traditional means of holding companies liable for their products may not fit the medical algorithm context very well.

Traditional products liability doctrine applies strict liability to most consumer products. If a can of soda explodes and injures someone, the company that produced it is liable, even if it didn’t do anything wrong in the manufacturing or distribution processes. Strict liability works well for most consumer products, but would likely prove too burdensome for medical algorithms. This is because medical algorithms are inherently imperfect. No matter how good the algorithm is — or how much better it is than a human physician — it will occasionally be wrong. Even the best algorithms will give rise to potentially substantial liability some percentage of the time under a strict liability regime.
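
The point is easy to quantify with back-of-the-envelope arithmetic (all figures invented): under strict liability every algorithmic error is compensable, so expected exposure scales linearly with deployment volume and error rate, and even a rarely-wrong algorithm used at scale faces enormous expected liability.

```python
# Back-of-the-envelope strict-liability exposure; all figures are hypothetical.

def expected_liability(uses_per_year, error_rate, avg_damages):
    """Expected annual exposure if every algorithmic error is compensable."""
    return uses_per_year * error_rate * avg_damages

# An algorithm used a million times a year, wrong 1% of the time, with
# average damages of $50,000 per compensable error:
print(expected_liability(1_000_000, 0.01, 50_000))  # 500000000.0, i.e. $500M
```

A negligence-style regime would expose only the subset of errors traceable to unreasonable design, which is why the fit of strict liability to inherently imperfect algorithms is in question.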

Continue reading

Artificial Intelligence, Medical Malpractice, and the End of Defensive Medicine

By Shailin Thomas

Artificial intelligence and machine-learning algorithms are the centerpieces of many exciting technologies currently in development. From self-driving Teslas to in-home assistants such as Amazon’s Alexa or Google Home, AI is swiftly becoming the hot new focus of the tech industry. Even those outside Silicon Valley have taken notice — Harvard’s Berkman Klein Center and the MIT Media Lab are collaborating on a $27 million fund to ensure that AI develops in an ethical, socially responsible way. One area in which machine learning and artificial intelligence are poised to make a substantial impact is health care diagnosis and decision-making. As Nicholson Price notes in his piece Black Box Medicine, medicine “already does and increasingly will use the combination of large-scale high-quality datasets with sophisticated predictive algorithms to identify and use implicit, complex connections between multiple patient characteristics.” These connections will allow doctors to increase the precision and accuracy of their diagnoses and decisions, identifying and treating illnesses better than ever before.

As it improves, the introduction of AI to medical diagnosis and decision-making has the potential to greatly reduce the number of medical errors and misdiagnoses — and allow diagnosis based on physiological relationships we don’t even know exist. As Price notes, “a large, rich dataset and machine learning techniques enable many predictions based on complex connections between patient characteristics and expected treatment results without explicitly identifying or understanding those connections.” However, by shifting pieces of the decision-making process to an algorithm, increased reliance on artificial intelligence and machine learning could complicate potential malpractice claims when doctors pursue improper treatment as the result of an algorithm error. In its simplest form, the medical malpractice regime in the United States is a professional tort system that holds physicians liable when the care they provide to patients deviates from accepted standards so much as to constitute negligence or recklessness. The system has evolved around the conception of the physician as the trusted expert, and presumes for the most part that the diagnosing or treating physician is entirely responsible for her decisions — and thus responsible if the care provided is negligent or reckless.

Continue reading