Recently, I wrote about the rise of artificial intelligence in medical decision-making and its potential impacts on medical malpractice. I posited that, by decreasing the degree of discretion physicians exercise in diagnosis and treatment, medical algorithms could reduce the viability of negligence claims against health care providers.
It’s easy to see why artificial intelligence could change the ways in which medical malpractice traditionally applies to physician decision-making, but it’s far less clear who should be responsible when a patient is hurt by a medical decision made with an algorithm. Should the companies that create these algorithms be liable? They did, after all, produce the product that led to the patient’s injury. While that answer is intuitively appealing, the traditional means of holding companies liable for their products may not fit the medical algorithm context very well.
Traditional products liability doctrine applies strict liability to most consumer products. If a can of soda explodes and injures someone, the company that produced it is liable, even if it did nothing wrong in the manufacturing or distribution processes. Strict liability works well for most consumer products, but it would likely prove too burdensome for medical algorithms, because medical algorithms are inherently imperfect. No matter how good an algorithm is, or how much better it performs than a human physician, it will occasionally be wrong. Under a strict liability regime, then, even the best algorithms would expose their makers to potentially substantial liability some percentage of the time.