A lawyer friend once told me, “90% of non-criminal law is arguing over who gets stuck with the tab.” More and more people are asking precisely that question about medical artificial intelligence: who should be held liable when a physician uses an AI system and the AI makes a mistake that harms the patient?
Normally, if a patient suffers harm due to the negligence or wrongdoing of a physician, this is covered by medical malpractice law. For example, if a physician fails to recognize an obvious heart attack and dismisses the patient’s chest pain as mere heartburn, that would constitute malpractice.
(Of course, some cases of early heart attack are subtle and might be reasonably misdiagnosed by the majority of competent physicians. Such errors do not necessarily constitute malpractice, and lawyers can spend a lot of time arguing whether or not a particular diagnosis was reasonable at the time of the patient encounter, even if it turns out to have been mistaken in retrospect.)
Conversely, if a patient suffers harm due to a flawed medical device, this would typically be covered by product liability law. In the 1980s, flawed software in the Therac-25 radiation therapy machine resulted in several patients receiving 100 times the prescribed dose. Unfortunately, these patients died or suffered serious radiation injuries. The manufacturer ended up settling the subsequent lawsuits for a figure estimated to be over $150 million.
However, if a physician utilizes artificial intelligence software to help with patient care and the AI makes a mistake, this doesn’t quite fall neatly into either of these two categories. This is especially relevant with many current “black box” AI systems, where not even the system designers know how the AI arrived at its conclusion.
The questions over who pays are not just theoretical. According to Dr. Jesse Ehrenfeld, president of the American Medical Association, “We’re seeing lawsuits already.” Naturally, physicians would like to shift as much liability for faulty AI performance as possible onto the AI companies. On the other hand, technology companies like to point out that medical treatment decisions are ultimately made by physicians, who must sign off on the final treatment plan. So they contend that any errors made by AI assistants are ultimately the physician’s legal responsibility.
Another complicating factor is the evolving standard of care for physicians using AI. Right now, AI systems are still optional for physicians. But as medical AI systems improve in quality and become more commonplace, physicians who deliberately choose not to use AI for fear of computer errors could leave themselves open to charges of practicing below the standard of care.
For example, if radiology AI systems evolve to the point that they can reliably detect 99% of breast cancers on a mammogram, whereas a skilled human detects only 90%, radiologists who fail to use AI could arguably be accused of delivering substandard care. Thus, we may see conflicting legal incentives for physicians both to use and not to use AI.
Dr. Jonathan Mezrich notes that one possible solution would be a “no fault” indemnity system for AI-related medical injuries, much like the National Vaccine Injury Compensation Program (NVICP). The NVICP covers certain vaccine-caused harms to patients, paid for by a tax on vaccines. In theory, this protects vaccine manufacturers from certain high-risk financial penalties, while also allowing qualified plaintiffs to receive monetary compensation more quickly with fewer bureaucratic and legal hurdles. A similar system could be created for medical AI. (Note: The American Association for Justice, an association of trial lawyers, opposes legislation that limits legal accountability for medical AI misdiagnosis.)
Another novel approach would be to declare the AI a legal “person” for the purposes of liability. As explained by Sullivan and Schweikart in the AMA Journal of Ethics, the AI would be required to be insured “similar to how physicians possess medical malpractice insurance” and could “then be sued directly for any negligence claims.”
At present, these legal issues are unsettled. I am thus encouraged that the US Congress is considering how best to clarify such liability issues for both physicians and technology companies.
I am still very optimistic about the benefits of medical AI for patient care. Not a week goes by that I don’t read news stories describing how AI can predict breast cancer risk up to 5 years in the future, detect early diabetic retinopathy (diabetes-induced vision damage), or detect the early spread of lung cancer.
Thus, I hope policymakers and legislators find a way to encourage continued innovation and progress, while providing appropriate legal recourse for those wrongly harmed by flawed medical AI.