A.J. Bahou Speaks on AI Liability Risks in Healthcare Industry
Healthcare Risk Management
Bradley partner A.J. Bahou was quoted in Healthcare Risk Management on AI liability risks for healthcare organizations. While artificial intelligence (AI) offers opportunities to improve diagnoses and patient care, it also comes with significant risks.
Bahou said that healthcare systems and providers may be subject to liability under malpractice or other negligence theories when they use AI tools to provide care to patients. Similarly, AI vendors may be subject to product liability for the tools they supply to healthcare.
“Regarding doctors, they should be concerned about using a new AI tool because that tool could be criticized as deviating from the standard of care,” Bahou said. “Until the AI tool is widely used and accepted by the medical profession, doctors should always evaluate the outputs from AI tools and maintain the physician’s judgment in the ultimate decision on patient care. Until AI tools become part of the standard of care by the profession, this early adopter concern will be a persistent risk for the doctor, and vicariously for the health system, during this evolution of using AI tools in healthcare.”
There is also a product liability risk for the AI system designer, such as liability arising from the design of the algorithm used in the AI tool, Bahou noted. In this instance, the AI vendor could be liable if the product causes harm to the patient. Legal theories in this area could include failure to warn about risks, defective design, unmanageable changes to the algorithm resulting from updates in the machine learning process, or manufacturing defects. For example, if a surgical robot driven by AI algorithms causes harm, the AI manufacturer might be liable for the injury to patients if the product is proven to be defective, Bahou explained.
AI vendors are promoting AI assistant tools for the physician-patient interaction, Bahou added. The benefit is akin to having a voice assistant, such as Alexa or Siri, listen to the physician-patient conversation and transcribe it; the transcription can then become the medical record of that visit. Providers will appreciate the reduced burden of taking notes and documenting everything, Bahou said.
“The doctor may also ask the AI tool for assistance in diagnosis, allergic interactions, medical history, or prescription assistance. In doing so, the AI system can check the patient’s medical history, automatically find available time on the patient’s mobile device to schedule a follow-up visit, and/or read data from the patient’s mobile device for collecting health data as part of the treatment plan,” Bahou said. “The AI tool could send the prescription to your pharmacy of choice and automate that e-prescription process. There are many benefits, but also increased risks.” One risk of transcribing the conversation is how the AI tool will interpret, or misinterpret, sarcasm, he noted.
Concerns about patient privacy, cybersecurity, and inherent bias in AI tools will grow as those tools become more deeply embedded across the spectrum of patient care, Bahou said. There is also a risk of malpractice if the provider relies too heavily on the AI tool for assistance and misses a diagnosis.
“Likewise, the AI vendor may have product liability if its outputs cause harm or fail in meeting the standard of care with an improper diagnosis. Cybersecurity risks remain prevalent but with increasing concern about the biometric data now added to the medical record,” Bahou said. “The record of a person’s oral conversation being hacked is much more intrusive as compared to a cryptic medical note written by the doctor in the doctor’s own words.”
The article, “AI Creates Liability Risks for Healthcare Organizations,” was published by Healthcare Risk Management on March 1, 2024.