Digital Healthcare Focus: AI, Medical Liability, in a ‘Moving’ Image


In this space I explore monthly topics, from concepts to technologies, related to the steps necessary to build Digital Healthcare Systems. For the month of April 2021, I have invited João Santinha to co-author a brief article on ‘AI, Medical Liability, in a "Moving" Image’. We dive into the December 2020 topic and focus on the medical liability of AI use in medical imaging.


Medical imaging offers a visual representation of internal and external tissues, making it an essential part of patient diagnosis and treatment. The rapid increase in the use of medical imaging has put radiology, nuclear medicine and pathology departments under growing pressure, which may lead to physician burnout and errors in image reading, and ultimately to a degradation of patient care. The success of AI has also reached medicine, and especially medical imaging, as “Images Are More than Pictures, They Are Data” (Gillies et al. 2016). AI has been demonstrated to improve medical image reconstruction, enhancement, segmentation and registration, and to perform computer-aided detection and diagnosis. While some of these applications enable faster acquisition of better images, others help physicians review more images with fewer human errors. The use of AI may even help physicians reduce the number of unnecessary surgeries that some patients currently undergo, as we have learnt recently (Tobaly et al. 2020).


The number of AI implementations in clinical practice is still small. However, medical imaging AI has some success stories, in which algorithms reduce treatment time and mitigate racial and socioeconomic biases embedded in medical knowledge. The first example is an algorithm that changes a patient's treatment path in cases of suspected stroke, reducing the average time between arrival and treatment to 34 minutes, compared with 123 minutes under the traditional standard stroke workflow. Furthermore, it became the first medical AI to be reimbursed, which will likely lead to its wide adoption (Hassan 2020).


More recently, another AI algorithm, which measures the severity of osteoarthritis from knee X-rays, was developed to reduce the racial and socioeconomic pain disparities of the standard severity grading (Pierson et al. 2021), which was developed decades ago using white British populations. This algorithm demonstrates the immense potential of AI to improve current medical knowledge and reduce the biases that may exist within it.


Nevertheless, stories like Google's AI diabetic retinopathy screening tool (Beede et al. 2020) also demonstrate the fragility of these algorithms. Despite promising initial results, a considerable drop in performance was observed when this AI algorithm was used in real-life conditions, with socio-environmental factors impacting the model's performance, the nursing workflows and the patient experience.


As such, it is imperative to consider that medical imaging produces ‘non-stationary’ images. The constantly evolving nature of medical imaging, where scanners, image acquisition sequences and acquisition parameters are frequently changed or replaced, hinders the implementation of medical imaging AI in clinical practice. While AI algorithms should be robust, to a certain degree, to these drifts and changes, with considerable effort and research devoted to improving algorithms' robustness and generalisability, they also need to be continuously monitored and updated when performance degradation is observed. Such updates in the context of continuous learning are already contemplated in the FDA's proposed regulatory framework for modifications to AI/ML-based Software as a Medical Device (SaMD), where software may continue to learn and evolve to improve patient care, or to ensure its maintenance, without requiring FDA premarket review every time it changes.
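The continuous monitoring described above can be as simple as tracking the model's rolling agreement with reference reads and raising an alert when it falls below an acceptable level. The following is a minimal, hypothetical sketch (the class name, window size and threshold are illustrative assumptions, not part of any cited system):

```python
from collections import deque


class PerformanceMonitor:
    """Track rolling agreement between an AI model's predictions and
    reference labels (e.g. radiologist reads) to flag performance drift."""

    def __init__(self, window=200, alert_threshold=0.85):
        # Keep only the most recent `window` correct/incorrect flags.
        self.window = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def record(self, prediction, reference):
        """Log whether the model agreed with the reference label."""
        self.window.append(prediction == reference)

    def accuracy(self):
        """Rolling agreement rate, or None if nothing has been recorded."""
        return sum(self.window) / len(self.window) if self.window else None

    def drift_detected(self):
        """True when rolling agreement has dropped below the threshold."""
        acc = self.accuracy()
        return acc is not None and acc < self.alert_threshold
```

In practice the hard part is the article's own question: when the AI changes patient management, the reference labels fed to `record` may simply stop being collected.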




However, how can such algorithms be monitored in real time when patient management is changed? Moreover, what if the clinical outcome predicted by the AI algorithm changes the patient management path, so that the ground truth used to assess the algorithm's performance is no longer collected? Are clinical trial-like assessments of these algorithms a solution, with some patients having their clinical outcome predicted by an AI algorithm but going through the traditional treatment so that the algorithm's performance can be assessed? And are there ethical issues with such an approach?


On the other hand, when a new version of an AI algorithm is deployed into clinical practice, how can we be sure that no bug or bias is being introduced, as happened with the new flight control system of Boeing's 737 MAX and its malfunctioning sensor? A solution may be something like Tesla's ‘shadow mode testing’, which enables the comparison of a new version of the algorithm against the decisions made by physicians or by the current version of the algorithm.
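The shadow-mode idea can be illustrated with a short sketch: the candidate model runs on the same cases as the deployed one, its outputs are logged but never acted upon, and disagreements are reviewed offline before promotion. This is a hypothetical illustration only; the function and its signature are assumptions, not a real deployment API:

```python
def shadow_compare(cases, current_model, candidate_model):
    """Run a candidate model 'in the shadow' of the deployed one.

    Only `current_model`'s output drives care; the candidate's output
    is logged, never acted upon. Returns the agreement rate and the
    list of cases where the two versions disagreed, for offline review.
    """
    disagreements = []
    for case_id, image in cases:
        served = current_model(image)    # decision actually used in care
        shadow = candidate_model(image)  # logged only, never acted upon
        if served != shadow:
            disagreements.append((case_id, served, shadow))
    agreement = 1 - len(disagreements) / len(cases) if cases else None
    return agreement, disagreements
```

The design point is that the candidate is evaluated on real clinical input without ever influencing patient management, which sidesteps some (though not all) of the ethical questions raised above.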


Although a considerable number of AI applications in health may be viewed as ‘professional practice enhancement’ technologies, this does not reduce the medical liability implications of their use; it makes them more complex. A profound analysis of medical and other professionals’ liability is beyond the scope of this article; however, healthcare professionals’ involvement is critical in two parts of the value chain. First, they need to select and evaluate the reference guidelines and other scientific evidence with which the AI systems are to be trained, educated and calibrated. Here there is little regulation beyond the leges artis principle that governs all healthcare professionals and doctors. These are grey areas, and profound medical knowledge (the leges artis principle) will not easily help either, as most AI systems use ‘black box’ algorithms with little or no explanatory detail that would allow a well-educated nurse or doctor to identify a defective indication or processing.


This area requires proper regulation that goes beyond the product-focused regulation provided by the Medical Device Regulation (to come into full force by May 2021 in the EU) and needs to include adaptations of practice and malpractice law. Similar challenges exist with the monitoring of these technologies' use in practice. Such regulation would ensure that the benefits of this technology are used safely and with trust, while its risks are minimised. By doing so, we will treat patients better and reduce the increasing burden on healthcare systems and staff.



Beede E et al. (2020) A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). New York, NY: Association for Computing Machinery, 1–12. doi: 10.1145/3313831.3376718

Gillies RJ et al. (2016) Radiomics: images are more than pictures, they are data. Radiology, 278(2):563-77.

Hassan AE (2020) New Technology Add-On Payment (NTAP) for Viz LVO: a win for stroke care. Journal of NeuroInterventional Surgery. Published Online First: 24 November. doi: 10.1136/neurintsurg-2020-016897

Pierson E et al. (2021) An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nat Med, 27:136–140. doi: 10.1038/s41591-020-01192-7

Tobaly D et al. (2020) CT-Based Radiomics Analysis to Predict Malignancy in Patients with Intraductal Papillary Mucinous Neoplasm (IPMN) of the Pancreas. Cancers, 12(11):3089. doi: 10.3390/cancers12113089

Published on: Mon, 29 Mar 2021

