A recent article, published 30 September in the Journal of the American College of Radiology, explored the challenges of monitoring the performance of radiology-practice AIs after regulatory clearance and strategies for selecting suitable tools.

 

The clinical use of AI algorithms is growing. The US FDA has cleared more than 100 commercially available AI algorithms for clinical use, and a recent survey of ACR members found that 30% of respondents already used AI in their clinical workflow while nearly 50% planned to purchase AI for their practices within the next 5 years. Given this growing demand, clinical practices need a framework for evaluating algorithms before implementing them, since practices are ultimately liable for ensuring that the AIs they purchase are safe and effective for their patients. Because AI algorithms often generalize poorly across sites, practices need mechanisms for evaluating them on their own data before implementation. Unfortunately, only institutions with robust informatics infrastructures currently have tools for evaluating and monitoring AI models. A discussion of the challenges of assessing the tools that inform purchase decisions for radiology AIs, and of monitoring AI models after deployment, is therefore helpful.

 

The performance of AIs without adaptive learning degrades over time as conditions change: new imaging equipment or protocols, software updates, or shifts in patient demographics. Currently, all commercial diagnostic imaging AI models are “locked,” meaning end-users cannot modify them. If approved by the FDA, adaptive AIs could adjust to such changes. However, end-users would then have to continuously verify that the algorithm’s changes actually improve its performance, because poor learning experiences can degrade it.
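To make that verification step concrete, the sketch below shows one way a practice might re-score a proposed model update against a locally curated validation set before accepting it. The ValidationCase, evaluate, and accept_update names are hypothetical illustrations, not anything described in the article, and the example assumes a simple binary-finding task with the interpreting radiologist as the reference standard.

```python
# Illustrative sketch only: accept an adaptive-AI update only if it does not
# degrade key metrics on the practice's own validation cases. All names here
# are assumptions made for this example, not the article's method.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ValidationCase:
    image_id: str
    ground_truth: bool  # finding present per the interpreting radiologist


def evaluate(predict: Callable[[str], bool], cases: List[ValidationCase]) -> dict:
    """Compute sensitivity and specificity of a model over local cases."""
    tp = fp = tn = fn = 0
    for case in cases:
        pred = predict(case.image_id)
        if case.ground_truth:
            tp += pred
            fn += not pred
        else:
            fp += pred
            tn += not pred
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }


def accept_update(baseline, candidate, cases, tolerance=0.02) -> bool:
    """Accept the updated model only if neither metric drops beyond tolerance."""
    old, new = evaluate(baseline, cases), evaluate(candidate, cases)
    return all(new[m] >= old[m] - tolerance for m in ("sensitivity", "specificity"))
```

In practice the tolerance and the composition of the local validation set would be governance decisions for the practice; the point of the sketch is only that each model change is checked against local data before it goes live.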

 

One possible solution is the creation of an AI registry that captures the sensitivity, specificity, and positive predictive value of the algorithm’s output measured against the interpreting radiologist’s reading. The registry should also include examination metadata such as the equipment manufacturer, the protocol used, the radiation dose, and patient demographics. Collecting these parameters should provide the data needed to detect performance degradation, and data pooled from multiple end-users can help support regulatory requirements.
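As a rough illustration of what such a registry record and its aggregate metrics might look like, here is a minimal sketch. The RegistryRecord fields and the summarize helper are assumptions made for the example and do not reflect the actual schema of the ACR registry.

```python
# Minimal sketch of a registry record and its aggregate metrics.
# Field names and helpers are assumptions for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class RegistryRecord:
    exam_id: str
    ai_positive: bool            # algorithm's finding
    radiologist_positive: bool   # interpreting radiologist's reading (reference)
    manufacturer: str            # equipment manufacturer
    protocol: str                # acquisition protocol
    radiation_dose_mgy: float    # radiation dose
    patient_age: int
    patient_sex: str


def _ratio(numerator: int, denominator: int) -> float:
    return numerator / denominator if denominator else float("nan")


def summarize(records: List[RegistryRecord]) -> dict:
    """Sensitivity, specificity, and PPV of the AI relative to the radiologist."""
    tp = sum(r.ai_positive and r.radiologist_positive for r in records)
    fp = sum(r.ai_positive and not r.radiologist_positive for r in records)
    fn = sum(not r.ai_positive and r.radiologist_positive for r in records)
    tn = sum(not r.ai_positive and not r.radiologist_positive for r in records)
    return {
        "sensitivity": _ratio(tp, tp + fn),
        "specificity": _ratio(tn, tn + fp),
        "ppv": _ratio(tp, tp + fp),
    }
```

Grouping such summaries by manufacturer, protocol, or acquisition date would let a practice see whether degradation tracks a specific scanner, protocol change, or demographic shift, which is the kind of signal the registry is meant to surface.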

 

Such a registry, created by the American College of Radiology Data Science Institute, is currently undergoing clinical testing at the University of Rochester.

 

The authors add, “Radiologists will be able to use these to understand the availability and scope of AI tools available for use in clinical practice, to evaluate AI models using their own data, and to monitor the performance of AI models deployed in their practices.”


References:

Allen, B., et al. (2021) Evaluation and Real-World Performance Monitoring of Artificial Intelligence Models in Clinical Practice: Try It, Buy It, Check It. Journal of the American College of Radiology.



