The integration of artificial intelligence (AI) into radiology is transforming medical imaging, promising more efficient and accurate diagnostic processes. Radiology AI tools now assist clinicians in detecting anomalies and interpreting images with a speed and precision that could redefine patient care. However, as these AI products grow more advanced, the need for robust monitoring systems to ensure their reliability becomes imperative. Unlike traditional tools, AI models are dynamic, requiring continuous oversight to prevent failures in real-world applications. Without sufficient quality control mechanisms, the rapid adoption of radiology AI tools could produce unintended and potentially harmful results for patients.
The Rising Need for Quality Control in Radiology AI
Radiology AI tools, though promising, lack comprehensive quality control measures, creating a critical gap in their safe implementation. Addressing attendees at the Society for Imaging Informatics in Medicine (SIIM) Annual Meeting, Dr Raym Geis highlighted the urgency of establishing “watchdog” systems for these tools and warned of a potential “Challenger Shuttle moment” in radiology if they are left unchecked. The danger lies in AI tools’ increasing autonomy: as they communicate and make decisions based on their own interpretations of data, undetected errors can accumulate and lead to significant harm. Dr Geis cautioned that, without proactive quality control, the unmonitored evolution of AI in clinical settings could lead to abrupt operational shutdowns and lengthy investigations should a major incident occur.
Radiology AI quality control requires complex, quantitative monitoring, an area where the field currently lags behind other industries. SIIM, with its history of pioneering Picture Archiving and Communication Systems (PACS) development, is positioned to lead the way in training radiology professionals in reliability engineering for these systems. Dr Geis argued that PACS administrators, who are already responsible for installing, monitoring, managing and rolling back radiology systems, could be equipped with the necessary skills to oversee AI-driven systems. With additional training in systems reliability engineering, PACS administrators could become systems reliability engineers, taking on a crucial role in maintaining the safety and efficiency of medical imaging AI.
The Challenge of Maintaining AI Reliability
AI tools in radiology are inherently vulnerable to a gradual decline in accuracy over time due to changes in the input data they process. Dr Geis explained that an AI model’s effectiveness is closely tied to its training data, and any variation in that data—whether from updated imaging protocols, demographic shifts in patient populations, or newly acquired imaging equipment—can reduce performance. This phenomenon, known as “data drift,” is a common issue in AI but is especially concerning in radiology, where a decline in accuracy can have serious consequences for patient care. Whereas failures in other sectors might result in financial losses, inaccuracies in medical imaging AI can mean missed diagnoses or incorrect treatments.
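As a concrete illustration of how such drift might be caught, the minimal sketch below compares one summary feature of recent studies against the distribution seen during training, using a two-sample Kolmogorov–Smirnov test. The choice of feature (mean pixel intensity), the window size and the significance threshold are assumptions made for illustration, not details from Dr Geis’s talk.

```python
# Minimal data-drift check: compare a summary feature of recent inputs
# against a reference distribution from the model's training era.
# Illustrative assumptions: the feature, window size and alpha are not
# from the talk; any scalar per-study feature could be monitored this way.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, recent: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test on one scalar feature,
    e.g. mean pixel intensity per study."""
    result = ks_2samp(reference, recent)
    return result.pvalue < alpha  # small p-value: distributions differ

# Example: training-era feature values vs. studies from a new scanner
rng = np.random.default_rng(0)
reference = rng.normal(120.0, 15.0, size=5000)  # historical mean intensities
recent = rng.normal(128.0, 15.0, size=200)      # shifted after a scanner swap
print(drift_detected(reference, recent))        # True -> investigate
```

In practice a department would track many such features in parallel and treat a firing test as a prompt for human review rather than an automatic rollback.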
Dr Geis noted that AI models do not fail in obvious ways; instead, they degrade gradually and often silently. Subtle shifts in performance may go unnoticed without rigorous monitoring. To counter this, Dr Geis proposed a “banded assessment monitoring system,” which would involve evaluating specific features of the input data to detect shifts that could affect AI accuracy. This method would allow radiology departments to identify and address changes in data inputs that could compromise AI tools’ reliability, ensuring consistent performance over time. By implementing such monitoring protocols, healthcare providers can mitigate risks and maintain the integrity of AI systems in clinical environments.
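The exact design of the banded assessment system was not spelled out, so the sketch below is only one plausible reading under stated assumptions: an acceptance band is fitted per input feature from reference data, and an alert fires when an unusually large fraction of incoming studies falls outside its band. The features, percentiles and alert tolerance are all illustrative.

```python
# One plausible reading of a "banded" monitor: per-feature acceptance
# bands fitted from reference data, with an alert when too many incoming
# studies fall outside a band. All parameters here are assumptions.
import numpy as np

def fit_bands(reference: np.ndarray, lo: float = 1.0, hi: float = 99.0):
    """Percentile bands per feature; reference is (n_studies, n_features)."""
    return np.percentile(reference, [lo, hi], axis=0)  # shape (2, n_features)

def band_alerts(batch: np.ndarray, bands: np.ndarray,
                tolerance: float = 0.10) -> np.ndarray:
    """Flag features whose out-of-band fraction exceeds the tolerance.
    With 1st-99th percentile bands, ~2% out-of-band is expected by design."""
    outside = (batch < bands[0]) | (batch > bands[1])
    return outside.mean(axis=0) > tolerance

# Hypothetical features: [mean intensity, slice thickness (mm), patient age]
rng = np.random.default_rng(1)
reference = rng.normal([120, 1.0, 55], [15, 0.1, 12], size=(5000, 3))
bands = fit_bands(reference)
todays_batch = rng.normal([120, 2.5, 55], [15, 0.1, 12], size=(40, 3))
print(band_alerts(todays_batch, bands))  # [False  True False]: thickness drifted
```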
The Importance of Establishing Standards for AI Tools
Dr Geis emphasised the importance of developing industry-wide standards for radiology AI tools to ensure long-term reliability. One such proposal involves the use of standardised model cards that document crucial details about each AI tool, including its model architecture, training data characteristics and ethical considerations. Model cards would provide a clear overview of each AI product’s capabilities and limitations, enabling healthcare professionals to make informed decisions about tool selection and application. By creating a universal template for model cards, SIIM could set the standard for AI tools in radiology, ensuring that each product meets minimum quality and safety requirements.
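No universal template has been agreed yet, so the sketch below merely illustrates what a machine-readable model card covering those ingredients might look like; every field name and value is a hypothetical example rather than a SIIM standard.

```python
# Illustrative model-card record covering the ingredients named above:
# architecture, training-data characteristics and ethical considerations.
# All field names and values are hypothetical, not a published standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    architecture: str                 # e.g. network family and size
    intended_use: str                 # clinical task and setting
    training_data: dict               # modalities, sites, demographics
    known_limitations: list = field(default_factory=list)
    ethical_considerations: list = field(default_factory=list)

card = ModelCard(
    name="chest-ct-nodule-detector",  # hypothetical product
    version="2.3.1",
    architecture="3D CNN, ~30M parameters",
    intended_use="Adult chest CT; nodule detection; second reader only",
    training_data={"modality": "CT", "sites": 4, "age_range": "18-90",
                   "scanners": ["VendorA", "VendorB"]},
    known_limitations=["Not validated on paediatric studies"],
    ethical_considerations=["Demographic performance gaps not yet audited"],
)
print(json.dumps(asdict(card), indent=2))  # human- and machine-readable
```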
These model cards would enhance transparency and support more effective monitoring and evaluation. By detailing the conditions under which an AI tool is most reliable, they would enable PACS administrators and other professionals to better understand the factors influencing its performance. As an established leader in radiology informatics, SIIM is well-suited to lead this standardisation effort, helping foster an environment in which AI tools are applied safely and reliably.
The introduction of AI into radiology heralds a new era in medical imaging, with the potential to vastly improve diagnostic accuracy and efficiency. However, as AI tools become more complex and autonomous, comprehensive quality control mechanisms are paramount. By implementing rigorous monitoring systems and establishing industry standards, the radiology field can ensure that AI applications are safe and effective in clinical use. SIIM’s leadership in this initiative would enable PACS administrators and other radiology professionals to manage these tools reliably, ultimately advancing patient care while safeguarding against the risks of AI failure. With a proactive approach to quality control, the radiology community can realise the full potential of AI while prioritising patient safety and clinical accuracy.
Source: Healthcare in Europe
Image Credit: iStock