The European Society of Radiology (ESR) has provided key recommendations for implementing the European Union’s Artificial Intelligence Act in medical imaging. As AI transforms radiology, compliance with this regulatory framework is crucial to ensure patient safety, enhance clinical effectiveness and uphold ethical standards. While AI innovations can improve efficiency and accuracy in diagnostic imaging, they also raise concerns about transparency, data governance and human oversight. The ESR’s guidance emphasises AI literacy, classification of high-risk systems, data governance and quality management. Through collaboration between regulators and medical professionals, the potential of AI can be realised while high standards of patient care are preserved.
 

AI Literacy and Human Oversight

The AI Act emphasises the need for radiologists, healthcare providers and patients to have adequate AI literacy for safe and effective engagement with AI. Understanding AI’s capabilities and risks is crucial for its responsible use in radiology. The Act requires all stakeholders to develop sufficient knowledge for informed decision-making regarding AI integration. The ESR advocates for AI education in medical school curricula, residency training and ongoing professional development to reduce automation bias and ensure patients understand AI’s role in their care.
 

Human oversight is critical in AI-assisted radiology, especially for high-risk AI tools. These systems should operate under the supervision of qualified radiologists to prevent misdiagnoses and unchecked biases. Research into human-AI interaction is needed to address cognitive burden and prevent deskilling among radiologists. AI should therefore remain an assistive technology, supported by continuous training, regulatory oversight and clear supervision protocols that uphold medical imaging standards and patient care.
 

Risk Classification and Data Governance

The AI Act classifies most medical imaging AI applications as high-risk due to their impact on clinical decision-making. The ESR emphasises the need to differentiate between AI-driven diagnostic tools and those that enhance workflow efficiency. Triage algorithms, which prioritise cases by urgency, require strict regulatory oversight, while systems that improve workflow, such as automated viewing protocols, may not need the same level of scrutiny. The ESR advocates precise guidelines that match compliance obligations to risk levels.
 

Data governance is another vital aspect of AI regulation. The reliability of AI models hinges on the quality and diversity of their training data. The AI Act enforces strict criteria for datasets to reduce bias and ensure equitable healthcare outcomes. The ESR supports the creation of GDPR-compliant datasets through initiatives such as the European Health Data Space (EHDS). Post-market monitoring is essential for tracking AI performance and addressing potential biases or inaccuracies after deployment, ensuring safety and effectiveness in radiology.
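To make the dataset-quality point concrete, the minimal sketch below tallies the composition of a training set and flags under-represented groups. It is illustrative only: the attribute names, the fictional record counts and the 10% threshold are assumptions for this example, not criteria drawn from the AI Act, the EHDS or the ESR recommendations.

```python
from collections import Counter

def audit_composition(records, attribute, min_share=0.10):
    """Report each group's share for `attribute` and flag groups whose share
    falls below `min_share` (illustrative threshold, not a regulatory value)."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: (count / total, count / total < min_share)
            for group, count in counts.items()}

# Hypothetical training-set metadata; no real patient data is involved.
training_records = (
    [{"sex": "female", "scanner": "vendor_A"}] * 620
    + [{"sex": "male", "scanner": "vendor_A"}] * 560
    + [{"sex": "female", "scanner": "vendor_B"}] * 70
)

for group, (share, flagged) in audit_composition(training_records, "scanner").items():
    status = "UNDER-REPRESENTED" if flagged else "ok"
    print(f"scanner={group}: {share:.1%} ({status})")
```

In this fictional example, images from one scanner vendor make up under 10% of the dataset and would be flagged for review, the kind of imbalance that dataset criteria under the AI Act are intended to surface.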
 

Transparency and Post-Market Monitoring

Transparency is a core principle of AI regulation in radiology. The AI Act mandates that high-risk AI systems be developed with sufficient transparency to ensure that users can accurately interpret and utilise AI outputs. The ESR recommends that AI providers supply detailed documentation outlining an AI system's purpose, performance, limitations and capabilities. Insufficient transparency creates risks such as incorrect clinical decisions based on poorly understood AI findings.
 

An effective way to enhance transparency is through model cards, which provide structured information about an AI model’s development, training data, intended use and potential biases. The ESR advocates adopting such measures to ensure responsible implementation of AI technologies.
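As a rough illustration, a model card can be represented as a simple structured record. The sketch below uses a hypothetical chest X-ray triage model; all field names and values are assumptions for illustration rather than a prescribed format from the AI Act or the ESR.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Minimal, illustrative model card for a medical imaging AI system."""
    model_name: str
    intended_use: str
    training_data: str
    evaluation_metrics: Dict[str, float]
    known_limitations: List[str] = field(default_factory=list)
    potential_biases: List[str] = field(default_factory=list)

# Hypothetical values for a fictional triage model; not taken from the ESR
# paper or any real product.
card = ModelCard(
    model_name="ChestXR-Triage v0.9 (hypothetical)",
    intended_use=("Prioritise chest radiographs with suspected pneumothorax "
                  "for radiologist review; not a standalone diagnostic tool."),
    training_data=("Assumed multi-centre European dataset, 2018-2023, with "
                   "demographics documented separately."),
    evaluation_metrics={"sensitivity": 0.94, "specificity": 0.88},
    known_limitations=[
        "Not validated for paediatric patients",
        "Performance unverified on portable radiographs",
    ],
    potential_biases=["Under-representation of some scanner vendors"],
)

if __name__ == "__main__":
    # Print each documented field so the card can be reviewed or exported.
    for field_name, value in vars(card).items():
        print(f"{field_name}: {value}")
```

The value of such a record lies less in the code than in the discipline it imposes: every deployment ships with an explicit statement of intended use, performance and known gaps.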
 

Additionally, post-market monitoring is crucial for ensuring that AI systems perform safely across diverse patient populations. The AI Act requires AI providers to implement robust monitoring mechanisms to track performance and address emerging risks. This is particularly vital for adaptive or self-learning systems, whose performance may evolve with new data. The ESR highlights the need for long-term surveillance, including structured evaluations and protocols for reporting AI-related issues, to enhance system reliability.
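One element of post-market monitoring is comparing a deployed model's recent performance against its validated baseline and flagging drift for review. The sketch below shows such a check; the drift tolerance, the case counts and the escalation message are assumptions for illustration, not requirements of the AI Act or the ESR recommendations.

```python
def check_performance_drift(baseline_sensitivity: float,
                            recent_true_positives: int,
                            recent_false_negatives: int,
                            tolerance: float = 0.05) -> bool:
    """Return True if sensitivity in the recent monitoring window has dropped
    more than `tolerance` below the validated baseline (illustrative rule)."""
    total_positives = recent_true_positives + recent_false_negatives
    if total_positives == 0:
        return False  # nothing to evaluate in this monitoring window
    recent_sensitivity = recent_true_positives / total_positives
    return (baseline_sensitivity - recent_sensitivity) > tolerance

# Hypothetical monitoring window: baseline sensitivity of 0.94 from validation,
# 82 detected and 12 missed positive cases since deployment.
if check_performance_drift(0.94, recent_true_positives=82, recent_false_negatives=12):
    print("Sensitivity drift detected: escalate for structured review.")
else:
    print("Performance within the expected range.")
```

In practice such checks would feed the structured evaluations and issue-reporting protocols the ESR calls for, rather than acting as an automated gate on their own.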
 

The ESR’s recommendations provide a comprehensive roadmap for the responsible implementation of the EU AI Act in radiology. By prioritising AI literacy, human oversight, clear risk classification, robust data governance, transparency and post-market monitoring, the ESR aims to ensure that AI technologies integrate safely and effectively into medical imaging workflows. As AI adoption grows, it is imperative to strike a balance between innovation and regulatory compliance, ensuring that AI tools enhance diagnostic accuracy and efficiency without compromising patient safety.
 

Collaboration with regulatory bodies, healthcare providers and industry stakeholders will be essential in shaping AI policies that promote ethical, effective and transparent AI deployment in radiology. In the long term, a structured and evidence-based regulatory approach will be key to optimising AI’s potential while maintaining the highest standards of patient care.

 

Source: Insights into Imaging

 


References:

Kotter E, D’Antonoli TA, Cuocolo R et al. (2025) Guiding AI in radiology: ESR’s recommendations for effective implementation of the European AI Act. Insights Imaging, 16:33.



