Artificial intelligence is increasingly embedded in medical imaging workflows, with software-based tools now routinely supporting image analysis, detection and workflow prioritisation. While adoption has accelerated, familiarity with the regulatory obligations governing these technologies has lagged behind clinical use. European Union rules applicable to artificial intelligence as medical devices were not originally designed with adaptive algorithms in mind, creating uncertainty for both developers and clinical users. New horizontal legislation has added further obligations without fully resolving imaging-specific questions.
Against this backdrop, post-market surveillance has emerged as a critical mechanism to ensure that AI systems remain safe, effective and clinically relevant after deployment. Although formal legal responsibility rests primarily with manufacturers, radiologists and imaging departments play an active role in monitoring performance, reporting incidents and contributing clinical feedback. Recent consensus recommendations developed within the European radiology community seek to clarify responsibilities, close awareness gaps and establish practical approaches to post-market oversight in everyday clinical settings.
A Complex Regulatory Framework for AI Medical Devices
Post-market monitoring of AI-enabled imaging software in the European Union is governed by a layered legal framework combining sector-specific medical device law with horizontal artificial intelligence regulation. The Medical Device Regulation establishes requirements for safety, performance and clinical oversight across the entire device lifecycle, including mandatory post-market surveillance and post-market clinical follow-up. However, it does not contain provisions tailored specifically to adaptive or data-driven algorithms. The more recent EU AI Act introduces obligations that apply to all high-risk AI systems, including those that qualify as medical devices or function as safety components within regulated products.
Together, these instruments require continuous oversight rather than one-off validation. Post-market surveillance is defined as an ongoing process through which manufacturers proactively collect and review real-world experience, assess safety and performance trends and update risk management documentation where necessary. For AI systems, this obligation takes on added importance because algorithmic performance may change over time due to shifts in patient populations, evolving clinical practice or changes in data inputs and interoperability. The regulatory framework therefore places strong emphasis on longitudinal monitoring and timely detection of degradation, bias or unexpected behaviour.
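The longitudinal monitoring described above can be illustrated with a minimal sketch: tracking a detection tool's rolling sensitivity on confirmed cases and flagging when it drifts below an agreed baseline. The function name, window size and tolerance here are illustrative assumptions for exposition, not values drawn from the MDR or the AI Act.

```python
from collections import deque

def make_drift_monitor(baseline_sensitivity, tolerance=0.05, window=200):
    """Track rolling sensitivity over the last `window` ground-truth-positive
    cases and flag when it falls more than `tolerance` below baseline.
    All thresholds are illustrative, not regulatory values."""
    recent = deque(maxlen=window)  # keeps only the most recent cases

    def record(ai_flagged: bool, ground_truth_positive: bool):
        # Only ground-truth-positive cases contribute to sensitivity.
        if ground_truth_positive:
            recent.append(1 if ai_flagged else 0)
        if len(recent) < window:
            return None  # not enough data yet for a stable estimate
        rolling = sum(recent) / len(recent)
        degraded = rolling < baseline_sensitivity - tolerance
        return {"rolling_sensitivity": rolling, "degraded": degraded}

    return record

# Example: a tool validated at 90% sensitivity, reviewed over 50 cases
record = make_drift_monitor(baseline_sensitivity=0.90, window=50)
status = None
for _ in range(50):
    status = record(ai_flagged=True, ground_truth_positive=True)
print(status)  # all positives detected: rolling sensitivity 1.0, no flag
```

In practice such a check would run on institutionally curated feedback data rather than per-reader input, but the principle — a continuous statistic compared against the validated baseline — is the same one the regulations' emphasis on degradation detection implies.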
Shared Responsibilities Between Providers and Deployers
While manufacturers retain primary responsibility for establishing and maintaining post-market surveillance systems, deployers such as hospitals, imaging departments and individual clinicians are assigned explicit duties. Under the AI Act, the manufacturer typically also acts as the system's provider: providers must implement surveillance processes proportionate to the device's risk class, investigate incidents, report safety issues to competent authorities and maintain systems capable of detecting non-serious trends. They are also required to analyse performance data continuously and verify ongoing compliance with legal obligations relating to robustness, transparency and risk mitigation.
Deployers, by contrast, are expected to use AI systems in accordance with instructions, ensure appropriate human oversight and monitor outputs for unexpected results. They must also contribute to incident reporting and retain system logs for defined periods to support traceability and audits. Clinical feedback is therefore an essential component of effective surveillance, supporting both reactive incident management and proactive performance monitoring. Clear delineation of roles and cooperation between providers and deployers is critical to ensuring that surveillance systems reflect real-world use conditions, including interactions with other clinical software and workflow components.
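The log-retention duty mentioned above can be met with simple structured records. The sketch below shows one way to append an auditable JSON-lines entry per AI result; the field names and their contents are illustrative assumptions, and real deployments would follow vendor instructions and institutional data-protection guidance.

```python
import json
import datetime

def log_ai_output(path, study_id, model_version, output_summary, user_action):
    """Append one JSON-lines record per AI result so it can be audited later.
    Field names are illustrative, not taken from any regulation or vendor spec."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "study_id": study_id,            # pseudonymised identifier, not patient data
        "model_version": model_version,  # needed to trace behaviour across updates
        "output_summary": output_summary,
        "user_action": user_action,      # e.g. accepted / overridden / flagged
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Recording the model version and the reader's action alongside each output is what makes the log useful for traceability: it lets an auditor reconstruct which software release produced a result and whether clinicians accepted or overrode it.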
Consensus Recommendations to Standardise Practice
To address gaps in awareness and operational guidance, a group of European imaging experts developed a set of consensus recommendations through a structured Delphi process. These recommendations focus on practical implementation of post-market surveillance and clinical feedback systems for AI medical devices in imaging. They highlight persistent unfamiliarity among imaging professionals with regulatory requirements and emphasise the need for accessible surveillance platforms that allow deployers to review performance data and report relevant events.
The recommendations advocate institutional rather than ad hoc data collection, favouring semi-automated systems managed at departmental or hospital level. They also call for shared access to aggregated performance data, provided in compliance with data protection rules, to support informed clinical use. Surveillance systems should be available from the moment a device is deployed and updated whenever new functions are introduced. Beyond continuous monitoring, periodic performance reviews are encouraged to help detect degradation over time.
Additional guidance addresses the need for baseline accuracy metrics, including uncertainty measures, to be visible to deployers, and for interoperable surveillance standards that reduce fragmentation across multiple vendors and platforms. Mechanisms for recording user feedback and sharing relevant information within institutions are also highlighted as essential for effective clinical follow-up and timely response to AI-related issues.
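One common way to express the uncertainty measures mentioned above is a confidence interval around a reported metric. The sketch below uses the Wilson score interval for a sensitivity estimate; this is a standard statistical method chosen here for illustration, not one mandated by the recommendations.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a proportion such as sensitivity.
    z=1.96 gives an approximate 95% interval.
    Returns (point_estimate, lower_bound, upper_bound)."""
    if trials == 0:
        raise ValueError("need at least one trial")
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return p, centre - half, centre + half

# Example: 45 of 50 known-positive cases detected during validation
sens, lo, hi = wilson_interval(45, 50)
print(f"sensitivity {sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Reporting the interval alongside the point estimate tells deployers how much the headline figure could move on a modest validation set — a sensitivity of 0.90 measured on 50 cases carries far more uncertainty than the same figure measured on 5 000.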
As AI medical devices become embedded in routine imaging practice, post-market surveillance has moved from a regulatory formality to a central component of patient safety and clinical governance. European regulations establish the principle of continuous oversight but leave important operational questions unanswered for imaging professionals. Consensus recommendations from the radiology community provide a practical reference point, clarifying shared responsibilities and outlining systems that support transparency, accountability and clinical feedback. These principles offer a foundation for safer deployment, sustained performance monitoring and more consistent integration of AI tools into medical imaging workflows.
Source: Insights into Imaging