Emerging technologies, particularly artificial intelligence (AI), are often declared transformative for healthcare, with hopes, expectations, and concerns surpassing those attached to electronic health records (EHRs), digital health tools, and telemedicine. The FDA has proactively prepared for AI’s integration into healthcare and biomedical product development, yet it faces distinct challenges in regulating this fast-evolving field.
A recent report reviews the FDA’s history of AI regulation and sets out ten key concepts guiding the agency as it refines its regulatory approach.
Since approving its first AI-enabled device, PAPNET, a cervical cancer detection tool, in 1995, the FDA has approved around 1,000 AI-enabled medical devices, mostly in radiology and cardiology. The agency has also received a growing number of submissions for AI-enhanced drug development applications, particularly in oncology and mental health, where AI supports areas such as drug discovery, trial design, and dosage optimisation.
In 2021, the FDA introduced a five-point action plan for AI and machine learning (ML) in medical devices and issued guidance on clinical decision-support software. Key areas of FDA focus include fostering collaboration, harmonising standards, advancing innovative regulatory frameworks, and conducting research on AI performance to ensure safe and effective applications in healthcare.
To keep pace with rapid AI advances while balancing safety with innovation, the FDA employs a flexible, science-based regulatory scheme. This includes a “total product life cycle” approach and experimental programmes such as the Software Precertification Pilot. Given the vast range of AI applications, the FDA uses a risk-based regulatory spectrum: lower-risk models (e.g., administrative tools) are often unregulated, while high-risk applications (e.g., AI in critical devices such as defibrillators) require stringent oversight. Mid-range applications, such as clinical decision-support tools, are regulated according to the risk posed when their recommendations lack a clear mechanistic explanation.
One example is the Sepsis ImmunoScore, which calculates sepsis risk based on EHR data. Designated as a Class II device, it is subject to special controls addressing risks like model bias, clinician overreliance, and algorithmic failure. These controls include thorough testing, hazard analysis, labelling, and ongoing performance monitoring to ensure safety and reliability.
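To make the idea of an EHR-driven risk score concrete, the sketch below shows, in Python, how such a model might combine a handful of vital-sign and laboratory features into a probability. The feature names, weights, and form of the model are purely illustrative assumptions, not the actual Sepsis ImmunoScore algorithm, which is proprietary.

```python
# Hypothetical illustration only: the real Sepsis ImmunoScore algorithm is proprietary.
# Feature names, weights, and the intercept below are invented for clarity.
import math

ILLUSTRATIVE_WEIGHTS = {
    "heart_rate": 0.02,        # beats/min
    "respiratory_rate": 0.05,  # breaths/min
    "temperature_c": 0.30,     # degrees Celsius
    "lactate_mmol_l": 0.60,    # serum lactate
    "wbc_10e9_l": 0.08,        # white blood cell count
}
ILLUSTRATIVE_INTERCEPT = -14.0


def sepsis_risk(ehr_features: dict) -> float:
    """Map EHR-derived features to a 0-1 risk estimate via a logistic model."""
    linear = ILLUSTRATIVE_INTERCEPT + sum(
        ILLUSTRATIVE_WEIGHTS[name] * value for name, value in ehr_features.items()
    )
    return 1.0 / (1.0 + math.exp(-linear))


patient = {
    "heart_rate": 118,
    "respiratory_rate": 26,
    "temperature_c": 38.9,
    "lactate_mmol_l": 3.2,
    "wbc_10e9_l": 15.4,
}
print(f"Estimated sepsis risk: {sepsis_risk(patient):.2f}")  # a flag, not a diagnosis
```

The special controls the FDA attaches to such a device target exactly the weak points of this kind of model: biased training data, clinicians relying on the score uncritically, and silent degradation of the model’s validity over time.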
The FDA recognises AI’s transformative potential in drug development and clinical research. Although the agency does not endorse specific practices, it requires AI-integrated medical products to undergo rigorous clinical trials demonstrating that their benefits outweigh their risks. Reviewers need technical expertise to evaluate applications involving AI, such as those using AI for target selection or intervention strategies, so the FDA must maintain a workforce with deep AI knowledge capable of providing guidance to industry.
Generative AI, particularly large language models (LLMs), introduces challenges in healthcare because of unexpected outputs and “hallucinations,” where models produce plausible but inaccurate information. Although the FDA has yet to authorise any LLM-based tools, many applications in diagnostics, treatment, and disease prevention will require regulatory oversight. Even AI “scribes” that summarise medical notes may misinterpret or invent details, making stringent oversight essential.
The dynamic nature of AI models and their sensitivity to context underscore the need for continuous, environment-specific performance monitoring, akin to the monitoring of intensive care patients. Health systems must provide a robust, near-continuous evaluation ecosystem, especially because unmonitored AI models could cause harm. Solutions such as external assurance labs or site-specific validations show promise, but more tools are needed to ensure safe, effective AI performance in clinical settings.
Continuous assessment is also crucial to confirm that AI applications actually improve patient outcomes, yet many health systems lack the infrastructure for such monitoring. Effective AI oversight demands rigour comparable to premarket evaluation, including follow-up of patients, even those who experience adverse outcomes.
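As one minimal sketch of what near-continuous, site-specific monitoring could look like, the Python snippet below tracks a rolling window of model predictions against observed outcomes and raises an alert when local performance drops below a pre-agreed floor. The window size, metric, and threshold are assumptions chosen for illustration, not an FDA-specified method.

```python
# Minimal sketch of site-specific performance monitoring; the metric (rolling
# accuracy of binarised predictions), window size, and alert threshold are
# illustrative assumptions, not a prescribed FDA approach.
from collections import deque


class PerformanceMonitor:
    def __init__(self, window_size: int = 500, min_accuracy: float = 0.80,
                 decision_threshold: float = 0.5):
        self.window = deque(maxlen=window_size)   # most recent prediction/outcome matches
        self.min_accuracy = min_accuracy          # site-agreed performance floor
        self.decision_threshold = decision_threshold

    def record(self, predicted_risk: float, observed_outcome: int) -> None:
        """Store whether the binarised prediction matched the observed outcome."""
        predicted_label = int(predicted_risk >= self.decision_threshold)
        self.window.append(predicted_label == observed_outcome)

    def check(self) -> bool:
        """Return True (and print an alert) if rolling accuracy falls below the floor."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough local data yet to judge performance
        accuracy = sum(self.window) / len(self.window)
        if accuracy < self.min_accuracy:
            print(f"ALERT: rolling accuracy {accuracy:.2f} below floor "
                  f"{self.min_accuracy:.2f}; review model for local drift.")
            return True
        return False
```

In practice the monitored quantity would be a clinically meaningful metric (calibration, sensitivity at a fixed alert rate, downstream outcomes) rather than raw accuracy, and alerts would feed into the external assurance labs or site-specific validations described above.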
AI also poses a regulatory challenge because its effectiveness and safety can depend on context, unlike traditional products, which perform consistently across locations. This context dependence highlights the need for scrutiny and a more robust regulatory approach as AI use in healthcare becomes commonplace.
AI is set to transform supply chain management for FDA-regulated products, which remain vulnerable to shortages and outages. Increasingly complex, global supply chains, often built on “just-in-time” inventory, lack resilience during disruptions such as natural disasters or economic crises. AI could help anticipate and mitigate shortages, especially of generic drugs and low-cost devices, by improving data transparency and response systems. Cybersecurity and robust backup systems are also crucial to protect these AI-dependent processes.
The FDA and other regulatory agencies aim to protect public trust by upholding high standards for AI applications, fostering responsible practices, and mitigating misleading claims. It is in the interest of regulated industries, academia, and the FDA to identify and address irresponsible uses of AI. All sectors involved must collaborate to create and refine tools that ensure the ongoing safety and efficacy of AI in healthcare, with a strong focus on patient health outcomes. The FDA will continue to lead in this area, but the success of AI in healthcare will rely on the collective responsibility of all stakeholders.
Source: JAMA