Artificial intelligence (AI) is reshaping many industries, and healthcare is among the areas being transformed most profoundly. The growing reliance on AI for clinical decision-making and operational workflows presents significant opportunities for improved patient care and efficiency. However, alongside these benefits come risks relating to fairness, reliability, and ethics. The "Fair, Useful, and Reliable AI Model" (FURM) assessment framework, developed by Stanford Health Care’s Data Science Team, addresses these concerns by offering a structured, comprehensive evaluation process for integrating AI into healthcare systems. A recent article published in NEJM Catalyst outlines the rationale for the FURM framework and examines its stages, which are designed to ensure that AI systems deployed in healthcare are not only technically proficient but also ethically sound and financially sustainable.

Defining the Problem and Use Case

The first stage of the FURM framework centres on defining the problem the AI model is designed to solve and how its implementation would affect patients, staff, and the healthcare system. This stage includes an intake session with stakeholders to discuss the model’s intended use, the clinical need it addresses, and the ethical and financial aspects of its deployment.

At this stage, it is critical to outline what problem the AI system aims to solve and who benefits from its implementation. The FURM framework places particular emphasis on the ethical dilemmas that may arise. For example, while an AI model designed to predict readmissions might improve resource allocation, it could have unintended consequences for specific patient groups, such as marginalised communities. Considering these ethical concerns early in the process helps mitigate unintended negative impacts on patient care or workforce dynamics. Additionally, the financial feasibility of integrating the AI model into existing workflows is evaluated, ensuring that resources are adequately managed and that the AI system can be sustained in the long term.
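To make the readmission example concrete, the sketch below shows one way a subgroup check of this kind might look in practice. It is a minimal illustration, not code from the FURM publication: the group labels, data, and disparity threshold are all hypothetical. It computes the model’s sensitivity (recall) for each patient group and flags large gaps for review, since a model that misses readmissions more often in one group would allocate follow-up resources unevenly.

```python
# Hypothetical subgroup check for a readmission model: compare sensitivity
# (recall) across patient groups to surface possible disparities. The data,
# group labels, and the 0.05 gap threshold are illustrative only.
from collections import defaultdict

def recall_by_group(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    tp = defaultdict(int)    # true positives per group
    pos = defaultdict(int)   # actual readmissions per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

predictions = [  # (group, actually readmitted?, model predicted readmission?)
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
recalls = recall_by_group(predictions)
gap = max(recalls.values()) - min(recalls.values())
print(recalls, f"recall gap: {gap:.2f}")
if gap > 0.05:  # illustrative threshold for triggering an ethics review
    print("Flag: the model misses readmissions unevenly across groups.")
```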
 

Technical and Organisational Integration

Once the use case and problem have been thoroughly assessed, the next step is to examine how the AI model will be integrated into the healthcare system’s infrastructure. This includes evaluating the technical aspects of deployment, such as data flow, model training, and whether existing IT systems can support the AI model.
 

A significant challenge in healthcare AI deployment is the seamless integration of AI systems into the organisational workflow. The FURM framework helps address this by ensuring that all technical requirements are met and that the model’s integration is both feasible and sustainable. The framework recommends training the healthcare personnel who will interact with the model so that they understand its limitations and strengths. Moreover, the deployment stage emphasises the need for ongoing collaboration between the AI model’s developers and the healthcare staff, ensuring that the technology complements the clinical workflow without adding unnecessary burden. Organisational integration also involves establishing governance processes to monitor the AI’s performance and maintain patient safety and data privacy.
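As one illustration of what such a governance process might record (the FURM article calls for governance but does not prescribe an implementation, so the log format, field names, and file location below are assumptions), each prediction can be written to an append-only audit log with its model version and a hash of the inputs, allowing later review without storing protected health information in the log itself.

```python
# Illustrative governance sketch (not from the FURM paper): record every
# prediction with enough context for later audit. File name and fields are
# hypothetical; inputs are hashed so the log itself holds no PHI.
import datetime
import hashlib
import json

AUDIT_LOG = "model_audit.jsonl"  # assumed log location

def log_prediction(model_version, patient_features, risk_score, clinician_ack):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Store a hash of the inputs rather than the inputs themselves.
        "input_hash": hashlib.sha256(
            json.dumps(patient_features, sort_keys=True).encode()
        ).hexdigest(),
        "risk_score": risk_score,
        "clinician_acknowledged": clinician_ack,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("readmit-model-v1.3", {"age": 67, "prior_admits": 2}, 0.82, True)
```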

Monitoring and Evaluation

Post-deployment, the FURM framework emphasises continuous monitoring and evaluation to ensure that the AI model performs as expected. Monitoring is vital for detecting changes in performance that could affect patient care or clinical outcomes, and AI models often behave differently in real-world settings than they did in controlled development environments. The FURM process therefore includes creating a plan for ongoing evaluation to track how the AI system affects patient outcomes and operational workflows.
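A minimal sketch of what such ongoing performance monitoring might look like is shown below. The rolling window, the pairwise AUC estimate, and the alert threshold are assumptions for illustration rather than details from the FURM paper: the monitor re-estimates discrimination over recent cases as outcomes become known and flags the model when performance drops below its validated baseline.

```python
# Minimal drift-monitoring sketch. The rolling window, pairwise AUC estimate,
# and alert threshold are assumptions for illustration, not FURM specifics.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_auc, window=500, tolerance=0.05):
        self.baseline = baseline_auc          # AUC measured at validation time
        self.recent = deque(maxlen=window)    # latest (score, outcome) pairs
        self.tolerance = tolerance            # allowed drop before alerting

    def record(self, score, outcome):
        """Add a case once its real-world outcome is known."""
        self.recent.append((score, outcome))

    def current_auc(self):
        # Pairwise estimate: P(score of a positive case > score of a negative).
        pos = [s for s, y in self.recent if y == 1]
        neg = [s for s, y in self.recent if y == 0]
        if not pos or not neg:
            return None  # not enough outcomes yet
        wins = sum(p > n for p in pos for n in neg)
        ties = sum(p == n for p in pos for n in neg)
        return (wins + 0.5 * ties) / (len(pos) * len(neg))

    def drifted(self):
        auc = self.current_auc()
        return auc is not None and auc < self.baseline - self.tolerance

# Usage: stream in scored cases as their real-world outcomes mature.
monitor = PerformanceMonitor(baseline_auc=0.78)
monitor.record(0.9, 1)
monitor.record(0.2, 0)
if monitor.drifted():
    print("Performance below validated baseline: escalate for reassessment.")
```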
 

Monitoring also addresses the ethical dimensions of AI usage in healthcare. For instance, how is AI impacting clinician decision-making? Are there unanticipated biases emerging from the system that affect specific patient populations? By establishing a rigorous monitoring framework, healthcare systems can ensure that their AI models are both safe and equitable, responding promptly to any concerns that arise. The FURM framework provides guidelines for periodic reassessments of the AI system’s utility, financial sustainability, and ethical impact, ensuring long-term success and adaptability.
 

Conclusion

Implementing AI in healthcare promises significant advancements in patient care and operational efficiency. However, without proper evaluation, AI systems can present risks related to fairness, reliability, and ethics. The FURM framework, developed by Stanford Health Care, provides a robust and repeatable process for evaluating and integrating AI systems in healthcare. Through its three stages of defining the problem, technical and organisational integration, and ongoing monitoring, the framework ensures that AI models are useful, ethically sound, and financially sustainable. As AI continues transforming healthcare, frameworks like FURM are essential to bridging the gap between technological innovation and practical, ethical healthcare delivery.

Source Credit: NEJM Catalyst


References:

Callahan A, McElfresh D, Banda JM et al. (2024) Standing on FURM Ground: A Framework for Evaluating Fair, Useful, and Reliable AI Models in Health Care Systems. NEJM Catal Innov Care Deliv. 5(10).



