The implementation of artificial intelligence in healthcare holds significant promise for enhancing care delivery, operational efficiency and patient outcomes. However, many health systems face challenges when evaluating vendor-provided AI tools, particularly in determining their long-term value and sustainability. Most existing AI governance frameworks are either overly theoretical or too narrowly focused, offering limited guidance for practical adoption. A real-world, adaptable governance framework is therefore essential for guiding health systems in the early stages of AI solution selection. Such a framework must assess strategic alignment, executive sponsorship, projected impact and potential risks to ensure return on investment (ROI) and support responsible innovation.
Strategic Alignment and Executive Sponsorship
A foundational element of AI governance is ensuring that proposed solutions align with the health system’s strategic priorities. Without alignment, there is a risk of fragmented efforts that do not contribute meaningfully to organisational goals. Strategic alignment streamlines implementation and increases the likelihood of realising expected benefits. Once a solution is aligned, it must be supported by executive sponsorship. This role is typically held by a senior leader who is responsible for championing the AI solution, securing financial and institutional support and addressing potential barriers to adoption. Executive sponsors also play a vital role in facilitating integration into existing clinical and operational workflows and in promoting education and awareness around the solution. Assigning clear responsibility at the executive level ensures continuity, accountability and shared ownership throughout the lifecycle of the AI tool.
Impact and Value Assessment
Beyond technical performance, an AI solution must demonstrate quantifiable impact and value. The Impact and Value Case Assessment includes several domains: executive summary, background and problem statement, solution landscape, description of the proposed AI solution, value proposition with measurable objectives and cost analysis. This structured approach begins by identifying the specific problem the AI tool is intended to address and determining whether the use of AI is justified. In some cases, traditional non-AI alternatives may be more effective, and misidentifying the nature of the problem can lead to inappropriate implementation.
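To make this concrete, the assessment domains could be captured as a structured record so that submissions are complete and comparable across vendors. The sketch below is illustrative only; the class and field names are assumptions, not prescribed by the framework:

```python
# A minimal sketch of an Impact and Value Case record; the class and
# field names are illustrative, not taken from the published framework.
from dataclasses import dataclass, field

@dataclass
class ImpactValueCase:
    executive_summary: str
    problem_statement: str       # background and the specific problem addressed
    ai_justification: str        # why AI, rather than a non-AI alternative, is warranted
    solution_landscape: str      # existing solutions in the market
    proposed_solution: str       # description of the vendor AI tool
    measurable_objectives: list[str] = field(default_factory=list)
    cost_analysis: dict[str, float] = field(default_factory=dict)
```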
The next step involves reviewing existing solutions in the market, assessing the likelihood of success of the proposed tool and estimating its potential benefits. These may include clinical improvements, operational efficiencies or both. Even highly accurate models may not generate measurable improvements unless their outputs are integrated into workflows that support decision-making and action. The assessment also considers the extent to which benefits can be expressed in quantifiable terms, such as reduced length of stay or fewer readmissions, which are essential for projecting ROI.
Costs must be analysed broadly, covering acquisition, validation, deployment and maintenance. The time horizon and scalability of implementation are also important. For example, disruptions during implementation may incur indirect costs that need to be included in the total investment calculation. Taken together, these elements of the Impact and Value Case Assessment provide a comprehensive foundation for decision-makers to determine whether a vendor solution is viable.
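As a worked illustration of the ROI projection this assessment feeds, a simple calculation might compare total cost of ownership over the investment horizon against quantified benefits. Every figure below is hypothetical:

```python
# A hypothetical ROI projection; all figures are invented for illustration
# and carry no data from the framework or the source paper.
costs = {
    "acquisition": 250_000,
    "validation": 60_000,
    "deployment": 90_000,
    "maintenance_per_year": 40_000,
    "workflow_disruption": 30_000,   # indirect cost during implementation
}
years = 3
total_cost = (costs["acquisition"] + costs["validation"] + costs["deployment"]
              + costs["workflow_disruption"]
              + costs["maintenance_per_year"] * years)

# Quantified benefits, e.g. bed-days saved from reduced length of stay
bed_days_saved_per_year = 400
cost_per_bed_day = 900
total_benefit = bed_days_saved_per_year * cost_per_bed_day * years

roi = (total_benefit - total_cost) / total_cost
print(f"Cost: ${total_cost:,}  Benefit: ${total_benefit:,}  ROI: {roi:.1%}")
```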
Comprehensive Risk Assessment
Evaluating the risks associated with AI solutions is essential for responsible deployment and long-term success. A formal risk assessment tool, based on 12 domains, supports this process. Each domain is weighted according to its contribution to overall risk; the domains include cybersecurity, model transparency, performance, scalability, data integrity, ethics, fairness and clinical oversight. The three domains identified as high risk are clinical documentation and decision support; quality and patient safety; and enterprise risk.
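The paper's domain weights are not reproduced here, so the sketch below only illustrates the mechanics of a weighted, domain-based score; the weights, ratings and domain subset are placeholders:

```python
# Illustrative weighted risk scoring. Domain names follow the article
# (a subset of the framework's 12 domains, for brevity); the weights
# and ratings are placeholders, not the framework's published values.
RATING = {"low": 1, "medium": 2, "high": 3}

# Heavier weights on the domains the framework flags as high risk
WEIGHTS = {
    "clinical_documentation_and_decision_support": 0.15,
    "quality_and_patient_safety": 0.15,
    "enterprise_risk": 0.15,
    "cybersecurity": 0.10,
    "model_transparency": 0.10,
    "performance": 0.10,
    "data_integrity": 0.10,
    "scalability": 0.05,
    "ethics_and_fairness": 0.05,
    "clinical_oversight": 0.05,
}

def overall_risk(ratings: dict[str, str]) -> float:
    """Weighted average of per-domain ratings, scaled to 0-1."""
    score = sum(WEIGHTS[d] * RATING[r] for d, r in ratings.items())
    return score / (3 * sum(WEIGHTS[d] for d in ratings))

example = {d: "medium" for d in WEIGHTS} | {"model_transparency": "high"}
print(f"Overall risk: {overall_risk(example):.2f}")  # 0.0 lowest, 1.0 highest
```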
In the model type domain, generative AI is considered high risk due to its susceptibility to hallucinations and inaccuracies. Conversely, models based on structured patient data pose lower risks. The data integrity domain evaluates the transparency of training and testing datasets. Full access is ideal, while limited access raises the level of risk. The performance domain focuses on independent validation; if no evaluation is available or concerns have been raised, the model is classified as high risk.
Interpretability is critical for trust and usability. Tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) help explain individual predictions, making AI outputs actionable and reducing the likelihood of errors. Similarly, scalability and maintenance are evaluated to ensure that updates, monitoring and retraining processes are well defined. Security assessments consider encryption standards such as FIPS 140-3 and compliance with regulations such as HIPAA. Legal and research considerations include vendor use of patient-level data and intellectual property risks.
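As a sketch of how such tooling fits in, SHAP can attribute a prediction to individual input features; the model, features and data below are synthetic stand-ins for a real clinical model:

```python
# A minimal SHAP sketch on a synthetic stand-in for a clinical risk model;
# the features, data and model are invented for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # e.g. age, a lab value, prior admissions
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions,
# which is what makes an individual alert reviewable by a clinician.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])  # contributions for 10 cases
```

Surfacing these attributions alongside the prediction is what allows a clinician to judge whether an alert is plausible before acting on it.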
Ethical considerations and fairness are also essential. The model must not exacerbate disparities or distribute benefits and burdens unevenly. The clinician engagement domain assesses whether healthcare professionals understand the AI’s limitations and retain control over its outputs. Enterprise risk, which includes reputation and compliance exposure, is closely tied to whether AI outputs are stored in the electronic medical record. This practice, if mismanaged, can lead to misinterpretation, legal complications and increased liability.
The documentation and decision support domain considers the model’s degree of autonomy and its influence on care pathways. Highly autonomous models, or models that present predictions without suggested actions, may cause confusion. Quality and patient safety are evaluated based on historical events, incident reporting mechanisms and safeguards against unintended harms. Rigorous pre-deployment assessment and post-deployment monitoring are critical for mitigating safety concerns.
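Post-deployment monitoring can be as simple as tracking a rolling performance metric against a pre-agreed floor and flagging the model for review when it degrades. A minimal sketch, in which the window size and threshold are assumptions a real deployment would set through governance review:

```python
# A minimal post-deployment monitoring sketch: compare rolling accuracy
# against a pre-agreed floor. The window and threshold are placeholders
# that a real deployment would set through its governance process.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window: int = 200, floor: float = 0.85):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.floor = floor

    def record(self, prediction: int, actual: int) -> None:
        self.outcomes.append(int(prediction == actual))

    def degraded(self) -> bool:
        """Flag for review once the window is full and accuracy dips below the floor."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.floor
```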
The proposed framework offers a structured approach to AI governance that addresses current gaps in practical implementation strategies. By focusing on strategic alignment, executive responsibility, impact assessment and risk evaluation, it provides a scalable and systematic method for evaluating vendor AI solutions. The framework accommodates the diverse needs of health systems, regardless of size, and does not require significant new administrative structures. Its flexibility also allows assessments to be conducted sequentially or concurrently based on system resources and solution characteristics.
To ensure sustainability and maximise ROI, health systems must apply the same level of scrutiny to AI investments as they do to other large-scale purchases. The early assessment of both impact and risk helps identify solutions that offer measurable benefits while minimising potential harms. This process also supports continuous improvement, as outcome metrics from past deployments can refine future decision-making. For AI to meaningfully contribute to better care, safety and operational performance, health systems must adopt a rigorous, ROI-focused governance approach early in the pipeline.
Source: npj Digital Medicine