Hospitals are preparing to invest billions in artificial intelligence, but the path from pilot projects to system-wide implementation remains riddled with obstacles. As providers shift from experimental AI deployments to broader adoption strategies, the lack of clear evidence, standardised evaluation methods and foundational infrastructure creates significant friction. Many health systems are still uncertain about which tools deliver tangible value and which fall short of measurable outcomes. Without reliable data and cohesive strategies, scaling AI in healthcare is proving far more complex than anticipated.
Measuring ROI: A Challenge of Definitions and Metrics
One of the primary reasons hospitals hesitate to scale AI is the difficulty in assessing return on investment (ROI). Health system leaders continue to experiment with evaluation methods, balancing traditional quantitative indicators, such as reduced hospital stay durations, with softer metrics like physician satisfaction. Many pilot projects launch without a clear upfront ROI framework, which makes it challenging to justify large-scale investments later.
In some cases, the impact of AI tools is qualitative. For example, ambient listening technologies that reduce the cognitive burden on clinicians may not show immediate financial gains but contribute meaningfully to job satisfaction. With healthcare facing a critical workforce shortage and growing clinician burnout, such qualitative improvements have strategic value. Nonetheless, these soft metrics are difficult to quantify and often overlooked in conventional cost-benefit analyses.
Conversely, AI tools with operational impacts — such as streamlining discharge processes — can be more easily evaluated using data-driven metrics, including patient throughput, readmission rates and time saved per case. Yet even in these scenarios, hospitals must weigh short-term costs against long-term potential, making it critical to estimate ROI before implementation. The absence of a standardised framework across institutions further complicates this task, leaving each system to define success on its own terms.
Scaling AI Across Departments: One Size Does Not Fit All
While a particular AI tool may demonstrate success in one department, scaling it across a health system often reveals new complexities. Clinical workflows vary significantly between specialities, and an approach that works for primary care physicians may be unsuitable for cardiologists or nursing staff. Tailored deployment strategies are essential, but they require time, technical adaptation and input from end-users. Without this customisation, tools that thrived in controlled pilots can stall during expansion.
Beyond workflow compatibility, the lack of enterprise-wide technology infrastructure poses a substantial hurdle. Many hospitals have invested in developing and piloting AI within limited settings but have not built the broader digital foundations required for enterprise-level deployment. Without structured, accessible data and cloud-based operations, even well-designed AI systems may underperform or introduce inefficiencies when rolled out at scale.
Additionally, successful scaling demands significant investment in people and processes. Clinical staff must be trained not only on how to use AI tools effectively but also on understanding the limitations and risks associated with them. As AI evolves rapidly, this education needs to keep pace with emerging use cases, security considerations and ethical guardrails. Without a coordinated strategy for workforce readiness, even promising AI innovations can fail to gain traction within clinical settings.
The Evidence Deficit: A Missing Guidepost for Adoption
Perhaps the most fundamental reason behind AI’s scaling struggles is the scarcity of reliable, real-world evidence. Providers often lack objective data to compare the effectiveness of different tools, creating uncertainty about where to focus investment. Many AI vendors rely on controlled studies or simulated environments that do not reflect real-world complexity. A recent review of over 500 studies involving large language models found that only 5% used real-world patient data, limiting the relevance of most findings.
In addition, the quality and credibility of available evidence vary widely. Research sponsored by vendors may be biased, with results skewed to support commercial objectives. This makes it difficult for health system leaders to separate marketing claims from genuine performance outcomes. Moreover, even when data exists, it may focus on engagement and user satisfaction rather than clinical effectiveness and economic impact — factors that matter most for large-scale adoption.
Organisations like the Peterson Health Technology Institute are beginning to address this gap by producing independent assessments of digital health tools. Their findings demonstrate that some technologies, such as virtual physical therapy platforms, can lower costs and deliver clinical outcomes comparable to in-person care. However, other tools — such as certain diabetes management apps — have been shown to raise costs without delivering better results. These insights highlight the critical need for independent, transparent evaluations based on real-world usage.
Without access to trustworthy evidence, hospitals face the risk of scaling solutions that appear promising but ultimately fail to deliver meaningful improvements. This not only wastes financial resources but can also slow down the broader momentum around AI adoption. By contrast, rigorous evidence can serve as a roadmap, helping providers prioritise technologies with proven value and scalability.
The healthcare sector is at a pivotal juncture in its adoption of artificial intelligence. While enthusiasm and investment are growing, hospitals face a range of barriers that make scaling AI a complex, cautious process. A lack of standardised ROI measurements, variability in departmental needs, weak infrastructure and an absence of credible evidence are stalling progress. To overcome these challenges, health systems must rethink how they evaluate tools, invest in foundational technologies and training, and support independent research to guide decision-making. Only by addressing these core issues can the promise of AI be fully realised across the healthcare enterprise.
Source: MedCity News