HealthManagement, Volume 26 - Issue 2, 2026
As AI tools proliferate in radiology, healthcare leaders face a critical strategic choice: treat AI as scattered point solutions or build it as foundational infrastructure. This article provides a framework for imaging executives to transform AI from pilot projects into sustainable operational capabilities. Drawing on infrastructure management principles, we outline a practical six-layer approach and a 90-day implementation roadmap for organisations ready to move from experimentation to operations.
Key Points
- AI should be managed as core radiology infrastructure, not scattered point tools.
- Poor integration creates operational debt, cognitive overload and mounting maintenance.
- A six-layer stack covers clinical intent, workflow, integration, governance, measurement and continuous improvement.
- A two-speed model supports fast low-risk tests and rigorous validation for clinical AI.
- A 90-day roadmap builds governance, backbone integration and repeatable end-to-end use cases.
The Infrastructure Imperative
Radiology has navigated multiple technology waves. PACS revolutionised image storage. RIS brought order to scheduling and reporting. Enterprise imaging connected previously isolated departments. Each transformation required thinking beyond individual tools to build reliable, governed systems that could scale.
AI represents a similar inflection point, but the risk profile is different. Unlike earlier imaging technologies, AI intersects with interpretation, triage, communication, operations and patient safety simultaneously. Most organisations are still approaching AI tactically, acquiring algorithms without the infrastructure to deploy, monitor and optimise them at scale.
I have watched this pattern repeat: impressive demos, enthusiastic pilots, then stagnation. The issue is not algorithm performance. It is everything else: workflow integration, alert management, version control, clinical accountability, meaningful measurement (pre and post). These are not AI problems – they are infrastructure problems.
The Cost of Fragmentation
Without infrastructure planning, each AI deployment creates operational debt. Different integrations, different dashboards, different governance, different success metrics. Radiologists face cognitive overload. IT teams struggle with mounting maintenance. Executives lack unified visibility into what is actually working.
This explains why organisations report 'AI fatigue' despite having multiple algorithms live. The problem is not too much AI; it is too little integration. We need a better mental model: AI as a managed platform, not a toolbox.
Reframing AI as Utility Infrastructure
Think about the utilities that underpin radiology today. PACS is your imaging repository. RIS/EHR handles workflow and clinical context. Network infrastructure keeps everything running. AI should function as a fourth utility: a decision-support and operations layer held to the same standards.
This reframing changes the conversation from 'Which AI should we buy?' to 'How do we build AI infrastructure clinicians can trust?' Utilities must be available, safe, governed, measured and standardised. AI deserves no less.
The Six-Layer Stack
Building sustainable AI requires thinking in layers, where each builds on the previous one.
Layer 1: Clinical Intent
Many programmes fail here, quietly. Vague goals produce vague results. 'Enhance efficiency' means nothing. 'Reduce time-to-diagnosis for intracranial haemorrhage (ICH) in ED patients by 30 minutes': now that is actionable. That precision transforms AI from a technology project into a clinical improvement initiative.
Layer 2: Workflow Orchestration
AI only works when it fits into existing patterns. Where does it appear? Worklist prioritisation? Contextual cues during reads? Critical result pathways? Quality checkpoints before signing? The key: respect radiologist workflow. Poorly designed integrations create alert fatigue that drives people to ignore AI entirely.
Layer 3: Integration Architecture
DICOM routing, HL7/FHIR context, PACS and EHR hooks, worklist updates, audit logs, identity resolution. Unglamorous, but critical. Fragile integration erodes trust faster than any algorithm problem. Organisations that build standard integration patterns accelerate every subsequent deployment while developing real operational expertise.
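One way to make such integration patterns standard is to express routing as declarative rules that every new model must conform to, rather than bespoke wiring per deployment. The sketch below is illustrative only; the rule fields, model identifier and worklist name are assumptions, not any specific vendor's API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one declarative routing rule per AI model, so every
# deployment reuses the same integration contract instead of a bespoke build.
@dataclass
class AIRoutingRule:
    model_id: str              # e.g. "ich-detect-v2" (illustrative name)
    modalities: list           # DICOM modalities this model accepts
    body_parts: list           # anatomy filters applied before routing
    result_destination: str    # where findings are written back (worklist, PACS)
    notify_on: list = field(default_factory=list)  # events that trigger alerts

def matches(rule: AIRoutingRule, modality: str, body_part: str) -> bool:
    """Decide whether an incoming study should be routed to this model."""
    return modality in rule.modalities and body_part in rule.body_parts

ich_rule = AIRoutingRule(
    model_id="ich-detect-v2",
    modalities=["CT"],
    body_parts=["HEAD"],
    result_destination="ed-priority-worklist",
    notify_on=["positive_finding"],
)

print(matches(ich_rule, "CT", "HEAD"))  # head CTs are routed to the model
print(matches(ich_rule, "MR", "HEAD"))  # other modalities pass through
```

Because the contract is data rather than code, adding a new model means adding a rule, not a new integration project.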
Layer 4: Governance and Safety
AI governance should match what we demand from other clinical systems. Model approval criteria. Defined ownership – clinical, operational and IT. Change control with versioning and rollback. Incident protocols. Bias and drift monitoring. Without this, AI becomes nobody's system. When AI makes information cheap, judgement becomes everything. Governance is formalised judgement.
Layer 5: Measurement and Analytics
'Cases processed' and 'algorithm accuracy' miss the point. Does AI improve actual outcomes? Better metrics: turnaround time by cohort, downstream actions triggered, radiologist burden (clicks, interruptions, cognitive load), safety outcomes, quantified financial impact. Value happens in workflows, not dashboards.
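As a minimal sketch of what 'turnaround time by cohort' could look like in practice, the snippet below aggregates per-study report times; the record fields and cohort labels are assumptions for illustration, not a prescribed schema.

```python
from statistics import median

# Illustrative records: minutes from study completion to final report.
# Field names and the ED/Inpatient cohort split are assumptions.
studies = [
    {"cohort": "ED",        "minutes_to_final_report": 42},
    {"cohort": "ED",        "minutes_to_final_report": 35},
    {"cohort": "Inpatient", "minutes_to_final_report": 180},
    {"cohort": "Inpatient", "minutes_to_final_report": 150},
]

def tat_by_cohort(records):
    """Median turnaround time per cohort, a workflow-level metric."""
    cohorts = {}
    for r in records:
        cohorts.setdefault(r["cohort"], []).append(r["minutes_to_final_report"])
    return {c: median(vals) for c, vals in cohorts.items()}

print(tat_by_cohort(studies))  # {'ED': 38.5, 'Inpatient': 165.0}
```

Comparing these medians before and after an AI deployment, per cohort, tells you far more than aggregate 'cases processed'.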
Layer 6: Continuous Improvement
Unlike traditional software, AI needs active management. Clinician feedback loops before and after AI insights. Performance reviews by service line. Recalibration schedules. Retirement criteria for models that do not deliver. You are building operational muscle to evolve capabilities, not just implement them once.
The Two-Speed Operating Model
We face competing pressures: move fast because competitors are moving, but move carefully because patient safety matters. Infrastructure thinking resolves this through a two-speed approach.
Fast lane: rapid experiments in low-risk areas like operational prioritisation, non-diagnostic automation, internal QC. These create learning opportunities without stringent validation.
Safety lane: rigorous validation for diagnostic or decision-support uses that affect patient outcomes. Higher validation burden protects patients and builds institutional confidence.
You do not slow everything down. You build systems where speed is earned through demonstrated governance.
What Infrastructure Actually Looks Like
When AI functions as infrastructure, you will see:
Single point of entry. No random pilots. There is a transparent path: use case submission, prioritisation criteria, feasibility review, security check, clinical sponsor required, measurement plan required. This prevents pilot sprawl and political whiplash.
Standard integration patterns. New models plug into consistent DICOM routing, metadata expectations, worklist interactions, notification rules. This cuts time-to-value for each deployment.
Trust layer. Radiologists get confidence scoring where appropriate, clear rationale when available, easy paths to disagree, visibility into model behaviour. Most important: AI respects their workflow.
Auditability. AI outputs are clinical data. Log what was shown, what actions were taken, when changes occurred, who reviewed. Essential for safety and medicolegal defensibility.
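A minimal audit record along these lines might capture the fields listed above as an append-only log entry. This is a hedged sketch: the field names, model identifier and reviewer ID are hypothetical, not a standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical minimal audit record for an AI output shown to a clinician:
# what was shown, which model version produced it, what action followed,
# who reviewed it, and when.
@dataclass
class AIAuditEvent:
    study_uid: str      # study the output was attached to
    model_id: str       # model that produced the output
    model_version: str  # exact version, for change control and rollback
    output_shown: str   # what the radiologist actually saw
    action_taken: str   # e.g. "accepted", "overridden", "ignored"
    reviewed_by: str    # accountable clinician
    timestamp: str      # when the event occurred (UTC, ISO 8601)

event = AIAuditEvent(
    study_uid="1.2.840.0000.example",
    model_id="ich-detect",
    model_version="2.1.0",
    output_shown="suspected ICH, flagged to ED priority worklist",
    action_taken="accepted",
    reviewed_by="radiologist-0042",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(event), indent=2))  # one append-only log line per event
```

Writing one such record per AI interaction gives both the safety review and any medicolegal enquiry a complete, replayable trail.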
Platform vs. Products
Every leader eventually faces this: Are we building a platform or collecting products?
Products deliver quick wins but multiply complexity. Platforms require upfront discipline but compound value. A practical middle path: establish a platform 'spine' (governance, integration, monitoring, measurement) and only allow point solutions that conform. The spine is what transforms AI from novelty to operational advantage.
Why Radiology Should Lead
Radiology is uniquely positioned for this. Imaging workflows already run on standardised formats and high-volume digital operations. We live by throughput, TAT, quality metrics and service reliability. And radiologists think in probabilities, edge cases and safety: exactly the mindset needed for responsible AI.
In short: imaging is one of the few domains where true AI infrastructure can be built with clinical-grade rigour. Success here informs enterprise strategy.
90-Day Roadmap
Here is how to start without massive reorganisation:
Days 1-30: Establish governance
Define 3-5 priority use cases tied to enterprise goals (ED flow, quality/safety, capacity). Assign clinical, operational and IT ownership for each. Establish AI change control: versioning, rollback, escalation. These steps prevent chaos as you scale.
Days 31-60: Build the backbone
Standardise integration for DICOM routing, worklist injection, logging. Define measurement templates capturing workflow impact. Create feedback mechanisms for radiologists and staff. These become reusable assets.
Days 61-90: Prove repeatability
Launch 1-2 use cases end-to-end with your governance and integration backbone. Publish monthly scorecards: outcomes, adoption, burden, issues, improvements. Document what you will stop using. Successful infrastructure requires retirement discipline, not just deployment.
The goal is not perfection. It is a repeatable system.
Looking Forward
AI in radiology will expand beyond detection to protocol assistance, appropriateness guidance, priors synthesis, structured reporting, follow-up tracking, peer learning, operational forecasting. This makes infrastructure essential now.
The question will not be 'Should we use AI?' It will be 'Can we deploy, monitor and optimise it safely at scale?' Organisations that answer yes will have built durable advantages through better outcomes, efficiency and clinical experience.
Conclusion
AI in radiology is not a technology purchase. It is an operational capability built through infrastructure investment: governance, integration, measurement, continuous improvement. Get this right, and AI compounds advantages. Get it wrong, and it becomes another layer frontline teams work around, or worse.
Radiology has led healthcare through multiple technology transformations. The opportunity now is to lead on AI by building the infrastructure to deploy it effectively. Organisations that embrace this today will define best practices tomorrow.
Conflict of interest
None.
