Artificial intelligence–enabled decision support is increasingly present in radiology departments, where rising imaging volumes, reporting delays and workflow pressure continue to challenge public hospital services. Commercially available and regulator-approved systems are now embedded in routine practice in several settings, supported by strong evidence of technical accuracy in controlled evaluations. Yet clinical performance alone does not determine whether such tools become trusted, routine components of care. Real-world adoption unfolds within complex organisational environments shaped by workflow design, professional judgement, communication practices and regulatory context. A longitudinal qualitative evaluation conducted in a large public tertiary hospital provides insight into how an AI clinical decision support system was introduced, adapted and used over time, revealing how organisational readiness, technology performance and clinician trust interacted across pre-implementation, rollout and routine use.
Planning and Communication Shape Adoption
Organisational factors dominated early experiences of implementation. Before deployment, staff anticipated limited planning and support, reflecting prior encounters with digital tools introduced with minimal preparation. Expectations of reactive coordination and weak feedback mechanisms contributed to cautious attitudes, even where enthusiasm for innovation existed. During rollout, these concerns were reinforced as formal training and structured communication were limited. Many clinicians reported learning about the system informally or after it had already gone live, while some professional groups were insufficiently informed about its purpose or operation.
Fragmented communication between clinical teams, technical staff and external partners constrained organisational readiness. Although the department was broadly open to innovation, coordination across roles and services was inconsistent. Limited involvement of end users in early stages reduced shared ownership and weakened confidence in the organisation’s capacity to manage change. Over time, some improvements emerged, including more structured monitoring and recognition of the need for formal training. However, high staff turnover, rotating rosters and competing clinical demands continued to undermine continuity. Implementation was widely perceived as reactive rather than anticipatory, shaping how clinicians engaged with the system and limiting the consolidation of early experience into stable routines.
Technology Performance and Workflow Fit
Technological characteristics became the most prominent source of friction during and after rollout. Concerns about system accuracy, reliability and interoperability intensified once the tool was used in daily practice. Excessive outputs and low-value information created cognitive load, requiring clinicians to filter artefacts and marginal findings before returning to standard image review. What had been anticipated as efficiency support often translated into additional steps, duplicated checks and delayed report finalisation.
Performance inconsistency further eroded confidence. False positives, variable sensitivity and processing delays reduced willingness to rely on the system, particularly when outputs did not align with established diagnostic reasoning. Interoperability challenges with existing imaging and reporting infrastructure compounded these issues, generating redundant image series and inconsistent display sequences. Although some technical integration problems eased over time, residual variability persisted, affecting availability and usability.
In response, clinicians developed individual workarounds rather than coordinated workflow redesign. Many used the system selectively, treating it as a background reference or secondary safety check rather than an integrated decision aid. These adaptations preserved local functionality but reinforced uneven use and limited standardisation. The gap between what the technology produced and what clinicians could efficiently apply remained a central barrier to sustained adoption.
Value, Trust and System Context
Perceptions of value evolved across phases. Initially, clinicians anticipated benefits related to workload relief, prioritisation and safety, particularly for supporting trainees in high-volume settings. As practical experience accumulated, assessments became more conditional. While some recognised limited advantages in triage or reassurance, others viewed the system as duplicating effort rather than reducing it. By routine use, value was framed pragmatically and contextually, with modest benefits acknowledged alongside concerns about cost-effectiveness and efficiency.
Trust emerged as a decisive mediator of adoption. Early inconsistencies and information overload weakened confidence, and initial negative experiences continued to shape later engagement even when performance improved. Clinicians remained professionally cautious, retaining manual control over reporting and positioning the AI as an adjunct rather than an authority. Uncertainty about medicolegal accountability reinforced this stance, as responsibility for errors was perceived to remain with the clinician regardless of automated input.
Wider system influences further constrained uptake. Limited guidance from professional bodies, evolving regulatory expectations and funding pressures shaped the environment in which implementation occurred. Differences between public and private sector incentives highlighted structural constraints on sustained investment and workforce stability. Ethical and policy considerations around data use and governance also surfaced, reflecting broader uncertainty about how AI should be integrated responsibly into clinical care.
The experience of implementing AI decision support in radiology illustrates that adoption is shaped as much by organisational and cultural conditions as by technical capability. Clinicians showed willingness to engage, but sustained use depended on early experiences, workflow alignment, clear communication and trust in system performance. Weak planning, fragmented coordination and performance inconsistencies combined to limit integration into routine practice, even as some users adapted the technology as a secondary safety mechanism. These findings underscore the importance of anticipatory implementation strategies, structured training, iterative feedback and clear governance to support trustworthy AI adoption. For healthcare leaders and policymakers, the lessons highlight that moving from pilot deployment to routine use requires coordinated attention to technology, organisation and professional practice.
Source: Journal of Medical Internet Research