Artificial intelligence (AI) holds considerable promise for transforming healthcare through earlier diagnoses, more efficient care delivery and tailored treatments. However, despite rapid technological advances, few AI systems are successfully integrated into clinical settings. This persistent translational gap is driven not by a lack of accuracy but by a fundamental misalignment between how AI systems are designed and how clinicians reason and make decisions. Rather than focusing solely on predictive performance in artificial settings, there is a need to develop AI tools that support the cognitive and epistemic tasks of medical professionals. A sociotechnical approach that respects real-world clinical workflows can bridge this gap. The case of paediatric sepsis illustrates how rethinking the role of AI can lead to more effective and acceptable tools in medical practice.
Prioritising Real-World Impact over Technical Excellence
Many AI models in healthcare are developed with the goal of achieving high accuracy on benchmark tasks. These tasks are often selected for convenience, based on data availability or ease of evaluation, rather than their relevance to clinical practice. Such systems tend to treat the dynamic process of patient care as a series of static classifications, which can result in misleading outputs. For instance, two patients at different stages of recovery or deterioration might receive identical predictions if their data appear similar at a single point in time.
This reductionist approach does not reflect the complexity of real-world decision-making, where patient trajectories evolve and context matters. In contrast, AI tools designed with temporality in mind—capable of forecasting when critical interventions are needed or how a condition might progress—can provide far more useful support. Unfortunately, these more appropriate models are rarely developed or deployed. Focusing on integration into workflows, rather than technical excellence in isolation, is essential if AI is to have genuine clinical impact.
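The contrast between a static snapshot and a trajectory can be made concrete with a toy sketch. Everything here—the variable names, thresholds and weights—is invented for illustration, not drawn from any clinical model:

```python
# Hypothetical illustration: a static "snapshot" rule cannot distinguish
# a deteriorating patient from a recovering one whose vitals happen to
# match at a single point in time. Thresholds are invented.

def snapshot_risk(heart_rate, lactate):
    """Static rule: looks at one moment only."""
    return 1.0 if heart_rate > 120 and lactate > 2.0 else 0.0

def trend_aware_risk(hr_series, lactate_series):
    """Trajectory rule: also considers the direction of change."""
    base = snapshot_risk(hr_series[-1], lactate_series[-1])
    hr_rising = hr_series[-1] > hr_series[0]
    lactate_rising = lactate_series[-1] > lactate_series[0]
    # Escalate when a risky snapshot is paired with worsening trends;
    # discount it when both trends are improving.
    return base + 0.5 if (hr_rising and lactate_rising) else base * 0.5

# Two patients with identical vitals *right now*...
deteriorating = {"hr": [100, 110, 125], "lactate": [1.0, 1.8, 2.5]}
recovering    = {"hr": [150, 140, 125], "lactate": [4.0, 3.1, 2.5]}

# ...receive identical static scores, but different trend-aware scores.
print(snapshot_risk(125, 2.5))                                           # 1.0 for both
print(trend_aware_risk(deteriorating["hr"], deteriorating["lactate"]))   # 1.5
print(trend_aware_risk(recovering["hr"], recovering["lactate"]))         # 0.5
```

The point is not the specific rule but the shape of the input: a model fed only the final row of each series has no way to tell the two patients apart.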
Paediatric Sepsis and the Challenge of Cognitive Support
Paediatric sepsis highlights the limitations of current AI systems. It is a serious, life-threatening condition with high mortality and long-term consequences. Yet it remains poorly understood, particularly in children, whose physiological diversity complicates diagnosis and treatment. Sepsis lacks definitive diagnostic markers, and decisions often depend on clinical suspicion, leading to variability and uncertainty in care.
Doctors must make critical choices under pressure and with incomplete information. Supporting them in this complex environment requires tools that enhance judgement rather than replace it. AI can play a vital role by helping clinicians reason through ambiguous situations, consider competing hypotheses and test various scenarios. For example, tools that simulate patient outcomes under different treatment paths can help doctors anticipate complications and adapt care plans accordingly.
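One way to picture such scenario-testing tools is a simple Monte Carlo "what-if" comparison of care plans. The sketch below is purely illustrative: the states, actions and transition probabilities are made up and carry no clinical meaning.

```python
import random

# Invented transition table: probability that the patient improves at the
# next step, given their current state and the chosen action.
TRANSITIONS = {
    "unstable": {"antibiotics_now": 0.6, "wait_and_monitor": 0.3},
    "stable":   {"antibiotics_now": 0.9, "wait_and_monitor": 0.8},
}

def simulate(action, start="unstable", steps=5, trials=10_000, seed=0):
    """Estimate the probability of ending in a stable state under one action."""
    rng = random.Random(seed)  # fixed seed so repeated runs agree
    stable_endings = 0
    for _ in range(trials):
        state = start
        for _ in range(steps):
            p_improve = TRANSITIONS[state][action]
            state = "stable" if rng.random() < p_improve else "unstable"
        stable_endings += state == "stable"
    return stable_endings / trials

# Compare two candidate care plans before committing to either.
for action in ("antibiotics_now", "wait_and_monitor"):
    print(action, round(simulate(action), 3))
```

A real clinical simulator would rest on validated physiological models rather than a hand-written table, but the interaction pattern is the same: the clinician poses competing plans and inspects the projected outcomes side by side.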
Rather than relying on black-box predictions, clinicians benefit from systems that provide insights they can interrogate and interpret. This approach respects their expertise, allowing them to draw on data-driven support without relinquishing their decision-making autonomy. It also mitigates cognitive biases and promotes consistency, which is especially important in high-stakes, high-variability conditions such as paediatric sepsis.
From Automation to Collaboration
A common flaw in current AI systems is their attempt to operate autonomously, often offering conclusions without context or explanation. These tools are rarely designed to integrate into clinical workflows or account for organisational structures and communication protocols. As a result, they may disrupt existing practices or foster mistrust among users.
A more effective approach treats AI as part of a wider sociotechnical system. Instead of seeking to automate tasks fully, AI should support specific cognitive functions such as information analysis, sense-making and decision evaluation. By providing clinicians with evidence-based insights, alternative scenarios and clear visualisations of risk, AI can enhance human judgement without overriding it.
This collaborative model aligns with the realities of medical decision-making, which often involve teams working under time constraints and uncertainty. AI systems that blend into this environment—supporting reasoning, prompting reflection and encouraging continuous learning—are more likely to be adopted and valued.
Such tools must also be robust, interpretable and tailored to clinical needs. Ante-hoc interpretability, where models are designed to be understandable from the outset, ensures outputs can be trusted and used safely. Rather than demanding that clinicians understand every detail of the underlying algorithms, these models provide reliable support that can be incorporated into practice much like other medical technologies.
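A minimal example of ante-hoc interpretability is a transparent points-based score, where every contribution to the output can be read off directly. The findings, weights and example below are invented for illustration and are not a real paediatric sepsis score:

```python
# Hypothetical points-based model: interpretable by construction, because
# the prediction is just a sum of named, clinician-visible contributions.
WEIGHTS = {
    "tachycardia_for_age": 2,
    "elevated_lactate": 3,
    "altered_mental_state": 2,
    "hypotension_for_age": 3,
}

def score(findings):
    """Return the total points plus an audit trail of which findings drove it."""
    contributions = {k: WEIGHTS[k] for k, present in findings.items() if present}
    return sum(contributions.values()), contributions

total, why = score({
    "tachycardia_for_age": True,
    "elevated_lactate": True,
    "altered_mental_state": False,
    "hypotension_for_age": False,
})
print(total)  # 5
print(why)    # {'tachycardia_for_age': 2, 'elevated_lactate': 3}
```

Unlike a post-hoc explanation bolted onto a black box, the audit trail here is not an approximation of the model: it is the model, which is what allows a clinician to interrogate and, where warranted, override it.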
To realise the full potential of artificial intelligence in healthcare, a shift in priorities is required. Rather than aiming for theoretical perfection or attempting to replace clinicians, AI should be designed to complement and support human reasoning. This approach embraces the complexity of clinical practice and focuses on delivering real-world impact through thoughtful integration.
The case of paediatric sepsis illustrates how AI can assist with difficult diagnostic and treatment decisions when developed with clinical realities in mind. Tools that simulate outcomes, prompt reflection and align with existing workflows can improve decision consistency, reduce errors and enhance patient outcomes.
Ultimately, success in medical AI will not come from outperforming humans but from empowering them. By focusing on sociotechnical integration and cognitive support, artificial intelligence can become a trusted partner in delivering safer, more effective care.
Source: npj digital medicine