The increasing availability of complex health data, coupled with advancing computational capabilities, offers an opportunity to define health and disease states with greater clarity and efficiency. This potential extends to real-time diagnosis and patient management, although the time-critical nature of the ICU presents unique challenges for building effective models.

Although machine learning (ML)-based artificial intelligence (AI) techniques are ubiquitous in modern life, their integration into acute care medicine, especially in ICUs, has been slow and uneven. While numerous papers outline various ML approaches, their practical implementation to assist clinicians has been inconsistent.

Effective application of AI in real-time care for critically ill patients faces significant obstacles. Clinical decision support systems (CDSS) currently cannot replace bedside clinicians in acute and critical care environments for several reasons. These include the immaturity of CDSS in achieving situational awareness, biases in large databases that do not represent target patient populations, and technical challenges in accessing and displaying valid data in a clinically useful manner.


Additionally, the "black-box" nature of many predictive algorithms and CDSS complicates gaining trust and acceptance from the medical community. Logistical challenges in collating and curating real-time multidimensional data streams further hinder these systems. Legal and commercial barriers limit studies addressing fairness and generalisability of predictive models and management tools. These factors underscore the complexity of implementing CDSSs effectively in clinical practice; achieving effective CDSS in critical care medicine requires finding ways to address or work around these barriers.

Healthcare systems collect vast amounts of detailed data from critically ill patients through electronic health records (EHRs), presenting a significant opportunity for data-driven CDSS. However, accessing and sharing this data for secondary use faces legal, ethical, and technical barriers, including privacy concerns and semantic inconsistencies in data representation. Overcoming these obstacles requires balancing privacy protection and data usability through governance policies and technical de-identification measures to meet ethical, legal, and regulatory standards.
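The technical de-identification measures mentioned above can be sketched briefly. The snippet below is illustrative only, not from the article: it pseudonymises a record identifier with a salted one-way hash and shifts all of a patient's timestamps by a single random per-patient offset, hiding true dates while preserving clinically meaningful intervals between events.

```python
import hashlib
import random
from datetime import datetime, timedelta

def pseudonymise_id(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

def shift_dates(timestamps, offset_days: int):
    """Shift every timestamp for one patient by the same offset,
    preserving intervals between events while hiding true dates."""
    return [ts + timedelta(days=offset_days) for ts in timestamps]

rng = random.Random(0)
pid = "MRN-0012345"                         # hypothetical identifier
alias = pseudonymise_id(pid, salt="project-specific-secret")
offset = rng.randint(-180, 180)             # one offset per patient
admissions = [datetime(2023, 5, 1, 8, 30), datetime(2023, 5, 4, 14, 0)]
shifted = shift_dates(admissions, offset)

# Intervals between events survive the shift.
assert shifted[1] - shifted[0] == admissions[1] - admissions[0]
```

In practice the salt must be kept secret, and regulatory-grade de-identification (for example under HIPAA or GDPR) involves far more than hashing and date shifting; this sketch only shows the shape of the idea.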

In addition, social biases in data generation and healthcare delivery must be considered. Current EHRs may not accurately represent diverse patient populations, leading to biases in AI models trained on this data, particularly affecting minority groups. To prevent further marginalisation, regulatory measures are necessary, developed through community engagement and prioritising transparency and accountability in AI development and deployment.
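One concrete way to surface the biases described above is a subgroup performance audit. The sketch below uses made-up labels and predictions for two hypothetical demographic groups; it compares the model's sensitivity (true-positive rate) per group and reports the gap, an "equal opportunity"-style check.

```python
def sensitivity(y_true, y_pred):
    """True-positive rate: of the actual positives, how many were caught."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else float("nan")

# (group, true label, model prediction) -- illustrative data only
cohort = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]

by_group = {}
for group, t, p in cohort:
    labels, preds = by_group.setdefault(group, ([], []))
    labels.append(t)
    preds.append(p)

rates = {g: sensitivity(t, p) for g, (t, p) in by_group.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, f"equal-opportunity gap: {gap:.2f}")
```

A large gap flags that the model misses true cases far more often in one group, exactly the kind of disparity that transparency and accountability measures are meant to catch before deployment.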

Situational awareness (SA) is crucial for decision-making in medicine, where lapses can lead to safety incidents. Heavy workloads and fatigue can hinder SA, while experience can enhance it. Well-designed AI-CDSS should improve SA by providing essential information quickly and with minimal cognitive effort. User-centred design is important for successful implementation, as it enhances staff acceptance and trust. Rigorous evaluation frameworks like the DECIDE-AI guideline are crucial for assessing AI-CDSS performance and safety. Additionally, human factor evaluations are essential but often overlooked in clinical AI studies. 

While vendor-provided AI solutions offer convenience, they may lack transparency and customisation. Social challenges include understanding user needs and workflows, ensuring trust and adoption, and addressing concerns about model explainability. Evaluation frameworks from implementation science can guide CDSS evaluations, with pragmatic trial designs offering advantages. Overall, successful implementation of CDSS in critical care requires addressing technical, social, and evaluative challenges to ensure their effectiveness and acceptance.

Integrating AI into healthcare requires meticulous planning, stakeholder involvement, validation, and continuous monitoring. A dynamic approach, including regular assessment and refinement of AI technology, aligns it with evolving healthcare needs. 
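Continuous monitoring can be made concrete with a drift statistic such as the Population Stability Index (PSI), which compares the distribution of a model input at training time with a recent post-deployment sample. The feature values and the ~0.25 rule of thumb below are illustrative assumptions, not from the article.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a reference (training-era)
    sample and a recent sample of the same input feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bin_fractions(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        # Floor at a tiny value so the log term is always defined.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [70, 72, 75, 80, 85, 90, 95, 100]       # e.g. heart rate at training time
recent = [90, 95, 100, 105, 110, 112, 115, 120]     # post-deployment sample
score = psi(reference, recent)
# A PSI above roughly 0.25 is a common rule of thumb for significant drift.
print(f"PSI = {score:.2f}")
```

Running such a check on each model input at a regular cadence gives the "regular assessment and refinement" loop a measurable trigger: when drift exceeds the agreed threshold, the model is re-validated or retrained.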

The advancement of AI, particularly with the emergence of large language models, has sparked discussions about its promises and risks in society and healthcare. Various governing bodies worldwide are drafting regulations that are expected to be formalised in the next few years, such as reports from the World Health Organization and the European Union on AI ethics and governance in healthcare. While these principles align with effective healthcare delivery, AI introduces novel challenges due to its reliance on rapidly evolving and complex algorithms.

The rapid growth of AI is reshaping industries and labour markets, with a significant portion of the global workforce expected to require AI upskilling or reskilling. In healthcare, adopting AI presents an opportunity to revolutionise patient care and research. However, there is a shortage of AI-literate medical professionals, highlighting the need for accessible and scalable AI training programmes. Specifically, the future ICU workforce will require specialised AI critical care training focusing on conceptual frameworks, model interpretation, and understanding issues like bias and fairness. 

AI-based CDSSs are evolving and becoming integral to healthcare. It is essential to guide their use responsibly and to continue their development. As these systems become more prevalent, we must ensure they are used ethically and effectively to enhance patient care and clinical outcomes.


Source: Critical Care




Pinsky MR, Bedoya A, Bihorac A et al. (2024) Use of artificial intelligence in critical care: opportunities and obstacles. Crit Care. 28(1):113. 
