ICU Management & Practice, Volume 25 - Issue 5, 2025
Artificial intelligence presents an unprecedented opportunity for critical care. But crucial questions about implementation, safety and responsibility demand immediate attention.
Introduction
Artificial intelligence (AI) has become ubiquitous across industries, with seemingly every technology now touting AI capabilities amongst its features. The exponential growth of AI-related publications, including a 36-fold increase in medical AI research between 2000 and 2022, reflects this transformation (Shi et al. 2023).
Within healthcare, and particularly in intensive care units (ICUs), this revolution promises to reshape how we deliver care. However, beneath the enthusiasm lies a more complex reality that demands critical examination.
As an indicator of growth in the field, we can observe the current landscape of AI-enabled medical devices. The United States Food and Drug Administration (FDA) approved over 235 AI-enabled medical devices in 2024 (Figure 1), with the majority focusing on radiology applications (FDA 2025). Whilst approval rates appeared to plateau in recent years, insufficient evidence exists to determine whether this represents a genuine trend shift or merely a temporary pause. Importantly, the FDA's list includes only devices explicitly identified through AI-related terminology in summary descriptions, suggesting the actual number of AI-enabled technologies in clinical use may be considerably higher.
Regulatory frameworks are rapidly evolving to address this technological transformation. In January 2025, the FDA published its Draft Guidance on Artificial Intelligence-Enabled Device Software Functions, providing comprehensive lifecycle management and marketing submission recommendations. Concurrently, the European Union's Artificial Intelligence Act (Regulation EU 2024/1689) established specific obligations for AI technology providers, creating the world's first comprehensive legal framework for AI systems (European Parliament and European Council 2024). These regulatory developments signal growing recognition of both AI's potential and its risks.

Applications in Critical Care
Several AI applications show particular promise for ICU settings, each addressing distinct clinical challenges. Below, we outline key domains with concrete examples.
Image and Waveform Processing
Machine learning algorithms enable classification, segmentation and quantification tasks that were previously time-intensive or operator-dependent. Portable radiography systems now exist that identify and mark pneumothorax immediately after image acquisition; some commercial solutions simultaneously detect endotracheal tube position and measure the distance to the carina. These capabilities exemplify how AI can provide real-time support at the bedside, potentially reducing time to intervention in critical situations (Lotano et al. 2000).
Large Language Models and Clinical Documentation
Large language models (LLMs) present opportunities for processing electronic health record information to facilitate documentation tasks. Potential applications include history summarisation, audio transcription and automated report generation. However, the integration of LLMs into clinical workflows requires careful consideration of accuracy, liability and the potential for automated systems to perpetuate or amplify existing documentation biases (Liu et al. 2024).
Predictive Modelling and Phenotyping
Machine learning enables sophisticated prediction models using supervised learning techniques. One example is a system designed to identify patients at risk of developing sepsis in hospital wards (Bhargava et al. 2024). Clustering algorithms, usually unsupervised, can identify patient phenotypes not previously evident through traditional analytical methods, potentially revealing subpopulations that respond differently to treatments (Seymour et al. 2019; Åkerlund et al. 2022). These capabilities could enable more personalised therapy. As with traditional statistical models, validation across diverse populations remains essential.
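As a minimal illustration of the unsupervised approach described above, the sketch below clusters a synthetic cohort with a small k-means implementation. The feature profiles (standardised heart rate, lactate and mean arterial pressure) and the two latent phenotypes are entirely hypothetical, invented for this example rather than drawn from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort of 100 patients with three standardised features
# (hypothetical: heart rate, lactate, mean arterial pressure).
# Two latent phenotypes with distinct feature profiles.
phenotype_a = rng.normal(loc=[-1.0, -1.0, 1.0], scale=0.3, size=(50, 3))
phenotype_b = rng.normal(loc=[1.0, 1.5, -1.0], scale=0.3, size=(50, 3))
X = np.vstack([phenotype_a, phenotype_b])

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's-algorithm k-means, for illustration only."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each patient to the nearest current centre.
        dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned patients
        # (keep the old centre if a cluster ends up empty).
        centres = np.array([
            X[labels == j].mean(axis=0) if (labels == j).any() else centres[j]
            for j in range(k)
        ])
    return labels, centres

labels, centres = kmeans(X, k=2)
```

In practice one would use a validated library implementation, choose the number of clusters with stability or silhouette analysis, and then ask the clinical question the algorithm cannot answer: whether the recovered groups differ in outcomes or treatment response.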
Drug Development
AI already influences a substantial share of pharmaceuticals reaching human clinical trials. Algorithms such as AlphaFold, which predicts protein structures, accelerate early-phase drug development processes (Abramson et al. 2024). This influence on medication development ultimately affects which therapeutic options become available for critically ill patients. The FDA has reported hundreds of regulatory submissions for drugs using AI in their development, a 10-fold increase since 2020 (Warraich et al. 2025).
Medical Training
AI-powered tools for accessing medical evidence and their application in medical training represent particularly transformative applications (Abdulnour et al. 2025). These systems can significantly influence how new professionals learn and how current practitioners maintain competency. However, this influence demands scrutiny regarding potential biases in training data and the risk of creating knowledge gaps (Kosmyna et al. 2025).
Implementation Challenges
Several issues complicate AI's integration into critical care practice, extending beyond the typical concerns raised by any new technology.
Concentration of Capability
Current AI development requires substantial computational resources and expertise, concentrating power within large technology companies. These organisations' objectives may not align with patients' interests or healthcare systems' needs, creating potential conflicts between profit motives and clinical priorities. Expert consensus statements have warned of this risk (Cecconi et al. 2025).
Data Representativeness
Training data may not accurately represent relevant populations due to temporal, geographical or demographic variations. Models trained predominantly on data from specific populations may perform poorly when applied to underrepresented groups (Omiye et al. 2023). This challenge proves particularly acute in critical care, where patient heterogeneity already complicates clinical decision-making. Notably, this limitation is not intrinsic to AI: generalising results obtained in specific populations is a longstanding challenge across many fields of medical science.
Black Box Problem
Most AI systems function as closed, non-deterministic models. These characteristics complicate evaluation and raise questions about accountability. Although research efforts aim to understand AI systems' internal reasoning processes, current technologies often remain opaque. This opacity becomes problematic when clinicians must explain treatment decisions.
Hallucinations
Generative AI systems exhibit a specific error type: producing confident but fabricated information, termed hallucinations. In critical care settings, where decisions carry immediate life-threatening implications, even occasional hallucinations represent unacceptable risks without robust verification systems.
Training Phase Dependency
An AI model's behaviour depends on its training methodology and training data, which have a fixed cut-off: data generated after that point is unknown to the model. Similarly, retraining can significantly alter a system's behaviour. This dependency makes it difficult to maintain consistent performance after system updates and raises questions about how to efficiently validate modified systems.
Security and Data Poisoning Risks
Cybersecurity threats are a pressing concern. LLMs can become a new attack vector, especially as their operational capabilities expand (Greshake et al. 2023). Another opportunity for malicious actors is to introduce corrupted training data that alters subsequent models' behaviour (Souly et al. 2025). This vulnerability extends beyond individual institutions, as data sharing—essential for developing robust models—simultaneously increases poisoning attack surfaces.
Critical Care-Specific Challenges
The ICU environment presents unique complications for AI implementation that extend beyond general healthcare applications.
Data Volume and Complexity
Critical care generates enormous quantities of high-frequency, multimodal data. The ICU Cockpit study in Switzerland estimated a median of 2.09 GB of data per patient when incorporating continuous physiological variables (Boss et al. 2022). This data volume, combined with real-time processing requirements and population heterogeneity, creates substantial technical challenges for AI systems operating in ICU environments.
Clinical Resistance to Change
Healthcare professionals' resistance to adopting new technologies, particularly those perceived as threatening clinical autonomy or expertise, represents a significant implementation barrier (Borges do Nascimento et al. 2023). Successful AI integration requires understanding and addressing these concerns through transparent communication, adequate training and meaningful clinician involvement in system design and evaluation. Aligning the interests of patients, healthcare staff and developers will be key to successful implementation.
Data Sensitivity
Using patient data raises critical privacy and security concerns. Currently, state-of-the-art models typically require remote processing of data, and the legal and security implications vary across countries. Institutions must balance the benefits of centralised AI capabilities against the risks of transmitting sensitive patient information to external servers, or else invest in local infrastructure.
The Imperative for Clinical Engagement
Many healthcare organisations separate technology acquisition decisions from clinical practice, creating risks and missed opportunities. Critical care clinicians must actively engage in AI adoption processes for several compelling reasons.

Firstly, clinicians remain responsible for patient outcomes, regardless of the legal liability frameworks eventually applied to AI system providers. This responsibility cannot be delegated to technology vendors or hospital administrators unfamiliar with ICU practice realities.

Secondly, these technologies will fundamentally alter how intensivists work, learn and think. Recent research examining the cognitive impacts of AI interaction suggests that passive reliance on AI systems may diminish critical thinking skills over time (Kosmyna et al. 2025). Clinicians must proactively shape AI integration to enhance rather than replace clinical reasoning.

Thirdly, the window for meaningful influence narrows as systems become entrenched in clinical workflows. Early engagement enables clinicians to establish evaluation criteria, implementation safeguards and performance monitoring frameworks aligned with patient care priorities.
Practical Approaches for Critical Care Teams
ICU clinicians and leaders can adopt several strategies to navigate AI integration responsibly.
Promote AI Literacy
Understanding fundamental AI concepts—including machine learning types, training processes, validation methodologies and common failure modes—enables more informed technology evaluation. This knowledge need not reach data scientist levels but should suffice for asking pertinent questions about proposed systems.
Establish Rigorous Evaluation Frameworks
Demand comprehensive testing before implementation, with particular attention to performance across relevant patient subpopulations and in the local setting. Evaluation should extend beyond accuracy metrics to encompass usability, workflow integration and unintended consequences. For example, documentation assistance systems should be assessed not only on summary quality and generation time but also on the time required to review outputs, error rates and the potential for introducing new documentation patterns that obscure clinical reasoning.
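A simple way to make subpopulation evaluation concrete is to stratify a core metric by subgroup rather than reporting a single aggregate figure. The sketch below computes per-subgroup sensitivity for a hypothetical alert system; the field names, groupings and toy records are illustrative assumptions, not the output of any real product.

```python
from collections import defaultdict

def subgroup_sensitivity(records):
    """Sensitivity (true-positive rate) of an alert system per subgroup.

    `records` is a list of (subgroup, alert_fired, event_occurred) tuples;
    the schema and groupings are hypothetical, chosen for illustration.
    """
    tp = defaultdict(int)  # events correctly alerted, per subgroup
    fn = defaultdict(int)  # events missed, per subgroup
    for group, alert, event in records:
        if event:
            if alert:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Toy data: true events in two hypothetical settings.
records = [
    ("ward", True, True), ("ward", True, True), ("ward", False, True),
    ("icu", True, True), ("icu", False, True), ("icu", False, True),
]
result = subgroup_sensitivity(records)
# ward: 2/3 of true events alerted; icu: 1/3 — a gap a pooled metric would hide
```

The same stratification applies to specificity, calibration or alert burden; the point is that a system acceptable on average may still underperform in exactly the subgroup where it matters most.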
Implement Continuous Monitoring
AI system performance requires ongoing assessment rather than one-time validation. Establish processes for tracking relevant metrics, including clinical outcomes, efficiency gains, error patterns and user satisfaction. This monitoring should incorporate mechanisms for detecting performance degradation that might result from population drift, system modifications or changing clinical practices.
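One common, lightweight way to detect the population drift mentioned above is the Population Stability Index (PSI), which compares the distribution of a model's inputs or risk scores at validation time against the live distribution. The sketch below is a minimal illustration on synthetic scores; the drift magnitude is invented, and the usual PSI thresholds (below 0.1 stable, above 0.25 warranting investigation) are rules of thumb rather than standards from any cited source.

```python
import numpy as np

def psi(expected, observed, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    # Bin edges are fixed from the baseline so both samples are comparable.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    o_counts, _ = np.histogram(observed, bins=edges)
    # A small floor avoids division by zero and log(0) in empty bins.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    o_pct = np.clip(o_counts / o_counts.sum(), 1e-6, None)
    return float(((o_pct - e_pct) * np.log(o_pct / e_pct)).sum())

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)  # e.g. risk scores at validation
drifted = rng.normal(1.0, 1.0, 5000)   # the same scores after population drift
stable_psi = psi(baseline, baseline[:2500])  # near zero: same distribution
drift_psi = psi(baseline, drifted)           # large: clear shift
```

Computed on a rolling window of live data and plotted over time, such an index gives an early, model-agnostic warning that the deployed system is no longer seeing the population it was validated on.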
Foster Multidisciplinary Collaboration
Effective AI implementation requires collaboration amongst clinicians, data scientists, hospital administrators, ethicists and patient representatives. Create formal structures—such as AI committees—that bring these stakeholders together to evaluate proposed technologies, establish governance frameworks and address emerging concerns.
Maintain Clinical Primacy
Ensure AI systems augment rather than replace clinical judgement. Systems should provide recommendations that clinicians can accept, modify or reject based on contextual factors. Resist pressures to implement black box systems that obscure reasoning processes or limit clinician discretion in individual cases. The only exception is when a clinical trial has demonstrated a clear and significant benefit to patients, and even then performance should be evaluated in the local implementation.
Conclusion
Artificial intelligence offers opportunities to advance critical care practice through enhanced diagnostics, predictive capabilities and efficiency improvements. However, these benefits come with significant risks, including opacity of decision-making processes, data representativeness concerns and over-reliance on automated systems. Critical care clinicians bear responsibility for ensuring AI technologies serve patients' interests whilst preserving our autonomy and agency. This responsibility requires developing AI literacy, demanding rigorous evaluation and continuously monitoring deployed systems. How we respond will determine whether AI truly serves our patients and us.
The AI revolution in critical care represents neither unalloyed benefit nor unmitigated threat, but rather an opportunity for transformation. The technology's potential to enhance patient care, improve efficiency and advance medical knowledge remains substantial. However, realising this potential whilst mitigating risks demands active engagement from the critical care community.
Recent regulatory developments, including the FDA's 2025 draft guidance and the EU AI Act, provide helpful frameworks but cannot substitute for clinician vigilance. Healthcare institutions must resist treating AI adoption as primarily a technical or administrative matter, instead recognising it as fundamentally a clinical responsibility.
The coming years will determine whether AI becomes a tool that enhances clinicians' capabilities or one that diminishes clinical autonomy and potentially compromises patient safety. This outcome depends largely on whether critical care professionals actively shape AI integration or passively accept systems designed without adequate clinical input.
Conflict of Interest
None.
References:
Abdulnour R-EE, Gin B, Boscardin CK. Educational Strategies for Clinical Supervision of Artificial Intelligence Use. New England Journal of Medicine. 2025;393(8):786-97.
Abramson J, Adler J, Dunger J, Evans R, Green T, Pritzel A, et al. Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature. 2024;630(8016):493-500.
Åkerlund CAI, Holst A, Stocchetti N, Steyerberg EW, Menon DK, Ercole A, et al. Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study. Critical Care. 2022;26(1):228.
Bhargava A, López-Espina C, Schmalz L, et al. FDA-Authorized AI/ML Tool for Sepsis Prediction: Development and Validation. NEJM AI. 2024;1(12).
Borges do Nascimento IJ, Abdulazeem H, Vasanthan LT, Martinez EZ, Zucoloto ML, Østengaard L, et al. Barriers and facilitators to utilizing digital health technologies by healthcare professionals. npj Digital Medicine. 2023;6(1):161.
Boss JM, Narula G, Straessle C, et al. ICU Cockpit: a platform for collecting multimodal waveform data, AI-based computational disease modeling and real-time decision support in the intensive care unit. Journal of the American Medical Informatics Association. 2022;29(7):1286-91.
Cecconi M, Greco M, Shickel B, et al. Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22. Critical Care. 2025;29(1):290.
European Parliament and European Council. Artificial Intelligence Act. Laying down harmonised rules on artificial intelligence and amending Regulations. Official Journal of the European Union. 2024;1689(3):1-144.
FDA US. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: US Food and Drug Administration. US Food and Drug Administration. 2025.
Greshake K, Abdelnabi S, Mishra S, Endres C, Holz T, Fritz M. Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. 2023.
Kosmyna N, Hauptmann E, Yuan YT, et al. Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. 2025.
Liu T-L, Hetherington TC, Dharod A, et al. Does AI-Powered Clinical Documentation Enhance Clinician Efficiency? A Longitudinal Study. NEJM AI. 2024;1(12).
Lotano R, Gerber D, Aseron C, Santarelli R, Pratter M. Utility of postintubation chest radiographs in the intensive care unit. Critical Care. 2000;4(1):50.
Omiye JA, Lester JC, Spichak S, Rotemberg V, Daneshjou R. Large language models propagate race-based medicine. npj Digital Medicine. 2023;6(1):195.
Seymour CW, Kennedy JN, Wang S, et al. Derivation, Validation, and Potential Treatment Implications of Novel Clinical Phenotypes for Sepsis. JAMA. 2019;321(20):2003.
Shi J, Bendig D, Vollmar CH, et al. Mapping the Bibliometrics Landscape of AI in Medicine: Methodological Study. Journal of Medical Internet Research. 2023;25:e45815.
Souly A, Rando J, Chapman E, et al. Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples. 2025.
Warraich HJ, Tazbaz T, Califf RM. FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. JAMA. 2025;333(3):241.
