Artificial intelligence (AI) has the potential to improve diagnosis, prognostication, workflows, and personalised care in critical care, but implementation without a structured, risk-aware approach may lead to harm. Even as ICUs face mounting pressure from staffing shortages, case complexity, and rising costs, most AI tools remain poorly validated in real-world settings.

 

A new paper calls for the critical care community to adopt a pragmatic, clinically informed framework for AI integration and provides concrete, multidisciplinary consensus recommendations to guide safe and effective adoption of AI in critical care.

 

The expert consensus identified key barriers to AI implementation in critical care, prompting recommendations for clinicians, patients, and societal stakeholders on advancing AI in healthcare. Challenges and guiding principles are organised into four areas:

 

Human-centric and ethical AI: Because AI can be misused, clinicians must be involved, alongside regulatory frameworks, in guiding its safe and effective implementation.

Human-centric AI development: AI should enhance empathetic care and patient-physician communication by reducing administrative burdens (e.g., documentation), preserving the human core of medicine.

Social contract for AI use: To prevent AI from worsening healthcare disparities, stakeholders, including patients, should help define AI’s roles, limits, and accountability. Hospitals should establish oversight mechanisms (e.g., AI committees) to ensure safe, fair, and transparent AI deployment.

Human oversight and ethical governance: Clinicians must lead AI integration while upholding ethical responsibility, fairness, and scientific rigour, ensuring AI aligns with patient-centred care and ICU decision-making complexities.

 

Most AI models are developed outside the medical community, creating a misalignment with clinical ethics. To address this, the authors propose multidisciplinary boards, including clinicians, patients, ethicists, and technology experts, to systematically review AI behaviour in critical care, assess bias risks, and promote transparency. This approach positions AI development as an opportunity to advance ethical principles in patient care.

 

To integrate AI into critical care, it is essential to understand and design the human-AI interface to complement clinical reasoning. Research should focus on how clinicians interact with AI, avoiding overreliance or unintended influences on AI outputs. Emphasising human-AI augmentation, where AI enhances, rather than replaces, clinician performance, is a practical starting point. Tools like interpretable, real-time dashboards can synthesise complex data into clear visuals, improving situational awareness without overwhelming clinicians and supporting effective AI use in clinical practice.
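As a rough illustration of this kind of data synthesis, the minimal sketch below (column names, sampling rate, and alert thresholds are hypothetical assumptions, not recommendations) condenses raw monitor streams into rolling trends and simple, transparent flags of the sort a bedside dashboard might display:

```python
# Minimal sketch: condense raw vital-sign streams into dashboard-ready summaries.
# Column names, sampling rate, and alert thresholds are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
minutes = pd.date_range("2025-01-01 08:00", periods=360, freq="min")
vitals = pd.DataFrame({
    "time": minutes,
    "heart_rate": rng.normal(95, 12, size=360),
    "map_mmHg": rng.normal(70, 9, size=360),   # mean arterial pressure
    "spo2_pct": rng.normal(96, 2, size=360),
}).set_index("time")

# Rolling 15-minute medians smooth monitor noise into a readable trend.
summary = vitals.rolling("15min").median().resample("15min").last()

# Simple threshold flags, rather than an opaque composite score,
# keep the display interpretable at the bedside.
summary["hypotension_flag"] = summary["map_mmHg"] < 65
summary["hypoxaemia_flag"] = summary["spo2_pct"] < 90

print(summary.tail())
```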

 

AI’s impact on healthcare requires equipping clinicians with foundational knowledge of data science, AI concepts, methods, and limitations, starting in undergraduate medical education. Including these topics in core curricula will enable clinicians to critically assess AI, identify biases, and make informed decisions, while opening new career pathways in AI and data analysis.

 

Beyond undergraduate training, ongoing education for physicians, nurses, and allied health professionals is essential. AI can also support personalised, adaptive training using tools like chatbots and intelligent tutoring systems to tailor education in both clinical and AI-specific domains, preparing healthcare workers for responsible AI use in practice.

 

Uncertainty is inherent in clinical decision-making, but AI introduces new uncertainty, especially when models operate as opaque “black boxes,” which can undermine clinician trust. Explainable AI (XAI) helps by making predictions more interpretable, but interpretability alone is insufficient. To build trust and accelerate adoption, physicians should be trained to interpret AI outputs under uncertainty, assessing plausibility, consistency with biology, and alignment with clinical reasoning, rather than expecting complete explainability.
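One simple, model-agnostic way to probe which inputs drive a prediction, short of full explainability, is permutation feature importance. The sketch below uses scikit-learn on synthetic data with hypothetical feature names; it is an illustration of the technique, not the consensus authors' method:

```python
# Minimal sketch of model-agnostic interpretability via permutation importance.
# The dataset is synthetic and the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["lactate", "map", "heart_rate", "creatinine", "age", "spo2"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time and measuring the drop in performance
# shows how much the model relies on it, a sanity check that clinicians
# can weigh against biological plausibility.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name:>12}: {mean_drop:.3f}")
```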

 

Key infrastructures for AI in critical care require investment that improves patient outcomes and efficiency while lowering costs. Data ownership should remain with healthcare institutions, recognising patients and providers as stakeholders who benefit from their data’s value. Without safeguards, clinical data risk becoming proprietary assets for private companies, which may resell data back to institutions instead of using it to improve care.

 

Standardising data collection is vital for creating reproducible, generalisable AI models and ensuring interoperability across centres. Critical care data come from diverse sources, including EHRs, multi-omics, imaging, and unstructured clinical notes, and their integration is complicated by differences in format, quality, and local policies. Hospitals, device manufacturers, and EHR vendors must adopt common data standards to prevent interoperability barriers. AI itself can aid standardisation by automatically labelling data, tracking provenance, and harmonising formats, improving the reliability and scalability of AI applications.
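As a toy illustration of format harmonisation (the field names, units, and target schema are assumptions for this sketch, not a published standard), the snippet below maps two differently structured exports onto one shared schema and records provenance:

```python
# Minimal sketch: harmonise two hypothetical ICU exports into one shared schema.
# Field names, units, and the target schema are illustrative assumptions only.
import pandas as pd

site_a = pd.DataFrame({"pt_id": [1, 2], "temp_f": [100.4, 98.6], "hr": [110, 82]})
site_b = pd.DataFrame({"patient": [7, 8], "temp_c": [38.5, 36.9], "heart_rate": [104, 75]})

def to_common(df, mapping, source, fahrenheit=False):
    """Rename columns to a shared schema, convert units, and track provenance."""
    out = df.rename(columns=mapping)
    if fahrenheit:
        out["temperature_c"] = (out["temperature_c"] - 32) * 5 / 9
    out["source_site"] = source  # provenance label for later auditing
    return out[["patient_id", "temperature_c", "heart_rate", "source_site"]]

harmonised = pd.concat([
    to_common(site_a,
              {"pt_id": "patient_id", "temp_f": "temperature_c", "hr": "heart_rate"},
              "site_a", fahrenheit=True),
    to_common(site_b,
              {"patient": "patient_id", "temp_c": "temperature_c"},
              "site_b"),
], ignore_index=True)

print(harmonised)
```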

 

AI research in critical care should meet the same rigorous methodological standards as other medical research, with greater accountability from reviewers and journals to ensure transparency, rigour, and clinical relevance. AI can enhance randomised clinical trials (RCTs) by enabling precise patient subgroup selection, improving trial efficiency, and addressing population heterogeneity that often limits critical care trials.

 

Critical care AI research should adopt these standards, including pre-registered trials, prospective validation across diverse ICU populations, and standardised performance benchmarks, to ensure clinical effectiveness, reproducibility, and safe integration into practice.
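In practice, reporting standard discrimination and calibration metrics on held-out or external data is the minimum. The sketch below (synthetic data, scikit-learn metrics) shows the kind of benchmark summary that could accompany a model; in a real study the validation set would be an external or prospective cohort:

```python
# Minimal sketch: discrimination and calibration metrics on held-out data.
# Data are synthetic; in practice use an external or prospective cohort.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
probs = model.predict_proba(X_val)[:, 1]

# Discrimination (AUROC) and calibration (Brier score) reported together,
# so reviewers can compare models against the same benchmark.
print(f"AUROC: {roc_auc_score(y_val, probs):.3f}")
print(f"Brier score: {brier_score_loss(y_val, probs):.3f}")
```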

 

AI regulation remains a major challenge for clinical use in critical care due to the complexity of governance, surveillance, and performance evaluation across diverse settings. The EU AI Act classifies medical AI as high-risk, imposing strict requirements for transparency, human oversight, and post-market monitoring. While these regulations provide a foundation, critical care AI needs specialised oversight. Effective governance should integrate regulatory, professional, and institutional efforts to create actionable policies that balance innovation with patient safety.

 

Developing large AI models requires significant resources, so physicians and regulators should encourage partnerships among healthcare institutions, tech companies, and governments. Professional societies and regulatory bodies like SCCM, ESICM, and EMA must create clinical AI guidelines covering validation, clinician collaboration, and accountability. Governance should be transparent, multidisciplinary, and operate at national and supranational levels. Regulation should be adaptive and risk-based, emphasising ongoing monitoring over rigid pre-market controls. Mandatory AI performance reporting and hospital AI safety committees can help ensure clinical AI reliability and safety.

 

AI adoption varies widely across regions due to differences in technology access, investments, and priorities, creating an “AI divide” that risks worsening social and economic inequalities. The European Commission aims to coordinate strategies to reduce this divide by supporting training, infrastructure, and common guidelines. The United Nations also recommends education, cooperation, and equitable AI resource distribution. The medical community should advocate nationally and internationally, through societies and organisations like WHO, for collaborations, standardised data policies, and targeted grants to promote equitable AI access and infrastructure worldwide.

 

Current regulations inadequately address dynamic AI models in critical care, which continuously evolve with new data. Unlike static FDA approvals, the EU AI Act requires continuous risk assessment and post-market surveillance for high-risk AI systems. This real-time monitoring, covering auditing, validation, incident reporting, and bias detection, should be adopted globally to manage AI risks in ICUs. Companies must maintain ongoing surveillance and report serious incidents promptly.

 

Deploying AI in complex clinical settings like ICUs requires adequate regulation focused on three key aspects: (1) rigorous evaluation of AI safety and efficacy before use; (2) mandatory continuous post-market evaluation similar to other medical devices; and (3) clear liability frameworks to determine accountability and ensure proper insurance coverage if AI-related harm occurs. Regulatory bodies should update legislation accordingly, and patients and clinicians should advocate for these regulatory improvements to support safe AI integration and reduce legal risks.

 

Source: Critical Care

Image Credit: iStock 

 


References:

Cecconi M, Greco M, Shickel B et al. (2025) Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22. Crit Care. 29, 290. 


