On November 30, 2022, OpenAI released ChatGPT, a chatbot and virtual assistant powered by large language models (LLMs). ChatGPT attracted over 1 million users within five days and reached 200 million monthly active users worldwide within fifteen months. This rapid surge in interest transformed artificial intelligence (AI) from a niche concept into a mainstream phenomenon.
 
AI and machine learning have made significant progress in medicine and healthcare. The emergence of prescriptive and generative AI has introduced new opportunities to revolutionise how healthcare professionals diagnose, treat, and monitor patients. AI has the potential to enhance diagnostic accuracy and personalise care by linking digital medical data, clinical decision-making, and healthcare delivery. As LLMs evolve, addressing the technical, ethical, social, and practical challenges they raise is crucial.

 

AI’s role is evolving from a mere tool to an assistant and potentially a colleague. Like human colleagues who follow strict ethical and professional standards, AI systems must also be designed with similar guidelines to support healthcare professionals and maintain integrity and trust in clinical settings.

 

Establishing clear guidelines and regulations for augmented intelligence is essential for integrating AI into healthcare teams. This ensures that AI enhances care delivery safely and reliably without compromising patient safety and autonomy, benefiting all communities, including those in low-resource settings and minority groups.

 

Numerous studies have shown that predictive models can identify patterns or early warning signs of critical conditions, leading to timely interventions and better patient outcomes. AI systems can integrate data from various sources, such as imaging, electronic health records, and wearable devices, providing a comprehensive view of a patient’s condition. This capability helps healthcare professionals make informed decisions tailored to individual patient needs.
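To make the idea concrete, the sketch below trains a simple early-warning classifier on synthetic vital-sign data. The features, parameters, and toy risk relationship are illustrative assumptions for demonstration only, not clinical logic drawn from the study.

```python
# Minimal sketch of an early-warning predictive model on synthetic vital-sign
# data. All feature names and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: heart rate, mean arterial pressure, SpO2, lactate.
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(80, 12, n),    # mean arterial pressure (mmHg)
    rng.normal(96, 2.5, n),   # SpO2 (%)
    rng.normal(1.5, 0.8, n),  # lactate (mmol/L)
])

# Toy label: deterioration risk rises with tachycardia, hypotension,
# desaturation, and elevated lactate (a made-up relationship).
logit = (0.04 * (X[:, 0] - 85) - 0.06 * (X[:, 1] - 80)
         - 0.3 * (X[:, 2] - 96) + 0.9 * (X[:, 3] - 1.5) - 2.0)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Rank patients by predicted risk so the sickest are reviewed first.
risk = model.predict_proba(X_te)[:, 1]
print(f"AUROC on held-out data: {roc_auc_score(y_te, risk):.2f}")
```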

 

AI can also streamline administrative tasks like note-taking, documentation, and communication between healthcare providers and patients. In research, AI can improve trial design and execution by identifying precise patient phenotypes for accurate inclusion criteria, enabling real-time monitoring of participants, and allowing for adaptive trial designs based on emerging data.

 

Digital twins (accurate, data-driven simulations of patients and healthcare systems) can optimise resource allocation and guide care delivery. This approach enables controlled experiments to identify the best strategies for personalised precision medicine, potentially reducing the risks and costs of testing new treatments.
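As a toy illustration of a system-level digital twin, the sketch below simulates ICU bed demand under different capacity levels so strategies can be compared in silico. The arrival rate, lengths of stay, and bed counts are invented parameters; a real twin would be calibrated on actual unit data.

```python
# Toy "digital twin" of ICU bed demand: a stochastic simulation used to test
# capacity strategies before changing the real unit. Parameters are invented.
import numpy as np

def diversion_rate(beds, arrivals_per_day, mean_stay_days, days=365, seed=0):
    """Simulate one year of ICU demand; return the fraction of arrivals diverted."""
    rng = np.random.default_rng(seed)
    discharge_day = []               # scheduled discharge day of each occupied bed
    admitted = diverted = 0
    for day in range(days):
        discharge_day = [d for d in discharge_day if d > day]   # free beds
        for _ in range(rng.poisson(arrivals_per_day)):          # today's arrivals
            if len(discharge_day) < beds:
                stay = max(1, round(rng.exponential(mean_stay_days)))
                discharge_day.append(day + stay)
                admitted += 1
            else:
                diverted += 1
    return diverted / (admitted + diverted)

# Compare capacity strategies in silico before committing real resources.
for beds in (10, 12, 14):
    print(f"{beds} beds -> {diversion_rate(beds, 3.0, 4.0):.1%} of arrivals diverted")
```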

 

Despite its promise, implementing AI in real-time clinical decision-making remains challenging. Standardised data frameworks are essential to facilitate seamless healthcare data exchange across systems. Data fragmentation hinders the development of robust AI models and their integration into clinical workflows.
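One widely adopted example of such a standardised framework is HL7 FHIR, which defines shared resource formats so clinical data can move between systems without site-specific mapping. The simplified Observation below is a hand-written illustration with invented identifiers and values, not output from any particular system.

```python
# Sketch of how a standardised format such as HL7 FHIR represents a single
# vital-sign reading. Patient reference and values are hypothetical; real
# resources carry considerably more metadata.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",            # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-123"},   # hypothetical patient id
    "effectiveDateTime": "2024-07-01T08:30:00Z",
    "valueQuantity": {
        "value": 88,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",                  # UCUM code for "per minute"
    },
}

# Any consuming system can parse the same fields without custom translation.
print(json.dumps(observation, indent=2))
```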

 

In ICUs, the diversity of conditions patients present with makes it challenging to classify medical phenotypes without detailed patient data. Real-time data collection and analysis are vital but not yet widely implemented. Collaborative real-time data networks are crucial, as no single ICU can gather all the necessary information on its own.

 

AI-based clinical decision support systems often lack situational awareness due to limited training in real-world clinical decision-making processes. This limitation can prevent AI from understanding clinical contexts and offering valuable input. Concerns about privacy, data security, and transparency also pose challenges for patients, families, healthcare organisations, and governments.

 

Clinician acceptance is further complicated by the ‘black box’ problem: many AI models do not expose how they reach their outputs, which breeds scepticism among clinicians who cannot follow the systems’ decision-making processes.
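Explainability techniques offer one partial answer. The sketch below uses permutation importance, one of several possible methods, to surface how much each input drives a fitted model; the data and feature names are fabricated so a clinician can sanity-check that a clinically irrelevant variable carries no weight.

```python
# Minimal sketch of one way to open the "black box": permutation importance
# measures how much shuffling each feature degrades a model's predictions.
# Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
# Outcome depends on the first two features; the third is pure noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["heart_rate", "lactate", "room_number"],
                       result.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")  # ranking should match clinical intuition
```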

 

Addressing these challenges requires a comprehensive framework built around several key elements. A social contract should be developed with input from clinicians, data experts, policymakers, patients, and families to ensure AI tools respect patient rights and autonomy while upholding ethical standards. AI systems should also be designed to enhance clinical decision-making while maintaining the clinician-patient relationship.

 

Unified standards and infrastructure should be established to enable seamless data sharing, foster collaboration, and create real-time clinical research networks that can study rare events, improve phenotyping, and support personalised AI models.

 

Healthcare professionals must be trained to use AI tools effectively, understand their limitations, and interpret probabilistic information. Partnerships between the public and private sectors should be encouraged to address needs in acute and critical care, focusing on inclusivity and support for low-resource settings.
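One concrete piece of that probabilistic literacy is calibration: whether a stated risk of, say, 30% really corresponds to events occurring about 30% of the time. The sketch below checks a deliberately overconfident synthetic model; every number is invented for illustration.

```python
# Sketch of a calibration check: do a model's predicted probabilities match
# observed event rates? Here a synthetic model overstates risk by 40%.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(2)
true_prob = rng.uniform(0, 1, 5000)
outcomes = rng.random(5000) < true_prob          # events occur at the true rate
overconfident = np.clip(true_prob * 1.4, 0, 1)   # model that overstates risk

frac_observed, mean_predicted = calibration_curve(outcomes, overconfident, n_bins=5)
for p, f in zip(mean_predicted, frac_observed):
    print(f"predicted risk {p:.2f} -> observed event rate {f:.2f}")
```

A well-calibrated model would show predicted and observed values tracking each other; here clinicians would see predictions consistently exceed reality, a limitation worth knowing before acting on the numbers.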

 

Integrating AI in medicine has the potential to transform healthcare delivery. Achieving this vision requires a unified effort among stakeholders to advocate for robust data infrastructures, ethical frameworks, and collaborative networks. By focusing on data standardisation, real-time ICU networks, education, and establishing a new “social contract for AI,” critical care can move towards a future where AI-enabled care enhances patient outcomes and strengthens the clinician-patient relationship.

 

Source: Critical Care
Image Credit: iStock 

 


References:

Cecconi M, Greco M, Shickel B et al. (2024) Artificial intelligence in acute medicine: a call to action. Crit Care. 28, 258.


