The integration of AI tools into clinical practice is advancing rapidly, but mechanisms to ensure their safety and effectiveness remain insufficient. As a result, healthcare organisations (HCOs) and clinicians face challenges in adopting AI responsibly.
Risks such as model drift, algorithmic errors, bias, and overfitting have already been observed in early AI applications, reducing accuracy and producing errors in classification or prediction. Generative AI adds the risk of "hallucinations," in which false or misleading information is generated, potentially endangering patient safety. Addressing these issues is essential to improving clinical decision-making and patient outcomes.
AI is defined as a machine-based system that uses inputs to perceive environments, create models, and make predictions, recommendations, or decisions to influence real or virtual settings. Often integrated into or reliant on electronic health records (EHRs), AI systems require shared responsibility among developers, EHR vendors, and healthcare organisations. This responsibility involves implementing clinical, technical, and administrative governance, along with policies, risk management, and monitoring, to ensure AI is used safely, securely, ethically, and equitably.
HCOs should rely on real-world clinical evaluations published in reputable medical journals before implementing AI-enabled systems in routine care. Peer-reviewed studies, while not guaranteeing safety or effectiveness, offer independent assessments of AI systems. Additionally, HCOs should conduct their own testing and monitoring with local data to ensure patient safety. Ongoing assessments are essential to confirm that AI applications provide clinical benefits, are financially sustainable, and adhere to ethical principles.
HCOs should establish a dedicated AI governance and safety committee or incorporate AI experts—such as data scientists, machine learning professionals, human factors experts, and clinical specialists—into existing oversight committees. This group should have the expertise to evaluate AI system performance, create safety-focused governance structures, and review evidence for the safety and effectiveness of new AI applications before implementation. Regular meetings should be held to monitor AI performance and ensure ongoing oversight.
Before using AI-enabled systems for patient care, HCOs must establish policies and procedures to ensure that both patients and clinicians are aware—when possible—that AI systems are involved in clinical and administrative decision-making.
The AI committee should maintain an inventory of AI-enabled systems used in clinical care, including details like deployment date, version, responsible personnel, review dates, authorised users, data sources, and validation information. HCOs should also keep a transaction log for AI system use, tracking the version, usage time, patient and user IDs, input data, and AI recommendations. An internal process should be developed to evaluate AI system performance on local data before routine use and periodically check for issues like drift, bias, or decay. The committee should oversee ongoing testing to ensure the safe performance and use of AI systems.
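To make these requirements concrete, the sketch below shows, in Python, the kind of metadata an inventory record and a transaction-log entry might capture, together with a simple check for drift against locally validated baseline performance. All field names, the tolerance threshold, and the drift_alert helper are illustrative assumptions, not part of the JAMA recommendations.

```python
from dataclasses import dataclass
from datetime import date, datetime

# Illustrative record types for an AI inventory and transaction log;
# field names are assumptions chosen to mirror the items listed above.

@dataclass
class AISystemRecord:
    system_name: str
    version: str
    deployment_date: date
    responsible_owner: str            # named clinical or technical lead
    next_review_date: date
    authorised_users: list[str]
    data_sources: list[str]           # e.g. EHR tables, imaging archives
    validation_summary: str           # notes or link to local validation results

@dataclass
class AITransaction:
    system_name: str
    version: str
    timestamp: datetime
    patient_id: str                   # internal identifier only
    user_id: str
    input_data: dict
    ai_recommendation: str

def drift_alert(recent_accuracy: float, baseline_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag possible drift when recent performance on local data falls
    more than `tolerance` below the locally validated baseline."""
    return (baseline_accuracy - recent_accuracy) > tolerance
```

A periodic job could recompute accuracy on recent local cases and notify the committee whenever drift_alert returns True, supporting the ongoing testing described above.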
HCOs should develop high-quality training programmes for clinicians using AI systems, focusing on the risks and benefits of AI tools. Training should include a formal consent process with signatures to confirm clinicians understand these factors before gaining access. Additionally, steps should be taken to inform patients about the development and use of AI systems, ensuring they understand how AI recommendations are reviewed by clinicians before being shared. AI-generated recommendations should always be reviewed and approved by humans, who take responsibility for them before they are sent to patients.
HCOs should establish a clear process for reporting AI-related safety issues and implement a rigorous, multidisciplinary approach to analyse and mitigate these risks. They should also participate in national postmarketing surveillance systems that collect and analyse deidentified safety data. This requires HCOs to submit standardised, deidentified information to a national repository, detailing the sociotechnical aspects of safety concerns, including technical and non-technical factors, the AI system involved, and the impact on patient care.
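As a rough illustration of what a standardised, deidentified submission could look like, the sketch below defines a simple report structure and serialises it for transmission. The source does not specify a repository schema, so every field name here is an assumption.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical deidentified safety-report structure; the real national
# repository schema is not defined in the source.

@dataclass
class DeidentifiedSafetyReport:
    report_date: date
    ai_system: str                     # product name and version, no site identifiers
    event_description: str             # narrative with all identifiers removed
    sociotechnical_factors: list[str]  # e.g. ["alert fatigue", "training gap"]
    technical_factors: list[str]       # e.g. ["model drift", "integration error"]
    patient_impact: str                # e.g. "near miss", "delay in care"

def to_submission_payload(report: DeidentifiedSafetyReport) -> str:
    """Serialise the report to JSON, converting dates to ISO strings."""
    payload = asdict(report)
    payload["report_date"] = report.report_date.isoformat()
    return json.dumps(payload)
```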
HCOs must establish clear instructions and authority for authorised personnel to disable AI systems at any time in case of urgent malfunctions. Similar to EHR downtime procedures, HCOs should have policies to manage clinical and administrative processes when AI is unavailable. Additionally, HCOs should regularly assess the impact of AI systems on patient outcomes, clinician workflows, and overall quality. If AI systems fail to meet their pre-implementation goals, the models should be revised, or the system should be decommissioned if improvements are not possible.
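One way such an "off switch" might be implemented is a centrally checked feature flag that authorised staff can flip, with downstream calls falling back to the documented manual workflow. The minimal sketch below assumes that design; the flag store, function names, and fallback message are all hypothetical.

```python
# Minimal kill-switch sketch: a shared flag gates every call to an AI feature.
# The in-memory dict stands in for a real shared configuration store.

AI_FEATURE_FLAGS = {"sepsis_risk_model": True}

def run_model(feature: str, patient_data: dict) -> str:
    """Placeholder for the actual model inference call."""
    return f"{feature} recommendation (for clinician review)"

def disable_ai_feature(feature: str, authorised_by: str) -> None:
    """Authorised personnel disable the feature; later calls fall back immediately."""
    AI_FEATURE_FLAGS[feature] = False
    print(f"{feature} disabled by {authorised_by}; downtime procedure in effect")

def get_recommendation(feature: str, patient_data: dict) -> str:
    if not AI_FEATURE_FLAGS.get(feature, False):
        # Fall back to the documented manual workflow, as with EHR downtime.
        return "AI unavailable: follow standard clinical protocol"
    return run_model(feature, patient_data)
```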
As HCOs adopt AI-driven technologies, unintended adverse consequences are likely, especially during transitions. To mitigate these risks, HCOs and AI/EHR developers must work together to ensure AI systems are robust, reliable, and transparent. HCOs should establish AI safety assurance programmes that focus on shared responsibility, implement a comprehensive approach to AI adoption, and monitor its use while engaging clinicians and patients.
Ongoing risk monitoring is essential to maintaining system integrity, prioritising patient safety, and ensuring data security. These recommendations aim to reduce risks, build trust, and promote the safe and effective adoption of AI in health care.
Source: JAMA
References:
Sittig DF, Singh H (2024) Recommendations to Ensure Safety of AI in Real-World Clinical Care. JAMA.