Artificial intelligence is increasingly integral to healthcare, offering advancements in diagnostics, treatment recommendations and resource allocation. However, without deliberate safeguards, AI can perpetuate and even deepen health inequities, particularly for marginalised populations. Biases embedded in electronic health record (EHR) data risk reinforcing systemic disparities. To address these challenges, the AI Community-based Ethical Dialogue and Decision-making (AI CODE) framework provides a structured, stakeholder-driven approach to ensure AI systems align with ethical principles and promote health equity.
Addressing the Core Challenges of AI in Healthcare
AI systems trained on electronic health records inherit the biases embedded within those datasets. Such biases can stem from historical disparities in healthcare access, diagnosis and treatment, resulting in algorithms that produce inequitable recommendations. Marginalised patients may be disadvantaged if predictive models underestimate their risk or prioritise cost over clinical need. Transparency and accountability are also critical concerns; without interpretability, AI-driven recommendations can appear opaque, undermining trust and limiting adoption. Incomplete or unrepresentative EHR data compounds these risks, particularly when conditions disproportionately affecting certain groups are underdiagnosed or misclassified. Addressing these challenges requires bias detection mechanisms, representative datasets, regular audits and collaborative accountability among developers, clinicians and policymakers.
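The regular audits described above often take the form of subgroup performance checks: computing the same error metric separately for each demographic group and flagging gaps. The sketch below illustrates one such check, a per-group false negative rate (the share of genuinely high-risk patients the model misses); the field names (`group`, `y_true`, `y_pred`), the toy records and the 0.05 tolerance are illustrative assumptions, not details from the article.

```python
# Minimal sketch of a subgroup bias audit for a clinical risk model.
# Field names and data are hypothetical; real audits would use validated
# outcome labels and clinically justified tolerances.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """Per-group false negative rate: high-risk patients the model missed."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0})
    for r in records:
        if r["y_true"] == 1:                      # truly high-risk patient
            counts[r["group"]]["pos"] += 1
            if r["y_pred"] == 0:                  # model predicted low risk
                counts[r["group"]]["fn"] += 1
    return {g: c["fn"] / c["pos"] for g, c in counts.items() if c["pos"]}

def flag_disparities(rates, tolerance=0.05):
    """Flag groups whose miss rate exceeds the best-served group's by > tolerance."""
    best = min(rates.values())
    return [g for g, r in rates.items() if r - best > tolerance]

# Toy audit data: group B's high-risk patients are missed more often.
records = [
    {"group": "A", "y_true": 1, "y_pred": 1},
    {"group": "A", "y_true": 1, "y_pred": 1},
    {"group": "A", "y_true": 1, "y_pred": 0},
    {"group": "B", "y_true": 1, "y_pred": 0},
    {"group": "B", "y_true": 1, "y_pred": 0},
    {"group": "B", "y_true": 1, "y_pred": 1},
]
rates = false_negative_rate_by_group(records)
print(rates)
print(flag_disparities(rates))
```

A false negative rate is used here because underestimated risk, as the paragraph notes, is a common way marginalised patients are disadvantaged; in practice an audit would examine several metrics and statistically meaningful sample sizes.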
The AI CODE Framework: Principles and Process
The AI CODE framework integrates ethical dialogue and deliberation into the design and deployment of AI systems in healthcare. Ethical dialogue ensures that diverse voices, including patients, clinicians, developers, policymakers and community members, are heard and valued. Ethical deliberation enables shared decision-making in which solutions are assessed not just for technical feasibility but for alignment with community values. The framework consists of five interlinked steps:

1. Contextual diversity and priority: ensuring that datasets and priorities reflect the needs of underserved populations.
2. Sharing ethical propositions: gathering stakeholder concerns about privacy, bias and fairness.
3. Dialogic decision-making: allowing participants to evaluate and refine solutions collaboratively.
4. Integrating ethical solutions: modifying AI systems to align with agreed values, such as removing stigmatising language or adjusting risk profiles.
5. Evaluating effectiveness: establishing ongoing monitoring to detect and address emerging biases.
Applying AI CODE to Electronic Health Records
EHR-specific application of AI CODE begins with identifying and addressing data gaps, including underrepresented populations and missing social determinants of health. Structured discussions with community representatives help define core ethical priorities, while collaborative decision-making ensures AI outputs remain fair and contextually relevant. Developers can incorporate socioeconomic factors, cultural preferences and accessibility considerations into AI recommendations. For example, predictive models might account for transportation barriers, recommending telehealth where appropriate. Prototypes are reviewed and tested against stakeholder expectations before wider implementation. Regular auditing and feedback loops then ensure that AI systems continue to perform equitably across demographic groups, with refinements made as necessary to maintain fairness and trust.
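The transportation example above can be made concrete as a simple decision rule layered on a model's risk output. The sketch below is a hypothetical illustration only: the `Patient` fields, thresholds and modality labels are assumptions chosen to show how social determinants recorded in an EHR could shape a recommendation, not the article's actual logic.

```python
# Illustrative sketch: adjusting a follow-up recommendation for access barriers.
# All field names, thresholds and rules are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    risk_score: float          # model-estimated clinical risk, 0 to 1
    has_transport: bool        # social-determinant flag recorded in the EHR
    telehealth_capable: bool   # e.g., internet access noted at intake

def recommend_visit(p: Patient) -> str:
    """Map clinical risk and access factors to a follow-up modality."""
    if p.risk_score >= 0.8:
        return "in-person (urgent)"    # clinical need overrides convenience
    if not p.has_transport and p.telehealth_capable:
        return "telehealth"            # removes the transportation barrier
    if not p.has_transport:
        return "community outreach"    # neither transport nor connectivity
    return "in-person"

print(recommend_visit(Patient(0.4, has_transport=False, telehealth_capable=True)))
```

Keeping the equity logic in an explicit, reviewable rule like this is one way to support the transparency and stakeholder review the framework calls for, since community representatives can inspect and contest each branch.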
Overcoming Barriers to Implementation
While AI CODE offers a structured approach, its adoption faces practical obstacles. Smaller healthcare organisations may lack the resources for full-scale implementation, making phased pilots and external funding critical. Some stakeholders may be hesitant to engage due to unfamiliarity with AI ethics, highlighting the need for early engagement and targeted training. Limited access to diverse datasets remains a persistent challenge, necessitating partnerships with community organisations and the establishment of data-sharing agreements. Finally, the long-term nature of bias detection demands sustained monitoring, which can be costly but is essential for maintaining ethical standards. Integrating these processes with EHR systems can improve efficiency and patient outcomes while reducing certain operational burdens.
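Sustained monitoring in practice means re-running the same fairness metric at intervals and alerting when a group's performance drifts. The sketch below shows one minimal form this could take; the quarterly cadence, metric values and 0.05 drift threshold are illustrative assumptions, not figures from the article.

```python
# Sketch of sustained fairness monitoring: compare a per-group metric across
# audit periods and raise an alert when it worsens beyond a threshold.
# Cadence, values and threshold are hypothetical.

def drift_alerts(history, threshold=0.05):
    """history: list of {group: metric} dicts, one per audit period.

    Returns (group, previous_value, current_value) for each worsening jump.
    """
    alerts = []
    for prev, curr in zip(history, history[1:]):
        for group, value in curr.items():
            if group in prev and value - prev[group] > threshold:
                alerts.append((group, prev[group], value))
    return alerts

# Two illustrative quarterly audits of a false negative rate per group.
quarterly_fnr = [
    {"A": 0.10, "B": 0.12},
    {"A": 0.11, "B": 0.20},   # group B's miss rate has worsened
]
print(drift_alerts(quarterly_fnr))
```

Because this check only needs the stored audit summaries rather than raw patient data, it is one of the cheaper pieces of long-term monitoring an organisation can automate.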
As AI becomes further embedded in healthcare, the need for ethical, inclusive and transparent implementation grows more urgent. The AI CODE framework provides a practical pathway to ensure that AI tools are designed and deployed in ways that prioritise health equity and reflect the values of diverse communities. By addressing algorithmic bias, enhancing transparency and fostering sustained stakeholder engagement, healthcare organisations can harness AI’s potential while safeguarding against the perpetuation of disparities. The future of AI in healthcare must be guided not only by technical innovation but by a commitment to equitable outcomes for all patients.
Source: JAMIA Open
Image Credit: iStock