The rapid integration of artificial intelligence into healthcare has opened up transformative possibilities, offering benefits ranging from enhanced diagnostic accuracy to streamlined administrative processes. However, these advancements bring significant legal challenges that remain inconsistently addressed across the disciplines involved in health AI. This fragmented landscape threatens the responsible governance of AI technologies. A scoping review of the multidisciplinary literature on health AI reveals the extent of this divergence and the urgent need for coordinated efforts to ensure safe, fair and effective regulation.

 

Disciplinary Gaps in Legal Engagement 
Despite the interdisciplinary nature of health AI, substantial gaps persist in how different fields engage with its legal implications. Medical and legal professionals dominate discussions of legal risks, collectively producing nearly two-thirds of the reviewed literature. These groups frequently prioritise regulatory efficiency and privacy but diverge in their secondary concerns: legal scholars tend to emphasise liability, while clinicians focus more on safety and quality.

 

Meanwhile, authors from engineering and computer science, who are often at the forefront of AI innovation, have a strikingly minimal presence in legal discourse. This lack of participation from AI developers is problematic. Developers possess critical insights into technical feasibility and innovation constraints, which are essential for shaping realistic and enforceable legal frameworks. Their absence also reduces the likelihood that privacy, bias mitigation and accountability mechanisms are embedded into systems from the design stage. Furthermore, there is a notable lack of literature from the Global South, limiting the global applicability of regulatory strategies and overlooking context-specific concerns in low- and middle-income countries.

 

Diverging Legal Concerns Across Disciplines 
A key finding of the review is a shared concern over inefficient regulation, particularly among authors from medicine and law. Examined more closely, however, the disciplines often characterise the same issues differently. Informed consent, for example, is approached from contrasting angles: legal scholars emphasise patient rights and disclosure obligations, while clinicians tend to associate consent with data privacy, with limited discussion of AI-specific consent to treatment.

 

These differences also manifest in views on liability. Legal authors are more likely to anticipate a reduced liability burden on physicians as AI becomes part of the standard of care. In contrast, medical writers express concerns that reliance on AI could be interpreted as negligence, especially if outcomes are adverse. Such divergent interpretations could result in inconsistent court decisions and regulatory responses if not harmonised.

 

Equity and access are additional areas where perspectives vary. Legal experts highlight systemic risks, such as insurance discrimination or algorithmic exclusion from healthcare services, while clinicians focus on individual-level inequities, such as the digital divide and unequal access to AI tools. Without integrating these perspectives, regulatory efforts risk addressing one form of inequity while neglecting others, undermining the goal of equitable healthcare.

 

Towards a Multidisciplinary Regulatory Framework 
The fragmented nature of current discussions underscores the need for an integrated approach to AI governance in healthcare. Multidisciplinary collaboration can reveal the blind spots in each discipline’s perspective and foster more comprehensive regulation. For instance, AI developers can clarify what is technically achievable in areas like algorithm explainability or bias mitigation, while legal experts can identify the risks and obligations associated with these innovations. Similarly, input from clinicians ensures that any regulatory framework is practical for real-world application and that patient safety remains central.

 

Collaboration of this kind also helps balance competing priorities, such as privacy versus equity. Effective AI requires access to large, diverse datasets, including sensitive information related to race or socioeconomic status. While stronger privacy laws can protect individuals, they may inadvertently reduce the data needed to train unbiased algorithms. Reconciling these tensions requires dialogue across domains, especially as patient trust depends on how well these trade-offs are navigated.

 

The inclusion of underrepresented voices, particularly from the Global South, is also vital. Legal and ethical risks often have different implications in countries with fewer regulatory resources or distinct healthcare challenges. Moreover, global equity in AI governance demands that those most likely to be affected by its limitations or failures have a say in shaping solutions.

 

Health AI promises considerable benefits, but only if its development and deployment are matched by robust, inclusive governance. The scoping review highlights the current lack of interdisciplinary integration in legal discourse, particularly the underrepresentation of developers, clinicians and Global South stakeholders. Addressing these gaps is essential for crafting legal frameworks that are both effective and equitable.

 

Governments and institutions must actively promote collaboration across disciplines to ensure that AI regulation reflects the complexity of healthcare and technology alike. By bringing together diverse perspectives, stakeholders can build a shared understanding of risks and trade-offs, ultimately fostering responsible innovation and safeguarding public trust in health AI systems.

 

Source: BMJ Health & Care Informatics 

Image Credit: Freepik


References:

Nunnelley S, Flood CM, Da Silva M et al. (2025) Cracking the code: a scoping review to unite disciplines in tackling legal issues in health artificial intelligence. BMJ Health & Care Informatics, 32:e101112. 


