Artificial intelligence is being adopted across healthcare faster than legal and governance frameworks can be put in place, according to the World Health Organization (WHO). WHO/Europe is urging countries to align AI use with public health goals, build workforce skills and strengthen legal and ethical safeguards, with particular emphasis on clarifying accountability when AI systems make mistakes or cause harm. The warning comes as many health systems report active deployment of AI tools, especially in diagnostics and patient-facing support. At the same time, countries widely report legal uncertainty as a major barrier to broader adoption, raising questions about trust, safety and recourse when outcomes fall short.
Adoption Moves Faster Than Governance
Survey responses from 50 of the 53 member states in the WHO European region, which includes the UK, indicate that AI is already embedded in several parts of care delivery. Thirty-two countries, representing 64% of respondents, reported using AI-assisted diagnostics, particularly in imaging and detection. Patient engagement and support tools are also becoming common, with half of the countries in the region having introduced AI chatbots.
Alongside deployment, countries are beginning to define where AI should be applied. Twenty-six countries, or 52%, reported identifying priority areas for AI in health. However, implementation appears uneven, with only a quarter allocating funding to put those priorities into practice. WHO/Europe’s position is that national approaches should be shaped by public health objectives rather than technology momentum alone, supported by investment in skills and clearer safeguards that keep patients and health workers at the centre of decisions.
Liability Gaps and Barriers to Trust
Despite the pace of adoption, WHO/Europe reports a limited presence of liability standards that determine responsibility when an AI system makes an error or causes harm. Fewer than one in 10 countries (8%) reported having such standards for AI in health. This gap is closely linked to broader concerns about uncertainty: 86% of countries said legal uncertainty is the primary barrier to AI adoption, while 78% cited financial constraints as another major obstacle.
WHO/Europe argues that uncertainty about accountability can affect both clinical confidence and patient protection. Where legal standards are unclear, clinicians may be reluctant to rely on AI tools, while patients may not have a clear route for redress if something goes wrong. In response, WHO/Europe is calling for clearer accountability and the establishment of mechanisms for addressing harm. It also emphasises that AI systems should be tested for safety, fairness and real-world effectiveness before they reach patients, positioning these checks as foundational to safe deployment rather than optional enhancements.
Countries’ stated motivations for adopting AI underline why clarity matters. Improving patient care was cited by 98% of countries, followed by reducing workforce pressures (92%) and increasing efficiency and productivity (90%). WHO/Europe frames the current moment as a choice between using AI to support wellbeing, ease pressure on exhausted health workers and reduce costs, or allowing weak safeguards to undermine patient safety, compromise privacy and entrench inequalities in care.
National Strategies and Practical Examples
At a strategic level, WHO/Europe identifies limited national planning dedicated specifically to AI in health. Only four countries (8%) reported having a dedicated national AI strategy for health, with a further seven (14%) developing one. This suggests that while AI tools are already in use, formal national frameworks tailored to healthcare may lag behind operational activity.
WHO highlighted several examples of countries integrating AI into healthcare in ways that connect policy direction with enabling infrastructure or training. Estonia was cited for linking electronic health records, insurance data and population databases into a unified platform that supports AI tools; Finland for investing in AI training for health workers; and Spain for piloting AI to support early disease detection in primary healthcare. These examples reflect different entry points for implementation: data integration, workforce capability and targeted pilots in front-line settings.
Related reporting from the Global Government Forum adds context from the National Health Service (NHS) in England. Based on interviews with chief digital and information officers in trusts, it reported that many trusts are using or exploring AI across both back-office and clinical domains. Some, but not all, reported having formal AI policies, with measures in certain organisations including AI and data ethics committees and AI working groups. Several accounts described a gap between experimentation and governance, with a sense that adoption is advancing ahead of consistent oversight. NHS England chief executive Jim Mackey acknowledged the appeal of AI as a single solution, while stressing that implementation is more complicated and requires wider adoption across clinical and operational settings, grounded in shared understanding. In the same forum, Northumbria Healthcare NHS Foundation Trust chief executive Dr Birju Bartoli emphasised that public confidence depends on openness and communication about why AI is being used and what checks and balances are in place, linking acceptance to visible improvement in patient experience.
WHO/Europe’s message is that healthcare AI is progressing rapidly in practice, while accountability, funding and governance remain uneven. With 64% of surveyed countries reporting AI-assisted diagnostics and half reporting AI chatbots, the technology is already influencing how care is delivered. Yet only 8% report liability standards, and legal uncertainty is cited as the primary barrier by 86% of countries. The central operational implication is that safe scaling depends on clarifying responsibility for harm, establishing routes for redress and ensuring systems are tested for safety, fairness and real-world effectiveness before reaching patients, while matching ambitions with workforce skills and funded priorities.
Source: Global Government Forum