The World Health Organisation (WHO) has issued fresh guidance on the ethics and governance of large multi-modal models (LMMs), a class of generative artificial intelligence (AI) that is expanding rapidly across the healthcare sector.

Over 40 recommendations have been put forward for governments, tech firms, and healthcare institutions to ensure LMMs are utilised in ways that safeguard and promote public health.

LMMs can process multiple forms of data input, including text, video, and imagery, and generate a wide range of outputs that are not confined to the type of data they were fed. Notable for mimicking human communication and for carrying out tasks they were not explicitly programmed to perform, LMMs have seen the quickest adoption of any consumer technology to date, with platforms such as ChatGPT, Bard, and Bert becoming household names in 2023.

Dr Jeremy Farrar, WHO Chief Scientist, stated, "Whilst generative AI holds promise for enhancing healthcare, its success hinges on the identification and comprehensive management of the risks involved by developers, regulators, and users. There's a critical need for transparent information and policies to navigate the creation, development, and application of LMMs to enhance health outcomes and bridge existing health disparities."

Potential Advantages and Perils

The recent WHO guidance delineates five key healthcare applications for LMMs:

  • Diagnostic and clinical care, including responses to written patient enquiries;

  • Patient-guided use, such as investigating symptoms and treatment options;

  • Administrative duties such as electronic health record documentation and patient visit summaries;

  • Educational support for medical and nursing trainees through simulated patient interactions;

  • Support for scientific research and pharmaceutical development, such as discovering new compounds.

However, alongside these potential applications, there is evidence that LMMs can produce misleading, inaccurate, biased, or incomplete information, which may adversely affect people who rely on it to make health-related decisions. Moreover, LMMs may be trained on data that are inherently biased along demographic lines.

The guidance also highlights broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. These models can induce 'automation bias' in healthcare practitioners and patients, whereby errors that would otherwise be caught are overlooked, or difficult decisions are improperly delegated to the LMM. As with other forms of AI, LMMs are also vulnerable to cybersecurity threats that could compromise patient data or the integrity of healthcare services.

To foster the development of secure and effective LMMs, WHO emphasises the need for collective engagement from all sectors of society, including government bodies, tech companies, healthcare providers, patients, and community groups, throughout the lifecycle of these technologies as well as in their regulation and oversight.

Dr Alain Labrique, WHO Director for Digital Health and Innovation, noted, "An international, cooperative approach is vital for governments to effectively regulate and manage the development and utilisation of AI technologies like LMMs."

Principal Recommendations

The guidance calls on governments to take the lead in establishing norms for LMM development and application, as well as their integration into public health and medical fields. For instance, governments are encouraged to:

  • Invest in or facilitate access to not-for-profit or public infrastructure, including computing power and public datasets, accessible to developers in all sectors, with access conditional on adherence to ethical principles and values.

  • Apply legislation, policies, and regulatory measures to ensure all LMM applications in healthcare comply with ethical and human rights standards, safeguarding individual dignity, autonomy, and privacy.

  • Delegate authority, as resources permit, to an existing or new regulatory body to assess and authorise LMMs intended for use in healthcare.

  • Implement mandatory post-release auditing and impact assessments, conducted by independent third parties; the results should be made public and should detail effects across different demographic groups.

Developers of LMMs are also advised to ensure the involvement of a broad range of stakeholders, including potential users and those affected by the technology, in the AI's early design stages. The aim is to foster an inclusive and transparent process, allowing for ethical considerations and feedback to shape the development of AI applications.

Furthermore, LMMs should be designed to perform specific tasks with the accuracy and reliability required to strengthen healthcare system capacities and advance patient welfare. Developers should also be able to anticipate and understand potential secondary consequences.

This latest document, "Ethics and governance of AI for health: Guidance on large multi-modal models," builds on WHO's prior guidance released in June 2021.

The full publication can be accessed here: https://www.who.int/publications/i/item/9789240084759

Source & Image Credit: WHO