WHO Report: AI in Health - 6 Guiding Principles for its Design and Use


Ethics and human rights must be a part of the design, deployment, and use of Artificial Intelligence (AI) in the delivery of healthcare, according to the World Health Organization’s first global report on AI in health. The WHO report, Ethics and governance of artificial intelligence for health, was released on 28 June and is the result of two years of consultations by international experts.


“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “This important new report provides a valuable guide for countries on how to maximise the benefits of AI, while minimising its risks and avoiding its pitfalls.”


WHO provides the following six principles to ensure AI works for the public interest:


  1. Protecting human autonomy. In the context of healthcare, this means that humans should remain in control of healthcare systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.
  2. Promoting human well-being and safety and the public interest. The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.
  3. Ensuring transparency, explainability and intelligibility. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.
  4. Fostering responsibility and accountability. Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.
  5. Ensuring inclusiveness and equity. Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.
  6. Promoting AI that is responsive and sustainable. Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimise their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for healthcare workers to adapt to the use of AI systems, and potential job losses due to use of automated systems.


Source: WHO 

Photo: iStock


Published on: Mon, 28 Jun 2021



