AI ethical-legal issues: CAR White Paper

Artificial intelligence (AI) software that analyses medical images is becoming increasingly prevalent. However, the promise of AI as a tool for promoting and enhancing personalised medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimisation, and prognostication. 

With radiology poised to lead development and implementation of AI algorithms, the Canadian Association of Radiologists (CAR) has released a white paper summarising key ethical and legal issues related to AI use, primarily focusing on imaging data, although the issues raised are also pertinent to non-imaging data in electronic medical records.


For AI to be implemented successfully beyond individual projects, there needs to be a strong guarantee of data security and of complete anonymisation of all data processed for secondary use in AI. There also needs to be a better understanding of the risks versus benefits of sharing health information.

The authors of this paper, representing the CAR, believe that the benefits of AI can outweigh risks when institutional protocols and technical considerations are appropriately implemented to safeguard or remove the individually identifiable components of medical imaging data. The CAR AI Working Group makes these recommendations.

1. CAR to advocate for public education programmes to increase public awareness of the benefits of sharing fully anonymised personal health data and of harm-reduction strategies.

2. CAR to advocate for general adoption of revised forms of consent (such as “broad consent”) for appropriately safeguarded secondary use of data for AI in Canadian healthcare.

3. CAR to develop a framework to guide approaches to data security, anonymisation, and secondary use of radiology data.

Paradigm shift

Sharing of medical data is particularly important for radiology AI data analysis, which uniquely requires large quantities of sensitive image data for algorithm training. A paradigm shift — from a patient's right to near-absolute data privacy, to the sharing of anonymised data becoming regarded as one of the duties or responsibilities of a citizen — is underway. This requires a move from “informed consent” for traditional research projects toward other forms of consent (“broad consent,” “opt-out consent,” “presumed consent”) for AI data analyses.

Radiology databases are built using DICOM files, and de-identification of patient data is often more challenging than expected. CAR says best practices for anonymisation in radiology can include modifications to the DICOM standard, working with manufacturers to avoid placing identifiable data in proprietary fields within DICOM files, optimising hospital protocols to minimise data risks, encouraging researchers to use validated protocols of de-identification, and investigating safer means of data sharing such as containerisation and blockchain. 
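To illustrate the de-identification step in concrete terms, the sketch below blanks common identifying attributes from a DICOM-style metadata record. It uses a plain Python dictionary as a stand-in for a real DICOM dataset, and the set of attribute keywords shown is a small illustrative subset, not a complete de-identification profile; a production pipeline would use a validated library and protocol, as the white paper recommends.

```python
# Attributes commonly removed or replaced during de-identification
# (illustrative subset only; real DICOM de-identification covers many more).
IDENTIFYING_KEYWORDS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
}

def deidentify(metadata: dict) -> dict:
    """Return a copy of the metadata with identifying attributes blanked."""
    cleaned = dict(metadata)
    for keyword in IDENTIFYING_KEYWORDS:
        if keyword in cleaned:
            # Replace rather than delete, so downstream tools that expect
            # the attribute to exist still find it.
            cleaned[keyword] = ""
    return cleaned

scan = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "CT",
    "StudyDescription": "CHEST W/O CONTRAST",
}
print(deidentify(scan))
```

Note that, as the paper warns, identifiable data can also hide in proprietary or free-text fields, which is why keyword lists alone are not sufficient.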

Data privacy also involves minimising risks during the process of data transfer. Data sharing is needed between AI stakeholders, such as between a healthcare institution holding patient data and an AI team or company performing analysis. Each method of data transfer has inherent security risks, such as interception of data sent over the internet, or loss or theft of physically transported disks. CAR therefore recommends that for any AI analysis requiring data transfer between stakeholders, a protocol be developed specifically demonstrating the security of this transfer.
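The white paper does not prescribe a specific transfer mechanism, but one element of any such protocol is verifying that the data arrived unaltered. A minimal sketch of integrity checking with a SHA-256 digest, using Python's standard library (the payload and variable names here are hypothetical):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest used to confirm the received copy matches the original."""
    return hashlib.sha256(data).hexdigest()

# The sender computes a digest before transfer...
payload = b"anonymised imaging dataset"
sent_digest = sha256_digest(payload)

# ...and the receiver recomputes it on arrival; a mismatch indicates
# corruption or tampering in transit.
received = payload  # stand-in for the bytes that actually arrived
if sha256_digest(received) == sent_digest:
    print("transfer verified")
```

A complete protocol would also cover encryption in transit, access control, and audit logging; the digest check above addresses only integrity.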

Data custodian

Especially when consent has not been explicitly granted for a specific project, the data custodian (such as a hospital, regional health authority, or even a radiology partnership) plays a vital role in determining which AI projects are ethically appropriate to perform. Data custodians serve as the patient's proxy to make decisions that balance positive consequences for a particular group with justice for all groups and privacy for the individual.

Given that liability for malpractice may rest increasingly on the institution implementing AI, a radiology department or group adopting AI must pay careful attention to the algorithms integrating AI into their healthcare environment. Fear of liability can have a chilling effect; for example, discouraging data sharing out of fear that this will be used against the institution providing the data.

In a world where data is increasingly valuable and the role of the data custodian is increasingly important, implementation of AI in radiology requires thoughtful planning and frequent re-evaluation.


Published on: Wed, 10 Apr 2019




