The application of conversational agents (CAs, or chatbots) in healthcare is just beginning, but there are a number of clinical, legal, and ethical aspects to be considered by clinicians and organisations.

 


 

The COVID-19 pandemic has been a catalyst for telehealth system adoption. Chatbots are one promising area that is increasingly finding its way into healthcare practice. The authors of a viewpoint published online in JAMA (McGreevey et al. 2020) note that, in view of these systems' rapidly developing autonomy, the implementation of CA systems should be accompanied by a thorough analysis of various factors. They suggest focussing on the following points when implementing chatbots in practice.

  • Patient safety. The focus here is on monitoring interactions: how monitoring is organised and scheduled, who is responsible for supervising it, and what the chatbot's technical capabilities are.
  • Scope. It should be carefully decided which tasks should be shifted to chatbots, and to what extent.
  • Trust and transparency. The level of understanding of the technology needed for clinicians and patients to trust it must be identified.
  • Content decisions. The validity of the sources behind a CA's recommendations must be thoroughly assessed.
  • Data use, privacy, and integration. How the data generated during the interactions is stored, accessed, controlled and used is another important point, as well as whether that data can be integrated with the existing EHR system.
  • Bias and health equity. Special attention must be given to the patient populations on which the chatbot algorithms are trained (eg their language and health literacy level) and whether the algorithms can be adjusted when new populations are added.
  • Third-party involvement. There should be a balance between commercial and clinical use of chatbot data.
  • Cybersecurity. Encryption of data and access restrictions might help to protect the data.
  • Legal and licensing. Points for consideration here are accountability in case of the system’s failure, level of insurance involvement and credentials for chatbot systems.
  • Research and development questions. These include clarifying the CA system's approach and tone, the topics and needs most common among patients, and patients' reasons and motives for using – or not using – the system.
  • Governance, testing, and evaluation. Before deployment, testing and evaluation procedures and methods should be analysed and fine-tuning capabilities assessed.
  • Supporting innovation. There should be a balance between implementation of the new technologies and patient comfort and safety.

 

The authors conclude that all stakeholders involved in the creation, implementation and use of CA systems – clinicians, patients, healthcare leaders and vendors – must rigorously evaluate the above points to ensure the safe and efficient use of this emerging technology.

 

Source: JAMA

Image credit: danijelala via iStock


References:

McGreevey JD et al. (2020) Clinical, Legal, and Ethical Aspects of Artificial Intelligence–Assisted Conversational Agents in Health Care. JAMA. Published online July 24. doi:10.1001/jama.2020.2724


