Fully automated conversational agents (CAs), also known as bots or chatbots, now commonly assist consumers with online shopping, banking transactions and booking doctors' appointments.

 

A newer area where intelligent chatbots can be useful is supplementing traditional mental health treatment. With trained mental healthcare providers in short supply, and given the social stigma attached to mental illness, CAs can help deliver needed therapies by mimicking human interaction in an engaging and nonjudgmental way.

 

However, new research (Prakash and Das 2020) has identified several major factors (described below) that influence a person's decision to use a mental healthcare CA. These factors can help guide vendors in developing new services or improving their current offerings to meet user expectations. In addition, health authorities can use these research findings as a guide when formulating policies for integrating intelligent CAs into the healthcare delivery system.

 

Perceived risks. Data privacy and confidentiality are among the concerns that can deter consumers from adopting mental health chatbots. Users want adequate measures to protect and safeguard their identity, as they worry their data may be shared with third parties. As such, CA application developers should reduce dependence on third-party messenger platforms to avoid data-sharing controversies. Another key concern is patient safety. Existing chatbots lack sufficient safeguards to deal with emergency situations – eg, to counter the risk to life from suicide or self-harm. CAs should also be able to keep daily usage from exceeding 'unhealthy' limits, preferably with a built-in mechanism that encourages human interaction as part of mental healthcare.

 

Perceived benefits. CAs should have the ability to handle complex cases, especially in relation to users with severe mental disorders. It would help if vendors specified the target audience and the limitations of the chatbot, so as to discourage use by people with complex mental health issues that the bot cannot handle. Some apps do not allow free text input (providing instead a list of responses to choose from), which restricts the free flow of conversation and detracts from ease of use. Moreover, since hedonic value is also known to be a factor impacting CA use, the researchers say vendors should enhance the 'humoristic personality' aspect of the chatbot.

 

Trust issues. Users may frown upon the use of technology in a very personal context such as mental health support. Therefore, vendors need to show scientific evidence of the clinical effectiveness of their apps or bots to gain users' trust and confidence. 'Trust in providers' is another factor that influences the use of CAs. This concern stems primarily from data privacy worries, the researchers note, and app developers need to communicate to users that data protection mechanisms are in place to ease such worries.

 

Perceived anthropomorphism. Since CAs are designed to act like humans, the study found that users' perceptions of a bot's 'personality', 'intelligence' (conversational responsiveness and comprehension) and 'empathy' are important considerations for CA developers. Aside from finding ways to strengthen positive outcomes (eg, user satisfaction), it is important for designers to be aware "that enhancing humanness beyond optimal levels may trigger negative emotional reactions in users, especially in the sensitive context of mental healthcare," the study authors note.

 

Source: Association for Information Systems




References:

Prakash AV, Das S (2020) Intelligent Conversational Agents in Mental Healthcare Services: A Thematic Analysis of User Perceptions. Pacific Asia Journal of the Association for Information Systems, 12(2): 1-34. https://doi.org/10.17705/1pais.12201


