A team of researchers from the University of Utah has published a study in JNCI Cancer Spectrum (Journal of the National Cancer Institute Cancer Spectrum) to determine whether the large language model (LLM) ChatGPT can deliver accurate cancer information and dispel common misconceptions.

 

Given the importance of accurate information in cancer care, inaccurate or misinterpreted answers could negatively affect patient decision-making.

 

It is critical for clinicians to assess the accuracy of AI-generated outputs, and to flag misinformation, before such tools influence patient care. This small-scale study takes an initial step in monitoring the information ChatGPT generates.

 

In 2022, the team asked ChatGPT 13 questions about cancer that are common points of confusion for patients. The answers were then compared with those published by the National Cancer Institute (NCI).

 

The answers from NCI and ChatGPT were blinded, meaning the five expert reviewers did not know which source each response came from. The reviewers, all scientists with expertise in cancer treatment and cancer misinformation, evaluated each answer for accuracy against established knowledge.

 

ChatGPT outputs were rated accurate by all five reviewers for 11 of the 13 questions, with overall agreement on accuracy of 96.9% across individual ratings. By comparison, all 13 NCI answers were rated accurate by all five reviewers, giving 100% interrater agreement.
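To illustrate how figures like these can arise, the sketch below builds a hypothetical 5-reviewer-by-13-question ratings grid (the values are invented for illustration, not the study's actual data) and computes overall percent agreement as the share of individual ratings marked accurate, alongside the count of unanimously accurate questions:

```python
# Hypothetical ratings grid: 5 reviewers x 13 questions.
# 1 = reviewer rated the answer accurate, 0 = not accurate.
# Illustrative values only, NOT the study's actual data.
ratings = [[1] * 13 for _ in range(5)]
ratings[2][4] = 0   # one reviewer dissents on one question
ratings[4][9] = 0   # a different reviewer dissents on another

total = sum(len(row) for row in ratings)       # 65 individual ratings
accurate = sum(sum(row) for row in ratings)    # 63 rated accurate
percent_agreement = 100 * accurate / total

# Questions where all five reviewers rated the answer accurate
unanimous = sum(all(ratings[r][q] for r in range(5)) for q in range(13))

print(f"Overall agreement: {percent_agreement:.1f}%")  # 96.9%
print(f"Unanimously accurate questions: {unanimous}")  # 11
```

With two dissenting ratings out of 65, this toy grid reproduces both reported numbers (96.9% and 11 of 13); the study itself may have computed its agreement statistic differently.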

 

While the team noted differences in word count and readability between the NCI and ChatGPT answers, the more striking finding was that ChatGPT's responses often used hedging or vague terms. Although the information itself was accurate, this language of uncertainty could lead patients to misinterpret the answers, with potentially harmful consequences.

 

Overall, ChatGPT outputs in response to common cancer misinformation were found to be accurate and similar to the answers NCI provided. However, whether such AI-driven systems can consistently deliver accurate cancer information is yet to be established through future research.

 

Source: JNCI Cancer Spectrum

Image Credit: iStock


References:

Johnson SB et al. (2023) Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information. JNCI Cancer Spectrum, 7(2).


