According to a recent analysis published in European Radiology, ChatGPT and other large language models could prove useful for producing simplified radiology reports that are factually correct and complete.

 

To assess the quality of simplified radiology reports, researchers from LMU University Hospital, Germany, tested the performance of ChatGPT.

 

A radiologist created three fictitious radiology reports and simplified them using ChatGPT, instructing it to “explain this medical report to a child using simple language”.

 

Fifteen radiologists were enlisted to evaluate the quality of the simplified reports in terms of factual correctness, completeness, and potential patient harm, using Likert scale analysis and inductive free-text categorisation.

 

In this study, most participating radiologists felt that the simplified reports were factually correct, complete, and posed no potential harm to patients, suggesting ChatGPT's ability to simplify radiology reports.

 

The Likert scale analysis showed that participants generally rated the simplified reports as factually correct and complete, with approximately 75% agreeing or strongly agreeing on both quality criteria (median score: 2). By contrast, participants generally disagreed that patients could draw incorrect conclusions from the simplified reports (median score: 4).

 

The free-text analysis revealed instances of incorrect text passages and missing relevant medical information. Radiologists identified incorrect information in 10 simplified reports (22%), while 16 (36%) contained conclusions that could potentially lead patients to harm.

 

Additionally, some passages used imprecise language. For example, the medial compartment of the knee was described as the middle part of the leg, and the brain was referred to as the head. This suggests that further adaptation of the model to the medical field, along with professional medical oversight, is needed.

 

As the authors summarised, “While we see a need for further adaption to the medical field, the initial insights of this study indicate a tremendous potential in using LLMs like ChatGPT to improve patient-centered care in radiology”.

 

Source: European Radiology

Image Credit: iStock


References:

Jeblick K et al. (2023) ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports. European Radiology.


