The diagnosis of hepatocellular carcinoma (HCC), the most prevalent form of primary liver cancer, remains a significant challenge due to its complex imaging features and the need for accurate interpretation by experienced radiologists. Gadoxetic acid-enhanced magnetic resonance imaging (MRI) is a widely accepted technique for detecting HCC, but it demands high levels of expertise and consistency, which are often hard to maintain across institutions and practitioners. In recent years, deep learning models have emerged as powerful tools for image-based diagnostics. However, their clinical application has been limited by a lack of interpretability and user control. In response to these challenges, a new approach combining interactive and explainable deep learning has been developed to support radiologists in diagnosing HCC more effectively and transparently. 

 

Development of the Interactive Model 

The proposed interactive model integrates deep learning with an explainability mechanism that allows radiologists to visualise the decision-making process. Built upon a ResNet-50 backbone, the model extracts features from hepatobiliary phase (HBP) images and provides visual explanations through heatmaps. A key innovation is the incorporation of a saliency-based interaction tool, enabling radiologists to adjust lesion boundaries directly within the model’s heatmap interface. This interactive functionality allows clinicians to correct misidentified regions or reinforce accurate lesion localisation, making the diagnostic process more collaborative. 
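The study itself does not publish implementation code, but the described combination of a ResNet-50 backbone with heatmap explanations can be illustrated with a Grad-CAM-style sketch in PyTorch. Everything in the sketch is an assumption for illustration: the two-class head, the choice of layer4 as the target layer and the input size are not drawn from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

# Hypothetical two-class head (HCC vs. non-HCC); the real model's head,
# target layer and input size are assumptions, not published details.
model = resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

activations, gradients = {}, {}
model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(feat=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(feat=go[0].detach()))

def gradcam_heatmap(hbp_image: torch.Tensor) -> torch.Tensor:
    """Return a [0, 1] heatmap for a (1, 3, H, W) hepatobiliary-phase slice."""
    logits = model(hbp_image)
    model.zero_grad()
    logits[0, logits[0].argmax()].backward()   # gradient of the top class
    # Weight each feature map by its spatially pooled gradient, then ReLU.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=hbp_image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)

heatmap = gradcam_heatmap(torch.randn(1, 3, 224, 224))  # stand-in HBP slice
```

In the system described above, a heatmap of this kind is rendered over the HBP image so the radiologist can judge, and correct, where the model is looking.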

 


 

By accepting input from expert users, the model bridges the gap between artificial intelligence and human judgment. When radiologists adjust the heatmaps, the model incorporates this feedback into its internal representation of the lesion area and recalculates its classification. This iterative feedback loop improves diagnostic confidence and aligns model predictions more closely with clinical reasoning. The system was trained and evaluated on a large multicentre dataset of patients with suspected liver tumours, providing evidence of its generalisability across different clinical cases. 
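The article describes this loop only at a high level. A minimal sketch follows, assuming the radiologist's corrected mask simply re-weights backbone features before the classification head; this is one plausible mechanism for illustration, not the authors' published method.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def reclassify_with_feedback(model, hbp_image, user_mask):
    """Re-run classification after the radiologist corrects the lesion mask.

    user_mask: (1, 1, H, W) binary map drawn on the heatmap interface.
    Sketch only: features inside the corrected region are boosted before
    the pooled classification head is applied.
    """
    with torch.no_grad():
        # torchvision ResNet-50 layout: stem, four stages, pooled head.
        x = model.maxpool(model.relu(model.bn1(model.conv1(hbp_image))))
        for stage in (model.layer1, model.layer2, model.layer3, model.layer4):
            x = stage(x)
        mask = F.interpolate(user_mask.float(), size=x.shape[-2:], mode="nearest")
        x = x * (1.0 + mask)                    # emphasise the corrected lesion
        return model.fc(model.avgpool(x).flatten(1)).softmax(dim=1)

model = resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # hypothetical HCC head
model.eval()
mask = torch.zeros(1, 1, 224, 224)
mask[..., 90:140, 90:140] = 1.0                      # radiologist-drawn region
probs = reclassify_with_feedback(model, torch.randn(1, 3, 224, 224), mask)
```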

 

Evaluation and Clinical Performance 

To assess its clinical utility, the model’s performance was evaluated against multiple benchmarks, including standard deep learning classifiers, radiologists without AI support and radiologists assisted by the interactive model. The study revealed that the interactive model significantly outperformed conventional deep learning methods in accuracy, specificity and sensitivity. When compared with unaided radiologists, the model-assisted approach demonstrated a clear improvement in diagnostic precision and consistency. 
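For reference, the three reported metrics have standard definitions over confusion-matrix counts; the short sketch below computes them (the counts passed in are placeholders, not study data).

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard definitions of the metrics reported in the study;
    the counts passed below are placeholders, not study data."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # fraction correct
        "sensitivity": tp / (tp + fn),                   # true positive rate
        "specificity": tn / (tn + fp),                   # true negative rate
    }

print(diagnostic_metrics(tp=80, fp=10, tn=90, fn=20))
```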

 

Radiologists using the interactive system achieved higher inter-reader agreement and improved lesion localisation, particularly in challenging cases where image features were ambiguous or lesions were small. The interactive element not only enhanced the model’s classification performance but also provided users with increased trust and understanding of AI-generated outputs. Visual explanations helped clarify why certain features were deemed significant, promoting informed decision-making and reducing uncertainty. The model also showed consistent performance across external validation cohorts, indicating its robustness in varied clinical environments. 

 

Implications for Radiology Practice 

The integration of explainable AI into radiology represents a transformative step in clinical imaging. By allowing radiologists to interact with and influence the AI’s decision pathway, the new model promotes a symbiotic relationship between human expertise and machine intelligence. This collaboration fosters greater transparency, reduces the likelihood of critical misinterpretations and helps build trust in AI applications within healthcare. 

 

In practice, the model could be used as a decision support tool in routine liver cancer screening or in cases requiring second opinions. Its ability to adapt to user feedback and visualise diagnostic reasoning can also support training and education for less experienced radiologists. Furthermore, the model’s architecture is flexible enough to be extended to other imaging phases or types of tumours, offering a foundation for broader applications in oncological imaging. As AI adoption in medicine continues to grow, models that prioritise explainability and user control are likely to see greater acceptance and integration into clinical workflows. 

 

The development of an interactive and explainable deep learning model for HCC diagnosis marks a significant advancement in AI-assisted radiology. By combining high diagnostic accuracy with interpretability and user engagement, the system addresses longstanding barriers to clinical implementation of AI. The approach enhances diagnostic confidence, supports consistent outcomes and aligns artificial intelligence with the human-centred values of medical practice. As such, it sets a new standard for future innovations in medical imaging, emphasising collaboration, transparency and adaptability. 

 

Source: Radiology: Imaging Cancer 

Image Credit: iStock


References:

Li M, Zhang Z, Chen Z (2025) Interactive Explainable Deep Learning Model for Hepatocellular Carcinoma Diagnosis at Gadoxetic Acid–enhanced MRI: A Retrospective, Multicenter, Diagnostic Study. Radiology: Imaging Cancer, 7(3).


