Accurate identification of contrast phases in abdominal CT imaging is critical for detecting liver diseases and optimising diagnostic quality. Each phase—early arterial (EAP), late arterial (LAP), portal venous (PVP) and delayed (DP)—offers unique visual information essential for identifying lesions, particularly hepatocellular carcinoma. However, current approaches using DICOM metadata and manual evaluation by radiologists are often limited by inconsistencies, time constraints and human error. To address these limitations, researchers developed a deep learning model based on residual networks (ResNet18) and evaluated it on both internal and external datasets. The model introduces a two-step classification strategy aimed at improving the precision and reliability of contrast phase identification in abdominal CT images. 

 

Model Development and Two-Step Strategy Design 

To construct the model, 1175 abdominal contrast-enhanced CT examinations from a single institution were retrospectively collected and split into training, validation and test sets. An additional 215 CT examinations from five separate hospitals were used as an external test dataset to evaluate model generalisability. Each CT examination included three contrast-enhanced series—AP, PVP and DP—which were reviewed and annotated independently by two radiologists, with the AP further labelled as EAP or LAP. When disagreements occurred, a senior radiologist intervened to reach consensus. 

 

Two classification strategies were designed. Strategy A attempted direct classification into the four phases—EAP, LAP, PVP and DP—in a single step. In contrast, Strategy B followed a two-step approach. First, it classified the phases as AP, PVP or DP. Then, if AP was detected, a second step was triggered to differentiate between EAP and LAP. This two-tier design was intended to manage the subtle differences between EAP and LAP more effectively, by reducing the model’s task complexity. 
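The routing logic of Strategy B can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names and the probability dictionaries are hypothetical, standing in for the outputs of the two classifiers.

```python
# Hypothetical sketch of Strategy B's two-step routing.
# step1_probs: output of the coarse classifier (AP / PVP / DP).
# step2_probs: output of the fine classifier (EAP / LAP), only
# needed when step 1 predicts AP.

def argmax_label(probs):
    """Return the label with the highest predicted probability."""
    return max(probs, key=probs.get)

def two_step_classify(step1_probs, step2_probs=None):
    """Classify coarsely first; refine AP into EAP/LAP in a second step."""
    coarse = argmax_label(step1_probs)
    if coarse == "AP":
        # The second-step classifier runs only for arterial scans,
        # which is what reduces the task complexity of EAP vs LAP.
        return argmax_label(step2_probs)
    return coarse

# Example: step 1 is confident the scan is arterial; step 2 then
# separates early from late arterial.
print(two_step_classify(
    {"AP": 0.90, "PVP": 0.07, "DP": 0.03},
    {"EAP": 0.30, "LAP": 0.70},
))  # → LAP
```

The design choice is that the hard EAP-vs-LAP decision is only attempted once the easier three-way decision has already isolated the arterial scans.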

 

ResNet18 was employed for feature extraction across all contrast phases, using parallel branches with shared parameters. An attention module was incorporated to enhance the model’s focus on inter-phase distinctions, leveraging contextual cues from all three contrast phases. The feature maps were refined through calculated attention weights, and the model outputs comprised probability values for each phase. Normalisation and resampling were applied to ensure uniformity in image processing, with images adjusted to a fixed size and intensity range. 
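The attention-weighted fusion described above can be illustrated with a minimal sketch. This assumes each phase's branch produces a feature vector and that attention weights come from a softmax over per-phase scores; the actual scoring function of the paper's attention module is not specified at this level of detail, so everything here is illustrative.

```python
# Minimal sketch of attention-weighted fusion across the three
# contrast phases (AP, PVP, DP). Feature vectors and scores are toy
# values; in the real model both come from the shared ResNet18 branches.
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_phase_features(phase_features, phase_scores):
    """Weight each phase's feature vector by its attention weight
    and sum them into one fused representation."""
    weights = softmax(phase_scores)
    dim = len(phase_features[0])
    fused = [0.0] * dim
    for w, feats in zip(weights, phase_features):
        for i, f in enumerate(feats):
            fused[i] += w * f
    return fused

# Three toy 4-dimensional feature vectors, one per phase.
features = [[1.0, 0.0, 0.0, 0.0],   # AP branch
            [0.0, 1.0, 0.0, 0.0],   # PVP branch
            [0.0, 0.0, 1.0, 0.0]]   # DP branch
fused = fuse_phase_features(features, [2.0, 1.0, 0.5])
print([round(x, 3) for x in fused])
```

Because the weights sum to one, the fused vector is a convex combination of the per-phase features, letting the classifier lean on whichever phase is most informative for the current decision.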

 

Performance Evaluation and External Validation 

In the internal test set, Strategy B demonstrated significantly higher performance than Strategy A. While the one-step approach achieved an overall accuracy of 91.7%, the two-step strategy reached 98.3%. Each phase saw improved sensitivity and specificity with the two-step method, particularly in differentiating between EAP and LAP. 

 

In the external validation set, which included data from five hospitals and scanners from four manufacturers, Strategy B maintained high performance. The model achieved an overall accuracy of 99.1%, with phase-wise sensitivities of 95.1% for EAP, 99.4% for LAP and 99.5% for both PVP and DP. Examination-based analysis showed that 210 of 215 examinations were correctly classified across all phases. 

 

The few misclassifications were mostly limited to confusion between EAP and LAP, where enhancement differences in the portal vein were minimal. In one case, overlapping characteristics between PVP and DP were observed, likely influenced by structural changes due to liver disease. These findings underscore the model’s sensitivity to subtle vascular changes, which could lead to misinterpretation under certain physiological conditions. 

 


 

The model’s performance remained stable across subgroups stratified by patient sex, age, scanner manufacturer and slice thickness. No statistically significant differences were identified, indicating strong generalisability across diverse imaging conditions. 

 

Implications, Limitations and Future Directions 

The study demonstrated that a two-step ResNet-based model can significantly enhance the classification of contrast phases in abdominal CT scans. The tiered approach proved particularly effective for closely related phases such as EAP and LAP, and the attention module further strengthened the model's ability to identify nuanced features by integrating contextual information from all phases. 

 

Despite its strengths, the model has limitations. It was validated only retrospectively, with no prospective real-world testing. Although the external dataset was diverse, the absence of forward-looking validation introduces uncertainty regarding its performance in clinical practice. Additionally, potential misalignments caused by respiratory motion across contrast phases may affect accuracy, as the model does not currently compensate for spatial misregistration. 

 

The dataset also included relatively few LAP cases, which may have constrained the model’s ability to learn distinguishing features for this phase. Expanding the LAP dataset and implementing prospective validation are essential next steps. Moreover, refining the model to incorporate phase registration methods could address misalignment issues and further improve performance. 

 

The proposed two-step deep learning model demonstrates high accuracy and robustness in identifying contrast phases in abdominal CT images. By separating the classification task into manageable stages and incorporating attention mechanisms, the model overcomes the limitations of both DICOM metadata and one-step classification approaches. It not only enhances imaging quality control but also provides a reliable platform for future AI-driven diagnostic tools based on contrast-enhanced CT data. 

 

Source: Insights into Imaging 



References:

Liu Q, Jiang J, Wu K et al. (2025) A two-step automatic identification of contrast phases for abdominal CT images based on residual networks. Insights Imaging, 16:139. 



