Accurate identification of epidermal growth factor receptor (EGFR) mutations plays a central role in guiding targeted treatment decisions for patients with lung adenocarcinoma. Current clinical methods for EGFR status detection, including tissue and liquid biopsies, present several limitations. Tissue biopsies are invasive, may not be feasible in patients with poor performance status, and often fail to capture tumour heterogeneity. Liquid biopsies, while noninvasive, are hindered by limited sensitivity and high costs. In response to these challenges, a recent study investigated a noninvasive alternative using computed tomography (CT) images combined with radiomic analysis, deep learning and fusion modelling. This approach aims to improve EGFR mutation prediction accuracy by extracting imaging features from both the tumour itself and its surrounding region.
Radiomic and Deep Learning Models from Imaging Data
Radiomics enables the quantification of tumour characteristics from medical images, extracting features related to shape, texture and intensity. In this study, radiomic features were derived from intratumoural regions and from peritumoural zones extending up to 10 mm beyond the tumour. The combined model using intratumoural and 2 mm peritumoural regions (VOI_Comb2) demonstrated the best performance among radiomic models, with area under the curve (AUC) values of 0.843 and 0.803 on the internal and external validation datasets, respectively. These results suggest that peritumoural features capture additional biological information relevant to EGFR mutation status, likely reflecting tumour–microenvironment interactions.
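The workflow above can be illustrated with a minimal sketch: a binary tumour mask is dilated by a fixed margin to obtain a peritumoural ring, and simple first-order features are computed over the combined region. The function names, toy volume and feature set here are illustrative assumptions, not the study's actual pipeline (which used a dedicated radiomics toolkit and a far richer feature set).

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoural_ring(mask, margin_mm, spacing_mm):
    """Dilate a binary tumour mask by margin_mm and subtract the
    original mask, leaving only the surrounding ring (hypothetical
    analogue of the study's peritumoural VOI)."""
    # Approximate the margin with an isotropic structuring element,
    # using the smallest voxel spacing to set the iteration count.
    iters = max(1, int(round(margin_mm / min(spacing_mm))))
    dilated = binary_dilation(mask, iterations=iters)
    return dilated & ~mask

def first_order_features(image, mask):
    """A few illustrative first-order radiomic features computed
    over the masked voxels (mean, standard deviation, energy)."""
    voxels = image[mask].astype(np.float64)
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "energy": float((voxels ** 2).sum()),
    }

# Toy 3D "CT" volume with a spherical tumour mask
rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
tumour = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 6 ** 2

ring = peritumoural_ring(tumour, margin_mm=2.0, spacing_mm=(1.0, 1.0, 1.0))
combined = tumour | ring  # analogue of the combined VOI_Comb2 region
feats = first_order_features(image, combined)
```

Features from the combined mask would then feed a classifier alongside the intratumoural-only features, which is how the VOI_Comb2 configuration gains its extra microenvironment signal.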
In addition to radiomics, deep learning models were developed in three formats: 2D, 2.5D and 3D. Each model type processed a different representation of the CT images to capture spatial information. Notably, the 3D model trained on the VOI_Comb2 region achieved an AUC of 0.814 in the external validation set, outperforming its 2D and 2.5D counterparts. Visualisation through gradient-weighted class activation mapping (Grad-CAM) showed that the models focused attention on both the tumour and its surrounding tissue. These findings underscore the value of spatial context and depth in improving prediction performance.
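A common way to realise these three formats, sketched below under the assumption of a cubic, tumour-centred CT sub-volume, is: 2D uses the single central axial slice, 2.5D stacks the three orthogonal central slices as channels, and 3D keeps the full volume. The function and shapes are illustrative, not the authors' code.

```python
import numpy as np

def make_inputs(volume):
    """Build 2D, 2.5D and 3D network inputs from one cubic,
    tumour-centred CT sub-volume (illustrative convention)."""
    d, h, w = volume.shape
    # 2D: the single axial slice through the tumour centre
    x2d = volume[d // 2]
    # 2.5D: central axial, coronal and sagittal slices as 3 channels
    x25d = np.stack([
        volume[d // 2, :, :],   # axial
        volume[:, h // 2, :],   # coronal
        volume[:, :, w // 2],   # sagittal
    ])
    # 3D: the full volume with a leading channel axis
    x3d = volume[np.newaxis]
    return x2d, x25d, x3d

vol = np.zeros((64, 64, 64), dtype=np.float32)
x2d, x25d, x3d = make_inputs(vol)
```

The 3D input retains between-slice context that the 2D and 2.5D variants discard, which is consistent with the 3D model's stronger external-validation performance.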
Combining Models through Fusion Strategies
Recognising the complementary strengths of radiomic, deep learning and clinical models, the study evaluated several fusion strategies to enhance predictive accuracy. Early fusion integrated features at the data level, while late fusion combined predictions from separate models. Three decision-level fusion approaches were tested: hard voting, soft voting and stacking. Among them, the soft voting model delivered the highest performance, with AUCs of 0.925 and 0.889 in the internal and external validation sets respectively.
Soft voting employed Bayesian optimisation to determine optimal weightings for each model type. The final combination gave the most weight to the radiomics model (70%), followed by deep learning (20%) and clinical variables (10%). This distribution highlights the strong predictive signal derived from radiomic features, while also recognising the added value of data-driven learning and clinical characteristics. Compared with stacking and early fusion approaches, soft voting offered a simpler yet more effective solution, reducing overfitting risks and improving interpretability.
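Once the weights are fixed, soft voting itself is a weighted average of the per-model probabilities. The sketch below uses the 0.7/0.2/0.1 weights reported in the study but toy patient probabilities and a conventional 0.5 threshold, both of which are assumptions for illustration.

```python
import numpy as np

# Weights reported in the study (found via Bayesian optimisation):
# radiomics 0.7, deep learning 0.2, clinical 0.1.
WEIGHTS = np.array([0.7, 0.2, 0.1])

def soft_vote(p_radiomics, p_deep, p_clinical, threshold=0.5):
    """Weighted average of per-model EGFR-mutation probabilities,
    thresholded into a binary mutant/wild-type call."""
    probs = np.stack([p_radiomics, p_deep, p_clinical])
    fused = WEIGHTS @ probs  # weighted mean per patient
    return fused, (fused >= threshold).astype(int)

# Toy probabilities for four hypothetical patients
p_rad = np.array([0.9, 0.2, 0.6, 0.4])
p_dl  = np.array([0.8, 0.3, 0.4, 0.7])
p_cln = np.array([0.5, 0.5, 0.5, 0.5])
fused, calls = soft_vote(p_rad, p_dl, p_cln)
```

Because the fused score stays a probability, the threshold can be tuned for sensitivity or specificity without retraining any base model, one reason decision-level fusion is simpler to deploy than stacking.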
The clinical model alone, which included age, sex and smoking history as features, performed modestly with an AUC of 0.649 on the external dataset. When integrated through fusion, however, clinical information played a valuable supporting role, particularly in refining predictions and anchoring the model in patient-specific characteristics. Overall, fusion models proved significantly superior to any individual modality, with soft voting emerging as the most robust method for practical implementation.
Implications for Personalised Cancer Treatment
The findings of this study have important implications for the future of noninvasive precision medicine in lung cancer care. By incorporating peritumoural features and using advanced modelling techniques, the research demonstrates that image-based models can rival or even exceed current invasive methods for mutation prediction. In particular, the radiomics model built on the VOI_Comb2 region consistently emerged as a high-performing component across all fusion scenarios. The addition of peritumoural data provided critical insight into tissue heterogeneity and tumour behaviour beyond the lesion boundary.
Deep learning models, especially those using 3D representations, added an ability to autonomously identify high-dimensional patterns not easily captured by manually defined radiomic features. Their integration in the fusion model allowed for a more complete representation of tumour phenotype. Importantly, the choice of soft voting enabled the benefits of each approach to be retained without excessive complexity, making it suitable for real-world deployment in clinical decision support systems.
Despite these strengths, certain limitations must be addressed. The study’s retrospective design introduces the potential for selection bias, and differences in imaging protocols between centres could influence generalisability. Additionally, only plain CT scans were used, excluding contrast-enhanced imaging which might reveal further discriminatory features. Future research should incorporate a wider range of imaging data and pursue prospective, multicentre studies to validate these findings. Exploring stratified models for EGFR mutation subtypes could also enhance clinical relevance and application.
The study provides strong evidence that combining radiomic and deep learning models from both tumour and peritumoural CT image regions can significantly improve the prediction of EGFR mutation status in lung adenocarcinoma. The application of fusion strategies, especially soft voting, effectively harnesses the strengths of each modality, offering a practical and noninvasive tool for supporting targeted therapy decisions. The integration of image-based techniques into clinical workflows promises to advance personalised treatment in oncology.
Source: Academic Radiology