Predicting mortality in intensive care units (ICUs) is critical for effective patient management and resource allocation. Traditional predictive scoring systems, such as the Simplified Acute Physiology Score (SAPS) II, rely on structured clinical data from electronic health records (EHRs). However, these models often fail to incorporate valuable insights from unstructured data, such as radiology reports and medical images. Recent advancements in deep learning enable the integration of multimodal data to improve ICU mortality prediction accuracy. A recent study published in JAMIA Open presents a deep learning-based survival prediction model that combines physiological measurements, text representations of radiology reports and chest X-ray image features to enhance predictive performance.
 

Accurate ICU mortality prediction is essential to improving patient outcomes and optimising resource allocation. Traditional methods have been used for decades but often fail to fully capture the complexity of critical illness. By leveraging deep learning models capable of analysing both structured and unstructured clinical data, this study provides a more comprehensive approach. The research focuses on integrating various data types to enhance model performance, with an emphasis on extracting additional insights from chest X-rays and radiology reports.
 

Integrating Multimodal Data in ICU Mortality Prediction

Traditional ICU scoring models primarily utilise structured clinical variables, limiting their predictive capabilities. To overcome this, a deep learning model incorporating multimodal data was developed. The model integrates four feature sets: SAPS-II physiological measurements, predefined thorax disease labels, transformer-based text embeddings and chest X-ray image features. This approach was evaluated using the Medical Information Mart for Intensive Care IV (MIMIC-IV) dataset. The study demonstrated that incorporating text and imaging data significantly improved prediction accuracy compared to SAPS-II alone. The multimodal model achieved a C-index of 0.7829, outperforming the baseline SAPS-II model’s 0.7470.
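The C-index quoted above measures ranking quality: among comparable patient pairs, the fraction in which the model assigns the higher risk score to the patient who died earlier. A minimal, self-contained sketch of the metric, using illustrative numbers rather than data from the study:

```python
# Minimal sketch of the concordance index (C-index) used to evaluate
# the survival models. All values below are illustrative.
def concordance_index(times, scores, events):
    """Fraction of comparable patient pairs whose predicted risk
    ordering matches the observed survival ordering."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i's death was observed
            # before patient j's (possibly censored) follow-up time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:      # higher risk died earlier
                    concordant += 1
                elif scores[i] == scores[j]:   # tied risk counts as half
                    concordant += 0.5
    return concordant / comparable

# Toy example: 4 ICU stays (days to death/censoring) and risk scores
times  = [2, 5, 9, 12]
events = [1, 1, 0, 1]          # 1 = death observed, 0 = censored
scores = [0.9, 0.7, 0.3, 0.8]
print(concordance_index(times, scores, events))  # -> 0.8
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported improvement from 0.7470 to 0.7829 in context.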
 

A major limitation of conventional scoring methods is their reliance on numerical and categorical inputs, which can overlook critical diagnostic indicators available in radiology reports and medical images. By leveraging multimodal data, the proposed model captures a more detailed representation of patient health. The findings indicate that radiology-based features contribute significantly to risk assessment, underscoring the importance of including text and image-derived insights in predictive modelling. The combination of structured and unstructured data enhances decision-making in ICUs, enabling clinicians to better anticipate patient outcomes.


 

Deep Learning Approach and Feature Fusion

The model utilises a feature fusion strategy to combine different data types effectively. Early fusion was employed: extracted text and image features were averaged and concatenated with SAPS-II features before being fed into the survival model. Transformer-based embeddings captured complex textual patterns from radiology reports, while a graph convolutional network (GCN) model represented relationships between radiology findings. Chest X-ray image features were extracted using a DenseNet-121 model trained on medical imaging datasets. Compared to traditional machine learning methods, deep learning approaches demonstrated superior performance in handling high-dimensional data and capturing intricate feature interactions.
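The early-fusion step can be sketched in a few lines: per-report text embeddings are averaged over a stay, then concatenated with the SAPS-II variables and the pooled image features into a single input vector. The dimensions below are hypothetical placeholders, not the study's actual feature sizes:

```python
import numpy as np

# Hedged sketch of early fusion, with hypothetical dimensions.
rng = np.random.default_rng(0)

saps_features  = rng.normal(size=17)        # e.g. 17 SAPS-II variables
report_embeds  = rng.normal(size=(3, 768))  # 3 reports x transformer dim
image_features = rng.normal(size=1024)      # DenseNet-121 pooled features

# Average the stay's report embeddings, then concatenate everything.
text_features = report_embeds.mean(axis=0)
fused = np.concatenate([saps_features, text_features, image_features])

print(fused.shape)  # (1809,) -> one input vector for the survival model
```

Early fusion keeps the downstream survival model simple, since it sees a single flat vector; the trade-off, noted in the study's future-work discussion, is that jointly learned fusion can capture cross-modal interactions this concatenation cannot.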
 

The multimodal approach allows for the incorporation of diverse clinical variables that may not be evident from structured data alone. Feature extraction from textual and image data ensures that all available information is utilised effectively. Additionally, the model benefits from deep neural networks’ ability to learn complex feature representations, leading to more accurate survival predictions. The application of graph-based learning techniques further refines prediction accuracy, making the model particularly suited for ICU environments where precise prognostic assessments are crucial.
 

Impact of Text and Image Data on Prediction Accuracy

Analysis of different text feature extraction methods revealed that both transformer-based embeddings and GCN-based representations contributed significantly to model performance. The model trained with SAPS-II risk factors and GCN features achieved a C-index of 0.7720, while incorporating image features further improved the accuracy to 0.7752. Further investigation into thorax disease contributions indicated that conditions such as lung opacity and pleural effusion were associated with higher ICU mortality risks. This highlights the potential of leveraging radiology-derived insights to refine patient risk stratification. Additionally, deep learning models outperformed traditional Cox proportional hazards models in predicting ICU outcomes, reinforcing the advantages of data-driven approaches in survival analysis.
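The Cox proportional hazards baseline mentioned above is fitted by minimising the negative log partial likelihood; deep survival models typically optimise the same objective but replace the linear risk score with one produced by a neural network. A hedged numpy sketch of that objective on illustrative data, assuming no tied event times:

```python
import numpy as np

# Sketch of the Cox negative log partial likelihood: for each observed
# death, the model's risk score is compared against the log-sum-exp of
# scores over all patients still at risk at that time. Illustrative data.
def cox_neg_log_partial_likelihood(risk_scores, times, events):
    order = np.argsort(-times)           # sort patients by descending time
    scores = risk_scores[order]
    ev = events[order]
    # Cumulative log-sum-exp gives log of the risk set's total hazard.
    log_risk_set = np.logaddexp.accumulate(scores)
    return -np.sum((scores - log_risk_set)[ev == 1])

times  = np.array([2.0, 5.0, 9.0, 12.0])
events = np.array([1, 1, 0, 1])          # 1 = death observed, 0 = censored
scores = np.array([0.9, 0.7, 0.3, 0.8])  # model output: log relative hazard
print(cox_neg_log_partial_likelihood(scores, times, events))
```

Because both model families are trained to rank relative hazard, the C-index comparisons in the study directly contrast a linear risk function with learned deep-feature risk functions.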
 

By comparing different text extraction techniques, this study identifies the most effective methods for integrating radiology data into predictive models. The results suggest that leveraging domain-specific embeddings enhances predictive performance, ensuring that nuanced clinical observations are incorporated into risk assessments. Moreover, image-based predictions offer an additional layer of accuracy, confirming the relevance of visual features in assessing disease severity. This demonstrates the necessity of multimodal approaches in medical AI, particularly in high-stakes environments such as ICUs.
 

This study underscores the benefits of integrating multimodal data into ICU mortality prediction models. By incorporating structured clinical variables, radiology text reports, and medical imaging features, the proposed deep learning framework enhances predictive accuracy beyond conventional scoring systems. Future work could explore joint feature fusion techniques to optimise representation learning and address selection biases in dataset composition. Additionally, advancing interpretability methods can foster trust in AI-driven clinical decision-making. The findings emphasise the transformative role of deep learning in intensive care, paving the way for more precise and data-informed patient management strategies.
 

The results highlight the importance of continued research into AI-driven prediction models in healthcare. Enhancing the interpretability of deep learning predictions will be key to increasing clinical adoption. Additionally, future studies could explore incorporating temporal data to better understand disease progression over time. While deep learning provides a significant performance boost, ensuring that the models remain interpretable and clinically relevant will be crucial for their successful integration into ICU decision-making processes.

 

Source: JAMIA Open
Image Credit: iStock

 


References:

Lin M, Wang S, Ding Y et al. (2025) An empirical study of using radiology reports and images to improve intensive care unit mortality prediction. JAMIA Open, 8(1): ooae137


