Rapid recognition of free intraperitoneal fluid is a decisive factor in the management of abdominal trauma, where early clinical decisions often determine survival. Trauma-related mortality remains high worldwide, with a substantial proportion of deaths occurring shortly after injury. Abdominal trauma is particularly dangerous when internal bleeding leads to haemorrhagic shock, making timely identification of free fluid essential. Focused Assessment with Sonography for Trauma (FAST) has become a standard screening approach because it is fast, portable and non-invasive, allowing bedside assessment in emergency settings.
Despite these advantages, the diagnostic value of FAST depends heavily on the operator’s experience. Skilled sonographers can identify free fluid efficiently, whereas clinicians without formal ultrasound training and non-specialist operators often show lower accuracy and require more time. These limitations reduce the reliability of FAST in emergency, pre-hospital and resource-limited contexts. Against this background, deep learning applied to ultrasound imaging has emerged as a potential means of reducing operator dependence while preserving the clinical utility of FAST.
Clinical Context and Operator-Related Limitations
Detection of free intraperitoneal fluid is a key indicator of severe visceral injury and directly influences management pathways. In unstable patients, a positive FAST finding may lead to immediate surgical intervention, while stable patients may undergo additional imaging. However, accurate interpretation of FAST images remains challenging. Abdominal anatomy is complex, image quality varies, and non-specific clinical presentations can obscure findings. Although short training sessions enable non-specialists to perform FAST examinations, diagnostic performance frequently falls short of clinical expectations. This gap is particularly relevant in situations where experienced sonographers are scarce, such as mass casualty incidents, rural emergency care or extreme environments. In these contexts, reliance on clinicians or non-clinical personnel increases the risk of missed or incorrect findings. Reducing variability related to operator skill is therefore critical for improving the consistency and scalability of FAST-based trauma assessment.
Development and Validation of a Transformer-Based Model
A deep learning model using a Transformer architecture was developed to support automated detection of free intraperitoneal fluid on FAST images. The approach combined two functions: segmentation of suspected fluid regions and classification to determine whether these regions represented true free fluid. Training data included several thousand ultrasound images collected retrospectively over multiple years at one hospital, with both positive and negative cases represented. External validation was performed using an independent dataset from a second hospital, providing a balanced mix of images with and without free fluid.
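To make the two-stage design concrete, the minimal sketch below shows how a segmentation network and a downstream classifier could be chained at inference time. It is an illustrative PyTorch-style outline, not the published implementation: the class and parameter names (TwoStageFASTModel, segmenter, classifier, threshold) are hypothetical, and the exact way the study's classifier consumes the segmented regions is not specified in the article.

```python
import torch
import torch.nn as nn


class TwoStageFASTModel(nn.Module):
    """Hypothetical two-stage pipeline: segment candidate fluid regions,
    then classify the candidate to suppress false positives."""

    def __init__(self, segmenter: nn.Module, classifier: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.segmenter = segmenter    # e.g. a Transformer encoder-decoder producing a per-pixel fluid logit map
        self.classifier = classifier  # a head that scores the candidate region as true free fluid or mimic
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, image: torch.Tensor) -> dict:
        # image: (1, 1, H, W) pre-processed grey-scale ultrasound frame
        prob_map = torch.sigmoid(self.segmenter(image))   # per-pixel fluid probability
        mask = prob_map > self.threshold                   # boolean candidate mask

        # Second stage: score the masked region so that anatomical mimics
        # (vessels, ducts, bowel walls) can be rejected even if segmented.
        region_score = torch.sigmoid(self.classifier(image * mask.float()))
        is_positive = bool(mask.any()) and region_score.item() > self.threshold

        return {"mask": mask, "score": region_score.item(), "free_fluid": is_positive}
```

The cascade reflects the rationale described in the article: segmentation proposes regions, and classification acts as a filter that reduces false-positive calls on echogenic structures that resemble fluid.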
Images were acquired using different ultrasound systems, including portable and handheld devices, without standardised acquisition settings. This approach reflected routine emergency practice rather than controlled imaging conditions. Pre-processing steps such as cropping, augmentation and normalisation were applied to improve robustness. The segmentation component used a Transformer encoder to capture both local image features and broader contextual relationships, followed by a decoder adapted to produce fluid masks. The classification component analysed regions identified by segmentation to reduce false positives. Model performance was assessed through internal cross-validation and external testing, using standard metrics for classification and segmentation accuracy.
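The article refers only to "standard metrics"; the short sketch below illustrates metrics commonly used for this kind of evaluation, namely the Dice coefficient for segmentation overlap and sensitivity, specificity and accuracy for image-level classification. The function names and exact formulations are illustrative assumptions, not taken from the study.

```python
import numpy as np


def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Overlap between a predicted and a reference binary fluid mask."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))


def classification_metrics(pred_labels: np.ndarray, true_labels: np.ndarray) -> dict:
    """Sensitivity, specificity and accuracy for per-image free-fluid calls (0/1 labels)."""
    pred, true = pred_labels.astype(bool), true_labels.astype(bool)
    tp = np.sum(pred & true)
    tn = np.sum(~pred & ~true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "accuracy": (tp + tn) / len(true),
    }
```

Metrics of this type would be computed on both the internal cross-validation folds and the external test set to check that performance holds across institutions and devices.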
Performance and Impact on Less Experienced Operators
Internal validation showed that the model achieved high overall accuracy, with balanced sensitivity and specificity, alongside consistent segmentation performance. External testing confirmed that these results were maintained when applied to data from a different institution. When compared with human operators, the model demonstrated performance similar to that of a junior sonographer and exceeded that of a clinician and a non-clinical operator.
The model was also evaluated as an assistive tool. Three operators with differing ultrasound experience independently assessed external images before and after receiving model assistance. Prior to assistance, clear performance differences were observed, with the sonographer outperforming the other operators. After assistance, all operators showed improved segmentation accuracy, with the largest gains seen among the clinician and non-clinical operator. Post-assistance results indicated that performance differences between operators were no longer significant, suggesting that the model helped standardise interpretation regardless of prior experience.
Error analysis showed that non-specialists commonly misidentified anatomical structures such as vessels, ducts or bowel walls as free fluid. Model assistance reduced these errors and improved boundary delineation. Some limitations remained, including occasional misclassification in the presence of acoustic shadowing or when fluid extended beyond the ultrasound field of view. Nevertheless, overall detection performance remained robust across operators and imaging conditions.
A Transformer-based deep learning model demonstrated reliable performance for automated detection of free intraperitoneal fluid on FAST ultrasound images. The model achieved diagnostic accuracy comparable to that of a junior sonographer and outperformed less experienced operators. When used as an assistive tool, it significantly improved performance among clinicians and non-specialist operators, effectively reducing experience-related variability. These findings highlight the potential role of deep learning in supporting rapid and consistent trauma assessment, particularly in emergency, pre-hospital and resource-limited settings where specialist ultrasound expertise is not always available.
Source: Ultrasound in Medicine and Biology