Radiolucent foreign body aspiration (FBA) is challenging to recognise on computed tomography (CT) because visual cues can be faint or ambiguous. An artificial intelligence approach has been designed to support CT assessment by combining detailed mapping of the airways with a review of multiple perspectives of each case. Development and testing drew on data gathered from several hospitals, with strict separation between training and evaluation to prevent overlap. Independent readers worked without access to reference outcomes, allowing a direct comparison between the automated method and experienced clinicians. Reported results point to reliable performance across settings and suggest fewer missed cases, addressing a diagnostic scenario where timing and accuracy matter. 

 

Two-step Pipeline for CT Workflows 

The method follows a simple sequence tailored to CT. First, it builds a three-dimensional representation of the airway tree so that potential sites of lodgement are captured with anatomical context. This step focuses attention on the structures that matter most when a foreign body is suspected yet difficult to see. Second, it generates a set of snapshots from different viewpoints of the airway model and uses a trained classifier to decide whether the pattern is more consistent with aspiration or with a normal appearance. 
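The two steps above can be sketched in miniature. The snippet below is a hedged illustration, not the published implementation: a real system would reconstruct the airway tree with a segmentation network and score rendered views with a trained CNN, whereas here `render_views` stands in with simple axis projections of a binary mask and `classify_views` accepts any placeholder scoring function.

```python
import numpy as np

def render_views(airway_mask: np.ndarray, n_views: int = 3) -> list:
    """Project a 3D binary airway mask along each axis to mimic
    multi-view snapshots of the reconstructed airway tree.
    (Illustrative stand-in for true 3D rendering of a segmentation.)"""
    return [airway_mask.max(axis=ax) for ax in range(n_views)]

def classify_views(views, score_fn) -> float:
    """Average a per-view score into one case-level probability;
    a trained classifier would replace score_fn in practice."""
    return float(np.mean([score_fn(v) for v in views]))

# Toy volume: a dense blob stands in for a lodged foreign body
mask = np.zeros((16, 16, 16), dtype=np.uint8)
mask[6:10, 6:10, 6:10] = 1

views = render_views(mask)                       # three 16x16 projections
prob = classify_views(views, lambda v: v.mean())  # placeholder score
```

Aggregating several viewpoints into one decision is what lets subtle, radiolucent patterns that are faint in any single projection accumulate into a usable signal.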

 



 

During development, the team examined how each element contributed to the final result. Training solely on raw images proved less effective for the radiolucent scenario. Introducing data augmentation improved robustness by exposing the model to a wider range of plausible appearances. The best balance emerged when airway modelling, multiple views and augmentation were combined, reinforcing the idea that structure, context and diversity are useful in highlighting subtle signs that may otherwise be overlooked. 
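Label-preserving augmentation of the kind described can be sketched as follows. The exact augmentation recipe used in the study is not given in this article, so the transforms below — random flips and mild intensity jitter on a 2D view — are assumptions chosen only to show the principle of exposing a model to a wider range of plausible appearances.

```python
import numpy as np

def augment_view(view: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple, label-preserving transforms to a 2D view:
    random flips plus mild Gaussian intensity jitter.
    (Illustrative; the study's actual augmentations may differ.)"""
    out = view.astype(np.float32)
    if rng.random() < 0.5:      # random horizontal flip
        out = out[:, ::-1]
    if rng.random() < 0.5:      # random vertical flip
        out = out[::-1, :]
    out = out + rng.normal(0.0, 0.05, out.shape)  # mild noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
view = np.zeros((8, 8), dtype=np.float32)
view[2:6, 2:6] = 1.0            # toy "airway" region
aug = augment_view(view, rng)
```

Because each transform preserves the label, the model sees many plausible variants of the same case without any new annotation effort.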

 

Careful Development and Independent Testing 

The project followed a retrospective, multi-centre design with approvals from the participating institutions and anonymisation of all imaging. To maintain rigour, training used a strategy that kept patients strictly separated across folds so the same person never appeared in both training and validation. Preprocessing and augmentation were confined to the training partitions, further reducing the risk of leakage. Chronological splits were not applied because the period of collection was narrow and confirmed cases were relatively uncommon, which would have reduced positive examples for learning. Instead, the approach relied on independent cohorts held back for final checks. 
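The patient-level separation described above amounts to a group-wise split: whole patients, not individual scans, are assigned to folds. The article does not publish the study's fold-assignment code, so the greedy balancing below is an assumed illustration of the idea (scikit-learn's `GroupKFold` offers an equivalent off-the-shelf mechanism).

```python
from collections import defaultdict

def patient_level_folds(scan_ids, patient_of, n_folds=5):
    """Assign whole patients to folds so that no patient's scans
    appear in both training and validation partitions.
    (Illustrative group-wise split, not the study's exact code.)"""
    by_patient = defaultdict(list)
    for s in scan_ids:
        by_patient[patient_of[s]].append(s)
    folds = [[] for _ in range(n_folds)]
    # Greedy balancing: largest patient groups go to the smallest fold
    for scans in sorted(by_patient.values(), key=len, reverse=True):
        min(folds, key=len).extend(scans)
    return folds

# Toy cohort: patient pA contributed two scans
patient_of = {"s1": "pA", "s2": "pA", "s3": "pB", "s4": "pC", "s5": "pD"}
folds = patient_level_folds(list(patient_of), patient_of, n_folds=2)
```

Keeping both of pA's scans in the same fold is precisely what prevents the leakage the authors guarded against: a model must never be validated on a patient it has already seen during training.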

 

Generalisation was a central focus. Data came from three hospitals with different radiology departments and scanner models, adding institutional variety that can challenge an automated method. Results from the development environment were then compared with performance on a separate external cohort and on a fully independent evaluation set. This layered approach provided a view of accuracy beyond the training conditions and a fair comparison with clinical reading, as the independent evaluation involved experienced thoracic radiologists who interpreted scans without access to reference procedures. 

 

Performance and Clinical Relevance 

Across internal development and external validation, accuracy remained high and stable. The independent evaluation offered a clear benchmark against clinical practice. Here, the automated method identified more true cases than the expert readers, achieving higher recall and a stronger overall balance between precision and recall. Clinicians showed very high precision with few false alarms, yet they missed more aspirated cases in this radiolucent context. The contrast reflects a familiar trade-off in diagnostic work: minimising false positives while ensuring that true positives are not overlooked. In scenarios where aspiration can be life-threatening and signs are subtle, improved recall can be particularly valuable. 
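The trade-off discussed above is captured by precision, recall and their harmonic mean (F1). The counts below are invented for illustration — they are not the study's figures — and simply contrast a high-precision reader who misses cases with a higher-recall model.

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall and F1 from confusion counts.
    (Counts passed in below are illustrative, not study data.)"""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical contrast: few false alarms but missed cases (reader)
# versus a couple of false alarms and far fewer misses (model)
reader = precision_recall_f1(tp=8, fp=0, fn=4)    # precision 1.00, recall ~0.67
model = precision_recall_f1(tp=11, fp=2, fn=1)    # precision ~0.85, recall ~0.92
```

With these toy numbers the model trades a little precision for substantially higher recall and a better F1 — the pattern the independent evaluation reported for the radiolucent setting, where a missed aspiration is the costlier error.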

 

The work is open about constraints. Because the analysis was retrospective, selection bias cannot be excluded. The overall number of confirmed radiolucent cases was modest, reflecting clinical reality. To offset these factors, data were pooled from several institutions that serve diverse populations and use different CT systems, adding heterogeneity that brings the evaluation closer to real-world practice. Code for the implementation has been made publicly accessible, supporting reproducibility and independent scrutiny. 

 

An AI pipeline that first maps the airways, then evaluates multiple perspectives, shows promise for detecting radiolucent FBA on CT. Consistent accuracy across settings and higher recall than experienced readers in an independent comparison suggest potential to surface cases where visual signs are easy to miss. Although retrospective design and cohort size limit generalisation, the emphasis on strict separation, multi-centre data and transparent methods provides a credible foundation. The approach offers a reproducible aid that can be integrated into existing workflows to support timely recognition and intervention. 

 

Source: npj digital medicine

Image Credit: iStock 


References:

Liu X, Chen Z, Tang Z et al. (2025) Automated detection of radiolucent foreign body aspiration on chest CT using deep learning. npj Digit Med; 8, 647. 


