Point-of-care ultrasound (POCUS) is rapidly transforming clinical workflows through its portability, cost-efficiency and capacity to deliver immediate diagnostic insights. Yet the full potential of POCUS remains constrained by the need for specialised training and by its dependence on operator skill. Artificial intelligence (AI) offers a promising path to expand POCUS use across diverse settings by assisting clinicians with image acquisition, interpretation and quantification. Despite early successes, widespread adoption of AI in POCUS faces technical, clinical and ethical challenges, including limited data availability, poor explainability of outputs and the risk of biased performance. Addressing these issues is critical to deploying AI-enhanced POCUS systems that are both trustworthy and clinically impactful.
Technical Challenges in AI for POCUS
Ultrasound images differ fundamentally from natural images, making traditional AI techniques difficult to apply without adaptation. Unlike optical images, ultrasound images are formed from reflected sound waves, often resulting in low contrast, indistinct boundaries and artefacts. These characteristics hinder the effectiveness of standard computer vision methods. Furthermore, most AI models are trained on large, curated datasets of natural images, while POCUS datasets are typically small and heterogeneous. This discrepancy complicates transfer learning and limits model generalisability.
To address these limitations, AI models must be trained specifically on POCUS-acquired images and integrate domain-specific medical knowledge. Techniques such as style transfer, domain adaptation and human-in-the-loop annotation help optimise learning from limited and variable data. Self-supervised learning and probabilistic models also show promise in enhancing model robustness while minimising the need for exhaustive labelling.
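To make one of these techniques concrete, the following is a minimal sketch of contrastive self-supervised pretraining on unlabelled ultrasound frames, in the spirit of SimCLR. It assumes PyTorch and torchvision; the encoder, augmentations and batch of random frames are illustrative stand-ins, not components described in the source.

```python
# Minimal SimCLR-style self-supervised pretraining sketch for ultrasound frames.
# Assumes PyTorch/torchvision; dataset and augmentation choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Ultrasound-appropriate augmentations: no colour jitter (images are greyscale);
# mild crops, flips and blur perturb the view without destroying anatomy.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.GaussianBlur(kernel_size=9),
])

class Encoder(nn.Module):
    """ResNet backbone with a small projection head, as in SimCLR."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)  # train from scratch on POCUS data
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return F.normalize(self.head(self.backbone(x)), dim=1)

def nt_xent_loss(z1, z2, temperature=0.1):
    """Normalised temperature-scaled cross-entropy over positive pairs."""
    z = torch.cat([z1, z2], dim=0)                 # (2N, dim), rows unit-normalised
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))          # exclude self-similarity
    # Positives: view i pairs with view i+n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One training step on a batch of unlabelled frames, tiled to 3 channels.
# (Each augment() call draws fresh random parameters, applied batch-wise here
# for simplicity, so the two passes yield two distinct views of every frame.)
encoder = Encoder()
optimiser = torch.optim.Adam(encoder.parameters(), lr=3e-4)
frames = torch.rand(16, 3, 224, 224)               # stand-in for real POCUS frames
loss = nt_xent_loss(encoder(augment(frames)), encoder(augment(frames)))
loss.backward()
optimiser.step()
```

The pretrained backbone can then be fine-tuned on the small labelled POCUS set, which is where the reduction in labelling effort comes from.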
Ensuring Interpretability and Clinical Usefulness
In a clinical setting, it is not enough for an AI system to be accurate: it must also be interpretable, especially in high-stakes environments such as emergency medicine. To build trust, AI outputs must be calibrated, meaning the confidence the model reports should match how often its predictions are actually correct. Systems should also acknowledge uncertainty, deferring to the clinician rather than offering potentially misleading outputs when image quality is poor or the input is unfamiliar.
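As a sketch of what calibration and abstention can look like in code, the example below fits a temperature-scaling parameter on held-out validation outputs and then withholds a prediction when calibrated confidence falls below a threshold. The logits, labels and 0.85 threshold are hypothetical placeholders, and temperature scaling is one standard post-hoc calibration method rather than the approach reported in the source.

```python
# Post-hoc calibration (temperature scaling) plus abstention on low confidence.
# PyTorch sketch; the logits, labels and threshold are illustrative placeholders.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200):
    """Fit a single temperature T > 0 on held-out data by minimising NLL."""
    log_t = torch.zeros(1, requires_grad=True)     # optimise log T to keep T positive
    optimiser = torch.optim.LBFGS([log_t], max_iter=steps)

    def closure():
        optimiser.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimiser.step(closure)
    return log_t.exp().item()

def predict_or_abstain(logits, temperature, threshold=0.85):
    """Return (class, confidence), or (None, confidence) when the model defers."""
    probs = F.softmax(logits / temperature, dim=-1)
    confidence, pred = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None, confidence.item()             # defer to the clinician
    return pred.item(), confidence.item()

# Example: calibrate on stand-in validation outputs, then screen a new prediction.
val_logits = torch.randn(500, 2)
val_labels = torch.randint(0, 2, (500,))
T = fit_temperature(val_logits, val_labels)
decision, conf = predict_or_abstain(torch.tensor([0.4, 0.6]), T)
print(decision, round(conf, 3))                    # None when confidence < threshold
```

In practice the threshold would be chosen with clinicians, trading off how often the system defers against how often a low-quality scan slips through.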
Interpretability can be achieved by designing AI systems that output intermediate results rather than binary classifications. For example, in diagnosing developmental dysplasia of the hip (DDH), AI can segment anatomical structures, place landmarks and compute diagnostic angles, such as the alpha angle and hip coverage. These results align with clinical decision-making processes and are transparent to practitioners. Similarly, for estimating left ventricular ejection fraction, AI can track cardiac wall motion, identify key frames and apply geometric methods to calculate functional parameters, enabling visual verification by clinicians.
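As an illustration of such an intermediate, verifiable output, the sketch below computes a Graf-style alpha angle from landmark coordinates that an upstream segmentation or landmark model might supply. The landmark positions are hypothetical pixel coordinates; only the geometry (the angle between the iliac baseline and the bony roof line) follows the standard clinical definition.

```python
# Computing a Graf-style alpha angle from AI-placed landmarks (hypothetical values).
# The angle between the iliac baseline and the bony acetabular roof line is the
# kind of intermediate, clinician-verifiable output discussed above.
import numpy as np

def line_angle_deg(p1, p2, q1, q2):
    """Angle in degrees between line p1->p2 and line q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmark pixel coordinates from a segmentation/landmark model:
ilium_top, ilium_bottom = (120, 40), (122, 200)    # baseline along the ilium
roof_start, roof_end = (122, 200), (60, 250)       # bony acetabular roof

alpha = line_angle_deg(ilium_top, ilium_bottom, roof_start, roof_end)
print(f"alpha angle: {alpha:.1f} degrees")         # e.g. >= 60 is typically normal
# Because the lines themselves can be overlaid on the image, a clinician can
# check each step visually rather than trusting an opaque label.
```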
Mitigating Bias and Aligning with Clinical Values
Bias in AI systems remains a key barrier to equitable deployment. Many AI models are trained on limited, non-representative datasets that may not reflect the diversity of real-world populations or device settings. Bias can originate from multiple sources, including image acquisition equipment, operator experience and patient demographics, and can result in disproportionate errors for specific groups.
Effective strategies to mitigate bias include ensuring data diversity, incorporating images acquired by users with varied levels of expertise and actively monitoring system performance across different populations. Additionally, models must be designed to align with the values of all stakeholders (clinicians, patients and administrators), balancing sensitivity and specificity according to the clinical context. This alignment is essential when considering selective deployment or deciding whether to delay implementation in underrepresented settings.
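One lightweight version of this kind of monitoring is to stratify routine performance metrics by subgroup, as sketched below. The group names and logged predictions are placeholders; in deployment the groups might be scanner models, care sites or patient demographics.

```python
# Stratified performance monitoring: sensitivity/specificity per subgroup.
# The subgroup labels and logged predictions below are illustrative placeholders.
from collections import defaultdict

def subgroup_metrics(records):
    """records: iterable of (group, y_true, y_pred) with binary 0/1 labels."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true and y_pred:       c["tp"] += 1
        elif y_true and not y_pred: c["fn"] += 1
        elif not y_true and y_pred: c["fp"] += 1
        else:                       c["tn"] += 1
    report = {}
    for group, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else float("nan")
        spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else float("nan")
        report[group] = {"sensitivity": sens, "specificity": spec, "n": sum(c.values())}
    return report

# Example log entries: (site or demographic group, ground truth, model prediction).
log = [("site_A", 1, 1), ("site_A", 0, 0), ("site_B", 1, 0), ("site_B", 0, 0)]
for group, metrics in subgroup_metrics(log).items():
    print(group, metrics)
```

Sustained gaps between a subgroup's sensitivity or specificity and the overall rate are the signal to retrain, recalibrate or restrict deployment for that setting.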
Artificial intelligence holds great promise for enhancing the utility of point-of-care ultrasound, especially in settings where access to specialist imaging is limited. However, for AI to fulfil this promise, it must overcome critical challenges related to data scarcity, image complexity, interpretability and fairness. Training on POCUS-specific data, incorporating clinical knowledge and using transparent, calibrated models are essential steps. Just as importantly, developers and healthcare providers must engage in continuous dialogue to ensure AI systems align with real-world needs and clinical values. When implemented thoughtfully, AI can become a powerful tool to democratise diagnostic imaging and support clinicians in delivering timely, accurate and equitable care.
Source: npj Digital Medicine