Artificial intelligence now supports reconstruction, segmentation, synthetic image generation, disease classification, triage and scheduling across radiology. Yet strong performance still depends on expert-labelled data, which are costly and slow to assemble. Active learning addresses this constraint by selecting the most informative or uncertain cases for annotation so that accuracy can be maintained while labelling effort falls. Embedded within routine workflows, this approach enables radiologists to accept, reject or refine outputs and return feedback to models, helping systems adapt as data and populations change. Effective use depends on fit with PACS-centred infrastructure, secure data handling, transparent update processes and practical education that supports hands-on use.  

 

Active Learning to Reduce Labelling Burden 

Active learning sits within semi-supervised methods, combining labelled and unlabelled data to accelerate training by querying experts only on the highest-value cases. The classic strategies are membership query synthesis, which generates synthetic samples for targeted annotation; stream-based selective sampling, which queries cases from a data stream based on model uncertainty; and pool-based sampling, which ranks informativeness across the full unlabelled dataset. While pool-based sampling can demand more computation, it aligns well with clinical relevance by capturing rare or complex pathologies for focused review. Offline use remains common, but near real-time cycles are feasible in AI-assisted workflows where new cases are periodically queried, labels are added, and models are fine-tuned. Any continuous-learning device must follow existing regulatory frameworks, supported by a data management plan covering data collection, retraining protocols, change documentation and impact assessment.
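Since the review describes these strategies in general terms, the following is a minimal illustrative sketch of pool-based uncertainty sampling, assuming a classifier that emits per-class probabilities for an unlabelled pool; the pool size, class count and annotation budget are placeholder values, not figures from the review.

```python
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per case; higher means more model uncertainty."""
    eps = 1e-12  # avoids log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain unlabelled cases."""
    return np.argsort(entropy(probs))[::-1][:budget]

# Placeholder pool: softmax outputs for 1,000 unlabelled studies, 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

query_ids = select_for_annotation(probs, budget=20)  # cases sent to radiologists
```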

 


 

A physician-in-the-loop model turns those targeted interactions into a practical mechanism for performance gains. Radiologists can accept or correct model outputs and refine regions of interest, with feedback flowing back into training. This interactive loop promotes transparency and explainability, provided interfaces are efficient and aligned to routine tasks. Evidence cited in the review indicates that such strategies can minimise redundant effort by directing attention to ambiguous cases where expert input most improves learning.  
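To make the loop concrete, here is a runnable toy sketch in which an oracle stands in for the radiologist's accept-or-correct step; the synthetic task, seed-set size and ten-case query budget are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 16))
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # synthetic ground truth (the "oracle")

labelled = list(range(50))                   # small expert-labelled seed set
pool = [i for i in range(len(X)) if i not in labelled]
model = LogisticRegression(max_iter=500)

for _ in range(5):                           # five physician-in-the-loop rounds
    model.fit(X[labelled], y[labelled])
    probs = model.predict_proba(X[pool])[:, 1]
    ambiguity = -np.abs(probs - 0.5)         # closest to 0.5 = most ambiguous
    queries = np.argsort(ambiguity)[-10:]    # ten cases routed for expert review
    for q in sorted(queries, reverse=True):  # pop from the back to keep indices valid
        labelled.append(pool.pop(q))         # oracle confirms or corrects the label
```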

 

Embedding Tools in Routine Radiology 

Integration rests on established infrastructure. PACS is a natural anchor for AI, aided by standards such as DICOM, HL7 and FHIR that support data exchange in time for radiologist review. A meta-analysis linked PACS-integrated AI to higher diagnostic accuracy and reductions in diagnostic time of up to 90 percent, though broad adoption remains limited by integration complexity and gaps in secure, automated image handling. Streamlined interfaces and clear workflow placement are therefore essential.  
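As an illustrative sketch of that handoff, the snippet below reads a DICOM object with the open-source pydicom library, extracts pixel data for inference and keeps the identifiers needed to route results back for review; the file path is a placeholder, and a live integration would use a DICOM listener such as pynetdicom or an HL7/FHIR interface rather than a file read.

```python
import pydicom

# Read a study exported from PACS as a DICOM file (path is illustrative).
ds = pydicom.dcmread("incoming/CT_0001.dcm")

image = ds.pixel_array             # numpy array handed to the AI model
study_uid = ds.StudyInstanceUID    # identifiers used to route results
series_uid = ds.SeriesInstanceUID  # back into the radiologist's worklist
```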

 

The physician-in-the-loop approach can range from simple auditing of outputs to true interactive learning that improves models over time. Recommendations from professional societies emphasise integration into the reading workflow, focus on unmet clinical needs, and transparency about model behaviour. At the same time, operational realities matter. Non-interpretive tasks already occupy a large portion of reading-room time, frequent interruptions are common, and intensive AI use can contribute to burnout. Current reimbursement does not cover labelling work, which underscores the need for incentives and for education that moves beyond short lectures to practical workshops on active learning tools.  

Commercial availability is expanding, yet few products explicitly implement active learning and public information about development can be limited, with only a minority backed by peer-reviewed evidence. Efficient feedback capture is therefore a priority. Examples include gaze-tracking to accelerate segmentation by focusing on areas under review, and user interfaces that allow quick acceptance, rejection or refinement of findings. Such mechanisms reduce friction, preserve radiologist control and help maintain performance as data drift occurs.  
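The review does not prescribe a schema, but a low-friction capture mechanism might look like the following record of one accept, reject or refine action; every field name here is an assumption rather than an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ReaderFeedback:
    """One accept/reject/refine action captured from the viewer (illustrative)."""
    study_uid: str
    finding_id: str
    action: str                            # "accept", "reject" or "refine"
    edited_contour: Optional[list] = None  # present only for refinements
    review_seconds: float = 0.0            # friction metric for the UI team
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

fb = ReaderFeedback("1.2.840.1", "nodule-3", "refine",
                    edited_contour=[(120, 88), (131, 92)], review_seconds=6.4)
```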

 

Evidence, Constraints and What Comes Next 

Across tasks, active learning can cut labelling requirements markedly. Studies show that annotating a fraction of a dataset, from about 5 to 50 percent, can achieve accuracy comparable to models trained on fully labelled sets, and fine-tuning with active learning has reduced labelling effort by more than 80 percent. Practical gains come when experts validate model-generated labels rather than annotate from scratch, enabling earlier model development with lower burden.  
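A simple way to realise validate-rather-than-annotate is confidence-based routing: confident model predictions become proposed labels for quick sign-off, while the rest go to full manual annotation. The sketch below assumes per-class probabilities and an illustrative 0.95 threshold.

```python
import numpy as np

def split_for_validation(probs: np.ndarray, threshold: float = 0.95):
    """Route cases by model confidence (threshold is an assumption)."""
    confidence = probs.max(axis=1)
    proposed_labels = probs.argmax(axis=1)                 # model-generated labels
    quick_signoff = np.where(confidence >= threshold)[0]   # expert accepts or corrects
    full_annotation = np.where(confidence < threshold)[0]  # annotate from scratch
    return proposed_labels, quick_signoff, full_annotation
```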

 

Interactive segmentation illustrates the point. Physician-guided tools on CT and MRI have reduced interaction time while sustaining or improving accuracy. In case studies, once a small share of images is labelled by humans, the remainder can be auto-labelled with expert review, supported by simple interactions such as bounding boxes, brief scribbles or a few clicks. Classification tasks have also benefited, including chest radiograph triage applications trained with far fewer manual labels through active selection of informative cases.  
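As a stand-in for richer interactive tools, the sketch below turns a single expert click into a seed for intensity-based region growing using scikit-image's flood fill; the synthetic slice, click location and tolerance are illustrative.

```python
import numpy as np
from skimage.segmentation import flood

# Synthetic "CT slice" with a bright lesion-like region plus noise.
slice_ = np.zeros((128, 128), dtype=float)
slice_[40:80, 50:90] = 1.0
slice_ += np.random.default_rng(2).normal(0, 0.05, slice_.shape)

click = (60, 70)                            # radiologist's single click (row, col)
mask = flood(slice_, click, tolerance=0.3)  # grown mask, then shown for review

# Once accepted or refined, the mask flows back as a training label,
# so subsequent cases need fewer interactions.
```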

 

Challenges remain. Deployment requires adequate compute for training and inference, efficient data transfer between PACS and AI hosts, and close collaboration among radiologists, AI scientists and IT. Safeguards are needed against performance degradation from mislabelled or conflicting data, with external testing and quality checks, plus consensus-based labelling to manage inter-reader variability. Fairness concerns can be addressed by targeted sampling to enrich under-represented groups. Continual-learning strategies must guard against catastrophic forgetting, and any update cycles must respect privacy, legal requirements and predetermined change-control plans for regulated devices. Evaluation should extend beyond technical metrics to staging accuracy, workflow efficiency, reproducibility and clinician workload.
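Consensus labelling can be as simple as a pixel-wise majority vote across annotators, sketched below with numpy; weighting readers by estimated reliability, as in STAPLE-style approaches, is a common refinement beyond this illustration.

```python
import numpy as np

def consensus_mask(masks: np.ndarray) -> np.ndarray:
    """Pixel-wise majority vote over binary masks of shape (n_annotators, H, W)."""
    votes = masks.sum(axis=0)
    return (votes > masks.shape[0] / 2).astype(np.uint8)
```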

 

Looking ahead, foundation models such as segment-anything architectures and their medical adaptations can leverage clinician feedback to scale domain-specific labelling more efficiently. Active learning can help these models acquire targeted knowledge, while re-weighting approaches maintain performance when expert and model-generated annotations are mixed. Generative models and large language models may support reporting, enhancement and synthesis within active learning loops, provided governance and implementation-science perspectives guide adoption, from the reason and means to the method and desire to use such tools.
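One way to realise such re-weighting is a loss that trusts expert labels more than model-generated ones; the sketch below uses a weighted cross-entropy with illustrative weights, since the review describes the idea in general terms.

```python
import numpy as np

def weighted_log_loss(probs, labels, is_expert, expert_w=1.0, model_w=0.3):
    """Cross-entropy that down-weights model-generated labels (weights are assumptions)."""
    eps = 1e-12
    p = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    w = np.where(is_expert, expert_w, model_w)  # per-sample weight by label source
    return -(w * np.log(p)).sum() / w.sum()
```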

 

Active learning offers a pragmatic path to high-value physician involvement in radiology AI. By focusing labelling effort on the most informative cases, it reduces workload, accelerates model improvement and supports transparent, usable tools within existing systems. Real-world success will depend on fit with PACS-centred workflows, simple and rapid feedback capture, robust evaluation and sustained education for clinicians. With these foundations in place, physician-in-the-loop active learning can help deliver reliable models that enhance decision-making and patient care while making better use of scarce expert time.  

 

Source: American Journal of Roentgenology 

Image Credit: iStock


References:

Luo M, Yousefirizi F, Rouzrokh P et al. (2025) Physician-in-the-Loop Active Learning in Radiology Artificial Intelligence Workflows: Opportunities, Challenges, and Future Directions. AJR.


