Referring physicians who rely on radiology services are central to how artificial intelligence is adopted in clinical pathways. A survey of licensed doctors in Germany explored how these clinicians view AI in radiological diagnosis, what builds their trust and which applications they value most. The sample included internists, surgeons and general practitioners, recruited across all federal states via institutional websites, with responses analysed using standard statistical methods. Overall sentiment was positive on average, yet views varied, and specific concerns persisted around the opacity of algorithms, questions of responsibility and data protection. Physicians placed the greatest value on AI that supports lesion detection, along with tools for analysing large datasets and managing workflow. The findings highlight where confidence is strongest and where barriers remain for AI integration in everyday radiology referrals.
Respondents and Their Views
From 2,195 screened contacts, 453 physicians were invited and 169 completed the key survey items, yielding a completion rate of approximately 37%. The cohort comprised 68 internists, 41 surgeons and 60 general practitioners. All general practitioners worked in private practice, while internists and surgeons were hospital-based. Among 109 hospital respondents, 87 were assistant physicians, 20 were senior physicians and one was a chief physician. The average completion time was 10 minutes.
Respondents reported an average of 9.8 years of professional experience (median 5 years, mode 3 years), with a range from less than a year to 42 years. Experience differed by specialty: internists averaged 6.9 years, surgeons 11.5 years, and general practitioners 19.2 years, with the difference reaching statistical significance. Perceptions of AI in radiological diagnostics were captured on a five-point scale. Among the 145 physicians who provided a rating, the mean score was 3.7 ± 1, indicating a generally positive outlook. Ratings of 4 and 5 accounted for 36.6% and 23.5% respectively, while 30.3% selected 3 and 9.7% gave negative ratings of 1 or 2. No significant differences in overall sentiment were observed between specialties, with specialty-level means averaging 3.6 ± 1.
Trust Hinges on Transparency, Liability and Data Protection
Trust determinants were examined across several predefined domains. Transparency of AI models was most frequently selected as the key requirement for building confidence, cited by 56.3% of respondents. Physicians emphasised the need to understand how systems operate and to see full disclosure of training datasets. Responsibility and liability followed as the next major concern, selected by 25.0%, reflecting demand for clear guidance on who is accountable for diagnostic errors or adverse outcomes when AI contributes to decisions. Data protection audits were identified by 11.7% as essential to trust, signalling continued attention to confidentiality and secure handling of patient information. A smaller group of 7.0% highlighted additional factors. Transparency was rated significantly higher than other trust elements.
These trust patterns align with how respondents balanced perceived benefit and risk in clinical use. While many viewed AI favourably for diagnostic support, reservations centred on the so-called black-box nature of machine learning, the absence of unambiguous accountability and the lack of assurances that data governance meets appropriate standards. The distribution of trust factors did not vary significantly by specialty in subgroup analyses.
Clinical Priorities for AI Applications
When asked to prioritise AI applications in radiology, physicians placed lesion detection highest, with a total priority score of 254. Research and data analysis of large datasets, including quantification of pathologies, ranked second with a score of 219. Workflow management that supports prioritisation of image datasets, particularly where acute pathologies are suspected, ranked third at 207. Automated image quality control and automated tumour volume determination were closely placed, each with a score of 199. Automated determination of organ volumes was considered the lowest priority among the listed options, with a score of 158 from 60 participants.
Specialty-level comparisons showed broad convergence on lesion detection as the top use case. Among internists, however, automated tumour volume determination was rated as highly as lesion detection. Differences in application rankings across groups were highly statistically significant, indicating distinct patterns of perceived value even where overall sentiment was broadly positive.
Referring physicians expressed cautious optimism about AI in radiology, with average ratings indicating a favourable stance and clear preferences for applications that directly support detection and data-driven analysis. Confidence depends most on transparent model behaviour, clear allocations of responsibility and demonstrable data protection. Addressing these areas while focusing development on high-value use cases such as lesion detection, research analytics and workflow support may accelerate clinical acceptance. The results reflect views from a German cohort with varied experience levels and practice settings. They should inform implementation strategies that respond to trust requirements and application priorities without assuming uniform expectations across specialties.
Source: Insights into Imaging