Integration of artificial intelligence into healthcare systems prompts essential questions about efficacy, ethics and patient trust. Among imaging modalities, mammography stands out for its suitability for AI applications due to its standardisation and high volume of diagnostic data. However, clinical implementation hinges not only on technical validation but also on patient perception. Understanding patient acceptance of AI in this setting is critical for its responsible deployment. A recent study involving 518 women offers insights into the key demographic and experiential factors influencing perceptions of AI in breast cancer screening.
AI as an Adjunct, Not a Replacement
The majority of surveyed patients expressed a preference for AI to function as a second reader rather than as a standalone interpreter of screening mammograms. Only 4.4% of respondents supported AI-only interpretation, while 71% preferred AI as a supplementary tool alongside radiologists. This aligns with broader themes observed in earlier research, where patients valued the human element in medical diagnostics. When presented with an AI-reported abnormality, 88.9% of participants wanted a radiologist to confirm the findings. This preference for human oversight also emerged in follow-up decisions: patients were more likely to proceed with diagnostic imaging following a radiologist's recall than following an AI's.
Perceptions of AI's diagnostic accuracy relative to radiologists were mixed. A plurality of respondents (43.4%) believed both performed similarly, while smaller proportions viewed AI as either better (13.5%) or worse (26.2%). These responses reflect cautious optimism and reinforce the notion that AI is better received as a supporting tool than as a substitute for human expertise. Furthermore, participants consistently expressed the need for informed consent before AI use, with 74.1% stating that permission should be sought, underlining the importance of transparency and autonomy in AI deployment.
Demographics, Knowledge and Personal History Matter
Patient acceptance of AI showed strong correlations with education level, AI familiarity and personal medical history. Participants with a graduate or professional degree were twice as likely to accept AI use as those with lower education levels. Similarly, those reporting higher AI knowledge were significantly more accepting, indicating that education and exposure may positively influence attitudes toward new technologies.
Participants with a history of abnormal mammograms were more responsive to discrepancies between radiologist and AI assessments, showing higher rates of agreement to pursue follow-up diagnostics in such cases. Those with first-degree relatives diagnosed with breast cancer also showed a heightened desire for dual confirmation—AI and radiologist—following abnormal findings, underscoring the impact of personal and familial experience on decision-making.
Conversely, acceptance was lower among some racial and ethnic groups. Non-Hispanic Black participants were less likely to agree to AI involvement compared to non-Hispanic White participants. Hispanic and Asian participants also showed heightened concern about bias, data privacy and overall transparency. These disparities signal the need for culturally sensitive engagement strategies and equitable data practices when implementing AI technologies.
Concerns of Trust, Transparency and Accountability
Despite general support for AI in an adjunctive role, significant concerns remained. A majority of participants were moderately to extremely concerned about AI transparency (74.1%), reduced human interaction (76.6%) and bias (63.2%). Data privacy also emerged as a prominent issue, with 65% expressing moderate to high concern over the handling of their personal medical information.
Accountability in the event of AI error remains a contentious topic. In cases where cancer was missed by AI, 57.7% of participants believed all involved parties—the radiologist, institution and AI developer—should share responsibility. This multifaceted attribution of blame suggests that trust in AI is contingent on robust safety nets, legal frameworks and shared clinical governance.
Demographics again influenced perceptions of these risks. Higher education correlated with reduced privacy concerns, suggesting a better understanding of data governance mechanisms. On the other hand, older age was associated with greater concern about AI bias. Notably, racial and ethnic minorities consistently reported greater apprehension about systemic bias and fairness, which may reflect broader experiences of inequity in healthcare systems.
The study highlights a nuanced landscape of patient attitudes toward AI in screening mammography. While there is substantial support for AI as an adjunctive tool, especially among those with higher education or AI familiarity, serious concerns remain about transparency, bias, privacy and loss of human interaction. Acceptance is not universal and varies across demographic groups, indicating that successful AI implementation must be accompanied by targeted education, transparent consent processes and ongoing dialogue with patients. Addressing these concerns through inclusive design, rigorous validation and patient engagement is essential for AI to gain and retain trust in breast imaging and broader healthcare applications.
Source: Radiology: Imaging Cancer
Image Credit: iStock