Artificial intelligence is becoming increasingly embedded in medical care, offering benefits across diagnostic, therapeutic and administrative domains. While much attention has been given to attitudes towards AI tools themselves, less is known about how AI use affects the perception of physicians who implement it in their practice. Recent online research has examined how AI usage in various clinical contexts influences public judgments of doctors’ competence, trustworthiness, empathy and patient engagement.
Approach and Participant Evaluation
The research involved a quota sample of 1,276 adults recruited online in January 2025 to reflect the demographic structure of the US population. Participants were shown simulated advertisements for family doctors, designed to resemble real-world media such as social posts or billboards. Each participant was randomly assigned to view one of four versions of the advert, which varied only in whether and how it mentioned AI use. The control version made no mention of AI, while the other three conditions specified that the physician used AI for administrative, diagnostic or therapeutic purposes.
Participants were asked to assess the advertised physician across four dimensions: competence, trustworthiness, empathy and willingness to schedule an appointment. Ratings were given on a five-point scale. The analysis involved comparisons between the four groups using two-sided t tests. Significance thresholds were adjusted for multiple comparisons using Bonferroni-Holm correction. The study adhered to ethical guidelines and included written informed consent. Data were processed using R statistical software. Supplementary material provided additional methodological detail.
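The analysis described above can be sketched in code. The snippet below is illustrative only: it uses simulated ratings (not the study's data, which were analysed in R), compares each AI condition with the control group using two-sided t tests, and applies a hand-rolled Bonferroni-Holm adjustment. Group sizes and means are placeholder assumptions loosely based on the figures reported later in the article.

```python
# Illustrative sketch of the reported analysis, using simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated five-point competence ratings; means loosely follow the
# reported values (3.85 control; 3.71/3.66/3.58 for the AI conditions).
# Group sizes are an assumption (roughly 1,276 participants split four ways).
groups = {
    "control":        rng.normal(3.85, 1.0, 319).clip(1, 5),
    "administrative": rng.normal(3.71, 1.0, 319).clip(1, 5),
    "diagnostic":     rng.normal(3.66, 1.0, 319).clip(1, 5),
    "therapeutic":    rng.normal(3.58, 1.0, 319).clip(1, 5),
}

def holm_adjust(pvals):
    """Bonferroni-Holm step-down adjusted p-values."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)          # smallest p-value first
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k), then enforce
        # monotonicity so adjusted values never decrease down the list.
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(running_max, 1.0)
    return adjusted

# Two-sided independent-samples t test of each AI condition vs control.
labels, raw_p = [], []
for name, ratings in groups.items():
    if name == "control":
        continue
    _, p = stats.ttest_ind(groups["control"], ratings)
    labels.append(name)
    raw_p.append(p)

adj_p = holm_adjust(raw_p)
for name, p in zip(labels, adj_p):
    print(f"{name}: Holm-adjusted p = {p:.4f}")
```

In the study itself, this family of comparisons was run separately for each of the four rated dimensions, with significance judged against the Holm-adjusted thresholds.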
Effects on Perception and Behaviour
Across all AI conditions, participants rated the physicians less favourably than did those in the control group, whose advert made no mention of AI. In terms of perceived competence, the control group gave a mean rating of 3.85. This fell to 3.71 in the administrative AI group, 3.66 in the diagnostic AI group and 3.58 in the therapeutic AI group. Trustworthiness ratings also declined, from 3.88 in the control group to 3.66, 3.62 and 3.61 respectively in the AI groups. Empathy followed a similar pattern, with control physicians scoring 4.00, compared with 3.80, 3.82 and 3.72 across the AI scenarios.
The strongest effect was seen in participants’ willingness to make an appointment with the physician. Those in the control group gave a mean score of 3.61, whereas ratings dropped to 3.32 for administrative AI, 3.16 for diagnostic AI and 3.15 for therapeutic AI. Despite these consistent patterns, differences between the three AI types were not statistically significant. However, each was rated significantly lower than the control condition on all four dimensions. Effect sizes varied, with the smallest observed for administrative AI and the largest for therapeutic AI in relation to competence, trust and appointment intent.
Interpretation and Broader Implications
The findings point to a general public hesitancy regarding physicians who disclose the use of AI, regardless of the specific application. Although the reductions in ratings were not substantial in absolute terms, they could have practical implications, particularly because trust and personal impression play a central role in healthcare experiences and outcomes. Several factors may contribute to the reservations observed. These include a perception that doctors may become overly dependent on technology, reduced human interaction during consultations, concerns over data security and anxieties about increasing healthcare costs.
From the perspective of the physician, such perceptions could influence patient satisfaction and engagement. It may become important for clinicians to explain how AI tools support rather than replace their clinical judgement. Clear communication about the safety, privacy and patient benefit of AI technologies may help to reassure patients and strengthen the therapeutic relationship. The findings also suggest that different forms of AI use—whether administrative or clinical—do not currently alter public sentiment in significantly different ways. This indicates a broader scepticism rather than specific concerns linked to particular functions.
There are limits to how far the results can be generalised. The use of hypothetical adverts does not fully reflect real-world patient experiences, and the artificial setting may influence participant responses. The sample consisted of individuals who opted into an online survey platform, and results might differ among patients with direct healthcare needs or those already familiar with digital tools. Further investigations in clinical environments and with diverse populations could help clarify how attitudes change with increasing exposure to AI-supported care.
AI technologies bring not only technical transformations but also changes in how medical professionals are perceived. People tend to judge physicians less positively when their use of AI is highlighted, with lower ratings across trust, empathy and willingness to engage. These findings highlight the need for transparent, patient-centred communication about AI’s role in care. Building public understanding and confidence in these tools may be key to ensuring that their integration supports rather than undermines the physician-patient relationship.
Source: JAMA Network Open