Artificial intelligence is being used in clinical care for tasks such as predicting mortality, monitoring sepsis, analysing images, generating clinical notes, screening for cancer and answering medical questions. It may also help extend care where access to clinicians is limited. Wider adoption, however, depends not only on technical capability and workflow integration but also on patient trust. A survey of 3,000 English-speaking adults with internet access examined responses to hypothetical AI-assisted visits for the diagnosis of a rash. Participants compared paired visit scenarios that varied by clinician presence, AI performance, governance arrangements and information about training data, then selected their preferred visit and rated their trust in the diagnosis. Performance had the strongest association with both trust and choice.


Performance Had the Strongest Association with Trust

AI performance was the most important factor shaping both visit choice and trust in diagnosis. When the AI was described as performing better than a specialist, the probability of selecting that visit increased by 32.5%. When performance was described as about the same as a specialist, the increase was 24.8%. Performance at the level of a general practitioner also had a strong association with preference, increasing the probability of choosing a visit by 19.1%.


Trust followed the same overall pattern. Compared with AI performing worse than a general practitioner, trust rose most when performance was above specialist level, followed by performance at specialist level and then general practitioner level. AI performing at the level of a general practitioner had nearly the same association with visit choice as the presence of a clinician. For trust, that level of performance had a greater association than clinician oversight.


Open-ended responses reinforced these results. AI performance was the most frequently cited reason for preferring one visit over another, mentioned by 25.7% of respondents. It ranked ahead of clinician presence and other attributes. The results indicate that patients placed greatest weight on whether the system appeared able to provide a reliable diagnosis. Governance and oversight mattered, but neither had as large an association with trust and preference as performance.


Clinician Presence and Data Quality Also Mattered

The presence of a clinician was also strongly associated with both trust and visit choice. A visit that included a clinician was 18.4% more likely to be chosen than one without a clinician. Trust in the diagnosis also increased when a clinician was present. Respondents therefore showed a clear preference for a human in the loop, even when AI was being used to support diagnosis.

That preference carries practical implications. Clinician oversight may strengthen confidence in care, but adequately trained clinicians are not always available, particularly in lower-resource settings and in underserved populations and specialties. A strong patient preference for clinician involvement may therefore limit some of the potential for medical AI to expand services where workforce capacity is constrained.


Information about training data also influenced responses. Participants preferred AI trained on a representative population dataset over AI for which no training data information was provided. That attribute increased the probability of choosing a visit by 11.9%. Trust in the diagnosis also rose when representative data were disclosed. By contrast, AI trained on a disproportionately White, male and wealthy dataset was neither preferred nor rejected relative to receiving no data information, although trust was slightly lower in that scenario.


The survey made these attributes explicit and highly visible. In routine care, information about training data, performance and oversight may not be clearly conveyed. The findings suggest that greater transparency around these features may support stronger patient trust in the use of medical AI.


Governance Increased Confidence, but Less Than Performance

All forms of governance examined in the survey were associated with stronger trust and greater preference than having no governance. Respondents preferred AI with national regulatory approval, AI certified by a nationally recognised medical institution and AI certified by a local hospital over AI without those forms of validation. National regulatory approval and national institutional certification had the same association with visit choice, each increasing the probability of selection by 11.1%. Local hospital certification also increased preference, but by a smaller margin of 7.8%.


The same pattern appeared in trust ratings. National regulatory approval and national institutional certification were linked to larger gains in trust than local hospital certification. Local validation still had a positive association, but it mattered less than broader forms of approval. Open-ended responses reflected that pattern as well. Governance signals were mentioned less often than performance and clinician presence, and local certification appeared less influential than national forms of validation.


The results are notable because local validation may be especially relevant to how an AI system performs in a specific care environment. Even so, respondents attached more weight to broader governance arrangements. Local governance also requires resources and capacity that are not evenly distributed across health systems. Women and men did not differ significantly in how performance, governance, clinician presence or data quality shaped their preferences, although women showed a lower overall level of trust in visits involving AI.


Patient trust in medical AI was shaped by several distinct factors, but performance had the strongest association with both trust and choice. Clinician presence also had a substantial effect, while disclosure of representative training data and formal governance mechanisms each improved acceptance. National forms of approval and certification had stronger associations than local hospital certification, but none of these factors outweighed performance. The findings indicate that patient acceptance of AI-supported care depends on a combination of technical capability, clinical oversight, transparency and governance. As medical AI becomes more integrated into care delivery, the balance between these elements is likely to play an important role in how confidently patients accept its use.


Source: JAMA Network Open

Image Credit: iStock
