Patient-facing medical artificial intelligence (MAI) increasingly interacts directly with patients through conversational interfaces enabled by natural language processing and large language models. More than 120 patient-facing MAIs are in deployment or development, ranging from systems built specifically for healthcare to general-purpose tools that patients access independently. Reported performance varies, yet some chatbots have demonstrated diagnostic accuracy comparable to clinicians in defined contexts, and purpose-built systems are already used in public healthcare settings such as the NHS. As MAI becomes more visible in clinical pathways, attention is shifting from technical performance to the design choices that shape how patients perceive and relate to these systems. These choices have implications not only for trust and engagement, but also for the integrity of human clinical relationships and the ethical boundaries of care.
Anthropomorphic Design and Its Limits
Many patient-facing MAIs adopt anthropomorphic features to foster trust, reflecting a broader tendency to present AI as human-like through names, pronouns and visual representations. People often respond socially to technology, behaving politely or attributing personality and intent to non-human systems. In healthcare, this has encouraged designs that resemble clinicians, based on the assumption that trust improves engagement and outcomes.
Research into interpersonal trust has identified physical and visual traits associated with perceived trustworthiness, including facial proportions, eye characteristics, expressions and attire. Professional clothing, such as white coats, is often perceived more positively than casual dress. Translating such findings into MAI design is problematic. Many traits are culturally contingent and may reinforce bias if generalised across populations. Despite these issues, three broad design strategies are commonly considered: a single standardised appearance intended to maximise trust, automated adaptation of appearance to patient characteristics, or allowing patients to choose how their MAI appears.
Each approach carries risks. A fixed appearance may impose culturally biased norms. Automated adaptation may be seen as manipulative if it occurs without consent. User choice supports autonomy but may not align with clinical effectiveness or adherence. Beyond these concerns lies the problem of the uncanny valley, where increasing human likeness can provoke discomfort once an AI becomes almost, but not quite, human. Mismatches between realistic visual features and less natural speech or behaviour can disrupt expectations and trigger negative reactions. This phenomenon presents a structural challenge for anthropomorphic MAI design in medicine.
Human Care and the Doctor–Patient Relationship
Anthropomorphic MAI draws much of its appeal from evidence that strong doctor–patient relationships are associated with better health outcomes. This has led to the assumption that replicating relational aspects of clinical care through AI could support improved outcomes. Such reasoning relies on a reductive view of care, where relational qualities are treated as transferable components that can be engineered into machines.
Healthcare, however, is more than symptom management or measurable outcomes. Broader conceptions of healing emphasise the restoration of integrity and wholeness, particularly when cure is not possible. Illness can fragment a person’s sense of self, producing loss of control and alienation. In this context, healing involves moral and relational dimensions, not only technical intervention. Healthcare professionals act as moral agents who engage with patients through shared human vulnerability.
From this perspective, human care is irreducibly relational and cannot be fully substituted by non-human systems. Designing MAIs to resemble clinicians risks implying equivalence where none exists. In situations of serious illness, uncannily human-like MAIs may weaken or confuse human relationships rather than support them. This suggests a need for an alternative design philosophy that preserves the distinct role of clinicians while allowing MAIs to contribute meaningfully without imitation.
Xenomorphic MAI as a Complement to Care
A xenomorphic approach proposes designing MAIs to be explicitly non-human, potentially even alien in appearance, to avoid conflation with clinicians. Rather than abstract or purely functional systems, xenomorphic MAIs would be embodied and visually distinctive, creating a new category of relational agent. Embodiment is considered important because people tend to engage more effectively with embodied entities than with disembodied interfaces.
Xenomorphic MAIs could take physical or virtual forms with clearly non-human traits, such as unusual proportions, multiple limbs, non-human textures, or voices distinct from human speech patterns. While initially unfamiliar, such designs aim to establish relationships that do not compete with the doctor–patient relationship. The concept draws parallels with animal-assisted therapy, in which non-human beings provide comfort, support and therapeutic benefit without replacing human care. Relationships with therapy animals, emotional support animals and guide dogs are often meaningful and health-affirming while remaining clearly distinct from human relationships.
Several potential advantages are identified. Xenomorphic design avoids the uncanny valley by abandoning the goal of human likeness. It may reduce bias linked to human stereotypes and make clear that MAIs are not clinicians. This distinction could help preserve human relationships while enabling MAIs to play complementary roles in care. Concerns remain about patient acceptance, especially among vulnerable individuals, and about the tendency to anthropomorphise even clearly non-human systems. Design choices around standardisation, adaptation and personalisation would still apply, and xenomorphic MAIs could still be deployed as substitutes for clinicians in resource-limited settings, raising the same concerns about replacing human care.
As patient-facing MAI becomes embedded in healthcare delivery, design decisions will shape patient experience and clinical relationships. Anthropomorphic approaches seek to build trust by imitating clinicians, but risk bias, discomfort and role confusion, alongside deeper tensions about substituting uniquely human care. Xenomorphic design offers an alternative that maintains clear boundaries between human clinicians and machines while supporting meaningful, embodied engagement. By positioning MAIs as complementary non-human agents rather than artificial doctors, xenomorphic approaches may enable patient-facing MAI to support care without undermining the human relationships at its core.
Source: npj Digital Medicine
References:
Milford SR, Herger E, Eichinger J, et al. (2025) Promoting xenomorphic patient-facing AIs: the case against anthropomorphism in medical AIs. npj Digit Med 8, 667.