Large language models (LLMs) such as DeepSeek are increasingly being used to support health-related decision-making by everyday users. These generative AI systems offer conversational, on-demand answers to a wide range of queries, including those related to symptoms, diagnoses and treatments. However, while interest is growing, the decision to trust and adopt these tools for personal health care depends on multiple interrelated factors. A recent study involving users from India, the United Kingdom and the United States explored how perceived ease of use, usefulness and risk shape trust in DeepSeek, and how trust in turn influences the intention to use it for health purposes. The findings identify trust as the central mechanism enabling adoption and highlight the importance of understanding user psychology when designing AI for sensitive health contexts.
Trust as the Critical Enabler
Trust emerged as the most influential factor determining user intention to adopt DeepSeek for health-related tasks. The study showed that although ease of use does not directly drive intention, it plays a major role in building trust, which in turn drives adoption. Users who find the system easy to understand and interact with are more likely to regard it as credible. In the health domain, where the consequences of poor advice can be serious, users must have confidence not only in the system’s design but also in the reliability of its outputs.
Perceived usefulness also played a strong role, both directly and indirectly. Participants who felt DeepSeek provided relevant and effective support were more inclined to use it and to trust it. For example, the ability to clarify medical terminology or interpret clinical documents can offer immediate, tangible value, which reinforces confidence in the tool. However, this relationship is not one-directional; trust itself enhances perceptions of usefulness, especially in repeat interactions. These findings reflect the layered nature of user behaviour, where a sense of benefit is often strengthened by an underlying belief in the system’s trustworthiness.
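As a rough illustration of this mediation structure, and not a reproduction of the study’s analysis, the sketch below fits two regressions on simulated Likert-style scores: trust predicted by ease of use and usefulness, and intention predicted by trust alongside the direct paths. All variable names, coefficients and data are illustrative assumptions.

```python
# Hypothetical sketch of trust as a mediator between ease of use /
# usefulness and intention to use, on simulated survey scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
ease = rng.normal(5, 1, n)      # perceived ease of use (assumed scale mean ~5)
useful = rng.normal(5, 1, n)    # perceived usefulness
# Assumption: trust depends on both ease of use and usefulness.
trust = 0.4 * ease + 0.5 * useful + rng.normal(0, 1, n)
# Assumption: intention depends on trust and usefulness, with little direct effect of ease.
intent = 0.6 * trust + 0.3 * useful + rng.normal(0, 1, n)
df = pd.DataFrame(dict(ease=ease, useful=useful, trust=trust, intent=intent))

# Path a: ease of use -> trust (controlling for usefulness)
a = smf.ols("trust ~ ease + useful", data=df).fit()
# Path b: trust -> intention, with the direct paths included
b = smf.ols("intent ~ trust + ease + useful", data=df).fit()

indirect_ease = a.params["ease"] * b.params["trust"]
print(f"Direct effect of ease on intent:  {b.params['ease']:.3f}")
print(f"Indirect effect via trust:        {indirect_ease:.3f}")
```

In this toy setup the indirect path through trust dominates the direct path, mirroring the pattern the study reports for ease of use.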
Risk Perception as a Limiting Factor
While ease of use and usefulness promote trust, risk perception works against it. Concerns about privacy, data handling and incorrect outputs were common and significantly reduced the likelihood that users would adopt DeepSeek for health purposes. This suggests that even if a system is helpful and intuitive, fears about data misuse or potential harm can undermine its uptake.
Interestingly, the relationship between risk and intent was not linear. Users who perceived a moderate level of risk did not necessarily avoid the platform and may simply have used it more cautiously. However, those who perceived either very high or very low risk were less likely to engage. High perceived risk clearly discouraged use, but extremely low perceived risk appeared to diminish critical engagement, suggesting some users might become complacent. This pattern implies that a moderate degree of caution may actually benefit adoption, prompting users to treat the AI as a supportive tool rather than a definitive authority.
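One simple way to test for this kind of curvilinear pattern is to add a squared risk term to a regression. The sketch below does this on simulated scores; the inverted-U shape is built into the fake data as an assumption, not taken from the study’s results.

```python
# Illustrative check for a curvilinear (inverted-U) relationship between
# perceived risk and intention to use, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
risk = rng.uniform(1, 7, 400)                                  # perceived risk score
intent = -0.5 * (risk - 4) ** 2 + 4 + rng.normal(0, 1, 400)    # peak near moderate risk (assumed)
df = pd.DataFrame(dict(risk=risk, intent=intent))

model = smf.ols("intent ~ risk + I(risk ** 2)", data=df).fit()
print(model.params)  # a negative coefficient on I(risk ** 2) indicates an inverted-U shape
```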
Developers and health providers should therefore aim for transparency that conveys both the strengths and limitations of AI. Overstating accuracy or safety could backfire by creating unrealistic expectations or suspicion. Conversely, candid messaging about how the system works, where it might fall short and how users should apply the information can encourage responsible use and foster a balanced sense of trust.
Threshold Effects and Design Implications
Another key finding of the study is that user intent does not always increase proportionally with improvements in usability or perceived value. Instead, these relationships often follow threshold or plateau patterns. For example, once a system reaches a certain level of ease of use, further refinements may no longer have a meaningful effect on adoption. Likewise, perceived usefulness appears to follow a curve in which extremely high levels can trigger scepticism rather than boosting confidence.
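A plateau of this kind can be pictured as a saturating curve: gains in intention flatten once ease of use clears a threshold. The sketch below fits such a curve to simulated data; the functional form and parameter values are illustrative assumptions, not estimates from the study.

```python
# Minimal sketch of a threshold/plateau pattern: intention rises with ease
# of use, then levels off. All data and parameters are simulated.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
ease = rng.uniform(1, 7, 300)
# Assumed pattern: intention climbs steeply around a midpoint, then saturates.
intent = 5 / (1 + np.exp(-2 * (ease - 3.5))) + rng.normal(0, 0.3, 300)

def plateau(x, top, slope, midpoint):
    """Logistic curve that saturates at 'top' once x passes 'midpoint'."""
    return top / (1 + np.exp(-slope * (x - midpoint)))

params, _ = curve_fit(plateau, ease, intent, p0=[5, 1, 3.5])
print(dict(zip(["top", "slope", "midpoint"], params.round(2))))
```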
These nonlinear dynamics have practical implications. Design teams should focus on meeting core expectations first, such as intuitive navigation and clear responses. Beyond that threshold, effort may be better spent on improving reliability and transparency than on further refining the interface. For users with low digital literacy, more attention may need to be paid to basic usability, while more experienced users may value customisation or specialised functionality.
Similarly, communications around safety and utility should avoid extremes. Rather than claiming flawless performance, it may be more effective to highlight practical use cases, share error rates and clarify which health scenarios are suitable for AI support. Health care is a high-stakes environment where credibility is critical. Striking the right balance between approachability and realism is key to maintaining user engagement.
The adoption of AI systems such as DeepSeek for health care use is shaped by more than functionality. Trust plays a decisive role, serving as the mechanism through which ease of use and perceived usefulness translate into user intent. At the same time, perceptions of risk—whether too high or too low—can suppress adoption. Nonlinear patterns in user behaviour further complicate the picture, suggesting that design and communication must be carefully tuned to avoid diminishing returns.
For developers and health care providers, these insights offer valuable guidance. Creating a system that is both user-friendly and transparent, while also managing expectations about its capabilities, is more likely to encourage responsible, long-term use. Understanding the subtle dynamics of trust and risk will be essential to supporting safe, effective engagement across diverse user groups.
Source: JMIR Human Factors
Image Credit: iStock