Digital health technologies have expanded access to personal metrics and health insights, from fitness trackers to continuous monitoring and self-diagnostic platforms. While these tools can support self-management, the volume and frequency of notifications, dashboards and alerts can create fatigue, anxiety and information overload. Some users misinterpret normal fluctuations as warning signs, disengage from the tools or abandon the very healthy behaviours the tools were intended to reinforce. The result is a paradox: more data does not always translate into better outcomes. Emerging artificial intelligence health companions are being explored as a way to filter, contextualise and personalise information so that what reaches the user is timely, comprehensible and actionable without adding to cognitive burden.

 

Data Deluge and Cognitive Burden

Continuous data streams can impose emotional and cognitive load, particularly when users face endless metrics and nudges without clear meaning or prioritisation. Terms such as cyberchondria reflect anxiety exacerbated by self-tracking. Misinterpretation of smartwatch electrocardiogram alerts or sleep scores can prompt concern even when individuals feel well. As information accumulates, some users become preoccupied with targets and thresholds, shifting from flexible health management to rigid tracking behaviours. This can fuel scepticism about tools, reduce engagement and ultimately worsen outcomes as users step back from both the technology and the behaviours it was meant to encourage.

 

These concerns echo long-standing issues around low-value testing in clinical settings, where unnecessary diagnostics generate false positives, incidental findings and stress. In consumer contexts, analogous “false alarms” can occur at greater scale as wearables and apps produce frequent signals without expert oversight. Existing mitigation strategies focus on filtering, consolidation and presentation. Personalised thresholds aim to reduce alert fatigue by aligning notifications to an individual’s norms rather than population averages. Consolidated dashboards bring together disparate sources, such as heart rate, sleep and glucose metrics, into a unified view. Yet these approaches still leave the interpretive burden with the user, who must decide what matters and what to do next.
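The idea of aligning notifications to an individual's norms rather than population averages can be made concrete with a small sketch. The function below is illustrative only (the name, the z-score approach and the cutoff are assumptions, not any vendor's implementation): it flags a new reading only when it deviates strongly from that user's own recent baseline.

```python
from statistics import mean, stdev

def personalised_alert(readings, new_value, z_cutoff=3.0):
    """Flag a reading only when it deviates strongly from the
    individual's own recent baseline, not a population cutoff."""
    baseline = mean(readings)
    spread = stdev(readings)
    if spread == 0:
        return new_value != baseline
    z = (new_value - baseline) / spread
    return abs(z) >= z_cutoff

# Resting heart rate history for one user (bpm)
history = [58, 60, 57, 59, 61, 58, 60, 59, 57, 60]
print(personalised_alert(history, 62))  # small drift within normal variation: False
print(personalised_alert(history, 95))  # large deviation from this user's norm: True
```

A value of 62 bpm that might trip a generic threshold stays silent here because it sits within this user's normal variation, which is the mechanism by which personalised thresholds reduce alert fatigue.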

 

Promise of AI Health Companions

AI-driven companions have been proposed as intelligent mediators that surface salient insights while suppressing noise. Large language models (LLMs) adapted for health applications exemplify this direction. A personal health LLM has been fine-tuned to interpret wearable data and generate recommendations for sleep and fitness. Across 857 expert-curated cases, fitness guidance was rated comparable to human experts, and sleep guidance received the top score 73% of the time. By distilling dense streams into prioritised, plain-language suggestions, such systems may reduce digital fatigue while maintaining sensitivity to changes that merit attention.

 


 

A complementary multi-agent architecture—the Personal Health Agent—illustrates how layered roles can operationalise this mediation. In this framework, a data science agent analyses personal and population data, a domain expert agent situates findings in medical knowledge, and a health-coach agent supports behaviour change. Beyond wearables, LLMs fine-tuned for patient comprehension can identify unfamiliar but clinically important terms in medical records and present accessible explanations, helping readers focus on key elements rather than becoming overwhelmed by jargon. In practice, LLMs have also translated continuous glucose monitoring data into concise two-week overviews that clinicians judged highly on accuracy, completeness and safety. Taken together, these early signals suggest that AI companions can triage high volumes of inputs into user-specific guidance, provided they meet standards for validation, privacy, security and ongoing oversight.
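The layered roles described above can be sketched as a simple pipeline. This is a toy illustration of the division of labour, not the published Personal Health Agent: all class names, methods and thresholds here are hypothetical, and each agent is reduced to a single rule for clarity.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    metric: str
    finding: str

class DataScienceAgent:
    """Analyses the user's raw data against a simple target."""
    def analyse(self, readings):
        avg_sleep = sum(readings["sleep_hours"]) / len(readings["sleep_hours"])
        if avg_sleep < 7:
            return Insight("sleep", f"average {avg_sleep:.1f} h, below 7 h")
        return Insight("sleep", "within typical range")

class DomainExpertAgent:
    """Situates the statistical finding in general health knowledge."""
    def contextualise(self, insight):
        if "below" in insight.finding:
            return insight.finding + "; short sleep is linked to daytime fatigue"
        return insight.finding

class HealthCoachAgent:
    """Turns the contextualised finding into one actionable suggestion."""
    def advise(self, context):
        if "below" in context:
            return "Try moving bedtime 30 minutes earlier this week."
        return "Keep up your current routine."

def run_pipeline(readings):
    insight = DataScienceAgent().analyse(readings)
    context = DomainExpertAgent().contextualise(insight)
    return HealthCoachAgent().advise(context)

print(run_pipeline({"sleep_hours": [6.0, 6.5, 5.5, 6.2, 6.8]}))
```

The point of the layering is that the user only ever sees the final coach-level suggestion; the statistical detail and medical context stay upstream unless requested.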

 

Design, Architecture and Oversight

Technical foundations for managing overload extend from ingestion to delivery. Data can be pulled via device and platform interfaces, including electronic records through standards such as FHIR, combined with natural language processing of reports and adapters that turn sensor streams into structured inputs for language models. Processing often involves normalising to individual baselines or population percentiles and aggregating over configurable windows to detect trends. While robust LLM systems dedicated to selective delivery are still emerging, related approaches include knowledge-based filters, contextual prefiltering that suppresses non-urgent notifications in specific circumstances and condition-driven signal selection that prioritises metrics aligned to diagnoses or historical patterns. Outputs can present raw data when appropriate or structured responses such as summaries and SMART recommendations.
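Two of the processing steps above, aggregation over configurable windows and contextual prefiltering of non-urgent notifications, can be sketched briefly. The functions, window size, urgency threshold and quiet hours below are all assumptions for illustration, not a described system.

```python
from datetime import time

def windowed_trend(values, window=7):
    """Aggregate over a configurable window and report the trend as the
    difference between the latest window's mean and the previous one's."""
    if len(values) < 2 * window:
        return None  # not enough history to compare two full windows
    recent = sum(values[-window:]) / window
    previous = sum(values[-2 * window:-window]) / window
    return recent - previous

def should_notify(trend, now, urgent_threshold=5.0,
                  quiet=(time(22, 0), time(7, 0))):
    """Contextual prefilter: suppress non-urgent alerts during quiet hours."""
    if trend is None:
        return False
    urgent = abs(trend) >= urgent_threshold
    in_quiet_hours = now >= quiet[0] or now < quiet[1]
    return urgent or not in_quiet_hours

hr = [60] * 7 + [62, 63, 64, 63, 65, 64, 66]   # two weeks of resting HR (bpm)
trend = windowed_trend(hr)                      # modest week-over-week rise
print(should_notify(trend, time(23, 30)))       # quiet hours, non-urgent: False
print(should_notify(trend, time(9, 0)))         # daytime: True
```

A modest upward drift is held back overnight but delivered in the morning, while a change exceeding the urgency threshold would break through at any hour.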

 

Design choices must balance usefulness with intrusiveness. Excessive autonomy risks suppressing information that users want, yet insufficient filtering reverts to overload. Clear, adjustable controls over thresholds and notification logic, transparency about what has been filtered and why, and mechanisms to escalate concerns appropriately are central to maintaining agency. Decisions about when to involve healthcare professionals versus encouraging user action require careful framing so that support does not devolve into unsupervised decision-making where clinical input is warranted.

 

Governance remains pivotal as AI companions blur lines between consumer products and medical devices. Oversight must ensure that automation augments rather than undermines clinical judgement. Safety validation, privacy and security safeguards, transparency about system capabilities and limits, and meaningful avenues for accountability are necessary to build trust. As these systems evolve, regulatory approaches will need to accommodate their hybrid nature while maintaining protections commensurate with potential impact on health decisions.

 

AI health companions offer a pathway to reconcile abundant data with human attention by filtering, contextualising and personalising information. Early evidence indicates that LLM-based tools can convert complex inputs into focused, actionable insights and can present medical information in formats that reduce cognitive load without obscuring clinically relevant detail. Realising this promise depends on sound technical architecture, user-centred design and strong governance. With thoughtful implementation, collaboration across consumers, clinicians and developers can help ensure these systems elevate understanding and engagement, align with appropriate oversight and support better self-management without adding to the burden of information.

 

Source: npj Digital Medicine

Image Credit: iStock


References:

Mahajan A & Gilbert S. (2025) Do we need AI guardians to protect us from health information overload? npj Digit Med; 8, 632. 



