Digital health interventions (DHIs) are now widely used across healthcare systems to support care delivery, patient engagement and system performance. Their growth reflects increasing pressure from ageing populations, multimorbidity, workforce shortages and limited access to timely data. While DHIs are often linked to better access and empowerment, many struggle with acceptance, usability and sustained use. These gaps are frequently linked to limited involvement of end users during design and evaluation. Participatory approaches seek to address this by involving patients, healthcare professionals, developers and policymakers throughout the process. However, inconsistent reporting of participatory methods has made it difficult to compare projects, assess outcomes and learn from prior work. A new consensus-based reporting guideline aims to improve clarity and consistency in this area.
Inconsistent Reporting Limits Learning
DHIs are defined as digital tools intended to support health sector goals, in line with World Health Organization classifications. Although they have been associated with improved outcomes and access to care, real-world use often falls short of expectations. Problems with uptake, adherence and long-term engagement remain common.
Participatory design and research are intended to reduce these risks by aligning DHIs more closely with user needs and real-world settings. These approaches are characterised by shared learning, collaborative decision making and joint problem solving. In practice, however, a wide range of terms and frameworks are used, including participatory health research, public and patient involvement, user-centred design, human-centred design and co-design. These approaches overlap but are not always clearly distinguished.
Development is described as the iterative creation and refinement of a DHI, while evaluation focuses on assessing value and effectiveness during or after development. In participatory work, these activities are closely linked, with feedback often shaping further design. Despite this, reporting is often incomplete or inconsistent, particularly regarding stakeholder roles, decision processes and context. This has contributed to fragmented evidence and limited comparability across projects.
Delphi Process Builds the ParDE-DHI Checklist
To address the lack of dedicated reporting guidance, an international consensus process was conducted to develop a new checklist for participatory DHI development and evaluation. The work followed recommendations from the EQUATOR Network and used a classic Delphi approach to gather expert agreement.
The process included two online survey rounds and one virtual workshop. An initial draft checklist was developed based on a prior scoping review and an assessment of existing reporting guidelines relevant to digital health and health research. These sources were adapted to the participatory digital health context, resulting in an initial checklist of 64 items across six sections.
Experts were recruited based on their experience with DHI development, evaluation or implementation. Sixty-six experts from 23 countries participated in the first round, most of whom reported strong familiarity with participatory methods. Items were rated for importance, with consensus defined as at least 75% agreement. Based on voting and qualitative feedback, items were revised, merged, removed or added across subsequent rounds.
A final virtual workshop addressed remaining disagreements and refined wording and structure. The final guideline, ParDE-DHI (Participatory Development and Evaluation of Digital Health Interventions), comprises 68 items, including sub-items. An introductory text accompanies the checklist to support practical use.
Supporting Clearer and More Comparable Reporting
ParDE-DHI is intended to improve transparency and consistency in reporting participatory digital health work. By clarifying how participation is planned, conducted and evaluated, the guideline aims to support better comparison and interpretation across projects. It is positioned as a practical tool for both research and applied settings.
The checklist reflects the specific challenges of participatory DHI work, including the need to describe context, stakeholder involvement and evaluation criteria. It does not mandate specific frameworks or formats, allowing flexibility across disciplines and settings. The increase in item count reflects the inclusion of elements considered essential by the expert panel.
Several limitations are acknowledged, including the predominance of health science perspectives, limited geographic balance and the absence of citizen and patient contributors. Participation decreased across Delphi rounds, and the consensus threshold was relatively strict. Despite these constraints, the guideline represents a structured attempt to address long-standing reporting gaps.
ParDE-DHI provides a consensus-based checklist to improve reporting of participatory development and evaluation of digital health interventions. By addressing inconsistency and lack of detail, it supports clearer documentation, improved comparability and more effective knowledge sharing. For healthcare professionals, better reporting of participatory processes can support more informed interpretation of evidence and more confident adoption of digital health solutions.
Source: npj Digital Medicine
Image Credit: iStock