Artificial intelligence is reshaping clinical practice, yet its paediatric use remains limited and unevenly governed. Evidence shows a pronounced gap between adult and paediatric applications, alongside growing regulatory attention to software as a medical device. Paediatric care adds challenges spanning assent, privacy, data scarcity and the heightened impact of errors. A transparent, equitable and developmentally aware governance approach is required to ensure safe deployment and sustained performance monitoring. Aligning ethical principles with practical oversight can help health systems integrate AI responsibly while protecting children’s rights and delivering clinical value.
Uneven Adoption and Regulatory Signals
The imbalance between adult and paediatric AI is stark. The broader literature suggests an 8.6:1 ratio of adult-to-paediatric publications across disciplines, widening to 26:1 when restricted to AI-related terms. Data abundance in adults, the ethical and regulatory complexity of research with children and weaker commercial incentives all contribute to the gap, risking an innovation trajectory that sidelines paediatric needs in model design and evaluation.
Regulatory activity clarifies where paediatric AI has advanced. An assessment of US Food and Drug Administration approvals identified 189 submissions explicitly indicated for paediatric use. Submissions were sparse before 2017, then rose to regular double-digit quarterly counts from 2021, peaking at 20 in 2023. Radiology dominates with about 80% of submissions (151), followed by neurology (8%, 15) and cardiovascular applications (5%, 10). Anaesthesiology, haematology and clinical chemistry together account for less than 10%, underscoring the skew toward imaging.
The pathway profile is concentrated: the 510(k) route accounts for 97.4% of submissions (184), with the remaining 2.6% (5) cleared via De Novo. By country of origin, the United States leads (103), followed by Japan (14) and South Korea (12), with Israel, China, France and Canada contributing 8 each. Intended use reflects clinical context: diagnostic functions predominate overall (83.9%), monitoring represents 8.7%, treatment planning 2.6% and therapeutic use 1.9%. Within radiology, diagnostic use reaches 92.4%, cardiovascular devices are largely for monitoring with fewer diagnostic applications, and neurology shows a diagnostic emphasis with a meaningful monitoring component. The profile points to maturing imaging use cases alongside missed opportunities in other paediatric domains.
Paediatric-Specific Risks and Stakeholder Duties
Paediatric AI intersects with biological development: physiology and cognition vary across childhood and adolescence, data distributions shift across sub-cohorts, and gaps between chronological and developmental age complicate design and validation. Stakeholder roles multiply: children, caregivers and clinicians should be engaged from conception to deployment, yet assent and consent requirements add operational burden. Determining when a child can meaningfully assent is challenging, and non-dissent approaches may apply outside research settings. Surveys indicate that most parents want to be informed when AI supports decision-making, reinforcing the need for clear communication about a model’s use, benefits and limits.
Data scarcity is structural. Many paediatric conditions are individually rare, slowing data accrual and heightening reliance on multi-centre collaboration or alternative sources. Regulators increasingly allow real-world evidence to supplement trials, yet privacy concerns remain acute: children’s data differ from adults’, and risk profiles evolve over longer windows of potential misuse. Integrating data from multiple sources can help reduce bias but widens security and governance challenges. Off-label practice is common in paediatrics, and small distributional shifts can degrade model performance, raising the stakes for calibration and continuous monitoring.
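To make the monitoring concern concrete, the sketch below shows one common way to flag distributional shift: a population stability index comparing the case mix a model sees in deployment against its training reference. The age variables, sample sizes and the 0.2 alert threshold are illustrative assumptions, not elements of the source study.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare a feature's live distribution against its training
    reference; larger values indicate greater distributional shift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, guarding against empty bins.
    ref_p = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    cur_p = np.clip(cur_counts / cur_counts.sum(), 1e-6, None)
    return float(np.sum((cur_p - ref_p) * np.log(cur_p / ref_p)))

# Hypothetical example: patient ages at training time vs. after deployment.
rng = np.random.default_rng(0)
train_ages = rng.normal(9, 4, 5000).clip(0, 18)
live_ages = rng.normal(5, 3, 500).clip(0, 18)  # younger case mix

psi = population_stability_index(train_ages, live_ages)
if psi > 0.2:  # a common rule-of-thumb alert threshold
    print(f"PSI = {psi:.2f}: material shift - recheck calibration")
```

In a paediatric service, even a modest change in referral patterns (for example, a younger case mix after a new clinic opens) can trip such a check, prompting recalibration before errors accumulate.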
The impact of AI-related errors is amplified in childhood, where a delayed or incorrect decision can cost decades of quality-adjusted life years. Trust is fragile, particularly among groups already cautious about healthcare AI. Governance should emphasise transparency about deployment, context-specific performance and limitations. It should also set expectations for post-deployment vigilance so that decay in performance is detected and remediated before harm occurs. Clear roles for clinical teams in oversight and escalation support safer use and help maintain confidence among families.
Closing Governance Gaps with Practical Principles
Existing frameworks for AI and children provide high-level guidance, yet paediatric healthcare remains under-served. Many documents omit children entirely, tilt toward protection without enabling access to benefits, or lack concrete accountability mechanisms for products outside formal regulation. Adoption is hindered by siloed literature, limited methods for stakeholder engagement, scarce bias-mitigation strategies and minimal incentives for teams to shoulder paediatric-specific burdens such as multi-centre data collection, developmentally appropriate consent and ongoing re-evaluation.
A practical path is to anchor governance in a paediatric-centric ethical heuristic that asks whether an AI system is true, good and wise. Truthfulness requires representative data, context-specific validation and transparency about correctness, dependability and verifiability. Performance varies across hospitals and populations, and models can generalise poorly or drift over time; many deployments therefore require local training, calibration and monitoring, with clear disclosure when limitations exist.

Goodness demands that AI improves care while minimising harm and respecting autonomy and rights through developmentally appropriate assent and consent. It also requires attention to privacy, security and just treatment of variations linked to race, ethnicity, sex and socio-economic factors, as well as paediatric-specific variations in age and development.

Wisdom calls for algorithmovigilance, active stakeholder involvement and governance capable of asking the right questions across the model lifecycle. Predetermined change control plans support continuous assessment, but many tools sit outside formal regulation and need alternative accountability routes.
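As one minimal sketch of what algorithmovigilance could look like in practice, the code below audits logged post-deployment predictions within developmental age bands and flags any sub-cohort whose discrimination falls below a pre-agreed floor. The band boundaries, metric choice and 0.80 floor are illustrative assumptions; a real programme would follow the device’s predetermined change control plan.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative developmental bands; real plans would define these clinically.
AGE_BANDS = {"infant": (0, 1), "child": (1, 12), "adolescent": (12, 18)}

def audit_by_age_band(ages, y_true, y_score, floor=0.80):
    """Flag developmental sub-cohorts whose discrimination (AUROC)
    has dropped below a pre-agreed floor, so decay is caught early."""
    alerts = {}
    for band, (lo, hi) in AGE_BANDS.items():
        mask = (ages >= lo) & (ages < hi)
        # Skip bands with too few cases, or no outcome variation, to judge.
        if mask.sum() < 30 or np.unique(y_true[mask]).size < 2:
            continue
        auc = roc_auc_score(y_true[mask], y_score[mask])
        if auc < floor:
            alerts[band] = round(auc, 3)
    return alerts  # e.g. {"infant": 0.71} -> escalate to the clinical team
```

Auditing by sub-cohort rather than in aggregate matters in paediatrics because an overall metric can look healthy while one developmental group quietly degrades.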
Bias mitigation and equitable access require intentional design choices and data strategies suited to paediatrics. Collaborative networks, responsible data sharing and privacy-preserving approaches can help address small datasets and reduce bias. Governance should clarify roles and expectations, balancing the need for innovation with protections that do not block safe paediatric research or deployment. Incentives that reward rigorous paediatric governance, inclusive participation and post-deployment monitoring can help align system effort with children’s interests.
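As one illustration of how collaborative networks might pool small paediatric datasets without moving raw records, the following is a minimal sketch of federated averaging, one privacy-preserving approach among several; the sites, parameter vectors and patient counts are hypothetical, and the source does not prescribe this specific technique.

```python
import numpy as np

def federated_average(site_params, site_counts):
    """Average per-site model parameters, weighting each site by its
    local patient count; raw records never leave the site."""
    total = sum(site_counts)
    return sum(p * (n / total) for p, n in zip(site_params, site_counts))

# Hypothetical parameter vectors trained locally at three children's centres.
site_params = [np.array([0.21, 1.10]),
               np.array([0.40, 0.92]),
               np.array([0.33, 1.05])]
site_counts = [120, 450, 80]  # small paediatric cohorts per site

global_params = federated_average(site_params, site_counts)
```

Only model updates cross institutional boundaries here, which is why such schemes are attractive where individually rare conditions make any single centre’s dataset too small to train on alone.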
Paediatric AI is advancing, but concentrated in imaging and constrained by gaps in governance, data and stakeholder engagement. A paediatric-centric approach grounded in truth, goodness and wisdom can translate high-level principles into practical oversight, ensuring transparent development, equitable data practices and rigorous post-deployment monitoring. Health systems that adopt such governance will be better positioned to integrate AI safely, maintain trust among families and clinicians and deliver benefits to children while minimising harm.
Source: npj Digital Medicine
Image Credit: iStock