ECR 2026’s second-day session on the AI-POD project framed obesity-related cardiovascular risk as a practical AI challenge. Imaging, clinical factors and behaviour produce rich signals, but they need to be translated into validated, interpretable outputs that clinicians can use in routine care.
Extracting Risk Signals from Imaging and Multimodal Data
Philipp Seeböck described why AI-POD targets individualised prediction, stressing that obesity-related cardiovascular disease risk varies widely and is not adequately captured by current approaches. He framed today’s use of imaging as “just the tip” and argued that the project aims to assess each patient’s individual risk of cardiovascular disease: “we do not know the individual risk.”
He outlined the project’s data foundation, combining hospital imaging and clinical information with prospectively collected activity-related data. He then walked through modelling directions, including automated cardiac CT substructure segmentation, feature extraction for downstream risk modelling, and anomaly detection strategies. Seeböck also discussed multimodal learning that links radiology text with imaging, pointing out limits in generic language models for this context and the need for approaches that handle weak supervision and anatomical specificity.
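As an illustration of the feature-extraction step described above, the sketch below turns a labelled cardiac CT segmentation mask into per-structure volumes, one simple kind of imaging feature a downstream risk model could consume. The label scheme and structure names are hypothetical; AI-POD’s actual segmentation targets and features are not detailed in the session.

```python
import numpy as np

# Hypothetical label IDs for a cardiac CT segmentation mask
# (illustrative only; not the AI-POD label scheme).
STRUCTURES = {1: "left_ventricle", 2: "myocardium", 3: "left_atrium"}

def structure_volumes(mask: np.ndarray, spacing_mm: tuple) -> dict:
    """Convert a labelled 3D segmentation into per-structure volumes in mL,
    a minimal example of imaging features for downstream risk modelling."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 per voxel -> mL
    return {
        name: float(np.count_nonzero(mask == label)) * voxel_ml
        for label, name in STRUCTURES.items()
    }

# Toy example: 10x10x10 volume with 1 mm isotropic voxels
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1  # 64 voxels labelled as left ventricle
features = structure_volumes(mask, (1.0, 1.0, 1.0))
print(features["left_ventricle"])  # 64 voxels * 0.001 mL = 0.064
```

In practice such volumes would be computed from model-predicted masks and combined with clinical and activity features before risk estimation.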
Turning Models into a Decision Support Tool at the Point of Care
Johannes Grapentin focused on clinical deployment, arguing that there is a gap between documenting findings and deciding what to do next, particularly when information is multimodal and workflows are pressured. He described the challenge as information that is not contextualised, with knowledge that is hard to apply “in the moment where the action is actually needed.”
He presented the AI-POD clinical decision support system as an interface that aggregates imaging findings, clinical context and biomarkers to generate risk scores, structured summaries and decision prompts. A repeated theme was explainability and clinician control: the system must provide transparent rationale and remain supportive rather than directive. Grapentin stated that in the end, “it’s always the physician that makes the decision” and emphasised that the approach is “not a one-size-fits-all model.”
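One common way to deliver the kind of transparent rationale described above is to report not just a risk probability but each input’s contribution to it. The sketch below does this with a simple logistic model; the features, coefficients and thresholds are invented for illustration and do not reflect AI-POD’s actual model.

```python
import math

# Illustrative, hand-set coefficients for a transparent linear risk score
# (hypothetical; not the AI-POD model or its feature set).
COEFS = {"bmi": 0.05, "ldl_mmol_l": 0.30, "epicardial_fat_ml": 0.02}
INTERCEPT = -4.0

def risk_with_rationale(patient: dict):
    """Return a risk probability together with per-feature contributions,
    so a clinician can see which inputs drive the score."""
    contributions = {k: COEFS[k] * patient[k] for k in COEFS}
    logit = INTERCEPT + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob, contributions

prob, why = risk_with_rationale(
    {"bmi": 32.0, "ldl_mmol_l": 4.0, "epicardial_fat_ml": 120.0}
)
# `why` shows each feature's additive contribution to the logit,
# a simple form of the "transparent rationale" the session emphasised.
```

Linear contributions are only one explainability mechanism; the point of the sketch is that the output pairs a score with its reasons rather than a bare number.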
Early Clinical Signals and the Conditions for Trust and Adoption
Anastasia Bartashova shared early observations from the AI-POD clinical study, describing a cohort design that combines structured hospital data with continuous lifestyle and activity monitoring. She argued that obesity-related cardiovascular risk reflects multiple clinical and behavioural factors and that existing risk scores were not designed to integrate such continuous and complex data streams.
She reported initial cohort characteristics and described the imaging and monitoring approach, including practical implications such as data volume and the need for reference annotations to support algorithm development. Alongside feasibility, she emphasised the intended role of AI in care: “AI is not here to replace the human perspective in medicine. It is here to enhance it.”
Kaat Goossens then argued that technical performance alone will not deliver impact unless systems are accepted by clinicians and patients. Her stakeholder analysis explored ethical and societal concerns, barriers and enablers, including risks of bias and exclusion. On participation bias, she warned that studies relying on digital tools may skew towards better-resourced users: “you will probably get a higher socioeconomic class who can afford the newest phone.” She also framed adoption as a long-term partnership, calling for “co-creation not as a symbolic exercise, but really as a continuous and ongoing process.”
Across modelling, deployment, early feasibility data and stakeholder work, AI-POD was presented as an effort to turn fragmented multimodal signals into patient-specific risk understanding, while keeping outputs explainable, traceable and clinically workable. The session’s through-line was not simply better prediction, but better decisions: technically grounded scores, delivered at the right time, with human control and social legitimacy built in from the start.
Source & Image Credit: ECR 2026