At ECR 2025, the session "Sustainability and Equity in Radiology AI: Ensuring a Fair Future" brought together leading experts to discuss the environmental and ethical challenges of artificial intelligence in radiology. As AI becomes increasingly integrated into clinical practice, its impact extends beyond efficiency and accuracy to broader concerns such as sustainability, fairness and transparency. The speakers explored AI’s carbon footprint, the risk of bias in healthcare applications and the importance of responsible AI development to ensure long-term benefits for both patients and the environment.
Sustainable AI as a Shared Responsibility
Sophie Thornander (Amsterdam, Netherlands) highlighted the need for a lifecycle approach to AI sustainability, considering its environmental impact from development to deployment. She addressed the challenges of measuring AI’s energy and material consumption, stressing the absence of universal standards: “It’s really a global challenge to measure the environmental impact of AI.” Thornander outlined Philips’ three core principles for sustainable AI: using less energy, prioritising clean energy sources and reducing material consumption. She argued that environmental impact must be weighed throughout AI development, stating: “We need to start also including the environment as part of our considerations.”
How Green is Clinical AI? The Carbon Impact of AI in Clinical Routine
Jan Niklas Clusmann (Dresden, Germany) explored the carbon footprint of AI in healthcare, comparing emissions from AI model training with those of hospitals and medical conferences. He called for more energy-efficient AI models while warning of a rebound effect, whereby efficiency gains drive up overall demand: “As soon as the technology became more efficient… the demand increases much more.” Clusmann urged healthcare professionals to assess AI’s net environmental impact rather than focusing solely on individual efficiencies, concluding: “We have to think [about] the climate impact at the entire healthcare system level and not just in the training aspect.”
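To make the scale of such estimates concrete, a common first-order calculation multiplies compute time by hardware power draw, data-centre overhead (PUE) and the carbon intensity of the local electricity grid. The sketch below is illustrative only; every figure in it is an assumed placeholder, not a number reported in the session.

```python
# Illustrative back-of-the-envelope estimate of the operational CO2e of
# model training. All default values are assumptions for the example,
# not figures cited at ECR 2025.

def training_co2e_kg(gpu_hours: float,
                     avg_gpu_power_kw: float = 0.3,    # assumed ~300 W per GPU
                     pue: float = 1.5,                 # assumed facility overhead
                     grid_kgco2e_per_kwh: float = 0.4  # assumed grid intensity
                     ) -> float:
    """Energy drawn (kWh) scaled by data-centre overhead (PUE), then
    converted to kg CO2e via the grid's carbon intensity."""
    energy_kwh = gpu_hours * avg_gpu_power_kw * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Under these assumptions, 10,000 GPU-hours comes to about 1,800 kg CO2e.
print(training_co2e_kg(10_000))
```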
Equity and Bias of AI in Radiology
Judy Gichoya (Indianapolis, United States) examined AI biases in radiology, illustrating how shortcuts learned during training can reinforce existing healthcare disparities. She drew a sharp line between a model that performs poorly overall and one that is biased, explaining: “If the model doesn’t work well for everyone, that’s not a biased model. If the model doesn’t work well for a specific group of patients, that’s a biased model.” Gichoya also highlighted how AI-generated biases can influence radiologists’ decision-making, sometimes leading to a decline in performance even after AI assistance is removed. She stressed the need for careful, subgroup-level model evaluation to prevent inequities in patient care.
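One practical way to run the kind of evaluation described here is to stratify a model’s performance by patient subgroup rather than reporting a single aggregate metric. The sketch below uses synthetic data with hypothetical subgroups “A” and “B” to surface exactly the failure mode Gichoya describes: a model that looks acceptable overall but is uninformative for one group.

```python
# Illustrative subgroup-stratified evaluation on synthetic data.
# Subgroup labels "A"/"B" and all data are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(y_true, y_score, groups):
    """Overall AUC plus AUC per subgroup; a large gap between subgroups
    signals a biased model rather than one that is merely weak."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    results = {"overall": roc_auc_score(y_true, y_score)}
    for g in np.unique(groups):
        mask = groups == g
        results[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
    return results

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                      # synthetic labels
scores = np.where(y == 1, 0.7, 0.3) + rng.normal(0, 0.2, 200)
scores[100:] = rng.normal(0.5, 0.2, 100)         # group "B": model is uninformative
groups = np.repeat(["A", "B"], 100)
print(auc_by_group(y, scores, groups))           # high AUC for "A", ~0.5 for "B"
```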
Did AI Report My Scan? A Patient’s Perspective
Erik Briers (Hasselt, Belgium), a prostate cancer survivor and patient advocate, provided the patient perspective on AI in radiology. He emphasised the importance of patient trust and transparency, advocating rigorous validation of AI systems: “If it does not benefit a patient, don’t do it.” Briers questioned whether AI can deliver comprehensive diagnoses that take all relevant clinical factors into account: “Would AI look below the knee? I don’t know.” He also warned of the risk that AI-generated reports could feed back into future AI training, cautioning: “We are contaminating expertise with AI… Is this going to lead to an improvement or will this cause problems?”
The session underscored the shared responsibility of healthcare professionals, technology developers and policymakers to ensure AI adoption is both sustainable and equitable. Discussions reinforced the importance of minimising AI’s environmental impact while addressing biases that could affect patient care. From promoting responsible AI practices to fostering transparency and collaboration, the session highlighted the need for ethical oversight to secure AI’s role in a fair and sustainable future for radiology.
Source & Image Credit: ECR 2025