At ECR 2026, Catherine Mary Jones (Brisbane, Australia) argued that “AI for the people” has to start with the reality of unequal access to imaging and specialist expertise. She linked the equity gap to workforce shortages, infrastructure constraints, governance and training, alongside the limits of algorithms trained on narrow datasets. Access was framed as a practical barrier as much as a policy challenge, including the experience of patients in remote settings who may travel “over 5 or 600 km to get to an X-ray machine”. Jones anchored the urgency by noting that “radiology involves about 70% of diagnoses around the world”.

 

The Equity Gap in Workforce and Access

Jones used global distribution data to show that radiologist availability broadly tracks national income, with the lowest-income regions generally reporting the lowest radiologist density. She put the global radiologist workforce at approximately 65,000 and highlighted how poorly that scale matches global population needs. The global average was given as 45 radiologists per million, with sub-Saharan Africa described as particularly under-served, including countries with no radiologists at all.

 

Europe was portrayed as comparatively well resourced but far from uniform. Jones cited a range of “51 to 270 radiologists per million population” across European countries, underlining that disparity persists even within Europe. The UK was used to show that high-income health systems can still face structural pressure: at about 100 radiologists per million it sits below the European average while remaining more than twice the global figure. Australia, despite its high income, was described as “middle green” on radiologist density maps, reflecting that many regional and remote areas have no radiologists at all.

 


 

On impact, Jones tied limited radiology access to poorer outcomes across a wide range of diseases and to weaker preventive health, since screening programmes that depend on imaging are difficult to deliver without local services. The argument was not that technology alone solves scarcity, but that scarcity defines what counts as an acceptable, safe pathway to diagnosis.

 

How AI Can Help and Where It Can Fail

Jones positioned AI as attractive in resource-constrained contexts because it is scalable, fast and can support consistent triage and prioritisation. Triage was presented as a pivotal lever: where radiologists are scarce, AI can push critical findings to the top of worklists and shorten time to action. She reported internal experience from the largest radiology network in Australia, with about 500 radiologists, where AI triage of CT brain findings returned results “about 35% faster overall”, even in a system where every study is still read by a radiologist.
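The triage mechanism described here amounts to re-ordering a reading worklist: studies the model flags as likely critical move ahead of routine work, while arrival order is preserved within each tier and every study is still read. A minimal sketch of that idea — the study fields, threshold value and identifiers below are illustrative assumptions, not details from the talk:

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str            # hypothetical study identifier
    arrived_at: int           # arrival order (e.g. minutes since midnight)
    ai_critical_score: float  # model's estimated probability of a critical finding

CRITICAL_THRESHOLD = 0.8      # illustrative cut-off, not a validated value

def triage(worklist: list[Study]) -> list[Study]:
    """Move AI-flagged critical studies to the front of the reading list,
    keeping first-in-first-out order within each tier. Only the reading
    order changes; nothing is auto-reported."""
    return sorted(
        worklist,
        key=lambda s: (s.ai_critical_score < CRITICAL_THRESHOLD, s.arrived_at),
    )

queue = [Study("A", 1, 0.10), Study("B", 2, 0.95), Study("C", 3, 0.90)]
print([s.accession for s in triage(queue)])  # → ['B', 'C', 'A']
```

Because Python's sort is stable and the key tuple puts flagged studies first, critical cases jump the queue without shuffling the order of routine work.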

 

Alongside strengths, Jones listed limitations that can block real-world adoption. Dataset bias and generalisability were central, framed as the risk that models trained on one population perform poorly elsewhere due to differences in ethnicity, disease prevalence and presentation. Tuberculosis was used as a recurring example of why local epidemiology matters: tools developed where TB is uncommon may not translate to settings where TB is endemic. She also described infrastructure barriers that can be basic, including environments without PACS, limited storage and constrained connectivity, where images may exist only on a scanner until capacity runs out. Regulatory inconsistency was highlighted as a barrier in regions lacking national guidance for safe AI deployment, with Europe’s CE marking and Australia’s TGA referenced as useful signals that others can adopt rather than “reinvent the wheel”.

 

Trust and explainability were treated as operational requirements rather than academic ideals. Jones argued that clinicians may not use even excellent tools if they do not understand or trust them, particularly in regions with little exposure to AI. She described a country in Southeast Asia with no regulation on AI or data privacy, where tools were being marketed widely, alongside concerns about patient data practices and quality. The overall message was that AI can narrow inequity but can also widen it if only the already-resourced half of the world adopts it first.

 

Building Equitable AI Deployment at Scale

Jones proposed an approach to “democratising AI” built around accessibility, representative data, transparency, governance and education. Cost was addressed directly, including vendor pricing models that reflect local ability to pay and the value of open solutions; she argued that “a free solution that works pretty well is better than no solution at all.” She emphasised the need to strengthen policy and governance foundations, noting that Australia published a national AI framework only last year, after earlier sector-specific frameworks, and highlighted the importance of data privacy legislation, local validation requirements and post-market surveillance to keep tools safe and effective over time.

 

On technical strategies, Jones referenced federated learning as a way to train models without data leaving an institution, alongside “more lightweight models” with lower compute requirements and combined cloud and edge approaches for settings without reliable cloud access. Workforce development was treated as non-negotiable: overstretched staff cannot absorb AI as an extra burden without upskilling and support. She cited growth in free online learning platforms, institutional and cross-country partnerships and remote mentorship models, including observerships and fellowships that make new workflows visible and replicable.
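Federated learning, as referenced above, keeps patient data on site: each institution trains locally, and only model parameters (weighted by local sample counts) are aggregated centrally. A minimal federated-averaging sketch under those assumptions — the site sizes and parameter values are made up for illustration:

```python
def federated_average(site_updates):
    """Weighted average of locally trained model parameters.

    site_updates: list of (n_samples, params) pairs, where params is a
    list of floats trained at one institution. Raw images and reports
    never leave the contributing site; only these parameters are shared.
    """
    total = sum(n for n, _ in site_updates)
    dim = len(site_updates[0][1])
    return [
        sum(n * params[i] for n, params in site_updates) / total
        for i in range(dim)
    ]

# Illustrative round: three hospitals contribute updates of different sizes.
global_params = federated_average([
    (100, [0.2, 0.4]),  # site A: 100 local studies
    (300, [0.6, 0.0]),  # site B: 300 local studies
    (100, [0.2, 0.4]),  # site C: 100 local studies
])
```

Weighting by sample count means a large teaching hospital influences the shared model more than a small rural site, while neither has to export its data.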

 

As a concrete case study, Jones focused on TB screening with chest X-ray AI/CAD tools, describing benefits in speed, consistency and scalability, particularly where radiologists are not available and decisions may be made by nurses or community health workers. She referenced a 2021 WHO publication endorsing AI/CAD use in TB screening programmes and described a later updated position that was even stronger, including the sentiment that “if you're not doing this, you need to explain why.” She suggested success should be measured through availability of imaging and accurate reporting, clinician usage and trust, turnaround times for critical findings, geographic coverage, morbidity and mortality outcomes and cost-effectiveness metrics that support sustainable expansion.

 

Jones closed by warning that AI could worsen inequity if it primarily benefits those who already have access. She argued that progress depends on deliberate choices about collaboration, regulation, education and affordability, not on the technology’s existence. In her words, “this is not about technology”, because “it’s us people, we will make that decision.”

 

Source & Image Credit: ECR 2026



