With more than 60 sessions dedicated to AI and machine learning, this technology is clearly the centrepiece of this year’s edition of ECR. This session, chaired by Daniela Baditescu (Romania), showcased the efforts and initiatives undertaken in the field, spanning from the establishment of federated infrastructures for cancer imaging to the development of novel frameworks for AI deployment and assessment. Lecturers shared their experiences and shed light on key issues such as governance, education, and implementation strategies.

 

Early platform release of the federated European cancer imaging infrastructure

Ignacio Blanquer (Spain) started the session by reporting on the recent advances made by EUCAIM (https://cancerimage.eu/), a pan-European federated infrastructure for cancer images. The EUCAIM infrastructure is designed around core services including a public metadata catalogue, federated search, access negotiation, a coherent authentication and authorisation infrastructure (AAI), and distributed processing. A recent prototype release incorporates 40 image datasets covering nine cancer types, contributed by projects within the AI4HI network. These datasets, totalling over 200,000 image series from approximately 20,000 individuals, adhere to a common metadata model. The prototype features a dashboard, a public catalogue, a federated search engine, and a beta access negotiation system. The platform enables users to discover, search, request, access, and process medical imaging and clinical data flexibly. It is built on cloud and container technologies, with plans to integrate with computing infrastructures such as EGI and supercomputing centres.
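
For readers less familiar with how such a federated catalogue is typically consumed, the sketch below shows a generic, hypothetical query against a REST-style metadata catalogue; the endpoint, parameters, and response fields are illustrative assumptions and do not describe the actual EUCAIM API.

```python
# Hypothetical sketch of searching a federated metadata catalogue.
# Endpoint, parameters, and response fields are assumptions, not the EUCAIM API.
import requests

CATALOGUE_URL = "https://catalogue.example.org/api/v1/datasets"  # placeholder URL

params = {
    "cancer_type": "lung",   # filter by cancer type in the common metadata model
    "modality": "CT",        # imaging modality
    "min_subjects": 100,     # only datasets above a minimum cohort size
}

response = requests.get(CATALOGUE_URL, params=params, timeout=30)
response.raise_for_status()

# Each record is assumed to expose a few fields from the common metadata model;
# access to the underlying images would still require access negotiation.
for dataset in response.json().get("datasets", []):
    print(dataset.get("id"), dataset.get("title"), dataset.get("n_subjects"))
```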

 

Radiology AI deployment and assessment rubric (RADAR) for value-based AI in radiology

Jacob Johannes Visser (Netherlands) presented the RADAR framework, which has been adapted from Fryback and Thornbury's imaging efficacy framework and facilitates the valuation of radiology artificial intelligence (AI) from conception to local implementation. RADAR aims to provide a comprehensive method for assessing the value of AI in radiology, emphasising the importance of evaluating systems within their local contexts. It consists of seven hierarchical levels, guiding users through various stages of assessment, from technical efficacy to societal impact. RADAR is adaptable to different stages of AI development and integrates diverse study designs, including in-silico trials, randomised controlled trials, and health-economic evaluations. By prioritising local efficacy and offering a structured approach, RADAR serves as a comprehensive tool for evaluating the value of AI in radiology, aligned with the principles of value-based healthcare.
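
As a rough illustration, the hierarchy can be written down as a simple data structure. The level names below are an assumption based on Fryback and Thornbury's efficacy levels, to which RADAR adds a local-efficacy level; refer to the original RADAR publication for the exact wording.

```python
# Sketch of the RADAR hierarchy; level names are assumptions based on
# Fryback and Thornbury's efficacy hierarchy plus a local-efficacy level.
from enum import IntEnum

class RadarLevel(IntEnum):
    TECHNICAL_EFFICACY = 1      # quality of the algorithm's raw output
    DIAGNOSTIC_ACCURACY = 2     # sensitivity, specificity, AUC
    DIAGNOSTIC_THINKING = 3     # effect on the radiologist's differential
    THERAPEUTIC_EFFICACY = 4    # effect on treatment decisions
    PATIENT_OUTCOME = 5         # effect on patient health outcomes
    SOCIETAL_EFFICACY = 6       # population-level and health-economic value
    LOCAL_EFFICACY = 7          # value demonstrated in the local deployment context

# Assessment proceeds up the hierarchy, e.g. from RadarLevel.TECHNICAL_EFFICACY
# towards RadarLevel.LOCAL_EFFICACY as evidence accumulates.
```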

 

Knowledge of AI governance, perceived challenges, opportunities, and suggestions for AI implementation by UK radiographers

Nikolaos Stogiannos (Greece) presented a study investigating UK radiographers' understanding and perspectives on AI governance, recognising their pivotal role in clinical imaging and radiation therapy. An online survey was conducted via Qualtrics, targeting radiographers with AI knowledge or experience. Analysis of 88 valid responses revealed challenges including a lack of training, guidance, and funding for AI implementation. A significant portion of radiographers lacked awareness of AI evaluation methods and had received no specific AI training. Key priorities identified by respondents included robust governance frameworks, tailored training, and the involvement of patients and the public. The study emphasises the importance of effective leadership, adequate resources, and further research to maximise the benefits and mitigate the risks of AI implementation. However, limitations such as potential selection bias and a skewed geographical distribution of respondents may affect the generalisability of the findings.

 

Black box no more: a survey to explore AI adoption and governance in medical imaging and radiation therapy in the UK

Another study presented by Nikolaos Stogiannos (Greece) investigated challenges to adopting AI in medical imaging and radiation therapy (MIRT) and explored opportunities for its implementation. A survey was conducted among MIRT professionals in the UK, revealing issues such as a lack of knowledge about AI governance frameworks. The study found that prior AI training was linked to a better understanding of governance concepts. Respondents emphasised the importance of clear governance frameworks, AI training, and effective leadership for successful AI adoption. However, the study's small sample size limits the generalisability of its findings to the wider MIRT AI ecosystem in the UK.

 

Radiographer education and learning in artificial intelligence (REAL-AI)

Geraldine Doherty (United Kingdom) addressed the lack of information on education and training in artificial intelligence (AI) for staff in medical imaging. Her study aims to investigate the current provision of AI education at UK higher education institutes (HEIs) and to explore the attitudes and opinions of educators. Data were collected through online surveys distributed to HEIs and medical imaging educators in the UK and Europe. Preliminary findings indicate that while many HEIs have introduced AI into their curriculum, educators themselves have received little to no training on AI, mainly due to limited resources. There is a perceived need for AI concepts to be taught by experts. By surveying educators and HEIs separately, the study reveals a disconnect between them regarding the provision of AI education, highlighting challenges in integrating AI into the curriculum. However, the study is limited by the fact that surveys, focus groups, and interviews were conducted only in English.

 

International medical students' perceptions towards artificial intelligence in medicine: a multicentre, cross-sectional survey among 192 universities

Felix Busch (Germany) set out to investigate medical students' attitudes towards the integration of artificial intelligence (AI) in medical education and practice on a global scale. A multicentre, multinational survey was conducted among medical, dentistry, and veterinary students, assessing their preferences for AI events in the curriculum, current AI education, and attitudes towards using AI in their future careers. The majority of participants expressed positive attitudes towards AI in medicine and desired more AI education. However, they reported limited general knowledge of AI and felt unprepared to utilize AI in their professions. Subgroup analyses revealed differences in attitudes based on factors such as geographic location. The study underscores the need for increased AI education in medical curricula. Limitations include unequal regional representation and potential selection bias.

 

AI in routine teleradiology use: results of a large-scale test across Germany and Austria

Torsten Bert Thomas Moeller (Germany) presented a study investigating the impact of AI on the quality of routine teleradiological reporting in Germany and Austria. The study analysed 2,707 cranial CT (CCT) scans from 140 hospitals, using AI for haemorrhage detection and comparing the results with those of teleradiologists. Discrepant findings were evaluated by neuroradiologists. Approximately 7% of cases were found to have intracranial haemorrhage, with AI detecting cases missed by radiologists. Further analysis classified some AI detections as false positives. In-house error statistics showed a significant decrease in reported false findings for intracranial haemorrhage. The study confirms the positive effects of AI on radiological reporting quality, especially in teleradiology, although further research is needed to substantiate these findings.
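
The comparison workflow can be pictured with a small, illustrative cross-tabulation; the data frame below uses made-up cases and hypothetical column names, not the study's data.

```python
# Illustrative sketch: cross-tabulating AI haemorrhage flags against
# teleradiology reports and routing discrepant cases to expert review.
# Cases and column names are made up for demonstration.
import pandas as pd

cases = pd.DataFrame({
    "case_id": [101, 102, 103, 104],
    "ai_haemorrhage": [True, True, False, False],      # AI flag per CCT scan
    "report_haemorrhage": [True, False, False, True],  # teleradiologist's report
})

# Agreement table between AI and the original report
print(pd.crosstab(cases["ai_haemorrhage"], cases["report_haemorrhage"]))

# Discrepant cases would be adjudicated by neuroradiologists
discrepant = cases[cases["ai_haemorrhage"] != cases["report_haemorrhage"]]
print("Cases for adjudication:", discrepant["case_id"].tolist())
```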

 

Artificial intelligence should only read a mammogram when it is certain: a hybrid breast cancer screening reading strategy

The study from Sarah Delaja Verboom (Netherlands) aimed to integrate uncertainty quantification metrics into an AI breast cancer detection model and to assess their effectiveness in guiding a novel hybrid reading strategy for breast cancer screening. Uncertainty metrics were obtained through modified Monte Carlo dropout applied to a commercial AI model and used to gauge the AI's certainty in its malignancy predictions. The hybrid reading strategy relies on AI for recall decisions only when predictions are highly certain, otherwise falling back on standard radiologist double-reading. Retrospective testing on a subset of digital mammographic screenings demonstrated that this strategy, at a recall rate matching standard practice, allowed 46% of cases to be read solely by AI without compromising cancer detection rates. Leveraging uncertainty metrics also improved the AI model's performance, suggesting that an uncertainty-guided hybrid AI-radiologist reading strategy could reduce screen-reading workload by roughly 46% without decreasing screening performance.
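
To make the idea concrete, the sketch below mimics the triage logic with a stand-in model: repeated stochastic forward passes (Monte Carlo dropout) yield a spread of malignancy scores per exam, and only exams with a narrow spread are decided by AI alone. The model, thresholds, and numbers are illustrative assumptions, not the commercial system evaluated in the study.

```python
# Conceptual sketch of uncertainty-guided hybrid reading; the "model" below is
# a stand-in, and all thresholds are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_scores(mean_suspicion, n_passes=20):
    """Stand-in for n stochastic forward passes with dropout active at test time.
    Spread is made larger near 0.5 to mimic higher uncertainty for ambiguous exams."""
    spread = 0.01 + 0.15 * (1.0 - abs(mean_suspicion - 0.5) * 2)
    return np.clip(rng.normal(mean_suspicion, spread, size=n_passes), 0.0, 1.0)

RECALL_THRESHOLD = 0.5        # mean score above which AI would recall
UNCERTAINTY_THRESHOLD = 0.05  # maximum standard deviation for AI-only reading

def triage(mean_suspicion):
    scores = mc_dropout_scores(mean_suspicion)
    if scores.std() <= UNCERTAINTY_THRESHOLD:           # AI is certain enough
        return "AI recall" if scores.mean() >= RECALL_THRESHOLD else "AI no recall"
    return "radiologist double-reading"                 # fall back to standard practice

for suspicion in (0.95, 0.55, 0.05):
    print(suspicion, "->", triage(suspicion))
```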

 

Setting up a compliant data registry for research on human beings: a Swiss experience

Benoît Dufour (Switzerland) detailed the establishment of a compliant data registry (CDR) within a private radiology network in Switzerland, in accordance with the Swiss Law on Human Research (LRH), in force since 2014. The registry comprises DICOM images, examination reports, and clinical/demographic data, with key elements including governance (legal structure, consent procedures) and operational procedures (data storage, pseudonymisation). A structured organisational framework was developed, and a workflow for informed consent, including consent for AI-based analysis, was implemented, resulting in increased patient consent rates for AI-based data analysis. Results indicate successful implementation, with a high rate of consent to the reuse of research data. The experience serves as a potential model for institutions aiming to enhance healthcare outcomes through compliant reuse of imaging data, though limitations in generalisability are noted due to the specific Swiss legal context and system integration considerations.

 

Advancements in generative AI for radiological imaging

Can Ozan Tan (Netherlands) focused on the use of generative artificial intelligence (AI) in radiology to enhance image quality, reconstruct degraded data, and synthesise realistic images, with the aim of improving diagnostic accuracy and efficiency. A pipeline was developed to create artificial 2D radiological images from publicly available chest CT scans. Radiologists assessed the quality of the synthetic images, rating them close to real ones. An extended diffusion-based model was used to generate synthetic images reflecting key features of lung nodules, achieving high accuracy in distinguishing malignant from benign nodules. The study suggests that synthetic images faithfully represent radiographic features of pathology, potentially enabling tailored imaging based on individual patient profiles. However, ethical considerations regarding the use of generative AI in radiology must also be addressed.
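
For orientation, the sketch below shows the reverse (sampling) loop of a standard DDPM-style diffusion model, the general family such image generators belong to. The noise-prediction network is a placeholder returning zeros, so the loop only demonstrates the update rule, not the authors' extended model.

```python
# Conceptual DDPM-style reverse diffusion loop; the noise predictor is a
# placeholder, so this illustrates the sampling update only.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # common linear noise schedule
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

def predict_noise(x, t):
    """Placeholder for a trained U-Net that predicts the added noise at step t."""
    return np.zeros_like(x)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))    # start from pure Gaussian noise

for t in reversed(range(T)):
    eps = predict_noise(x, t)
    mean = (x - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alphas[t])
    noise = rng.standard_normal(x.shape) if t > 0 else np.zeros_like(x)
    x = mean + np.sqrt(betas[t]) * noise   # one reverse step from x_t to x_{t-1}

print(x.shape)  # a synthetic 64x64 array; meaningful images require a trained denoiser
```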

 

Improving CT justification practices with machine learning and deep learning: a multi-site study

Jaka Potočnik (Ireland) compared human experts with machine learning (ML) and deep learning (DL) models in assessing the justification of CT brain referrals. Anonymised referrals from three Irish CT centres were categorised by two radiologists and radiographers using iGuide. ML and DL models were trained to classify referrals as justified, unjustified, or potentially justified; features were extracted using various methods, and several classifiers were evaluated. The best-performing model achieved 94.4% accuracy, predicting referral justification in line with the iGuide categorisation. The findings suggest that ML- and DL-based approaches can generalise and accurately predict the justification of radiology referrals, which may help address poor justification practices across Europe.
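
As a simple illustration of one possible ML route (the study itself compared several feature-extraction methods and classifiers), the sketch below pairs TF-IDF features with a linear classifier over three iGuide-style categories, using made-up referrals.

```python
# Minimal sketch of referral justification as three-class text classification;
# the referrals and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

referrals = [
    "Sudden severe headache, worst of life, query subarachnoid haemorrhage",
    "Chronic tension-type headache, normal neurological examination",
    "Recurrent dizziness, no focal neurological signs",
]
labels = ["justified", "unjustified", "potentially justified"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram and bigram text features
    LogisticRegression(max_iter=1000),     # linear multiclass classifier
)
clf.fit(referrals, labels)

print(clf.predict(["Thunderclap headache, patient on warfarin"]))
```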

 

Artificial intelligence in automated protocolling for Finnish brain MRI referrals

Heidi Huhtanen (Finland) assessed different AI models for automating the assignment of suitable protocols and the need for contrast medium in emergency brain MRI referrals. Using Finnish referral texts, the researchers labelled the data and trained both baseline machine learning (ML) models and newer deep learning (DL) models for classification, testing variations in training data size and augmentation techniques. GPT-3 emerged as the top-performing model, with accuracies of 84% for protocol prediction and 91% for contrast medium prediction, outperforming BERT and the ML models. The DL models showed potential for improvement with larger datasets, while the ML models' performance remained stable. Limitations include class imbalance and data from only one institution. Overall, the study suggests AI's potential for automating MRI protocolling, with DL models showing promise for further enhancement with more data.
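
A hedged sketch of the DL route is shown below: fine-tuning a Finnish BERT checkpoint (here TurkuNLP's publicly available FinBERT, as one possible choice; the study's exact models and data are not reproduced) to assign a protocol label to referral text. The label set and example referrals are invented, and real referrals would be in Finnish.

```python
# Sketch of fine-tuning a BERT-style model to assign MRI protocols from
# referral text; model choice, labels, and referrals are illustrative, not the
# study's setup. Requires the transformers, datasets, and torch packages.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

PROTOCOLS = ["routine_brain", "stroke_protocol", "tumour_with_contrast"]  # invented labels

referrals = Dataset.from_dict({
    # Placeholder texts in English; the actual referrals are in Finnish.
    "text": [
        "Acute left-sided weakness and slurred speech, query stroke.",
        "Known glioma, follow-up imaging with contrast requested.",
        "Chronic headache, rule out structural cause.",
    ],
    "label": [1, 2, 0],
})

checkpoint = "TurkuNLP/bert-base-finnish-cased-v1"  # a public Finnish BERT
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(PROTOCOLS))

def encode(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

referrals = referrals.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="protocol_classifier", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=referrals,
)
trainer.train()  # in practice, a much larger labelled referral set is needed
```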

 

Image Credit: iStock

 
