HealthManagement, Volume 21 - Issue 4, 2021

An overview of the latest research regarding artificial intelligence in radiology and how these findings can transform the future of the field.

Key Points

  • AI disease detection can help radiologists identify pathologies using subtle features not visible to the human eye and help prioritise images with positive findings in the read queue.
  • AI disease characterisation can help differentiate between benign and malignant tumours that appear very similar to radiologists, as well as help identify specific tumour mutations and subtypes without the need for invasive biopsy.
  • AI disease monitoring can help predict the likelihood of tumour recurrence, as well as help radiologists distinguish tumour evolution from treatment-related tissue changes.



For the past five years, research into the use of artificial intelligence (AI) in radiology has been growing at an incredible rate. A PubMed search reveals that before 2018, fewer than 500 manuscripts per year included the terms “artificial intelligence” and “radiology”. In 2018, this number roughly doubled to ~1000 articles, and in 2019 and 2020 it reached ~2000. It is clear that the use of AI in radiology is gaining momentum, primarily due to its potential to enhance the field. Many studies have shown that AI can increase radiologist efficiency, highlight urgent cases, increase diagnostic confidence, reduce workload, and help inform patient prognosis and treatment strategies. Thus, rather than competing with radiologists as once suspected, AI can actually augment radiologists in providing optimal patient care. AI has the potential to transform the work of a radiologist through three major steps in image analysis: detection, characterisation, and monitoring. This article reviews the status of current AI research in each of these categories and highlights the potential impact of these findings on future radiology practice.


Detection

Detection refers to the process of flagging and bounding a specific subregion of an image that is likely to contain a lesion or anomaly (Montagnon et al. 2020). Current technologies that help radiologists detect areas of interest are known as Computer Aided Detection (CADe) systems. However, current CADe systems are limited by a high rate of false positives and by high labour requirements, as each flag necessitates assessment by a radiologist. In addition, each CADe algorithm is task-specific and not generalisable across diseases and imaging modalities. Some studies on mammogram interpretation highlight these limitations, reporting that radiologists “rarely altered their diagnostic decisions after viewing results from predefined, feature based CADe systems” and that the use of these systems had “no statistical significance on the radiologist’s performance” (Hosny et al. 2018).


AI-based CADe systems have been proposed as a solution to the limitations of current detection technologies. AI-based detection tools use pattern recognition to evaluate large volumes of images in a timely manner (Hosny et al. 2018). Subtly suspicious areas that may otherwise have been missed by the human eye are quickly highlighted and presented to the reader (Bi et al. 2019). In addition to improving radiologist sensitivity, deep learning-based CADe systems can also aid task prioritisation, as the highlighted images can be escalated to the top of the read queue (Mouridsen et al. 2020).
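To make this triage idea concrete, the following is a minimal sketch, not the implementation of any system cited above: a hypothetical pre-trained CADe model supplies a suspicion score for each study, and the worklist is simply re-ordered so that likely-positive cases are read first.

```python
from typing import List, Mapping

def triage_worklist(study_ids: List[str], suspicion_scores: Mapping[str, float]) -> List[str]:
    """Re-order a read queue so that studies with the highest model-assigned
    suspicion of a positive finding are presented to the radiologist first.

    `suspicion_scores` is assumed to come from a pre-trained CADe model
    (a hypothetical component for the purposes of this sketch).
    """
    return sorted(study_ids, key=lambda sid: suspicion_scores.get(sid, 0.0), reverse=True)

# Example: study "B" carries the strongest model finding, so it is read first
queue = triage_worklist(["A", "B", "C"], {"A": 0.12, "B": 0.91, "C": 0.47})
print(queue)  # ['B', 'A', 'C']
```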


The potential advantages of AI-based CADe systems can already be seen in preliminary research across several areas of image screening. Recent studies have highlighted the ability of deep learning-based CADe systems to detect pulmonary nodules on CT (Rauschecker et al. 2020). In mammography, convolutional neural networks (CNNs) have been shown to outperform traditional CADe systems at low sensitivities, while showing similar performance at high sensitivities (Jung et al. 2018; Ribli et al. 2018). In addition, an AI CADe system was shown to perform comparably to human mammogram readers (Kooi et al. 2017). In a 2020 study involving coronary computed tomography angiography (CCTA), AI applications were able to accurately detect coronary artery disease in under two minutes, which could help future radiologists prioritise CCTA images with positive results for a more detailed report (Van Assen et al. 2020). AI CADe systems have also been used in neurology to identify intracranial large vessel occlusions (LVOs) with excellent sensitivity (82%) and specificity (94%); implementation of these systems can also aid task prioritisation by alerting senior physicians to the case (Mouridsen et al. 2020). All of these findings highlight the utility of AI in developing future high-performing CADe systems.


Characterisation

Characterisation refers to identifying specific qualities of a pathologic finding, such as size, extent, and internal texture. These qualities can be used to classify lesions into different diagnostic categories (benign vs malignant, pathologic subtypes). Current technologies for characterisation include Computer Aided Diagnosis (CADx) systems that, like the CADe systems described above, rely on predefined discriminative features and therefore lack generalisability (Montagnon et al. 2020). This can limit their utility, as qualitative descriptions are often difficult to define and measure quantitatively (Hosny et al. 2018). The limitation is magnified by the fact that humans can identify only a limited number of qualitative features by visual examination alone, leading to a lack of standardisation and significant variability among readers (Montagnon et al. 2020). For example, it is often difficult for human readers to accurately identify high-risk lung nodules because malignant and benign lesions can appear very similar. Furthermore, for any pathology, radiologists usually need to manually define the borders of regions of interest, risking omission of subclinical disease from analysis (Bi et al. 2019).




AI has the potential to overcome these limitations through its ability to consider a large number of qualitative features in a reproducible and timely manner. Deep learning-based algorithms are especially promising because they can learn from patient populations without the need to pre-define discriminative features. In addition, AI algorithms can account for the degree of relevance of each feature they detect (Hosny et al. 2018). Most impressively, AI CADx systems have been shown to predict tumour response to different treatment options, as they are able to detect subtle qualities that indicate different mutations and subtypes. These tools can help inform providers on which treatment strategy to try first, without the need for an invasive biopsy that may not yield a representative tissue sample.
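As an illustration of the kind of model described here, a deep learning-based CADx system is typically a convolutional classifier trained end-to-end on labelled lesion images, with no hand-crafted features. The sketch below, written with PyTorch, assumes a hypothetical dataset of single-channel 2D lesion patches labelled benign or malignant; it is not the architecture of any specific published system.

```python
import torch
import torch.nn as nn

class LesionCADx(nn.Module):
    """Minimal convolutional classifier for benign-vs-malignant lesion patches."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of single-channel lesion patches, shape (N, 1, H, W)
        return self.classifier(self.features(x))

model = LesionCADx()
logits = model(torch.randn(4, 1, 64, 64))   # 4 dummy lesion patches
probs = torch.softmax(logits, dim=1)        # per-class probabilities (benign vs malignant)
```

In practice the discriminative features emerge during training on labelled examples, which is what removes the need for the predefined feature sets used by traditional CADx systems.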


For lung nodules, CNNs have been shown to distinguish benign from malignant lesions with higher performance than traditional CADx systems, owing to their greater tolerance of image noise (Hosny et al. 2018; Nasrullah et al. 2019). Furthermore, in a study of patients with non-small cell lung cancer, AI CADx algorithms were able to use CT images to predict which cancers harboured EGFR mutations, informing potential treatment with gefitinib (Bi et al. 2019). Deep learning algorithms have also been trained to accurately classify prostate cancer on magnetic resonance imaging (MRI), which can promote early treatment and decrease the number of unnecessary prostate biopsies and prostatectomies performed (Bi et al. 2019). An additional study reported an AI system that was able to use MRI to generate brain tumour differential diagnoses at a level that exceeded human performance: the algorithm included the correct diagnosis in its top three differentials 91% of the time, outperforming academic neuroradiologists (86%), fellows (77%), general radiologists (57%), and radiology residents (56%) (Rauschecker et al. 2020). Brain MRI AI algorithms have also been able to classify gliomas into molecular subtypes by identifying imaging features associated with alterations in IDH1/IDH2, EGFR, MGMT, and/or chromosomes 1p and 19q (80% sensitivity and 95% specificity) (Bi et al. 2019). This degree of characterisation is significant because it provides a phenotype of the entire tumour, rather than only the biopsied tumour core, and can therefore more accurately inform treatment. It is clear from these studies that implementation of AI CADx systems in radiology can lead to vast improvements in diagnostic accuracy, subclassification, and treatment strategy.


Monitoring

Monitoring refers to the longitudinal follow-up of an identified pathology over time to assess for changes, either in natural history or response to treatment. In solid tumour monitoring, radiologists currently use protocols such as Response Evaluation Criteria in Solid Tumors (RECIST) or World Health Organization (WHO) criteria (Hosny et al. 2018). Although these criteria have been validated over the years, they have been criticised for their oversimplified approach that can allow more subtle features of change to be missed (Bi et al. 2019). These features could include slight variations in texture or heterogeneity (Hosny et al. 2018).
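For context, RECIST assessment reduces each follow-up scan to the sum of the longest diameters of a set of target lesions and compares that sum with the baseline and nadir values. The simplified sketch below illustrates that calculation; it omits non-target lesions and the new-lesion rules of RECIST 1.1, so it is a teaching aid rather than a complete implementation of the criteria.

```python
def recist_target_response(baseline_sum_mm: float,
                           nadir_sum_mm: float,
                           current_sum_mm: float) -> str:
    """Classify target-lesion response using simplified RECIST 1.1 rules.

    Only the sum of target-lesion diameters is considered; non-target
    lesions and new-lesion rules are omitted for brevity.
    """
    if current_sum_mm == 0:
        return "Complete response"
    # Progressive disease: >=20% increase over the smallest (nadir) sum,
    # with an absolute increase of at least 5 mm
    if (current_sum_mm - nadir_sum_mm >= 0.2 * nadir_sum_mm
            and current_sum_mm - nadir_sum_mm >= 5):
        return "Progressive disease"
    # Partial response: >=30% decrease from the baseline sum
    if baseline_sum_mm - current_sum_mm >= 0.3 * baseline_sum_mm:
        return "Partial response"
    return "Stable disease"

# Example: baseline 100 mm, nadir 60 mm, current 80 mm -> progressive disease
print(recist_target_response(100, 60, 80))
```

Because the criteria hinge on a single diameter sum, subtler changes in texture or heterogeneity are invisible to them, which is exactly the gap AI-based monitoring aims to fill.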


AI-based monitoring could supplement these protocols by capturing a large number of discriminative features that are undetectable by the human eye. The ability to identify these subtle characteristics allows AI-based monitoring systems to provide a clearer picture of tumour evolution. One of the most significant studies demonstrating the usefulness of AI in lesion monitoring described an algorithm that was able to distinguish brain tumour progression from treatment-related changes (Bi et al. 2019). Radiation and chemotherapy can cause enlargement of contrast-enhancing lesions, known as “pseudoprogression”, which is difficult for radiologists to distinguish from true tumour extension. Machine-learning algorithms have been shown to successfully discriminate between therapy-related changes and true tumour progression (Bi et al. 2019). They achieve this by combining a large number of image features that are difficult for radiologists to assess.
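In broad terms, such algorithms extract many quantitative image features from the enhancing lesion and combine them in a conventional classifier. The sketch below shows that general pattern with scikit-learn, using a hypothetical pre-computed feature table and synthetic labels; it is not the pipeline of any specific study cited here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per lesion, columns are image features
# (texture, heterogeneity, shape, enhancement statistics, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))            # 120 lesions, 50 features
y = rng.integers(0, 2, size=120)          # 1 = true progression, 0 = pseudoprogression

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validated AUC estimates how well the combined features separate the two classes
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")
```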


AI-based monitoring technologies can not only detect progression in active tumours, but also predict areas of future recurrence after tumour resection. One MRI study demonstrated that an AI algorithm was able to identify new margins of tumour cell infiltration that were undetectable to the human eye on post-contrast images (Liu et al. 2019). Using this information, the algorithm then created a predictive spatial map of the surrounding tissues, plotting the likelihood of tumour recurrence in each region (Liu et al. 2019). Such an algorithm can help physicians decide whether to extend the area of tissue resection to improve post-operative remission rates. In addition, this technology can help radiologists monitor follow-up imaging more carefully, paying special attention to areas marked with a high recurrence probability.
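Conceptually, a spatial map of this kind can be produced by applying a trained classifier to a feature vector at every voxel of the peritumoural region, yielding a per-voxel probability of recurrence. The minimal sketch below assumes a hypothetical trained model and a pre-computed per-voxel feature volume; the feature-extraction and training steps are outside its scope.

```python
import numpy as np

def recurrence_probability_map(feature_volume: np.ndarray, model) -> np.ndarray:
    """Apply a trained classifier voxel-by-voxel to build a recurrence map.

    feature_volume: array of shape (X, Y, Z, n_features) holding per-voxel
    image features (hypothetical, pre-computed from multi-parametric MRI).
    model: any fitted scikit-learn-style classifier exposing predict_proba.
    Returns an (X, Y, Z) array of recurrence probabilities.
    """
    x, y, z, n_features = feature_volume.shape
    flat = feature_volume.reshape(-1, n_features)
    probs = model.predict_proba(flat)[:, 1]   # probability of recurrence per voxel
    return probs.reshape(x, y, z)
```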


In conclusion, the opportunities for AI implementation in radiology are vast and exciting. Current research highlights the ways in which AI has the potential to enhance diagnostic imaging care. Advances in pathology detection, characterisation, and monitoring are just a few avenues by which AI is predicted to revolutionise the specialty in the coming years. Improved image analysis will lead to more timely and targeted diagnoses and therapies, which will ultimately improve patient outcomes. The growing partnership between radiologists and AI technology will redefine imaging care for years to come.


Conflict of Interest

None. 



References:

Bi WL, Hosny A, Schabath MB et al. (2019) Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin, 69(2):127-157.


Hosny A, Parmar C, Quackenbush J et al. (2018) Artificial intelligence in radiology. Nat Rev Cancer, 18(8):500-510.

  

Jung H, Kim B, Lee I et al. (2018) Detection of masses in mammograms using a one-stage object detector based on a deep convolutional neural network. PLoS One, 13(9):e0203355.


Kooi T, Litjens G, van Ginneken B et al. (2017) Large scale deep learning for computer aided detection of mammographic lesions. Med Image Anal.


Liu Z, Wang S, Dong D et al. (2019) The applications of radiomics in precision diagnosis and treatment of oncology: opportunities and challenges. Theranostics, 9(5):1303-1322.


Montagnon E, Cerny M, Cadrin-Chênevert A et al. (2020) Deep learning workflow in radiology: a primer. Insights Imaging. 11(1):22.


Mouridsen K, Thurner P, Zaharchuk G et al. (2020) Artificial Intelligence Applications in Stroke. Stroke, 51(8):2573-2579.


Nasrullah N, Sang J, Alam MS et al. (2019) Automated Lung Nodule Detection and Classification Using Deep Learning Combined with Multiple Strategies. Sensors (Basel), 19(17):3722.


Rauschecker AM, Rudie JD, Xie L et al. (2020) Artificial Intelligence System Approaching Neuroradiologist-level Differential Diagnosis Accuracy at Brain MRI. Radiology, 295(3):626-637.


Ribli D, Horváth A, Unger Z et al. (2018) Detecting and classifying lesions in mammograms with Deep Learning. Sci Rep, 8(1):4165.


Van Assen M, Muscogiuri G, Caruso D et al. (2020) Artificial intelligence in cardiac radiology. Radiol Med.