A group of biomedical informaticians and computer scientists from Harvard and Stanford has developed a new machine learning system that can detect diseases on chest X-rays and requires no human annotations to learn.
AI models must be trained on relevant imaging data before they can learn to detect disease in medical images. This is often an expensive process, because clinicians need a significant amount of time to annotate images. To label a chest X-ray dataset, for example, expert radiologists must examine hundreds of thousands of X-ray images and explicitly label each one with the conditions detected.
The model, known as CheXzero, eliminates these time and cost hurdles for AI developers because it can effectively skip the image labeling process.
Instead, the new model is self-supervised. It can learn independently to detect diseases on chest X-rays by relying on clinical reports, without the need for hand-labeled data.
As Pranav Rajpurkar, PhD, assistant professor of biomedical informatics in the Blavatnik Institute at HMS, explains, “with CheXzero, one can simply feed the model a chest X-ray and corresponding radiology report, and it will learn that the image and the text in the report should be considered as similar—in other words, it learns to match chest X-rays with their accompanying report”.
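The release does not include code, but the image-report matching Rajpurkar describes is the core idea of contrastive (CLIP-style) training: embeddings of a chest X-ray and its paired report are pulled together, while mismatched pairs are pushed apart. A minimal NumPy sketch of this objective, with the function name, toy data, and temperature value all invented for illustration:

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Illustrative sketch only, not CheXzero's actual implementation:
    matched (X-ray, report) pairs sit on the diagonal of the similarity
    matrix and are treated as the correct class in a cross-entropy loss.
    """
    # L2-normalize so the dot product becomes cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarities: entry (i, j) compares image i with report j
    logits = image_emb @ text_emb.T / temperature
    labels = np.arange(len(logits))  # correct match for image i is report i

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

# Toy batch: 4 report embeddings lying close to their paired images
rng = np.random.default_rng(0)
imgs = rng.normal(size=(4, 16))
txts = imgs + 0.1 * rng.normal(size=(4, 16))
loss = contrastive_loss(imgs, txts)
```

In real training, the embeddings would come from an image encoder and a text encoder updated to minimize this loss; shuffling the reports so they no longer match their images should raise it.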
Until recently, AI models have relied on the annotation of significant amounts of data to achieve high performance. Now that this next generation of medical AI models can learn from text independently, clinical workflows stand to become more efficient.
In a study published in Nature Biomedical Engineering, CheXzero was tested against three other self-supervised AI tools and outperformed them.
The researchers are hopeful that this approach could be applied to imaging modalities beyond X-rays, such as CT scans, MRIs, and echocardiograms.
Source: Harvard Medical School
Image Credit: iStock