Fusing multimodality neuroimaging and genetic data for dementia diagnosis
Researchers have developed a three-stage deep learning framework that integrates multimodality neuroimaging and genetic data for the diagnosis of multistatus Alzheimer's disease (AD). The framework was tested on the ADNI (Alzheimer's Disease Neuroimaging Initiative) dataset, with results showing that it outperforms other state-of-the-art methods, according to a study published in the journal Machine Learning in Medical Imaging.
AD is the most common form of dementia and typically affects individuals over 65 years old. As there is no cure for AD, accurate diagnosis of AD, and especially of its prodromal status, i.e., a multistatus dementia diagnosis problem, is highly desirable in clinical practice.
In search of an accurate biomarker for AD, data from several types of modalities have been collected and investigated. Among these, neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) are able to provide anatomical and functional information about the brain, respectively.
Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. Used in conjunction, they may improve AD diagnosis. However, as the researchers note, these data are heterogeneous (e.g., they follow different data distributions) and have different numbers of samples (e.g., far fewer PET scans are available than MRI scans or SNP profiles). Learning an effective model from such data is therefore challenging.
In this study, the researchers proposed a novel three-stage deep feature learning and fusion framework, a machine learning technique, to address the above challenges. "Each stage of the network learns feature representations for different combinations of modalities, via effective training using the maximum number of available samples," they explained. The three stages of their model are described below.
Stage 1: In this stage, the aim is to learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity between modalities can be better addressed and then combined in the next stage. The maximum number of available samples for each modality is used in this stage, for effective training.
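The per-modality feature learning in Stage 1 can be illustrated with a minimal sketch. This is not the authors' code: the sample counts, feature dimensions, and network sizes below are hypothetical, and a small scikit-learn MLP stands in for the paper's deep networks. The key point is that each modality's encoder is trained with label supervision on all samples available for that modality, so the smaller PET cohort does not limit training of the MRI or SNP encoders.

```python
# Illustrative Stage 1 sketch (hypothetical shapes, not the authors' code):
# one supervised encoder per modality, each trained on the maximum number
# of samples available for that modality.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical sample counts: fewer PET scans than MRI scans or SNP profiles.
n_mri, n_pet, n_snp = 800, 300, 800
X_mri = rng.normal(size=(n_mri, 90))    # e.g., 90 regional MRI measures
X_pet = rng.normal(size=(n_pet, 90))    # e.g., 90 regional PET measures
X_snp = rng.normal(size=(n_snp, 2000))  # e.g., 2000 SNP features
y_mri = rng.integers(0, 3, n_mri)       # 3 diagnostic statuses (multistatus)
y_pet = rng.integers(0, 3, n_pet)
y_snp = rng.integers(0, 3, n_snp)

def train_encoder(X, y, hidden=32):
    """Fit a small supervised MLP; return a latent-feature extractor."""
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), activation="relu",
                        max_iter=200, random_state=0)
    clf.fit(X, y)
    def encode(X_new):
        # Hidden-layer activations serve as the modality's latent features.
        return np.maximum(0.0, X_new @ clf.coefs_[0] + clf.intercepts_[0])
    return encode

encode_mri = train_encoder(X_mri, y_mri)
encode_pet = train_encoder(X_pet, y_pet)
encode_snp = train_encoder(X_snp, y_snp)

Z_mri = encode_mri(X_mri)   # latent MRI features, shape (800, 32)
Z_pet = encode_pet(X_pet)   # latent PET features, shape (300, 32)
```

Because each encoder is label-supervised, the latent features are already discriminative for the diagnosis task, in contrast to unsupervised alternatives such as a stacked auto-encoder.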
Stage 2: Here, the joint latent features for each pairwise combination of modalities are learned from the high-level features produced in the first stage. Fusing complementary information from different modalities is intended to further improve the performance of the model.
Stage 3: After training the networks in Stage 2, joint representations between any pair of modalities can be obtained and then used to produce a final prediction.
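Stages 2 and 3 can be sketched in the same spirit. Again, this is a hypothetical illustration rather than the authors' implementation: stage-1 latents are simulated as random matrices, a small MLP per modality pair plays the role of the stage-2 joint-feature network, and a simple logistic-regression classifier stands in for the final stage-3 predictor.

```python
# Illustrative Stage 2-3 sketch (hypothetical shapes, not the authors' code):
# learn a joint representation per modality pair from stage-1 latents,
# then fuse the pairwise representations for the final multistatus prediction.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 300, 32                       # subjects having all three modalities
Z_mri = rng.normal(size=(n, d))      # stand-ins for stage-1 latent features
Z_pet = rng.normal(size=(n, d))
Z_snp = rng.normal(size=(n, d))
y = rng.integers(0, 3, n)            # 3 diagnostic statuses

def joint_features(Za, Zb, y, hidden=16):
    """Stage 2: learn a supervised joint representation for one pair."""
    net = MLPClassifier(hidden_layer_sizes=(hidden,), activation="relu",
                        max_iter=200, random_state=0)
    X_pair = np.hstack([Za, Zb])
    net.fit(X_pair, y)
    # Hidden-layer activations are the pair's joint latent features.
    return np.maximum(0.0, X_pair @ net.coefs_[0] + net.intercepts_[0])

# Joint features for the three modality pairs, concatenated.
J = np.hstack([joint_features(Z_mri, Z_pet, y),
               joint_features(Z_mri, Z_snp, y),
               joint_features(Z_pet, Z_snp, y)])

# Stage 3: a final classifier on the fused pairwise representations.
final = LogisticRegression(max_iter=1000).fit(J, y)
pred = final.predict(J)              # multistatus diagnosis per subject
```

The design choice this illustrates: only the pairwise stage requires subjects with both modalities present, while the per-modality encoders of Stage 1 were already trained on each modality's full sample set.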
The proposed framework was compared with three popular dimension-reduction methods, i.e., Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), and Lasso. To fuse the three modalities for these comparison methods, the researchers concatenated the feature vectors of the multimodality data into a single long vector.
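The concatenation baseline described above can be sketched as follows, using PCA as the example reducer. The shapes are hypothetical and the classifier on top is an assumption for illustration; the point is only that all modalities are flattened into one long vector before dimension reduction, rather than being learned stage-wise.

```python
# Minimal sketch of the concatenation baseline (hypothetical shapes):
# join the three modalities into one long vector, reduce with PCA,
# then classify.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 300                               # subjects with all three modalities
X_mri = rng.normal(size=(n, 90))
X_pet = rng.normal(size=(n, 90))
X_snp = rng.normal(size=(n, 2000))
y = rng.integers(0, 3, n)

X_concat = np.hstack([X_mri, X_pet, X_snp])   # single long feature vector
baseline = make_pipeline(PCA(n_components=50),
                         LogisticRegression(max_iter=1000))
baseline.fit(X_concat, y)
pred = baseline.predict(X_concat)
```

A drawback of this scheme, which the stage-wise framework avoids, is that the concatenated design can only use subjects for whom every modality is available.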
"Compared with another deep feature learning strategy, i.e., SAE [Stacked Auto-Encoder], its learned high-level features did not perform as well as ours, probably due to the fact that SAE is an unsupervised feature learning method that does not consider label information," the researchers say. "In addition, the good performance of our proposed framework could also be due to the stage-wise feature learning strategy, which uses the maximum number of available samples for training."
Source: Machine Learning in Medical Imaging
Image Credit: Pixabay
Published on : Tue, 6 Feb 2018