Researchers have developed a three-stage deep learning framework that integrates multimodality neuroimaging and genetic data for the diagnosis of multi-status Alzheimer’s disease (AD). The new framework has been tested on the ADNI (Alzheimer’s Disease Neuroimaging Initiative) dataset, with results showing that it outperforms other state-of-the-art methods, according to a study published in Machine Learning in Medical Imaging.

AD is the most common form of dementia, typically affecting individuals over 65 years old. As there is no cure for AD, accurate diagnosis of AD, and especially of its prodromal stage, i.e., a multi-status dementia diagnosis problem, is highly desirable in clinical practice.

In the search for accurate biomarkers of AD, data from different modalities have been collected and investigated. Among these, neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) provide anatomical and functional information about the brain, respectively.

Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient’s AD risk factors. Used in conjunction, they may improve AD diagnosis. However, as the researchers note, these data are heterogeneous (e.g., they have different data distributions) and have different numbers of samples (e.g., far fewer PET samples are available than MRI or SNP samples). Learning an effective model from such data is therefore challenging.

In this study, the researchers proposed a novel three-stage deep feature learning and fusion framework, a machine learning approach, to address these challenges. "Each stage of the network learns feature representations for different combination of modalities, via effective training using maximum number of available samples," they explained. The three stages of the model are described below.

Stage 1: The aim of this stage is to learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity between modalities can be better addressed before the features are combined in the next stage. For effective training, the maximum number of available samples for each modality is used.
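As an illustration, a minimal sketch of what one such modality-specific network might look like is given below, assuming a PyTorch implementation; the layer sizes, input dimensions, and variable names are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ModalityNet(nn.Module):
    """Stage 1: learn high-level features for a single modality (MRI, PET, or SNP),
    with a classification head so the learned features are label-aware."""
    def __init__(self, in_dim, hidden_dim=64, latent_dim=32, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)           # latent (high-level) representation
        return z, self.classifier(z)  # features plus diagnosis logits

# One network per modality, each trained on every subject that has that modality,
# so the MRI network can use many more samples than the PET network.
mri_net = ModalityNet(in_dim=93)     # e.g. 93 regional MRI features (illustrative)
pet_net = ModalityNet(in_dim=93)
snp_net = ModalityNet(in_dim=3000)   # illustrative number of SNPs
```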

Stage 2: Here, joint latent features are learned for each pair of modalities by using the high-level features learned in the first stage. Fusing complementary information from different modalities is intended to further improve the model's performance.
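A hedged sketch of such a pairwise fusion network, again assuming PyTorch and illustrative layer sizes:

```python
import torch
import torch.nn as nn

class PairFusionNet(nn.Module):
    """Stage 2: learn a joint representation for one pair of modalities
    (e.g. MRI+PET) from their Stage-1 latent features."""
    def __init__(self, latent_dim=32, joint_dim=32, n_classes=3):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(2 * latent_dim, joint_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(joint_dim, n_classes)

    def forward(self, z_a, z_b):
        joint = self.fusion(torch.cat([z_a, z_b], dim=1))  # fuse the two latent vectors
        return joint, self.classifier(joint)

# One fusion network per modality pair, each trained only on the subjects
# that have both modalities, on top of the Stage-1 encoders.
mri_pet_net = PairFusionNet()
mri_snp_net = PairFusionNet()
pet_snp_net = PairFusionNet()
```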

Stage 3: After training the networks in Stage 2, joint representations can be obtained for any pair of modalities; these are then combined to produce the final diagnosis prediction.
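A simple sketch of how the pairwise joint representations could feed a final classifier; the fusion-by-concatenation step and all dimensions are assumptions for illustration:

```python
import torch
import torch.nn as nn

class FinalPredictor(nn.Module):
    """Stage 3: combine the Stage-2 joint representations of all modality
    pairs and output the multi-status diagnosis (e.g. NC / MCI / AD)."""
    def __init__(self, joint_dim=32, n_pairs=3, n_classes=3):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(n_pairs * joint_dim, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, joint_reps):
        # joint_reps: list of pairwise joint features, e.g. [MRI+PET, MRI+SNP, PET+SNP]
        return self.classifier(torch.cat(joint_reps, dim=1))

predictor = FinalPredictor()
pair_features = [torch.randn(4, 32) for _ in range(3)]  # a batch of 4 subjects
logits = predictor(pair_features)                        # shape: (4, n_classes)
```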

The proposed framework was compared with three popular dimensionality reduction methods: Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), and Lasso. For these comparison methods, the three modalities were fused by concatenating the feature vectors of the multimodality data into a single long vector.
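For illustration, a minimal scikit-learn sketch of such a concatenation-plus-PCA baseline; the random data, feature dimensions, and the SVM classifier are assumptions, not details reported in the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 100                               # illustrative sample size
mri = rng.standard_normal((n_subjects, 93))    # illustrative MRI features
pet = rng.standard_normal((n_subjects, 93))    # illustrative PET features
snp = rng.standard_normal((n_subjects, 3000))  # illustrative SNP features
y = rng.integers(0, 3, n_subjects)             # e.g. NC / MCI / AD labels

# Baseline fusion: concatenate all modalities into one long vector,
# reduce its dimensionality with PCA, then classify.
X = np.concatenate([mri, pet, snp], axis=1)
baseline = make_pipeline(PCA(n_components=20), SVC())
baseline.fit(X, y)
print(baseline.score(X, y))
```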

"Comparing with another deep feature learning strategy, i.e., SAE [Stack Auto-Encoder], its learned high-level features did not perform as well as ours, probably due to the fact the SAE is an unsupervised feature learning method that did not consider label information," the researchers say. "In addition, the good performance of our proposed framework could also be due to stage-wise feature learning strategy, which uses the maximum number of available samples for training."

Source: Machine Learning in Medical Imaging
Image Credit: Pixabay


References:

Zhou T, Thung KH, Zhu X, Shen D (2017) Feature Learning and Fusion of Multimodality Neuroimaging and Genetic Data for Multi-status Dementia Diagnosis. Mach Learn Med Imaging. 2017 Sep; 10541: 132–140. doi: 10.1007/978-3-319-67389-9_16


