Fusing multimodality neuroimaging and genetic data for dementia diagnosis
Researchers have developed a three-stage deep learning framework that integrates multimodality neuroimaging and genetic data for the diagnosis of multistatus Alzheimer’s disease (AD). The framework was tested on the ADNI (Alzheimer’s Disease Neuroimaging Initiative) dataset, where it outperformed other state-of-the-art methods, according to a study published in the journal Machine Learning in Medical Imaging.
AD is the most common form of dementia and typically affects individuals over 65 years old. As there is no cure for AD, accurate diagnosis of the disease, and especially of its prodromal status (i.e., a multistatus dementia diagnosis problem), is highly desirable in clinical practice.
In the search for an accurate biomarker for AD, data from different modalities have been collected and investigated. Among these, neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) provide anatomical and functional information about the brain, respectively.
Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient’s AD risk factors. Used in conjunction, they may improve AD diagnosis. However, as the researchers note, these data are heterogeneous (e.g., they have different data distributions) and have different numbers of samples (e.g., far fewer PET samples are available than MRI or SNP samples). Learning an effective model from such data is therefore challenging.
In this study, the researchers proposed a novel three-stage deep feature learning and fusion framework – a machine learning approach – to address these challenges. "Each stage of the network learns feature representations for different combinations of modalities, via effective training using the maximum number of available samples," they explained. The three stages of their model are described below.
Stage 1: The aim here is to learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity between modalities can be better handled before they are combined in the next stage. For effective training, the maximum number of available samples for each modality is used.
Stage 2: Joint latent features are learned for each pair of modalities, using the high-level features from the first stage. Fusing complementary information from different modalities is intended to further improve the model's performance.
Stage 3: After the networks in Stage 2 are trained, joint representations between any pair of modalities can be obtained and used to produce a final prediction.
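The three-stage data flow can be sketched in Python. The feature dimensions, sample counts, and subject-overlap pattern below are illustrative assumptions (the study does not report them here), and the random weights stand in for networks that, in the actual framework, are trained with label supervision at each stage:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # One dense layer with tanh non-linearity, standing in for a trained encoder.
    return np.tanh(x @ w)

# Hypothetical sample counts: PET has far fewer subjects than MRI or SNP,
# so each stage trains on the largest sample set available to it.
n_mri, n_pet, n_snp = 120, 60, 120
X_mri = rng.normal(size=(n_mri, 90))   # e.g. region-based MRI features
X_pet = rng.normal(size=(n_pet, 90))   # PET features (fewer subjects)
X_snp = rng.normal(size=(n_snp, 50))   # SNP features

# Stage 1: an independent encoder per modality, each using all of its samples.
W1 = {m: rng.normal(size=(d, 16)) * 0.1
      for m, d in [("mri", 90), ("pet", 90), ("snp", 50)]}
H_mri = encode(X_mri, W1["mri"])
H_pet = encode(X_pet, W1["pet"])
H_snp = encode(X_snp, W1["snp"])

# Stage 2: a joint encoder per modality pair, using only subjects observed in
# both modalities (assumed here to be the first n_pet subjects where PET is
# involved).
W2 = {pair: rng.normal(size=(32, 8)) * 0.1
      for pair in ("mri+pet", "mri+snp", "pet+snp")}
J_mri_pet = encode(np.hstack([H_mri[:n_pet], H_pet]), W2["mri+pet"])
J_mri_snp = encode(np.hstack([H_mri, H_snp]), W2["mri+snp"])
J_pet_snp = encode(np.hstack([H_pet, H_snp[:n_pet]]), W2["pet+snp"])

# Stage 3: fuse the pairwise joint features of the fully observed subjects and
# map them to a diagnosis score (binary here for simplicity).
Z = np.hstack([J_mri_pet, J_mri_snp[:n_pet], J_pet_snp])
w3 = rng.normal(size=(Z.shape[1],)) * 0.1
scores = 1.0 / (1.0 + np.exp(-(Z @ w3)))   # per-subject probability-like score

print(Z.shape, scores.shape)  # (60, 24) (60,)
```

The key design point the sketch illustrates is that the small PET cohort only constrains the stages that actually involve PET; the MRI and SNP encoders and their joint network still train on the full 120 subjects.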
The proposed framework was compared with three popular dimensionality reduction methods: Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), and Lasso. To fuse the three modalities for these comparison methods, the researchers concatenated the feature vectors of the multimodality data into a single long vector.
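For the PCA baseline, the concatenate-then-reduce scheme looks roughly as follows. This is a minimal sketch with invented feature dimensions and subject counts, using a plain SVD-based PCA rather than whatever implementation the study used:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # toy number of subjects with all three modalities observed

# Concatenate each subject's MRI, PET and SNP feature vectors into one long vector.
X = np.hstack([
    rng.normal(size=(n, 90)),   # MRI features
    rng.normal(size=(n, 90)),   # PET features
    rng.normal(size=(n, 50)),   # SNP features
])

# PCA via SVD on the centred concatenation: keep the top 10 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:10].T

print(X.shape, "->", X_reduced.shape)  # (60, 230) -> (60, 10)
```

Note that this baseline can only use subjects for whom all three modalities are available, which is one reason the stage-wise framework, trained per modality on larger sample sets, can outperform it.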
"Compared with another deep feature learning strategy, i.e., SAE [stacked auto-encoder], its learned high-level features did not perform as well as ours, probably due to the fact that SAE is an unsupervised feature learning method that does not consider label information," the researchers say. "In addition, the good performance of our proposed framework could also be due to the stage-wise feature learning strategy, which uses the maximum number of available samples for training."
Source: Machine Learning in Medical Imaging
Image Credit: Pixabay
Published on : Tue, 6 Feb 2018