Artificial intelligence for brain MRI often depends on large labelled datasets and narrowly defined tasks, which can limit adaptability across clinical settings. BrainIAC was developed as a self-supervised foundation model designed to learn reusable imaging features from unlabelled scans. Evaluated across seven downstream tasks using 48,965 MRI scans, it was compared with supervised training from random initialisation and two pretrained alternatives. Performance was assessed across different levels of training data availability, few-shot settings and robustness experiments.

 

Foundation Model Training from Brain MRI Data

BrainIAC was pretrained using 32,015 multiparametric brain MRI scans drawn from 16 datasets representing 10 medical conditions. A contrastive self-supervised learning framework based on SimCLR was used to train a vision encoder on three-dimensional MRI volumes. Training involved extracting randomly cropped image patches from MRI scans and applying intensity augmentations to create alternative views of the same anatomical region. Augmented views from the same patch were treated as related examples, while unrelated patches were treated as contrasting examples, enabling representation learning without manual annotation.
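The contrastive objective used by SimCLR-style frameworks is typically the normalized-temperature cross-entropy (NT-Xent) loss: embeddings of two augmented views of the same patch are pulled together, while all other embeddings in the batch are pushed apart. A minimal NumPy sketch (batch size, embedding dimension and temperature are illustrative, not the paper's settings):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    patch (positives); every other embedding in the batch acts as a
    negative example.
    """
    n = len(z1)
    z = np.concatenate([z1, z2], axis=0).astype(float)      # (2n, d)
    z /= np.linalg.norm(z, axis=1, keepdims=True)           # unit norm -> cosine sim
    sim = z @ z.T / temperature                             # (2n, 2n) similarity logits
    np.fill_diagonal(sim, -np.inf)                          # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    m = sim.max(axis=1, keepdims=True)                      # stable log-softmax
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimising this loss drives representations of related views together without any labels, which is what enables pretraining on heterogeneous unlabelled scans.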

 

Several candidate backbone architectures were evaluated through few-shot adaptation across downstream tasks. A SimCLR Vision Transformer base model demonstrated consistent performance and was selected as the encoder architecture. Downstream evaluation involved end-to-end fine-tuning across training data fractions ranging from limited subsets to full datasets, with independent test sets used for performance measurement. Few-shot experiments included one-sample-per-class and five-samples-per-class settings. Linear probing experiments examined whether frozen encoder representations retained task-relevant information when only task-specific prediction layers were trained.
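Linear probing as described here keeps the pretrained encoder frozen and trains only a lightweight prediction head on its features. A minimal sketch, assuming a hypothetical `encoder` callable and using a closed-form ridge-regression head as a stand-in for the task-specific prediction layer:

```python
import numpy as np

def linear_probe(encoder, X_train, y_train, X_test, ridge=1e-3):
    """Fit a linear head on frozen encoder features (linear probing).

    The encoder is never updated; only the closed-form ridge classifier
    on top of its fixed features is trained.
    """
    F_tr, F_te = encoder(X_train), encoder(X_test)   # frozen features
    Y = np.eye(y_train.max() + 1)[y_train]           # one-hot labels
    # Ridge head: W = (F^T F + lambda I)^-1 F^T Y
    A = F_tr.T @ F_tr + ridge * np.eye(F_tr.shape[1])
    W = np.linalg.solve(A, F_tr.T @ Y)
    return (F_te @ W).argmax(axis=1)                 # predicted classes
```

If this probe performs well, the frozen representations already encode task-relevant information, which is the question the linear probing experiments address.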

 

The evaluation suite included MRI sequence classification, mutation classification, survival prediction, time-to-stroke estimation, cognitive impairment classification, brain age prediction and tumour segmentation. These tasks were selected to reflect a range of imaging objectives and levels of difficulty, including applications where labelled data are limited or difficult to obtain.

 

Performance Across Multiple Imaging Tasks

Across the seven tasks, BrainIAC generally showed stronger performance than supervised training from scratch and the two pretrained comparator models, particularly when labelled training data were limited. In MRI sequence classification involving four sequences commonly used in brain tumour imaging, performance improved as training data increased. BrainIAC achieved higher balanced accuracy at lower training data fractions, with performance differences decreasing when larger datasets were available.

 


 

In brain age prediction using T1-weighted MRI scans, prediction error decreased as training data increased across internal and external test sets. BrainIAC achieved lower prediction error than comparator models at moderate training data availability and maintained this advantage as dataset size increased. Latent feature visualisation showed clustering patterns aligned with age groupings.

 

For mutation prediction in low-grade glioma, BrainIAC maintained higher classification performance across training data fractions. In survival prediction for glioblastoma, model performance improved with increasing training data across evaluation cohorts, and model outputs separated patients into higher-risk and lower-risk groups. In cognitive impairment classification, BrainIAC achieved higher performance across data availability levels, with differences more visible when training data were limited.
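Survival models of this kind are commonly summarised with Harrell's concordance index, which measures how often a higher predicted risk corresponds to an earlier observed event; the paper's exact metric is not stated here, so this is a generic sketch:

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C-index: fraction of comparable patient pairs in which
    the patient with the higher predicted risk has the earlier event.
    1.0 = perfect ranking, 0.5 = random, 0.0 = perfectly inverted."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                    # pairs must be anchored at an observed event
        for j in range(n):
            if times[i] < times[j]:     # comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5          # ties count half
    return num / den
```

Splitting patients at the median of such risk scores is one standard way to obtain the higher-risk and lower-risk groups mentioned above.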

 

In time-to-stroke prediction from MRI, BrainIAC achieved lower prediction error across data splits. In tumour segmentation using FLAIR MRI, BrainIAC achieved higher segmentation accuracy across training data fractions, including low-data settings. Few-shot experiments using one or five labelled samples per class showed BrainIAC matching or exceeding comparator performance across tasks. Linear probing experiments indicated that frozen encoder representations retained useful task information across applications.

 

Robustness and Interpretation Analyses

Robustness testing examined model performance under simulated imaging variability intended to reflect clinical acquisition differences. Perturbations included contrast changes, Gibbs ringing artefacts and bias field distortions. Under these conditions, BrainIAC maintained more stable performance across tasks than comparator models, with degradation in alternative approaches particularly visible in mutation prediction, survival prediction and time-to-stroke estimation.
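Perturbations such as the contrast changes and bias field distortions mentioned above can be approximated with simple intensity transforms. A rough, dependency-free stand-in (robustness suites in practice typically use dedicated MRI augmentation libraries with more realistic artefact models):

```python
import numpy as np

def gamma_contrast(vol, gamma):
    """Nonlinear contrast change: rescale intensities to [0, 1],
    then apply a gamma curve (gamma < 1 brightens, > 1 darkens)."""
    lo, hi = vol.min(), vol.max()
    v = (vol - lo) / (hi - lo + 1e-8)
    return v ** gamma

def bias_field(vol, strength=0.3):
    """Multiplicative low-frequency bias field along one axis: a crude
    stand-in for the smooth intensity inhomogeneity seen in MRI."""
    z = np.linspace(-1, 1, vol.shape[0]).reshape(-1, 1, 1)
    field = 1.0 + strength * (z ** 2 - 0.5)   # smooth quadratic profile
    return vol * field
```

Evaluating a model on such perturbed copies of the test volumes, and comparing against its clean-data performance, gives the kind of stability readout the robustness experiments report.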

 

Model interpretation analyses used saliency visualisation methods to examine attention patterns. Pretraining saliency maps showed attention across MRI sequences for individual subjects. After task-specific fine-tuning, attention patterns aligned with anatomically plausible regions depending on the task. Cognitive impairment classification showed attention in hippocampal regions, while brain age prediction highlighted periventricular white matter areas. Tumour mutation and survival prediction tasks showed attention focused on tumour regions.
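Saliency visualisation of the kind described attributes the model's output to individual input voxels via the gradient of the prediction with respect to the input. A dependency-free sketch using finite differences (autodiff frameworks compute the same gradient in a single backward pass), with a toy `model` callable standing in for the fine-tuned network:

```python
import numpy as np

def saliency_map(model, x, eps=1e-4):
    """Finite-difference input saliency: |d model(x) / d x_i| per element.

    High values mark inputs whose perturbation most changes the model
    output, i.e. the regions the model attends to for its prediction.
    """
    base = model(x)
    sal = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        xp = x.copy()
        xp[idx] += eps
        sal[idx] = abs(model(xp) - base) / eps
    return sal
```

On a real network the resulting map is overlaid on the anatomy, which is how hippocampal attention in cognitive impairment classification or tumour-focused attention in mutation prediction becomes visible.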

 

Several limitations were identified. The model was restricted to structural MRI sequences, including T1-weighted, T2-weighted, FLAIR and contrast-enhanced T1-weighted imaging. Diffusion imaging, functional imaging and other modalities were not included. BrainIAC was designed as a single-sequence model to maintain compatibility across heterogeneous datasets. Training relied on skull-stripped MRI data, limiting application to intracranial analysis. Image registration was treated as a preprocessing step rather than a modelling task. Additional improvements were considered possible through larger training datasets, alternative architectures, new training strategies and multimodal integration with clinical and molecular data.

 

BrainIAC represents a foundation model approach for brain MRI that uses self-supervised learning to extract reusable imaging representations from unlabelled scans. Evaluation across seven imaging tasks showed improved performance compared with supervised training from scratch and two pretrained alternatives, with advantages most visible when labelled data were limited. Robustness experiments indicated greater stability under simulated acquisition variability, and interpretation analyses showed anatomically relevant attention patterns across tasks. The model was limited to structural MRI and skull-stripped inputs, with future work focused on expanded datasets, additional imaging modalities and multimodal integration.

 

Source: Nature Neuroscience

Image Credit: iStock 


References:

Tak D, Garomsa BA, Zapaishchykova A et al. (2026) A generalizable foundation model for analysis of human brain MRI. Nature Neuroscience: In Press.



