
Pre-training utility of PUMBA for few-shot fine-tuning

Investigate whether features learned by PUMBA (PUrely synthetic Multimodal/species invariant Brain extrAction), a skull-stripping model based on a 3D U-Net and trained entirely on synthetic data, provide a beneficial initialization for few-shot fine-tuning on unique brain MRI datasets (e.g., rodent brains), compared with training from scratch or with alternative pre-training strategies.
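A minimal sketch of the comparison this question calls for is given below: the same 3D U-Net is fine-tuned on a handful of labeled volumes, once starting from a pre-trained checkpoint and once from random initialization. The MONAI UNet stand-in, the checkpoint filename, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet


def make_model(pretrained: bool) -> torch.nn.Module:
    # Stand-in 3D U-Net; PUMBA's actual architecture may differ.
    net = UNet(spatial_dims=3, in_channels=1, out_channels=1,
               channels=(16, 32, 64, 128), strides=(2, 2, 2))
    if pretrained:
        # Hypothetical checkpoint name, not a released artifact of the paper.
        net.load_state_dict(torch.load("pumba_pretrained.pt"))
    return net


def finetune(net: torch.nn.Module, loader, epochs: int = 50, lr: float = 1e-4):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = DiceLoss(sigmoid=True)
    net.train()
    for _ in range(epochs):
        for image, mask in loader:  # e.g. 3-5 labeled rodent volumes
            opt.zero_grad()
            loss_fn(net(image), mask).backward()
            opt.step()
    return net


# Fine-tune both initializations on the same few-shot loader, then compare
# Dice on a held-out set (evaluation loop omitted for brevity):
# scratch = finetune(make_model(pretrained=False), few_shot_loader)
# warm    = finetune(make_model(pretrained=True), few_shot_loader)
```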


Background

PUMBA is introduced as a skull-stripping model trained entirely on synthetic data generated from minimal anatomical and intensity assumptions, with the aim of strong generalizability across modalities, pathologies, and species. In the discussion, the authors note attempts to use PUMBA as a pre-trained model for few-shot learning on unique brains, such as rodent brains.
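The sketch below illustrates the flavor of such a synthesis pipeline: a foreground mask is drawn from shape assumptions alone and then rendered with random intensities. The specific generative choices here (smoothed Gaussian fields, uniform intensity draws) are illustrative and are not PUMBA's actual pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def synth_pair(shape=(96, 96, 96), seed=None):
    rng = np.random.default_rng(seed)
    # Shape assumption: a smooth random field thresholded into a
    # blob-like "brain" foreground mask.
    field = gaussian_filter(rng.standard_normal(shape), sigma=12)
    mask = (field > np.quantile(field, 0.7)).astype(np.float32)
    # Intensity assumption: random foreground/background levels, mild
    # smoothing (bias-field-like), and additive noise.
    image = mask * rng.uniform(0.5, 1.0) + (1.0 - mask) * rng.uniform(0.0, 0.4)
    image = gaussian_filter(image, sigma=1.0) + 0.05 * rng.standard_normal(shape)
    return image.astype(np.float32), mask  # (input volume, stripping target)
```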

Despite these attempts, the authors have not observed evidence that PUMBA’s learned features serve as an effective starting point for training with only a few images. Establishing whether PUMBA can function as a useful pre-training source is important for enabling data-efficient adaptation to new species or imaging protocols where labeled data are scarce.
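One way to isolate feature quality from full fine-tuning dynamics is a frozen-feature probe: keep the pre-trained weights fixed and re-train only the output head on the few-shot set. If even this fails to beat a from-scratch baseline, the features themselves are unlikely to transfer. The parameter-name filter below is an assumption about the architecture, not the authors' protocol.

```python
import torch


def freeze_backbone(net: torch.nn.Module, head_keyword: str = "out") -> torch.nn.Module:
    # Train only parameters whose name contains the head keyword; the name
    # filter is a guess about the architecture, not the authors' protocol.
    for name, param in net.named_parameters():
        param.requires_grad = head_keyword in name
    return net


# The optimizer should then only see the trainable head parameters:
# opt = torch.optim.Adam((p for p in net.parameters() if p.requires_grad), lr=1e-3)
```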

References

"However, we are yet to see any evidence that the feature extracted from this model can serve as a good starting point for training with a few images."

Skull stripping with purely synthetic data (arXiv:2505.07159, Park et al., 12 May 2025), Section 5 (Discussion)