Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training (2307.16896v1)

Published 31 Jul 2023 in cs.CV

Abstract: Harnessing the power of pre-training on large-scale datasets like ImageNet forms a fundamental building block for the progress of representation-learning-driven solutions in computer vision. Medical images are inherently different from natural images, as they are acquired in many modalities (CT, MR, PET, ultrasound, etc.) and contain granular information such as tissues, lesions, and organs. These characteristics of medical images require special attention to learning features representative of local context. In this work, we focus on designing an effective pre-training framework for 3D radiology images. First, we propose a new masking strategy called local masking, where masking is performed across channel embeddings instead of tokens to improve the learning of local feature representations. We combine this with classical low-level perturbations, such as adding noise and downsampling, to further enable low-level representation learning. To this end, we introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations. Additionally, we devise a cross-modal contrastive loss (CMCL) to accommodate the pre-training of multiple modalities in a single framework. We curate a large-scale dataset to enable pre-training of 3D medical radiology images (MRI and CT). The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance. Notably, our proposed method tops the public test leaderboard of the BTCV multi-organ segmentation challenge.
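To make the key distinction concrete, the following is a minimal sketch of how local masking differs from classical token masking. It assumes token embeddings are stored as a (tokens × channels) array; the function names and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def token_masking(x, mask_ratio, rng):
    """Classical MAE-style masking: zero out whole tokens (rows),
    removing all information at the masked spatial locations."""
    n, d = x.shape
    n_mask = int(n * mask_ratio)
    idx = rng.permutation(n)[:n_mask]
    out = x.copy()
    out[idx, :] = 0.0
    return out

def local_masking(x, mask_ratio, rng):
    """Sketch of the paper's local masking idea: zero out a random
    subset of channel embeddings per token instead of whole tokens,
    so every spatial location retains partial information and the
    model is pushed toward local feature reconstruction."""
    n, d = x.shape
    n_mask = int(d * mask_ratio)
    out = x.copy()
    for i in range(n):
        idx = rng.permutation(d)[:n_mask]
        out[i, idx] = 0.0
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))   # 8 tokens, 16-dim embeddings
tm = token_masking(x, 0.5, rng)    # 4 rows become entirely zero
lm = local_masking(x, 0.5, rng)    # every row keeps 8 nonzero channels
```

Under token masking, masked locations carry no signal at all; under local masking every token still contributes a partial embedding, which is the property the abstract ties to better local-context learning.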

Authors (11)
  1. Jeya Maria Jose Valanarasu (31 papers)
  2. Yucheng Tang (67 papers)
  3. Dong Yang (163 papers)
  4. Ziyue Xu (58 papers)
  5. Can Zhao (35 papers)
  6. Wenqi Li (59 papers)
  7. Vishal M. Patel (230 papers)
  8. Bennett Landman (13 papers)
  9. Daguang Xu (91 papers)
  10. Yufan He (25 papers)
  11. Vishwesh Nath (33 papers)
Citations (10)