Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models (2207.03335v1)

Published 4 Jul 2022 in cs.CV and cs.LG

Abstract: While fine-tuning pre-trained networks has become a popular way to train image segmentation models, such backbone networks for image segmentation are frequently pre-trained using image classification source datasets, e.g., ImageNet. Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone+segmentation modules) in an end-to-end manner. The segmentation modules are left to random initialization in the fine-tuning process due to the lack of segmentation labels in classification datasets. In our work, we propose a method that leverages Pseudo Semantic Segmentation Labels (PSSL), to enable the end-to-end pre-training for image segmentation models based on classification datasets. PSSL was inspired by the observation that the explanation results of classification models, obtained through explanation algorithms such as CAM, SmoothGrad and LIME, would be close to the pixel clusters of visual objects. Specifically, PSSL is obtained for each image by interpreting the classification results and aggregating an ensemble of explanations queried from multiple classifiers to lower the bias caused by single models. With PSSL for every image of ImageNet, the proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse. Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models, i.e., PSPNet-ResNet50, DeepLabV3-ResNet50, and OCRNet-HRNetW18, on a number of segmentation tasks, such as CamVid, VOC-A, VOC-C, ADE20K, and CityScapes, with significant improvements. The source code is available at https://github.com/PaddlePaddle/PaddleSeg.

Authors (7)
  1. Xuhong Li (40 papers)
  2. Haoyi Xiong (98 papers)
  3. Yi Liu (543 papers)
  4. Dingfu Zhou (24 papers)
  5. Zeyu Chen (48 papers)
  6. Yaqing Wang (59 papers)
  7. Dejing Dou (112 papers)
Citations (6)

Summary

  • The paper introduces a novel pre-training method that leverages ensemble explanations to create pseudo semantic labels for segmentation models.
  • It employs multiple explanation algorithms such as CAM, SmoothGrad, and LIME to aggregate pixel-level insights bridging classification and segmentation tasks.
  • Empirical results show significant improvements in mean Intersection over Union (mIoU) across challenging datasets including CamVid, VOC, ADE20K, and CityScapes.

Distilling Ensemble of Explanations: Enhancing Image Segmentation Pre-Training

The paper "Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models" addresses the inefficiencies in the current methodologies for pre-training image segmentation models. Conventionally, segmentation models leverage backbones pre-trained on image classification datasets such as ImageNet. However, this approach leaves segmentation modules randomly initialized due to a lack of segmentation labels in classification datasets, resulting in suboptimal segmentation model performance.

Pseudo Semantic Segmentation Labels (PSSL)

The authors propose an approach using Pseudo Semantic Segmentation Labels (PSSL) to enable end-to-end pre-training of segmentation models on classification datasets. The method applies explanation algorithms such as CAM, SmoothGrad, and LIME to identify pixel clusters corresponding to visual objects within images. By aggregating explanation results from multiple classifiers, the resulting pseudo labels reduce the bias introduced by any single model.
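To make the aggregation concrete, the sketch below shows one way explanation maps from several classifiers could be ensembled into a binary pseudo label. This is a minimal illustration, not the paper's implementation (which is released in the PaddleSeg repository); the `explainers` callables are hypothetical wrappers around CAM, SmoothGrad, or LIME that each return a per-pixel saliency map in [0, 1].

```python
import numpy as np

def build_pssl(image, explainers, threshold=0.5):
    """Aggregate per-pixel explanation maps from several classifiers
    into a pseudo segmentation label (object vs. background).

    explainers: hypothetical callables, each mapping an image to a
    saliency map of shape (H, W) with values in [0, 1].
    """
    # Query every classifier/explainer and stack the resulting maps.
    maps = np.stack([explain(image) for explain in explainers], axis=0)

    # Average across the ensemble: pixels highlighted consistently
    # keep high confidence; single-model artifacts are damped.
    confidence = maps.mean(axis=0)  # shape (H, W)

    # Binarize into a pseudo label: 1 = the image-level object class,
    # 0 = background.
    pseudo_label = (confidence >= threshold).astype(np.int64)
    return pseudo_label, confidence
```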

Methodology

The methodology generates explanations for each image using multiple classifiers, then ensembles these explanations to form the PSSL. The segmentation model is pre-trained on ImageNet augmented with PSSL using a weighted segmentation learning procedure. This bridges the gap between image classification and semantic segmentation, initializing both the backbone and the segmentation module jointly in an end-to-end manner.
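One plausible reading of the weighted learning step, sketched in PyTorch for illustration (the released code is in PaddlePaddle): per-pixel cross-entropy against the pseudo labels, scaled by a confidence weight such as the ensemble agreement computed above. The function name and the exact weighting scheme are assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, pseudo_labels, pixel_weights):
    """Pixel-weighted cross-entropy for pre-training on pseudo labels.

    logits:        (N, C, H, W) raw segmentation outputs
    pseudo_labels: (N, H, W)    integer pseudo labels (dtype long)
    pixel_weights: (N, H, W)    confidence of each pseudo-labeled pixel,
                                e.g. the ensemble agreement from above
    """
    # Per-pixel cross-entropy; no reduction so weights can be applied.
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")

    # Down-weight pixels where the explanation ensemble disagreed.
    weighted = ce * pixel_weights

    # Normalize by total weight to keep the loss scale stable across
    # images with different amounts of confident foreground.
    return weighted.sum() / pixel_weights.sum().clamp_min(1e-8)
```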

Experimental Findings

The experiments demonstrate that the PSSL-based pre-training strategy significantly improves performance across various segmentation benchmarks, including CamVid, VOC-A, VOC-C, ADE20K, and CityScapes, for models such as PSPNet-ResNet50, DeepLabV3-ResNet50, and OCRNet-HRNetW18. The reported gains in mean Intersection over Union (mIoU) over conventional classification-only pre-training are most pronounced on smaller datasets such as CamVid.
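For reference, mIoU is the per-class intersection-over-union averaged over classes. A minimal NumPy version is below; evaluation suites typically accumulate confusion matrices over the whole dataset, so this per-image form is illustrative only.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union between two integer label maps.

    Classes absent from both prediction and target are skipped so
    they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```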

Implications and Future Work

The proposed approach is compelling due to its reduced reliance on pixel-wise annotated datasets for pre-training. This offers a scalable solution transferable across diverse segmentation tasks. Furthermore, the PSSL dataset and pre-trained models are made publicly available, enabling other researchers to build upon this work.

Future research could examine integrating more sophisticated weak-supervision techniques or further refining pseudo-label accuracy. Additionally, investigating the scalability and impact of varying degrees of random initialization within segmentation modules remains an intriguing avenue of exploration.

In summary, the paper presents a robust framework for pre-training image segmentation models, offering a significant step towards efficient and effective semantic segmentation in computer vision.