
Improving Semantic Segmentation via Self-Training (2004.14960v2)

Published 30 Apr 2020 in cs.CV

Abstract: Deep learning usually achieves the best results with complete supervision. In the case of semantic segmentation, this means that large amounts of pixelwise annotations are required to learn accurate models. In this paper, we show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm. We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data. Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performance on the Cityscapes, CamVid and KITTI datasets while requiring significantly less supervision. We also demonstrate the effectiveness of self-training on a challenging cross-domain generalization task, outperforming conventional finetuning methods by a large margin. Lastly, to alleviate the computational burden caused by the large number of pseudo labels, we propose a fast training schedule that accelerates the training of segmentation models by up to 2x without performance degradation.
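The self-training recipe in the abstract (train a teacher on labeled data, pseudo-label unlabeled data, then train a student on both) can be sketched as below. This is a minimal illustration, not the paper's implementation: the PyTorch model and in-memory batch lists are assumed, the 255 void-label convention follows Cityscapes, and the confidence-threshold filtering is a common pseudo-labeling heuristic rather than the paper's exact selection scheme.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    IGNORE = 255  # void label index (Cityscapes convention); an assumption here

    def train_epoch(model, batches, lr=1e-3):
        """One supervised epoch over (image, mask) pairs; reused for teacher and student."""
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        ce = nn.CrossEntropyLoss(ignore_index=IGNORE)  # void pixels contribute no loss
        model.train()
        for images, masks in batches:
            opt.zero_grad()
            logits = model(images)    # (N, C, H, W) per-pixel class scores
            loss = ce(logits, masks)  # masks: (N, H, W) integer class ids
            loss.backward()
            opt.step()

    @torch.no_grad()
    def pseudo_label(teacher, images, threshold=0.9):
        """Teacher predictions become hard labels; low-confidence pixels are voided."""
        teacher.eval()
        probs = F.softmax(teacher(images), dim=1)
        conf, labels = probs.max(dim=1)    # per-pixel confidence and argmax class
        labels[conf < threshold] = IGNORE  # drop uncertain pixels from the loss
        return labels

    def self_train(teacher, student, labeled, unlabeled_images, epochs=1):
        """Step 1: teacher on human labels. Step 2: pseudo-label unlabeled images.
        Step 3: student digests both label sources through the same loss."""
        for _ in range(epochs):
            train_epoch(teacher, labeled)
        pseudo = [(imgs, pseudo_label(teacher, imgs)) for imgs in unlabeled_images]
        for _ in range(epochs):
            train_epoch(student, labeled + pseudo)
        return student

Because the same cross-entropy loss with ignore_index handles human-annotated masks and (partially voided) pseudo masks alike, the student can train on the mixed set without any special-case code, which is the "joint digestion" the abstract describes.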

Authors (9)
  1. Yi Zhu (233 papers)
  2. Zhongyue Zhang (13 papers)
  3. Chongruo Wu (9 papers)
  4. Zhi Zhang (113 papers)
  5. Tong He (124 papers)
  6. Hang Zhang (164 papers)
  7. R. Manmatha (31 papers)
  8. Mu Li (95 papers)
  9. Alexander Smola (7 papers)
Citations (55)
