Half-Real Half-Fake Distillation for Class-Incremental Semantic Segmentation (2104.00875v1)

Published 2 Apr 2021 in cs.CV

Abstract: Despite their success for semantic segmentation, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original segmentation model as new classes become available but the initial training data is not retained. In fact, they are vulnerable to the catastrophic forgetting problem. We try to address this issue by "inverting" the trained segmentation network to synthesize input images starting from random noise. To avoid manually specifying detailed pixel-wise segmentation maps as supervision, we propose SegInversion to synthesize images using only image-level labels. To increase the diversity of synthetic images, a Scale-Aware Aggregation module is integrated into SegInversion to control the scale (the number of pixels) of synthesized objects. Along with real images of new classes, the synthesized images are fed into a distillation-based framework to train the new segmentation model, which retains the information about previously learned classes while being updated to learn the new ones. The proposed method significantly outperforms other incremental learning methods and obtains state-of-the-art performance on the PASCAL VOC 2012 and ADE20K datasets. The code and models will be made publicly available.
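To make the two core ideas in the abstract concrete, below is a minimal PyTorch-style sketch of (1) inverting a frozen segmentation network to synthesize an image from random noise under an image-level label, and (2) a training step that combines a supervised loss on real new-class images with a distillation loss on synthesized images. All function names, hyperparameters, and the global-average-pooling aggregation are illustrative assumptions; the paper's actual SegInversion objective and its Scale-Aware Aggregation module are not reproduced here.

```python
import torch
import torch.nn.functional as F

def invert_image(old_model, image_label, steps=200, lr=0.1, shape=(1, 3, 256, 256)):
    """Optimize random noise so the frozen old model predicts `image_label`
    at the image level (no pixel-wise segmentation map is required)."""
    old_model.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = old_model(x)                   # (1, C, H, W) per-pixel logits
        # Aggregate pixel logits to an image-level score. Plain global average
        # pooling is used here; the paper instead controls the scale of the
        # synthesized object with its Scale-Aware Aggregation module.
        image_logits = logits.mean(dim=(2, 3))  # (1, C)
        loss = F.cross_entropy(image_logits, torch.tensor([image_label]))
        loss.backward()
        opt.step()
    return x.detach()

def distillation_step(new_model, old_model, real_x, real_y, synth_x, T=2.0):
    """One training step: supervised loss on real images of new classes plus
    a KL distillation loss keeping the new model close to the old one on
    images synthesized for previously learned classes."""
    sup_loss = F.cross_entropy(new_model(real_x), real_y)
    with torch.no_grad():
        old_logits = old_model(synth_x)
    new_logits = new_model(synth_x)
    kd_loss = F.kl_div(
        F.log_softmax(new_logits / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return sup_loss + kd_loss
```

The temperature-scaled KL term is the standard knowledge-distillation formulation; the paper's framework may weight or structure these losses differently.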

Authors (7)
  1. Zilong Huang (42 papers)
  2. Wentian Hao (1 paper)
  3. Xinggang Wang (163 papers)
  4. Mingyuan Tao (13 papers)
  5. Jianqiang Huang (62 papers)
  6. Wenyu Liu (146 papers)
  7. Xian-Sheng Hua (85 papers)
Citations (12)
