
Continual Segmentation with Disentangled Objectness Learning and Class Recognition (2403.03477v3)

Published 6 Mar 2024 in cs.CV

Abstract: Most continual segmentation methods treat the problem as a per-pixel classification task. This paradigm is challenging, however, and we find that query-based segmenters with built-in objectness have inherent advantages over per-pixel ones, as objectness transfers well and resists forgetting. Based on these findings, we propose CoMasTRe, which disentangles continual segmentation into two stages: forgetting-resistant continual objectness learning and well-studied continual classification. CoMasTRe uses a two-stage segmenter that learns class-agnostic mask proposals in the first stage and leaves recognition to the second. During continual learning, a simple but effective distillation is adopted to strengthen objectness. To further mitigate forgetting of old classes, we design a multi-label class distillation strategy suited to segmentation. We assess the effectiveness of CoMasTRe on PASCAL VOC and ADE20K, and extensive experiments show that our method outperforms per-pixel and query-based baselines on both datasets. Code will be available at https://github.com/jordangong/CoMasTRe.
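The paper's implementation is not reproduced here, but the decomposition the abstract describes can be sketched in a few lines of PyTorch. In the sketch below, `TwoStageSegmenter`, `objectness_distillation`, and `multi_label_class_distillation` are hypothetical names, and every detail (query count, embedding size, the exact form of each distillation term) is an illustrative assumption rather than CoMasTRe's actual design: stage one produces class-agnostic masks and objectness scores, stage two classifies each query, and two distillation losses discourage forgetting of the old model's objectness and class predictions.

```python
# A minimal sketch of the two-stage idea from the abstract, assuming a
# generic query-based segmenter. All module and loss names below are
# illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStageSegmenter(nn.Module):
    def __init__(self, num_queries=100, embed_dim=256, num_classes=20):
        super().__init__()
        # Stage 1: learnable queries attend to image features and emit
        # class-agnostic mask embeddings plus an objectness score.
        self.queries = nn.Embedding(num_queries, embed_dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(embed_dim, nhead=8, batch_first=True),
            num_layers=6,
        )
        self.mask_embed = nn.Linear(embed_dim, embed_dim)
        self.objectness = nn.Linear(embed_dim, 1)
        # Stage 2: class recognition on top of the objectness
        # representations; extended as new classes arrive.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, pixel_features):
        # pixel_features: (B, H*W, D) from any backbone + pixel decoder.
        b = pixel_features.size(0)
        q = self.decoder(
            self.queries.weight.unsqueeze(0).expand(b, -1, -1),
            pixel_features,
        )
        mask_emb = self.mask_embed(q)                        # (B, Q, D)
        masks = torch.einsum("bqd,bnd->bqn", mask_emb, pixel_features)
        return {
            "masks": masks,                                  # class-agnostic
            "objectness": self.objectness(q).squeeze(-1),    # (B, Q)
            "logits": self.classifier(q),                    # (B, Q, C)
            "query_emb": q,
        }


def objectness_distillation(new_out, old_out):
    # Keep the new model's proposals close to the old model's: L2 on
    # query embeddings plus a KD term on objectness scores.
    emb_loss = F.mse_loss(new_out["query_emb"], old_out["query_emb"].detach())
    obj_loss = F.binary_cross_entropy_with_logits(
        new_out["objectness"], torch.sigmoid(old_out["objectness"]).detach())
    return emb_loss + obj_loss


def multi_label_class_distillation(new_logits, old_logits):
    # Per-query, per-class sigmoid distillation over old classes only.
    num_old = old_logits.size(-1)
    return F.binary_cross_entropy_with_logits(
        new_logits[..., :num_old], torch.sigmoid(old_logits).detach())
```

The multi-label term uses an independent per-class sigmoid rather than a softmax because several old classes can legitimately co-occur in one image, which is one plausible reading of the abstract's remark that the strategy is suited to segmentation.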

