Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation (2403.06462v2)

Published 11 Mar 2024 in cs.CV

Abstract: Semi-supervised semantic segmentation allows a model to mine effective supervision from unlabeled data to complement label-guided training. Recent research has primarily focused on consistency regularization techniques, exploring perturbation-invariant training at both the image and feature levels. In this work, we propose a novel feature-level consistency learning framework named Density-Descending Feature Perturbation (DDFP). Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore, namely regions with lower density. We propose to shift features with confident predictions towards lower-density regions by injecting perturbations. The perturbed features are then supervised by the predictions on the original features, compelling the classifier to explore less dense regions and thereby effectively regularizing the decision boundary. Central to our method is the estimation of feature density. To this end, we introduce a lightweight density estimator based on normalizing flows, allowing the feature density distribution to be captured efficiently in an online manner. By extracting gradients from the density estimator, we can determine the direction towards less dense regions for each feature. The proposed DDFP outperforms other designs of feature-level perturbation and achieves state-of-the-art performance on both the Pascal VOC and Cityscapes datasets under various partition protocols. The project is available at https://github.com/Gavinwxy/DDFP.
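
The following is a minimal, hypothetical PyTorch sketch of the perturbation idea described in the abstract, not the authors' released implementation: a single RealNVP-style affine-coupling layer stands in for the lightweight normalizing-flow density estimator, the gradient of its log-density with respect to each feature gives the density-descending direction, and the perturbed features are supervised by the predictions made on the original features. All class and function names are illustrative.

```python
# Minimal sketch of density-descending feature perturbation (hypothetical
# names; not the code released at https://github.com/Gavinwxy/DDFP).

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer modelling log p(x) of feature vectors."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x[:, :self.half], x[:, self.half:]
        scale, shift = self.net(x1).chunk(2, dim=1)
        scale = torch.tanh(scale)                        # keep the transform stable
        z = torch.cat([x1, x2 * torch.exp(scale) + shift], dim=1)
        log_det = scale.sum(dim=1)
        base = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=1)  # standard-normal base density
        return base + log_det                            # change-of-variables formula


def density_descending_perturbation(feats, flow, step_size=0.5):
    """Shift features along the direction of decreasing estimated density."""
    feats = feats.detach().requires_grad_(True)
    grad = torch.autograd.grad(flow.log_prob(feats).sum(), feats)[0]  # d log p / d feature
    direction = -F.normalize(grad, dim=1)                # descend the density
    return (feats + step_size * direction).detach()


# Toy usage: N per-pixel features of dimension C and a linear segmentation head.
if __name__ == "__main__":
    N, C, num_classes = 32, 16, 21
    feats = torch.randn(N, C)
    flow, head = AffineCoupling(C), nn.Linear(C, num_classes)

    with torch.no_grad():
        targets = head(feats).argmax(dim=1)              # predictions on the original features

    perturbed = density_descending_perturbation(feats, flow)
    loss = F.cross_entropy(head(perturbed), targets)     # consistency loss on perturbed features
    print(f"consistency loss: {loss.item():.4f}")
```

In the paper, the flow is trained online alongside the segmentation network and the perturbation is applied only to confidently predicted features; this sketch omits both for brevity.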
