Inherit with Distillation and Evolve with Contrast: Exploring Class Incremental Semantic Segmentation Without Exemplar Memory (2309.15413v1)

Published 27 Sep 2023 in cs.CV

Abstract: As a front-burner problem in incremental learning, class incremental semantic segmentation (CISS) is plagued by catastrophic forgetting and semantic drift. Although recent methods have utilized knowledge distillation to transfer knowledge from the old model, they are still unable to avoid pixel confusion, which results in severe misclassification after incremental steps due to the lack of annotations for past and future classes. Meanwhile, data-replay-based approaches suffer from storage burdens and privacy concerns. In this paper, we propose to address CISS without exemplar memory and to resolve catastrophic forgetting and semantic drift synchronously. We present Inherit with Distillation and Evolve with Contrast (IDEC), which consists of a Dense knowledge distillation on All aspects (DADA) scheme and an Asymmetric Region-wise Contrastive Learning (ARCL) module. Driven by a dynamic class-specific pseudo-labelling strategy, DADA distils intermediate-layer features and output logits collaboratively, with emphasis on inheriting semantic-invariant knowledge. ARCL implements region-wise contrastive learning in the latent space to resolve semantic drift among known, current, and unknown classes. We demonstrate the effectiveness of our method on multiple CISS benchmarks, achieving state-of-the-art performance on Pascal VOC 2012, ADE20K and ISPRS. Our method also shows superior anti-forgetting ability, particularly in multi-step CISS tasks.
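The abstract gives no equations, but the two ingredients it names have standard forms: feature-and-logit knowledge distillation from the frozen old model, and an InfoNCE-style contrastive loss over region prototypes. The sketch below is a rough illustration of those generic losses in PyTorch, not the paper's exact DADA/ARCL formulation; all function names, the temperature values, and the equal weighting of the terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def distillation_loss(old_feats, new_feats, old_logits, new_logits, T=2.0):
    """Generic dense distillation: align intermediate features (MSE) and
    temperature-softened output logits (KL) with the frozen old model.
    Illustrative only -- not the paper's exact DADA loss."""
    feat_loss = sum(F.mse_loss(n, o) for n, o in zip(new_feats, old_feats))
    logit_loss = F.kl_div(
        F.log_softmax(new_logits / T, dim=1),   # per-pixel class dimension
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return feat_loss + logit_loss


def region_contrastive_loss(embeddings, labels, tau=0.1):
    """InfoNCE over region prototypes: pull same-class regions together,
    push different-class regions apart. embeddings: (N, D); labels: (N,).
    A generic stand-in for the paper's asymmetric region-wise loss."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                               # pairwise similarity
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # numerically stable softmax denominator, excluding self-pairs
    exp = torch.exp(sim - sim.max(dim=1, keepdim=True).values.detach())
    exp = exp.masked_fill(eye, 0.0)
    pos = (exp * pos_mask).sum(dim=1)
    denom = exp.sum(dim=1)
    valid = pos_mask.any(dim=1)                         # anchors with a positive
    return -torch.log(pos[valid] / denom[valid]).mean()
```

In a CISS training step, the two terms would typically be added to the segmentation loss on pseudo-labelled pixels, with the old model kept frozen so its features and logits act as targets only.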

Authors (3)
  1. Danpei Zhao (11 papers)
  2. Bo Yuan (151 papers)
  3. Zhenwei Shi (77 papers)
Citations (9)
