Modeling the Label Distributions for Weakly-Supervised Semantic Segmentation (2403.13225v1)
Abstract: Weakly-Supervised Semantic Segmentation (WSSS) aims to train segmentation models with weak labels and has received significant attention due to its low annotation cost. Existing approaches focus on generating pseudo labels for supervision while largely ignoring the inherent semantic correlation among different pseudo labels. We observe that pseudo-labeled pixels that are close to each other in the feature space are more likely to share the same class, and that pixels closer to the distribution centers tend to have higher confidence. Motivated by this, we propose to model the underlying label distributions and employ cross-label constraints to generate more accurate pseudo labels. In this paper, we develop a unified WSSS framework named Adaptive Gaussian Mixture Model (AGMM), which leverages a GMM to model the label distributions. Specifically, we calculate the feature distribution centers of pseudo-labeled pixels and build the GMM by measuring the distance between the centers and each pseudo-labeled pixel. We then introduce an Online Expectation-Maximization (OEM) algorithm and a novel maximization loss to optimize the GMM adaptively, aiming to learn more discriminative decision boundaries between different class-wise Gaussian mixtures. Based on the label distributions, we leverage the GMM to generate high-quality pseudo labels for more reliable supervision. Our framework can handle different forms of weak labels: image-level labels, points, scribbles, blocks, and bounding boxes. Extensive experiments on the PASCAL, COCO, Cityscapes, and ADE20K datasets demonstrate that our framework provides more reliable supervision and outperforms state-of-the-art methods under all settings. Code will be available at https://github.com/Luffy03/AGMM-SASS.
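The core idea, assigning each pixel a soft label distribution from its distances to class-wise feature centers, can be illustrated with a minimal PyTorch sketch. This is not the official implementation (see the linked repository): the function name `agmm_soft_labels`, the single shared variance `sigma`, and the use of one Gaussian component per class are simplifying assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def agmm_soft_labels(features, pseudo_labels, num_classes, sigma=1.0, ignore_index=255):
    """Sketch: soft label distributions from class-wise Gaussian mixtures.

    features:      (B, C, H, W) pixel embeddings from the segmentation network.
    pseudo_labels: (B, H, W) hard pseudo labels (e.g. from CAMs or scribbles),
                   with `ignore_index` marking unlabeled pixels.
    Returns:       (B, num_classes, H, W) soft label distributions.
    """
    B, C, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, C)   # (N, C), N = B*H*W
    labels = pseudo_labels.reshape(-1)                     # (N,)

    # 1) Estimate one Gaussian center per class from the pseudo-labeled pixels.
    #    Pixels marked with `ignore_index` match no class and contribute to no center.
    centers = feats.new_zeros(num_classes, C)
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            centers[k] = feats[mask].mean(dim=0)

    # 2) Distance of every pixel to every class center, turned into Gaussian
    #    responsibilities (soft assignments), analogous to an E-step.
    dists = torch.cdist(feats, centers)                    # (N, num_classes)
    log_resp = -dists.pow(2) / (2.0 * sigma ** 2)
    soft = F.softmax(log_resp, dim=1)

    return soft.reshape(B, H, W, num_classes).permute(0, 3, 1, 2).contiguous()
```

In the full framework, the centers and per-class variances would be updated online through the EM-style procedure and the maximization loss described above, and the resulting soft labels would supervise the segmentation model together with the original weak labels.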
Authors: Linshan Wu, Zhun Zhong, Jiayi Ma, Yunchao Wei, Hao Chen, Leyuan Fang, Shutao Li