Cross-modal Cognitive Consensus guided Audio-Visual Segmentation (2310.06259v5)

Published 10 Oct 2023 in eess.IV, cs.SD, and eess.AS

Abstract: Audio-Visual Segmentation (AVS) aims to extract the sounding object from a video frame as a pixel-wise segmentation mask, for application scenarios such as multi-modal video editing, augmented reality, and intelligent robot systems. The pioneering work tackles this task through dense feature-level audio-visual interaction, which ignores the dimension gap between the modalities. More specifically, the audio clip can only provide a global semantic label for each sequence, whereas the video frame covers multiple semantic objects across different local regions, which leads to mislocalization of objects that are representationally similar but semantically different. In this paper, we propose a Cross-modal Cognitive Consensus guided Network (C3N) that aligns audio-visual semantics at the global dimension and progressively injects them into local regions via an attention mechanism. First, a Cross-modal Cognitive Consensus Inference Module (C3IM) extracts a unified-modal label by integrating audio/visual classification confidences with similarities of modality-agnostic label embeddings. Then, a Cognitive Consensus guided Attention Module (CCAM) feeds the unified-modal label back into the visual backbone as explicit semantic-level guidance, highlighting the local features corresponding to the object of interest. Extensive experiments on the Single Sound Source Segmentation (S4) and Multiple Sound Source Segmentation (MS3) settings of the AVSBench dataset demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance. Code is available at https://github.com/ZhaofengSHI/AVS-C3N.
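The abstract describes two modules: C3IM, which infers a unified-modal (consensus) label from audio and visual classification confidences together with label-embedding similarities, and CCAM, which injects that label's embedding back into the visual features as attention. The PyTorch snippet below is a minimal sketch of that idea, assuming per-modality classification logits and learnable modality-agnostic label embeddings; the class names (`C3IMSketch`, `CCAMSketch`) and the specific fusion choices (confidence-weighted embedding similarity, channel-wise gating) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the consensus-inference and consensus-guided-attention ideas
# from the abstract. Shapes, names, and fusion choices are assumptions; see the
# authors' repository (https://github.com/ZhaofengSHI/AVS-C3N) for the real code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class C3IMSketch(nn.Module):
    """Cross-modal cognitive consensus inference (illustrative).

    Scores each candidate label by combining audio and visual classification
    confidences, weighted by how similar the corresponding label embeddings are.
    """

    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        # Modality-agnostic label embeddings (could be initialized from word vectors).
        self.label_embed = nn.Embedding(num_classes, embed_dim)

    def forward(self, audio_logits: torch.Tensor, visual_logits: torch.Tensor):
        p_a = audio_logits.softmax(dim=-1)   # (B, C) audio confidences
        p_v = visual_logits.softmax(dim=-1)  # (B, C) visual confidences

        # Pairwise similarity between label embeddings: (C, C)
        e = F.normalize(self.label_embed.weight, dim=-1)
        sim = e @ e.t()

        # Consensus score: audio confidence for class i times visual confidence
        # for class j, rewarded when labels i and j are semantically close.
        consensus = p_a.unsqueeze(2) * p_v.unsqueeze(1) * sim  # (B, C, C)
        scores = consensus.sum(dim=2)                          # (B, C)

        unified_label = scores.argmax(dim=-1)                  # (B,)
        return unified_label, self.label_embed(unified_label)  # label + its embedding


class CCAMSketch(nn.Module):
    """Cognitive consensus guided attention (illustrative).

    Turns the unified-modal label embedding into channel-wise gates over the
    visual feature map, emphasizing features of the object of interest.
    """

    def __init__(self, embed_dim: int, feat_channels: int):
        super().__init__()
        self.to_channel_gate = nn.Sequential(
            nn.Linear(embed_dim, feat_channels),
            nn.Sigmoid(),
        )

    def forward(self, visual_feat: torch.Tensor, label_embed: torch.Tensor):
        # visual_feat: (B, C, H, W); label_embed: (B, D)
        gate = self.to_channel_gate(label_embed)               # (B, C)
        return visual_feat * gate.unsqueeze(-1).unsqueeze(-1)  # re-weighted features


if __name__ == "__main__":
    B, num_classes, embed_dim, C, H, W = 2, 10, 64, 256, 28, 28
    c3im, ccam = C3IMSketch(num_classes, embed_dim), CCAMSketch(embed_dim, C)
    label, emb = c3im(torch.randn(B, num_classes), torch.randn(B, num_classes))
    guided = ccam(torch.randn(B, C, H, W), emb)
    print(label.shape, guided.shape)  # torch.Size([2]) torch.Size([2, 256, 28, 28])
```

In this sketch the consensus score pairs every audio class with every visual class and rewards pairs whose label embeddings agree, so a single shared label can be read off even when the two classifiers disagree in their raw top-1 predictions; that label's embedding then serves as the explicit semantic guidance injected into the visual backbone.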

Authors (5)
  1. Zhaofeng Shi (5 papers)
  2. Qingbo Wu (32 papers)
  3. Fanman Meng (30 papers)
  4. Linfeng Xu (20 papers)
  5. Hongliang Li (59 papers)
Citations (2)