
MLCA-AVSR: Multi-Layer Cross Attention Fusion based Audio-Visual Speech Recognition (2401.03424v3)

Published 7 Jan 2024 in cs.SD, cs.AI, and eess.AS

Abstract: While automatic speech recognition (ASR) systems degrade significantly in noisy environments, audio-visual speech recognition (AVSR) systems aim to complement the audio stream with noise-invariant visual cues and improve the system's robustness. However, current studies mainly focus on fusing the well-learned modality features, like the output of modality-specific encoders, without considering the contextual relationship during the modality feature learning. In this study, we propose a multi-layer cross-attention fusion based AVSR (MLCA-AVSR) approach that promotes representation learning of each modality by fusing them at different levels of the audio/visual encoders. Experimental results on the MISP2022-AVSR Challenge dataset show the efficacy of our proposed system, achieving a concatenated minimum permutation character error rate (cpCER) of 30.57% on the Eval set and yielding up to a 3.17% relative improvement over our previous system, which ranked second in the challenge. After fusing multiple systems, our proposed approach surpasses the first-place system, establishing a new SOTA cpCER of 29.13% on this dataset.
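
The core idea, inserting cross-attention fusion between intermediate encoder layers rather than fusing only the final encoder outputs, can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the module names (CrossAttentionFusion, MLCAEncoderSketch), the use of standard Transformer encoder layers in place of the paper's E-Branchformer blocks, and all dimensions and layer counts are assumptions for clarity.

```python
# Hypothetical sketch of multi-layer cross-attention fusion between an
# audio and a visual encoder stack; not the authors' exact architecture.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Enrich one modality's features with cross-attended context from the other."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # Query one modality against the other; add the result residually.
        attended, _ = self.cross_attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + attended)


class MLCAEncoderSketch(nn.Module):
    """Two parallel encoder stacks with cross-modal fusion after every layer."""

    def __init__(self, dim: int = 256, num_layers: int = 3):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.audio_layers = nn.ModuleList([make_layer() for _ in range(num_layers)])
        self.visual_layers = nn.ModuleList([make_layer() for _ in range(num_layers)])
        self.audio_from_visual = nn.ModuleList([CrossAttentionFusion(dim) for _ in range(num_layers)])
        self.visual_from_audio = nn.ModuleList([CrossAttentionFusion(dim) for _ in range(num_layers)])

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        for a_enc, v_enc, fuse_a, fuse_v in zip(
            self.audio_layers, self.visual_layers,
            self.audio_from_visual, self.visual_from_audio,
        ):
            audio, visual = a_enc(audio), v_enc(visual)
            # Multi-layer fusion: exchange cross-modal context at every level,
            # not only after the final encoder layer.
            audio, visual = fuse_a(audio, visual), fuse_v(visual, audio)
        return audio, visual


if __name__ == "__main__":
    model = MLCAEncoderSketch()
    a = torch.randn(2, 100, 256)  # (batch, audio frames, feature dim)
    v = torch.randn(2, 25, 256)   # (batch, video frames, feature dim)
    fused_a, fused_v = model(a, v)
    print(fused_a.shape, fused_v.shape)
```

The point of fusing at multiple depths, per the abstract, is that each modality's intermediate representations are shaped by cross-modal context during feature learning, rather than only combining fully learned encoder outputs at the end.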

Authors (4)
  1. He Wang
  2. Pengcheng Guo
  3. Pan Zhou
  4. Lei Xie