Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach (2505.14336v2)

Published 20 May 2025 in eess.AS, cs.CV, cs.MM, and cs.SD

Abstract: Audio-Visual Speech Recognition (AVSR) enhances robustness in noisy environments by integrating visual cues. While recent advances integrate LLMs into AVSR, their high computational cost hinders deployment in resource-constrained settings. To address this, we propose Llama-SMoP, an efficient Multimodal LLM that employs a Sparse Mixture of Projectors (SMoP) module to scale model capacity without increasing inference costs. By incorporating sparsely-gated mixture-of-experts (MoE) projectors, Llama-SMoP enables the use of smaller LLMs while maintaining strong performance. We explore three SMoP configurations and show that Llama-SMoP DEDR (Disjoint-Experts, Disjoint-Routers), which uses modality-specific routers and experts, achieves superior performance on ASR, VSR, and AVSR tasks. Ablation studies confirm its effectiveness in expert activation, scalability, and noise robustness.
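
To make the SMoP idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of a sparsely-gated mixture-of-experts projector and a DEDR-style (Disjoint-Experts, Disjoint-Routers) wrapper with modality-specific routers and experts. Class and argument names such as `SparseMoEProjector`, `audio_feats`, and `video_feats` are illustrative assumptions, and expert counts and dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEProjector(nn.Module):
    """Sketch of a sparsely-gated MoE projector: a router picks the top-k
    expert MLPs per token and mixes their outputs."""

    def __init__(self, in_dim, llm_dim, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(in_dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, llm_dim), nn.GELU(),
                          nn.Linear(llm_dim, llm_dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                              # x: (batch, seq, in_dim)
        gates, idx = self.router(x).topk(self.top_k, dim=-1)
        gates = F.softmax(gates, dim=-1)               # weights over selected experts
        # For clarity every expert is evaluated here; a real sparse kernel
        # would dispatch each token only to its selected experts.
        all_out = torch.stack([e(x) for e in self.experts], dim=-2)  # (b, s, E, d)
        picked = torch.gather(
            all_out, -2,
            idx.unsqueeze(-1).expand(*idx.shape, all_out.size(-1)))  # (b, s, k, d)
        return (gates.unsqueeze(-1) * picked).sum(dim=-2)            # (b, s, d)


class DisjointExpertsDisjointRouters(nn.Module):
    """DEDR-style configuration: each modality gets its own router and experts."""

    def __init__(self, audio_dim, video_dim, llm_dim):
        super().__init__()
        self.audio_proj = SparseMoEProjector(audio_dim, llm_dim)
        self.video_proj = SparseMoEProjector(video_dim, llm_dim)

    def forward(self, audio_feats, video_feats):
        # Project each modality into the LLM embedding space, then concatenate
        # along the sequence dimension before feeding the LLM.
        return torch.cat([self.audio_proj(audio_feats),
                          self.video_proj(video_feats)], dim=1)
```

Since only the top-k experts contribute per token, capacity can grow with the number of experts while per-token inference cost stays roughly constant, which is the trade-off the abstract highlights.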

Authors (5)
  1. Umberto Cappellazzo (10 papers)
  2. Minsu Kim (115 papers)
  3. Stavros Petridis (64 papers)
  4. Daniele Falavigna (19 papers)
  5. Alessio Brutti (30 papers)
