Enhancing In-Domain and Out-Domain EmoFake Detection via Cooperative Multilingual Speech Foundation Models (2507.12595v1)
Abstract: In this work, we address EmoFake Detection (EFD). We hypothesize that multilingual speech foundation models (SFMs) will be particularly effective for EFD due to their pre-training across diverse languages, which enables a nuanced understanding of variations in pitch, tone, and intensity. To validate this, we conduct a comprehensive comparative analysis of state-of-the-art (SOTA) SFMs. Our results show the superiority of multilingual SFMs in both same-language (in-domain) and cross-lingual (out-domain) evaluation. To this end, we also propose THAMA, a fusion method for foundation models (FMs), motivated by related research where combining FMs has shown improved performance. THAMA leverages the complementary conjunction of Tucker decomposition and the Hadamard product for effective fusion. THAMA, synergized with cooperative multilingual SFMs, achieves the topmost performance across in-domain and out-domain settings, outperforming individual FMs, baseline fusion techniques, and prior SOTA methods.
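The abstract does not give THAMA's exact architecture, but the stated combination of a Tucker-style low-rank decomposition with a Hadamard (element-wise) product can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding sizes, rank, weight matrices (`W1`, `W2`, `Wc`), and the function name are all hypothetical, and real SFM embeddings would come from pretrained models rather than random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: two SFM embedding sizes, a low rank r, and an output size
d1, d2, r, out = 768, 1024, 64, 128

# Tucker-style factor matrices projecting each SFM embedding into a shared rank-r space
W1 = rng.standard_normal((d1, r)) * 0.01
W2 = rng.standard_normal((d2, r)) * 0.01
# Core projection mapping the fused rank-r representation to the output space
Wc = rng.standard_normal((r, out)) * 0.01

def tucker_hadamard_fusion(z1, z2):
    """Project both embeddings to rank r, fuse via Hadamard product, then apply the core."""
    h = (z1 @ W1) * (z2 @ W2)  # element-wise (Hadamard) interaction in the low-rank space
    return h @ Wc

# Stand-ins for embeddings from two multilingual SFMs
z1 = rng.standard_normal(d1)
z2 = rng.standard_normal(d2)
fused = tucker_hadamard_fusion(z1, z2)
print(fused.shape)  # (128,)
```

In practice the fused vector would feed a classification head for the fake/real decision, and the factor matrices would be learned end-to-end; the low-rank structure keeps the bilinear interaction between the two SFM embeddings tractable.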