
CoLLAB: A Collaborative Approach for Multilingual Abuse Detection (2406.03205v1)

Published 5 Jun 2024 in eess.AS

Abstract: In this study, we investigate representations from a paralingual Pre-Trained Model (PTM) for Audio Abuse Detection (AAD), which have not previously been explored for this task. Our results demonstrate that they outperform other PTM representations on the ADIMA benchmark. Furthermore, combining PTM representations enhances AAD performance. Despite these improvements, challenges with cross-lingual generalizability remain, and certain languages still require a model trained on data from that same language. This demands individual models for each language, which creates scalability, maintenance, and resource-allocation issues and hinders the practical deployment of AAD systems in linguistically diverse real-world environments. To address this, we introduce CoLLAB, a novel framework that does not require training and allows seamless merging of models trained in different languages through weight-averaging. This results in a unified model with competitive AAD performance across multiple languages.
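
The merging step described in the abstract is training-free parameter averaging across language-specific models. The snippet below is a minimal sketch of that idea in PyTorch; the function name, checkpoint paths, and language codes are hypothetical illustrations, not artifacts from the paper.

```python
# Hypothetical sketch of training-free model merging via weight averaging,
# in the spirit of CoLLAB. Assumes several classifiers with identical
# architectures, each fine-tuned on a different language.
import torch

def merge_by_weight_averaging(state_dicts):
    """Average the parameters of models that share one architecture."""
    merged = {}
    for key in state_dicts[0]:
        # Stack the corresponding tensors from every language-specific model
        # and take their element-wise mean.
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# Usage (hypothetical checkpoint names): load per-language checkpoints,
# merge them, and serve a single unified model across languages.
# lang_state_dicts = [torch.load(f"aad_{lang}.pt") for lang in ["hi", "ta", "bn"]]
# model.load_state_dict(merge_by_weight_averaging(lang_state_dicts))
```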

Authors (4)
  1. Orchid Chetia Phukan (38 papers)
  2. Yashasvi Chaurasia (1 paper)
  3. Arun Balaji Buduru (47 papers)
  4. Rajesh Sharma (73 papers)
