
Multi-Modal Transformers for Utterance-Level Code-Switching Detection (2011.02132v1)

Published 4 Nov 2020 in eess.AS

Abstract: An utterance that contains speech from multiple languages is known as a code-switched utterance. In this work, we propose a novel technique to predict whether a given audio recording is mono-lingual or code-switched. We propose a multi-modal learning approach that utilises phoneme information along with audio features for code-switch detection. Our model consists of a Phoneme Network, which processes the phoneme sequence, and an Audio Network (AN), which processes the MFCC features. We fuse the representations learned by the two networks to predict whether the utterance is code-switched. Both the Audio Network and the Phoneme Network consist of initial convolution, Bi-LSTM, and transformer encoder layers; the transformer encoder uses self-attention to select the important and relevant features for better classification. We show that utilising the phoneme sequence of the utterance along with the MFCC features significantly improves code-switch detection. We train and evaluate our model on the Microsoft code-switching challenge datasets for Telugu, Tamil, and Gujarati. Our experiments show that the multi-modal learning approach significantly improves accuracy over uni-modal approaches on the Telugu-English, Gujarati-English, and Tamil-English datasets. We also study system performance with different neural layers and show that transformers help obtain better performance.
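To make the described two-branch architecture concrete, here is a minimal PyTorch sketch of the structure the abstract outlines: each modality passes through convolution, Bi-LSTM, and transformer encoder layers, and the pooled representations are fused for binary classification. All layer sizes, the embedding for phonemes, mean pooling, and concatenation-based fusion are assumptions for illustration; the paper does not specify these details here.

```python
# Hypothetical sketch of the two-branch code-switch detector described above.
# Hidden sizes, pooling, and concatenation fusion are illustrative assumptions.
import torch
import torch.nn as nn

class BranchEncoder(nn.Module):
    """Convolution -> Bi-LSTM -> transformer encoder, as named in the abstract."""
    def __init__(self, in_dim, hidden=128, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):  # x: (batch, time, in_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        h, _ = self.lstm(h)
        h = self.encoder(h)      # self-attention weights the relevant frames
        return h.mean(dim=1)     # pooled utterance-level representation (assumed)

class CodeSwitchDetector(nn.Module):
    def __init__(self, mfcc_dim=39, phoneme_vocab=100, hidden=128):
        super().__init__()
        self.phoneme_emb = nn.Embedding(phoneme_vocab, hidden)  # assumed vocab size
        self.audio_net = BranchEncoder(mfcc_dim, hidden)   # Audio Network (AN)
        self.phoneme_net = BranchEncoder(hidden, hidden)   # Phoneme Network
        self.classifier = nn.Linear(2 * hidden, 2)         # mono-lingual vs code-switched

    def forward(self, mfcc, phonemes):  # mfcc: (B, T, 39); phonemes: (B, L) int ids
        a = self.audio_net(mfcc)
        p = self.phoneme_net(self.phoneme_emb(phonemes))
        return self.classifier(torch.cat([a, p], dim=-1))  # fused prediction
```

A forward pass with dummy inputs, e.g. `CodeSwitchDetector()(torch.randn(2, 200, 39), torch.randint(0, 100, (2, 50)))`, yields a (2, 2) logit tensor over the mono-lingual/code-switched classes.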
