End-to-End Language Identification using Multi-Head Self-Attention and 1D Convolutional Neural Networks (2102.00306v1)
Abstract: In this work, we propose a new approach to language identification for Indian languages using multi-head self-attention combined with 1D convolutional neural networks operating on raw waveforms. Our architecture consists of an encoder, a multi-head self-attention layer, and a statistics pooling layer. The encoder learns features directly from the raw waveform using 1D convolution kernels followed by an LSTM layer; the LSTM captures temporal dependencies among the features extracted by the 1D convolutional layers. The multi-head self-attention layer takes the LSTM outputs and applies self-attention with M different heads, allowing the model to assign greater weight to the more useful features and less weight to the less relevant ones. Finally, the frame-level features are aggregated by a statistics pooling layer into an utterance-level feature vector for label prediction. We conduct all our experiments on 373 hours of audio data covering eight Indian languages. Our experiments show that our approach outperforms the baseline model by an absolute 3.69% in F1-score and achieves a best F1-score of 95.90%. We also find that models trained on raw waveforms yield a 1.7% improvement in performance over models built on handcrafted features.
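To make the pipeline described in the abstract concrete, here is a minimal sketch of the architecture in PyTorch: a 1D-convolutional encoder over the raw waveform, an LSTM, multi-head self-attention, statistics pooling, and a classifier over the eight languages. All layer sizes, kernel widths, and the number of heads are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of the described architecture (PyTorch assumed);
# hyperparameters are illustrative, not the paper's actual settings.
import torch
import torch.nn as nn


class RawWaveformLID(nn.Module):
    """1D-conv encoder + LSTM + multi-head self-attention + statistics pooling."""

    def __init__(self, num_languages=8, conv_channels=64,
                 lstm_hidden=128, num_heads=4):
        super().__init__()
        # Encoder: 1D convolutions learn features directly from the raw waveform.
        self.conv = nn.Sequential(
            # ~25 ms window / 10 ms hop at 16 kHz (assumed sample rate)
            nn.Conv1d(1, conv_channels, kernel_size=400, stride=160),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, stride=2),
            nn.ReLU(),
        )
        # LSTM captures temporal structure across the conv features.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # Multi-head self-attention re-weights the frame-level features.
        self.attn = nn.MultiheadAttention(lstm_hidden, num_heads,
                                          batch_first=True)
        # Classifier over the pooled (mean ++ std) utterance-level vector.
        self.fc = nn.Linear(2 * lstm_hidden, num_languages)

    def forward(self, wav):                 # wav: (batch, samples)
        x = self.conv(wav.unsqueeze(1))     # (batch, channels, frames)
        x = x.transpose(1, 2)               # (batch, frames, channels)
        x, _ = self.lstm(x)                 # (batch, frames, hidden)
        x, _ = self.attn(x, x, x)           # self-attention: Q = K = V
        # Statistics pooling: concatenate per-utterance mean and std
        # over the frame axis to get a fixed-size utterance embedding.
        pooled = torch.cat([x.mean(dim=1), x.std(dim=1)], dim=-1)
        return self.fc(pooled)              # (batch, num_languages) logits


if __name__ == "__main__":
    model = RawWaveformLID()
    logits = model(torch.randn(2, 32000))   # two 2-second clips at 16 kHz
    print(logits.shape)                     # torch.Size([2, 8])
```

The statistics pooling step is what converts a variable-length sequence of attended frame features into a single fixed-dimensional vector, which is why the classifier can operate at the utterance level regardless of audio duration.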