
Transformer-based Model for Word Level Language Identification in Code-mixed Kannada-English Texts (2211.14459v1)

Published 26 Nov 2022 in cs.CL and cs.AI

Abstract: Code-mixed data currently receives a lot of attention in NLP research. Language identification of code-mixed social media text has been an interesting problem of study in recent years due to the advancement and influence of social media in communication. This is the system description paper of the Instituto Politécnico Nacional, Centro de Investigación en Computación (CIC) team for the CoLI-Kanglish shared task at ICON2022. In this paper, we propose the use of a Transformer-based model for word-level language identification in code-mixed Kannada-English texts. The proposed model achieves a weighted F1-score of 0.84 and a macro F1-score of 0.61 on the CoLI-Kenglish dataset.
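The task described in the abstract amounts to assigning a language tag to every word of a code-mixed Kannada-English sentence. The sketch below is an illustration only, not the authors' published code: it frames word-level language identification as token classification with a pretrained multilingual Transformer via the Hugging Face transformers library. The checkpoint, label set, and example sentence are assumptions, and the classification head is untrained until fine-tuned on the CoLI-Kenglish data.

```python
# Illustrative sketch only: word-level language identification as token
# classification with a pretrained multilingual Transformer. The checkpoint,
# label set, and example sentence are assumptions, not the authors' setup;
# the classification head is randomly initialized and must be fine-tuned
# on the CoLI-Kenglish training data before its predictions are meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["kn", "en", "kn-en", "name", "location", "other"]  # hypothetical tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)

# A (hypothetical) code-mixed Kannada-English sentence, already split into words.
words = ["ee", "movie", "tumba", "chennagide", "really"]

# Tokenize word-by-word so sub-word pieces can be mapped back to words.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_subtokens, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# Assign each word the label predicted for its first sub-token.
seen = set()
for tok_idx, word_idx in enumerate(enc.word_ids(batch_index=0)):
    if word_idx is None or word_idx in seen:
        continue
    seen.add(word_idx)
    print(f"{words[word_idx]:>12} -> {LABELS[pred_ids[tok_idx]]}")
```

On the reported scores: a macro F1 (0.61) well below the weighted F1 (0.84) typically indicates class imbalance, since macro-F1 averages per-class scores without weighting by support, so rare tags pull the unweighted average down.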

Authors (6)
  1. Atnafu Lambebo Tonja (27 papers)
  2. Mesay Gemeda Yigezu (8 papers)
  3. Olga Kolesnikova (24 papers)
  4. Moein Shahiki Tash (7 papers)
  5. Grigori Sidorov (45 papers)
  6. Alexander Gelbukh (1 paper)
Citations (23)
