
Code-Switched Language Models Using Neural Based Synthetic Data from Parallel Sentences (1909.08582v1)

Published 18 Sep 2019 in cs.CL

Abstract: Training code-switched language models is difficult due to the lack of data and the complexity of the grammatical structure. Linguistic constraint theories have been used for decades to generate artificial code-switching sentences to cope with this issue. However, this requires external word alignments or constituency parsers, which produce erroneous results for distant language pairs. We propose a sequence-to-sequence model with a copy mechanism that generates code-switching data by leveraging parallel monolingual translations alongside a limited source of code-switching data. The model learns how to combine words from parallel sentences and when to switch from one language to the other. Moreover, it captures code-switching constraints by attending to and aligning the words in its inputs, without requiring any external knowledge. Experimental results show that a language model trained on the generated sentences achieves state-of-the-art performance and improves end-to-end automatic speech recognition.
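
To make the copy-mechanism idea concrete, below is a minimal PyTorch sketch of one decoding step in a pointer-generator style decoder: attention over the encoded parallel sentences serves as a soft word alignment, and a learned gate mixes a generation distribution over the vocabulary with a copy distribution over the source tokens. This is an illustrative reconstruction under standard pointer-generator assumptions, not the authors' exact architecture; all names and sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyDecoderStep(nn.Module):
    """One decoding step of a pointer-generator style copy mechanism (sketch).

    Mixes a generation distribution over the target vocabulary with a copy
    distribution over the source (parallel-sentence) tokens, weighted by a
    learned gate p_gen. Hypothetical simplification of the paper's model.
    """

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size * 2, vocab_size)
        self.gate = nn.Linear(hidden_size * 2, 1)

    def forward(self, dec_state, enc_states, src_token_ids):
        # dec_state:     (batch, hidden) decoder state at this step
        # enc_states:    (batch, src_len, hidden) encoder states of the
        #                concatenated parallel sentences
        # src_token_ids: (batch, src_len) vocabulary ids of source tokens

        # Dot-product attention over source words: acts as a soft alignment,
        # so no external word aligner or parser is needed.
        attn_scores = torch.bmm(enc_states, dec_state.unsqueeze(2)).squeeze(2)
        attn = F.softmax(attn_scores, dim=1)                    # (batch, src_len)
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)

        features = torch.cat([dec_state, context], dim=1)
        p_vocab = F.softmax(self.vocab_proj(features), dim=1)   # generate
        p_gen = torch.sigmoid(self.gate(features))              # mixing gate

        # Copy distribution: scatter attention mass onto the vocabulary ids
        # of the source tokens, so words from either language can be copied.
        p_copy = torch.zeros_like(p_vocab)
        p_copy.scatter_add_(1, src_token_ids, attn)

        # Final next-token distribution: generate or copy.
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

In this reading, the gate plays the role of deciding when to "switch" languages: a high copy weight on a token from one side of the parallel pair keeps the output in that language, while shifting attention to the other side produces a switch point.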

Authors (4)
  1. Genta Indra Winata (94 papers)
  2. Andrea Madotto (64 papers)
  3. Chien-Sheng Wu (77 papers)
  4. Pascale Fung (150 papers)
Citations (90)