Training a code-switching language model with monolingual data (1911.06003v2)
Published 14 Nov 2019 in cs.CL
Abstract: A lack of code-switching data complicates the training of code-switching (CS) language models (LMs). We propose an approach to train such CS LMs on monolingual data only. By constraining and normalizing the output projection matrix in RNN-based LMs, we bring the embeddings of different languages closer to each other. Numerical and visualization results show that the proposed approaches markedly improve the performance of CS LMs trained on monolingual data. The proposed approaches are comparable to, or even better than, training CS LMs with artificially generated CS data. We additionally use unsupervised bilingual word translation to analyze whether semantically equivalent words in different languages are mapped close together.
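The abstract mentions constraining and normalizing the output projection matrix of an RNN-based LM. The following is a minimal sketch of what the normalization part could look like in PyTorch; the class name, layer sizes, and the choice of LSTM are hypothetical and not taken from the paper, and only the row-wise L2 normalization of the output projection is meant to illustrate the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedOutputLM(nn.Module):
    """Sketch of an RNN LM whose output projection rows are L2-normalized
    before the softmax, so output embeddings of both languages lie on a
    common hypersphere. Hyperparameters are illustrative, not the paper's."""

    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # One output-embedding row per vocabulary word; the vocabulary is
        # assumed to be shared across the two languages.
        self.out_proj = nn.Parameter(torch.randn(vocab_size, hidden_dim) * 0.01)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))          # (batch, seq, hidden)
        # Normalize each output-embedding row to unit length before computing
        # logits, constraining all word embeddings to the unit hypersphere.
        w = F.normalize(self.out_proj, p=2, dim=1)   # (vocab, hidden)
        logits = h @ w.t()                           # (batch, seq, vocab)
        return logits
```

In this sketch, perplexity would be computed with a standard cross-entropy loss over the logits; the normalization simply removes norm differences between the two languages' output embeddings, which is one way to read the abstract's "constraining and normalizing" step.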