
Dual Language Models for Code Switched Speech Recognition (1711.01048v2)

Published 3 Nov 2017 in cs.CL

Abstract: In this work, we present a simple and elegant approach to language modeling for bilingual code-switched text. Since code-switching is a blend of two or more different languages, a standard bilingual language model can be improved upon by using structures of the monolingual language models. We propose a novel technique called dual language models, which involves building two complementary monolingual language models and combining them using a probabilistic model for switching between the two. We evaluate the efficacy of our approach using a conversational Mandarin-English speech corpus. We prove the robustness of our model by showing significant improvements in perplexity measures over the standard bilingual language model without the use of any external information. Similar consistent improvements are also reflected in automatic speech recognition error rates.
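The core idea described in the abstract, two monolingual language models tied together by a probabilistic switching model, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the monolingual models here are toy unigram distributions with made-up probabilities, and `P_SWITCH` is an assumed constant switch probability rather than a value from the paper.

```python
import math

# Hypothetical unigram probabilities for two tiny vocabularies
# (English and romanized Mandarin), for illustration only.
LM_EN = {"i": 0.4, "eat": 0.3, "rice": 0.3}
LM_ZH = {"wo": 0.4, "chi": 0.3, "fan": 0.3}

P_SWITCH = 0.2  # assumed probability of switching language at each word


def dual_lm_logprob(words, langs):
    """Log-probability of a code-switched word sequence.

    langs[i] tags word i as "en" or "zh". At each step the model pays
    P_SWITCH if the language changes and (1 - P_SWITCH) if it stays,
    then scores the word with the matching monolingual model.
    """
    logp = 0.0
    prev_lang = None
    for w, lang in zip(words, langs):
        if prev_lang is not None:
            trans = P_SWITCH if lang != prev_lang else 1.0 - P_SWITCH
            logp += math.log(trans)
        lm = LM_EN if lang == "en" else LM_ZH
        logp += math.log(lm[w])
        prev_lang = lang
    return logp


# Score a code-switched sentence: "wo chi rice" (Mandarin -> English)
score = dual_lm_logprob(["wo", "chi", "rice"], ["zh", "zh", "en"])
```

A real system would replace the unigram tables with full n-gram (or neural) monolingual models and estimate the switch probabilities from code-switched training data, but the factorization into two monolingual models plus a switching distribution is the point the abstract makes.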

Citations (15)
