
Reducing language context confusion for end-to-end code-switching automatic speech recognition (2201.12155v4)

Published 28 Jan 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Code-switching involves alternating between languages within the communication process. Training end-to-end (E2E) automatic speech recognition (ASR) systems for code-switching is especially challenging because code-switching training data are usually insufficient to combat the increased multilingual context confusion caused by the presence of more than one language. We propose a language-related attention mechanism to reduce multilingual context confusion for the E2E code-switching ASR model, based on the Equivalence Constraint (EC) theory. This linguistic theory requires that any monolingual fragment occurring in a code-switching sentence must also occur in one of the monolingual sentences, which establishes a bridge between monolingual data and code-switching data. We leverage this theory to design the code-switching E2E ASR model. The proposed model efficiently transfers language knowledge from rich monolingual data to improve the performance of the code-switching ASR model. We evaluate our model on the ASRU 2019 Mandarin-English code-switching challenge dataset. Compared to the baseline model, our proposed model achieves a 17.12% relative error reduction.
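The paper does not spell out its mechanism here, but the core idea of language-related attention can be illustrated with a minimal sketch: given per-token language IDs (e.g., 0 = Mandarin, 1 = English), bias the attention scores so each token attends preferentially to same-language context. The function name, the penalty scheme, and the language-ID input below are all hypothetical illustration, not the authors' implementation.

```python
# Hypothetical sketch of language-biased attention for code-switching ASR.
# Assumes per-token language IDs are available; this is NOT the paper's
# exact mechanism, only an illustration of discouraging cross-language
# attention to reduce multilingual context confusion.
import torch
import torch.nn.functional as F

def language_masked_attention(q, k, v, lang_ids, cross_lang_penalty=-1e4):
    """Scaled dot-product attention biased toward same-language context.

    q, k, v:   (batch, seq_len, d_model) query/key/value tensors
    lang_ids:  (batch, seq_len) integer language ID per token
    cross_lang_penalty: negative bias added to cross-language scores
    """
    d_model = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_model ** 0.5          # (B, T, T)

    # True where the query token and key token share the same language.
    same_lang = lang_ids.unsqueeze(2) == lang_ids.unsqueeze(1)  # (B, T, T)

    # Penalize (soft-mask) attention across language boundaries.
    scores = scores + (~same_lang).to(scores.dtype) * cross_lang_penalty

    attn = F.softmax(scores, dim=-1)
    return attn @ v

# Toy usage: one utterance of six tokens, first three Mandarin, last three English.
B, T, D = 1, 6, 8
q = k = v = torch.randn(B, T, D)
lang_ids = torch.tensor([[0, 0, 0, 1, 1, 1]])
out = language_masked_attention(q, k, v, lang_ids)
print(out.shape)  # torch.Size([1, 6, 8])
```

A soft penalty rather than a hard mask is used here so that some cross-language information can still flow, which matters at code-switch boundaries; whether the paper uses a hard or soft constraint is not stated in the abstract.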

Authors (6)
  1. Shuai Zhang (319 papers)
  2. Jiangyan Yi (77 papers)
  3. Zhengkun Tian (24 papers)
  4. Jianhua Tao (139 papers)
  5. Yu Ting Yeung (11 papers)
  6. Liqun Deng (13 papers)
Citations (11)
