Cross-domain Speech Recognition with Unsupervised Character-level Distribution Matching (2104.07491v3)

Published 15 Apr 2021 in cs.SD, cs.LG, and eess.AS

Abstract: End-to-end automatic speech recognition (ASR) can achieve promising performance with large-scale training data. However, it is known that a domain mismatch between training and testing data often degrades recognition accuracy. In this work, we focus on unsupervised domain adaptation for ASR and propose CMatch, a Character-level distribution matching method that performs fine-grained adaptation between each character in the two domains. First, to obtain labels for the features belonging to each character, we perform frame-level label assignment using Connectionist Temporal Classification (CTC) pseudo labels. Then, we match the character-level distributions using Maximum Mean Discrepancy. We train our algorithm with the self-training technique. Experiments on the Libri-Adapt dataset show that our proposed approach achieves 14.39% and 16.50% relative Word Error Rate (WER) reduction on cross-device and cross-environment ASR, respectively. We also comprehensively analyze different strategies for frame-level label assignment and Transformer adaptation.
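The core distance used by CMatch, Maximum Mean Discrepancy (MMD), can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian kernel bandwidth, feature dimensions, and the synthetic frames standing in for CTC-assigned character features are all assumptions for the example.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_squared(source_feats, target_feats, sigma=1.0):
    """Biased estimate of squared MMD between two sets of feature
    frames (rows = frames, columns = feature dimensions)."""
    k_ss = gaussian_kernel(source_feats, source_feats, sigma)
    k_tt = gaussian_kernel(target_feats, target_feats, sigma)
    k_st = gaussian_kernel(source_feats, target_feats, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Hypothetical frames assigned (via CTC pseudo labels) to the same
# character in the source and target domains; the target frames are
# mean-shifted to simulate a domain mismatch.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))
tgt = rng.normal(0.5, 1.0, size=(64, 16))
print(mmd_squared(src, tgt))  # positive; grows as the distributions diverge
```

In CMatch this quantity would be computed per character and minimized as an auxiliary loss during adaptation, pulling the two domains' character-level feature distributions together.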

Authors (5)
  1. Wenxin Hou (11 papers)
  2. Jindong Wang (150 papers)
  3. Xu Tan (164 papers)
  4. Tao Qin (201 papers)
  5. Takahiro Shinozaki (13 papers)
Citations (14)
