
Human Listening and Live Captioning: Multi-Task Training for Speech Enhancement (2106.02896v1)

Published 5 Jun 2021 in eess.AS

Abstract: With the surge of online meetings, it has become more critical than ever to provide high-quality speech audio and live captioning under various noise conditions. However, most monaural speech enhancement (SE) models introduce processing artifacts and thus degrade the performance of downstream tasks, including automatic speech recognition (ASR). This paper proposes a multi-task training framework to make the SE models unharmful to ASR. Because most ASR training samples do not have corresponding clean signal references, we alternately perform two model update steps called SE-step and ASR-step. The SE-step uses clean and noisy signal pairs and a signal-based loss function. The ASR-step applies a pre-trained ASR model to training signals enhanced with the SE model. A cross-entropy loss between the ASR output and reference transcriptions is calculated to update the SE model parameters. Experimental results with realistic large-scale settings using ASR models trained on 75,000-hour data show that the proposed framework improves the word error rate for the SE output by 11.82% with little compromise in the SE quality. Performance analysis is also carried out by changing the ASR model, the data used for the ASR-step, and the schedule of the two update steps.
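The alternating two-step update described in the abstract lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the mask-based SE model, the linear stand-in for the frozen ASR model, the MSE signal loss, the shapes, and the random data are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins: a mask-based SE front-end and a pre-trained ASR
# model. Architectures and shapes are illustrative, not from the paper.
FEAT, VOCAB, T = 257, 100, 50
se_model = torch.nn.Sequential(torch.nn.Linear(FEAT, FEAT), torch.nn.Sigmoid())
asr_model = torch.nn.Linear(FEAT, VOCAB)
for p in asr_model.parameters():   # the ASR model is pre-trained and frozen;
    p.requires_grad = False        # only the SE parameters are updated

opt = torch.optim.Adam(se_model.parameters(), lr=1e-4)

def se_step(noisy, clean):
    """SE-step: signal-based loss on a (noisy, clean) training pair."""
    enhanced = se_model(noisy) * noisy     # masking-based enhancement
    loss = F.mse_loss(enhanced, clean)     # any signal-based loss would do
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def asr_step(noisy, tokens):
    """ASR-step: cross-entropy through the frozen ASR model. Gradients flow
    back into the SE model, so no clean signal reference is needed."""
    enhanced = se_model(noisy) * noisy
    logits = asr_model(enhanced)           # (batch, T, vocab)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), tokens.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Alternate the two updates (the paper also studies other schedules).
for _ in range(3):
    noisy = torch.rand(4, T, FEAT)
    clean = torch.rand(4, T, FEAT)
    tokens = torch.randint(0, VOCAB, (4, T))
    print(se_step(noisy, clean), asr_step(noisy, tokens))
```

The point of the alternation is that the ASR-step consumes only (noisy signal, transcription) pairs, so the large pool of ASR training data that lacks clean references can still shape the SE model, while the SE-step preserves signal quality.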

Authors (8)
  1. Sefik Emre Eskimez (28 papers)
  2. Xiaofei Wang (138 papers)
  3. Min Tang (80 papers)
  4. Hemin Yang (7 papers)
  5. Zirun Zhu (8 papers)
  6. Zhuo Chen (319 papers)
  7. Huaming Wang (23 papers)
  8. Takuya Yoshioka (77 papers)
Citations (21)
