
Improving Deliberation by Text-Only and Semi-Supervised Training (2206.14716v1)

Published 29 Jun 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Text-only and semi-supervised training based on audio-only data has gained popularity recently due to the wide availability of unlabeled text and speech data. In this work, we propose incorporating text-only and semi-supervised training into an attention-based deliberation model. By incorporating text-only data in training a Bidirectional Encoder Representations from Transformers (BERT) model for the deliberation text encoder, and leveraging large-scale text-to-speech and audio-only utterances via a joint acoustic and text decoder (JATD) and semi-supervised training, we achieved a 4%-12% WER reduction on various tasks compared to the baseline deliberation model. Compared to a state-of-the-art language model (LM) rescoring method, the deliberation model reduces the Google Voice Search WER by 11% relative. We show that the deliberation model also achieves a positive human side-by-side evaluation compared to the state-of-the-art LM rescorer, with reasonable endpointer latencies.
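The abstract reports improvements as relative WER reductions (e.g., "11% relative" on Google Voice Search). As a point of clarification, relative reduction is computed as the baseline WER minus the new WER, divided by the baseline WER. A minimal sketch, using hypothetical WER values for illustration only:

```python
def relative_wer_reduction(baseline_wer: float, model_wer: float) -> float:
    """Relative word error rate (WER) reduction, as commonly reported in ASR papers."""
    return (baseline_wer - model_wer) / baseline_wer

# Hypothetical numbers (not from the paper): baseline 6.0% WER, new model 5.34% WER
reduction = relative_wer_reduction(6.0, 5.34)
print(f"{reduction:.0%} relative WER reduction")  # prints "11% relative WER reduction"
```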

Authors (7)
  1. Ke Hu (57 papers)
  2. Tara N. Sainath (79 papers)
  3. Yanzhang He (41 papers)
  4. Rohit Prabhavalkar (59 papers)
  5. Trevor Strohman (38 papers)
  6. Sepand Mavandadi (5 papers)
  7. Weiran Wang (65 papers)
Citations (11)
