Improving the Training Recipe for a Robust Conformer-based Hybrid Model (2206.12955v1)

Published 26 Jun 2022 in cs.CL, eess.AS, and stat.ML

Abstract: Speaker adaptation is important for building robust automatic speech recognition (ASR) systems. In this work, we investigate various feature-space methods for speaker adaptive training (SAT) of a conformer-based acoustic model (AM) on the Switchboard 300h dataset. We propose a method, called Weighted-Simple-Add, which adds weighted speaker information vectors to the input of the multi-head self-attention module of the conformer AM. Using this method for SAT, we achieve 3.5% and 4.5% relative improvement in word error rate (WER) on the CallHome part of Hub5'00 and Hub5'01, respectively. Moreover, we build on top of our previous work, where we proposed a novel and competitive training recipe for a conformer-based hybrid AM. We extend and improve this recipe, achieving 11% relative WER improvement on the Switchboard 300h Hub5'00 dataset. We also make the recipe more efficient by reducing the total number of parameters by 34% relative.
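
The core idea of Weighted-Simple-Add can be illustrated with a short sketch. Below is a minimal PyTorch-style example, assuming a speaker vector (e.g., an i-vector) projected to the model dimension and scaled by a learnable weight before being added to the input of the multi-head self-attention module; all module and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Weighted-Simple-Add idea: a weighted speaker
# information vector is added to the input of the multi-head self-attention
# (MHSA) module of a conformer block. Names are illustrative, not the
# authors' code.
import torch
import torch.nn as nn


class WeightedSimpleAddMHSA(nn.Module):
    def __init__(self, d_model: int, num_heads: int, speaker_dim: int):
        super().__init__()
        # Project the speaker vector (e.g., an i-vector) to the model dimension.
        self.speaker_proj = nn.Linear(speaker_dim, d_model)
        # Learnable scalar weight: the "weighted" part of Weighted-Simple-Add.
        self.speaker_weight = nn.Parameter(torch.tensor(1.0))
        self.norm = nn.LayerNorm(d_model)
        self.mhsa = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, speaker_vec: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model); speaker_vec: (batch, speaker_dim)
        spk = self.speaker_proj(speaker_vec).unsqueeze(1)  # (batch, 1, d_model)
        x_adapted = x + self.speaker_weight * spk          # broadcast over time
        h = self.norm(x_adapted)
        out, _ = self.mhsa(h, h, h)
        return x + out                                     # residual connection
```

The learnable weight lets the model control how strongly speaker information is injected into the self-attention input during speaker adaptive training.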

Authors (5)
  1. Mohammad Zeineldeen (16 papers)
  2. Jingjing Xu (80 papers)
  3. Christoph Lüscher (10 papers)
  4. Ralf Schlüter (73 papers)
  5. Hermann Ney (104 papers)
Citations (18)
