Improving Noise Robustness of an End-to-End Neural Model for Automatic Speech Recognition (2010.12715v1)

Published 23 Oct 2020 in eess.AS

Abstract: We present our experiments in training a noise-robust end-to-end automatic speech recognition (ASR) model using intensive data augmentation. We explore the efficacy of fine-tuning a pre-trained model to improve noise robustness, and find it to be a very efficient way to train for various noisy conditions, especially when the conditions in which the model will be used are unknown. Starting with a model trained on clean data establishes baseline performance on clean speech. We carefully fine-tune this model to both maintain its performance on clean speech and improve its accuracy in noisy conditions. With this scheme, we trained noise-robust English and Mandarin ASR models on large public corpora. All described models and training recipes are open sourced in NeMo, a toolkit for conversational AI.
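
The augmentation at the core of this recipe is additive noise mixed into clean speech at controlled signal-to-noise ratios. The paper's actual models and recipes are published in NeMo; the snippet below is only a minimal NumPy sketch of that mixing step, where the function name `mix_at_snr` and the 0-30 dB SNR range are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Additively mix `noise` into `speech` at a target SNR in dB.

    Assumes both arrays are mono float waveforms at the same sample rate.
    """
    # Tile or trim the noise so it covers the whole utterance.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against silent noise clips

    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative usage: augment one utterance at a random SNR drawn from [0, 30] dB
# (synthetic stand-in signals; real training would draw speech and noise clips
# from the corpora and noise sets described in the paper).
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000).astype(np.float32)  # stand-in for a 1 s clip
noise = rng.standard_normal(8000).astype(np.float32)    # stand-in for a noise clip
noisy = mix_at_snr(speech, noise, snr_db=rng.uniform(0, 30))
```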

Authors (6)
  1. Jagadeesh Balam (40 papers)
  2. Jocelyn Huang (11 papers)
  3. Vitaly Lavrukhin (32 papers)
  4. Slyne Deng (1 paper)
  5. Somshubra Majumdar (31 papers)
  6. Boris Ginsburg (112 papers)
Citations (4)
