
How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications (2203.16822v2)

Published 31 Mar 2022 in eess.AS, cs.CL, and cs.LG

Abstract: Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions between 20% and 40% are obtained in comparison to hybrid-based ASR baselines by only fine-tuning E2E acoustic models with a smaller fraction of labeled data. We also analyze WERs in the low-resource scenario and the gender bias carried by one ATC dataset.
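
The abstract describes fine-tuning pre-trained Wav2Vec 2.0 / XLS-R encoders with a CTC objective on labeled ATC transcripts. The sketch below illustrates that general setup with the HuggingFace Transformers API; the checkpoint name, dummy audio, and transcript are illustrative assumptions and do not reproduce the paper's actual recipe, vocabulary, or ATC data.

```python
# Minimal sketch of CTC fine-tuning a pre-trained Wav2Vec 2.0 model for ASR.
# Assumptions: HuggingFace Transformers is installed; "facebook/wav2vec2-base-960h"
# is used here only because it ships with a ready CTC head and character tokenizer.
# The paper's models (Wav2Vec 2.0 / XLS-R) and ATC fine-tuning data are not used here.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "facebook/wav2vec2-base-960h"  # placeholder checkpoint, not the paper's
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Toy batch: one second of silence at 16 kHz with a dummy ATC-style transcript.
speech = np.zeros(16000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
labels = processor.tokenizer("CLEARED FOR TAKEOFF", return_tensors="pt").input_ids

# Passing labels makes the forward call return the CTC loss that a fine-tuning
# loop would backpropagate on labeled in-domain transcripts.
outputs = model(input_values=inputs.input_values, labels=labels)
print(float(outputs.loss))
```

In a real fine-tuning run, this loss would be minimized over batches of in-domain (here, ATC) audio-transcript pairs, typically with the feature encoder frozen and only a small fraction of labeled data, which is the low-resource setting the abstract refers to.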

Authors (9)
  1. Juan Zuluaga-Gomez (27 papers)
  2. Amrutha Prasad (10 papers)
  3. Iuliia Nigmatulina (14 papers)
  4. Saeed Sarfjoo (3 papers)
  5. Petr Motlicek (40 papers)
  6. Matthias Kleinert (2 papers)
  7. Hartmut Helmke (3 papers)
  8. Oliver Ohneiser (3 papers)
  9. Qingran Zhan (2 papers)
Citations (40)
