Combining Spectral and Self-Supervised Features for Low Resource Speech Recognition and Translation (2204.02470v2)

Published 5 Apr 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Self-Supervised Learning (SSL) models have been successfully applied in various deep learning-based speech tasks, particularly those with a limited amount of data. However, the quality of SSL representations depends heavily on the relatedness between the SSL training domain(s) and the target data domain. In contrast, spectral feature (SF) extractors such as log Mel-filterbanks are hand-crafted, non-learnable components, and could be more robust to domain shifts. The present work examines the assumption that combining non-learnable SF extractors with SSL models is an effective approach to low resource speech tasks. We propose a learnable and interpretable framework to combine SF and SSL representations. The proposed framework significantly outperforms both baseline and SSL models on Automatic Speech Recognition (ASR) and Speech Translation (ST) tasks on three low resource datasets. We additionally design a mixture-of-experts-based combination model. This last model reveals that the relative contribution of SSL models over conventional SF extractors is very small in the case of a domain mismatch between the SSL training set and the target language data.
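
To make the abstract's two combination ideas concrete, here is a minimal PyTorch sketch of (a) a learnable, interpretable fusion of SF and SSL feature streams via softmax-normalized stream weights, and (b) a mixture-of-experts-style variant with a per-frame gate. This is an illustration under assumed interfaces, not the paper's actual architecture; all module names, dimensions, and the gate design are hypothetical, and the two streams are assumed to be pre-aligned to the same frame rate.

```python
# Hypothetical sketch of SF + SSL feature fusion; not the paper's exact model.
import torch
import torch.nn as nn


class LearnableFusion(nn.Module):
    """Projects both streams to a shared dimension and mixes them with a
    softmax-normalized learnable weight per stream, so the weights are
    directly interpretable as each stream's relative contribution."""

    def __init__(self, sf_dim: int, ssl_dim: int, out_dim: int):
        super().__init__()
        self.sf_proj = nn.Linear(sf_dim, out_dim)
        self.ssl_proj = nn.Linear(ssl_dim, out_dim)
        # One logit per stream; softmax yields interpretable mixture weights.
        self.logits = nn.Parameter(torch.zeros(2))

    def forward(self, sf: torch.Tensor, ssl: torch.Tensor) -> torch.Tensor:
        # sf: (batch, time, sf_dim); ssl: (batch, time, ssl_dim).
        w = torch.softmax(self.logits, dim=0)
        return w[0] * self.sf_proj(sf) + w[1] * self.ssl_proj(ssl)


class GatedFusion(nn.Module):
    """Mixture-of-experts-style variant: a small gate network predicts
    per-frame weights over the two streams, so the model can lean on SF
    or SSL features depending on the input frame."""

    def __init__(self, sf_dim: int, ssl_dim: int, out_dim: int):
        super().__init__()
        self.sf_proj = nn.Linear(sf_dim, out_dim)
        self.ssl_proj = nn.Linear(ssl_dim, out_dim)
        self.gate = nn.Linear(2 * out_dim, 2)

    def forward(self, sf: torch.Tensor, ssl: torch.Tensor) -> torch.Tensor:
        h_sf, h_ssl = self.sf_proj(sf), self.ssl_proj(ssl)
        # Per-frame weights over the two experts (SF vs. SSL).
        w = torch.softmax(self.gate(torch.cat([h_sf, h_ssl], dim=-1)), dim=-1)
        return w[..., 0:1] * h_sf + w[..., 1:2] * h_ssl


if __name__ == "__main__":
    sf = torch.randn(4, 100, 80)    # e.g. 80-dim log Mel-filterbanks
    ssl = torch.randn(4, 100, 768)  # e.g. 768-dim SSL hidden states
    fused = GatedFusion(80, 768, 256)(sf, ssl)
    print(fused.shape)  # torch.Size([4, 100, 256])
```

Averaging the gate weights `w` over frames gives the kind of per-stream contribution estimate the abstract alludes to when it reports that SSL's relative contribution shrinks under domain mismatch.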

Authors (6)
  1. Dan Berrebbi (10 papers)
  2. Jiatong Shi (82 papers)
  3. Brian Yan (40 papers)
  4. Osbel Lopez-Francisco (1 paper)
  5. Jonathan D. Amith (2 papers)
  6. Shinji Watanabe (416 papers)
Citations (26)
