Improved Speech Pre-Training with Supervision-Enhanced Acoustic Unit (2212.03482v1)

Published 7 Dec 2022 in eess.AS and cs.SD

Abstract: Speech pre-training has shown great success in learning useful and general latent representations from large-scale unlabeled data. Based on a well-designed self-supervised learning pattern, pre-trained models can serve many downstream speech tasks such as automatic speech recognition. In order to take full advantage of the labeled data in low-resource tasks, we present an improved pre-training method by introducing a supervision-enhanced acoustic unit (SEAU) pattern to intensify the expression of context information and reduce the training cost. Encoder representations extracted from the SEAU pattern are used to generate more representative target units for the HuBERT pre-training process. The proposed method, named SeHuBERT, achieves relative word error rate reductions of 10.5% and 4.9% compared with the standard HuBERT on the Turkmen speech recognition task with 500 hours and 100 hours of fine-tuning data, respectively. Extended to more languages and more data, SeHuBERT can also achieve relative word error rate reductions of approximately 10% at half of the training cost compared with HuBERT.
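The abstract describes clustering encoder representations from a supervised model into discrete target units for HuBERT-style masked prediction. The sketch below is a minimal, hypothetical illustration of that target-unit generation step, not the authors' released pipeline: `encoder_features` stands in for frame-level representations taken from a supervision-enhanced encoder, and the k-means clustering mirrors standard HuBERT target-unit generation.

```python
# Hedged sketch of SEAU-style target-unit generation (illustrative only).
# Assumption: encoder_features are frame-level representations from a
# supervised acoustic encoder; the exact SEAU extraction is not shown here.
import numpy as np
from sklearn.cluster import KMeans


def generate_target_units(encoder_features: np.ndarray, n_units: int = 500) -> np.ndarray:
    """Cluster frame-level encoder features into discrete acoustic units.

    encoder_features: (n_frames, feature_dim) array of representations
    extracted from a supervision-enhanced encoder.
    Returns an (n_frames,) array of integer unit labels, usable as masked
    prediction targets in HuBERT-style pre-training.
    """
    kmeans = KMeans(n_clusters=n_units, n_init=10, random_state=0)
    return kmeans.fit_predict(encoder_features)


if __name__ == "__main__":
    # Toy example: 1000 frames of 256-dim features (random stand-in data).
    feats = np.random.randn(1000, 256).astype(np.float32)
    units = generate_target_units(feats, n_units=50)
    print(units.shape, units.min(), units.max())
```

In the paper's framing, replacing units derived from raw MFCC or earlier-iteration HuBERT features with units derived from a supervised encoder is what injects label information into the otherwise self-supervised pre-training loop.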

Authors (7)
  1. Pengcheng Li (60 papers)
  2. Genshun Wan (10 papers)
  3. Fenglin Ding (5 papers)
  4. Hang Chen (77 papers)
  5. Jianqing Gao (12 papers)
  6. Jia Pan (127 papers)
  7. Cong Liu (169 papers)
Citations (1)
