
Transformer based unsupervised pre-training for acoustic representation learning (2007.14602v3)

Published 29 Jul 2020 in eess.AS and cs.SD

Abstract: Recently, a variety of acoustic tasks and related applications have emerged. For many acoustic tasks, the amount of labeled data may be limited. To address this problem, we propose an unsupervised pre-training method using a Transformer-based encoder to learn a general and robust high-level representation for all acoustic tasks. Experiments were conducted on three kinds of acoustic tasks: speech emotion recognition, sound event detection, and speech translation. All experiments show that pre-training on a task's own training data can significantly improve performance. With larger pre-training data combining the MuST-C, Librispeech, and ESC-US datasets, the UAR for speech emotion recognition further improves by an absolute 4.3% on the IEMOCAP dataset. For sound event detection, the F1 score further improves by an absolute 1.5% on the DCASE2018 task5 development set and 2.1% on the evaluation set. For speech translation, the BLEU score further improves by a relative 12.2% on the En-De dataset and 8.4% on the En-Fr dataset.
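The abstract does not detail the pre-training objective, so the sketch below only illustrates one common setup for this kind of approach: a Transformer encoder pre-trained without labels to reconstruct masked acoustic frames, after which the encoder's hidden states serve as representations for downstream tasks. All module names, dimensions, and the masking scheme are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: assumes a masked-reconstruction objective over
# log-mel filterbank frames; the paper's exact objective is not given in the
# abstract, so every hyperparameter here is hypothetical.
import torch
import torch.nn as nn

class AcousticEncoder(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_heads=4, n_layers=6):
        super().__init__()
        self.input_proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Reconstruction head used only during unsupervised pre-training;
        # downstream tasks would instead consume the encoder hidden states.
        self.recon_head = nn.Linear(d_model, n_mels)

    def forward(self, feats):
        # feats: (batch, time, n_mels) acoustic features
        hidden = self.encoder(self.input_proj(feats))
        return hidden, self.recon_head(hidden)

def pretrain_step(model, feats, mask_prob=0.15):
    """One unsupervised step: zero out random frames, reconstruct them."""
    mask = torch.rand(feats.shape[:2], device=feats.device) < mask_prob
    corrupted = feats.masked_fill(mask.unsqueeze(-1), 0.0)
    _, recon = model(corrupted)
    # Loss is computed on the masked frames only.
    return nn.functional.l1_loss(recon[mask], feats[mask])

model = AcousticEncoder()
feats = torch.randn(8, 200, 80)   # dummy batch of 80-dim filterbank frames
loss = pretrain_step(model, feats)
loss.backward()                   # then update with any optimizer
```

In such a setup, fine-tuning would keep the pre-trained encoder, drop the reconstruction head, and attach a task-specific head (emotion classifier, event detector, or translation decoder) on top of the hidden states.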

Authors (6)
  1. Ruixiong Zhang (10 papers)
  2. Haiwei Wu (16 papers)
  3. Wubo Li (8 papers)
  4. Dongwei Jiang (16 papers)
  5. Wei Zou (62 papers)
  6. Xiangang Li (46 papers)
Citations (26)
