Improving Speech Representation Learning via Speech-level and Phoneme-level Masking Approach (2210.13805v1)

Published 25 Oct 2022 in cs.SD, cs.CL, and eess.AS

Abstract: Recovering masked speech frames is widely used in speech representation learning, but most of these models apply random masking during pre-training. In this work, we propose two masking approaches: (1) speech-level masking, which biases the model toward masking speech segments rather than silence segments, and (2) phoneme-level masking, which masks all frames of a phoneme rather than partial phoneme pieces. We pre-trained the model with these two approaches and evaluated it on two downstream tasks, phoneme classification and speaker recognition. The experiments demonstrate that the proposed masking approaches improve the quality of the learned speech representations.
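The two approaches described in the abstract can be illustrated with a small sketch. The paper does not publish its masking code here, so the following is an assumed implementation: given per-frame phoneme alignments (with silence labeled `'sil'`, an assumed convention), it selects whole phoneme segments to mask (phoneme-level masking) and prefers speech segments over silence when choosing which segments to mask (speech-level masking). Function and parameter names are illustrative, not from the paper.

```python
import random

def phoneme_level_mask(phoneme_ids, mask_ratio=0.15, seed=0):
    """Return a per-frame boolean mask that covers whole phoneme segments.

    phoneme_ids: per-frame phoneme labels; contiguous runs of the same
    label form one phoneme segment, and 'sil' marks silence (assumed
    labeling convention). Speech segments are preferred over silence,
    approximating the paper's speech-level masking bias.
    """
    rng = random.Random(seed)

    # Group frames into contiguous phoneme segments: (start, end, label).
    segments = []
    start = 0
    for i in range(1, len(phoneme_ids) + 1):
        if i == len(phoneme_ids) or phoneme_ids[i] != phoneme_ids[start]:
            segments.append((start, i, phoneme_ids[start]))
            start = i

    # Target number of masked frames.
    n_target = int(mask_ratio * len(phoneme_ids))

    # Speech-level bias: visit speech segments (in random order) before
    # silence segments, so silence is only masked if speech runs out.
    order = sorted(segments, key=lambda s: (s[2] == 'sil', rng.random()))

    mask = [False] * len(phoneme_ids)
    masked = 0
    for s, e, _label in order:
        if masked >= n_target:
            break
        # Phoneme-level masking: mask every frame of the segment at once.
        for j in range(s, e):
            mask[j] = True
        masked += e - s
    return mask
```

With a toy alignment such as `['sil','sil','a','a','a','b','b','sil','c','c']` and `mask_ratio=0.4`, the returned mask always covers complete phoneme segments and leaves silence unmasked while enough speech frames are available, mirroring the two properties the abstract describes.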

Authors (5)
  1. Xulong Zhang (60 papers)
  2. Jianzong Wang (144 papers)
  3. Ning Cheng (96 papers)
  4. Kexin Zhu (5 papers)
  5. Jing Xiao (267 papers)
