Exploration of Efficient End-to-End ASR using Discretized Input from Self-Supervised Learning (2305.18108v1)

Published 29 May 2023 in cs.SD and eess.AS

Abstract: Self-supervised learning (SSL) of speech has shown impressive results in speech-related tasks, particularly in automatic speech recognition (ASR). While most methods use the output of intermediate layers of the SSL model as real-valued features for downstream tasks, alternative approaches based on discretized token sequences are worth exploring: they offer lower storage requirements and allow techniques from natural language processing to be applied. In this paper, we propose a new protocol that uses discretized token sequences for ASR, incorporating de-duplication and sub-word modeling to shorten the input sequence and thereby reduce computational cost. Our experiments on the LibriSpeech dataset demonstrate that the proposed protocol performs competitively with conventional ASR systems using continuous input features, while reducing computational and storage costs.
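
A minimal sketch of the discretized-input pipeline described above, assuming the SSL features have already been quantized into integer cluster IDs (e.g., by k-means over the hidden states of an SSL model). The cluster IDs, the character offset, and the helper names below are illustrative choices, not the paper's implementation:

```python
# Sketch: de-duplication + preparation for sub-word modeling on
# discrete SSL tokens. Assumes frame-level integer cluster IDs are
# already available (e.g., from k-means on SSL hidden states).

from itertools import groupby

def deduplicate(tokens):
    """Collapse runs of identical consecutive tokens, e.g. 7 7 3 3 -> 7 3."""
    return [t for t, _ in groupby(tokens)]

def to_symbols(tokens, base=0x4E00):
    """Map each cluster ID to a single Unicode character so an
    off-the-shelf sub-word model (e.g., BPE/SentencePiece) can be
    trained on the resulting 'text'. The offset is arbitrary."""
    return "".join(chr(base + t) for t in tokens)

# Hypothetical frame-level cluster IDs:
frames = [12, 12, 12, 7, 7, 31, 31, 31, 31, 5]
dedup = deduplicate(frames)      # [12, 7, 31, 5] -- 60% shorter
text = to_symbols(dedup)         # ready for sub-word training
print(len(frames), len(dedup), text)
```

De-duplication alone can shorten the token sequence considerably; the sub-word step then merges frequent token n-grams, further reducing input length and hence the encoder's (roughly quadratic) self-attention cost.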

Authors (5)
  1. Xuankai Chang (61 papers)
  2. Brian Yan (40 papers)
  3. Yuya Fujita (16 papers)
  4. Takashi Maekaku (9 papers)
  5. Shinji Watanabe (416 papers)
Citations (35)
