Towards Universal Speech Discrete Tokens: A Case Study for ASR and TTS (2309.07377v2)

Published 14 Sep 2023 in eess.AS and cs.SD

Abstract: Self-supervised learning (SSL) proficiency in speech-related tasks has driven research into utilizing discrete tokens for speech tasks like recognition and translation, which offer lower storage requirements and great potential to employ natural language processing techniques. However, these studies, mainly single-task focused, faced challenges like overfitting and performance degradation in speech recognition tasks, often at the cost of sacrificing performance in multi-task scenarios. This study presents a comprehensive comparison and optimization of discrete tokens generated by various leading SSL models in speech recognition and synthesis tasks. We aim to explore the universality of speech discrete tokens across multiple speech tasks. Experimental results demonstrate that discrete tokens achieve comparable results against systems trained on FBank features in speech recognition tasks and outperform mel-spectrogram features in speech synthesis in subjective and objective metrics. These findings suggest that universal discrete tokens have enormous potential in various speech-related tasks. Our work is open-source and publicly available at https://github.com/k2-fsa/icefall.

Authors (7)
  1. Yifan Yang
  2. Feiyu Shen
  3. Chenpeng Du
  4. Ziyang Ma
  5. Kai Yu
  6. Daniel Povey
  7. Xie Chen
Citations (22)

Summary

Towards Universal Speech Discrete Tokens: A Case Study for ASR and TTS

The paper "Towards Universal Speech Discrete Tokens: A Case Study for ASR and TTS" explores the universality and efficacy of discrete speech tokens across multiple speech processing tasks, specifically focusing on Automatic Speech Recognition (ASR) and Text-to-Speech (TTS). Using discrete tokens derived from Self-Supervised Learning (SSL) models, the authors aim to compare these with traditional feature representations in speech processing tasks, offering potential improvements in storage efficiency and model performance.

Methodological Insights

The researchers conducted an extensive study using discrete tokens generated by four prominent SSL models: vq-wav2vec, EnCodec, HuBERT, and WavLM. These tokens were assessed for their utility in ASR and TTS tasks:

  1. ASR Study: Discrete tokens were used to train End-to-End (E2E) ASR models on datasets including LibriSpeech and GigaSpeech, with specialized data augmentation strategies introduced to counter overfitting and improve robustness. The models were evaluated in terms of Word Error Rate (WER) and Character Error Rate (CER); a sketch of this metric appears after this list.
  2. TTS Study: The TTS evaluation focused on resynthesis tasks to gauge the upper bound of synthesis quality achievable with discrete tokens. Techniques such as CTX-vec2wav, which uses mel-spectrogram prompts, were applied to assess performance relative to systems built on traditional mel-spectrogram features.
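
Since the ASR systems above are scored with WER and CER (item 1), the following self-contained sketch shows the metric itself: a Levenshtein alignment between reference and hypothesis word sequences, with CER obtained by running the same computation over characters. The example strings are made up.

```python
# Minimal sketch of Word Error Rate:
# WER = (substitutions + deletions + insertions) / reference length,
# computed with a standard Levenshtein (edit-distance) alignment.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                                  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                                  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: 1 substitution + 1 deletion over 6 reference words.
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # ~0.333
```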

Key Findings

  • ASR Performance: Discrete tokens obtained from HuBERT and WavLM offered competitive performance relative to traditional FBank features, especially in low-resource scenarios. However, tokens from models like EnCodec and vq-wav2vec showed lower effectiveness.
  • TTS Performance: In TTS tasks, discrete tokens, with the exception of EnCodec, delivered high-quality audio outputs comparable to mel-spectrogram features. Notably, DAC tokens demonstrated superior resynthesis quality without additional fine-tuning.

Implications and Future Work

This paper highlights the potential for discrete tokens to replace traditional speech features in various applications, offering advantages in storage and processing. The empirical results suggest that these tokens can match, and in some cases exceed, the performance of conventional methods in both ASR and TTS tasks.

Theoretical implications extend to cross-modal exploration, where discrete tokens can serve as a bridge between speech and text representations. Future research might explore the generalization of these tokens across languages and investigate further optimization techniques for multi-task scenarios.

The research serves as a baseline for continued investigation into more efficient and effective universal models for speech processing, aiming to unify the representation of spoken and written language. The open-source release of their work aligns well with ongoing collaborative advancements in this domain.
