CTC-based Non-autoregressive Textless Speech-to-Speech Translation (2406.07330v1)

Published 11 Jun 2024 in cs.CL, cs.AI, cs.SD, and eess.AS

Abstract: Direct speech-to-speech translation (S2ST) has achieved impressive translation quality, but it often suffers from slow decoding due to the considerable length of speech sequences. Recently, some research has turned to non-autoregressive (NAR) models to expedite decoding, yet their translation quality typically lags significantly behind that of autoregressive (AR) models. In this paper, we investigate the performance of CTC-based NAR models in S2ST, as these models have shown impressive results in machine translation. Experimental results demonstrate that by combining pretraining, knowledge distillation, and advanced NAR training techniques such as glancing training and non-monotonic latent alignments, CTC-based NAR models achieve translation quality comparable to the AR model, while delivering up to a 26.81$\times$ decoding speedup.
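The speedup described above comes from CTC's parallel decoding: all output positions are predicted in a single forward pass, and the per-frame predictions are then collapsed by merging consecutive duplicates and removing blanks. A minimal sketch of this standard CTC greedy-collapse step (the token ids and blank index are illustrative, not from the paper):

```python
BLANK = 0  # conventional CTC blank index (illustrative choice)

def ctc_collapse(frame_ids):
    """Collapse a per-frame argmax sequence into the final output:
    merge consecutive duplicate ids, then drop blanks."""
    out = []
    prev = None
    for t in frame_ids:
        if t != prev and t != BLANK:
            out.append(t)
        prev = t
    return out

# All frames are predicted in one parallel pass, then collapsed:
frames = [0, 3, 3, 0, 5, 5, 5, 0, 3]
print(ctc_collapse(frames))  # -> [3, 5, 3]
```

Because no output position conditions on previously generated tokens, decoding latency is bounded by one network pass plus this linear-time collapse, rather than growing step-by-step with the target length as in AR decoding.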

Authors (5)
  1. Qingkai Fang (19 papers)
  2. Zhengrui Ma (18 papers)
  3. Yan Zhou (206 papers)
  4. Min Zhang (630 papers)
  5. Yang Feng (230 papers)