
Investigating Decoder-only Large Language Models for Speech-to-text Translation (2407.03169v1)

Published 3 Jul 2024 in cs.CL, cs.SD, and eess.AS

Abstract: LLMs, known for their exceptional reasoning capabilities, generalizability, and fluency across diverse domains, present a promising avenue for enhancing speech-related tasks. In this paper, we focus on integrating decoder-only LLMs into the task of speech-to-text translation (S2TT). We propose a decoder-only architecture that enables the LLM to directly consume the encoded speech representation and generate the text translation. Additionally, we investigate the effects of different parameter-efficient fine-tuning techniques and task formulations. Our model achieves state-of-the-art performance on CoVoST 2 and FLEURS among models trained without proprietary data. We also conduct analyses to validate the design choices of our proposed model and offer insights into the integration of LLMs into S2TT.
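The core idea in the abstract is to let a decoder-only LLM attend directly over encoded speech: speech-encoder outputs are mapped into the LLM's embedding space and prepended to the text prompt, after which ordinary causal decoding produces the translation. The following is a minimal sketch of that setup, not the authors' released code; it assumes a HuggingFace-style decoder-only LM that accepts `inputs_embeds`, and all module names and dimensions are illustrative.

```python
# Sketch of a decoder-only speech-to-text translation model: speech features
# are projected into the LLM embedding space and concatenated with the
# embedded text prompt, so the LLM consumes speech directly.
import torch
import torch.nn as nn


class DecoderOnlyS2TT(nn.Module):
    def __init__(self, speech_encoder: nn.Module, llm: nn.Module,
                 speech_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.speech_encoder = speech_encoder  # e.g. a pretrained audio encoder (assumed)
        self.llm = llm                        # decoder-only LM exposing inputs_embeds (assumed)
        # Lightweight adapter mapping speech features into the LLM embedding space.
        self.adapter = nn.Linear(speech_dim, llm_dim)

    def forward(self, speech: torch.Tensor, prompt_ids: torch.Tensor):
        # (batch, frames, speech_dim) -> (batch, frames, llm_dim)
        speech_embeds = self.adapter(self.speech_encoder(speech))
        # Embed the text prompt, e.g. "Translate the audio to German:".
        prompt_embeds = self.llm.get_input_embeddings()(prompt_ids)
        # Concatenate along the sequence axis; the LLM attends over speech
        # frames and prompt tokens with its usual causal attention, then
        # generates the translation autoregressively.
        inputs_embeds = torch.cat([speech_embeds, prompt_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```

In a setup like this, the LLM weights would typically be adapted with a parameter-efficient method such as LoRA rather than full fine-tuning; which adaptation technique and which task formulation work best is precisely what the paper ablates.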

Authors (7)
  1. Chao-Wei Huang (28 papers)
  2. Hui Lu (38 papers)
  3. Hongyu Gong (44 papers)
  4. Hirofumi Inaguma (42 papers)
  5. Ilia Kulikov (31 papers)
  6. Ruslan Mavlyutov (5 papers)
  7. Sravya Popuri (18 papers)
Citations (4)