
Improved Long-Form Spoken Language Translation with Large Language Models (2212.09895v1)

Published 19 Dec 2022 in cs.CL

Abstract: A challenge in spoken language translation is that plenty of spoken content is long-form, but short units are necessary for obtaining high-quality translations. To address this mismatch, we fine-tune a general-purpose LLM to split long ASR transcripts into segments that can be independently translated so as to maximize the overall translation quality. We compare against several segmentation strategies and find that our approach improves BLEU score on three languages by an average of 2.7 BLEU overall compared to an automatic punctuation baseline. Further, we demonstrate the effectiveness of two constrained decoding strategies to improve well-formedness of the model output from above 99% to 100%.
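The segment-then-translate pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `is_well_formed`, `translate_long_form`, and the toy segmenter/translator are all hypothetical names, and the well-formedness check (segments must concatenate back to the original transcript) is an assumption about what the paper's constraint enforces.

```python
def is_well_formed(transcript: str, segments: list[str]) -> bool:
    # Assumption: a segmentation is "well-formed" if concatenating the
    # segments reproduces the input transcript exactly, so no content
    # is dropped, duplicated, or altered by the segmenter.
    return "".join(segments) == transcript

def translate_long_form(transcript: str, segment_fn, translate_fn) -> str:
    # Hypothetical pipeline: segment the long transcript, verify the
    # segmentation, translate each segment independently, then rejoin.
    segments = segment_fn(transcript)
    if not is_well_formed(transcript, segments):
        # Fall back to translating the whole input if the segmenter
        # produced a malformed split.
        segments = [transcript]
    return " ".join(translate_fn(s) for s in segments)

# Toy stand-ins for the fine-tuned segmenter and the MT system.
demo_segmenter = lambda t: [s + "." for s in t.split(".") if s]
demo_translator = str.upper  # placeholder "translation"

print(translate_long_form("hello world.this is a test.",
                          demo_segmenter, demo_translator))
# → HELLO WORLD. THIS IS A TEST.
```

The paper's constrained decoding strategies would replace the fallback branch here: instead of rejecting malformed outputs after the fact, decoding is constrained so the segmenter can only emit well-formed splits.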

Authors (5)
  1. Arya D. McCarthy (23 papers)
  2. Hao Zhang (947 papers)
  3. Shankar Kumar (34 papers)
  4. Felix Stahlberg (31 papers)
  5. Axel H. Ng (1 paper)
Citations (2)