Improved Long-Form Spoken Language Translation with Large Language Models (2212.09895v1)
Published 19 Dec 2022 in cs.CL
Abstract: A challenge in spoken language translation is that much spoken content is long-form, yet short units are necessary for obtaining high-quality translations. To address this mismatch, we fine-tune a general-purpose large language model (LLM) to split long ASR transcripts into segments that can be independently translated so as to maximize overall translation quality. We compare against several segmentation strategies and find that our approach improves BLEU score across three languages by an average of 2.7 BLEU compared to an automatic punctuation baseline. Further, we demonstrate the effectiveness of two constrained decoding strategies that raise the well-formedness of the model output from above 99% to 100%.
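The segment-then-check idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: a naive punctuation-based splitter stands in for the fine-tuned LLM segmenter, and `is_well_formed` checks the property that the paper's constrained decoding strategies guarantee, namely that the segmented output reproduces the input transcript exactly once boundaries are removed. The function names and splitting rule are assumptions for illustration only.

```python
import re


def segment_transcript(transcript: str) -> list[str]:
    """Split a transcript into independently translatable units.

    Stand-in for the fine-tuned segmentation model described in the
    paper: here we simply split after sentence-final punctuation,
    whereas the real model learns boundaries that maximize downstream
    translation quality.
    """
    parts = re.split(r"(?<=[.!?])\s+", transcript.strip())
    return [p for p in parts if p]


def is_well_formed(transcript: str, segments: list[str]) -> bool:
    """Check that segmentation only inserted boundaries.

    A well-formed segmentation reproduces the input text exactly once
    the segment boundaries are removed -- the property the paper's
    constrained decoding strategies enforce at 100%.
    """
    return " ".join(segments).split() == transcript.split()


if __name__ == "__main__":
    asr = "thank you all for coming. today we discuss results! any questions?"
    segs = segment_transcript(asr)
    print(segs)
    print(is_well_formed(asr, segs))
```

Each segment would then be passed independently to the translation model, and the per-segment translations concatenated to form the long-form output.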
- Arya D. McCarthy (23 papers)
- Hao Zhang (947 papers)
- Shankar Kumar (34 papers)
- Felix Stahlberg (31 papers)
- Axel H. Ng (1 paper)