Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding (2311.08380v2)
Published 14 Nov 2023 in cs.CL
Abstract: Minimum Bayes Risk (MBR) decoding can significantly improve the translation performance of Multilingual Large Language Models (MLLMs). However, MBR decoding is computationally expensive. We show how Direct Preference Optimization (DPO), a recently developed Reinforcement Learning technique, can fine-tune MLLMs to obtain the gains of MBR without any additional computation at inference. Our method uses only a small monolingual fine-tuning set and yields significantly improved performance on multiple NMT test sets compared to MLLMs without DPO.
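To make the idea concrete, below is a minimal sketch of MBR decoding and of one plausible way to turn its ranking into DPO preference pairs. The function names (`mbr_decode`, `make_dpo_pair`) and the `utility` callable (e.g., sentence-level BLEU or COMET) are illustrative assumptions, not the paper's actual implementation; the pairing scheme (best vs. worst candidate) is likewise an assumption about how MBR-ranked candidates could feed DPO.

```python
# Hypothetical sketch: MBR decoding over sampled candidates, then
# building a DPO preference pair from the MBR ranking. The `utility`
# metric is assumed, e.g. sentence-level BLEU or COMET.

from typing import Callable, Dict, List


def expected_utility(hyp: str, candidates: List[str],
                     utility: Callable[[str, str], float]) -> float:
    """Average utility of `hyp` against all other candidates, which act
    as a Monte Carlo approximation of the model's output distribution."""
    others = [c for c in candidates if c is not hyp]
    if not others:
        return 0.0
    return sum(utility(hyp, ref) for ref in others) / len(others)


def mbr_decode(candidates: List[str],
               utility: Callable[[str, str], float]) -> str:
    """Return the candidate with the highest expected utility."""
    return max(candidates, key=lambda h: expected_utility(h, candidates, utility))


def make_dpo_pair(source: str, candidates: List[str],
                  utility: Callable[[str, str], float]) -> Dict[str, str]:
    """Assumed pairing scheme: the MBR winner is 'chosen', the
    lowest-ranked candidate is 'rejected'."""
    ranked = sorted(candidates,
                    key=lambda h: expected_utility(h, candidates, utility),
                    reverse=True)
    return {"prompt": source, "chosen": ranked[0], "rejected": ranked[-1]}
```

Under this reading, DPO fine-tunes the model to prefer the MBR winners, so at test time a single standard decoding pass recovers much of the MBR quality gain without the quadratic cost of scoring candidates against each other.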