Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis (2409.20059v1)

Published 30 Sep 2024 in cs.CL

Abstract: Neural metrics for machine translation (MT) evaluation have become increasingly prominent due to their superior correlation with human judgments compared to traditional lexical metrics. Researchers have therefore utilized neural metrics through quality-informed decoding strategies, achieving better results than likelihood-based methods. With the rise of LLMs, preference-based alignment techniques have gained attention for their potential to enhance translation quality by optimizing model weights directly on preferences induced by quality estimators. This study focuses on Contrastive Preference Optimization (CPO) and conducts extensive experiments to evaluate the impact of preference-based alignment on translation quality. Our findings indicate that while CPO consistently outperforms Supervised Fine-Tuning (SFT) on high-quality data with regard to the alignment metric, it may lead to instability across downstream evaluation metrics, particularly between neural and lexical ones. Additionally, we demonstrate that relying solely on the base model for generating candidate translations achieves performance comparable to using multiple external systems, while ensuring better consistency across downstream metrics.
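To make the setup concrete, below is a minimal sketch of the two pieces the abstract describes: building preference pairs from quality-estimator scores, and the CPO objective from Xu et al. (2024), which combines a contrastive preference term with an NLL term on the preferred translation. The helper names (`build_preference_pair`, `score_fn`) and the `beta=0.1` default are illustrative assumptions, not the paper's code; `score_fn` stands in for a COMET-style neural quality estimator.

```python
import torch
import torch.nn.functional as F

def build_preference_pair(source, candidates, score_fn):
    """Turn QE-scored candidate translations into a (chosen, rejected) pair.

    `score_fn(source, hypothesis)` is a stand-in for a neural quality
    estimator (e.g., a COMET-style model); higher scores mean better
    translations. Returns the best and worst candidates.
    """
    ranked = sorted(candidates, key=lambda hyp: score_fn(source, hyp))
    return ranked[-1], ranked[0]  # (chosen, rejected)

def cpo_loss(logp_chosen, logp_rejected, beta=0.1):
    """Contrastive Preference Optimization objective (Xu et al., 2024).

    `logp_chosen` / `logp_rejected` are summed token log-probabilities of
    the preferred and dis-preferred translations under the policy being
    fine-tuned (reference-model-free, unlike DPO).
    """
    # Preference term: widen the likelihood margin between the preferred
    # and dis-preferred translations, scaled by beta.
    prefer = -F.logsigmoid(beta * (logp_chosen - logp_rejected))
    # NLL term: anchor the policy to the preferred translations.
    nll = -logp_chosen
    return (prefer + nll).mean()
```

Under this framing, the paper's candidate-generation comparison amounts to where `candidates` come from: sampled from the base model alone versus pooled from multiple external MT systems before QE scoring.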

Authors (6)
  1. Hippolyte Gisserot-Boukhlef (4 papers)
  2. Ricardo Rei (34 papers)
  3. Emmanuel Malherbe (5 papers)
  4. Céline Hudelot (50 papers)
  5. Pierre Colombo (48 papers)
  6. Nuno M. Guerreiro (27 papers)