Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models (2009.13267v4)

Published 20 Sep 2020 in cs.CL, cs.LG, and stat.ML

Abstract: The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT) and has resulted in alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability. Despite this mismatch between the training objective and task measure, we notice that the samples drawn from an MLE-trained NMT model support the desired distribution: there are samples with much higher BLEU scores than the beam decoding output. To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU scores), which results in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR). We use both marginal energy models (over the target sentence) and joint energy models (over both source and target sentences). Our EBR with the joint energy model consistently improves the performance of Transformer-based NMT: +4 BLEU points on IWSLT'14 German-English, +3.0 BLEU points on Sinhala-English, and +1.2 BLEU points on WMT'16 English-German.
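
At inference time, the re-ranking step described in the abstract reduces to an argmin over the energies of sampled candidates. Below is a minimal Python sketch of that step, assuming a candidate sampler and a trained joint energy model are available; the names (energy_rerank, sample_fn, energy_fn, num_samples) and the toy stand-ins are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of energy-based re-ranking (EBR) at inference time.
# Assumes access to an NMT sampler and a trained joint energy model;
# all names here are hypothetical, not the paper's actual code.
from typing import Callable, List


def energy_rerank(
    source: str,
    sample_fn: Callable[[str, int], List[str]],  # draws candidate translations from the NMT model
    energy_fn: Callable[[str, str], float],      # joint energy E(source, target); lower is better
    num_samples: int = 100,
) -> str:
    """Draw candidates from the NMT model and return the lowest-energy one."""
    candidates = sample_fn(source, num_samples)
    # The energy model is trained so that lower energy correlates with
    # higher BLEU, so re-ranking is an argmin over sampled candidates.
    return min(candidates, key=lambda hyp: energy_fn(source, hyp))


# Toy usage with stand-in functions (purely illustrative):
if __name__ == "__main__":
    def toy_sampler(src: str, n: int) -> List[str]:
        return [f"translation {i} of {src}" for i in range(n)]

    def toy_energy(src: str, hyp: str) -> float:
        return float(len(hyp))  # placeholder; a real model scores (src, hyp) pairs

    print(energy_rerank("ein Beispiel", toy_sampler, toy_energy, num_samples=5))
```

A marginal-energy variant would simply score the hypothesis alone, i.e. an energy_fn that ignores the source argument; the re-ranking loop is otherwise unchanged.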

Authors (6)
  1. Sumanta Bhattacharyya (3 papers)
  2. Amirmohammad Rooshenas (7 papers)
  3. Subhajit Naskar (5 papers)
  4. Simeng Sun (23 papers)
  5. Mohit Iyyer (87 papers)
  6. Andrew McCallum (132 papers)
Citations (55)