Prevent the Language Model from being Overconfident in Neural Machine Translation (2105.11098v2)

Published 24 May 2021 in cs.CL and cs.AI

Abstract: The Neural Machine Translation (NMT) model is essentially a joint language model conditioned on both the source sentence and the partial translation. Therefore, the NMT model naturally involves the mechanism of a language model (LM) that predicts the next token based only on the partial translation. Despite its success, NMT still suffers from the hallucination problem, generating fluent but inadequate translations. The main reason is that NMT pays excessive attention to the partial translation while neglecting the source sentence to some extent, namely overconfidence of the LM. Accordingly, we define the Margin between the NMT model and the LM, calculated by subtracting the predicted probability of the LM from that of the NMT model for each token. The Margin is negatively correlated with the degree of LM overconfidence. Based on this property, we propose a Margin-based Token-level Objective (MTO) and a Margin-based Sentence-level Objective (MSO) to maximize the Margin and prevent the LM from becoming overconfident. Experiments on the WMT14 English-to-German, WMT19 Chinese-to-English, and WMT14 English-to-French translation tasks demonstrate the effectiveness of our approach, with 1.36, 1.50, and 0.63 BLEU improvements, respectively, over the Transformer baseline. A human evaluation further verifies that our approaches improve translation adequacy as well as fluency.
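The abstract defines the Margin as the per-token difference between the NMT model's probability and the LM's probability for the reference token. The sketch below (not from the paper) illustrates that definition and a deliberately simplified margin-augmented loss; the tensor shapes, `pad_id`, and the way the margin is folded into the loss are assumptions for illustration, and the paper's actual MTO/MSO objectives use a more elaborate weighting of the margin.

```python
import torch

def token_margin(nmt_probs: torch.Tensor, lm_probs: torch.Tensor,
                 target_ids: torch.Tensor) -> torch.Tensor:
    """Per-token Margin = p_NMT(y_t) - p_LM(y_t), as defined in the abstract.

    nmt_probs, lm_probs: [batch, seq_len, vocab] predicted distributions from
    the NMT model and the target-side language model.
    target_ids: [batch, seq_len] reference token indices.
    """
    p_nmt = nmt_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    p_lm = lm_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    # A larger margin means the NMT model relies less on the LM alone,
    # i.e. less overconfidence of the LM.
    return p_nmt - p_lm


def margin_token_loss(nmt_probs, lm_probs, target_ids, pad_id=0):
    """Hypothetical simplification of a margin-based token-level objective:
    standard cross-entropy minus the mean margin, so training both fits the
    reference and pushes the margin up. The paper's MTO/MSO are more refined."""
    mask = (target_ids != pad_id).float()
    margin = token_margin(nmt_probs, lm_probs, target_ids)
    nll = -torch.log(
        nmt_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1) + 1e-9)
    return ((nll - margin) * mask).sum() / mask.sum()
```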

Authors (5)
  1. Mengqi Miao (2 papers)
  2. Fandong Meng (174 papers)
  3. Yijin Liu (29 papers)
  4. Xiao-Hua Zhou (30 papers)
  5. Jie Zhou (687 papers)
Citations (37)