Exploring Recombination for Efficient Decoding of Neural Machine Translation (1808.08482v2)

Published 25 Aug 2018 in cs.CL

Abstract: In Neural Machine Translation (NMT), the decoder can capture the features of the entire prediction history with neural connections and representations. This means that partial hypotheses with different prefixes will be regarded differently no matter how similar they are. However, this might be inefficient since some partial hypotheses can contain only local differences that will not influence future predictions. In this work, we introduce recombination in NMT decoding based on the concept of the "equivalence" of partial hypotheses. Heuristically, we use a simple $n$-gram suffix based equivalence function and adapt it into beam search decoding. Through experiments on large-scale Chinese-to-English and English-to-German translation tasks, we show that the proposed method can obtain similar translation quality with a smaller beam size, making NMT decoding more efficient.
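The $n$-gram suffix equivalence heuristic described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes hypotheses carry a token sequence and a cumulative log-probability, and it merges hypotheses whose last $n$ tokens match, keeping only the highest-scoring representative of each equivalence class (the `Hypothesis` class and `recombine` function are hypothetical names).

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    tokens: tuple   # token ids predicted so far
    score: float    # cumulative log-probability


def recombine(hyps, n=4):
    """Merge partial hypotheses sharing the same n-gram suffix.

    Hypotheses whose last n tokens match are treated as "equivalent"
    (they are assumed to lead to similar future predictions), so only
    the highest-scoring one in each suffix class is kept.
    """
    best = {}
    for h in hyps:
        key = h.tokens[-n:]  # the n-gram suffix defines the equivalence class
        if key not in best or h.score > best[key].score:
            best[key] = h
    return list(best.values())


# Two hypotheses below differ only in an early token (a "local" difference)
# and share the same 4-gram suffix, so recombination keeps just the better one.
hyps = [
    Hypothesis(tokens=(1, 2, 3, 4, 5), score=-1.2),
    Hypothesis(tokens=(9, 2, 3, 4, 5), score=-0.8),  # same suffix, higher score
    Hypothesis(tokens=(1, 2, 3, 4, 6), score=-1.5),  # different suffix, survives
]
survivors = recombine(hyps, n=4)
```

In a beam search loop, calling a step like this after each expansion frees beam slots that would otherwise be occupied by near-duplicate prefixes, which is how the paper obtains comparable quality with a smaller beam.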

Authors (5)
  1. Zhisong Zhang (31 papers)
  2. Rui Wang (996 papers)
  3. Masao Utiyama (39 papers)
  4. Eiichiro Sumita (31 papers)
  5. Hai Zhao (227 papers)
Citations (22)
