Machine Translation Decoding beyond Beam Search (2104.05336v1)

Published 12 Apr 2021 in cs.CL and cs.LG

Abstract: Beam search is the go-to method for decoding auto-regressive machine translation models. While it yields consistent improvements in terms of BLEU, it is only concerned with finding outputs with high model likelihood, and is thus agnostic to whatever end metric or score practitioners care about. Our aim is to establish whether beam search can be replaced by a more powerful metric-driven search technique. To this end, we explore numerous decoding algorithms, including some which rely on a value function parameterised by a neural network, and report results on a variety of metrics. Notably, we introduce a Monte-Carlo Tree Search (MCTS) based method and showcase its competitiveness. We provide a blueprint for how to use MCTS fruitfully in language applications, which opens promising future directions. We find that which algorithm is best heavily depends on the characteristics of the goal metric; we believe that our extensive experiments and analysis will inform further research in this area.
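
The abstract describes replacing likelihood-driven beam search with a metric-driven search, notably Monte-Carlo Tree Search guided by a neural value function. The sketch below is not the paper's algorithm; it is a minimal illustrative example of MCTS-style token-by-token decoding under assumed placeholder components (`ToyLM`, `value_fn`), showing the selection/expansion/evaluation/backup loop and a visit-count readout.

```python
# Minimal, illustrative sketch of MCTS-style decoding for an autoregressive model.
# ToyLM and value_fn are placeholder assumptions, not the paper's components.
import math
import random


class ToyLM:
    """Stand-in autoregressive model over a tiny vocabulary (assumption)."""
    vocab = ["<eos>", "the", "cat", "sat", "mat"]

    def next_token_probs(self, prefix):
        # Uniform distribution as a placeholder for real model probabilities.
        p = 1.0 / len(self.vocab)
        return {tok: p for tok in self.vocab}


def value_fn(prefix):
    """Placeholder value network scoring a partial hypothesis (assumption)."""
    return random.random()


class Node:
    def __init__(self, prefix, prior=1.0):
        self.prefix = prefix      # token sequence so far
        self.prior = prior        # model probability of the last token
        self.children = {}        # token -> Node
        self.visits = 0
        self.total_value = 0.0

    def q(self):
        return self.total_value / self.visits if self.visits else 0.0


def select_child(node, c_puct=1.0):
    # PUCT-style rule: trade off value estimates against model priors.
    best, best_score = None, -float("inf")
    for child in node.children.values():
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        score = child.q() + u
        if score > best_score:
            best, best_score = child, score
    return best


def mcts_decode(model, max_len=6, simulations=50):
    root = Node(prefix=[])
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: descend until an unexpanded or terminal node.
        while node.children and node.prefix[-1:] != ["<eos>"]:
            node = select_child(node)
            path.append(node)
        # Expansion: add children weighted by model probabilities.
        if node.prefix[-1:] != ["<eos>"] and len(node.prefix) < max_len:
            for tok, p in model.next_token_probs(node.prefix).items():
                node.children[tok] = Node(node.prefix + [tok], prior=p)
        # Evaluation + backup: score the leaf with the value function.
        v = value_fn(node.prefix)
        for n in path:
            n.visits += 1
            n.total_value += v
    # Readout: follow the most-visited children to produce the output.
    out, node = [], root
    while node.children and len(out) < max_len:
        node = max(node.children.values(), key=lambda c: c.visits)
        out.append(node.prefix[-1])
        if out[-1] == "<eos>":
            break
    return out


print(mcts_decode(ToyLM()))
```

In a real setting the uniform `ToyLM` would be the translation model's next-token distribution and `value_fn` the learned value network predicting the end metric (e.g. BLEU) for a partial hypothesis, which is what makes the search metric-driven rather than purely likelihood-driven.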

Authors (8)
  1. Rémi Leblond (10 papers)
  2. Jean-Baptiste Alayrac (38 papers)
  3. Laurent Sifre (21 papers)
  4. Jean-Baptiste Lespiau (17 papers)
  5. Ioannis Antonoglou (17 papers)
  6. Karen Simonyan (54 papers)
  7. Oriol Vinyals (116 papers)
  8. Miruna Pislar (3 papers)
Citations (62)