Efficient Inference For Neural Machine Translation (2010.02416v2)

Published 6 Oct 2020 in cs.CL and cs.LG

Abstract: Large Transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field. In this work, we look for the optimal combination of known techniques to optimize inference speed without sacrificing translation quality. We conduct an empirical study that stacks various approaches and demonstrates that the combination of replacing decoder self-attention with simplified recurrent units, adopting a deep-encoder, shallow-decoder architecture, and pruning multi-head attention can achieve speedups of up to 109% on CPU and 84% on GPU and reduce the number of parameters by 25% while maintaining the same translation quality in terms of BLEU.
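
The first of the stacked techniques is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration of a simplified recurrent unit (an SSRU-style cell with a single forget gate and a ReLU output) standing in for decoder self-attention, so each decoding step carries O(1) recurrent state instead of attending over all previous target positions. The class name, shapes, and PyTorch framing are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class SSRU(nn.Module):
    """Simplified recurrent unit: one forget gate, ReLU output.

    A drop-in stand-in for decoder self-attention. At each target
    position the cell state is updated in O(1), so incremental
    decoding does not attend over the whole prefix.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_model, bias=False)  # cell input projection
        self.w_f = nn.Linear(d_model, d_model)            # forget gate projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tgt_len, d_model), processed left to right
        batch, tgt_len, d_model = x.shape
        c = x.new_zeros(batch, d_model)                   # recurrent cell state
        outputs = []
        for t in range(tgt_len):
            x_t = x[:, t, :]
            f = torch.sigmoid(self.w_f(x_t))              # forget gate in (0, 1)
            c = f * c + (1.0 - f) * self.w(x_t)           # interpolate old state and new input
            outputs.append(torch.relu(c))                 # ReLU output activation
        return torch.stack(outputs, dim=1)                # (batch, tgt_len, d_model)


if __name__ == "__main__":
    layer = SSRU(d_model=512)
    y = layer(torch.randn(2, 7, 512))                     # batch=2, tgt_len=7
    print(y.shape)                                        # torch.Size([2, 7, 512])
```

During autoregressive decoding only the single cell state per layer needs to be cached, rather than a growing key/value cache; this is the kind of per-step saving the abstract's CPU speedup points to, while the deep-encoder, shallow-decoder split shifts most remaining computation into the parallelizable encoder.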

Authors (4)
  1. Yi-Te Hsu (7 papers)
  2. Sarthak Garg (9 papers)
  3. Yi-Hsiu Liao (4 papers)
  4. Ilya Chatsviorkin (1 paper)
Citations (11)