Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling (2011.07164v1)

Published 13 Nov 2020 in cs.CL

Abstract: Pre-training models on vast quantities of unlabeled data has emerged as an effective approach to improving accuracy on many NLP tasks. On the other hand, traditional machine translation has a long history of leveraging unlabeled data through noisy channel modeling. The same idea has recently been shown to achieve strong improvements for neural machine translation. Unfortunately, naïve noisy channel modeling with modern sequence-to-sequence models is up to an order of magnitude slower than alternatives. We address this issue by introducing efficient approximations to make inference with the noisy channel approach as fast as strong ensembles while increasing accuracy. We also show that the noisy channel approach can outperform strong pre-training results by achieving a new state of the art on WMT Romanian-English translation.
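For context on the technique the abstract refers to: noisy channel decoding applies Bayes' rule, p(y|x) ∝ p(x|y) p(y), so hypotheses from a direct translation model are rescored with a channel model p(x|y) and a language model p(y). The sketch below illustrates that rescoring step on a set of beam candidates; the function names, interpolation weights, and length normalization are illustrative assumptions, not the paper's exact procedure.

```python
def noisy_channel_score(direct_lp, channel_lp, lm_lp, target_len,
                        lam_channel=0.5, lam_lm=0.5):
    # Combine log p(y|x) (direct), log p(x|y) (channel), and
    # log p(y) (language model) per the noisy channel decomposition.
    # The weights and length normalization here are assumptions;
    # in practice they are tuned on held-out data.
    combined = direct_lp + lam_channel * channel_lp + lam_lm * lm_lp
    return combined / max(target_len, 1)

def rerank(candidates):
    # candidates: list of dicts with keys 'tokens', 'direct_lp',
    # 'channel_lp', 'lm_lp' (all log-probabilities).
    return max(
        candidates,
        key=lambda c: noisy_channel_score(
            c["direct_lp"], c["channel_lp"], c["lm_lp"], len(c["tokens"])
        ),
    )

# Toy usage with made-up scores for two beam hypotheses.
cands = [
    {"tokens": ["the", "cat", "sits"], "direct_lp": -2.1,
     "channel_lp": -3.0, "lm_lp": -4.2},
    {"tokens": ["a", "cat", "sits"], "direct_lp": -2.4,
     "channel_lp": -2.5, "lm_lp": -3.9},
]
print(" ".join(rerank(cands)["tokens"]))
```

The cost the paper targets comes from the channel model: scoring p(x|y) requires a full sequence-to-sequence pass per candidate, which is what makes naïve noisy channel decoding roughly an order of magnitude slower than direct decoding and motivates the paper's efficient approximations.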

Authors (4)
  1. Shruti Bhosale (18 papers)
  2. Kyra Yee (9 papers)
  3. Sergey Edunov (26 papers)
  4. Michael Auli (73 papers)
Citations (7)
