Incremental Adaptation Strategies for Neural Network Language Models (1412.6650v4)
Published 20 Dec 2014 in cs.NE, cs.CL, and cs.LG
Abstract: It is today acknowledged that neural network language models outperform backoff language models in applications like speech recognition or statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data or insertion of adaptation layers. We present experimental results in a computer-aided translation (CAT) environment where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
- Aram Ter-Sarkisov (10 papers)
- Holger Schwenk (35 papers)
- Fethi Bougares (18 papers)
- Loic Barrault (4 papers)
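
The two adaptation strategies named in the abstract (continued training on resampled data, and inserting an adaptation layer while the original weights stay fixed) can be pictured with a small sketch. The snippet below is a hypothetical PyTorch illustration, not the authors' implementation; the feed-forward LM architecture, layer sizes, and all class and function names are assumptions made for the example.

```python
# Hypothetical sketch (not the paper's code): adapting a small feed-forward
# neural network LM by (a) continued training on resampled data and
# (b) inserting a trainable adaptation layer while freezing the original weights.
import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    """Toy n-gram feed-forward LM: embed the context words, one hidden layer, output logits."""
    def __init__(self, vocab_size=1000, context=3, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Sequential(nn.Linear(context * emb_dim, hidden_dim), nn.Tanh())
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):                        # context_ids: (batch, context)
        e = self.embed(context_ids).flatten(start_dim=1)   # (batch, context * emb_dim)
        return self.out(self.hidden(e))                    # unnormalised logits over the vocabulary

def insert_adaptation_layer(model, adapt_dim=128):
    """Freeze the existing parameters and insert a small trainable layer before the output."""
    for p in model.parameters():
        p.requires_grad = False
    hidden_dim = model.out.in_features
    adapter = nn.Sequential(nn.Linear(hidden_dim, adapt_dim), nn.Tanh(),
                            nn.Linear(adapt_dim, hidden_dim))
    model.hidden = nn.Sequential(model.hidden, adapter)    # only the adapter is trainable
    return model

def adapt(model, batches, epochs=3, lr=1e-3):
    """Shared training loop for both strategies.

    `batches` is assumed to yield (context_ids, target_id) pairs drawn from the
    small adaptation set, possibly mixed with resampled original training data.
    """
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for ctx, target in batches:
            opt.zero_grad()
            loss = loss_fn(model(ctx), target)
            loss.backward()
            opt.step()
    return model
```

Under these assumptions, continued training simply calls `adapt` on the existing model, while the adaptation-layer variant first calls `insert_adaptation_layer` so that only the inserted weights are updated. Keeping the original parameters fixed (or running only a few epochs on resampled data) makes the update fast and limits overfitting to the small adaptation set, which is the behaviour the abstract reports.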