Improving Neural Language Modeling via Adversarial Training (1906.03805v2)

Published 10 Jun 2019 in cs.LG, cs.CL, and stat.ML

Abstract: Recently, substantial progress has been made in language modeling by using deep neural networks. However, in practice, large-scale neural language models have been shown to be prone to overfitting. In this paper, we present a simple yet highly effective adversarial training mechanism for regularizing neural language models. The idea is to introduce adversarial noise to the output embedding layer while training the models. We show that the optimal adversarial noise yields a simple closed-form solution, thus allowing us to develop a simple and time-efficient algorithm. Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models. Empirically, we show that our method improves on the single-model state-of-the-art results for language modeling on Penn Treebank (PTB) and Wikitext-2, achieving test perplexity scores of 46.01 and 38.07, respectively. When applied to machine translation, our method improves over various transformer-based translation baselines in BLEU scores on the WMT14 English-German and IWSLT14 German-English tasks.
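
The key practical point of the abstract is that the adversarial perturbation applied to the output (softmax) embedding of the target word has a closed form under a norm constraint, so training needs no inner optimization loop. The sketch below illustrates that idea in PyTorch; the function name, tensor shapes, and the assumption that the worst-case noise is the scaled negative context vector are illustrative placeholders rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def adversarial_lm_loss(hidden, out_embedding, targets, epsilon=1.0):
    """Cross-entropy loss with adversarial noise on the target word's output embedding.

    hidden:        (batch, d)   context vectors produced by the language model
    out_embedding: (vocab, d)   output (softmax) embedding matrix
    targets:       (batch,)     indices of the next word
    epsilon:       norm bound on the adversarial perturbation

    Sketch only: the closed-form worst-case noise for the target row under an
    L2 constraint is assumed to be -epsilon * h / ||h||, following the paper's
    closed-form solution; names and shapes are illustrative.
    """
    # Closed-form adversarial noise: no inner maximization loop is required.
    noise = -epsilon * hidden / (hidden.norm(dim=-1, keepdim=True) + 1e-12)
    noise = noise.detach()  # treat the perturbation as a constant during backprop

    # Ordinary logits over the vocabulary.
    logits = hidden @ out_embedding.t()                     # (batch, vocab)

    # Perturb only the embedding rows of the target words and
    # substitute the corresponding (lowered) target logits.
    perturbed_rows = out_embedding[targets] + noise         # (batch, d)
    adv_target_logit = (hidden * perturbed_rows).sum(dim=-1)  # (batch,)
    logits = logits.scatter(1, targets.unsqueeze(1), adv_target_logit.unsqueeze(1))

    return F.cross_entropy(logits, targets)
```

Because the perturbation only touches the target word's logit, the extra cost over a standard softmax cross-entropy step is negligible, which matches the abstract's claim of a simple and time-efficient algorithm.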

Authors (3)
  1. Dilin Wang (37 papers)
  2. Chengyue Gong (30 papers)
  3. Qiang Liu (405 papers)
Citations (113)
