Generative Adversarial Training Can Improve Neural Language Models (2211.09728v1)

Published 2 Nov 2022 in cs.CL and cs.LG

Abstract: While deep learning in the form of recurrent neural networks (RNNs) has caused a significant improvement in neural language modeling, the fact that they are extremely prone to overfitting is still a mainly unresolved issue. In this paper we propose a regularization method based on generative adversarial networks (GANs) and adversarial training (AT) that can prevent overfitting in neural language models. Unlike common adversarial training methods such as the fast gradient sign method (FGSM), which require a second back-propagation through time and therefore effectively at least double the training time, the overhead of our method does not exceed 20% of the baseline training time.
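
For context, here is a minimal sketch of the FGSM-style adversarial training that the abstract contrasts against. It is not the paper's GAN-based method; all module names, shapes, and the epsilon value are illustrative assumptions. The point it demonstrates is the cost structure: the perturbation is built from the gradient of the loss with respect to the input embeddings, so each training step needs two full forward/backward passes through the unrolled sequence.

```python
# Illustrative FGSM adversarial training for an RNN language model (PyTorch).
# Hypothetical sketch, not the paper's implementation.
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, emb):
        # Takes embeddings (not token ids) so a perturbation can be injected.
        out, _ = self.rnn(emb)
        return self.head(out)

def fgsm_lm_step(model, tokens, targets, optimizer, epsilon=0.01):
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()

    # Pass 1: clean loss -- first back-propagation through time.
    emb = model.embed(tokens)
    emb.retain_grad()  # keep the gradient w.r.t. the (non-leaf) embeddings
    clean_loss = criterion(model(emb).flatten(0, 1), targets.flatten())
    clean_loss.backward()

    # FGSM perturbation in embedding space, detached from the graph.
    delta = epsilon * emb.grad.detach().sign()

    # Pass 2: adversarial loss -- second back-propagation through time.
    # This second pass is why FGSM roughly doubles training cost.
    adv_emb = model.embed(tokens) + delta
    adv_loss = criterion(model(adv_emb).flatten(0, 1), targets.flatten())
    adv_loss.backward()  # gradients accumulate with the clean-loss gradients
    optimizer.step()
    return clean_loss.item(), adv_loss.item()

# Example invocation with random data (hypothetical shapes):
model = RNNLM(vocab_size=10_000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 10_000, (8, 35))   # (batch, seq_len)
targets = torch.randint(0, 10_000, (8, 35))
fgsm_lm_step(model, tokens, targets, optimizer)
```

The second backward pass through the unrolled sequence is the overhead the abstract refers to; the proposed GAN-based regularizer presumably avoids it, which would be consistent with the reported overhead of at most 20% over baseline training.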

Authors (2)
  1. Sajad Movahedi (8 papers)
  2. Azadeh Shakery (26 papers)
Citations (2)
