ACtuAL: Actor-Critic Under Adversarial Learning (1711.04755v1)

Published 13 Nov 2017 in stat.ML and cs.LG

Abstract: Generative Adversarial Networks (GANs) are a powerful framework for deep generative modeling. Posed as a two-player minimax problem, GANs are typically trained end-to-end on real-valued data and can be used to train a generator of high-dimensional and realistic images. However, a major limitation of GANs is that training relies on passing gradients from the discriminator through the generator via back-propagation. This makes it fundamentally difficult to train GANs with discrete data, as generation in this case typically involves a non-differentiable function. These difficulties extend to the reinforcement learning setting when the action space is composed of discrete decisions. We address these issues by reframing the GAN framework so that the generator is no longer trained using gradients through the discriminator, but is instead trained using a learned critic in the actor-critic framework with a Temporal Difference (TD) objective. This is a natural fit for sequence modeling and we use it to achieve improvements on language modeling tasks over the standard Teacher-Forcing methods.
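
The reframing described in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' code: a generator samples discrete tokens, a discriminator scores the finished sequence to give a terminal reward, and a learned critic trained with a TD(0) objective supplies the advantage for a policy-gradient update, so no gradients ever pass from the discriminator through the sampling step. The network sizes, the start-token convention, and the use of the sigmoid discriminator score as reward are assumptions for illustration; the discriminator's own training step is omitted.

```python
# Minimal sketch (assumed, not the authors' implementation) of a discrete-token
# generator trained with an actor-critic TD objective instead of gradients
# back-propagated through the discriminator. Sizes and reward are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, SEQ_LEN = 32, 64, 10

class Actor(nn.Module):
    """Generator: emits a log-probability distribution over the next token."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)
    def forward(self, tok, h):
        h = self.rnn(self.emb(tok), h)
        return F.log_softmax(self.out(h), dim=-1), h

class Critic(nn.Module):
    """Value estimate V(state) used to form the TD target."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.v = nn.Linear(HIDDEN, 1)
    def forward(self, tok, h):
        h = self.rnn(self.emb(tok), h)
        return self.v(h).squeeze(-1), h

actor, critic = Actor(), Critic()
# Toy discriminator scoring whole sequences; its training step is omitted here.
discriminator = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Flatten(),
                              nn.Linear(HIDDEN * SEQ_LEN, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)
GAMMA = 0.99

def generator_step(batch=8):
    tok = torch.zeros(batch, dtype=torch.long)            # assumed start-token id 0
    h_a, h_c = torch.zeros(batch, HIDDEN), torch.zeros(batch, HIDDEN)
    logps, values, toks = [], [], []
    for _ in range(SEQ_LEN):
        logp, h_a = actor(tok, h_a)
        value, h_c = critic(tok, h_c)
        tok = torch.multinomial(logp.exp(), 1).squeeze(-1)  # discrete, non-differentiable
        logps.append(logp.gather(1, tok.unsqueeze(1)).squeeze(1))
        values.append(value)
        toks.append(tok)
    seq = torch.stack(toks, dim=1)
    # Terminal reward = discriminator score on the finished sequence, detached so
    # no discriminator gradient reaches the generator.
    reward = torch.sigmoid(discriminator(seq)).squeeze(-1).detach()
    actor_loss = critic_loss = 0.0
    for t in range(SEQ_LEN):
        r_t = reward if t == SEQ_LEN - 1 else torch.zeros_like(reward)
        v_next = values[t + 1].detach() if t < SEQ_LEN - 1 else torch.zeros_like(reward)
        td_error = r_t + GAMMA * v_next - values[t]       # TD(0) error
        critic_loss = critic_loss + td_error.pow(2).mean()
        actor_loss = actor_loss - (td_error.detach() * logps[t]).mean()  # policy gradient
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
    return actor_loss.item(), critic_loss.item()
```

The point the sketch illustrates is that the reward is detached, so the generator learns only from the critic's TD error via the policy gradient, rather than from gradients back-propagated through the discriminator and the non-differentiable sampling step.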

Authors (7)
  1. Anirudh Goyal (93 papers)
  2. Nan Rosemary Ke (40 papers)
  3. Alex Lamb (45 papers)
  4. R Devon Hjelm (32 papers)
  5. Chris Pal (37 papers)
  6. Joelle Pineau (123 papers)
  7. Yoshua Bengio (601 papers)
Citations (9)