Generative Cooperative Networks for Natural Language Generation (2201.12320v1)

Published 28 Jan 2022 in cs.LG and cs.CL

Abstract: Generative Adversarial Networks (GANs) have achieved tremendous success in many continuous generation tasks, especially in the field of image generation. However, for discrete outputs such as language, optimizing GANs remains an open problem with many instabilities, as no gradient can be properly back-propagated from the discriminator output to the generator parameters. An alternative is to learn the generator network via reinforcement learning, using the discriminator signal as a reward, but such a technique suffers from moving rewards and vanishing gradients, and often falls short of direct maximum-likelihood approaches. In this paper, we introduce Generative Cooperative Networks, in which the discriminator architecture is used cooperatively with the generation policy to output samples of realistic text for the task at hand. We give theoretical guarantees of convergence for our approach, and study various efficient decoding schemes to empirically achieve state-of-the-art results in two main NLG tasks.
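
To make the cooperative idea concrete, the sketch below illustrates one possible form of discriminator-guided decoding: candidate next tokens sampled from a generator are reweighted by a discriminator's score of the extended prefix, so the two networks cooperate at inference time instead of only opposing each other during training. This is a minimal, hypothetical illustration, not the paper's actual algorithm; the toy functions `generator_probs`, `discriminator_score`, and `cooperative_decode` and the tiny vocabulary are invented stand-ins for real models.

```python
import numpy as np

# Toy stand-ins (hypothetical, not from the paper): a "generator" producing
# next-token probabilities and a "discriminator" scoring how realistic a
# prefix looks, both defined over a tiny vocabulary.
VOCAB = ["<eos>", "the", "cat", "sat", "mat"]
rng = np.random.default_rng(0)

def generator_probs(prefix):
    """Return a next-token distribution given the current prefix (toy)."""
    logits = rng.normal(size=len(VOCAB))
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def discriminator_score(prefix):
    """Return a score in (0, 1): how 'real' the discriminator finds the prefix (toy)."""
    return 1.0 / (1.0 + np.exp(-0.1 * len(prefix)))

def cooperative_decode(max_len=10, num_candidates=4):
    """Cooperative decoding sketch: sample candidate next tokens from the
    generator, weight each by generator probability times discriminator score
    of the extended prefix, and keep the highest-weighted candidate."""
    prefix = []
    for _ in range(max_len):
        probs = generator_probs(prefix)
        candidates = rng.choice(len(VOCAB), size=num_candidates, p=probs)
        weights = []
        for tok in candidates:
            extended = prefix + [VOCAB[tok]]
            weights.append(probs[tok] * discriminator_score(extended))
        chosen = candidates[int(np.argmax(weights))]
        prefix.append(VOCAB[chosen])
        if VOCAB[chosen] == "<eos>":
            break
    return " ".join(prefix)

if __name__ == "__main__":
    print(cooperative_decode())
```

With real models, the toy functions would be replaced by a trained language model and a trained discriminator; the paper itself studies several decoding schemes in this cooperative spirit, and the product-of-scores reranking here is only one simple variant.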

Authors (7)
  1. Sylvain Lamprier (40 papers)
  2. Thomas Scialom (35 papers)
  3. Antoine Chaffin (13 papers)
  4. Vincent Claveau (7 papers)
  5. Ewa Kijak (16 papers)
  6. Jacopo Staiano (38 papers)
  7. Benjamin Piwowarski (38 papers)
Citations (11)