
On Accurate Evaluation of GANs for Language Generation (1806.04936v3)

Published 13 Jun 2018 in cs.CL

Abstract: Generative Adversarial Networks (GANs) are a promising approach to language generation. The latest works introducing novel GAN models for language generation use n-gram based metrics for evaluation and only report single scores of the best run. In this paper, we argue that this often misrepresents the true picture and does not tell the full story, as GAN models can be extremely sensitive to the random initialization and small deviations from the best hyperparameter choice. In particular, we demonstrate that the previously used BLEU score is not sensitive to semantic deterioration of generated texts and propose alternative metrics that better capture the quality and diversity of the generated samples. We also conduct a set of experiments comparing a number of GAN models for text with a conventional language model (LM) and find that neither of the considered models performs convincingly better than the LM.
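
To illustrate the abstract's claim that n-gram overlap metrics such as BLEU can miss semantic deterioration, here is a minimal sketch (not from the paper) using NLTK's corpus_bleu. The toy sentences and the choice of smoothing function are illustrative assumptions; the paper's actual evaluation setup may differ.

```python
# Minimal sketch: BLEU rewards n-gram overlap, so a sample with reversed
# meaning can still score well against its references.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Toy reference corpus (tokenized sentences).
references = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["a", "dog", "ran", "in", "the", "park"],
]

# Fluent but semantically deteriorated samples: same tokens, scrambled roles.
semantically_off = [
    ["the", "mat", "sat", "on", "the", "cat"],
    ["a", "park", "ran", "in", "the", "dog"],
]

# Each hypothesis is scored against the full reference set; smoothing avoids
# zero precision for higher-order n-grams on these short toy sentences.
smooth = SmoothingFunction().method1
score = corpus_bleu(
    [references] * len(semantically_off),
    semantically_off,
    smoothing_function=smooth,
)
print(f"BLEU despite reversed meaning: {score:.3f}")
```

The scrambled samples share most unigrams and bigrams with the references, so BLEU stays relatively high even though the meaning is wrong; this is the kind of insensitivity the paper's proposed metrics aim to address.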

Authors (3)
  1. Stanislau Semeniuta (3 papers)
  2. Aliaksei Severyn (29 papers)
  3. Sylvain Gelly (43 papers)
Citations (79)
