The Detection of Distributional Discrepancy for Text Generation (1910.04859v2)

Published 28 Sep 2019 in cs.CV

Abstract: Text generated by neural language models is not as good as real text, which means their distributions differ. Generative Adversarial Nets (GANs) have been used to alleviate this problem. However, some researchers argue that GAN variants do not work at all: when both sample quality (e.g., BLEU) and sample diversity (e.g., Self-BLEU) are taken into account, the GAN variants perform even worse than a well-tuned language model. But BLEU and Self-BLEU cannot precisely measure this distributional discrepancy; in fact, how to measure the distributional discrepancy between real and generated text is still an open problem. In this paper, we theoretically propose two metric functions to measure the distributional difference between real and generated text, and put forward a method to estimate them. First, we evaluate a language model with these two functions and find the difference is huge. Then, we try several methods that use the detected discrepancy signal to improve the generator; however, the difference becomes even bigger than before. Experimenting on two existing language GANs, we find that the distributional discrepancy between real and generated text increases with more adversarial learning rounds, demonstrating that both of these language GANs fail.
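The paper's core claim is that BLEU and Self-BLEU cannot capture the gap between the real-text and generated-text distributions, so a dedicated discrepancy measure is needed. As an illustration of that general idea only, not of the paper's two metric functions, the sketch below estimates a discrepancy score by training a binary classifier to separate real from generated sentences: if the two distributions match, no classifier can beat chance, and for balanced classes a Bayes-optimal classifier's accuracy equals (1 + TV)/2, where TV is the total variation distance. The bag-of-words logistic regression here is an assumed stand-in for the stronger neural classifier such a method would use in practice.

```python
# Sketch: classifier-based estimate of the real-vs-generated gap.
# Illustrative only; not the paper's two metric functions. The
# bag-of-words logistic regression stands in for a neural classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def discrepancy_score(real_texts, generated_texts):
    """Return an accuracy-based gap estimate in [0, 1].

    For balanced classes, any classifier's accuracy satisfies
    2 * accuracy - 1 <= TV, so this score lower-bounds the total
    variation distance between the two text distributions.
    """
    texts = list(real_texts) + list(generated_texts)
    labels = [1] * len(real_texts) + [0] * len(generated_texts)

    x_train, x_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.5, stratify=labels, random_state=0)

    vectorizer = CountVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(x_train), y_train)

    accuracy = clf.score(vectorizer.transform(x_test), y_test)
    return max(0.0, 2.0 * accuracy - 1.0)  # 0 = indistinguishable

# Toy usage: scrambled "generated" text is easy to tell apart from
# real text, so the score is high; identical distributions give ~0.
real = ["the cat sat on the mat", "she walked to the store"] * 50
fake = ["cat the mat on sat the", "store walked she the to"] * 50
print(discrepancy_score(real, fake))
```

Read this way, the paper's negative result corresponds to such a score rising rather than falling toward zero as the language GANs undergo more adversarial learning rounds.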

Authors (7)
  1. Xingyuan Chen (17 papers)
  2. Ping Cai (3 papers)
  3. Peng Jin (91 papers)
  4. Haokun Du (1 paper)
  5. Hongjun Wang (41 papers)
  6. Xingyu Dai (3 papers)
  7. Jiajun Chen (125 papers)