Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation (2206.02369v2)

Published 6 Jun 2022 in cs.CL

Abstract: While large-scale neural language models, such as GPT2 and BART, have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (e.g., greedy search). This phenomenon is counter-intuitive since there are few consecutive sentence-level repetitions in human corpora (e.g., 0.02% in Wikitext-103). To investigate the underlying reasons for generating consecutive sentence-level repetitions, we study the relationship between the probabilities of the repetitive tokens and their previous repetitions in the context. Through our quantitative experiments, we find that 1) LLMs have a preference to repeat the previous sentence; 2) The sentence-level repetitions have a self-reinforcement effect: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence; 3) The sentences with higher initial probabilities usually have a stronger self-reinforcement effect. Motivated by our findings, we propose a simple and effective training method DITTO (PseuDo-RepetITion PenalizaTiOn), where the model learns to penalize probabilities of sentence-level repetitions from pseudo repetitive data. Although our method is motivated by mitigating repetitions, experiments show that DITTO not only mitigates the repetition issue without sacrificing perplexity, but also achieves better generation quality. Extensive experiments on open-ended text generation (Wikitext-103) and text summarization (CNN/DailyMail) demonstrate the generality and effectiveness of our method.

Authors (6)
  1. Jin Xu (131 papers)
  2. Xiaojiang Liu (27 papers)
  3. Jianhao Yan (27 papers)
  4. Deng Cai (181 papers)
  5. Huayang Li (26 papers)
  6. Jian Li (667 papers)
Citations (56)

Summary

Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation

The paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation" addresses a pervasive issue within large-scale neural LLMs characterized by undesirable sentence-level loops, particularly when using maximization-based decoding algorithms such as greedy search. The authors conduct an intricate investigation to understand the causal factors behind consecutive sentence-level repetitions, revealing key insights into the inherent biases in LLMs and proposing a novel training method, DITTO (Pseu\underline{D}o-Repet\underline{IT}ion Penaliza\underline{T}i\underline{O}n), to mitigate this problem.

Analysis of Sentence-Level Repetitions

The paper first establishes that LLMs tend to repeat the previous sentence, and its quantitative experiments reveal a self-reinforcement effect: the probability of repeating a sentence increases as repetitions of that sentence accumulate in the context. A notable finding is that sentences with higher initial probabilities exhibit a more pronounced self-reinforcement effect, making them more prone to repetition. The implication is critical: once a sentence is repeated, its likelihood of being repeated again grows, potentially trapping the model in a redundancy loop.
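
The kind of probing behind these findings can be illustrated with a short script: place a sentence in the context n times and measure the probability the model assigns to generating it once more. The snippet below is a minimal sketch of that idea, assuming GPT-2 via Hugging Face transformers and an arbitrary probe sentence rather than the paper's exact experimental protocol; under the self-reinforcement effect, the average log-probability typically rises as n grows.

```python
# Sketch: measure how the probability of re-generating a sentence changes as
# that sentence is repeated more times in the context. Model choice (GPT-2)
# and the probe sentence are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

sentence = "The quick brown fox jumps over the lazy dog. "
sent_ids = tokenizer(sentence, return_tensors="pt").input_ids[0]

@torch.no_grad()
def sentence_logprob(context_ids: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Average log-probability of target_ids given context_ids."""
    input_ids = torch.cat([context_ids, target_ids]).unsqueeze(0)
    logits = model(input_ids).logits[0]
    # Logits at position i predict token i+1; slice out the target span.
    target_logits = logits[len(context_ids) - 1 : -1]
    log_probs = torch.log_softmax(target_logits, dim=-1)
    token_lp = log_probs[torch.arange(len(target_ids)), target_ids]
    return token_lp.mean().item()

# Probability of generating the sentence again after it already appears
# n times in the context.
for n in range(1, 6):
    context_ids = sent_ids.repeat(n)
    lp = sentence_logprob(context_ids, sent_ids)
    print(f"repetitions in context = {n}: avg log P = {lp:.3f}")
```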

Proposed Method: DITTO

Building on these observations, the authors propose DITTO, a training strategy designed to penalize sentence-level repetitions. Pseudo training data are constructed by repeating sentences sampled from the corpus, and the model is trained to assign progressively lower probability to each additional repetition, with the penalty tied to how many times the sentence has already appeared. Notably, DITTO reduces repetition without degrading perplexity or generation quality.
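
To make the training recipe concrete, the sketch below shows one plausible reading of a DITTO-style objective: the probability of each token in the n-th copy of the sentence is pushed toward λ times its probability in the (n−1)-th copy, via a loss of the form −log(1 − |p_n − λ·p_{n−1}|). The loss form, model, sentence, λ, and repetition count are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a DITTO-style penalization step on one pseudo repetitive sample.
# In practice this term would be mixed with the ordinary LM loss on real data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lam, n_repeats = 0.5, 8  # decay factor and number of repetitions (assumed values)

# 1) Build a pseudo repetitive sample: one sentence repeated n_repeats times,
#    with a BOS token so every repeated token has a predicted probability.
sentence = "The committee approved the new budget proposal on Friday. "
sent_ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
L = len(sent_ids)
input_ids = torch.cat(
    [torch.tensor([tokenizer.bos_token_id]), sent_ids.repeat(n_repeats)]
).unsqueeze(0)

# 2) Probability the model assigns to each ground-truth token.
logits = model(input_ids).logits[0, :-1]        # predictions for tokens 1..T-1
probs = torch.softmax(logits, dim=-1)
targets = input_ids[0, 1:]
p = probs[torch.arange(len(targets)), targets]  # P(x_t | x_<t), length n_repeats * L
p_rep = p.reshape(n_repeats, L)                 # [repetition index, token index]

# 3) Penalize repetitions: from the second copy on, push each token's
#    probability toward lam * (its probability one repetition earlier).
prev = p_rep[:-1].detach()                      # fixed targets from previous copy
cur = p_rep[1:]
loss = -torch.log(1.0 - (cur - lam * prev).abs() + 1e-8).mean()
loss.backward()
print(f"DITTO-style penalization loss: {loss.item():.4f}")
```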

Experimental Validation

Extensive experiments on Wikitext-103 and CNN/DailyMail, covering open-ended text generation and text summarization, demonstrate DITTO's effectiveness. Models trained with DITTO show markedly lower repetition metrics, approaching the levels observed in human-written text, and attain higher MAUVE scores, indicating generations closer to human text. DITTO-trained models also yield improvements in perplexity and next-token accuracy, reinforcing the method's robustness and versatility.
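
As a rough illustration of the kind of repetition statistic involved, the helper below computes the fraction of sentences that exactly duplicate the sentence immediately before them. This is a simplified stand-in for the paper's repetition metrics, not their exact definition.

```python
# Illustrative consecutive sentence-level repetition rate (not the paper's
# exact metric): share of sentences identical to the sentence right before them.
import re

def consecutive_sentence_repetition_rate(text: str) -> float:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    if len(sentences) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(sentences, sentences[1:]) if a == b)
    return repeats / (len(sentences) - 1)

looping = "The sky is blue. The sky is blue. The sky is blue. Birds fly."
varied = "The sky is blue. Birds fly south in winter. Rivers run to the sea."
print(consecutive_sentence_repetition_rate(looping))  # 0.67: two of three transitions repeat
print(consecutive_sentence_repetition_rate(varied))   # 0.0
```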

Implications and Future Directions

The findings have significant implications for the training and deployment of neural LLMs. By addressing the root of the repetition problem, DITTO offers a path toward more generalizable and more useful models in practical applications. The paper also sets a precedent for further study of repetition phenomena in generated text. Future research could examine the interplay between sentence probabilities and model architecture, or investigate training and representation schemes that inherently counter repetition tendencies.

In summary, this paper offers a comprehensive analysis of a key challenge in neural text generation and proposes a viable solution, thus contributing valuable insight and methodology to the continued evolution of AI language capabilities.
