
Rethinking Repetition Problems of LLMs in Code Generation (2505.10402v1)

Published 15 May 2025 in cs.CL, cs.AI, cs.LG, and cs.SE

Abstract: With the advent of neural LLMs, the performance of code generation has been significantly boosted. However, the problem of repetitions during the generation process continues to linger. Previous work has primarily focused on content repetition, which is merely a fraction of the broader repetition problem in code generation. A more prevalent and challenging problem is structural repetition. In structural repetition, the repeated code appears in various patterns but possesses a fixed structure, which can be inherently reflected in grammar. In this paper, we formally define structural repetition and propose an efficient decoding approach called RPG, which stands for Repetition Penalization based on Grammar, to alleviate the repetition problems in code generation for LLMs. Specifically, RPG first leverages grammar rules to identify repetition problems during code generation, and then strategically decays the likelihood of critical tokens that contribute to repetitions, thereby mitigating them in code generation. To facilitate this study, we construct a new dataset CodeRepetEval to comprehensively evaluate approaches for mitigating the repetition problems in code generation. Extensive experimental results demonstrate that RPG substantially outperforms the best-performing baselines on the CodeRepetEval dataset as well as the HumanEval and MBPP benchmarks, effectively reducing repetitions and enhancing the quality of generated code.

Authors (5)
  1. Yihong Dong (35 papers)
  2. Yuchen Liu (156 papers)
  3. Xue Jiang (82 papers)
  4. Zhi Jin (161 papers)
  5. Ge Li (213 papers)

Summary

An Analysis of Structural Repetition in LLMs for Code Generation

The paper "Rethinking Repetition Problems of LLMs in Code Generation" addresses a significant yet often overlooked issue concerning the quality of output from LLMs used in code generation—repetition, particularly structural repetition. While neural LLMs have notably advanced the field of automatic code generation, especially with the recent emergence of LLMs, the tendency to produce repetitive code fragments remains a technical challenge that hinders the generation quality.

Structural Repetition: Definition and Challenges

The paper distinguishes between content repetition and structural repetition, identifying the latter as the more prevalent and problematic form in code generation. Structural repetition is the recurrence of code patterns that share a fixed structure, as dictated by grammar rules, and it can persist even when no content is repeated verbatim. The authors propose a formal definition of structural repetition and argue that it is distinct enough to merit dedicated identification and mitigation, which existing content-repetition strategies fail to provide.
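
To make the distinction concrete, the hypothetical snippet below (our illustration, not an example from the paper) contrasts the two kinds of repetition:

```python
# Content repetition: the same tokens are emitted verbatim, e.g. a
# degenerate model output like:
#     x = x + 1
#     x = x + 1
#     x = x + 1

def describe(x):
    # Structural repetition: the surface tokens vary, so no line repeats
    # verbatim, yet generation keeps re-expanding the same grammar
    # production (if_stmt -> "if" comparison ":" return_stmt).
    if x == 1:
        return "one"
    if x == 2:
        return "two"
    if x == 3:
        return "three"
    # ...and so on, looping on the structure rather than the content.
    return "many"
```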

RPG: A Novel Decoding Approach

To tackle structural repetition, the authors introduce RPG (Repetition Penalization based on Grammar), a decoding strategy designed to detect and mitigate repetition in LLM-generated code. RPG leverages grammar rules through a pushdown automaton, which lets it track the structural flow of the code as it is generated. When a repetition is detected, RPG penalizes the critical tokens that would extend the repeated structure, reducing their likelihood and thereby improving the diversity and utility of the generated code.
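
The paper's full algorithm is more involved, but a minimal sketch of the core idea, tracking grammar state during decoding and decaying the logits of tokens that would extend a repeated structure, might look as follows. Here `track_grammar` and `repeated_rule_tokens` are hypothetical placeholders for the pushdown-automaton machinery described in the paper, the `model`/`tokenizer` interface follows the Hugging Face convention, and the penalty value is illustrative; this is our sketch, not the authors' implementation.

```python
import torch

def rpg_style_decode(model, tokenizer, input_ids, max_new_tokens=256,
                     penalty=1.3):
    """Greedy decoding with a grammar-aware repetition penalty.

    `track_grammar` and `repeated_rule_tokens` are hypothetical
    placeholders for RPG's pushdown-automaton tracking and its
    selection of the "critical" tokens that would extend a repeated
    grammar rule; they are not part of any existing library.
    """
    parser_state = None  # would hold the automaton's stack between steps
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits[:, -1, :]

        # Update the structural view of the partial program and collect
        # token ids whose emission would continue a detected repetition.
        parser_state = track_grammar(parser_state, input_ids, tokenizer)
        critical = repeated_rule_tokens(parser_state, tokenizer)

        # Decay the likelihood of those tokens (same spirit as the
        # classic repetition penalty, but targeted by grammar state).
        for tok in critical:
            val = logits[0, tok].item()
            logits[0, tok] = val / penalty if val > 0 else val * penalty

        next_id = logits.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return input_ids
```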

Evaluation and Results

To evaluate RPG's effectiveness, the authors constructed CodeRepetEval, a dataset built specifically for assessing repetition in code generation. Across several evaluation scenarios, RPG consistently outperformed existing approaches such as greedy search, top-k sampling, and prior repetition-penalty strategies, demonstrating notable reductions in unnecessary repetition and improvements in code compilability. Numerically, RPG achieved higher End-of-sentence Generation Percentage (EGP) scores across all tested scenarios and notably improved the Compiler Correctness Percentage (CCP).
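
As a rough illustration of how a compilability-style metric can be computed, the sketch below scores the fraction of generated programs that parse, assuming Python targets and using the built-in `compile`; the paper's exact definitions of EGP and CCP may differ.

```python
def compiler_correctness_percentage(samples):
    """Percentage of generated programs that compile without error.

    A rough stand-in for a CCP-style metric, assuming the generated
    code is Python; the paper's exact metric definition may differ.
    """
    ok = 0
    for code in samples:
        try:
            compile(code, "<generated>", "exec")
            ok += 1
        except (SyntaxError, ValueError):
            pass
    return 100.0 * ok / len(samples)

# Example: the second snippet has a syntax error, so the score is 50.0.
print(compiler_correctness_percentage(["x = 1\n", "def f(:\n"]))
```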

Implications and Future Directions

The implications of this research extend beyond functional improvements in code generation. By reducing structural repetition, RPG conserves computational resources (e.g., tokens and generation time) and improves the robustness of LLM outputs in software engineering applications. It also points toward integrating more sophisticated grammar-aware mechanisms into generation pipelines, which could drive further innovations in AI-powered software development tools.

Furthermore, the paper sets the stage for ongoing research into the underlying causes of repetition phenomena in LLMs and suggests a strategic direction for future developments. By highlighting the inadequacies of current repetition-focused strategies, the RPG approach not only provides a practical solution but also serves as a heuristic for improving model architectures and training methodologies.

Conclusion

In conclusion, this paper presents a critical step forward in understanding and addressing repetition issues in LLM code generation. The RPG approach highlights the value of a grammar-based strategy for combating structural repetition effectively, paving the way for more reliable and efficient code generation. This work is likely to stimulate further research into optimizing LLM outputs and refining the collaboration between AI systems and human software developers.
