CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges (2401.07339v2)

Published 14 Jan 2024 in cs.SE

Abstract: LLMs have shown promise in automated code generation but typically excel only in simpler tasks such as generating standalone code units. Real-world software development, however, often involves complex code repositories (named repo) with complex dependencies and extensive documentation. To fill this gap, our research pivots towards evaluating LLMs in a more realistic setting -- real-world repo-level code generation. We introduce CodeAgentBench, a manually curated benchmark for repo-level code generation. This benchmark comprises five high-quality Python projects, encompassing a total of 101 samples. We assess nine leading LLMs on repo-level tasks and observe a decline in their performance. To tackle this, we present CodeAgent, a novel LLM-based agent framework that employs external tools for effective repo-level code generation. CodeAgent integrates five programming tools, enabling interaction with software artifacts for information retrieval, code symbol navigation, and code testing. We implement four agent strategies to optimize these tools' usage. Our experiments on CodeAgentBench show that CodeAgent enhances LLM performance significantly, with improvements ranging from 18.1% to 250%. Further tests on the HumanEval benchmark confirm CodeAgent's adaptability and efficacy across various code generation tasks. Notably, CodeAgent outperforms commercial products like Github Copilot, showcasing superior accuracy and efficiency. These results demonstrate CodeAgent's robust capabilities in code generation, highlighting its potential for real-world repo-level coding challenges.

This paper addresses a key limitation of LLMs for code generation: they typically perform well on simple, standalone tasks (such as function-level generation) but struggle with complex, real-world software development scenarios involving entire code repositories (repo-level tasks). These tasks require understanding intricate dependencies, navigating existing codebases, and integrating new code seamlessly.

To evaluate and improve LLM performance in this realistic setting, the authors introduce two main contributions:

  1. CodeAgentBench: A manually curated benchmark specifically designed for repo-level code generation. It consists of 101 tasks derived from five diverse, high-quality Python projects sourced from GitHub. Each task includes rich contextual information:
    • Detailed documentation (following Sphinx format) including requirements, class/function signatures, parameter descriptions, and explanations of domain-specific terms.
    • Contextual dependencies (identified using a static analysis tool based on tree-sitter) such as imported modules, user-defined classes/functions within the repo.
    • A sandbox runtime environment for execution.
    • A self-contained test suite for verifying correctness.
    • A canonical solution refined through manual checks. Experiments show that even advanced LLMs achieve low pass rates on CodeAgentBench (e.g., 21.8% for GPT-4), highlighting the difficulty of repo-level tasks compared to simpler benchmarks such as HumanEval. (A sketch of what a task record might contain follows this list.)
  2. CodeAgent: A novel LLM-based agent framework designed to tackle these repo-level challenges by integrating external tools. It mimics the developer workflow of information gathering, implementation, and testing. CodeAgent equips LLMs with five programming tools (minimal wrapper sketches also follow this list):
    • Information Retrieval: WebSearch (using DuckDuckGo) for external knowledge and DocSearch (using BM25) for retrieving relevant project documentation.
    • Code Implementation: SymbolSearch (using tree-sitter) for navigating the codebase, finding symbol definitions (variables, functions, classes), and understanding dependencies within files or across the repository.
    • Code Testing: FormatCheck (using Black) for checking and correcting code formatting, and PythonREPL for executing code snippets within the repository's environment to check syntax and functional correctness, providing execution feedback for debugging.
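
For concreteness, here is a hypothetical sketch of what a single CodeAgentBench task record might look like, based only on the components listed under item 1 above; the field names and types are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical shape of one CodeAgentBench task, inferred from the summary.
# Field names are illustrative assumptions, not the paper's schema.
from dataclasses import dataclass, field


@dataclass
class RepoLevelTask:
    task_id: str
    repo: str                     # one of the five curated Python projects
    documentation: str            # Sphinx-style requirements, signatures, term explanations
    dependencies: list[str] = field(default_factory=list)  # imports and in-repo symbols involved
    sandbox_path: str = ""        # runtime environment in which generated code is executed
    test_suite: str = ""          # self-contained tests used to verify correctness
    canonical_solution: str = ""  # manually refined reference implementation
```

Under this reading, a task counts as solved when the model's completion, placed into the repository, passes the task's test suite inside the sandbox environment.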
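Likewise, here is a minimal sketch of the five tool interfaces an agent could call. The paper's tools are backed by DuckDuckGo, BM25, tree-sitter, Black, and a Python interpreter; the bodies below are simplified stand-ins (plain token overlap instead of BM25, Python's ast module instead of tree-sitter) that only illustrate the call surface, not the authors' implementation.

```python
# Simplified stand-ins for CodeAgent's five tools. Real backends per the paper:
# DuckDuckGo (WebSearch), BM25 (DocSearch), tree-sitter (SymbolSearch),
# Black (FormatCheck), and a Python interpreter (PythonREPL).
import ast
import subprocess
import sys


def web_search(query: str) -> str:
    """Placeholder for the DuckDuckGo-backed WebSearch; the network call is omitted here."""
    raise NotImplementedError("wire up a search API, e.g. the duckduckgo_search package")


def doc_search(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Rank project documentation chunks by naive token overlap (BM25 in the paper)."""
    query_tokens = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(query_tokens & set(d.lower().split())))[:top_k]


def symbol_search(source: str, name: str) -> list[str]:
    """Locate class/function definitions of `name` in a source file (tree-sitter in the paper)."""
    matches = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)) and node.name == name:
            matches.append(f"{type(node).__name__} '{node.name}' at line {node.lineno}")
    return matches


def format_check(code: str) -> str:
    """Normalize formatting with Black if it is installed; otherwise return the code unchanged."""
    try:
        import black
        return black.format_str(code, mode=black.Mode())
    except ImportError:
        return code


def python_repl(code: str, timeout: int = 10) -> str:
    """Execute a snippet in a fresh subprocess and return stdout plus stderr as feedback."""
    proc = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True, timeout=timeout)
    return proc.stdout + proc.stderr
```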

To manage tool usage effectively, CodeAgent explores four agent strategies (a schematic of the Rule-based pipeline follows this list):

  • ReAct: Interleaves reasoning and acting, deciding dynamically which tool to use based on the current state.
  • Tool-Planning: Creates a plan upfront, breaks the task into subtasks, and uses tools as needed for complex subtasks.
  • OpenAIFunc: Leverages the built-in function-calling capabilities of models like GPT-3.5/GPT-4.
  • Rule-based: Follows a predefined workflow inspired by human programming: web search -> documentation search -> symbol navigation -> code generation -> format check -> code interpretation/debugging.
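
To make the Rule-based workflow concrete, here is a schematic pipeline. It assumes `llm` is a prompt-to-text callable and `tools` maps tool names to single-argument callables (for example, the wrappers sketched earlier, partially applied to the repository's documents and source files); the prompt wording, retry loop, and error heuristic are invented for this sketch rather than taken from the paper.

```python
# Schematic Rule-based pipeline: web search -> documentation search -> symbol
# navigation -> code generation -> format check -> execution and debugging.
# Prompts and the error heuristic are illustrative assumptions.
def rule_based_generate(task, llm, tools, max_debug_rounds: int = 3) -> str:
    background = tools["web_search"](task.documentation)        # 1. external knowledge
    docs = tools["doc_search"](task.documentation)               # 2. project documentation
    symbols = tools["symbol_search"](task.documentation)         # 3. repo symbol navigation
    context = (
        f"Task:\n{task.documentation}\n\n"
        f"Relevant docs:\n{docs}\n\nRelevant symbols:\n{symbols}\n\n"
        f"Background:\n{background}\n\nWrite the required code."
    )
    code = llm(context)                                          # 4. initial generation
    for _ in range(max_debug_rounds):
        code = tools["format_check"](code)                       # 5. formatting pass
        feedback = tools["python_repl"](code + "\n" + task.test_suite)  # 6. run the tests
        if "Traceback" not in feedback and "Error" not in feedback:
            break                                                # tests ran cleanly
        code = llm(
            context
            + f"\n\nPrevious attempt:\n{code}\nExecution feedback:\n{feedback}\nFix the code."
        )
    return code
```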

Experiments were conducted using nine LLMs (including GPT-4, GPT-3.5, CodeLlama-34B, DeepSeek-33B) on CodeAgentBench and HumanEval. Key findings include:

  • CodeAgent significantly improves performance on CodeAgentBench across all tested LLMs, with Pass@1 gains ranging from 18.1% to 250% over direct generation (NoAgent); a short note on the Pass@1 metric follows this list.
  • The Rule-based and ReAct strategies were generally the most effective.
  • The framework also showed improvements on the function-level HumanEval benchmark, demonstrating adaptability.
  • An ablation study confirmed the positive contribution of each tool, with code symbol navigation being particularly crucial.
  • CodeAgent outperformed commercial products such as GitHub Copilot and AutoGPT in a manual comparison on a subset of CodeAgentBench tasks.
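
A brief note on the metric used above: Pass@1 is, in the simplest single-sample reading, the fraction of tasks whose generated solution passes that task's test suite on the first attempt. This summary does not say whether greedy decoding or sampling is used, so the sketch below assumes one attempt per task.

```python
# Pass@1 under a single-attempt reading: the share of tasks whose generated
# solution passes its test suite.
def pass_at_1(passed: list[bool]) -> float:
    """passed[i] is True iff the generated solution for task i passed its tests."""
    return sum(passed) / len(passed) if passed else 0.0
```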

The paper concludes that CodeAgent, by integrating specialized tools and employing agent strategies, effectively enhances LLMs' capabilities on complex, real-world repo-level code generation tasks, bridging the gap between simple benchmarks and practical software development needs. Limitations include the need to investigate potential data memorization effects, explore more advanced tools, refine the comparison methodology against commercial products, and optimize agent prompts.

Authors (5)
  1. Kechi Zhang (22 papers)
  2. Jia Li (380 papers)
  3. Ge Li (213 papers)
  4. Xianjie Shi (5 papers)
  5. Zhi Jin (160 papers)
Citations (39)