
Automated Code Review Using Large Language Models with Symbolic Reasoning (2507.18476v1)

Published 24 Jul 2025 in cs.SE and cs.AI

Abstract: Code review is one of the key processes in the software development lifecycle and is essential to maintain code quality. However, manual code review is subjective and time-consuming. Given its rule-based nature, code review is well suited for automation. In recent years, significant efforts have been made to automate this process with the help of artificial intelligence. Recent developments in LLMs have also made them a promising tool in this area, but these models often lack the logical reasoning capabilities needed to fully understand and evaluate code. To overcome this limitation, this study proposes a hybrid approach that integrates symbolic reasoning techniques with LLMs to automate the code review process. We tested our approach using the CodeXGlue dataset, comparing several models, including CodeT5, CodeBERT, and GraphCodeBERT, to assess the effectiveness of combining symbolic reasoning and prompting techniques with LLMs. Our results show that this approach improves the accuracy and efficiency of automated code review.

Summary

  • The paper introduces a hybrid framework that combines LLMs and a symbolic reasoning-based knowledge map to improve code defect detection.
  • The methodology employs fine-tuning, few-shot prompt engineering, and evaluation on CodeXGlue, yielding significant accuracy improvements.
  • The approach enhances explainability and scalability in automated code review by integrating explicit bug patterns and best practices.

Hybrid Automated Code Review with LLMs and Symbolic Reasoning

Introduction

The paper "Automated Code Review Using LLMs with Symbolic Reasoning" (2507.18476) addresses the persistent challenges in automating code review, a critical yet resource-intensive phase in the software development lifecycle. While LLMs have demonstrated strong capabilities in code generation and pattern recognition, their limitations in logical reasoning and semantic code understanding restrict their effectiveness in code review tasks. This work proposes a hybrid framework that integrates symbolic reasoning—via a structured knowledge map of best practices and defect patterns—into the LLM-based code review pipeline. The approach is empirically validated on the CodeXGlue Python defect detection dataset using CodeT5, CodeBERT, and GraphCodeBERT, with a focus on quantifying the impact of symbolic reasoning, prompt engineering, and fine-tuning.

Methodology

Dataset and Preprocessing

The paper utilizes the CodeXGlue Python defect detection dataset, which provides labeled Python function snippets categorized as clean or buggy. Each sample is tokenized using the respective model's tokenizer, with input sequences padded to a maximum length of 256 tokens. To address class imbalance, random oversampling is applied, ensuring adequate representation of buggy samples.
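A minimal sketch of this preprocessing step, assuming the HuggingFace transformers stack and CodeXGlue's field names (`func` for the snippet, `target` for the label); the oversampling helper is illustrative rather than the paper's exact procedure:

```python
# Preprocessing sketch (assumed stack: HuggingFace transformers).
# Field names follow the CodeXGlue defect-detection convention.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

def encode(sample):
    # Pad/truncate each function snippet to the fixed 256-token input length.
    return tokenizer(sample["func"], padding="max_length",
                     truncation=True, max_length=256)

def oversample(samples):
    # Randomly duplicate minority-class (buggy) samples until classes balance.
    buggy = [s for s in samples if s["target"] == 1]
    clean = [s for s in samples if s["target"] == 0]
    minority, majority = sorted([buggy, clean], key=len)
    minority = minority + random.choices(minority, k=len(majority) - len(minority))
    balanced = minority + majority
    random.shuffle(balanced)
    return balanced
```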

Model Selection and Fine-Tuning

Three transformer-based LLMs are evaluated:

  • CodeBERT: Pre-trained on paired natural language and code data, suitable for code summarization and translation.
  • GraphCodeBERT: Extends CodeBERT with data flow graph information, enhancing structural code understanding.
  • CodeT5: An encoder-decoder model optimized for code understanding and generation tasks.

All models are fine-tuned on the defect detection task using AdamW (learning rate $1 \times 10^{-5}$, weight decay 0.01) with mixed-precision (FP16) training for computational efficiency.
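The training setup can be sketched as a standard PyTorch loop. The optimizer and precision settings below match the reported hyperparameters; the batch construction (`train_loader`, a `DataLoader` over the tokenized, oversampled split) and epoch handling are assumptions, since the paper does not specify a training harness:

```python
# Fine-tuning sketch: AdamW (lr 1e-5, weight decay 0.01) with FP16 autocast,
# matching the hyperparameters reported in the paper.
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/graphcodebert-base", num_labels=2)  # clean vs. buggy
optimizer = AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)
scaler = torch.cuda.amp.GradScaler()  # gradient scaling for FP16 stability

model.cuda().train()
for batch in train_loader:  # assumed DataLoader over the preprocessed dataset
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # mixed-precision forward pass
        out = model(input_ids=batch["input_ids"].cuda(),
                    attention_mask=batch["attention_mask"].cuda(),
                    labels=batch["target"].cuda())
    scaler.scale(out.loss).backward()
    scaler.step(optimizer)
    scaler.update()
```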

Symbolic Reasoning via Knowledge Map

The core innovation is the integration of a knowledge map containing 20 Python-specific bug patterns and best practices (e.g., naming anti-patterns, unreachable code, error handling risks, resource leaks, mutable default arguments). This knowledge map is injected into the LLM prompt, providing explicit symbolic context to guide the model's reasoning during code review. Additionally, few-shot learning is employed by including labeled code examples in the prompt, further anchoring the model's predictions.
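Since the paper does not publish its exact prompt, the sketch below only illustrates the mechanism: knowledge-map entries are rendered as explicit rules and combined with labeled few-shot examples ahead of the code under review. All rules and examples here are hypothetical stand-ins for the paper's 20 patterns:

```python
# Illustrative prompt assembly; rules and examples are hypothetical stand-ins
# for the paper's knowledge map, not its published contents.
KNOWLEDGE_MAP = [
    "Mutable default arguments (e.g., def f(x, items=[])) persist state across calls.",
    "Statements after an unconditional return/raise are unreachable.",
    "Bare 'except:' clauses swallow errors and mask defects.",
    "Files or sockets opened without a 'with' block risk resource leaks.",
]

FEW_SHOT = [
    ("def f(x, acc=[]):\n    acc.append(x)\n    return acc", "buggy"),
    ("def add(a, b):\n    return a + b", "clean"),
]

def build_prompt(code: str) -> str:
    rules = "\n".join(f"- {r}" for r in KNOWLEDGE_MAP)
    shots = "\n\n".join(f"Code:\n{c}\nLabel: {y}" for c, y in FEW_SHOT)
    return (f"You are reviewing Python code. Known defect patterns:\n{rules}\n\n"
            f"{shots}\n\nCode:\n{code}\nLabel:")
```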

Evaluation Metrics

Performance is assessed using precision, recall, F1-score, and accuracy, with a focus on the trade-off between false positives and error detection sensitivity.
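Concretely, these metrics can be computed with scikit-learn; a brief sketch, where `y_true` and `y_pred` are the gold labels and model predictions on the held-out split:

```python
# Evaluation sketch using scikit-learn's standard classification metrics.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def evaluate(y_true, y_pred):
    return {
        "precision": precision_score(y_true, y_pred),  # how often flagged code is truly buggy
        "recall": recall_score(y_true, y_pred),        # how many real defects are caught
        "f1": f1_score(y_true, y_pred),
        "accuracy": accuracy_score(y_true, y_pred),
    }
```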

Experimental Results

Experiments are conducted on an NVIDIA A100 GPU, comparing four scenarios for each model: base (one-shot), few-shot, fine-tuned, and the proposed hybrid approach (fine-tuned + few-shot + knowledge map).

Key findings:

  • Base Model Performance: All models exhibit low precision, recall, and accuracy in the base scenario, with CodeT5 marginally outperforming the others.
  • Few-Shot Learning: Incorporating few-shot examples yields moderate improvements, particularly for GraphCodeBERT (accuracy increase of 19.11%).
  • Fine-Tuning: Fine-tuning on CodeXGlue significantly boosts performance, especially for GraphCodeBERT (accuracy increase of 27.46%).
  • Hybrid Approach: The integration of symbolic reasoning via the knowledge map, combined with fine-tuning and few-shot learning, delivers the highest gains. GraphCodeBERT achieves the best results (precision 0.485, F1-score 0.381, accuracy 0.687), with the hybrid approach improving average accuracy by 16% over the base models.
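As a back-of-envelope consistency check (not a figure reported in the paper), GraphCodeBERT's implied recall follows from the definition of the F1-score:

$$F_1 = \frac{2PR}{P + R} \;\Rightarrow\; R = \frac{F_1 \, P}{2P - F_1} = \frac{0.381 \times 0.485}{2(0.485) - 0.381} \approx 0.314$$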

Notably, the hybrid approach's 16% average accuracy gain exceeds the 12% improvement that SYNCHROMESH reported for LLM-symbolic integration, albeit on code generation rather than code review tasks.

Analysis and Implications

The results demonstrate that symbolic reasoning, operationalized through explicit knowledge maps, can compensate for LLMs' deficiencies in logical and semantic code analysis. The hybrid approach is particularly effective for models with architectural support for structural code information (e.g., GraphCodeBERT). The observed variance in model responsiveness to prompt engineering and fine-tuning underscores the importance of model-specific optimization strategies.

From a practical perspective, the framework offers a scalable path to more reliable and consistent automated code review, reducing manual effort and mitigating subjectivity. The explicit encoding of best practices and defect patterns enhances explainability and trustworthiness, addressing common concerns with LLM-based code analysis.

Future Directions

Potential extensions include:

  • Expanding the knowledge map to support additional programming languages and paradigms.
  • Incorporating more advanced symbolic reasoning techniques, such as graph-based reasoning or multi-modal learning.
  • Exploring dynamic prompt construction based on code context and project-specific guidelines.
  • Investigating the integration of formal verification tools for higher-assurance code review in safety-critical domains.

Conclusion

This work establishes that hybridizing LLMs with symbolic reasoning via structured knowledge maps yields measurable improvements in automated code review, particularly in defect detection accuracy and consistency. The approach is model-agnostic but demonstrates the greatest benefit for architectures that leverage code structure. As LLMs and symbolic reasoning frameworks continue to advance, such integrative methods are poised to become foundational in automated software engineering tools, with significant implications for code quality, maintainability, and developer productivity.
