
An Insight into Security Code Review with LLMs: Capabilities, Obstacles, and Influential Factors

Published 29 Jan 2024 in cs.SE and cs.AI (arXiv:2401.16310v5)

Abstract: Security code review is a time-consuming and labor-intensive process typically requiring integration with automated security defect detection tools. However, existing security analysis tools struggle with poor generalization, high false positive rates, and coarse detection granularity. LLMs have been considered promising candidates for addressing those challenges. In this study, we conducted an empirical study to explore the potential of LLMs in detecting security defects during code review. Specifically, we evaluated the performance of seven LLMs under five different prompts and compared them with state-of-the-art static analysis tools. We also performed linguistic and regression analyses for the two top-performing LLMs to identify quality problems in their responses and factors influencing their performance. Our findings show that: (1) In security code review, LLMs significantly outperform state-of-the-art static analysis tools, and the reasoning-optimized LLM performs better than general-purpose LLMs. (2) DeepSeek-R1 achieves the highest performance, followed by GPT-4. The optimal prompt for DeepSeek-R1 incorporates both the commit message and chain-of-thought (CoT) guidance, while for GPT-4, the prompt with a Common Weakness Enumeration (CWE) list works best. (3) GPT-4 frequently produces vague expressions and exhibits difficulties in accurately following instructions in the prompts, while DeepSeek-R1 more commonly generates inaccurate code details in its outputs. (4) LLMs are more adept at identifying security defects in code files that have fewer tokens and security-relevant annotations.


Summary

  • The paper demonstrates that LLMs, especially reasoning-optimized models like DeepSeek-R1, outperform static analyzers in detecting code vulnerabilities.
  • It employs five distinct prompt strategies, including CWE listings and Chain-of-Thought reasoning, to enhance accuracy across varied code contexts.
  • Results reveal that factors like token count, annotation relevance, and code community significantly influence LLM performance and consistency.


Introduction

The study "An Insight into Security Code Review with LLMs: Capabilities, Obstacles, and Influential Factors" (arXiv:2401.16310) is an empirical evaluation of LLMs as tools for security code review, measured against state-of-the-art static analysis tools. The research highlights the advantages and limitations of LLMs in detecting security defects in code files: it evaluates seven LLMs under different prompting strategies and compares their performance against well-established static analysis tools on Python and C/C++ datasets.

Methodology

The paper explores three central Research Questions (RQs) to assess the capabilities of LLMs in security code review. The RQs involve evaluating LLM performance in detecting security defects, identifying quality issues in LLM-generated responses, and analyzing factors influencing LLM performance. The empirical study uses a dataset constructed from 534 code review files, featuring 15 predefined security defect types and diverse code contexts from four open-source projects.

Prompt Design

Five distinct prompt templates were crafted, varying from basic prompts to those integrating Common Weakness Enumeration (CWE) lists and Chain-of-Thought (CoT) reasoning to optimize LLM response accuracy. Particular emphasis was given to how these varied prompts impact the effectiveness of LLMs across different datasets and contexts.
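As an illustration of how such a template family can be organized, the sketch below reconstructs the five variants in Python. The variant names and wording are assumptions for demonstration; the paper's exact prompt text differs, and the CWE excerpt shown is only a fragment of a full list.

```python
# Illustrative reconstruction of five prompt variants for security code
# review. Names, wording, and the CWE excerpt are assumptions, not the
# paper's exact templates.

CWE_EXCERPT = "CWE-362 (Race Condition), CWE-190 (Integer Overflow)"  # excerpt only

PROMPT_TEMPLATES = {
    # P1: basic task description
    "basic": "Review the following code for security defects:\n{code}",
    # P2: basic prompt plus the commit message as extra context
    "commit_msg": (
        "Commit message: {commit_message}\n"
        "Review the following code for security defects:\n{code}"
    ),
    # P3: constrain the search space with a CWE list
    "cwe_list": (
        "Check the following code for security defects from this list: "
        + CWE_EXCERPT + "\n{code}"
    ),
    # P4: chain-of-thought guidance -- ask the model to reason step by step
    "cot": (
        "Review the following code for security defects. "
        "Think step by step: first summarize what the code does, "
        "then inspect each part for vulnerabilities, then conclude.\n{code}"
    ),
    # P5: commit message + chain-of-thought combined
    # (the best-performing variant for DeepSeek-R1, per the study)
    "commit_msg_cot": (
        "Commit message: {commit_message}\n"
        "Review the following code for security defects. "
        "Think step by step before giving a final answer.\n{code}"
    ),
}

def build_prompt(variant: str, code: str, commit_message: str = "") -> str:
    """Fill a template with the code under review and an optional commit message."""
    return PROMPT_TEMPLATES[variant].format(code=code, commit_message=commit_message)
```

Keeping the variants as named templates makes it straightforward to run the same code file through all five prompts and compare detection results per variant.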

Key Findings

Superior LLM Performance

The results demonstrate that LLMs, particularly reasoning-optimized models like DeepSeek-R1, significantly surpass traditional static analysis tools in detecting security defects. These models exhibit superior handling of security code reviews when supported by tailored prompts, showcasing improved detection of vulnerabilities such as race conditions and integer overflows.

  • DeepSeek-R1: Outperforms other models, especially when using prompts that combine the commit message with chain-of-thought guidance.
  • GPT-4: Also shows strong detection capabilities, benefiting from prompts that include a CWE list.

    Figure 1: Distribution of LoC of the code files with security defects.

    Figure 2: An overview of the research procedure for investigating the three RQs.

Quality and Consistency Concerns

While LLMs demonstrated capabilities in identifying defects, issues such as verbose outputs and inconsistent results across iterations were noted. These challenges underline the importance of prompt construction and highlight intrinsic model variability as a significant factor.

  • Consistency Issues: Notably, GPT-4 and DeepSeek-R1 demonstrated variability in results across repeated experiments, a reflection of inherent non-determinism.

    Figure 3: Distribution of security defect types on the Python and C/C++ dataset.
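One simple way to cope with run-to-run variability is to repeat each review several times and keep only the defect labels that a majority of runs agree on. The sketch below is a generic stabilization heuristic, not the paper's protocol; the data are fabricated for illustration.

```python
# Majority-vote aggregation over repeated LLM reviews of the same file.
# A simple stabilization heuristic (an assumption, not the paper's method).
from collections import Counter

def majority_vote(run_outputs, threshold=0.5):
    """Keep a defect label only if it appears in more than `threshold`
    of the runs. `run_outputs` is a list of sets of labels, one per run."""
    counts = Counter(label for run in run_outputs for label in run)
    n_runs = len(run_outputs)
    return {label for label, c in counts.items() if c / n_runs > threshold}

# Three repeated reviews of the same file disagree on two labels.
runs = [
    {"CWE-190", "CWE-362"},  # run 1
    {"CWE-190"},             # run 2
    {"CWE-190", "CWE-787"},  # run 3
]
stable = majority_vote(runs)
# -> {"CWE-190"}: only the label reported in a majority of runs survives
```

The cost is proportional to the number of repeated runs, so this trades extra API calls for more reproducible output.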

Influential Factors

Several factors significantly impact the performance of LLMs:

  • Token Count: Models generally perform better with fewer tokens, underscoring the need for efficient input processing.
  • Annotation Relevance: Security-relevant comments in code were identified as critical guides for LLM performance, aiding in more precise defect detection.
  • Community and File Type: Variations in performance were associated with different code communities (e.g., OpenStack vs. Qt) and file types, accentuating the need for contextual adaptability in LLM applications.

    Figure 4: Construction templates for the five prompts.
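To make the token-count effect concrete, one could bucket the reviewed files by length and compare the detection rate per bucket. The sketch below uses fabricated numbers purely for illustration; the bucket edges and sample data are assumptions, not the paper's figures.

```python
# Toy probe of the token-count effect: bucket files by token count and
# compute the LLM's detection rate per bucket. All numbers are fabricated
# for illustration.

def detection_rate_by_bucket(files, bucket_edges=(500, 2000)):
    """files: list of (token_count, detected: bool) pairs.
    Returns the detection rate for small/medium/large buckets
    (None for an empty bucket)."""
    buckets = {"small": [], "medium": [], "large": []}
    for tokens, detected in files:
        if tokens < bucket_edges[0]:
            buckets["small"].append(detected)
        elif tokens < bucket_edges[1]:
            buckets["medium"].append(detected)
        else:
            buckets["large"].append(detected)
    return {name: (sum(hits) / len(hits) if hits else None)
            for name, hits in buckets.items()}

sample = [(300, True), (450, True), (400, True),   # short files: all detected
          (800, True), (1500, False),              # medium files: mixed
          (2500, False), (3000, False), (2200, True)]  # long files: mostly missed
rates = detection_rate_by_bucket(sample)
# -> {'small': 1.0, 'medium': 0.5, 'large': 0.3333333333333333}
```

A downward trend from the small to the large bucket would be consistent with the paper's finding that LLMs handle shorter files better; the paper itself uses regression analysis rather than this simple bucketing.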

Conclusions and Future Directions

This study underscores the transformative potential of LLMs in security code review, demonstrating their effectiveness over traditional tools in recognizing and explaining security defects. Future work should focus on refining prompt strategies, enhancing model consistency, and exploring the integration of external knowledge bases to mitigate hallucinations and amplify LLM utility.

In conclusion, while LLMs like DeepSeek-R1 offer enhanced capabilities, addressing issues such as non-determinism and leveraging detailed CWE information remain essential steps toward maximizing their efficacy in automated security reviews. The evolution of LLMs in this domain promises significant advances in software security, particularly through improvements in model precision and contextual understanding.
