Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG (2406.11147v2)

Published 17 Jun 2024 in cs.SE and cs.AI

Abstract: Vulnerability detection is essential for software quality assurance. In recent years, deep learning models (especially LLMs) have shown promise in vulnerability detection. In this work, we propose Vul-RAG, a novel LLM-based vulnerability detection technique that leverages a knowledge-level retrieval-augmented generation (RAG) framework to detect vulnerabilities in given code in three phases. First, Vul-RAG constructs a vulnerability knowledge base by using LLMs to extract multi-dimensional knowledge from existing CVE instances; second, for a given code snippet, Vul-RAG retrieves relevant vulnerability knowledge from the constructed knowledge base based on functional semantics; third, Vul-RAG leverages LLMs to check the given code snippet for vulnerabilities by reasoning about the presence of the vulnerability causes and fixing solutions described in the retrieved knowledge. Our evaluation of Vul-RAG on our constructed benchmark PairVul shows that Vul-RAG substantially outperforms all baselines, with 12.96%/110% relative improvements in accuracy/pairwise accuracy. In addition, our user study shows that the vulnerability knowledge generated by Vul-RAG can serve as high-quality explanations, improving manual detection accuracy from 0.60 to 0.77.
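The three-phase pipeline described above can be sketched in miniature as follows. This is an illustrative stand-in, not the paper's implementation: the knowledge-base entries, the token-overlap retriever, and the prompt builder are all hypothetical simplifications of the paper's LLM-based extraction, functional-semantics retrieval, and reasoning steps.

```python
# Phase 1 (assumed structure): a knowledge base of entries extracted from
# CVE instances, each pairing a functional-semantics summary with the
# vulnerability cause and its fixing solution.
KNOWLEDGE_BASE = [
    {
        "semantics": "copies user input into a fixed-size stack buffer",
        "cause": "missing bounds check before strcpy",
        "fix": "use strncpy with the destination buffer size",
    },
    {
        "semantics": "frees a pointer and later dereferences it",
        "cause": "use-after-free on the freed pointer",
        "fix": "set the pointer to NULL after free and check before use",
    },
]

def retrieve(semantics: str, kb: list, top_k: int = 1) -> list:
    """Phase 2: rank entries by token overlap with the query semantics
    (a toy proxy for the paper's functional-semantics retrieval)."""
    query = set(semantics.lower().split())
    scored = sorted(
        kb,
        key=lambda e: len(query & set(e["semantics"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_check_prompt(code: str, entry: dict) -> str:
    """Phase 3: assemble a prompt asking an LLM whether the code exhibits
    the retrieved vulnerability cause without applying the fix."""
    return (
        f"Code:\n{code}\n\n"
        f"Known vulnerability cause: {entry['cause']}\n"
        f"Known fixing solution: {entry['fix']}\n"
        "Does the code exhibit this cause without applying the fix?"
    )

snippet = "char buf[8]; strcpy(buf, user_input);"
best = retrieve("copies user input into a fixed-size buffer", KNOWLEDGE_BASE)[0]
prompt = build_check_prompt(snippet, best)
```

In the paper, the retrieval step operates on functional semantics rather than raw code similarity, so code that behaves like a known CVE instance can match even when its tokens differ; the overlap heuristic here only gestures at that idea.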

Authors (10)
  1. Xueying Du (9 papers)
  2. Geng Zheng (1 paper)
  3. Kaixin Wang (30 papers)
  4. Jiayi Feng (4 papers)
  5. Wentai Deng (3 papers)
  6. Mingwei Liu (22 papers)
  7. Xin Peng (82 papers)
  8. Tao Ma (56 papers)
  9. Yiling Lou (28 papers)
  10. Bihuan Chen (21 papers)
Citations (13)

