
Security Vulnerability Detection with Multitask Self-Instructed Fine-Tuning of Large Language Models (2406.05892v1)

Published 9 Jun 2024 in cs.CR, cs.LG, and cs.SE

Abstract: Software security vulnerabilities allow attackers to perform malicious activities to disrupt software operations. Recent Transformer-based LLMs have significantly advanced vulnerability detection, surpassing the capabilities of static analysis-based deep learning models. However, LLMs trained solely on code tokens capture neither the explanation of the vulnerability type nor the data-flow structure of the code, both of which are crucial for vulnerability detection. We propose a novel technique that integrates a multitask sequence-to-sequence LLM with program control flow graphs encoded by a graph neural network to achieve sequence-to-classification vulnerability detection. We introduce MSIVD, multitask self-instructed fine-tuning for vulnerability detection, inspired by chain-of-thought prompting and LLM self-instruction. Our experiments demonstrate that MSIVD achieves superior performance, outperforming the highest LLM-based vulnerability detector baseline (LineVul), with an F1 score of 0.92 on the BigVul dataset and 0.48 on the PreciseBugs dataset. By training LLMs and GNNs simultaneously on a combination of code and explanatory metrics of a vulnerable program, MSIVD represents a promising direction for advancing LLM-based vulnerability detection that generalizes to unseen data. Based on our findings, we further discuss the necessity for new labelled security vulnerability datasets, as recent LLMs have seen or memorized prior datasets' held-out evaluation data.
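
The abstract's core idea — pairing a sequence representation of code tokens with a GNN encoding of the program's control flow graph, then classifying the combined representation — can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding, the message-passing rule, and the classifier head below are toy placeholders standing in for the fine-tuned LLM, the GNN, and the classification layer.

```python
# Hypothetical sketch of the sequence-plus-CFG classification idea in MSIVD.
# All features and weights are toy placeholders, not the paper's model.
import math

def _h(s):
    # Deterministic toy hash (Python's built-in str hash is salted per run).
    v = 0
    for ch in s:
        v = (v * 31 + ord(ch)) & 0xFFFFFFFF
    return v

def embed_tokens(tokens, dim=4):
    """Toy stand-in for the LLM: average of per-token hash features."""
    vec = [0.0] * dim
    for t in tokens:
        h = _h(t)
        for i in range(dim):
            vec[i] += ((h >> (i * 8)) & 0xFF) / 255.0
    return [v / max(len(tokens), 1) for v in vec]

def message_pass(node_feats, edges, rounds=2):
    """Toy stand-in for the GNN: each CFG node repeatedly mixes in the
    average of its successors' features along control-flow edges."""
    feats = {n: list(f) for n, f in node_feats.items()}
    for _ in range(rounds):
        nxt = {}
        for n, f in feats.items():
            nbrs = [feats[d] for (s, d) in edges if s == n]
            agg = [sum(col) / len(nbrs) for col in zip(*nbrs)] if nbrs else f
            nxt[n] = [(a + b) / 2 for a, b in zip(f, agg)]
        feats = nxt
    return feats

def classify(seq_vec, graph_feats, weights):
    """Toy classifier head: pool the graph features, concatenate with the
    sequence vector, and apply a logistic score ("vulnerable" probability)."""
    pooled = [sum(col) / len(graph_feats) for col in zip(*graph_feats.values())]
    joint = seq_vec + pooled
    z = sum(w * x for w, x in zip(weights, joint))
    return 1.0 / (1.0 + math.exp(-z))
```

In the actual system, the sequence branch would be a fine-tuned LLM producing token-level representations, the graph branch a learned GNN over the extracted control flow graph, and the two would be trained jointly on code plus explanatory labels, as the abstract describes.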

Authors (5)
  1. Aidan Z. H. Yang (6 papers)
  2. Haoye Tian (26 papers)
  3. He Ye (16 papers)
  4. Ruben Martins (24 papers)
  5. Claire Le Goues (34 papers)
Citations (3)