Code Vulnerability Detection: A Comparative Analysis of Emerging Large Language Models (2409.10490v1)
Abstract: The growing trend of vulnerability issues in software development, driven by heavy reliance on open-source projects, has received considerable attention recently. This paper investigates the effectiveness of LLMs in identifying vulnerabilities within codebases, with a focus on the latest advancements in LLM technology. Through a comparative analysis, we assess the performance of emerging LLMs, specifically Llama, CodeLlama, Gemma, and CodeGemma, alongside established state-of-the-art models such as BERT, RoBERTa, and GPT-3. Our study aims to shed light on the capabilities of LLMs in vulnerability detection, contributing to the enhancement of software security practices across diverse open-source repositories. We observe that CodeGemma achieves the highest F1-score of 58% and a recall of 87% among the recently introduced LLMs for detecting software security vulnerabilities.
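As a rough illustration of the metrics reported above (a minimal sketch, not the paper's evaluation code), F1-score and recall for a binary vulnerability classifier can be computed with scikit-learn; the labels below are hypothetical placeholders.

```python
# Illustrative sketch (not from the paper): computing F1-score and recall
# for binary vulnerability detection with scikit-learn.
# 1 = vulnerable, 0 = not vulnerable; labels are hypothetical.
from sklearn.metrics import f1_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # ground-truth labels from a labeled codebase
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]   # labels predicted by an LLM-based classifier

print(f"F1-score: {f1_score(y_true, y_pred):.2f}")    # harmonic mean of precision and recall
print(f"Recall:   {recall_score(y_true, y_pred):.2f}")  # fraction of true vulnerabilities found
```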
- Shaznin Sultana
- Sadia Afreen
- Nasir U. Eisty