Code Vulnerability Detection: A Comparative Analysis of Emerging Large Language Models (2409.10490v1)

Published 16 Sep 2024 in cs.SE

Abstract: The growing trend of vulnerability issues in software development as a result of a large dependence on open-source projects has received considerable attention recently. This paper investigates the effectiveness of LLMs in identifying vulnerabilities within codebases, with a focus on the latest advancements in LLM technology. Through a comparative analysis, we assess the performance of emerging LLMs, specifically Llama, CodeLlama, Gemma, and CodeGemma, alongside established state-of-the-art models such as BERT, RoBERTa, and GPT-3. Our study aims to shed light on the capabilities of LLMs in vulnerability detection, contributing to the enhancement of software security practices across diverse open-source repositories. We observe that CodeGemma achieves the highest F1-score of 58% and a Recall of 87%, amongst the recent additions of LLMs to detect software security vulnerabilities.
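The reported F1-score and Recall jointly imply a precision figure the abstract does not state. As a minimal sketch (not from the paper itself), inverting the standard relation F1 = 2PR / (P + R) for the reported values suggests CodeGemma's precision is roughly 43.5%:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def implied_precision(f1: float, recall: float) -> float:
    """Solve F1 = 2PR / (P + R) for P: P = F1 * R / (2R - F1)."""
    return f1 * recall / (2 * recall - f1)

# Reported values from the abstract: F1 = 58%, Recall = 87%
p = implied_precision(0.58, 0.87)
print(f"Implied precision: {p:.3f}")  # ~0.435
```

This is only back-of-the-envelope arithmetic from the two reported metrics; the paper's actual precision figure may differ due to rounding in the reported values.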

Authors (3)
  1. Shaznin Sultana (1 paper)
  2. Sadia Afreen (3 papers)
  3. Nasir U. Eisty (25 papers)
Citations (1)
