Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants (2208.09727v4)

Published 20 Aug 2022 in cs.CR

Abstract: LLMs such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.
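To make the study task concrete: a singly-linked "shopping list" in C might look roughly like the sketch below. The struct layout and function names here are illustrative assumptions, not the actual scaffold given to participants; the point is that the task exercises exactly the pointer manipulation and fixed-size string handling where low-level bugs (overflows, leaks, NULL dereferences) tend to appear.

```c
#include <stdlib.h>
#include <string.h>

/* One shopping-list entry. Field names and sizes are illustrative,
 * not taken from the study's actual starter code. */
typedef struct node {
    char item[32];
    unsigned quantity;
    struct node *next;
} node;

/* Prepend a new entry; returns the new head, or NULL on allocation
 * failure (in which case the caller's old head is still valid). */
node *list_add(node *head, const char *item, unsigned quantity) {
    node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    strncpy(n->item, item, sizeof n->item - 1);
    n->item[sizeof n->item - 1] = '\0';  /* strncpy may not NUL-terminate */
    n->quantity = quantity;
    n->next = head;
    return n;
}

/* Count the entries in the list. */
size_t list_len(const node *head) {
    size_t len = 0;
    for (; head != NULL; head = head->next)
        len++;
    return len;
}

/* Free every node in the list. */
void list_free(node *head) {
    while (head != NULL) {
        node *next = head->next;
        free(head);
        head = next;
    }
}
```

Details like the bounded copy with explicit NUL termination, the checked `malloc`, and the full-list `free` are the kind of low-level correctness the study's security analysis targets: an unchecked `strcpy` into `item` or a forgotten `free` would count among the critical bugs being measured.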

Authors (6)
  1. Gustavo Sandoval (1 paper)
  2. Hammond Pearce (35 papers)
  3. Teo Nys (1 paper)
  4. Ramesh Karri (92 papers)
  5. Siddharth Garg (99 papers)
  6. Brendan Dolan-Gavitt (24 papers)
Citations (81)