What You See Is Not Always What You Get: An Empirical Study of Code Comprehension by Large Language Models (2412.08098v1)

Published 11 Dec 2024 in cs.SE, cs.AI, and cs.LG

Abstract: Recent studies have demonstrated outstanding capabilities of LLMs in the software engineering domain, covering numerous tasks such as code generation and comprehension. While the benefit of LLMs for coding tasks is well noted, it is perceived that LLMs are vulnerable to adversarial attacks. In this paper, we study a specific LLM vulnerability to imperceptible character attacks, a type of prompt-injection attack that uses special characters to befuddle an LLM while keeping the attack hidden from human eyes. We devise four categories of attacks and investigate their effects on the performance outcomes of tasks relating to code analysis and code comprehension. Two generations of ChatGPT are included to evaluate the impact of advancements made in contemporary models. Our experimental design consists of comparing perturbed and unperturbed code snippets and evaluating two performance outcomes: model confidence, measured via the log probabilities of the response, and correctness of the response. We conclude that the earlier version of ChatGPT exhibits a strong negative linear correlation between the amount of perturbation and the performance outcomes, while the recent ChatGPT presents a strong negative correlation between the presence of perturbation and performance outcomes, but no valid correlational relationship between perturbation budget and performance outcomes. We anticipate this work contributes to an in-depth understanding of leveraging LLMs for coding tasks. We suggest that future research delve into how to create LLMs that can return a correct response even when the prompt exhibits perturbations.

Authors (3)
  1. Bangshuo Zhu (2 papers)
  2. Jiawen Wen (3 papers)
  3. Huaming Chen (38 papers)

Summary

An Empirical Study of Code Comprehension Vulnerabilities in LLMs

This paper addresses the vulnerabilities of LLMs in software engineering contexts, focusing on adversarial attacks that leverage imperceptible character manipulation. As LLMs have become integral to assisting software developers with tasks spanning code generation, program repair, and vulnerability detection, understanding their robustness against sophisticated attack vectors is crucial. Despite their adeptness at natural language tasks, including those required for code comprehension, LLMs have exhibited susceptibility to subtle perturbations. The paper's contribution lies in empirically assessing how truly imperceptible perturbations, encoded via special Unicode characters, affect LLM performance, specifically targeting three versions of ChatGPT: two from the third generation and one from the fourth generation.

The findings show that the GPT-3.5 models exhibit a strong negative linear correlation between perturbation budget and performance outcomes, namely model confidence and correctness. In contrast, the GPT-4 model, while still negatively affected by perturbations, reveals a distinct response pattern: the presence of any perturbation sharply degrades performance, with little differentiation across perturbation budgets or categories. This discrepancy suggests that GPT-4 may possess mechanisms that handle confounding prompts more rigidly, thereby avoiding false positives. The distinction points to stronger safeguards in the GPT-4 architecture, albeit at the cost of handling legitimate yet complex inputs less flexibly.
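Model confidence here is derived from the log probabilities the model assigns to its response tokens. The snippet below is a minimal sketch of one common aggregation, the mean token log probability mapped back to the probability scale; the paper's exact aggregation is not reproduced here, and the example values are purely illustrative.

```python
# Sketch: turn per-token log probabilities of an LLM response into a single
# "confidence" score. Assumes the API used exposes token-level logprobs.
import math

def response_confidence(token_logprobs: list[float]) -> float:
    """Average token log probability, mapped back to a 0-1 probability scale."""
    if not token_logprobs:
        return 0.0
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_logprob)

# Example: logprobs returned for a short classification-style answer
print(response_confidence([-0.05, -0.30, -0.10]))  # ~0.86
```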

The paper also categorizes perturbations into four types: reorderings, invisible characters, deletions, and homoglyphs, each affecting the models to varying extents. Notably, the deletion and reordering categories produced the most significant performance declines, highlighting specific vulnerabilities in current model architectures. A sketch of how such perturbations are typically constructed follows.
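To make the attack surface concrete, the following is a minimal sketch of how imperceptible perturbations of these four kinds are commonly built from Unicode control and look-alike characters. The specific codepoints, injection positions, and budget semantics are assumptions drawn from the broader imperceptible-character literature, not the paper's exact implementation.

```python
# Hypothetical construction of the four perturbation categories on a code snippet.
import random

ZERO_WIDTH = "\u200b"              # zero-width space: invisible character insertion
BIDI_WRAP = ("\u202e", "\u202c")   # right-to-left override + pop: visual reordering
BACKSPACE = "\u0008"               # backspace control char: "deleted" text that still reaches the model
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Latin -> Cyrillic look-alikes

def perturb(code: str, category: str, budget: int, seed: int = 0) -> str:
    """Inject `budget` imperceptible perturbations of one category into `code`."""
    rng = random.Random(seed)
    chars = list(code)
    for _ in range(budget):
        i = rng.randrange(len(chars))
        if category == "invisible":
            chars.insert(i, ZERO_WIDTH)
        elif category == "reorder":
            chars[i] = BIDI_WRAP[0] + chars[i] + BIDI_WRAP[1]
        elif category == "deletion":
            # the rendered text hides the injected letter + backspace pair
            chars.insert(i, "x" + BACKSPACE)
        elif category == "homoglyph":
            chars[i] = HOMOGLYPHS.get(chars[i], chars[i])
    return "".join(chars)

snippet = "def add(a, b):\n    return a + b\n"
print(repr(perturb(snippet, "homoglyph", budget=3)))
```

The key property is that the perturbed snippet renders (nearly) identically to the original for a human reader, while the model receives a different token sequence, which is what allows confidence and correctness to be compared across perturbation budgets.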

From a theoretical perspective, these findings underscore the importance of strengthening LLMs' comprehension mechanisms, especially in discerning imperceptible perturbations that mimic authentic input. Practically, the results are valuable for developers looking to deploy LLMs in software engineering environments, making them aware of potential security vulnerabilities.

Future research should focus on developing LLMs that retain accuracy and task efficacy despite the presence of such perturbations. Introducing sophisticated context-parsing algorithms that simulate human intuition when handling corrupted inputs might bridge the gap between appearance and understanding. Moreover, exploration of LLM interpretability mechanisms could aid in developing models that explicate their reasoning processes, further aligning model output with user expectations.

In sum, this paper advances our understanding of LLMs, such as ChatGPT, in handling code-related tasks under adversarial conditions. It calls for continued investigation into refining LLMs to ensure security, reliability, and seamless integration into the workflows they are designed to augment. The exploration of imperceptible character attacks paves the way for more resilient AI systems capable of withstanding increasingly sophisticated threats in a vast array of applications.