Traces of Memorisation in Large Language Models for Code (2312.11658v2)

Published 18 Dec 2023 in cs.CR, cs.AI, and cs.SE

Abstract: LLMs have gained significant popularity because of their ability to generate human-like text and their potential applications in fields such as Software Engineering. LLMs for code are commonly trained on large unsanitised corpora of source code scraped from the internet. The content of these datasets is memorised and can be extracted by attackers with data extraction attacks. In this work, we explore memorisation in LLMs for code and compare the rate of memorisation with LLMs trained on natural language. We adopt an existing benchmark for natural language and construct a benchmark for code by identifying samples that are vulnerable to attack. We run both benchmarks against a variety of models and perform a data extraction attack. We find that LLMs for code, like their natural language counterparts, are vulnerable to data extraction attacks. From the training data identified as potentially extractable, we were able to extract 47% from a CodeGen-Mono-16B code completion model. We also observe that models memorise more as their parameter count grows, and that their pre-training data are also vulnerable to attack. We further find that data carriers are memorised at a higher rate than regular code or documentation, and that different model architectures memorise different samples. Since data leakage has severe consequences, we urge the research community to further investigate the extent of this phenomenon using a wider range of models and extraction techniques, in order to build safeguards that mitigate this issue.
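
A minimal sketch of the kind of prefix-based data extraction attack the abstract describes: prompt the model with the prefix of a suspected training sample and check whether greedy decoding reproduces the true suffix verbatim. The model name (a smaller CodeGen-Mono sibling used here for tractability), the prefix/suffix lengths, and the exact-match criterion are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a prefix-prompting extraction check, assuming a Hugging Face
# causal LM. Prefix/suffix lengths and the exact-match criterion are
# illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"  # small sibling of CodeGen-Mono-16B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def is_memorised(sample: str, prefix_tokens: int = 50, suffix_tokens: int = 50) -> bool:
    """Prompt the model with a training-sample prefix and check whether
    greedy decoding reproduces the true suffix token-for-token."""
    ids = tokenizer(sample, return_tensors="pt").input_ids[0]
    if ids.shape[0] < prefix_tokens + suffix_tokens:
        return False  # sample too short to split into prefix + suffix
    prefix = ids[:prefix_tokens].unsqueeze(0)
    true_suffix = ids[prefix_tokens:prefix_tokens + suffix_tokens]
    out = model.generate(
        prefix,
        max_new_tokens=suffix_tokens,
        do_sample=False,  # greedy decoding: the most extraction-friendly setting
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = out[0][prefix_tokens:]  # strip the prompt from the output
    if generated.shape[0] < suffix_tokens:
        return False  # generation stopped early (e.g. on EOS)
    return bool((generated[:suffix_tokens] == true_suffix).all())
```

Running a check like this over a corpus of candidate samples and reporting the fraction for which it returns True yields an extraction rate of the kind the paper reports (47% for CodeGen-Mono-16B on the samples flagged as potentially extractable).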

Authors (3)
  1. Ali Al-Kaswan (7 papers)
  2. Maliheh Izadi (36 papers)
  3. Arie van Deursen (54 papers)
Citations (10)