Quantifying and Analyzing Entity-level Memorization in Large Language Models (2308.15727v2)

Published 30 Aug 2023 in cs.CL

Abstract: LLMs have been shown to memorize their training data, which can be extracted through specifically designed prompts. As the scale of datasets continues to grow, privacy risks arising from memorization have attracted increasing attention. Quantifying LLM memorization helps evaluate potential privacy risks. However, prior work on quantifying memorization requires access to the precise original data or incurs substantial computational overhead, making it difficult to apply to real-world LLMs. To address this, we propose a fine-grained, entity-level definition that quantifies memorization under conditions and metrics closer to real-world scenarios. In addition, we present an approach for efficiently extracting sensitive entities from autoregressive LLMs. We conduct extensive experiments based on the proposed definition, probing LLMs' ability to reconstruct sensitive entities under different settings. We find that LLMs have strong memorization at the entity level and are able to reproduce the training data even with partial leakages. The results demonstrate that LLMs not only memorize their training data but also understand associations between entities. These findings necessitate that trainers of LLMs exercise greater prudence regarding model memorization, adopting memorization mitigation techniques to preclude privacy violations.
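To make the probing setup concrete, the sketch below (not the authors' exact pipeline) shows one way to test entity-level reconstruction: prompt a causal LM with a prefix that partially leaks a training sentence, decode greedily, and check whether a held-out entity string is reproduced. The model name, prompt, and entity are illustrative placeholders.

```python
# Minimal sketch of an entity-reconstruction probe (assumptions: model name,
# prompt format, and target entity are placeholders, not the paper's data).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM trained on the probed corpus
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def entity_reconstructed(prefix: str, entity: str, max_new_tokens: int = 16) -> bool:
    """Greedy-decode a continuation of `prefix` and report whether the
    held-out entity string appears in it (an entity-level memorization hit)."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # deterministic decoding isolates memorization
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return entity in continuation

# Hypothetical probe: the prefix is a partial leak of a training sentence,
# the entity is the sensitive span we test for reconstruction.
prefix = "Contact the project lead, Dr."
entity = "Jane Doe"  # placeholder, not real data
print(entity_reconstructed(prefix, entity))
```

In an actual evaluation, such a check would be run over many (prefix, entity) pairs drawn from the training corpus, and the hit rate would serve as an entity-level memorization estimate.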

Authors (4)
  1. Zhenhong Zhou (15 papers)
  2. Jiuyang Xiang (2 papers)
  3. Chaomeng Chen (2 papers)
  4. Sen Su (25 papers)
Citations (4)