Enhancing Data Privacy in Large Language Models through Private Association Editing (2406.18221v3)
Published 26 Jun 2024 in cs.CL and cs.AI
Abstract: The text-generation capabilities of LLMs call for a significant redesign of privacy-preserving solutions in data-intensive applications, since LLMs tend to memorize and emit private information when maliciously prompted. In this paper, we introduce Private Association Editing (PAE), a novel defense against private data leakage. PAE is designed to effectively remove Personally Identifiable Information (PII) without retraining the model. Experimental results demonstrate the effectiveness of PAE compared with alternative baseline methods. We believe PAE will serve as a critical tool in the ongoing effort to protect data privacy in LLMs, encouraging the development of safer models for real-world applications.
- Davide Venditti
- Elena Sofia Ruzzetti
- Giancarlo A. Xompero
- Cristina Giannone
- Andrea Favalli
- Raniero Romagnoli
- Fabio Massimo Zanzotto