RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models (2406.01983v1)

Published 4 Jun 2024 in cs.CL

Abstract: With the passage of Right to Be Forgotten (RTBF) regulations and the scaling up of LLM training datasets, research on model unlearning in LLMs has become increasingly important. Before the era of LLMs, machine unlearning research focused mainly on classification tasks in models with small parameter counts, where the content to be forgotten or retained is clear and straightforward. However, as parameter sizes have grown and tasks have become more complex, balancing forget quality against model utility has become more challenging, especially in scenarios involving personal data rather than classification results. Existing methods based on gradient ascent and its variants often struggle with this balance, leading to unintended information loss or partial forgetting. To address this challenge, we propose RKLD, a novel Reverse KL-Divergence-based Knowledge Distillation unlearning algorithm for LLMs, targeting the unlearning of personal information. In our experiments, RKLD achieves significant forget quality while effectively maintaining model utility.
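
The abstract does not spell out the loss, but its named ingredient is a reverse KL distillation objective. Below is a minimal PyTorch sketch of that direction of the divergence, KL(student || teacher); the function name, the token-level averaging, and the detached teacher in the usage comment are illustrative assumptions, not the paper's exact formulation (in particular, how RKLD constructs the teacher distribution that encodes the forget signal is not described here). Because reverse KL is mode-seeking, the student can drop probability mass on targets the teacher down-weights, rather than being forced to cover the teacher's full distribution as in forward-KL distillation.

```python
import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor) -> torch.Tensor:
    """Token-level reverse KL divergence KL(student || teacher).

    Standard distillation minimizes the forward KL(teacher || student);
    the reverse direction penalizes the student for placing probability
    where the teacher does not, which suits steering it away from
    to-be-forgotten content.
    """
    log_q = F.log_softmax(student_logits, dim=-1)  # student log-probs
    log_p = F.log_softmax(teacher_logits, dim=-1)  # teacher log-probs
    q = log_q.exp()
    # KL(q || p) = sum_v q(v) * (log q(v) - log p(v)), averaged over positions
    return (q * (log_q - log_p)).sum(dim=-1).mean()

# Hypothetical usage with logits of shape (batch, seq_len, vocab):
#   student_logits = student(input_ids).logits
#   teacher_logits = teacher(input_ids).logits.detach()
#   loss = reverse_kl_loss(student_logits, teacher_logits)
```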

Authors (5)
  1. Bichen Wang (3 papers)
  2. Yuzhe Zi (1 paper)
  3. Yixin Sun (12 papers)
  4. Yanyan Zhao (39 papers)
  5. Bing Qin (186 papers)
Citations (4)