Machine Unlearning in Large Language Models (2404.16841v1)

Published 3 Feb 2024 in cs.CR

Abstract: Recently, LLMs have emerged as a notable field, attracting significant attention for their ability to automatically generate intelligent content for various application domains. However, LLMs still suffer from significant security and privacy issues. For example, LLMs may leak private user information through hacking attacks or targeted prompts. To address this problem, this paper introduces a novel machine unlearning framework into LLMs. Our objectives are to prevent LLMs from producing harmful, hallucinatory, or privacy-compromising responses while retaining their standard output capabilities. To accomplish this, we use an evaluative model to pinpoint dialogues needing unlearning. We also establish a distance loss to function as the model's negative loss, diverting it from previous undesirable outputs. Furthermore, we determine the expected output's cluster mean to formulate a positive loss, directing the model's outputs toward preferable outcomes without compromising its reasoning abilities and performance. Experimental results show that our approach effectively meets unlearning objectives without substantially compromising model performance.
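
The abstract describes a two-term objective: a distance ("negative") loss that pushes the model away from previously undesirable outputs, and a "positive" loss that pulls outputs toward the cluster mean of expected responses. The snippet below is a minimal sketch of how such a combined loss could look in PyTorch, assuming representation-level MSE distances; the function and parameter names (`unlearning_loss`, `alpha`, `beta`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the combined unlearning objective described in the abstract.
# All names and the choice of MSE distances are assumptions for illustration,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def unlearning_loss(hidden_states, undesired_states, expected_cluster_mean,
                    alpha=1.0, beta=1.0):
    """Combine a 'negative' distance loss that diverts representations from
    previously undesirable outputs with a 'positive' loss that directs them
    toward the cluster mean of preferred outputs.

    hidden_states:         (batch, dim) current model representations
    undesired_states:      (batch, dim) representations of outputs to forget
    expected_cluster_mean: (dim,)       mean of desired-output representations
    """
    # Negative loss: increase distance to the undesirable outputs
    # (the negated mean-squared distance acts as the model's negative loss).
    neg = -F.mse_loss(hidden_states, undesired_states)

    # Positive loss: decrease distance to the desired cluster mean,
    # steering outputs toward preferable outcomes.
    pos = F.mse_loss(hidden_states,
                     expected_cluster_mean.expand_as(hidden_states))

    return alpha * neg + beta * pos
```

In such a setup, the evaluative model mentioned in the abstract would select which dialogues feed the `undesired_states`, while `alpha` and `beta` trade off forgetting strength against retention of normal performance.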

Authors (7)
  1. Kongyang Chen (17 papers)
  2. Zixin Wang (31 papers)
  3. Bing Mi (6 papers)
  4. Waixi Liu (1 paper)
  5. Shaowei Wang (57 papers)
  6. Xiaojun Ren (4 papers)
  7. Jiaxing Shen (14 papers)
Citations (6)
