DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models (2406.11087v3)

Published 16 Jun 2024 in cs.CR, cs.AI, cs.CL, and cs.LG

Abstract: LLMs have repeatedly shown outstanding performance across diverse applications. However, deploying these models can inadvertently risk user privacy, and their significant memory demands during training place a heavy load on resources, raising considerable practical concerns. In this paper, we introduce DP-MemArc, a novel training framework aimed at reducing the memory costs of LLMs while emphasizing the protection of user data privacy. DP-MemArc incorporates side network or reversible network designs to support a variety of differentially private, memory-efficient fine-tuning schemes. Our approach not only achieves memory optimization but also ensures robust privacy protection, keeping user data secure and confidential. Extensive experiments demonstrate that DP-MemArc effectively provides differentially private, memory-efficient fine-tuning across different task scenarios.
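To make the general recipe concrete, here is a minimal sketch of the kind of training loop the abstract describes: a frozen backbone with a small trainable side network, fine-tuned with DP-SGD (per-sample gradient clipping plus calibrated Gaussian noise). This is an illustration of the underlying technique, not the authors' DP-MemArc implementation; the network shapes, clipping norm `C`, and noise multiplier `sigma` are all assumed for the example.

```python
# Illustrative sketch only (NOT the DP-MemArc code): frozen backbone +
# small trainable side network, trained with microbatch DP-SGD.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen "backbone" standing in for a pretrained LM; only the side
# network is trained, which keeps gradient/optimizer memory small.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in backbone.parameters():
    p.requires_grad_(False)

side = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.SGD(side.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

C = 1.0      # per-sample gradient clipping norm (assumed value)
sigma = 1.0  # Gaussian noise multiplier (assumed value)
x = torch.randn(8, 32)            # toy batch
y = torch.randint(0, 2, (8,))

params = [p for p in side.parameters() if p.requires_grad]
grad_sum = [torch.zeros_like(p) for p in params]

# DP-SGD step: clip each sample's gradient, sum, add noise, average.
for i in range(x.size(0)):
    opt.zero_grad()
    loss = loss_fn(side(backbone(x[i:i + 1])), y[i:i + 1])
    loss.backward()
    # Clip this sample's gradient to L2 norm at most C.
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
    scale = (C / (norm + 1e-6)).clamp(max=1.0)
    for g_acc, p in zip(grad_sum, params):
        g_acc += p.grad * scale

# Add Gaussian noise calibrated to the clipping norm, then average.
opt.zero_grad()
for g_acc, p in zip(grad_sum, params):
    noise = torch.normal(0.0, sigma * C, size=p.shape)
    p.grad = (g_acc + noise) / x.size(0)
opt.step()
```

Because the backbone never receives gradients, its optimizer states and parameter gradients are never allocated; the per-sample clipping and noise are what provide the differential-privacy guarantee in schemes of this kind.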

Authors (13)
  1. Yanming Liu (20 papers)
  2. Xinyue Peng (9 papers)
  3. Jiannan Cao (9 papers)
  4. Yuwei Zhang (48 papers)
  5. Chen Ma (90 papers)
  6. Songhang Deng (5 papers)
  7. Mengchen Fu (1 paper)
  8. Xuhong Zhang (61 papers)
  9. Sheng Cheng (40 papers)
  10. Xun Wang (96 papers)
  11. Jianwei Yin (71 papers)
  12. Tianyu Du (34 papers)
  13. Xiaolan Ke (3 papers)