DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models (2406.11087v3)
Abstract: Large language models (LLMs) have repeatedly shown outstanding performance across diverse applications, yet deploying these models can inadvertently risk user privacy. In addition, the substantial memory demands of fine-tuning these models place a heavy load on resources, raising considerable practical concerns. In this paper, we introduce DP-MemArc, a novel training framework aimed at reducing the memory cost of LLMs while emphasizing the protection of user data privacy. DP-MemArc incorporates side network or reversible network designs to support a variety of differentially private, memory-efficient fine-tuning schemes. Our approach not only achieves memory optimization but also ensures robust privacy protection, keeping user data secure and confidential. Extensive experiments demonstrate that DP-MemArc effectively provides differentially private, memory-efficient fine-tuning across different task scenarios.
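The abstract pairs two ideas: memory-efficient fine-tuning, where only a small side network attached to a frozen backbone is trained, and differential privacy via per-example gradient clipping and Gaussian noise (the standard DP-SGD mechanism). The sketch below illustrates how the two combine; it is not the paper's implementation, and every name in it (`SideNetwork`, `dp_sgd_step`, `hidden_dim`, and so on) is an illustrative assumption.

```python
# Minimal sketch (not the paper's code): DP-SGD fine-tuning where only a
# small side network is trained while the LLM backbone stays frozen.
# All module and parameter names here are illustrative assumptions.
import torch
import torch.nn as nn


class SideNetwork(nn.Module):
    """Small trainable head consuming features from a frozen backbone."""
    def __init__(self, hidden_dim: int = 768, num_labels: int = 2):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(), nn.Linear(64, num_labels)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.adapter(features)


def dp_sgd_step(side_net, backbone, batch_x, batch_y, optimizer,
                clip_norm: float = 1.0, noise_multiplier: float = 1.0):
    """One DP-SGD update: clip each per-example gradient to `clip_norm`,
    sum, add Gaussian noise, average, then step the optimizer."""
    params = [p for p in side_net.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    loss_fn = nn.CrossEntropyLoss()

    for x, y in zip(batch_x, batch_y):          # microbatch of size 1
        with torch.no_grad():                    # backbone is frozen, so no
            feats = backbone(x.unsqueeze(0))     # activations are kept for it
        loss = loss_fn(side_net(feats), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-6), max=1.0)
        for acc, g in zip(summed, grads):        # accumulate clipped grads
            acc.add_(g * scale)

    for p, acc in zip(params, summed):
        noise = torch.randn_like(acc) * noise_multiplier * clip_norm
        p.grad = (acc + noise) / len(batch_x)    # noisy averaged gradient
    optimizer.step()
    optimizer.zero_grad()
```

Because gradients are only ever taken with respect to the side network's parameters, the memory cost of backpropagation scales with the small adapter rather than the full model, while the clipping-plus-noise step provides the differential privacy guarantee.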
- Yanming Liu
- Xinyue Peng
- Jiannan Cao
- Yuwei Zhang
- Chen Ma
- Songhang Deng
- Mengchen Fu
- Xuhong Zhang
- Sheng Cheng
- Xun Wang
- Jianwei Yin
- Tianyu Du
- Xiaolan Ke