Selective Pre-training for Private Fine-tuning (2305.13865v3)

Published 23 May 2023 in cs.LG and cs.CR

Abstract: Text prediction models, when used in applications like email clients or word processors, must protect user data privacy and adhere to model size constraints. These constraints are crucial to meet memory and inference time requirements, as well as to reduce inference costs. Building small, fast, and private domain-specific LLMs is a thriving area of research. In this work, we show that a careful pre-training on a *subset* of the public dataset that is guided by the private dataset is crucial to train small LLMs with differential privacy. On standard benchmarks, small models trained with our new framework achieve state-of-the-art performance. In addition to performance improvements, our results demonstrate that smaller models, through careful pre-training and private fine-tuning, can match the performance of much larger models that do not have access to private data. This underscores the potential of private learning for model compression and enhanced efficiency.
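
The abstract describes a three-stage pipeline: select a subset of the public corpus guided by the private data, pre-train a small model on that subset, then fine-tune it on the private data with differential privacy. The sketch below is a minimal, hypothetical illustration of that flow, not the paper's released code; the callables `score_relevance`, `pretrain`, and `dp_finetune`, and the privacy parameters `epsilon`/`delta`, are placeholders standing in for the paper's concrete components.

```python
from typing import Callable, List, Sequence


def select_public_subset(
    public_corpus: Sequence[str],
    score_relevance: Callable[[str], float],
    keep_fraction: float = 0.1,
) -> List[str]:
    """Keep the public examples judged most similar to the private domain."""
    scored = sorted(public_corpus, key=score_relevance, reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]


def train_small_private_lm(
    public_corpus: Sequence[str],
    private_data: Sequence[str],
    score_relevance: Callable[[str], float],
    pretrain: Callable[[Sequence[str]], object],
    dp_finetune: Callable[..., object],
    epsilon: float = 8.0,
    delta: float = 1e-6,
):
    # Step 1: data selection guided by the private dataset.
    subset = select_public_subset(public_corpus, score_relevance)
    # Step 2: non-private pre-training of a small model on the selected subset.
    model = pretrain(subset)
    # Step 3: differentially private fine-tuning (e.g., DP-SGD) on the private data.
    return dp_finetune(model, private_data, epsilon=epsilon, delta=delta)
```

In this reading, the key design choice is that only the selection signal, not the raw private text, influences which public data the small model sees before its privacy-preserving fine-tuning stage.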

Authors (8)
  1. Da Yu (19 papers)
  2. Sivakanth Gopi (37 papers)
  3. Janardhan Kulkarni (52 papers)
  4. Zinan Lin (42 papers)
  5. Saurabh Naik (3 papers)
  6. Tomasz Lukasz Religa (1 paper)
  7. Jian Yin (67 papers)
  8. Huishuai Zhang (64 papers)
Citations (14)

Summary

We haven't generated a summary for this paper yet.