Differentially Private Model Compression (2206.01838v1)

Published 3 Jun 2022 in cs.LG and cs.CR

Abstract: Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream NLP tasks while simultaneously guaranteeing differential privacy. However, the inference cost of these models, which consist of hundreds of millions of parameters, can be prohibitively large. Hence, in practice, LLMs are often compressed before they are deployed in specific applications. In this paper, we initiate the study of differentially private model compression and propose frameworks for achieving 50% sparsity levels while maintaining nearly full performance. We demonstrate these ideas on standard GLUE benchmarks using BERT models, setting benchmarks for future research on this topic.
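
The abstract combines two ingredients: differentially private fine-tuning and compression of the resulting model to 50% weight sparsity. The sketch below is a minimal, hypothetical illustration of those two ingredients only, not the paper's proposed framework: it runs DP-SGD (per-example gradient clipping plus Gaussian noise) on a toy linear classifier standing in for a fine-tuned BERT head, then applies unstructured L1 magnitude pruning at 50% with PyTorch's pruning utilities. The model, data, and hyperparameters are placeholders, and privacy accounting is omitted.

```python
# Illustrative sketch (not the paper's framework): DP-SGD fine-tuning followed
# by 50% magnitude pruning. Toy model and random data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

torch.manual_seed(0)

model = nn.Linear(64, 2)                     # stand-in for a fine-tuned classification head
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

clip_norm = 1.0        # per-example gradient clipping bound C
noise_multiplier = 1.0 # sigma; Gaussian noise std is sigma * C
batch_size = 32

def dp_sgd_step(x_batch, y_batch):
    """One DP-SGD step: clip each per-example gradient, sum, add noise, average."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(x_batch, y_batch):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    model.zero_grad()
    for p, s in zip(model.parameters(), summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p.grad = (s + noise) / batch_size
    optimizer.step()

# A few DP-SGD steps on random placeholder data (a real run would also track
# the (epsilon, delta) budget with a privacy accountant).
for _ in range(10):
    x_batch = torch.randn(batch_size, 64)
    y_batch = torch.randint(0, 2, (batch_size,))
    dp_sgd_step(x_batch, y_batch)

# Compress: unstructured L1 (magnitude) pruning to 50% sparsity.
prune.l1_unstructured(model, name="weight", amount=0.5)
prune.remove(model, "weight")  # bake the pruning mask into the weight tensor
sparsity = (model.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")
```

Because pruning happens after training and only post-processes weights learned under differential privacy, it does not consume additional privacy budget; the pruning-aware training frameworks studied in the paper are more involved than this post-hoc sketch.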

Authors (5)
  1. Fatemehsadat Mireshghallah (26 papers)
  2. Arturs Backurs (33 papers)
  3. Lukas Wutschitz (13 papers)
  4. Janardhan Kulkarni (52 papers)
  5. Huseyin A Inan (3 papers)
Citations (12)