
On the Sample Complexity of Privately Learning Axis-Aligned Rectangles (2107.11526v1)

Published 24 Jul 2021 in cs.LG, cs.CR, and cs.DS

Abstract: We revisit the fundamental problem of learning axis-aligned rectangles over a finite grid $X^d \subseteq \mathbb{R}^d$ with differential privacy. Existing results show that the sample complexity of this problem is at most $\min\left\{ d\cdot\log|X| \;,\; d^{1.5}\cdot\left(\log^*|X|\right)^{1.5}\right\}$. That is, existing constructions either require sample complexity that grows linearly with $\log|X|$, or else it grows super-linearly with the dimension $d$. We present a novel algorithm that reduces the sample complexity to only $\tilde{O}\left(d\cdot\left(\log^*|X|\right)^{1.5}\right)$, attaining an optimal dependency on the dimension without requiring the sample complexity to grow with $\log|X|$. The technique used to attain this improvement involves deleting "exposed" data points on the go, in a fashion designed to avoid the cost of the adaptive composition theorems. The core of this technique may be of independent interest, introducing a new method for constructing statistically efficient private algorithms.
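One way the $d\cdot\log|X|$ term in the existing bound can arise is via the generic exponential-mechanism PAC learner, whose sample complexity scales with the log of the hypothesis-class size (roughly $|X|^{2d}$ rectangles over the grid, i.e. $O(d\cdot\log|X|)$). Below is a minimal sketch of that baseline in one dimension, selecting an interval over a finite grid with the exponential mechanism. It is illustrative only, not the paper's algorithm, and all function names and parameters are assumptions made for this sketch.

```python
# Illustrative baseline only (NOT the paper's algorithm): the generic
# exponential-mechanism learner applied to 1-D intervals over a finite grid.
# Its sample complexity scales with log(#hypotheses), which for d-dimensional
# axis-aligned rectangles over X^d gives the d*log|X| term quoted above.
import itertools
import numpy as np

def exponential_mechanism_interval(xs, ys, grid, epsilon, rng):
    """Privately select an interval [a, b] over `grid` from labeled samples.

    Utility of a candidate interval = -(number of samples it misclassifies).
    Changing one sample changes the utility by at most 1 (sensitivity 1),
    so sampling with probability proportional to exp(eps * u / 2) is eps-DP.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=int)  # labels in {0, 1}
    hypotheses = list(itertools.combinations_with_replacement(grid, 2))  # a <= b
    utilities = np.array([
        -np.sum(((xs >= a) & (xs <= b)).astype(int) != ys)
        for a, b in hypotheses
    ], dtype=float)
    scores = epsilon * utilities / 2.0
    scores -= scores.max()          # numerical stability before exponentiating
    probs = np.exp(scores)
    probs /= probs.sum()
    return hypotheses[rng.choice(len(hypotheses), p=probs)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.arange(64)                          # |X| = 64
    true_a, true_b = 20, 45                       # unknown target interval
    xs = rng.integers(0, 64, size=800)
    ys = ((xs >= true_a) & (xs <= true_b)).astype(int)
    a_hat, b_hat = exponential_mechanism_interval(xs, ys, grid, epsilon=1.0, rng=rng)
    print(f"target interval: [{true_a}, {true_b}], private estimate: [{a_hat}, {b_hat}]")
```

Composing one such selection per endpoint and per dimension is what incurs the $\log|X|$ dependence; the paper's contribution is avoiding that dependence while keeping the dependence on $d$ linear (up to $(\log^*|X|)^{1.5}$ factors).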

Citations (6)
