Online Algorithms with Limited Data Retention (2404.10997v1)
Abstract: We introduce a model of online algorithms subject to strict constraints on data retention. An online learning algorithm encounters a stream of data points, one per round, generated by some stationary process. Crucially, each data point can request that it be removed from memory $m$ rounds after it arrives. To model the impact of removal, we do not allow the algorithm to store any information or calculations between rounds other than a subset of the data points (subject to the retention constraints). At the conclusion of the stream, the algorithm answers a statistical query about the full dataset. We ask: what level of performance can be guaranteed as a function of $m$? We illustrate this framework for multidimensional mean estimation and linear regression problems. We show it is possible to obtain an exponential improvement over a baseline algorithm that retains all data as long as possible. Specifically, we show that $m = \textsc{Poly}(d, \log(1/\epsilon))$ retention suffices to achieve mean squared error $\epsilon$ after observing $O(1/\epsilon)$ $d$-dimensional data points. This matches the error bound of the optimal, yet infeasible, algorithm that retains all data forever. We also show a nearly matching lower bound on the retention required to guarantee error $\epsilon$. One implication of our results is that data retention laws are insufficient to guarantee the right to be forgotten even in a non-adversarial world in which firms merely strive to (approximately) optimize the performance of their algorithms. Our approach makes use of recent developments in the multidimensional random subset sum problem to simulate the progression of stochastic gradient descent under a model of adversarial noise, which may be of independent interest.
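The retention model described in the abstract is concrete enough to sketch in code. Below is a minimal, hypothetical Python simulation of the baseline policy the paper compares against: keep every data point until its $m$-round deadline expires (the worst case, where every point requests removal), then answer a mean-estimation query from whatever survives. All names here (`stream_mean_with_retention`, etc.) are illustrative assumptions, not the paper's implementation.

```python
import random

def stream_mean_with_retention(points, m):
    """Baseline policy from the abstract: retain each data point for the
    maximum allowed m rounds, then delete it. Per the model, only raw
    data points (no running sums or other state) persist between rounds.

    points: list of d-dimensional points (tuples of floats)
    m: retention horizon in rounds
    This is a sketch under assumed names, not the paper's algorithm.
    """
    memory = []  # list of (arrival_round, point) pairs
    for t, x in enumerate(points):
        # Enforce the retention constraint: drop anything older than m rounds.
        memory = [(s, p) for (s, p) in memory if t - s < m]
        memory.append((t, x))

    # Statistical query at the end of the stream: estimate the mean of the
    # FULL dataset using only the (at most m) retained points.
    retained = [p for (_, p) in memory]
    d = len(retained[0])
    return tuple(sum(p[i] for p in retained) / len(retained) for i in range(d))

# Example: 10,000 standard-normal points in d = 3, retention horizon m = 50.
random.seed(0)
data = [tuple(random.gauss(0, 1) for _ in range(3)) for _ in range(10_000)]
print(stream_mean_with_retention(data, m=50))
```

Because this policy effectively averages only the last $m$ points, its error does not improve no matter how long the stream runs; achieving mean squared error $\epsilon$ would require $m = \Theta(1/\epsilon)$. The paper's exponential improvement comes from a more careful rule for choosing which $\leq m$ points to retain, one that simulates stochastic gradient descent steps via the multidimensional random subset sum problem and thereby matches the error of retaining all $O(1/\epsilon)$ points with only $m = \textsc{Poly}(d, \log(1/\epsilon))$ retention.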