Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime (2302.14154v1)
Abstract: We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $\tilde{O} \big( \varepsilon^{-1} \log^{1.5}{d} \big)$ where $d$ is the number of experts. This significantly improves over the best existing regret bounds for the DP non-realizable setting, which are $\tilde{O} \big( \varepsilon^{-1} \min\big\{d, T^{1/3}\log d\big\} \big)$. We also develop an adaptive algorithm for the small-loss setting with regret $O(L_\star \log d + \varepsilon^{-1} \log^{1.5}{d})$ where $L_\star$ is the total loss of the best expert. Additionally, we consider DP online convex optimization in the realizable setting and propose an algorithm with near-optimal regret $O \big( \varepsilon^{-1} d^{1.5} \big)$, as well as an algorithm for the smooth case with regret $O \big( \varepsilon^{-2/3} (dT)^{1/3} \big)$, both significantly improving over existing bounds in the non-realizable regime.
- Hilal Asi (29 papers)
- Vitaly Feldman (71 papers)
- Tomer Koren (79 papers)
- Kunal Talwar (83 papers)
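
The abstract only states the regret bounds, but to make the realizable experts setting concrete, here is a minimal, hypothetical Python sketch of a "follow the leader, switch on mistake" strategy: the learner follows a single expert and, whenever that expert incurs a loss, privately re-selects a leader via report-noisy-min over cumulative losses. This illustrates why realizability helps (switches happen only after a mistake, and a zero-loss expert, once followed, is never abandoned); it is not the paper's algorithm, and the function name, Laplace noise scale, and switching rule are illustrative assumptions. In particular, the noise calibration ignores the composition accounting over switches that a real privacy analysis would require.

```python
import numpy as np

def dp_realizable_experts(losses, epsilon, rng=None):
    """Hypothetical sketch (not the paper's mechanism): follow one expert,
    and privately re-select a leader only when the current one errs.

    losses : (T, d) array; losses[t, i] is expert i's loss at round t.
    epsilon: privacy parameter used as an illustrative Laplace noise scale
             (a real analysis would account for composition over switches).
    Returns the learner's total loss.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, d = losses.shape
    cum = np.zeros(d)  # cumulative loss of each expert so far

    def noisy_pick():
        # Report-noisy-min: expert with the smallest noisy cumulative loss.
        noise = rng.laplace(scale=1.0 / epsilon, size=d)
        return int(np.argmin(cum + noise))

    leader = noisy_pick()
    total = 0.0
    for t in range(T):
        total += losses[t, leader]
        cum += losses[t]
        if losses[t, leader] > 0:  # current expert erred: switch privately
            leader = noisy_pick()
    return total

# Toy usage: expert 0 is perfect (realizable); the others err randomly.
rng = np.random.default_rng(0)
losses = rng.integers(0, 2, size=(1000, 8)).astype(float)
losses[:, 0] = 0.0
print(dp_realizable_experts(losses, epsilon=1.0, rng=rng))
```

In the realizable regime the zero-loss expert never triggers the switch condition once selected, so the learner's loss is driven by the (noisy) selections made before settling on it; this is the intuition behind regret bounds that depend on $\log d$ and $\varepsilon^{-1}$ rather than on $T$.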