Online Strongly Convex Optimization with Unknown Delays (2103.11354v1)

Published 21 Mar 2021 in cs.LG and math.OC

Abstract: We investigate the problem of online convex optimization with unknown delays, in which the feedback of a decision arrives with an arbitrary delay. Previous studies have presented a delayed variant of online gradient descent (OGD), and achieved a regret bound of $O(\sqrt{T+D})$ by utilizing only the convexity condition, where $D$ is the sum of delays over $T$ rounds. In this paper, we further exploit strong convexity to improve the regret bound. Specifically, we first extend the delayed variant of OGD to strongly convex functions, and establish a better regret bound of $O(d\log T)$, where $d$ is the maximum delay. The essential idea is to let the learning rate decay linearly with the total number of feedbacks received so far. Furthermore, we consider the more challenging bandit setting, and obtain similar theoretical guarantees by incorporating the classical multi-point gradient estimator into our extended method. To the best of our knowledge, this is the first work that solves online strongly convex optimization under the general delayed setting.
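The key idea from the abstract, a delayed OGD variant whose learning rate decays linearly with the number of feedbacks received so far, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the gradient oracle `grad_fn`, the delay sequence, and the projection onto a Euclidean ball are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def delayed_ogd_strongly_convex(grad_fn, delays, T, dim, lam=1.0, radius=1.0):
    """Sketch of OGD with delayed feedback for lam-strongly convex losses.

    grad_fn(t, x) is a hypothetical oracle returning the gradient of the
    round-t loss at x; delays[t] is the (unknown to the learner) delay of
    round t's feedback. Following the abstract's idea, the step size decays
    as 1/(lam * s), where s counts feedbacks received so far.
    """
    x = np.zeros(dim)
    s = 0                 # number of feedbacks received so far
    pending = {}          # arrival round -> list of source rounds
    history = []          # decisions played, indexed by round
    for t in range(T):
        history.append(x.copy())
        # feedback of round t will arrive at round t + delays[t]
        pending.setdefault(t + delays[t], []).append(t)
        # process every feedback arriving at the current round
        for src in pending.pop(t, []):
            s += 1
            eta = 1.0 / (lam * s)   # linearly decaying learning rate
            x = x - eta * grad_fn(src, history[src])
            # project back onto the ball of radius `radius` (assumed domain)
            nrm = np.linalg.norm(x)
            if nrm > radius:
                x = x * (radius / nrm)
    return x
```

On a simple strongly convex example such as f_t(x) = (λ/2)·||x − target||², the iterates approach the minimizer even though every gradient arrives with a fixed delay.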

Authors (3)
  1. Yuanyu Wan (23 papers)
  2. Wei-Wei Tu (29 papers)
  3. Lijun Zhang (239 papers)
Citations (14)
