Adversarial Delays in Online Strongly-Convex Optimization (1605.06201v4)

Published 20 May 2016 in cs.LG, cs.AI, and stat.ML

Abstract: We consider the problem of strongly-convex online optimization in the presence of adversarial delays; in a T-iteration online game, the feedback of the player's query at time t is arbitrarily delayed by an adversary for d_t rounds and delivered before the game ends, at iteration t + d_t - 1. Specifically, for the online-gradient-descent algorithm we show a simple regret bound of $O\big(\sum_{t=1}^{T} \log(1 + \frac{d_t}{t})\big)$. This gives a clear and simple bound without resorting to any distributional or limiting assumptions on the delays. We further show how this result encompasses and generalizes several existing results in the literature. In particular, it matches the celebrated logarithmic regret $O(\log T)$ when there are no delays (i.e., $d_t = 1$) and the regret bound of $O(\tau \log T)$ for constant delays $d_t = \tau$.
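The special cases follow from the general bound: since $\log(1+x) \le x$, constant delays $d_t = \tau$ give $\sum_{t=1}^{T} \log(1 + \tau/t) \le \tau \sum_{t=1}^{T} 1/t = O(\tau \log T)$, and $d_t = 1$ recovers $O(\log T)$ directly. Below is a minimal Python sketch of online gradient descent with delayed feedback in this model; the function names (delayed_ogd, loss_grad), the 1/(mu*s) step-size schedule counted over arrived gradients, and the omission of a projection step onto the feasible set are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def delayed_ogd(T, dim, loss_grad, delays, mu=1.0):
    """Sketch of online gradient descent with adversarially delayed feedback.

    loss_grad(t, x): returns the (sub)gradient of the round-t loss at x.
    delays[t-1]:     the delay d_t >= 1, so the feedback for the query at
                     round t arrives at round t + d_t - 1 (within the game).
    mu:              strong-convexity parameter; step sizes follow the usual
                     1/(mu * s) schedule, applied to the s-th arrived gradient.
    Projection onto a bounded feasible set is omitted for simplicity.
    """
    x = np.zeros(dim)
    pending = {}      # arrival round -> list of delayed gradients
    applied = 0       # number of gradients applied so far
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g = loss_grad(t, x)                      # player's query at round t
        arrival = t + delays[t - 1] - 1          # adversary chooses the delay
        pending.setdefault(arrival, []).append(g)
        for grad in pending.pop(t, []):          # apply feedback arriving now
            applied += 1
            eta = 1.0 / (mu * applied)           # strongly-convex step size
            x = x - eta * grad
    return iterates
```

With delays[t-1] = 1 for all t this reduces to standard online gradient descent for strongly convex losses; with a constant delay tau, each gradient is simply applied tau - 1 rounds late.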

Authors (3)
  1. Daniel Khashabi (83 papers)
  2. Kent Quanrud (29 papers)
  3. Amirhossein Taghvaei (64 papers)
Citations (2)
