Learning-Augmented Weighted Paging (2011.09076v2)

Published 18 Nov 2020 in cs.DS

Abstract: We consider a natural semi-online model for weighted paging, where at any time the algorithm is given predictions, possibly with errors, about the next arrival of each page. The model is inspired by Belady's classic optimal offline algorithm for unweighted paging, and extends the recently studied model for learning-augmented paging (Lykouris and Vassilvitskii, 2018) to the weighted setting. For the case of perfect predictions, we provide an $\ell$-competitive deterministic and an $O(\log \ell)$-competitive randomized algorithm, where $\ell$ is the number of distinct weight classes. Both these bounds are tight, and imply an $O(\log W)$- and $O(\log \log W)$-competitive ratio, respectively, when the page weights lie between $1$ and $W$. Previously, it was not known how to use these predictions in the weighted setting and only bounds of $k$ and $O(\log k)$ were known, where $k$ is the cache size. Our results also generalize to the interleaved paging setting and to the case of imperfect predictions, with the competitive ratios degrading smoothly from $O(\ell)$ and $O(\log \ell)$ to $O(k)$ and $O(\log k)$, respectively, as the prediction error increases. Our results are based on several insights on structural properties of Belady's algorithm and the sequence of page arrival predictions, and novel potential functions that incorporate these predictions. For the case of unweighted paging, the results imply a very simple potential function based proof of the optimality of Belady's algorithm, which may be of independent interest.
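To make the prediction model concrete, here is a minimal sketch (in Python, not taken from the paper) of Belady-style eviction driven by predicted next-arrival times in the unweighted, perfect-prediction case: on a miss with a full cache, evict the page whose predicted next request lies furthest in the future. The function `serve` and its interface are illustrative assumptions; the paper's contribution is extending this idea to weighted pages and to imperfect predictions.

```python
# Illustrative sketch (not the paper's algorithm): Belady-style eviction using
# predicted next-arrival times, unweighted pages, cache size k.
# On a miss with a full cache, evict the page whose predicted next request is
# furthest in the future. With perfect predictions this is Belady's optimal rule.

def serve(requests, predictions, k):
    """requests: list of page ids; predictions[t]: predicted index of the next
    request of requests[t] after time t (float('inf') if never requested again).
    Returns the number of cache misses."""
    cache = {}          # page -> predicted next arrival time
    misses = 0
    for t, page in enumerate(requests):
        if page in cache:
            cache[page] = predictions[t]          # refresh prediction on a hit
            continue
        misses += 1
        if len(cache) >= k:                       # cache full: evict by prediction
            victim = max(cache, key=cache.get)    # furthest predicted arrival
            del cache[victim]
        cache[page] = predictions[t]
    return misses


# Example with k = 2 and perfect predictions: requests a b a c b.
reqs  = ["a", "b", "a", "c", "b"]
preds = [2, 4, float('inf'), float('inf'), float('inf')]
print(serve(reqs, preds, 2))   # 3 misses (a, b, c); "c" evicts "a", not "b"
```

With perfect predictions the rule above matches Belady's offline-optimal policy for unweighted paging; the weighted setting studied in the paper additionally has to trade off eviction cost against predicted arrival order, which is where the $\ell$-dependent bounds come from.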

Citations (25)
