A Generalization of Self-Improving Algorithms (2003.08329v2)

Published 18 Mar 2020 in cs.CG and cs.CC

Abstract: Ailon et al. [SICOMP'11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances $x_1,\ldots,x_n$ follow some unknown \emph{product distribution}. That is, $x_i$ comes from a fixed unknown distribution $\mathsf{D}_i$, and the $x_i$'s are drawn independently. After spending $O(n^{1+\varepsilon})$ time in a learning phase, the subsequent expected running time is $O((n + H)/\varepsilon)$, where $H \in \{H_\mathrm{S}, H_\mathrm{DT}\}$, and $H_\mathrm{S}$ and $H_\mathrm{DT}$ are the entropies of the distributions of the sorting and DT outputs, respectively. In this paper, we allow dependence among the $x_i$'s under the \emph{group product distribution}. There is a hidden partition of $[1,n]$ into groups; the $x_i$'s in the $k$-th group are fixed unknown functions of the same hidden variable $u_k$; and the $u_k$'s are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map $u_k$ to the $x_i$'s are well-behaved. After an $O(\mathrm{poly}(n))$-time training phase, we achieve $O(n + H_\mathrm{S})$ and $O(n\alpha(n) + H_\mathrm{DT})$ expected running times for sorting and DT, respectively, where $\alpha(\cdot)$ is the inverse Ackermann function.
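To make the input model concrete, here is a minimal Python sketch of the group product distribution described in the abstract. All names (`GROUPS`, `make_instance`, `output_entropy`) and the specific linear maps are illustrative assumptions, not constructions from the paper; the point is only that the $x_i$'s within a group are functions of one shared hidden variable $u_k$, while the groups themselves are independent.

```python
import math
import random
from collections import Counter

n = 8
# Hidden partition of [1, n] into groups (0-indexed here); unknown
# to the algorithm in the paper's model, fixed by hand in this sketch.
GROUPS = {0: [0, 1, 2], 1: [3, 4], 2: [5, 6, 7]}

# Fixed "well-behaved" maps u_k -> x_i; simple monotone linear
# stand-ins chosen purely for illustration.
F = {i: (lambda j: (lambda u: 2.0 * u + 0.1 * j))(i) for i in range(n)}

def make_instance():
    """Draw one instance x_1..x_n: each group's hidden variable u_k is
    sampled independently, and every x_i in group k is a fixed function
    of that same u_k (dependence within a group, independence across)."""
    x = [0.0] * n
    for members in GROUPS.values():
        u_k = random.random()      # u_k ~ some unknown distribution
        for i in members:
            x[i] = F[i](u_k)
    return x

def output_entropy(samples):
    """Empirically estimate H_S: the Shannon entropy (in bits) of the
    distribution of the *sorting output*, i.e. of the permutation that
    sorts each instance."""
    perms = Counter(
        tuple(sorted(range(len(x)), key=x.__getitem__)) for x in samples
    )
    total = sum(perms.values())
    return -sum(c / total * math.log2(c / total) for c in perms.values())

if __name__ == "__main__":
    samples = [make_instance() for _ in range(10000)]
    # With monotone maps inside each group, only a limited set of sorted
    # orders can occur, so H_S stays well below log2(n!) and an
    # entropy-sensitive sorter can beat Theta(n log n) on average.
    print(f"estimated H_S ~ {output_entropy(samples):.2f} bits")
```

In the paper's setting the partition and the maps are unknown and must be learned during the training phase; this sketch only samples from the model to show why the output entropy $H_\mathrm{S}$, rather than $n\log n$, governs the achievable expected running time.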

Citations (1)
