
Self-improving Algorithms for Coordinate-wise Maxima (1204.0824v1)

Published 3 Apr 2012 in cs.CG and cs.DS

Abstract: Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the \emph{self-improving setting}. We have $n$ (unknown) independent distributions $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_n$ of planar points. An input point set $(p_1, p_2, \ldots, p_n)$ is generated by taking an independent sample $p_i$ from each $\mathcal{D}_i$, so the input distribution $\mathcal{D}$ is the product $\prod_i \mathcal{D}_i$. A self-improving algorithm repeatedly gets input sets from the distribution $\mathcal{D}$ (which is \emph{a priori} unknown) and tries to optimize its running time for $\mathcal{D}$. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution $\mathcal{D}$. Let $\mathrm{OPT}_{\mathcal{D}}$ denote the expected depth of an \emph{optimal} linear comparison tree computing the maxima for distribution $\mathcal{D}$. Our algorithm eventually has an expected running time of $O(\mathrm{OPT}_{\mathcal{D}} + n)$, even though it did not know $\mathcal{D}$ to begin with. Our result requires new tools for understanding linear comparison trees that compute maxima. We show how to convert general linear comparison trees into very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, in which the algorithm tries to determine the likeliest maximal point with minimal computation. This allows the running time to be truly optimal for the distribution $\mathcal{D}$.
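For context, the problem the paper studies is computing the maximal ("staircase") points of a planar point set: a point $p$ is maximal if no other point $q$ satisfies $q_x \ge p_x$ and $q_y \ge p_y$. The sketch below is the textbook worst-case $O(n \log n)$ sweep, not the paper's self-improving algorithm; the paper's contribution is an algorithm whose expected time adapts to the (initially unknown) input distribution $\mathcal{D}$.

```python
# Coordinate-wise maxima (Pareto staircase) of a planar point set.
# Classic sweep: sort by x descending, then keep each point whose y
# strictly exceeds every y seen so far. Any discarded point is dominated
# by some point to its right with a larger-or-equal y.

def maxima(points):
    best_y = float("-inf")
    out = []
    # Ties in x are ordered by descending y, so the dominated duplicate
    # of an x-coordinate is correctly skipped.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            out.append((x, y))
            best_y = y
    return out  # maximal points in decreasing-x / increasing-y order
```

For example, `maxima([(1, 1), (2, 3), (3, 2), (0, 4)])` returns `[(3, 2), (2, 3), (0, 4)]`; the point `(1, 1)` is dropped because `(2, 3)` dominates it.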

Citations (8)
