
Universal Perfect Samplers for Incremental Streams (2407.04931v1)

Published 6 Jul 2024 in cs.DS and math.PR

Abstract: If $G : \mathbb{R}_+ \to \mathbb{R}_+$, the $G$-moment of a vector $\mathbf{x}\in\mathbb{R}_+^n$ is $G(\mathbf{x}) = \sum_{v\in[n]} G(\mathbf{x}(v))$, and the $G$-sampling problem is to select an index $v_*\in[n]$ according to its contribution to the $G$-moment, i.e., such that $\Pr(v_*=v) = G(\mathbf{x}(v))/G(\mathbf{x})$. Approximate $G$-samplers may introduce multiplicative and/or additive errors to this probability, and some have a non-trivial probability of failure. In this paper we focus on the exact $G$-sampling problem, where $G$ is selected from the class $\mathcal{G}$ of Laplace exponents of non-negative, one-dimensional Lévy processes. This class includes several well-studied families, such as $p$th moments $G(z)=z^p$ for $p\in[0,1]$, logarithms $G(z)=\log(1+z)$, and Cohen and Geri's soft concave sublinear functions, which are used to approximate concave sublinear functions, including cap statistics.

We develop $G$-samplers for a vector $\mathbf{x} \in \mathbb{R}_+^n$ that is presented as an incremental stream of positive updates. In particular:

* For any $G\in\mathcal{G}$, we give a very simple $G$-sampler that uses 2 words of memory and stores at all times an index $v\in[n]$ such that $\Pr(v_*=v)$ is exactly $G(\mathbf{x}(v))/G(\mathbf{x})$.
* We give a "universal" $\mathcal{G}$-sampler that uses $O(\log n)$ words of memory w.h.p. and, given any $G\in\mathcal{G}$ at query time, produces an exact $G$-sample.

With an overhead of a factor of $k$, both samplers can be used to $G$-sample a sequence of $k$ indices, with or without replacement. Our sampling framework is simple and versatile, and can easily be generalized to sampling from more complex objects like graphs and hypergraphs.
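The abstract does not reproduce the construction behind the 2-word sampler, but the linear case $G(z)=z$ illustrates the memory model it refers to: classical weighted reservoir sampling keeps exactly one stored index plus one running total, and is already an exact sampler for that choice of $G$. Below is a minimal sketch, assuming updates arrive as (index, delta) pairs with delta > 0; the brute-force offline function is included only as a reference for the target distribution, and neither function is the paper's construction for general $G\in\mathcal{G}$.

```python
import math
import random

def g_sample_offline(x, G):
    """Brute-force exact G-sampler: draws v with probability
    G(x[v]) / sum_u G(x[u]).  Needs the whole vector x in hand,
    so it serves only as a reference for the target distribution."""
    weights = [G(z) for z in x]
    return random.choices(range(len(x)), weights=weights)[0]

def linear_stream_sampler(stream):
    """Two-word streaming sampler for the linear case G(z) = z.
    State: the current sample s and the running total W.  On each
    positive update (v, delta), the stored index is replaced with
    probability delta / W, which by induction keeps
    Pr(s = v) = x(v) / W exact at every point in the stream."""
    s, W = None, 0.0
    for v, delta in stream:
        W += delta
        if random.random() * W < delta:
            s = v
    return s

# Hypothetical example: x = (3, 1) built from three positive updates.
stream = [(0, 2.0), (1, 1.0), (0, 1.0)]
print(linear_stream_sampler(stream))  # 0 w.p. 3/4, 1 w.p. 1/4
print(g_sample_offline([3.0, 1.0], lambda z: math.log1p(z)))  # G(z) = log(1+z)
```

The exactness of the linear sketch follows from a telescoping product: an index set at the $j$th update (with increment $\Delta_j$ and running total $W_j$) survives all later updates with probability $\prod_{i>j} W_{i-1}/W_i = W_j/W_m$, so it is returned with probability $\Delta_j/W_m$. Extending this exactness to every Laplace exponent $G\in\mathcal{G}$ within the same 2-word budget is the paper's contribution.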
