Improving Monte Carlo randomized approximation schemes (1411.4074v1)

Published 14 Nov 2014 in math.ST, cs.CC, math.PR, and stat.TH

Abstract: Consider a central problem in randomized approximation schemes that use a Monte Carlo approach. Given a sequence of independent, identically distributed random variables $X_1,X_2,\ldots$ with mean $\mu$ and standard deviation at most $c \mu$, where $c$ is a known constant, and $\epsilon,\delta > 0$, create an estimate $\hat \mu$ for $\mu$ such that $\text{P}(|\hat \mu - \mu| > \epsilon \mu) \leq \delta$. This technique has been used for building randomized approximation schemes for the volume of a convex body, the permanent of a nonnegative matrix, the number of linear extensions of a poset, the partition function of the Ising model and many other problems. Existing methods use (to the leading order) $19.35 (c/\epsilon)^2 \ln(\delta^{-1})$ samples. This is the best possible number up to the constant factor, and it is an open question as to what is the best constant possible. This work gives an easy to apply estimate that only uses $6.96 (c/\epsilon)^2 \ln(\delta^{-1})$ samples in the leading order.
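
For context, the baseline that work in this line improves on is the median-of-means estimator: average the samples within batches so that each batch mean is within $\epsilon \mu$ of $\mu$ with constant probability, then take the median across batches to drive the failure probability below $\delta$. The sketch below is a minimal Python illustration of that textbook construction, not the improved estimator from this paper; the batch size and batch count are the simple Chebyshev/Chernoff choices, so its leading constant is larger than both the 19.35 cited for prior methods and the 6.96 achieved here.

```python
import math
import random
import statistics

def median_of_means(sample, c, eps, delta):
    """Textbook median-of-means sketch (not the paper's improved scheme).

    sample:     zero-argument callable returning one i.i.d. draw of X
    c:          known bound with std(X) <= c * mean(X)
    eps, delta: target relative error and failure probability
    """
    # Batch size: by Chebyshev, a mean of m draws misses mu by more than
    # eps*mu with probability at most (c/eps)^2 / m, so m = 4*(c/eps)^2
    # makes each batch mean "good" with probability >= 3/4.
    m = max(1, math.ceil(4.0 * (c / eps) ** 2))
    # Batch count: a Chernoff bound gives failure probability <= exp(-k/8)
    # for the median of k batches, so k = 8*ln(1/delta) suffices.
    k = max(1, math.ceil(8.0 * math.log(1.0 / delta)))
    batch_means = [
        sum(sample() for _ in range(m)) / m
        for _ in range(k)
    ]
    return statistics.median(batch_means)

# Example: estimate the mean of an Exponential(1) variable (mu = 1, c = 1).
if __name__ == "__main__":
    mu_hat = median_of_means(lambda: random.expovariate(1.0),
                             c=1.0, eps=0.1, delta=0.01)
    print(mu_hat)
```

This version uses roughly $32 (c/\epsilon)^2 \ln(\delta^{-1})$ samples in total, which illustrates why tightening the leading constant matters when each sample is itself an expensive Monte Carlo computation, as in the volume, permanent, and partition-function applications listed in the abstract.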
