The square root rule for adaptive importance sampling (1901.02976v2)

Published 10 Jan 2019 in math.ST, stat.CO, and stat.TH

Abstract: In adaptive importance sampling, and other contexts, we have $K>1$ unbiased and uncorrelated estimates $\hat\mu_k$ of a common quantity $\mu$. The optimal unbiased linear combination weights them inversely to their variances, but those weights are unknown and hard to estimate. A simple deterministic square root rule, based on a working model that $\mathrm{Var}(\hat\mu_k)\propto k^{-1/2}$, gives an unbiased estimate of $\mu$ that is nearly optimal under a wide range of alternative variance patterns. We show that if $\mathrm{Var}(\hat\mu_k)\propto k^{-y}$ for an unknown rate parameter $y\in[0,1]$, then the square root rule yields the optimal variance rate with a constant that is too large by a factor of at most $9/8$ for any $0\le y\le 1$ and any number $K$ of estimates. Numerical work shows that the rule is similarly robust to some other patterns with mildly decreasing variance as $k$ increases.
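The combination rule described in the abstract can be sketched in a few lines: under the working model $\mathrm{Var}(\hat\mu_k)\propto k^{-1/2}$, the inverse-variance weights are proportional to $k^{1/2}$ and then normalized to sum to one, which preserves unbiasedness. The sketch below is illustrative, not code from the paper; the function name and the use of a plain list of estimates are assumptions.

```python
import numpy as np

def sqrt_rule_combine(estimates):
    """Combine K unbiased, uncorrelated estimates of a common quantity
    via the square root rule: weight the k-th estimate proportionally
    to k**0.5 (inverse of the working-model variance k**-0.5).
    Function name and interface are illustrative, not from the paper."""
    mu = np.asarray(estimates, dtype=float)
    k = np.arange(1, len(mu) + 1)   # stage index k = 1, ..., K
    w = np.sqrt(k)                  # weights proportional to k^{1/2}
    w /= w.sum()                    # normalize so weights sum to 1 (keeps the estimate unbiased)
    return float(np.dot(w, mu))
```

Because the weights sum to one and are deterministic, the combined estimate stays unbiased regardless of the true (unknown) variance pattern; the paper's result is that its variance rate is near-optimal whenever $\mathrm{Var}(\hat\mu_k)\propto k^{-y}$ with $y\in[0,1]$.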

Authors (2)
