
Randomised Composition and Small-Bias Minimax (2208.12896v1)

Published 26 Aug 2022 in cs.CC and quant-ph

Abstract: We prove two results about randomised query complexity $\mathrm{R}(f)$. First, we introduce a "linearised" complexity measure $\mathrm{LR}$ and show that it satisfies an inner-optimal composition theorem: $\mathrm{R}(f\circ g) \geq \Omega(\mathrm{R}(f) \mathrm{LR}(g))$ for all partial $f$ and $g$, and moreover, $\mathrm{LR}$ is the largest possible measure with this property. In particular, $\mathrm{LR}$ can be polynomially larger than previous measures that satisfy an inner composition theorem, such as the max-conflict complexity of Gavinsky, Lee, Santha, and Sanyal (ICALP 2019). Our second result addresses a question of Yao (FOCS 1977). He asked if $\epsilon$-error expected query complexity $\bar{\mathrm{R}}_{\epsilon}(f)$ admits a distributional characterisation relative to some hard input distribution. Vereshchagin (TCS 1998) answered this question affirmatively in the bounded-error case. We show that an analogous theorem fails in the small-bias case $\epsilon=1/2-o(1)$.
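For readers less familiar with the notation, the composition bound can be unpacked as follows; this sketch uses the standard query-complexity conventions, which are an assumption here rather than a quotation from the paper. For a (possibly partial) outer function $f$ on $n$ bits and inner function $g$ on $m$ bits, the composed function is

$$(f \circ g)(x^{1}, \ldots, x^{n}) = f\bigl(g(x^{1}), \ldots, g(x^{n})\bigr), \qquad x^{i} \in \{0,1\}^{m},$$

so an algorithm for $f \circ g$ must, in effect, evaluate copies of $g$ in order to supply inputs to $f$. The inner-optimal composition theorem from the abstract then reads

$$\mathrm{R}(f \circ g) \geq \Omega\bigl(\mathrm{R}(f)\,\mathrm{LR}(g)\bigr),$$

where "inner-optimal" means $\mathrm{LR}$ is the largest measure $M$ for which $\mathrm{R}(f \circ g) \geq \Omega(\mathrm{R}(f)\,M(g))$ holds for all partial $f$ and $g$.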

Citations (4)
