
Non-Asymptotic Query Complexity Bounds

Updated 30 December 2025
  • Non-asymptotic query complexity bounds define the exact number of queries needed with explicit constants, addressing finite parameters like error probability and noise level.
  • They span diverse models including noisy Boolean functions, graph connectivity, learning tasks, cake cutting, and rank-based optimization.
  • The analyses leverage adaptive algorithms, information-theoretic methods, and adversarial techniques to achieve tight performance guarantees and reveal phase transitions in computational effort.

Non-asymptotic query complexity bounds specify the minimum number of queries required to solve computational, statistical, or learning problems in the query model for specific finite values of problem parameters, error probabilities, and noise levels, with all constants and lower-order terms explicit. Unlike asymptotic or big-Theta results, these bounds resolve the precise dependence on sample size, error tolerance, and problem structure, supporting rigorous performance guarantees across classical, noisy, and information-constrained computation.

1. Noisy Query Complexity Bounds for Boolean Functions and Graph Problems

A central thread in non-asymptotic query complexity is the study of computation under independent per-query noise, particularly within the binary noisy-query or noisy decision-tree model. Here, the return value of each binary query is flipped independently with probability $p \in (0,1/2)$ (that is, passed through a $\mathrm{BSC}_p$ channel). The noisy-query complexity $R_{p,\delta}(f)$ is the minimal expected number of queries required to compute a function $f$ with error at most $\delta$.

High-Influence Boolean Functions

For any Boolean function $f : \{0,1\}^n \to \{0,1\}$ with total influence $I(f) = \Omega(n)$—where

\mathrm{Inf}_i(f) = \Pr_x[f(x) \neq f(x \oplus e_i)]\,,

I(f) = \sum_{i=1}^n \mathrm{Inf}_i(f)\,,

the tight non-asymptotic complexity is $R_{p,1/3}(f) = \Theta(n \log n)$. This is sharp: querying each bit $O(\log n)$ times and taking majority votes suffices for the upper bound, while the lower bound uses a three-phase reduction—first, $O(\log n)$ noisy queries per variable, then conditional revealing to attain product posteriors, and finally $O(n)$ noiseless queries to settle the remaining uncertainty. The proof leverages binomial large-deviation lemmas and total-influence decay per query (Gu et al., 7 Feb 2025).
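As a concrete illustration of the upper-bound strategy, the sketch below (an illustrative toy, not the paper's algorithm; the constant `c=12` and the oracle interface are assumptions) recovers all $n$ input bits through a $\mathrm{BSC}_p$ by majority-voting $O(\log n)$ noisy reads per bit, after which $f$ can be evaluated on the estimate:

```python
import math
import random

def noisy_read(bit, p, rng):
    """Read a bit through a BSC_p channel: flip with probability p."""
    return bit ^ (rng.random() < p)

def recover_bits(x, p, c=12, rng=None):
    """Estimate every bit of x by majority vote over c*log(n) noisy reads.

    Per-bit failure probability decays exponentially in c*log(n), so a
    union bound over the n bits leaves all estimates simultaneously
    correct with high probability. The constant c=12 is illustrative.
    """
    rng = rng or random.Random(0)
    n = len(x)
    reps = max(1, math.ceil(c * math.log(max(n, 2))))
    estimates = []
    for bit in x:
        ones = sum(noisy_read(bit, p, rng) for _ in range(reps))
        estimates.append(1 if 2 * ones > reps else 0)
    return estimates
```

This spends $O(n \log n)$ queries in total, which for high-influence $f$ the tight bound shows cannot be improved.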

Graph Connectivity under Noisy Queries

Given edge-query access to an unknown $n$-vertex graph, where each answer is flipped independently with probability $p$, the tight complexity for deciding connectivity is $R_{p,1/3}(\mathrm{Conn}_n) = \Theta(n^2 \log n)$: the algorithm reconstructs all ${n \choose 2}$ edges by querying each $O(\log n)$ times and runs BFS on the result. The lower bound employs an adversarial two-component forcing construction and a reduction to posterior indistinguishability across a uniform set of connected/disconnected edge sets (Gu et al., 7 Feb 2025).

Threshold and Counting Functions

For deciding whether at least $k$ out of $n$ bits are ones (the $k$-Threshold problem) with noisy queries and error at most $\delta$, the precise non-asymptotic bound is

R_{p,\delta}(\mathrm{TH}_{n,k}) = (1 \pm o(1))\cdot \frac{n \log(\min\{k,\,n-k+1\}/\delta)}{(1-2p)\log\frac{1-p}{p}}

across the entire $1 \leq k \leq (n+1)/2$ regime; a similar formula holds for noise-tolerant exact counting. These bounds are achieved by adaptive "asymmetric check-bit" procedures with per-bit sequential testing, and lower bounds are proved via three-phase reductions refining classical information-theoretic arguments (Gu et al., 7 Feb 2025).
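The per-bit sequential testing idea can be illustrated with a Wald sequential probability ratio test; the sketch below is a simplified stand-in (not the paper's asymmetric check-bit procedure), but it exhibits the $\log(1/\delta)/((1-2p)\log\frac{1-p}{p})$ expected per-bit cost appearing in the bound:

```python
import math
import random

def sprt_bit(query, p, delta):
    """Sequentially query one noisy bit until the log-likelihood ratio
    favours 0 or 1 with error probability at most delta (Wald's SPRT).

    Each answer shifts the LLR by +/- log((1-p)/p); the expected number
    of queries is ~ log(1/delta) / ((1-2p) log((1-p)/p)).
    """
    step = math.log((1 - p) / p)
    bound = math.log((1 - delta) / delta)
    llr = 0.0
    while abs(llr) < bound:
        llr += step if query() else -step
    return 1 if llr > 0 else 0

def noisy_threshold(x, k, p, delta, rng=None):
    """Decide whether at least k bits of x are one, splitting the error
    budget evenly across bits (an illustrative simplification)."""
    rng = rng or random.Random(0)
    per_bit_delta = delta / len(x)
    ones = 0
    for bit in x:
        ones += sprt_bit(lambda b=bit: b ^ (rng.random() < p),
                         p, per_bit_delta)
    return ones >= k
```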

2. Classical, Adversarial, and Partition Lower Bound Methods

Non-asymptotic bounds also arise in adversarial query settings, including both deterministic and randomized query learning. For Boolean and partial functions, a suite of lower-bound methods has matured, notably the adversary and partition bounds.

Partition Bound

Given $f : \{0,1\}^n \to \{0,1\}$ and error $\epsilon$, the $\epsilon$-partition bound $\mathrm{prt}_\epsilon(f)$ is the optimum of a linear program over weights on subcubes, subject to coverage and consistency constraints. Explicitly,

R_\epsilon(f) \geq \log_2 \mathrm{prt}_\epsilon(f)\,.
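For concreteness, the linear program behind $\mathrm{prt}_\epsilon(f)$ can be written out; the formulation below follows the standard Jain–Klauck style (stated here from the standard definition, so treat the exact constraint form as a paraphrase rather than a quote from the source):

```latex
\mathrm{prt}_\epsilon(f) \;=\; \min_{w \geq 0} \; \sum_{z}\sum_{S} w_{z,S}
\quad \text{subject to} \quad
\begin{cases}
\displaystyle\sum_{S \ni x} w_{f(x),S} \;\geq\; 1-\epsilon & \forall x,\\[6pt]
\displaystyle\sum_{z}\sum_{S \ni x} w_{z,S} \;=\; 1 & \forall x,
\end{cases}
```

where $S$ ranges over subcubes of $\{0,1\}^n$ and $z$ over outputs; the first family of constraints is the coverage requirement and the second the consistency requirement mentioned above.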

Whereas the adversary and approximate-degree methods yield only $O(\sqrt{n})$ lower bounds on certain functions (e.g., Tribes), the partition bound achieves $\Omega(n)$, strictly stronger by a quadratic factor (0910.4266).

Tightness in Learning and Certificate Complexity

For promise problems, such as 1v2-Cycle (distinguishing an $n$-cycle from two $n/2$-cycles), precise non-asymptotic deterministic and randomized lower bounds are established by certificate complexity and explicit adversary arguments. For instance, the deterministic query complexity for 1v2-Cycle is exactly $n^2/512$, and the $1/6$-approximate certificate complexity is $n/4$, with corresponding round lower bounds for adaptive MPC (Charikar et al., 2020). These adversarial techniques generalize to a wide class of partial functions and constrained communication models.

3. Non-asymptotic Query Complexity in Stochastic and Heavy-Hitter Identification

In modern learning-theoretic and statistical tasks, non-asymptotic query/sample complexity bounds underpin performance guarantees for support recovery and threshold identification.

Threshold-based Support Identification

Given access to an oracle returning samples from an unknown distribution $\mathcal{P}$ over $[k]$, the task is to identify all elements $i$ with $p_i \geq \gamma$. For index queries (full label reveals), KL-UCB algorithms achieve

O\left(\max_{j \in \{m,\,m+1\}} \frac{\log (k/\delta)}{(p_j-\gamma)^2}\right)

queries for confidence $1-\delta$. For noisy or pairwise-comparison queries (only "same" vs "different"), non-asymptotic bounds scale as $O(\log (k/\delta)/(p_j-\gamma)^2)$ after an explicit two-phase clustering and testing procedure, both matching lower bounds up to explicit constants (Sarmasarkar et al., 2020).

1-Identification in Bandits

For deciding whether any arm in a $K$-armed stochastic bandit has mean at least a threshold $\mu_0$, near-optimal non-asymptotic sample complexity of order

E[\tau] \leq \gamma\left[\frac{\ln (1/\delta)}{\Delta_{0,1}^2} + \left(\ln \frac{K}{\Delta_{0,1}^2}\right) H_1^{pos}\right]

is achieved, where the gaps $\Delta_{0,a}$ and the collective hardness $H_1^{pos}$ are explicit in the problem parameters. Lower bounds match up to polylogarithmic factors, derived by change-of-measure arguments (Li et al., 8 Jun 2025).

4. Query Complexity Bounds in Learning and Model-Theoretic Frameworks

The number of equivalence and membership queries for exact learning depends on combinatorial parameters such as the Littlestone dimension $\mathrm{Ldim}(\mathcal{C})$ and the consistency dimension $\mathcal{C}(\mathcal{C},\mathcal{H})$. Non-asymptotic results in this vein include:

  • Pure equivalence-query complexity: $\mathrm{LC}^{EQ}(\mathcal{C},\mathcal{H}) \geq \max\{\mathrm{Ldim}(\mathcal{C})+1,\, \mathcal{C}(\mathcal{C},\mathcal{H})\}$, with matching linear or exponential upper bounds.
  • In the randomized case, the expected number of queries needed is $O(\mathrm{Ldim}(\mathcal{C}))$ (Chase et al., 2019).

Applications span DFA learning, ω\omega-regular languages, and other structured hypothesis classes, with tight, model-dependent non-asymptotic estimates for query efficiency. Connections to compression schemes and model-theoretic stability (nfcp) are established through these new combinatorial invariants.

5. Query Complexity in Cake Cutting and Fair Division

In fair division (Robertson–Webb model), non-asymptotic bounds precisely determine the number of queries required for various fairness notions:

  • For three-player envy-free cake-cutting with connected pieces, both upper and lower bounds are $\Theta(\log(1/\epsilon))$ in the precision parameter.
  • For $n=2$ players, perfect and equitable allocations likewise require $\Theta(\log(1/\epsilon))$ queries.
  • All continuous moving-knife protocols with bounded devices and cuts can be simulated in $O(r\log(1/\epsilon))$ queries for $r$ steps, yielding finite, constant-tight non-asymptotic simulation results (Brânzei et al., 2017).
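The $\log(1/\epsilon)$ dependence comes from binary search over the cake; a minimal sketch, assuming a continuous, nondecreasing Robertson–Webb eval oracle for an agent's value of the prefix $[0, x]$:

```python
def binary_search_cut(evaluate, eps):
    """Locate a cut point x whose prefix value is ~1/2, to interval
    precision eps, in ceil(log2(1/eps)) eval queries.

    Assumes evaluate is continuous and nondecreasing with
    evaluate(0) = 0 and evaluate(1) = 1, so the bracketing interval
    always contains a point of value exactly 1/2.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if evaluate(mid) < 0.5:
            lo = mid  # half-value point lies to the right
        else:
            hi = mid  # half-value point lies to the left (or at mid)
    return (lo + hi) / 2, queries
```

Each halving of the interval costs one query, which is the mechanism behind the $\Theta(\log(1/\epsilon))$ upper bounds above (the matching lower bounds are the hard part).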

A summary table (see source) details upper and lower bounds for major fairness concepts.

6. Rank-Based and Zero-Order Optimization Under Ordinal Feedback

For stochastic smooth convex and nonconvex optimization using only ordinal/rank-based feedback, explicit non-asymptotic query complexities match those of value-based methods, even when only rankings (not function values) are available:

  • For strongly convex functions, $O(d L G_u^2/(\mu^2 \epsilon))$ queries suffice to reach function suboptimality $\epsilon$, and for nonconvex functions $O(d L G_u^2/\epsilon^2)$ queries are sufficient for $\epsilon$-stationary points.
  • These rates match information-theoretic minima and show no penalty for using purely ordinal information (Ye, 22 Dec 2025).

The analysis combines concentration for order statistics of Gaussian samples, adaptive rank-weighted direction selection, and high-probability martingale control, yielding tight, instance-dependent bounds.
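To make the ordinal-feedback setting concrete, here is a toy rank-based search (an illustrative sketch, not the cited paper's algorithm): it never reads a function value, only a hypothetical pairwise-comparison oracle `is_better(a, b)` reporting which of two points has the smaller objective:

```python
import random

def rank_based_search(is_better, x0, step=0.1, m=8, iters=300, rng=None):
    """Zero-order minimization from ordinal feedback only.

    At each iteration, sample m Gaussian perturbations of the current
    point and keep whichever candidate the comparison oracle ranks best.
    Because the incumbent is always among the compared points, the
    (unobserved) objective value is monotonically non-increasing.
    """
    rng = rng or random.Random(0)
    x = list(x0)
    for _ in range(iters):
        candidates = [[xi + step * rng.gauss(0.0, 1.0) for xi in x]
                      for _ in range(m)]
        best = x
        for c in candidates:
            if is_better(c, best):  # only rankings, never values
                best = c
        x = best
    return x
```

The step size, sample count, and selection rule here are crude; the point of the cited result is that suitably tuned ordinal schemes match the value-based query complexities quoted above.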

7. Significance and Broader Impact

Non-asymptotic query complexity bounds provide sharp, instance-parameterized decision guidelines crucial for real-world systems where $n$, $k$, $\delta$, or $p$ are moderate or finite. For many classic and modern models—including noisy computation, bandit identification, heavy-hitter estimation, cake cutting, and machine learning—these results resolve optimal performance up to explicit constants, clarify phase transitions in computational effort as parameters vary, and support the design of practical algorithms whose performance tightly aligns with theoretical minimums. In several settings, non-asymptotic results reveal exact characterizations (not just $\Theta$-notation) and uncover separations or matches between adversary, polynomial, and partition methods, establishing a unified landscape for query-efficient algorithm design.
