
Wedge Sampling: Algorithms & Applications

Updated 6 February 2026
  • Wedge sampling is a method that allocates computational resources to wedge-like motifs, leveraging length-two paths for efficient statistical estimation.
  • It employs strategies such as center sampling and hybrid edge-based approaches to approximate graph metrics like triangle counts and clustering coefficients with strong probabilistic guarantees.
  • The technique extends to tensor completion, MIPS, and radio interferometry, enabling near-linear sample complexity and improved performance in signal extraction and parameter estimation.

Wedge sampling is a class of algorithmic and measurement strategies that allocate computational resources or experimental observations to "wedge"-like combinatorial or geometric structures, rather than to individual elements. It appears prominently in graph analysis, tensor completion, radio interferometry, and high-dimensional metric search. Across applications, the unifying principle is the strategic use of length-two structures, whether combinatorial motifs (paths of length two in a graph, or bipartite wedge walks), cylindrical k-space subvolumes, or structured sampling patterns, to optimize statistical estimation, computational efficiency, or signal extraction.

1. Wedge Sampling in Graph Analysis

Wedge sampling was introduced as a scalable method to approximate graph triadic measures, primarily triangle counts and clustering coefficients. In an undirected graph G=(V,E), a wedge is a path of length two: an ordered triple (u,v,w) such that \{u,v\},\{v,w\}\in E, centered at v (Seshadhri et al., 2012, Seshadhri et al., 2013). The total number of wedges is W=\sum_{v\in V}\binom{d_v}{2}, where d_v is the degree of vertex v.

The classic wedge sampling procedure is:

  • Precompute W_v (the number of wedges centered at v) and the total W.
  • Sample a center v with probability W_v/W.
  • Draw a uniform unordered neighbor pair (u,w) of v.
  • Mark the wedge "closed" if \{u,w\}\in E (i.e., (u,v,w) forms a triangle).

The fraction of closed wedges over k samples, \bar X = \frac{1}{k}\sum_i X_i, is an unbiased estimator for the transitivity (global clustering coefficient) C=3T/W, and \hat T = (W/3)\bar X is unbiased for the triangle count T, with probabilistic guarantees given by Hoeffding's bound (Seshadhri et al., 2012). By shifting the center sampling distribution, one obtains estimators for local, degree-wise, and directed clustering coefficients, or for uniform triangle sampling. Empirically, wedge sampling achieves order-of-magnitude speedups (up to 10^4\times) over enumeration, with errors independent of graph size due to sample-based bounds (Seshadhri et al., 2013).
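The center-sampling procedure above can be sketched in a few lines of Python; the function name and the adjacency-dict graph representation are illustrative choices, not taken from the papers:

```python
import random

def estimate_transitivity(adj, k=10000, seed=0):
    """Estimate the global clustering coefficient C = 3T/W by wedge sampling.

    adj: dict mapping each vertex to the set of its neighbors (undirected).
    k:   number of wedge samples; the error bound depends only on k,
         not on the size of the graph.
    """
    rng = random.Random(seed)
    # Wedge counts W_v = C(d_v, 2), restricted to vertices with d_v >= 2.
    vertices = [v for v in adj if len(adj[v]) >= 2]
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in vertices]
    W = sum(weights)
    closed = 0
    for _ in range(k):
        # Sample a wedge center v with probability W_v / W.
        v = rng.choices(vertices, weights=weights)[0]
        # Uniform unordered neighbor pair (u, w) of v.
        u, w = rng.sample(sorted(adj[v]), 2)
        # The wedge is closed iff {u, w} is an edge.
        if w in adj[u]:
            closed += 1
    C_hat = closed / k       # estimates C = 3T / W
    T_hat = W * C_hat / 3    # estimates the triangle count T
    return C_hat, T_hat
```

On a triangle, every wedge is closed, so the estimator returns C = 1 exactly; on a path, no wedge is closed and it returns 0.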

Variants such as edge-based wedge sampling combine edge sampling (robust to degree distribution) with wedge extension for variance reduction. In very large graphs with power-law degree, these hybrids further reduce sample complexity, by factors up to 8\times relative to pure edge or wedge sampling (Türkoğlu et al., 2017).

2. Wedge Sampling in Low-Rank Tensor Completion

Recent theoretical advances demonstrate a fundamental gain by directly sampling "wedge" patterns in tensor completion problems. For a k-order low-rank tensor of size n^k, the standard (uniform) entry-sampling regime is insufficient to guarantee spectral connectivity without \tilde{O}(n^{k/2}) samples. Wedge sampling, as introduced in (Luo et al., 5 Feb 2026), instead allocates the sampling budget to length-two paths (triplets (i,\ell,j) corresponding to pairs of rows connected via a common column in a tensor unfolding), which can be viewed as wedges in a bipartite sampling graph.

Each wedge sample collects both entries A_{i\ell} and A_{j\ell}. A random subset \tilde{\mathcal{W}} of wedge triples is sampled at rate p. Spectral initialization builds Z=\sum_{(i,\ell,j)\in\tilde{\mathcal{W}}} \frac{A_{i\ell}A_{j\ell}}{p}\,(E_{ij}+E_{ji})+\cdots, ensuring \mathbb{E}[Z]=AA^\top. The wedge-based design guarantees sufficient connectivity for spectral methods at the optimal sample complexity O(n\log n), dramatically improving over the \tilde{O}(n^{k/2}) threshold. Plug-and-play refinement via nonconvex optimization then yields both weak and exact recovery (Luo et al., 5 Feb 2026).
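A minimal matrix-shaped sketch of the wedge-based spectral initialization, assuming the relevant unfolding is an n×m matrix A, and omitting the trailing correction terms of the full estimator; the function name and the dense triple loop are illustrative only:

```python
import numpy as np

def wedge_spectral_init(A, p, r, rng=None):
    """Sketch: wedge-based spectral initialization on a matrix unfolding A.

    Samples each wedge (i, l, j), i < j, independently with probability p;
    each sample contributes A[i,l]*A[j,l]/p to Z[i,j] and Z[j,i], so that
    E[Z] matches the off-diagonal part of A @ A.T.  Returns the top-r
    eigenvectors of Z as an estimate of the column space of A.
    """
    rng = np.random.default_rng(rng)
    n, m = A.shape
    Z = np.zeros((n, n))
    for l in range(m):
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    contrib = A[i, l] * A[j, l] / p
                    Z[i, j] += contrib
                    Z[j, i] += contrib
    # Top-r eigenvectors of the symmetric estimate Z.
    vals, vecs = np.linalg.eigh(Z)
    return vecs[:, np.argsort(vals)[::-1][:r]]
```

The dense loop is O(n^2 m) and only meant to make the estimator's structure explicit; a practical version would draw the sparse wedge set directly.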

This sampling paradigm reveals that the previously observed statistical-to-computational gap in polynomial-time tensor completion is chiefly a consequence of the uniform-entry model, rather than inherent algorithmic hardness. Wedge sampling closes this gap by structurally enforcing the presence of informative second-moment correlations on a near-linear sample budget.

3. Wedge Sampling in Maximum Inner Product Search (MIPS)

In high-dimensional data retrieval, wedge sampling functions as a randomized sketching technique for identifying top-k inner products under computation-budget constraints (Lorenzen et al., 2019). For a query q\in\mathbb{R}^d and database X\in\mathbb{R}^{n\times d}, wedge sampling aims to recover the largest x_i\cdot q with as few exact inner-product computations as possible. The protocol is:

  • Precompute for each coordinate j the \ell_1-aggregate c_j = \sum_i |x_{ij}|.
  • For each screening sample:
    • Draw coordinate j with probability |q_j|c_j/z, where z = \sum_j |q_j|c_j.
    • Draw data index i with probability |x_{ij}|/c_j.
    • Update a per-item counter.
  • Select the B items with largest counters, compute their exact inner products, and return the top k.
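The screening-then-ranking protocol above can be sketched as follows, assuming nonnegative data for simplicity (signed entries would need additional sign bookkeeping); function and parameter names are illustrative:

```python
import numpy as np

def wedge_mips(X, q, samples=2000, B=10, k=1, rng=None):
    """Sketch of wedge sampling for top-k maximum inner product search.

    X: (n, d) nonnegative database; q: (d,) nonnegative query.
    samples: screening budget; B: candidates kept for exact ranking.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    c = np.abs(X).sum(axis=0)            # per-coordinate l1 aggregates c_j
    pj = np.abs(q) * c
    pj = pj / pj.sum()                   # P(j) proportional to |q_j| c_j
    counters = np.zeros(n)
    for j in rng.choice(d, size=samples, p=pj):
        # P(i | j) = |x_ij| / c_j
        i = rng.choice(n, p=np.abs(X[:, j]) / c[j])
        counters[i] += 1
    # Ranking stage: exact inner products only for the B best-counted items.
    cand = np.argsort(counters)[::-1][:B]
    scores = X[cand] @ q
    return cand[np.argsort(scores)[::-1][:k]]
```

Heavily counted items are exactly those whose coordinates co-dominate with the query's, so the exact-ranking stage touches only B rows instead of all n.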

This two-tiered screening and ranking approach, especially in its deterministic variant (dWedge), yields theoretically lower variance and strictly lower screening complexity than alternative methods such as diamond sampling. In empirical benchmarks, wedge sampling achieves recall/speedup trade-offs superior to sampling, greedy, and LSH-based approaches in large-scale recommendation and feature matching tasks (Lorenzen et al., 2019).

4. Wedge Sampling in Radio Interferometry and 21cm Cosmology

In 21cm intensity mapping, "wedge sampling" refers to the strategy of partitioning Fourier (k_\perp,k_\parallel)-space into cylindrical sectors ("wedges" in \mu=k_\parallel/|k|) to optimize signal recovery and foreground avoidance during the Epoch of Reionization (EoR) (Chen et al., 2024). The physically motivated "foreground wedge", a region of (k_\perp,k_\parallel)-space contaminated by chromatic foreground leakage, is excluded, and the remaining "EoR window" is further subdivided into wedges for improved parameter estimation.

  • The power spectrum P_1(k,\mu) is decomposed into Legendre multipoles within each wedge \mu\in[\mu_0^{(i)},\mu_1^{(i)}], with per-bin weight W(k,\mu) dictated by the antenna array's baseline distribution.
  • Wedge-averaged power statistics P^{W_i}(k) are constructed as weighted sums of multipoles (monopole, quadrupole, hexadecapole), with weights F_\ell^{(i)} derived analytically.
  • The Fisher matrix for parameter inference is computed on this expanded data vector, yielding error forecasts.
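Under the simplifying assumption of uniform weight W(k,\mu) inside a wedge (the real weights depend on the baseline distribution), the multipole weights reduce to averaged Legendre polynomials, F_\ell^{(i)} = \frac{1}{\mu_1-\mu_0}\int_{\mu_0}^{\mu_1} L_\ell(\mu)\,d\mu. A sketch with illustrative names:

```python
import numpy as np
from numpy.polynomial import legendre

def wedge_weights(mu0, mu1, ells=(0, 2, 4)):
    """Multipole weights F_l for a wedge [mu0, mu1], assuming uniform
    k-space weight within the wedge: F_l = <L_l(mu)> over the wedge."""
    out = {}
    for ell in ells:
        coef = np.zeros(ell + 1)
        coef[ell] = 1.0                 # Legendre series for L_ell alone
        integ = legendre.legint(coef)   # antiderivative as a Legendre series
        out[ell] = (legendre.legval(mu1, integ)
                    - legendre.legval(mu0, integ)) / (mu1 - mu0)
    return out

def wedge_power(P_ell, mu0, mu1):
    """Wedge-averaged power P^W(k) = sum_l F_l * P_l(k), for a dict of
    multipole amplitudes P_ell[l] at a fixed k."""
    F = wedge_weights(mu0, mu1, ells=tuple(P_ell))
    return sum(F[l] * P_ell[l] for l in P_ell)
```

For example, averaged over the full half-range \mu\in[0,1], the quadrupole and hexadecapole weights vanish and only the monopole survives, which is why narrow \mu-wedges are needed to retain anisotropic information.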

Isolating relatively flat-weighted wedges mitigates the highly non-uniform k-space sampling imposed by realistic interferometric layouts (e.g., SKA-Low). By focusing on narrow \mu-ranges, wedge sampling reduces anisotropic biases and mode-mixing, and enhances sensitivity to reionization parameters, delivering a \sim 3\times improvement in marginal errors compared to monopole-only analyses (Chen et al., 2024).

Advances in physical array design, such as the RULES algorithm for uv-plane coverage (MacKay et al., 18 Sep 2025), further interface with wedge suppression by achieving near-complete, regular uv-sampling grids, which suppress the foreground wedge by up to \sim 16 orders of magnitude in simulated image-based pipelines. This level of wedge suppression is critically dependent on precise, nonrandom antenna placement and can be degraded by positional errors, missing baselines, or insufficient redundancy (MacKay et al., 18 Sep 2025). Analytical work confirms that completely erasing the wedge via baseline densification is limited by practical constraints, but logarithmic-radial regularity yields significant leakage reduction (Murray et al., 2018).

5. Methodological Foundations and Theoretical Guarantees

Wedge sampling's statistical and computational advantages hinge on key mathematical properties:

  • Unbiasedness and Concentration: For graph metrics, Hoeffding's inequality provides explicit bounds on estimation error for sampling-based wedge statistics. The sample complexity is independent of overall graph size, depending only on desired error and failure probability (Seshadhri et al., 2012).
  • Variance Reduction: Hybridization with edge sampling or deterministic assignment further reduces estimator variance, especially in heterogeneous data (e.g., graphs with power-law degree distributions) (Türkoğlu et al., 2017, Lorenzen et al., 2019).
  • Spectral Initialization: In tensor completion, wedge-based estimators guarantee sufficient connectivity and concentration for accurate spectral projections at near-linear sample cost, as shown by matrix-Bernstein and Davis–Kahan analysis (Luo et al., 5 Feb 2026).
  • Mode-Mixing Mitigation: In radio interferometry, partitioning k-space into wedges with nearly uniform sampling weight, or physically engineering uv-complete arrays, directly attacks the root of mixing-induced foreground leakage (Chen et al., 2024, MacKay et al., 18 Sep 2025, Murray et al., 2018).
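The size-independence in the first bullet can be made concrete: for indicator samples X_i \in \{0,1\}, Hoeffding's inequality guarantees |\bar X - C| \le \epsilon with probability at least 1-\delta once k \ge \ln(2/\delta)/(2\epsilon^2), a quantity involving no graph parameters at all:

```python
import math

def wedge_samples_needed(eps, delta):
    """Hoeffding bound: number of wedge samples k so that the estimate
    deviates from C by more than eps with probability at most delta.
    Depends only on (eps, delta), not on the graph."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
```

For instance, 1% accuracy with 99% confidence needs about 26,500 samples, whether the graph has a thousand edges or a billion.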

6. Empirical Performance and Applications

In large-scale graphs, wedge sampling achieves subsecond approximation of global clustering, degree-wise clustering, triangle counts, and directed motifs for graphs with over 10^7 edges, matching or exceeding the accuracy of edge-sparsification approaches while being 10^3 to 10^4\times faster (Seshadhri et al., 2012, Seshadhri et al., 2013). In triangle estimation, hybrid edge-based wedge strategies enable estimation with sampling fractions below 10^{-5} in massive power-law graphs, outperforming pure edge or wedge methods by up to 8\times (Türkoğlu et al., 2017).

In tensor completion, wedge sampling closes the sample-complexity gap of polynomial-time algorithms, establishing that polynomial-time recovery is achievable with O(n) wedge samples and that subsequent refinement requires only O(n) additional uniform entries (Luo et al., 5 Feb 2026).

In 21cm cosmology, wedge sampling strategies in data space (partitioning k-cylinders) enable 3\times tighter parameter constraints, while wedge sampling in array design (uv-complete layouts) drives wedge power to the detection floor under nominal conditions (Chen et al., 2024, MacKay et al., 18 Sep 2025). Physical and algorithmic wedge suppression strategies are complementary; physical layout regularity suppresses leakage at the map-making level, while data partitioning with wedge multipoles improves parameter estimation downstream.

7. Limitations and Practical Considerations

Despite its strengths, wedge sampling's effectiveness can be hampered by real-world constraints:

  • Physical Array Design: Complete wedge suppression via baseline density or perfect regularity is unachievable beyond moderate array sizes. Small position errors (at the \sim millimeter level) and missing antennas can degrade suppression by many orders of magnitude, though redundancy mitigates this effect (MacKay et al., 18 Sep 2025, Murray et al., 2018).
  • Graph Structure: In graphs with very low clustering, the variance of wedge estimators increases, requiring larger sample sizes or bias-variance trade-offs.
  • Computational Tradeoffs: Preprocessing for deterministic wedge sampling (e.g., sorting for dWedge in MIPS) incurs memory and time costs proportional to data size.
  • Model Assumptions: In tensor completion, wedge sampling's gains derive from nonadaptive, random wedge allocation; adversarial or correlated missingness patterns may break the statistical guarantees (Luo et al., 5 Feb 2026).

Thus, wedge sampling is an algorithmic and experimental principle that—when carefully adapted to structural and physical constraints—enables near-optimal sampling, estimation, and signal extraction across diverse high-dimensional inference problems. The method is rigorously validated in graph theory (Seshadhri et al., 2012, Türkoğlu et al., 2017, Seshadhri et al., 2013), tensor analysis (Luo et al., 5 Feb 2026), radio astronomy (Chen et al., 2024, MacKay et al., 18 Sep 2025, Murray et al., 2018), and machine learning search (Lorenzen et al., 2019).
