Coreset Subsampling Overview

Updated 11 January 2026
  • Coreset subsampling is a technique that constructs a small, weighted subset of data to approximate full-data performance with provable accuracy.
  • It leverages sensitivity, importance, and diversity-based sampling to significantly reduce computational costs in various statistical and machine learning tasks.
  • Advanced methods, including submodular maximization and Bayesian coresets, offer scalable solutions for high-dimensional clustering, regression, and inference.

Coreset subsampling is a central framework for dataset reduction across computational statistics, machine learning, signal processing, Bayesian inference, and numerical linear algebra. The core idea is to construct a small, weighted subset (the "coreset") of a much larger dataset such that key model or inference tasks performed on the coreset approximate those on the full data according to predefined guarantees. Coreset subsampling strategies leverage probabilistic, combinatorial, geometric, convex, or submodular properties of the data and the corresponding optimization problems, enabling algorithmic and statistical speedups, scalability to massive data, memory and energy savings, and new theoretical insights into data summarization.

1. Coreset Fundamentals: Definitions, Problem Statements, and Applicability

A coreset for a problem is a (typically small) weighted subset whose objective, or cost, approximates that of the full data to provable accuracy. Let $P$ be a dataset with weights $w$, $Q$ a family of queries (e.g., model parameters, clusters), and $f: P \times Q \to \mathbb{R}_{\ge 0}$ a loss/cost function. A weighted subset $(C, u)$ of $P$ is an $\varepsilon$-coreset if for all $q \in Q$:

$$\Big| \sum_{p \in P} w(p)\, f(p, q) - \sum_{c \in C} u(c)\, f(c, q) \Big| \le \varepsilon \sum_{p \in P} w(p)\, f(p, q).$$

This structure covers mean/variance estimation (Maalouf et al., 2021), $k$-means/median clustering, $k$-line clustering, subspace approximation, SVMs (Tukan et al., 2020), kernel density estimation (Zheng et al., 2017), low-rank factorization (Maalouf et al., 2019, Li et al., 2022), and Bayesian inference (Chen et al., 2023, Manousakas et al., 2022, Naik et al., 2022).
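
As a concrete reading of this definition, the following is a minimal numerical sketch for the one-dimensional cost $f(p, q) = (p - q)^2$: it draws a uniform reweighted subsample (which carries no formal guarantee in general) and measures the worst-case relative cost error over a grid of queries. The helper names (`build_uniform_coreset`, `coreset_error`) are illustrative and not from any cited paper.

```python
# Minimal illustration of the epsilon-coreset definition for f(p, q) = (p - q)^2.
import numpy as np

rng = np.random.default_rng(0)

def cost(points, weights, q):
    """Weighted cost sum_p w(p) * f(p, q) with f(p, q) = (p - q)^2."""
    return np.sum(weights * (points - q) ** 2)

def build_uniform_coreset(points, weights, m):
    """Uniform subsample with reweighting w(p) * n / m (no guarantee in general)."""
    n = len(points)
    idx = rng.choice(n, size=m, replace=False)
    return points[idx], weights[idx] * n / m

def coreset_error(points, weights, c_points, c_weights, queries):
    """Worst-case relative cost error over a finite set of queries."""
    errs = []
    for q in queries:
        full = cost(points, weights, q)
        approx = cost(c_points, c_weights, q)
        errs.append(abs(full - approx) / full)
    return max(errs)

P = rng.normal(size=10_000)
w = np.ones_like(P)
C, u = build_uniform_coreset(P, w, m=200)
queries = np.linspace(-3, 3, 61)
print("empirical epsilon:", coreset_error(P, w, C, u, queries))
```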

Core applications include:

  • Efficient model training and validation with reduced data
  • Fast hyperparameter and architecture sweeps
  • Streaming, distributed, and federated learning
  • Accelerated optimization and Bayesian inference
  • Robustness to noisy or adversarial data

Coreset size typically depends on data dimension, model complexity, error tolerance, and the "sensitivity" structure of the specific problem.

2. Sensitivity, Importance, and Diversity Sampling Frameworks

Sensitivity sampling is foundational for many coreset constructions (Braverman et al., 2016, Maalouf et al., 2019). For each data point $p \in P$, its sensitivity

$$\sigma(p) = \sup_{q \in Q} \frac{w(p)\, f(p, q)}{\sum_{p' \in P} w(p')\, f(p', q)}$$

quantifies its maximal influence on the objective. The total sensitivity $t = \sum_{p \in P} \sigma(p)$ governs sample complexity: sampling $N = O\big((t/\varepsilon^2)(d \log t + \log(1/\delta))\big)$ points with probability proportional to $\sigma(p)/t$, with appropriate rescaling, yields an $\varepsilon$-coreset for cost functions of VC-dimension $d$ (Braverman et al., 2016). This framework unifies theoretical guarantees for $k$-clustering (Huang et al., 2020), SVM (Tukan et al., 2020), regression (Li et al., 2022), density estimation (Turner et al., 2020), and many “near-convex” problems (Tukan et al., 2020).
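
As an illustration of the sampling mechanics (not any specific cited construction), the sketch below uses a commonly used upper bound on the $1$-mean sensitivity, $\sigma(p) \lesssim 1/n + \|p - \mu\|^2 / \sum_{p'} \|p' - \mu\|^2$, samples proportionally, and reweights by the inverse inclusion probability; constants and the formal coreset-size bound are omitted.

```python
# Sensitivity-style sampling sketch for the 1-mean (k=1 clustering) cost.
import numpy as np

rng = np.random.default_rng(0)

def sensitivity_upper_bounds(X):
    """Upper-bound sensitivities for the 1-mean cost (uniform input weights)."""
    mu = X.mean(axis=0)
    d2 = np.sum((X - mu) ** 2, axis=1)
    n = len(X)
    return 1.0 / n + d2 / d2.sum()

def sensitivity_sample(X, m):
    """Sample m points proportional to sensitivity, reweight by 1 / (m * prob)."""
    s = sensitivity_upper_bounds(X)
    prob = s / s.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=prob)
    weights = 1.0 / (m * prob[idx])   # unbiased estimator of the full cost
    return X[idx], weights

X = rng.normal(size=(50_000, 5)) + np.array([2, 0, 0, 0, 0])
C, u = sensitivity_sample(X, m=500)

# Compare the 1-mean cost at an arbitrary query q on the full data vs. the coreset.
q = np.zeros(5)
full_cost = np.sum(np.sum((X - q) ** 2, axis=1))
core_cost = np.sum(u * np.sum((C - q) ** 2, axis=1))
print("relative error:", abs(full_cost - core_cost) / full_cost)
```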

Importance sampling generalizes sensitivity sampling with heuristic or problem-specific weights (leverage scores, gradient magnitudes, combined influence metrics). Diversity-based sampling (e.g., Determinantal Point Processes, DPPs) introduces negative correlations to reduce redundancy (Tremblay et al., 2018), strictly lowering estimator variance and often yielding superior subsample efficiency, especially in clustering and regression.
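
As one concrete instance of such importance weights, the hedged sketch below applies leverage-score sampling to ordinary least squares; the function names are illustrative, and no specific cited algorithm (and no DPP machinery) is implied.

```python
# Leverage-score importance sampling for least-squares regression (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def leverage_scores(A):
    """Row leverage scores via a thin QR factorization: l_i = ||Q_i||^2."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q ** 2, axis=1)

def leverage_sample(A, b, m):
    """Sample m rows proportional to leverage, rescale rows by 1 / sqrt(m * p_i)."""
    l = leverage_scores(A)
    p = l / l.sum()
    idx = rng.choice(len(A), size=m, replace=True, p=p)
    scale = 1.0 / np.sqrt(m * p[idx])
    return A[idx] * scale[:, None], b[idx] * scale

n, d = 20_000, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

A_s, b_s = leverage_sample(A, b, m=400)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_core, *_ = np.linalg.lstsq(A_s, b_s, rcond=None)
print("parameter error:", np.linalg.norm(x_full - x_core))
```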

3. Submodular, Geometric, and Modern Non-Sensitivity-Based Coresets

Recent advances address the empirical and computational limitations of sensitivity and importance sampling for high-dimensional, nonconvex, or deep learning settings. Submodular maximization—specifically facility location and related functions—yields robust streaming-compatible greedy algorithms with $(1 - 1/e)$-optimality guarantees for set selection, commonly used in SubZeroCore (Moser et al., 26 Sep 2025) and deep coreset libraries (Guo et al., 2022). These methods synthesize density, coverage, and diversity criteria in the coreset objective and leverage scalable $k$-nearest neighbor search and lazy greedy maximization.
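
A minimal sketch of the underlying selection step follows: plain greedy maximization of a facility-location objective $F(S) = \sum_i \max_{j \in S} \mathrm{sim}(i, j)$ over an assumed RBF similarity. Lazy evaluation and approximate $k$-nearest-neighbor search, as mentioned above, are the usual scalability additions and are omitted here.

```python
# Greedy facility-location selection (illustrative; similarity choice is an assumption).
import numpy as np

rng = np.random.default_rng(0)

def rbf_similarity(X, bandwidth=1.0):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def greedy_facility_location(S, k):
    """Greedily pick k columns maximizing sum_i max_{j in selected} S[i, j]."""
    n = S.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(k):
        # Marginal gain of adding column j, relative to the current coverage.
        gains = np.maximum(best_cover[:, None], S).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, S[:, j])
    return selected

X = rng.normal(size=(300, 2))
sim = rbf_similarity(X, bandwidth=0.5)
coreset_idx = greedy_facility_location(sim, k=20)
print(coreset_idx)
```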

Geometric partition/aggregation approaches (such as ring decomposition for clustering (Braverman et al., 2022)) can, sometimes surprisingly, enable pure uniform sampling or VC-based approximations with coreset size independent of $n$. These methods are particularly effective for constrained clustering (capacitated, fair, or Wasserstein barycenter), and yield smaller, sometimes optimal $\varepsilon$-dependent coresets in low dimensions.

4. Specialized and Advanced Coreset Constructions

Bayesian coresets recast posterior inference as a data summarization problem, optimizing weighted KL-divergence between the full and coreset posterior (Naik et al., 2022, Chen et al., 2023, Manousakas et al., 2022). Greedy variational, quasi-Newton, and even MCMC-based joint sample-weighted schemes are established, with explicit high-probability guarantees, control of approximation error in total variation or two-moment KL, and extension to intractable BNNs and other models.
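
To make the objective concrete, the sketch below uses a conjugate Gaussian mean model where both the full and the coreset-weighted posterior are available in closed form, and measures the KL divergence between them; the uniform reweighted subsample is only a placeholder for the greedy or variational weight optimization described above.

```python
# Bayesian-coreset objective illustrated on a conjugate Gaussian mean model.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_posterior(x, w, sigma2=1.0, prior_var=10.0):
    """Posterior N(mean, var) for theta under likelihood N(x_i | theta, sigma2)^{w_i}."""
    precision = 1.0 / prior_var + np.sum(w) / sigma2
    mean = (np.sum(w * x) / sigma2) / precision
    return mean, 1.0 / precision

def kl_gaussians(m1, v1, m2, v2):
    """KL(N(m1, v1) || N(m2, v2))."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

x = rng.normal(loc=1.5, scale=1.0, size=5_000)
full_mean, full_var = gaussian_posterior(x, np.ones_like(x))

m = 50
idx = rng.choice(len(x), size=m, replace=False)
w = np.zeros_like(x)
w[idx] = len(x) / m                       # sparse, reweighted coreset weights (placeholder)
core_mean, core_var = gaussian_posterior(x, w)

print("KL(coreset posterior || full posterior):",
      kl_gaussians(core_mean, core_var, full_mean, full_var))
```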

For kernel density estimation and general smooth divergences (including Sinkhorn), Carathéodory or kernel quadrature-based strategies (Turner et al., 2020, Kokot et al., 28 Apr 2025) enable coreset construction via moment or maximum mean discrepancy minimization. These methods achieve minimax-optimal $L_2$ risk and, especially for Sinkhorn divergence, achieve sublinear ($m = o(n)$) coreset size with rigorous statistical control.
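
As a simple, hedged stand-in for these ideas, the following sketch performs greedy kernel-herding-style selection over the data points themselves, approximately minimizing the maximum mean discrepancy to the empirical measure under an assumed RBF kernel; it is not the Carathéodory or Sinkhorn construction of the cited papers.

```python
# Greedy MMD / kernel-herding style selection (illustrative kernel and bandwidth).
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, bandwidth=1.0):
    d2 = np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_herding(X, m, bandwidth=1.0):
    """Greedily pick m points approximately minimizing MMD to the empirical measure."""
    K = rbf_kernel(X, X, bandwidth)
    mean_embedding = K.mean(axis=1)           # mean_j k(x_i, x_j) for each candidate i
    selected = []
    running = np.zeros(len(X))                # sum over selected s of k(x_i, x_s)
    for t in range(m):
        score = mean_embedding - running / (t + 1)
        score[selected] = -np.inf
        j = int(np.argmax(score))
        selected.append(j)
        running += K[:, j]
    return selected

X = rng.normal(size=(2_000, 2))
idx = kernel_herding(X, m=100, bandwidth=0.7)
print("coreset size:", len(idx))
```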

Element-wise coresets (Li et al., 2022, Xue et al., 22 Sep 2025) select large-magnitude entries per column (rather than rows), optimally exploiting numerical sparsity in regression or matrix factorization (e.g., ALS for recommender systems), and provide notable gains in both computation and accuracy in very high dimensions.
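
The sketch below only illustrates the entry-level (per-column) selection pattern with a naive deterministic top-$k$ truncation; the cited constructions use calibrated sampling probabilities and reweighting to preserve guarantees.

```python
# Naive per-column entry selection producing a sparse surrogate matrix.
import numpy as np

rng = np.random.default_rng(0)

def topk_per_column(X, k):
    """Zero out all but the k largest-magnitude entries in each column."""
    X_sparse = np.zeros_like(X)
    for j in range(X.shape[1]):
        keep = np.argpartition(np.abs(X[:, j]), -k)[-k:]
        X_sparse[keep, j] = X[keep, j]
    return X_sparse

n, d = 5_000, 20
A = rng.normal(size=(n, d)) * (rng.random(size=(n, d)) < 0.05)   # numerically sparse input
A_sparse = topk_per_column(A, k=100)
print("kept fraction:", np.count_nonzero(A_sparse) / A.size)
```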

5. Empirical Performance, Complexity, and Practical Choices

Empirical evaluations consistently support the theoretical speedup and compression of coreset subsampling, but reveal context-dependent trade-offs and the need for careful baseline testing.

Generic construction complexity is at most a small multiple of the cost of passing over the original data (ranging from $O(nd)$ to $O(\mathrm{nnz}(X) + rp^2)$, or $O(K[NM^2 + M^3 + SN])$ for Bayesian MCMC/VI), unless advanced approximate nearest-neighbor or randomized algebraic routines are used.

6. Extensions: Streaming, Distributed, Budget-Aware, and Robust Coresets

Merge-and-reduce paradigms (Braverman et al., 2016, Maalouf et al., 2019) enable scalable streaming and distributed coresets: per-block summaries are coresetized locally, then recursively merged and re-coresetized, maintaining polylogarithmic size, communication, and error. Robust variants—such as median-of-means aggregation in linear regression (Li et al., 2022) or cost-aware greedy schemes for graph summarization (Vahidian et al., 2019)—adapt to outliers, nonuniform costs, and adversarial contamination. Element- or block-wise selection is particularly effective in networks, tensor decompositions, and large-scale collaborative filtering (Xue et al., 22 Sep 2025).
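
A minimal sketch of the merge-and-reduce pattern follows, with a uniform reweighted subsample standing in for any offline coreset construction; the `reduce` placeholder is an assumption, not a cited algorithm.

```python
# Merge-and-reduce streaming pattern: per-block summaries merged along a binary-counter tree.
import numpy as np

rng = np.random.default_rng(0)

def reduce(points, weights, m):
    """Placeholder offline coreset: uniform subsample with total-weight-preserving reweighting."""
    if len(points) <= m:
        return points, weights
    idx = rng.choice(len(points), size=m, replace=False)
    scale = weights.sum() / weights[idx].sum()
    return points[idx], weights[idx] * scale

def merge_and_reduce(stream_blocks, m):
    levels = {}                                   # level -> (points, weights)
    for block in stream_blocks:
        cur = reduce(block, np.ones(len(block)), m)
        level = 0
        while level in levels:                    # carry, like binary addition
            other = levels.pop(level)
            merged_p = np.concatenate([cur[0], other[0]])
            merged_w = np.concatenate([cur[1], other[1]])
            cur = reduce(merged_p, merged_w, m)
            level += 1
        levels[level] = cur
    # Final summary: merge whatever partial levels remain.
    pts = np.concatenate([p for p, _ in levels.values()])
    wts = np.concatenate([w for _, w in levels.values()])
    return reduce(pts, wts, m)

blocks = [rng.normal(size=(1_000, 3)) for _ in range(16)]
C, u = merge_and_reduce(blocks, m=200)
print("final coreset:", C.shape, "total weight:", u.sum())
```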

7. Limitations, Open Problems, and Future Directions

Despite theoretical and empirical success, practical deployment of coresets reveals several gaps:

  • Sensitivity bounds are sometimes too loose to outperform uniform sampling, particularly in loosely regularized or low-variance models (Lu et al., 2023).
  • Many advanced coresets have substantial model/hyperparameter dependency or require pretraining or nontrivial feature engineering (Moser et al., 26 Sep 2025, Guo et al., 2022).
  • Finite-sample and sharp minimax bounds for high-dimensional, nonconvex, or composite objectives remain open (Kokot et al., 28 Apr 2025, Manousakas et al., 2022).
  • Developing fully automatic, adaptive, or data-driven coreset size selectors and deeper integration into iterative machine learning pipelines (e.g., coreset MCMC, dataset distillation) are active lines of research.

Further connections of coreset construction to kernel quadrature, moment and score matching, discrepancy theory, and randomized numerical linear algebra continue to deepen and broaden the scope of efficient and theoretically principled data summarization.


Selected references: (Braverman et al., 2016, Tremblay et al., 2018, Zheng et al., 2017, Tukan et al., 2020, Huang et al., 2020, Maalouf et al., 2019, Naik et al., 2022, Li et al., 2022, Chen et al., 2023, Manousakas et al., 2022, Turner et al., 2020, Braverman et al., 2022, Vahidian et al., 2019, Moser et al., 26 Sep 2025, Guo et al., 2022, Lu et al., 2023, Tukan et al., 2020, Xue et al., 22 Sep 2025, Kokot et al., 28 Apr 2025).
