Convex Cone Sparsification Function
- The sparsification function is a measure that determines the minimum number of elements required to approximate sums in a convex cone within a specified relative error.
- It generalizes spectral sparsification from positive semidefinite matrices to arbitrary convex cones using tools from convex analysis and interior-point theory.
- Barrier-based methods establish explicit upper bounds on the sparsifier size, enabling efficient sparse approximations in large-scale conic optimization problems.
The sparsification function of a convex cone quantifies the minimal support size required to approximate arbitrary sums of elements from the cone to within a prescribed relative error in the cone order. This concept generalizes spectral sparsification from sums of positive semidefinite matrices to sums within arbitrary convex cones, using tools from convex analysis and interior-point theory. The sparsification function provides worst-case bounds that are intrinsic to the geometric and barrier properties of the cone.
1. Foundational Definitions
Let $K \subseteq \mathbb{R}^n$ be a closed convex cone with the cone-induced partial order
$x \preceq_K y \iff y - x \in K.$
The relative interior of $K$ is denoted $\RelInt(K)$.
$\varepsilon$-Sparsifier: Given $x_1, \ldots, x_m \in K$ with $e = \sum_{i=1}^m x_i \in \RelInt(K)$ and $\varepsilon \in (0,1)$, an $\varepsilon$-sparsifier of $(x_1, \ldots, x_m)$ comprises a subset $S \subseteq \{1, \ldots, m\}$ and weights $\lambda_i \ge 0$, $i \in S$, so that
$(1-\varepsilon)\,e \;\preceq_K\; \sum_{i \in S}\lambda_i x_i \;\preceq_K\; (1+\varepsilon)\,e.$
Sparsification Function: A function $s : (0,1) \to \mathbb{N}$ is a sparsification function for $K$ if, for every collection $x_1, \ldots, x_m \in K$ summing to $e \in \RelInt(K)$ and every $\varepsilon \in (0,1)$, there exists an $\varepsilon$-sparsifier with $|S| \le s(\varepsilon)$. The sparsification function of $K$ is defined as
$s_K(\varepsilon) = \inf_{s \in \mathcal{S}_K} s(\varepsilon),$
where $\mathcal{S}_K$ is the set of all sparsification functions for $K$. Carathéodory’s theorem for cones implies $s_K(\varepsilon) \le \dim(K)$ for every $\varepsilon$ (Saunderson, 26 Dec 2025).
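These definitions can be exercised concretely for the nonnegative orthant $K = \mathbb{R}^n_+$, where $\preceq_K$ is the entrywise order. The sketch below checks the sparsifier condition directly; the helper name `is_eps_sparsifier` and the toy data are illustrative, not from the paper.

```python
import numpy as np

def is_eps_sparsifier(xs, weights, eps):
    """Check the epsilon-sparsifier condition for K = R^n_+ (entrywise order).

    xs      : (m, n) array of cone elements x_1, ..., x_m in R^n_+
    weights : length-m array; lambda_i >= 0, zero outside the support S
    eps     : relative error in (0, 1)
    """
    e = xs.sum(axis=0)                      # e = sum_i x_i
    approx = weights @ xs                   # sum_{i in S} lambda_i x_i
    # For the orthant, a <=_K b is simply the entrywise inequality a <= b.
    return np.all((1 - eps) * e <= approx) and np.all(approx <= (1 + eps) * e)

# Tiny example in R^2: the single element x_3 = (1,1) represents e = (2,2) exactly.
xs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
lam = np.array([0.0, 0.0, 2.0])             # support S = {3}, lambda_3 = 2
assert is_eps_sparsifier(xs, lam, eps=0.1)  # 2*(1,1) = (2,2) = e, so any eps works
```

Here the sparsifier is exact (a $0$-sparsifier), so it satisfies the condition for every $\varepsilon$.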
2. Upper Bounds: Barrier-Based Results
For a proper cone $K$ (closed, pointed, full-dimensional) that admits a $\nu$-logarithmically homogeneous self-concordant barrier, i.e., a convex function $F : \Int(K) \to \mathbb{R}$ satisfying
$F(tx) = F(x) - \nu \log t \quad \text{for all } x \in \Int(K),\ t > 0,$
together with the self-concordance inequality $|D^3F(x)[h,h,h]| \le 2\,(D^2F(x)[h,h])^{3/2}$, the following bounds are established:
- General Case: If $K$ admits such a barrier, then $s_K(\varepsilon) = O(\nu^2/\varepsilon^2)$.
- Pairwise Self-Concordant Case: If, additionally,
$\bigl|D^3F(x)[u, v, v]\bigr| \;\le\; 2\,\sigma_x(u)\,D^2F(x)[v, v]$
for all $x \in \Int(K)$ and $u, v \in K$, where $\sigma_x(u)$ is the minimal $t$ with $-tx \preceq_K u \preceq_K tx$, then $s_K(\varepsilon) = O(\nu/\varepsilon^2)$.
Every hyperbolicity cone, and in particular the positive semidefinite cone $\mathbb{S}^n_+$, satisfies the pairwise condition with its hyperbolic barrier $F = -\log p$, recovering the Batson–Spielman–Srivastava $O(n/\varepsilon^2)$ bound (Saunderson, 26 Dec 2025).
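As a sanity check on the barrier definition, the log-det barrier on the PSD cone is $n$-logarithmically homogeneous, i.e., $F(tX) = F(X) - n\log t$. A minimal numeric sketch (the test matrix is illustrative):

```python
import numpy as np

# Numerical check of nu-logarithmic homogeneity, F(tX) = F(X) - nu*log(t),
# for the log-det barrier F(X) = -log det(X) on the PSD cone, where nu = n.
def F(X):
    return -np.linalg.slogdet(X)[1]  # -log det, computed stably

n, t = 4, 2.5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)          # a point in the interior of the PSD cone

lhs = F(t * X)
rhs = F(X) - n * np.log(t)
assert abs(lhs - rhs) < 1e-9         # det(tX) = t^n det(X)
```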
3. Algorithmic Proof Sketches via Barrier Methods
3.1 Frank–Wolfe Construction
For the $O(\nu^2/\varepsilon^2)$ upper bound, logarithmic homogeneity yields nonnegative weights $w_i = -\langle \nabla F(e), x_i \rangle$ such that
$\sum_{i=1}^m w_i = \nu,$
and thus $e/\nu \in C$ with $C := \mathrm{conv}\{x_i/w_i : w_i > 0\}$. Set $H = \nabla^2 F(e)$. Defining the quadratic objective $f(z) = \langle z - e/\nu,\, H(z - e/\nu)\rangle$ over the convex set $C$, the Frank–Wolfe algorithm produces, after $t$ iterations, a point $z_t \in C$ supported on at most $t+1$ atoms with $f(z_t) = O(1/t)$. Self-concordance properties (via the Dikin-ellipsoid inclusion at $e$) ensure that, for $t = O(\nu^2/\varepsilon^2)$, the point $\nu z_t$ yields an $\varepsilon$-sparsifier of size $O(\nu^2/\varepsilon^2)$.
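Specializing to the orthant $K = \mathbb{R}^n_+$ with barrier $F(x) = -\sum_j \log x_j$ (so $\nu = n$ and the Hessian at $e$ is $\mathrm{diag}(1/e_j^2)$) makes the construction fully concrete. The sketch below is an illustrative rendering under these assumptions, not the paper's implementation:

```python
import numpy as np

# Frank-Wolfe sketch of the barrier construction for K = R^n_+,
# with F(x) = -sum(log x), nu = n, and Hessian H = diag(1/e^2) at the sum e.
rng = np.random.default_rng(1)
m, n = 12, 3
xs = rng.uniform(0.1, 1.0, size=(m, n))    # x_i in the interior of R^n_+
e = xs.sum(axis=0)
H = np.diag(1.0 / e**2)                    # nabla^2 F(e) for the log barrier

w = xs @ (1.0 / e)                         # w_i = <x_i, H e> = sum_j x_ij / e_j
assert abs(w.sum() - n) < 1e-9             # log-homogeneity: sum_i w_i = nu = n
atoms = xs / w[:, None]                    # y_i = x_i / w_i;  e/nu in conv{y_i}

target = e / n
z = atoms[0].copy()                        # start at a vertex of C = conv{y_i}
support = {0}
for _ in range(20000):
    g = 2.0 * H @ (z - target)             # gradient of f(z) = |z - e/nu|_H^2
    k = int(np.argmin(atoms @ g))          # linear minimization over the atoms
    d = atoms[k] - z
    denom = d @ H @ d
    if denom < 1e-18:
        break
    gamma = np.clip(-(g @ d) / (2.0 * denom), 0.0, 1.0)  # exact line search
    z = z + gamma * d
    support.add(k)

approx = n * z                             # candidate sparsifier of e
eps = 0.1
assert np.all((1 - eps) * e <= approx) and np.all(approx <= (1 + eps) * e)
```

The number of distinct atoms ever selected (`support`) bounds the sparsifier size; on this toy instance Frank–Wolfe drives the local-norm error well below $\varepsilon$.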
3.2 BSS-style Iteration for Pairwise Barriers
For the sharper $O(\nu/\varepsilon^2)$ bound, define “upper” and “lower” barrier potentials
$\Phi_u(a) = F(u\,e - a), \qquad \Phi_\ell(a) = F(a - \ell\,e),$
where $a$ is the current weighted partial sum and $u > \ell$ are sliding scalar thresholds. Following a Batson–Spielman–Srivastava-type greedy iteration and using the pairwise self-concordance condition, at each step a new sparsifier component is added, and the thresholds advanced, without increasing either barrier potential. After $O(\nu/\varepsilon^2)$ steps, the resulting normalized sum is an $\varepsilon$-sparsifier with $O(\nu/\varepsilon^2)$ terms (Saunderson, 26 Dec 2025).
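For the PSD cone with $F = -\log\det$, natural choices of the two potentials are $F(uE - A)$ and $F(A - \ell E)$, where $A$ is the running sum and $E$ the target; they are finite precisely when $\ell E \prec A \prec uE$ in the Loewner order, which is what makes them usable as barriers in the greedy argument. A minimal numeric illustration (toy matrices, illustrative helper):

```python
import numpy as np

# Barrier potentials for K = S^n_+ with F = -log det: the potential of M is
# finite iff M is positive definite, so keeping both potentials finite confines
# the running sum A strictly between the sliding thresholds l*E and u*E.
def potential(M):
    w = np.linalg.eigvalsh(M)                       # eigenvalues of symmetric M
    return -np.log(w).sum() if np.all(w > 0) else np.inf

E = np.eye(2)
A = np.diag([1.0, 2.0])        # current partial sum
u, l = 3.0, 0.5                # sliding upper/lower thresholds

assert np.isfinite(potential(u * E - A))   # A < 3E holds, upper potential finite
assert np.isfinite(potential(A - l * E))   # A > 0.5E holds, lower potential finite
assert potential(A - 1.5 * E) == np.inf    # threshold 1.5 would violate A > 1.5E
```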
4. Concrete Instance: Positive Semidefinite Cone
For $K = \mathbb{S}^n_+$, the standard logarithmic barrier is $F(X) = -\log\det X$ with parameter $\nu = n$. The pairwise self-concordance condition holds (e.g., by Loewner’s theorem). Hence,
$s_{\mathbb{S}^n_+}(\varepsilon) = O(n/\varepsilon^2),$
exactly matching the dimension-dependent sparsification originally established for matrix-valued spectral sparsification.
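The Loewner-order condition $(1-\varepsilon)E \preceq S \preceq (1+\varepsilon)E$ can be tested by whitening: with $E = LL^\top$, it holds iff the eigenvalues of $L^{-1}SL^{-\top}$ lie in $[1-\varepsilon, 1+\varepsilon]$. A small sketch with a toy rank-one sum (helper name and data are illustrative):

```python
import numpy as np

# Check (1-eps) E <= S <= (1+eps) E in the Loewner order via the
# eigenvalues of the whitened matrix L^{-1} S L^{-T}, where E = L L^T.
def psd_sparsifier_ok(E, S, eps):
    L = np.linalg.cholesky(E)
    Linv = np.linalg.inv(L)
    eig = np.linalg.eigvalsh(Linv @ S @ Linv.T)
    return eig.min() >= 1 - eps and eig.max() <= 1 + eps

vs = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
E = sum(np.outer(v, v) for v in vs)          # E = sum_i v_i v_i^T = diag(2, 2)
S = 2 * np.outer(vs[0], vs[0]) + 2 * np.outer(vs[2], vs[2])  # two reweighted terms
assert psd_sparsifier_ok(E, S, eps=0.05)     # exact sparsifier: S = E
```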
5. Geometric Operations and Monotonicity
If a closed convex cone $K'$ has a proper $K$-lift, meaning $K' = \pi(K \cap L)$ for some linear subspace $L$ meeting $\RelInt(K)$ and some linear map $\pi$, then
$s_{K'}(\varepsilon) \;\le\; s_K(\varepsilon) \quad \text{for all } \varepsilon \in (0,1).$
In particular, intersection with a hyperplane meeting $\RelInt(K)$, linear projections, convex lifts, and extended formulations all do not increase the sparsification function. This demonstrates stability under standard convex-geometric operations and suggests intrinsic geometric monotonicity of the sparsification function (Saunderson, 26 Dec 2025).
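A concrete instance of such a lift: the orthant $\mathbb{R}^n_+$ arises from $\mathbb{S}^n_+$ by intersecting with the subspace of diagonal matrices and reading off the diagonal, and a sparsifier of the lifted elements corresponds exactly to one downstairs. The following sketch (toy data, illustrative helpers) checks this equivalence for diagonal lifts:

```python
import numpy as np

# Entrywise sparsifier check on the orthant.
def orthant_ok(xs, lam, eps):
    e = xs.sum(axis=0)
    a = lam @ xs
    return np.all((1 - eps) * e <= a) and np.all(a <= (1 + eps) * e)

# Loewner-order sparsifier check on the PSD cone.
def psd_ok(Xs, lam, eps):
    E = sum(Xs)
    S = sum(l * X for l, X in zip(lam, Xs))
    eig = np.linalg.eigvalsh(np.linalg.inv(E) @ S)  # E, S diagonal here, so symmetric
    return eig.min() >= 1 - eps and eig.max() <= 1 + eps

xs = np.array([[1.0, 0.5], [0.5, 1.0], [1.0, 1.0]])
lam = np.array([1.0, 1.0, 0.6])
Xs = [np.diag(x) for x in xs]                       # the diagonal K-lift
for eps in (0.05, 0.2, 0.5):
    assert orthant_ok(xs, lam, eps) == psd_ok(Xs, lam, eps)
```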
6. Applications to Conic Optimization
For covering-type conic programs
$\min_{y \ge 0} \langle b, y \rangle \quad \text{s.t.} \quad \sum_{i=1}^m y_i a_i \;\succeq_K\; c,$
where $a_1, \ldots, a_m \in K$, the conic sum $\sum_i y_i a_i$ of any feasible point $y$ can be replaced by an $\varepsilon$-sparsifier of size at most $s_K(\varepsilon)$, yielding a near-optimal sparse solution whose objective value is within a $(1 + O(\varepsilon))$ factor of $\langle b, y \rangle$.
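On a toy covering program over the orthant, this substitution is easy to see end to end: sparsifying the weighted sum preserves feasibility while touching only a subset of the variables (the data and names below are illustrative):

```python
import numpy as np

# Toy covering program over K = R^2_+: min <b, y> s.t. sum_i y_i a_i >=_K c, y >= 0.
a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # cone elements a_i in K (rows)
b = np.array([1.0, 1.0, 1.0])                        # costs
c = np.array([1.0, 1.0])                             # covering target

y = np.array([1.0, 1.0, 1.0])                        # a feasible point
assert np.all(a.T @ y >= c)                          # sum_i y_i a_i = (2,2) >= c

# A 0-sparsifier of the terms y_i a_i: keep only a_3, with weight 2.
y_sparse = np.array([0.0, 0.0, 2.0])
assert np.all(a.T @ y_sparse >= c)                   # still feasible
assert b @ y_sparse <= b @ y                         # objective did not degrade
```

Here the sparsifier happens to be exact; in general an $\varepsilon$-sparsifier is rescaled by $1/(1-\varepsilon)$ to restore feasibility, which is the source of the $(1+O(\varepsilon))$ factor.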
For packing-type duals,
$\max_{z \in K^*} \langle c, z \rangle \quad \text{s.t.} \quad \langle a_i, z \rangle \le b_i, \quad i = 1, \ldots, m,$
replacing the cost vector $c$ by its $\varepsilon$-sparsifier alters the optimum by at most a $1 + O(\varepsilon)$ factor. Thus, cone sparsification reduces the support size of near-optimal feasible points, with implications for accelerating first-order or combinatorial algorithms in large-scale conic optimization settings (Saunderson, 26 Dec 2025).