
Convex Cone Sparsification Function

Updated 30 December 2025
  • The sparsification function is a measure that determines the minimum number of elements required to approximate sums in a convex cone within a specified relative error.
  • It generalizes spectral sparsification from positive semidefinite matrices to arbitrary convex cones using tools from convex analysis and interior-point theory.
  • Barrier-based methods establish explicit upper bounds on the sparsifier size, enabling efficient sparse approximations in large-scale conic optimization problems.

The sparsification function of a convex cone quantifies the minimal support size required to approximate arbitrary sums of elements from the cone to within a prescribed order-relative error. This concept generalizes spectral sparsification from sums of positive semidefinite matrices to sums within arbitrary convex cones, using tools from convex analysis and interior-point theory. The sparsification function provides worst-case bounds that are intrinsic to the geometric and barrier properties of the cone.

1. Foundational Definitions

Let $K \subseteq \mathbb{R}^n$ be a closed convex cone with the cone-induced partial order

$x \preceq_K y \iff y - x \in K.$

The relative interior is denoted $\RelInt(K)$.

$\varepsilon$-Sparsifier: Given $x_1,\ldots,x_m \in K$ with $e = \sum_{i=1}^m x_i \in \RelInt(K)$ and $0 < \varepsilon < 1$, an $\varepsilon$-sparsifier of $e$ comprises a subset $S \subseteq \{1, \ldots, m\}$ and weights $\{\lambda_i > 0 : i \in S\}$ such that

$(1-\varepsilon)\,e \;\preceq_K\; \sum_{i \in S}\lambda_i x_i \;\preceq_K\; (1+\varepsilon)\,e.$

Sparsification Function: A function $\alpha : (0,1) \to \mathbb{R}_+$ is a sparsification function for $K$ if, for every collection $\{x_i\} \subset K$ summing to $e \in \RelInt(K)$ and every $\varepsilon \in (0,1)$, there exists an $\varepsilon$-sparsifier $S, \{\lambda_i\}$ with $|S| \le \alpha(\varepsilon)$. The sparsification function is defined as

$sp_K(\varepsilon) := \inf_{\alpha \in \mathcal{F}_K} \alpha(\varepsilon),$

where $\mathcal{F}_K$ is the set of feasible such $\alpha$. Carathéodory’s theorem for cones implies $sp_K(\varepsilon) \le \dim(K)$ (Saunderson, 26 Dec 2025).
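
The Carathéodory bound can be realized computationally: a vertex (basic feasible solution) of the polyhedron $\{\lambda \ge 0 : \sum_i \lambda_i x_i = e\}$ has at most $\dim(K)$ nonzero coordinates. A minimal sketch for the nonnegative orthant $K = \mathbb{R}^n_+$; the use of `scipy.optimize.linprog` with a simplex-type method is an assumption of this illustration, any LP solver returning basic solutions works:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 5, 40
X = rng.random((m, n))          # x_1, ..., x_m in the cone R^n_+
e = X.sum(axis=0)               # e = sum_i x_i lies in the interior

# A basic feasible solution of {lam >= 0 : X^T lam = e} has at most
# n nonzero entries (Caratheodory for cones); dual simplex returns one.
res = linprog(c=np.zeros(m), A_eq=X.T, b_eq=e,
              bounds=(0, None), method="highs-ds")
lam = res.x
support = np.flatnonzero(lam > 1e-9)

print(len(support) <= n)                 # support size at most dim(K) = 5
print(np.allclose(X.T @ lam, e))         # exact (eps = 0) representation
```

Here the representation is exact; the barrier-based bounds below trade exactness for a support size independent of the ambient dimension.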

2. Upper Bounds: Barrier-Based Results

For a proper cone $K$ (closed, pointed, full-dimensional) that admits a $\nu$-logarithmically homogeneous self-concordant barrier, i.e., a $C^3$ convex function $F:\Int(K)\to \mathbb{R}$ satisfying

$F(tx) = F(x) - \nu \ln t, \qquad |D^3 F(x)[u,u,u]| \le 2 \big[ D^2 F(x)[u,u] \big]^{3/2},$

the following bounds are established:

  • General Case: If $K$ admits such a barrier,

$sp_K(\varepsilon) \le \Big\lceil\, (4\nu/\varepsilon)^2 \Big\rceil, \quad \forall \varepsilon \in (0,1).$

  • Pairwise Self-Concordant Case: If, additionally,

0D3F(x)[v,u,u]2D2F(x)[v,u]ux,0 \le -D^3F(x)[v,u,u] \le 2 D^2F(x)[v,u]|u|_x,

for all $x \in \Int(K)$, $u, v \in K$, where $|u|_x$ is the minimal $t$ with $-tx \preceq_K u \preceq_K tx$, then

$sp_K(\varepsilon) \le \Big\lceil\, 4\nu / \varepsilon^2 \Big\rceil, \quad \forall \varepsilon \in (0,1).$

Every hyperbolicity cone, and in particular the positive semidefinite cone $S^d_+$, satisfies the pairwise condition with $\nu = d$, recovering the Batson–Spielman–Srivastava bound $O(d/\varepsilon^2)$ (Saunderson, 26 Dec 2025).
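
For $S^d_+$ the order $\preceq_K$ is the Loewner order, so the sparsifier condition reduces to two eigenvalue computations. A minimal checker sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def is_eps_sparsifier(E, S, eps):
    """Check (1-eps) E <= S <= (1+eps) E in the Loewner order,
    i.e. that both difference matrices are positive semidefinite."""
    tol = 1e-10
    lo = np.linalg.eigvalsh(S - (1 - eps) * E).min() >= -tol
    hi = np.linalg.eigvalsh((1 + eps) * E - S).min() >= -tol
    return lo and hi

# e = sum of rank-one terms v_i v_i^T; a positive definite matrix
rng = np.random.default_rng(1)
V = rng.standard_normal((30, 4))
E = V.T @ V

print(is_eps_sparsifier(E, 1.25 * E, eps=0.5))   # True: inside the band
print(is_eps_sparsifier(E, 2.00 * E, eps=0.5))   # False: exceeds (1+eps)E
```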

3. Algorithmic Proof Sketches via Barrier Methods

3.1 Frank–Wolfe Construction

For the $\lceil(4\nu/\varepsilon)^2\rceil$ upper bound, classical self-concordance yields weights $\{\mu_i > 0\}$ such that

$e = \sum_{i=1}^m \mu_i x_i, \qquad -\sum_i D F(e)[x_i] = \nu,$

and thus $w_i = -D F(e)[x_i] / \nu$ with $\sum_i w_i = 1$. Set $\tilde{x}_i = x_i / w_i$. Defining the convex set $\mathcal{X} = \operatorname{Conv}\{\tilde{x}_i\}$ and the quadratic objective $f(z) = \tfrac{1}{2}\|z-e\|_e^2$ with $\|u\|_e^2 = D^2 F(e)[u,u]$, the Frank–Wolfe algorithm produces, after $T \approx 8\nu^2/\varepsilon^2$ iterations, a point $z_T \in \mathcal{X}$ with $\|z_T - e\|_e \le \varepsilon$. Self-concordance properties ensure this yields an $\varepsilon$-sparsifier of size $T$.
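
The Frank–Wolfe step itself is elementary: the linear minimization oracle over $\mathcal{X}$ just selects the vertex $\tilde{x}_i$ most aligned with the negative gradient, so each iteration grows the support by at most one. A minimal sketch in the Euclidean norm (a stand-in, for illustration only, for the local norm $\|\cdot\|_e$ the construction actually uses):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, T = 5, 20, 500
Xt = rng.random((m, n))            # vertices x~_i of the convex hull X
w = rng.random(m); w /= w.sum()    # convex weights, so e lies in Conv{x~_i}
e = w @ Xt

z = Xt[0].copy()                   # start at a vertex: support size 1
support = {0}
for t in range(T):
    grad = z - e                           # gradient of f(z) = 0.5||z - e||^2
    i = int(np.argmin(Xt @ grad))          # linear minimization oracle
    gamma = 2.0 / (t + 2)                  # classical Frank-Wolfe step size
    z = (1 - gamma) * z + gamma * Xt[i]
    support.add(i)

print(np.linalg.norm(z - e))       # O(1/sqrt(T)) approximation error
print(len(support) <= T + 1)       # support grows by at most one per step
```

The standard $O(1/T)$ Frank–Wolfe rate for $f$ translates into the $O(1/\sqrt{T})$ bound on $\|z_T - e\|$ used above.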

3.2 BSS-style Iteration for Pairwise Barriers

For the sharper $\lceil 4\nu/\varepsilon^2 \rceil$ bound, define "upper" and "lower" barrier potentials

$\Phi^{u,e}(x) = -D F(ue - x)[e], \qquad \Phi_{\ell,e}(x) = -D F(x - \ell e)[e].$

Following a Batson–Spielman–Srivastava-type greedy iteration and using the pairwise self-concordance condition, at each step a new sparsifier component is added without increasing the barrier potentials. After $T$ steps, the resulting normalized sum is an $\varepsilon$-sparsifier with $T$ terms (Saunderson, 26 Dec 2025).

4. Concrete Instance: Positive Semidefinite Cone

For $K = S^d_+$, the standard logarithmic barrier is $F(X) = -\ln\det X$ with parameter $\nu = d$. The pairwise self-concordance condition holds (e.g., by Loewner’s theorem). Hence,

$sp_{S^d_+}(\varepsilon) \le \left\lceil 4d / \varepsilon^2 \right\rceil,$

exactly matching the dimension-dependent sparsification originally established for matrix-valued spectral sparsification.
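
As a consistency check (our computation, following the definitions above): for $F(X) = -\ln\det X$ one has $DF(X)[H] = -\operatorname{tr}(X^{-1}H)$, so with $e = I$ the barrier potentials of Section 3.2 specialize to the classical Batson–Spielman–Srivastava potentials,

$\Phi^{u,I}(X) = \operatorname{tr}\big((uI - X)^{-1}\big), \qquad \Phi_{\ell,I}(X) = \operatorname{tr}\big((X - \ell I)^{-1}\big).$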

5. Geometric Operations and Monotonicity

If a convex set $C$ has a proper $K$-lift, meaning $C = \pi(K \cap L)$ for some linear space $L$ meeting $\RelInt(K)$ and linear map $\pi$, then

$sp_C(\varepsilon) \le sp_K(\varepsilon).$

In particular, intersection with a hyperplane meeting $\RelInt(K)$, linear projections, convex lifts, and extended formulations never increase the sparsification function. This stability under standard convex-geometric operations reflects an intrinsic geometric monotonicity of the sparsification function (Saunderson, 26 Dec 2025).
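
An illustrative instance (ours, not from the source): the nonnegative orthant is the image of a slice of the PSD cone through its interior, via the diagonal map, so the PSD bound transfers:

$\mathbb{R}^d_+ = \pi\big(S^d_+ \cap \{\text{diagonal matrices}\}\big), \quad \pi(X) = \operatorname{diag}(X), \quad\text{hence}\quad sp_{\mathbb{R}^d_+}(\varepsilon) \le sp_{S^d_+}(\varepsilon) \le \lceil 4d/\varepsilon^2 \rceil.$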

6. Applications to Conic Optimization

For covering-type conic programs

$\min_{y \ge 0} \langle b, y \rangle \quad \text{s.t.} \quad \sum_{i=1}^m y_i a_i \;\succeq_K\; c,$

where $c = \sum_i c_i$, the $m$ constraints $\{a_i\}$ can be replaced by an $\varepsilon$-sparsifier of size $sp_K(\varepsilon)$, yielding a near-optimal sparse solution $\tilde{y}$ with $|\operatorname{supp}(\tilde{y})| \le sp_K(\varepsilon)$.

For packing-type duals,

$\max_{x \in K^*} \langle c, x \rangle \quad \text{s.t.} \quad \langle a_i, x\rangle \le b_i,$

replacing the cost vector $c$ by its sparsifier $c'$ alters the optimum by at most a $1 \pm \varepsilon$ factor. Thus, cone sparsification reduces the support size of near-optimal feasible points, with implications for accelerating first-order or combinatorial algorithms in large-scale conic optimization (Saunderson, 26 Dec 2025).
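
The $1 \pm \varepsilon$ factor follows directly from duality of the cone orders (a one-line check, spelled out here for completeness): if $(1-\varepsilon)c \preceq_K c' \preceq_K (1+\varepsilon)c$ and $x \in K^*$, then pairing both membership relations with $x$ gives

$(1-\varepsilon)\langle c, x\rangle \;\le\; \langle c', x\rangle \;\le\; (1+\varepsilon)\langle c, x\rangle,$

so the optimal values of the packing program with costs $c$ and $c'$ agree up to a $1 \pm \varepsilon$ factor.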
