Absmean Quantization Function

Updated 14 December 2025
  • The absolute–mean quantization function is a framework that measures the discrepancy between a probability distribution and finite codebooks using the $L^1$–norm, underpinning both static identifiability and dynamic convergence results.
  • It leverages the convexity of the quantization error and is closely linked with Voronoi diagram constructions, ensuring robust geometric and probabilistic analyses.
  • The framework has practical implications for designing efficient quantization schemes and facilitates convergence assessments via equivalence with the Wasserstein distance.

The absolute–mean quantization function, also called the $L^1$–quantization error function, is a functional framework for quantifying the discrepancy between a probability distribution $\mu$ on $\mathbb{R}^d$ and finite sets (called codebooks) under an arbitrary norm. Its key significance lies in characterizing probability distributions and convergence in the Wasserstein distance, with both static (identifiability) and dynamic (convergence) results. The concept is grounded in quantization theory and geometric constructions, especially Voronoi diagrams, and is central to the work of Liu and Pagès (Liu et al., 2018).

1. Definition and Formal Properties

Let $(\mathbb{R}^d, |\cdot|)$ be equipped with any norm. For $\mu \in \mathcal{P}_p(\mathbb{R}^d)$ (measures possessing a finite $p$-th moment) and a measurable quantizer $q: \mathbb{R}^d \to \{x_1, \ldots, x_N\}$, the general $L^p$–quantization error is defined as

$$\|X - q(X)\|_p = \left( \mathbb{E}_{X \sim \mu} \left[ |X - q(X)|^p \right] \right)^{1/p}.$$

The minimal quantization error over all codebooks $\Gamma$ with $|\Gamma| \le N$ is

$$e_{N,p}(\mu) = \inf_{|\Gamma| \le N} \left( \int_{\mathbb{R}^d} \min_{a \in \Gamma} |\xi - a|^p \, \mu(d\xi) \right)^{1/p}.$$

Specializing to $p=1$, the absolute–mean quantization error at level $N$ reads

$$e_{N,1}(\mu) = \inf_{|\Gamma| \le N} \int_{\mathbb{R}^d} \min_{a \in \Gamma} |\xi - a| \, \mu(d\xi).$$

For $N=1$, $e_{1,1}(\mu; a) = \int_{\mathbb{R}^d} |\xi - a| \, \mu(d\xi)$, and any minimizing $a$ is a geometric median of $\mu$.
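These definitions can be checked numerically. The sketch below (a demonstration under assumptions: a standard-normal sample stands in for $\mu$, and `e_N1` is a helper name, not notation from the paper) estimates the empirical $e_{N,1}$ and verifies that, for $N=1$, the sample median is at least as good as nearby code points:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=10_000)  # samples standing in for mu (assumed standard normal)

def e_N1(samples, codebook):
    """Empirical absolute-mean quantization error: the average distance
    from each sample to its nearest code point in the codebook."""
    codebook = np.atleast_1d(np.asarray(codebook, dtype=float))
    dists = np.abs(samples[:, None] - codebook[None, :])  # |xi - a| pairwise
    return dists.min(axis=1).mean()

# For N = 1 the minimizer is a median of mu, so the sample median
# should dominate any other single code point.
med = np.median(xi)
assert e_N1(xi, med) <= e_N1(xi, med + 0.5)
assert e_N1(xi, med) <= e_N1(xi, med - 0.5)
```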

2. Analytical Expressions and Convexity

For a single code point $a \in \mathbb{R}^d$,

$$e_{1,1}(\mu; a) = \int |\xi - a| \, \mu(d\xi),$$

and, when the Euclidean norm is used, its gradient is

$$\nabla_a e_{1,1}(\mu; a) = -\int \frac{\xi - a}{|\xi - a|} \, \mu(d\xi).$$

In dimension $d=1$, $e_{1,1}(\mu; x)$ is convex, and its derivative satisfies

$$(e_{1,1})'(x) = -1 + 2\,\mu((-\infty, x]),$$

which immediately yields the median characterization of minimizers and shows that the absolute–mean quantization function determines the probability law uniquely.
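The one-dimensional derivative identity admits a quick numerical sanity check; in this sketch an exponential sample stands in for $\mu$ (an assumed choice). A central finite difference of the empirical $e_{1,1}$ should match $-1 + 2\,\mu((-\infty, x])$ up to the finite-difference error:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.exponential(size=50_000)  # samples standing in for mu (assumed Exp(1))

def e11(x):
    # empirical e_{1,1}(mu; x) = E|xi - x|
    return np.abs(xi - x).mean()

def cdf(x):
    # empirical mu((-inf, x])
    return (xi <= x).mean()

# Central finite difference of e_{1,1} versus the closed form -1 + 2*F(x);
# the identity holds for the empirical measure, so the gap is tiny.
h = 1e-3
for x in (0.2, 1.0, 2.5):
    numeric = (e11(x + h) - e11(x - h)) / (2 * h)
    closed = -1 + 2 * cdf(x)
    assert abs(numeric - closed) < 1e-2
```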

3. Static Characterization: Identifiability of Measures

Identifiability via the absolute–mean quantization function rests on codebook cardinality. For any norm on $\mathbb{R}^d$, define

$$c(d, |\cdot|) = \min \left\{ k : \exists\, a_1, \ldots, a_k \in S(0,1),\; S(0,1) \subset \bigcup_{i=1}^k B(a_i, 1) \right\},$$

the minimal number of unit balls required to cover the unit sphere. If $N \geq c(d, |\cdot|) + 1$ (in the Euclidean case, $N \geq d+2$), then the following holds:

Theorem (Static Characterization)

If $e_{N,1}(\mu;\,\cdot) = e_{N,1}(\nu;\,\cdot) + C$ for some constant $C$, then $\mu = \nu$ and $C = 0$.

In $d=1$, it is established that $c(1, |\cdot|) = 2$, and already $N=1$ provides identifiability, leading to the proposition:

Proposition (One–Dimensional Static Characterization)

If $e_{1,1}(\mu; x) = e_{1,1}(\nu; x) + C$ for all $x \in \mathbb{R}$, then $\mu = \nu$ and $C = 0$.

4. Dynamic Characterization: Wasserstein Convergence

The $L^1$–Wasserstein distance on $\mathcal{P}_1(\mathbb{R}^d)$ is denoted $\mathcal{W}_1(\mu, \nu)$. The equivalence between convergence in the Wasserstein distance and convergence of the quantization error is formalized as follows:

Theorem ($\mathcal{W}_1$–Convergence)

For $\mu_n, \mu \in \mathcal{P}_1(\mathbb{R}^d)$ and fixed $N \geq c(d, |\cdot|) + 1$, the following are equivalent:

  1. $\mathcal{W}_1(\mu_n, \mu) \to 0$;
  2. $\sup_{x \in (\mathbb{R}^d)^N} |e_{N,1}(\mu_n; x) - e_{N,1}(\mu; x)| \to 0$;
  3. $e_{N,1}(\mu_n; x) \to e_{N,1}(\mu; x)$ pointwise for all $x \in (\mathbb{R}^d)^N$.

In $d=1$, $N=1$ suffices, yielding the efficient criterion

$$\mathcal{W}_1(\mu_n, \mu) \to 0 \iff e_{1,1}(\mu_n; x) \to e_{1,1}(\mu; x), \quad \text{pointwise or uniformly in } x.$$
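As an illustration of the $d=1$, $N=1$ criterion, the sketch below compares the empirical $e_{1,1}$ of a standard-normal sample with the closed form $\mathbb{E}|Z - x| = 2\varphi(x) + x(2\Phi(x) - 1)$ (the normal law and the evaluation grid are assumptions made for this demonstration). The uniform gap on the grid becomes small as the sample grows, consistent with $\mathcal{W}_1$-convergence of empirical measures:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def e11_true(x):
    """Closed form E|Z - x| for Z ~ N(0,1): 2*phi(x) + x*(2*Phi(x) - 1)."""
    phi = exp(-x * x / 2) / sqrt(2 * pi)
    Phi = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * phi + x * (2 * Phi - 1)

rng = np.random.default_rng(2)
grid = np.linspace(-3, 3, 61)

def sup_gap(n):
    """sup over the grid of |e_{1,1}(mu_n; x) - e_{1,1}(mu; x)| for an
    empirical measure mu_n built from n standard-normal samples."""
    xi = rng.normal(size=n)
    return max(abs(np.abs(xi - x).mean() - e11_true(x)) for x in grid)

# Uniform convergence of the quantization functions tracks W1-convergence
# of the empirical measures (law of large numbers).
assert sup_gap(20_000) < 0.1
```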

5. Quantization–Based Distances and Completeness

A quantization–based distance between $\mu, \nu \in \mathcal{P}_1(\mathbb{R}^d)$ is defined by

$$\mathcal{Q}_{N,1}(\mu, \nu) := \sup_{x \in (\mathbb{R}^d)^N} |e_{N,1}(\mu; x) - e_{N,1}(\nu; x)|.$$

It is always bounded above by the Wasserstein distance: $\mathcal{Q}_{N,1}(\mu, \nu) \le \mathcal{W}_1(\mu, \nu)$. By the static and dynamic characterizations, $\mathcal{Q}_{N,1}$ is a bona fide distance, topologically and Lipschitz equivalent to $\mathcal{W}_1$ for $N \geq c(d, |\cdot|) + 1$. In $d=1$, $N=1$ already ensures that $(\mathcal{P}_1(\mathbb{R}), \mathcal{Q}_{1,1})$ is complete.
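The bound $\mathcal{Q}_{N,1} \le \mathcal{W}_1$ can be seen from the fact that $\xi \mapsto |\xi - x|$ is 1-Lipschitz. A small sketch checks it numerically for $N = 1$ (the two Gaussian samples and the grid approximation of the sup are assumptions), using the quantile-coupling formula for $\mathcal{W}_1$ in $d=1$:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_s = rng.normal(0.0, 1.0, size=5_000)  # samples of mu (assumed N(0, 1))
nu_s = rng.normal(0.5, 1.2, size=5_000)  # samples of nu (assumed N(0.5, 1.2))

def e11(samples, x):
    return np.abs(samples - x).mean()

# Q_{1,1}(mu, nu): sup over code points of |e_{1,1}(mu;x) - e_{1,1}(nu;x)|,
# approximated on a finite grid (a lower bound for the true sup).
grid = np.linspace(-6.0, 6.0, 241)
Q = max(abs(e11(mu_s, x) - e11(nu_s, x)) for x in grid)

# In d = 1, W1 between equal-size empirical measures is the mean absolute
# difference of the sorted samples (quantile coupling).
W1 = np.abs(np.sort(mu_s) - np.sort(nu_s)).mean()

assert Q <= W1 + 1e-9  # Q_{N,1} is dominated by the Wasserstein distance
```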

6. Geometric Underpinning via Voronoi Diagrams

For a codebook $\Gamma = \{x_1, \dots, x_N\} \subset \mathbb{R}^d$, the Voronoi cells are given by

$$V_i = \{ \xi \in \mathbb{R}^d : |\xi - x_i| = \min_{1 \leq j \leq N} |\xi - x_j| \}.$$

Essential geometric features:

  • Each cell $V_i$ is star-shaped around $x_i$.
  • The existence of a codebook $\Gamma$ for which one Voronoi cell is nonempty and bounded (obtained via a covering argument) allows the construction of functions supported in that cell that serve as approximate identities. Specifically, $\varphi(\xi) = \min_{j \ne i} |\xi - x_j| - \min_j |\xi - x_j|$ is nonnegative, compactly supported in $V_i^\circ$, and, when normalized, acts as an approximate identity.
  • This covering construction justifies the choice $N = c(d, |\cdot|) + 1$.
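The function $\varphi$ from the construction above can be sketched directly (the 2-D codebook is an arbitrary illustrative choice): it is strictly positive exactly where the chosen code point is the strict nearest neighbour, i.e. on the interior of $V_i$, and vanishes elsewhere.

```python
import numpy as np

# Illustrative codebook Gamma = {x_1, ..., x_4} in R^2 (an assumed example).
codebook = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
i = 0  # index of the cell of interest, V_i

def phi(xi):
    """phi(xi) = min_{j != i} |xi - x_j| - min_j |xi - x_j|: nonnegative,
    and strictly positive exactly on the interior of the Voronoi cell V_i."""
    d = np.linalg.norm(codebook - xi, axis=1)  # distances to all code points
    return np.delete(d, i).min() - d.min()

# Strictly inside V_0 (closest to x_0), phi is positive ...
assert phi(np.array([0.1, 0.1])) > 0
# ... while on other cells the two minima coincide, so phi vanishes.
assert phi(np.array([1.9, 1.9])) == 0.0
```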

7. Context, Significance, and Consequences

The absolute–mean quantization error function provides a principled link between quantization theory, transportation metrics, and geometric analysis. Its static and dynamic characterizations underpin identifiability and convergence results for probability measures in the Wasserstein framework, and its geometric foundation via Voronoi diagrams ensures the robustness of these characterizations across norms and dimensions. The minimality conditions on codebook cardinality have direct consequences for practical quantization schemes and theoretical studies concerning the completeness and topological properties of statistical metric spaces (Liu et al., 2018).
