
Vector-Valued Distance on SPD

Updated 14 November 2025
  • The paper introduces a vector-valued distance that fully characterizes the relative position between SPD matrices via eigen-log decomposition.
  • The topic is defined as the study of SPD matrices viewed as a Riemannian manifold, where the vector-valued distance encodes multi-directional geometric displacements.
  • The computation utilizes eigendecomposition under an affine-invariant Riemannian metric, ensuring isometric invariance and consistency with scalar distance norms.

The vector-valued distance (VVD) on the manifold of symmetric positive definite matrices, denoted SPDₙ, provides a complete Riemannian invariant that encodes the full relative position of two points in the space. Unlike scalar Riemannian distances, the VVD delivers a higher-fidelity geometrical descriptor that supports both theoretical analysis and practical applications in manifold learning, geometric representation, and information geometry.

1. Geometric Structure of SPDₙ

SPDₙ is the set

$$\mathrm{SPD}_n = \{\, P \in \mathbb{R}^{n \times n} \mid P^\top = P,\;\; x^\top P x > 0 \ \forall\, x \neq 0 \,\}$$

comprising all real, symmetric, positive definite $n \times n$ matrices. As a manifold, SPDₙ is smooth and of dimension $n(n+1)/2$, with tangent space at $P$ identified as $T_P\mathrm{SPD}_n \cong S_n$, where $S_n$ is the vector space of real symmetric $n \times n$ matrices.
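The two defining conditions translate directly into a membership test. A minimal numpy sketch (the function name `is_spd` is illustrative, not from the paper):

```python
import numpy as np

def is_spd(P, tol=1e-10):
    """Test membership in SPD_n: real symmetric with strictly positive eigenvalues."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        return False
    if not np.allclose(P, P.T, atol=tol):
        return False
    # A symmetric matrix is positive definite iff its smallest eigenvalue is > 0.
    return bool(np.linalg.eigvalsh(P).min() > tol)
```

For example, `is_spd(np.eye(3))` holds, while a non-symmetric or negative definite matrix fails the test.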

The manifold possesses a rich structure as a Riemannian symmetric space. The general linear group $GL(n)$ acts transitively by congruence, $\Phi_M(P) = M P M^\top$ for $M \in GL(n)$, preserving the Riemannian structure by isometries. This action partitions into “translations” by $M \in \mathrm{SPD}_n$ (no fixed point), “rotations” by $Q \in O(n)$ (fixing the identity matrix $I$), and the involutive “reflection” $P \mapsto P^{-1}$.

The affine-invariant Riemannian metric is given by

$$\langle U, V \rangle_P = \operatorname{tr}(P^{-1} U P^{-1} V)$$

for $U, V \in T_P\mathrm{SPD}_n$. Invariance under $GL(n)$ manifests as

$$\langle D\Phi_M(U), D\Phi_M(V) \rangle_{M \cdot P} = \langle U, V \rangle_P.$$
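This invariance can be checked numerically. A sketch assuming a random SPD base point and symmetric tangent vectors (the helper `aff_inner` is illustrative; $D\Phi_M(U) = M U M^\top$ since $\Phi_M$ is linear in $P$):

```python
import numpy as np

def aff_inner(P, U, V):
    """Affine-invariant inner product <U, V>_P = tr(P^{-1} U P^{-1} V)."""
    Pinv = np.linalg.inv(P)
    return np.trace(Pinv @ U @ Pinv @ V)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)                      # a generic SPD base point
U = rng.standard_normal((4, 4)); U = U + U.T     # symmetric tangent vectors
V = rng.standard_normal((4, 4)); V = V + V.T
M = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # a generic invertible matrix

# The congruence action Phi_M(P) = M P M^T pushes tangents forward as U -> M U M^T,
# and the inner product is unchanged.
lhs = aff_inner(M @ P @ M.T, M @ U @ M.T, M @ V @ M.T)
rhs = aff_inner(P, U, V)
assert np.isclose(lhs, rhs)
```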

2. Construction of the Vector-Valued Distance

For $P, Q \in \mathrm{SPD}_n$, the VVD produces a vector in $\mathbb{R}^n$ that characterizes the displacement from $P$ to $Q$. The construction proceeds as follows:

  • Form $M = P^{-1} Q$. Although $M$ is not symmetric in general, it is similar to the SPD matrix $P^{-1/2} Q P^{-1/2}$ and therefore has real, strictly positive eigenvalues.
  • Perform an orthogonal eigendecomposition of the symmetrized matrix:

$$P^{-1/2} Q P^{-1/2} = U \; \mathrm{diag}(\lambda_1, \dots, \lambda_n) \; U^\top,$$

where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n > 0$ and $U \in O(n)$; these $\lambda_i$ coincide with the eigenvalues of $P^{-1} Q$.

  • The Riemannian logarithm at the identity coincides with the matrix logarithm, so

$$\log\bigl(P^{-1/2} Q P^{-1/2}\bigr) = U \; \mathrm{diag}(\log \lambda_1, \dots, \log \lambda_n) \; U^\top.$$

  • By definition (López et al. 2021, §2.2), the vector-valued distance is

$$\mathbf{d}(P, Q) = (\log \lambda_1, \dots, \log \lambda_n) \in \mathbb{R}^n,$$

with the ordering $\lambda_1 \geq \cdots \geq \lambda_n$ placing the vector in the closed “positive Weyl chamber.”

Each component $\log\lambda_i$ measures the signed expansion along the $i$-th eigendirection, offering a directional decomposition of the displacement within the non-Euclidean geometry.
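The steps above can be sketched in numpy. This forms $P^{-1/2}$ from an eigendecomposition of $P$ and then eigendecomposes the symmetrized matrix; the function name `vvd` is illustrative:

```python
import numpy as np

def vvd(P, Q):
    """Vector-valued distance: descending logs of the eigenvalues of P^{-1} Q."""
    # Build P^{-1/2} from an orthogonal eigendecomposition of P.
    w, E = np.linalg.eigh(P)
    P_inv_sqrt = (E / np.sqrt(w)) @ E.T
    # P^{-1/2} Q P^{-1/2} is SPD and shares its spectrum with P^{-1} Q.
    S = P_inv_sqrt @ Q @ P_inv_sqrt
    lam = np.linalg.eigvalsh(S)[::-1]   # descending order: positive Weyl chamber
    return np.log(lam)
```

For $P = I$ and $Q = \mathrm{diag}(e^2, 1)$ this returns $(2, 0)$, and $\mathbf d(P, P)$ is the zero vector.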

3. Properties of the Vector-Valued Distance

The VVD encapsulates several distinguished geometric and algebraic properties:

  • Isometric Invariance: for any isometry $\Phi_M$ with $M \in GL(n)$ mapping $(P, Q)$ to $(P', Q')$, the matrices $P^{-1}Q$ and $(P')^{-1}Q' = \Phi_M(P)^{-1} \Phi_M(Q)$ share the same spectrum, ensuring $\mathbf d(P, Q) = \mathbf d(P', Q')$. Conversely, two pairs with equal VVD vectors can be mapped onto one another by an isometry.
  • Anti-symmetry (up to permutation): the displacement reverses sign up to order reversal. If the eigenvalues of $P^{-1} Q$ are $\{\lambda_1, \ldots, \lambda_n\}$, then those of $Q^{-1}P$ are $\{1/\lambda_n, \ldots, 1/\lambda_1\}$, yielding

$$\mathbf d(Q, P) = -\mathbf d(P, Q)$$

after reversing the order of the components.

  • Triangle (Majorization) Property: for all $P, Q, R \in \mathrm{SPD}_n$,

$$\mathbf d(P, R) \preceq \mathbf d(P, Q) + \mathbf d(Q, R),$$

where $\preceq$ denotes the majorization order on $\mathbb{R}^n$. This property subsumes the ordinary triangle inequality under any symmetric norm.

  • Complete Invariant: the VVD is a complete isometry invariant of pairs in SPDₙ; no relative geometric information is lost.
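These properties can be verified numerically on random SPD matrices. A sketch, assuming the eigenvalues of $P^{-1}Q$ are computed via a linear solve (helper names `vvd` and `rand_spd` are illustrative); the majorization check compares partial sums of the descending-sorted vectors:

```python
import numpy as np

def vvd(P, Q):
    # Eigenvalues of P^{-1} Q, sorted descending; component-wise log.
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(P, Q)).real)[::-1]
    return np.log(lam)

def rand_spd(n, rng):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

rng = np.random.default_rng(1)
P, Q, R = (rand_spd(4, rng) for _ in range(3))
M = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # a generic invertible matrix

# Isometric invariance under the congruence action Phi_M.
assert np.allclose(vvd(P, Q), vvd(M @ P @ M.T, M @ Q @ M.T))

# Anti-symmetry up to order reversal: d(Q, P) = -d(P, Q) reversed.
assert np.allclose(vvd(Q, P), -vvd(P, Q)[::-1])

# Majorization triangle inequality: partial sums of the sorted vectors dominate.
lhs, rhs = vvd(P, R), vvd(P, Q) + vvd(Q, R)
assert np.all(np.cumsum(np.sort(lhs)[::-1]) <= np.cumsum(np.sort(rhs)[::-1]) + 1e-9)
```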

4. Connection to Scalar Distance and Norms

The classical affine-invariant Riemannian distance emerges as the $\ell_2$-norm of the VVD:

$$d_{\mathrm{Riem}}(P, Q) = \sqrt{\sum_{i=1}^n (\log \lambda_i)^2} = \|\mathbf d(P, Q)\|_2.$$

More generally, every $\ell_p$-norm of $\mathbf d(P, Q)$, and indeed any norm invariant under coordinate permutations, yields a Finsler metric on SPDₙ with the same group of isometries. Examples include

$$d_{F_1}(P, Q) = \sum_{i=1}^n |\log \lambda_i|, \qquad d_{F_\infty}(P, Q) = \max_{i} |\log \lambda_i|.$$

This structure generalizes the interpretation of distance beyond scalar length to encompass multi-directional geometric information.
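Given a VVD vector, all of these scalar distances are just vector norms. A minimal sketch (the helper `scalar_distances` and its key names are illustrative):

```python
import numpy as np

def scalar_distances(v):
    """Scalar distances induced by permutation-invariant norms of a VVD vector v."""
    return {
        "riem": np.linalg.norm(v, 2),       # classical affine-invariant distance
        "f1":   np.linalg.norm(v, 1),       # Finsler ell_1 distance
        "finf": np.linalg.norm(v, np.inf),  # Finsler ell_infinity distance
    }

# For P = I and Q = diag(e, 1/e), the VVD is (1, -1):
d = scalar_distances(np.array([1.0, -1.0]))
```

Here `d["riem"]` equals $\sqrt{2}$, `d["f1"]` equals $2$, and `d["finf"]` equals $1$.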

5. Extension to General Symmetric Spaces

The vector-valued distance admits a generalization to all noncompact Riemannian symmetric spaces $G/K$ of rank $r$:

$$\mathbf d: (G/K) \times (G/K) \to \mathbb{R}^r,$$

defined by selecting a maximal flat through a reference point, moving one argument into it by the group action, and reading off the resulting “log-coordinates.” SPDₙ is a canonical instance with $r = n$. Analogous constructions apply to Grassmannians, orthogonal groups, and beyond, rooted in the general Lie-theoretic setting of Weyl chambers and Cartan subalgebras (see Helgason 1978; Kapovich, Leeb, and Porti 2017).

6. Algorithmic Computation of VVD

The computation of the VVD between $P, Q \in \mathrm{SPD}_n$ proceeds via the following pseudocode:

  1. Compute $M \leftarrow P^{-1} Q$ (in practice, solve the linear system $P X = Q$ for $X$).
  2. Perform the eigendecomposition

$$M = U \, \mathrm{diag}(\lambda_1, \ldots, \lambda_n) \, U^{-1}, \quad \lambda_1 \geq \cdots \geq \lambda_n > 0.$$

  3. Assemble $v \in \mathbb{R}^n$ by setting $v_i \leftarrow \log \lambda_i$ for $i = 1, \ldots, n$.
  4. Return $v$.

The dominant computational cost arises from the eigendecomposition, which is $\mathcal{O}(n^3)$. Implementation details, proofs, and further algorithmic considerations are discussed in the appendices of López et al. (2021).
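A direct transcription of the four steps into numpy might look as follows (the function name `vvd_direct` is illustrative; the eigendecomposition in step 2 dominates the $\mathcal{O}(n^3)$ cost):

```python
import numpy as np

def vvd_direct(P, Q):
    """Transcription of the pseudocode above for the vector-valued distance."""
    M = np.linalg.solve(P, Q)        # step 1: M = P^{-1} Q via the system P X = Q
    lam = np.linalg.eigvals(M).real  # step 2: spectrum of M (real and positive)
    lam = np.sort(lam)[::-1]         # enforce lambda_1 >= ... >= lambda_n
    v = np.log(lam)                  # step 3: v_i = log lambda_i
    return v                         # step 4
```

For $P = I$ and $Q = \mathrm{diag}(e, 1, 1)$ this returns $(1, 0, 0)$.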

7. Applications and Visualization

The VVD supports a spectrum of applications in geometric representation learning. In tasks such as knowledge graph completion, item recommendation, and question answering, models utilizing SPDₙ and its VVD-based geometry demonstrate improved performance relative to analogous Euclidean and hyperbolic architectures. The vector-valued structure of the VVD also enables direct visualization of learned embeddings, revealing clear separation between positive and negative samples.

A plausible implication is enhanced interpretability: the decomposition of distances along Weyl chamber directions provides insights into how complex data relationships unfold in high-rank, non-Euclidean spaces.


For a rigorous treatment of the geometric and algebraic properties, as well as applications and proof details, see the full development and appendices in "Vector-valued Distance and Gyrocalculus on the Space of Symmetric Positive Definite Matrices" (López et al., 2021).

