
Geometric Jensen-Shannon Divergence Between Gaussian Measures On Hilbert Space (2506.10494v1)

Published 12 Jun 2025 in math.PR, cs.IT, math.IT, and stat.ML

Abstract: This work studies the Geometric Jensen-Shannon divergence, based on the notion of geometric mean of probability measures, in the setting of Gaussian measures on an infinite-dimensional Hilbert space. On the set of all Gaussian measures equivalent to a fixed one, we present a closed form expression for this divergence that directly generalizes the finite-dimensional version. Using the notion of Log-Determinant divergences between positive definite unitized trace class operators, we then define a Regularized Geometric Jensen-Shannon divergence that is valid for any pair of Gaussian measures and that recovers the exact Geometric Jensen-Shannon divergence between two equivalent Gaussian measures when the regularization parameter tends to zero.

Summary

  • The paper introduces a geometric extension of Jensen-Shannon divergence for Gaussian measures in infinite-dimensional Hilbert spaces.
  • It derives closed-form expressions using Log-Determinant divergence and a regularized variant to handle non-equivalent measures.
  • The results support applications in fields such as statistical physics and machine learning that require divergences between high- or infinite-dimensional Gaussian distributions.

Geometric Jensen-Shannon Divergence Between Gaussian Measures on Hilbert Space

The paper explores the Geometric Jensen-Shannon (GJS) divergence for Gaussian measures defined on an infinite-dimensional Hilbert space. It extends the classical Jensen-Shannon divergence of information theory into a geometric form, built on the geometric mean of probability measures rather than the arithmetic mixture, and specializes it to Gaussian measures, which are central to probabilistic and statistical applications.
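For orientation, the classical Jensen-Shannon divergence averages two Kullback-Leibler terms taken against the arithmetic mixture of the two distributions, while the geometric variant replaces that mixture with a normalized geometric mean. A commonly used finite-dimensional formulation is sketched below; the interpolation weight α and this exact weighting are a standard convention, not necessarily the paper's infinite-dimensional notation.

```latex
% Classical Jensen-Shannon divergence: KL terms against the arithmetic mixture.
\[
  \mathrm{JS}(p, q)
    = \tfrac{1}{2}\,\mathrm{KL}\!\left(p \,\Big\|\, \tfrac{p + q}{2}\right)
    + \tfrac{1}{2}\,\mathrm{KL}\!\left(q \,\Big\|\, \tfrac{p + q}{2}\right).
\]
% Geometric Jensen-Shannon divergence: the mixture is replaced by the
% normalized geometric mean G_alpha, with interpolation weight alpha in (0, 1).
\[
  \mathrm{GJS}_{\alpha}(p, q)
    = (1 - \alpha)\,\mathrm{KL}\!\left(p \,\|\, G_{\alpha}\right)
    + \alpha\,\mathrm{KL}\!\left(q \,\|\, G_{\alpha}\right),
  \qquad
  G_{\alpha} \propto p^{\,1 - \alpha}\, q^{\,\alpha}.
\]
```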

Mathematical and Theoretical Foundations

The authors formulate the GJS divergence on the set of Gaussian measures equivalent to a fixed Gaussian measure on a Hilbert space. The infinite-dimensional setting poses considerable mathematical challenges: there is no natural reference measure analogous to the Lebesgue measure of finite-dimensional spaces, so Gaussian densities cannot be defined in the usual way and, where they exist, must be taken relative to another Gaussian measure.

Construction and Extension

Building on the closed-form expression of the GJS divergence for equivalent Gaussian measures, derived via Log-Determinant divergences between positive definite unitized trace class operators, the paper introduces a Regularized Geometric Jensen-Shannon divergence. The regularized variant applies to any pair of Gaussian measures, removing the earlier restriction to equivalent measures.
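For readers unfamiliar with Log-Determinant divergences, one standard finite-dimensional instance (often called the Stein or Burg divergence) is shown below. The paper's version replaces matrices with positive definite unitized trace class operators and uses extended trace and determinant notions, so this formula is only a reference point, not the operator-level definition used in the paper.

```latex
% A standard finite-dimensional Log-Determinant (Stein / Burg) divergence
% between symmetric positive definite n x n matrices A and B:
\[
  d_{\mathrm{logdet}}(A, B)
    = \operatorname{tr}\!\left(B^{-1} A\right)
    - \log\det\!\left(B^{-1} A\right) - n,
  \qquad A, B \in \mathrm{Sym}^{++}(n).
\]
```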

A key construction is the use of abstract means of probability measures, in particular the geometric mean, which yields weighted geometric means of Gaussian measures and, from them, GJS divergence expressions beyond the finite-dimensional setting. The regularized expression recovers the exact GJS divergence between two equivalent Gaussian measures as the regularization parameter tends to zero, so the divergence can be applied to non-equivalent measures with the approximation controlled by the regularization parameter.
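The following minimal Python sketch illustrates the finite-dimensional analogue of these ideas: the α-weighted geometric mean of two Gaussian densities is again Gaussian (precision matrices combine as a weighted sum), the GJS divergence is the corresponding weighted sum of KL terms, and adding γI to the covariances mimics the role of a regularization parameter. The function names, the weight `alpha`, and the `gamma * np.eye(d)` regularization are illustrative assumptions, not the paper's notation or its operator-theoretic formulas.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for finite-dimensional Gaussians."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)
        + diff @ cov1_inv @ diff
        - d
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

def geometric_mean_gaussian(mu0, cov0, mu1, cov1, alpha):
    """Normalized alpha-geometric mean of two Gaussian densities (again Gaussian)."""
    prec0, prec1 = np.linalg.inv(cov0), np.linalg.inv(cov1)
    cov_a = np.linalg.inv((1 - alpha) * prec0 + alpha * prec1)
    mu_a = cov_a @ ((1 - alpha) * prec0 @ mu0 + alpha * prec1 @ mu1)
    return mu_a, cov_a

def gjs_gaussian(mu0, cov0, mu1, cov1, alpha=0.5, gamma=0.0):
    """Finite-dimensional geometric JS divergence; gamma*I stands in for regularization."""
    d = mu0.shape[0]
    cov0_r, cov1_r = cov0 + gamma * np.eye(d), cov1 + gamma * np.eye(d)
    mu_a, cov_a = geometric_mean_gaussian(mu0, cov0_r, mu1, cov1_r, alpha)
    return ((1 - alpha) * kl_gaussian(mu0, cov0_r, mu_a, cov_a)
            + alpha * kl_gaussian(mu1, cov1_r, mu_a, cov_a))

# Example: the regularized value approaches the unregularized one as gamma -> 0.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
cov0, cov1 = A @ A.T + np.eye(3), B @ B.T + np.eye(3)
mu0, mu1 = rng.standard_normal(3), rng.standard_normal(3)
for gamma in (1.0, 1e-2, 1e-4, 0.0):
    print(f"gamma = {gamma:g}:  GJS = {gjs_gaussian(mu0, cov0, mu1, cov1, gamma=gamma):.6f}")
```

As γ decreases, the printed values settle toward the exact finite-dimensional GJS divergence, mirroring the convergence stated for the regularized divergence in the infinite-dimensional setting.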

Implications and Future Directions

The methods and results add to the toolkit for handling Gaussian measures in infinite-dimensional settings, with natural relevance to areas such as statistical physics, quantum mechanics, and functional data analysis. Because analysis in infinite-dimensional spaces is computationally demanding, the closed form of the regularized divergence may yield substantial improvements in numerical methods involving Gaussian measures.

The paper points toward future work on practical applications, especially in Artificial Intelligence and Machine Learning settings where infinite-dimensional data structures and distributions arise, notably high-dimensional Bayesian statistics and functional data analysis. Improvements in computational efficiency and integration with existing machine learning frameworks are natural next steps for handling such complex data.

The paper thus makes a substantial contribution to the mathematical theory of divergence measures and their applications in high-dimensional statistical learning and data science, providing foundational techniques for rigorous analysis in infinite-dimensional probability spaces.