Matrix-based Rényi Entropy
- Matrix-based Rényi entropy is a kernel-based functional that estimates entropy directly from the eigen-spectrum of normalized Gram matrices without explicit density estimation.
- It employs randomized numerical linear algebra and low-rank approximations to overcome the cubic cost of traditional eigendecomposition, ensuring scalability for large datasets.
- The framework extends to multivariate, conditional, and quantum settings, enabling applications in deep learning, feature selection, and quantification of quantumness.
Matrix-based Rényi entropy is a functional that enables direct estimation of information-theoretic quantities from data via the spectrum of kernel (Gram) matrices, bypassing explicit density estimation. The framework extends Rényi entropy of order $\alpha$, a one-parameter generalization of classical Shannon entropy, to structured data, random processes, and multivariate settings by replacing powers of probability vectors with order-$\alpha$ matrix powers of a normalized Gram matrix, making it broadly applicable in machine learning, information theory, and quantum information contexts. Developments in randomized numerical linear algebra and low-rank representations have yielded scalable and robust computation schemes for large-scale data.
1. Definition and Core Properties
Given samples $\{x_i\}_{i=1}^n$ and a positive definite kernel $\kappa$, one constructs a symmetric positive semidefinite (SPD) kernel (Gram) matrix $K$ with $K_{ij} = \kappa(x_i, x_j)$. After normalization $A_{ij} = \frac{1}{n}\frac{K_{ij}}{\sqrt{K_{ii}K_{jj}}}$ so that $\operatorname{tr}(A) = 1$, the matrix-based Rényi entropy of order $\alpha$ ($\alpha > 0$, $\alpha \neq 1$) is defined by
$$S_\alpha(A) = \frac{1}{1-\alpha} \log_2 \big[ \operatorname{tr}(A^\alpha) \big] = \frac{1}{1-\alpha} \log_2 \left[ \sum_{i=1}^{n} \lambda_i(A)^\alpha \right],$$
where $\lambda_i(A)$ are the eigenvalues of $A$ (Dong et al., 2022).
For $\alpha \to 1$, $S_\alpha(A)$ converges to the matrix-based analogue of Shannon's entropy, $S_1(A) = -\operatorname{tr}(A \log_2 A)$. The choice of kernel and normalization guarantees that $\lambda_i(A) \in [0, 1]$ and $\sum_i \lambda_i(A) = 1$. The regime $\alpha < 1$ emphasizes low-eigenvalue (tail) structure; $\alpha > 1$ emphasizes leading eigenmodes.
This definition subsumes classical, quantum, and nonparametric data-driven settings (Reisizadeh et al., 2016, Yu et al., 2018).
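To make the definition concrete, the following is a minimal NumPy sketch that builds a Gram matrix and evaluates $S_\alpha$ by eigendecomposition. The Gaussian (RBF) kernel, the bandwidth `sigma`, and the function names are illustrative assumptions, not a reference implementation from the cited works.

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    """RBF Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def renyi_entropy(K, alpha=2.0):
    """Matrix-based Renyi entropy S_alpha of the trace-normalized Gram matrix."""
    A = K / np.trace(K)              # tr(A) = 1; equals K/n for RBF since K_ii = 1
    lam = np.linalg.eigvalsh(A)      # eigenvalues of the SPD matrix
    lam = np.clip(lam, 0.0, None)    # clamp tiny negative round-off
    if np.isclose(alpha, 1.0):       # alpha -> 1: Shannon / von Neumann limit
        lam = lam[lam > 0]
        return float(-np.sum(lam * np.log2(lam)))
    return float(np.log2(np.sum(lam ** alpha)) / (1.0 - alpha))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 samples in R^5
print(renyi_entropy(gram_matrix(X), alpha=2.0))
```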
2. Computational Considerations and Randomized Approximations
The direct computation of $S_\alpha(A)$ via eigendecomposition has $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ memory complexity, which is prohibitive for large $n$. To address scalability, stochastic trace estimation is used:
$$\operatorname{tr}(A^\alpha) \approx \frac{1}{M} \sum_{m=1}^{M} v_m^\top A^\alpha v_m,$$
where each $v_m$ is a random probe (Gaussian or Rademacher). The empirical estimator converts the trace into a sum of matrix-vector products (Dong et al., 2022, Gong et al., 2021).
For integer $\alpha$, implicit powers can be computed iteratively as $\alpha$ successive matrix-vector products, $A^\alpha v = A(A(\cdots(Av)))$; for non-integer $\alpha$, polynomial approximations (Taylor, Chebyshev) or Lanczos quadrature are effective. The total complexity reduces to $\mathcal{O}(n^2 M k)$ for dense matrices, where $M$ is the number of probe vectors and $k$ the polynomial degree, both sublinear in $n$. Rigorous error bounds are established for all these methods, with theoretical guarantees matching minimax lower bounds up to logarithmic factors (Dong et al., 2022, Gong et al., 2021).
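The sketch below illustrates the integer-$\alpha$ case: Hutchinson probing with Rademacher vectors combined with iterated matrix-vector products, so that $A^\alpha$ is never formed explicitly. The probe count and function name are illustrative choices.

```python
import numpy as np

def hutchinson_renyi(A, alpha=3, num_probes=64, rng=None):
    """Estimate S_alpha(A) for integer alpha >= 2 without eigendecomposition,
    using Hutchinson's estimator tr(A^alpha) ~ (1/M) sum_m v_m^T A^alpha v_m."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    acc = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        w = v
        for _ in range(alpha):               # alpha matrix-vector products:
            w = A @ w                        # computes A^alpha v implicitly
        acc += v @ w
    return np.log2(acc / num_probes) / (1.0 - alpha)
```

Each probe costs $\alpha$ dense matrix-vector products, so the whole estimate runs in $\mathcal{O}(n^2 M \alpha)$ rather than the $\mathcal{O}(n^3)$ of a full eigendecomposition.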
Block low-rank approximations further accelerate computation in the presence of structure, e.g., after clustering matrix rows/columns (Gong et al., 2021).
3. Multivariate and Joint Extensions
Matrix-based Rényi entropy has been extended to joint, conditional, and multivariate cases relevant for mutual information and interaction information estimation. For random variables $X$ and $Y$ with trace-normalized Gram matrices $A$ and $B$ built from paired samples, the joint entropy uses the Hadamard product $\circ$:
$$S_\alpha(A, B) = S_\alpha\!\left( \frac{A \circ B}{\operatorname{tr}(A \circ B)} \right).$$
Analogous forms yield matrix-based mutual information, $I_\alpha(A; B) = S_\alpha(A) + S_\alpha(B) - S_\alpha(A, B)$, total correlation, and a suite of interaction information quantities (Yu et al., 2018). The resulting functionals are symmetric and subadditive, and admit tight bounds connecting marginal and joint entropies.
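A minimal sketch of the Hadamard-product construction, assuming `A` and `B` are already trace-normalized Gram matrices over the same samples; the function names are illustrative.

```python
import numpy as np

def spectral_renyi(M, alpha=2.0):
    """S_alpha of an already trace-normalized SPD matrix."""
    lam = np.clip(np.linalg.eigvalsh(M), 0.0, None)
    return float(np.log2(np.sum(lam ** alpha)) / (1.0 - alpha))

def joint_renyi(A, B, alpha=2.0):
    """Joint entropy S_alpha(A, B) via the normalized Hadamard product."""
    C = A * B                        # elementwise (Hadamard) product
    return spectral_renyi(C / np.trace(C), alpha)

def mutual_information(A, B, alpha=2.0):
    """I_alpha(X; Y) = S_alpha(A) + S_alpha(B) - S_alpha(A, B)."""
    return spectral_renyi(A, alpha) + spectral_renyi(B, alpha) - joint_renyi(A, B, alpha)
```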
4. Matrix Inequalities, Quantum Context, and Theoretical Bounds
In the quantum case, $\rho$ is a density matrix. The Rényi relative entropy,
$$D_\alpha(\rho \,\|\, \sigma) = \frac{1}{\alpha - 1} \log_2 \operatorname{tr}\!\left( \rho^\alpha \sigma^{1-\alpha} \right),$$
gives rise to entropic bounds on conditional and mutual information, with tightness and equality characterized in terms of spectral properties (e.g., flat spectra, proportional supports).
Key bounds include:
- Lower and upper bounds on $S_\alpha(\rho)$ depending only on the rank and nonzero spectrum, e.g., $0 \le S_\alpha(\rho) \le \log_2 \operatorname{rank}(\rho)$ (see the numerical check below).
- Determinant-trace inequalities and log-det bounds for $S_\alpha(\rho)$ (Reisizadeh et al., 2016).
Such results allow the replacement of full eigendecomposition by easier-to-compute determinant or trace constraints, which are more accessible for quantum coding theorems and physical experiments.
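As a quick numerical sanity check of the rank bound above (an illustration under the stated definitions, not code from the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(8, 3))
rho = F @ F.T
rho /= np.trace(rho)                     # random 8x8 density matrix of rank 3

alpha = 2.0
lam = np.clip(np.linalg.eigvalsh(rho), 0.0, None)
lam = lam[lam > 1e-12]                   # keep the nonzero spectrum
S = np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)
assert 0.0 <= S <= np.log2(len(lam)) + 1e-9   # 0 <= S_alpha <= log2(rank)
print(S, np.log2(len(lam)))
```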
5. Low-Rank Matrix-based Rényi Entropy and Robust Approximations
To remedy sensitivity to noise and further enhance scalability, low-rank variants have been introduced. The low-rank matrix-based Rényi entropy retains only the leading $k$ eigenvalues $\lambda_1 \ge \cdots \ge \lambda_k$ of $A$:
$$\tilde{S}_\alpha^k(A) = \frac{1}{1-\alpha} \log_2 \left[ \sum_{i=1}^{k} \lambda_i^\alpha \right],$$
where $k \ll n$. This truncation makes $\tilde{S}_\alpha^k$ more sensitive to informative perturbations (which modify top eigenmodes) and less sensitive to noise (which is spread across the tail), providing demonstrably improved robustness (Dong et al., 2022). Lanczos and random projection methods afford $\mathcal{O}(n^2 k)$ or $\mathcal{O}(n k^2)$ computation for large $n$.
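A minimal sketch of the truncated estimator using SciPy's iterative eigensolver for the top-$k$ eigenvalues. Note that formulations in the literature differ in how residual tail mass is handled; this version simply drops it, and the function name is an illustrative assumption.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def lowrank_renyi(A, k, alpha=2.0):
    """Truncated (low-rank) matrix-based Renyi entropy from the k largest
    eigenvalues of the trace-normalized Gram matrix A (residual tail mass
    is discarded here; published variants may treat it differently)."""
    lam = eigsh(A, k=k, which='LM', return_eigenvectors=False)
    lam = np.clip(lam, 0.0, None)    # clamp round-off below zero
    return float(np.log2(np.sum(lam ** alpha)) / (1.0 - alpha))
```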
Empirical results confirm substantial speedups over full-matrix methods with negligible loss in accuracy for tasks such as information bottleneck optimization and feature selection (Dong et al., 2022).
6. Generalizations, Cross-Entropy, and Axiomatic Properties
Matrix-based Rényi entropy and its relatives, such as Rényi's $\alpha$-cross-entropies, are formulated in RKHS using Gram matrices, enabling unbiased, nonparametric, and minimax-optimal estimation even for high-dimensional distributions (Sledge et al., 2021). For normalized empirical Gram matrices $A$ and $B$ (from samples of $P$ and $Q$, respectively), the cross-entropy is evaluated directly from the spectral structure of the pair, with mirrored and tripartite generalizations. These quantities satisfy all Rényi divergence axioms: non-negativity, continuity, monotonicity, additivity, and data processing inequalities (for suitable $\alpha$ and Gram arguments); a hedged divergence sketch follows below.
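The exact cross-entropy normalization of (Sledge et al., 2021) is not reproduced here; instead, the sketch below illustrates the closely related Petz-type Rényi divergence between two trace-normalized Gram matrices. The function names and the choice of the Petz form are assumptions for illustration only.

```python
import numpy as np

def spd_power(M, p, eps=1e-12):
    """Fractional power of an SPD matrix via its eigendecomposition."""
    lam, U = np.linalg.eigh(M)
    lam = np.clip(lam, eps, None)    # regularize near-zero eigenvalues
    return (U * lam ** p) @ U.T

def petz_renyi_divergence(A, B, alpha=0.5):
    """Petz-type Renyi divergence between trace-one SPD matrices:
    D_alpha(A || B) = (1 / (alpha - 1)) * log2 tr(A^alpha B^(1 - alpha))."""
    val = np.trace(spd_power(A, alpha) @ spd_power(B, 1.0 - alpha))
    return float(np.log2(val) / (alpha - 1.0))
```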
For pure-state quantum ensembles, Gram-matrix-based $(\alpha, z)$-Rényi coherence measures quantify ensemble quantumness in a rigorous resource-theoretic fashion, unifying Petz-type, sandwiched, and Tsallis entropies under a common umbrella, and connecting with majorization-based coherence and operational quantumness distinctions (Yuan et al., 2022).
7. Applications and Empirical Evidence
Matrix-based Rényi entropy has been applied to a wide range of problems:
- Information bottleneck and deep learning: scalable training of bottleneck-regularized networks on large datasets (CIFAR-10), with substantial speedup and no loss of prediction accuracy for low-rank or randomized estimators (Dong et al., 2022, Dong et al., 2022).
- Feature selection: robust and fast evaluation of mutual information for selecting informative features in high-dimensional classification (hyperspectral imaging, UCI datasets), outperforming classical PDF-based or histogram-based criteria (Yu et al., 2018, Dong et al., 2022).
- Quantification of quantumness: computation of Gram-matrix Rényi coherence for pure-state quantum ensembles, with closed-form expressions for canonical state families (Yuan et al., 2022).
- Stationary random processes: exact formulas for Rényi entropy rates of vector-valued Gaussian processes via the spectrum of block Toeplitz matrices (Mulherkar, 2018).
8. Open Problems and Future Perspectives
Active lines of research include:
- Adaptive polynomial interpolation and error control near limiting regimes of $\alpha$ (e.g., $\alpha \to 1$).
- Extensions to streaming and distributed variants for ultra-large-scale datasets.
- Extensions of Gram-matrix Rényi coherence from pure to mixed-state ensembles in quantum information.
- Further theoretical investigation of the tradeoff between spectral truncation, robustness, and approximation error in low-rank entropy computation.
Matrix-based Rényi entropy and its algorithmic ecosystem thus provide an effective, theoretically grounded, and computationally tractable framework for information-theoretic analysis across machine learning, data science, and quantum domains (Dong et al., 2022, Gong et al., 2021, Dong et al., 2022, Sledge et al., 2021, Yuan et al., 2022).