Hilbert coVariance Filters (HVFs)
- Hilbert coVariance Filters (HVFs) are advanced filtering mechanisms defined over infinite-dimensional Hilbert spaces, using spectral decomposition of covariance operators for functional signal processing.
- They enable convolution, feature extraction, and dimensionality reduction through both direct spectral modulation and polynomial filter approximations.
- Empirically, HVFs underpin Hilbert coVariance Networks (HVNs) that outperform standard classifiers by robustly recovering FPCA scores and ensuring noise-adaptive, cross-component filtering.
Hilbert coVariance Filters (HVFs) are operator-based filtering mechanisms defined over infinite-dimensional Hilbert spaces, generalizing spectral architectures and graph filtering paradigms to functional domains. They are central to learning frameworks for signals modeled as random variables in separable Hilbert spaces, offering a principled approach to convolution, feature extraction, and dimensionality reduction through direct manipulation of the covariance operator spectrum. HVFs are foundational for designing Hilbert coVariance Networks (HVNs), which extend graph neural network architectures to functional and kernel spaces and provide robust, transferable, and noise-adaptive filtering for both synthetic and real-world time-series data (Battiloro et al., 16 Sep 2025).
1. Formal Definition and Mathematical Structure
The construction of an HVF begins with the spectral decomposition of the covariance operator associated with a zero-mean random variable $x \in \mathcal{H}$, where $\mathcal{H}$ is a separable, possibly infinite-dimensional Hilbert space. The covariance operator $C$, defined by $Cf = \mathbb{E}[\langle x, f \rangle_{\mathcal{H}}\, x]$, is compact, self-adjoint, and trace-class, ensuring the existence of an orthonormal basis of eigenfunctions $\{\varphi_i\}_{i \ge 1}$ and corresponding non-negative eigenvalues $\{\lambda_i\}_{i \ge 1}$ such that
$$C \;=\; \sum_{i \ge 1} \lambda_i \,\langle \cdot, \varphi_i \rangle_{\mathcal{H}}\, \varphi_i .$$
The Hilbert coVariance Fourier Transform (HVFT) of $x$ is then $\hat{x}(i) = \langle x, \varphi_i \rangle_{\mathcal{H}}$, $i \ge 1$. An HVF is defined by a bounded Borel function $h$ (the frequency response), yielding the filter
$$h(C)\, x \;=\; \sum_{i \ge 1} h(\lambda_i)\, \hat{x}(i)\, \varphi_i,$$
where $\hat{x}(i)$ is the projection of $x$ onto $\varphi_i$. Thus, each spectral component is modulated by $h(\lambda_i)$, fully characterizing the filter's action in the HVFT domain.
Alternatively, spatial (polynomial) HVFs can be realized by polynomial expansions of the covariance operator,
$$h(C) \;=\; \sum_{k=0}^{K} h_k\, C^{k},$$
with coefficients $h_0, \dots, h_K$, giving the frequency response $h(\lambda) = \sum_{k=0}^{K} h_k \lambda^{k}$ and avoiding the need for an explicit eigendecomposition.
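To make the two constructions concrete, the following sketch applies a spectral HVF and a low-degree polynomial HVF to a discretized signal, using an empirical covariance matrix as a stand-in for $C$; the frequency response and coefficients are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 64                                   # samples and grid points
X = rng.standard_normal((N, M)) @ rng.standard_normal((M, M)) / M   # correlated signals
C = X.T @ X / N                                  # empirical covariance (zero-mean assumed)

# Spectral HVF: modulate each covariance eigencomponent by h(lambda).
eigvals, eigvecs = np.linalg.eigh(C)
h = lambda lam: lam / (lam + 1e-2)               # illustrative frequency response
x = X[0]                                         # one signal to filter
x_hat = eigvecs.T @ x                            # HVFT: projections onto eigenvectors
y_spectral = eigvecs @ (h(eigvals) * x_hat)      # modulate spectrum, synthesize back

# Polynomial HVF: h(C) = sum_k h_k C^k, no eigendecomposition required.
h_coeffs = [0.0, 1.0, -0.5]                      # illustrative coefficients h_0, h_1, h_2
y_poly = sum(hk * np.linalg.matrix_power(C, k) @ x for k, hk in enumerate(h_coeffs))
```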
2. Integration into Hilbert Space Learning Frameworks
HVFs are integral to the design of Hilbert coVariance Networks (HVNs), which are layered architectures generalizing convolutional neural networks to functional spaces. The framework proceeds as follows:
- Spectral Foundation: Signals are analyzed and filtered in the HVFT domain established by the covariance operator’s spectrum.
- Convolutional HVF Banks: Each layer applies a bank of HVFs—distinct spectral or polynomial responses—to multiple input channels.
- Nonlinear Activations: Outputs are processed by a pointwise nonlinear operator (activation function) $\sigma$.
- Stacking: These filtering/activation layers are composed, analogously to deep neural networks, enabling expressive representations in high- or infinite-dimensional spaces.
The generic propagation rule for the $\ell$-th layer, mapping $F_{\ell-1}$ input channels $\{x_{\ell-1}^{v}\}$ to $F_{\ell}$ output channels, is
$$x_{\ell}^{u} \;=\; \sigma\Big( \sum_{v=1}^{F_{\ell-1}} h_{\ell}^{uv}(C)\, x_{\ell-1}^{v} \Big), \qquad u = 1, \dots, F_{\ell}.$$
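A rough sketch of one such layer, assuming polynomial filter banks over a discretized covariance matrix and a ReLU activation; the function name `hvn_layer` and the tensor layout are illustrative, not the paper's API.

```python
import numpy as np

def hvn_layer(X_in, C, H, sigma=lambda z: np.maximum(z, 0.0)):
    """One HVN layer sketch.
    X_in: (F_in, M) array of input channels (discretized signals).
    C: (M, M) discretized covariance matrix (symmetric).
    H: (F_out, F_in, K+1) polynomial coefficients h_k per output/input channel pair.
    """
    F_out, F_in, K1 = H.shape
    # powers[k, v] = C^k applied to input channel v.
    powers = np.empty((K1, F_in, C.shape[0]))
    powers[0] = X_in
    for k in range(1, K1):
        powers[k] = powers[k - 1] @ C          # C is symmetric, so row-wise multiplication applies C
    # Filter bank: sum over input channels v and polynomial orders k, then the nonlinearity.
    X_out = np.einsum('uvk,kvm->um', H, powers)
    return sigma(X_out)

# Illustrative usage with random data.
rng = np.random.default_rng(0)
M, F_in, F_out, K = 32, 2, 4, 3
C = np.cov(rng.standard_normal((200, M)), rowvar=False)
X_in = rng.standard_normal((F_in, M))
H = rng.standard_normal((F_out, F_in, K + 1)) * 0.1
X_out = hvn_layer(X_in, C, H)                   # shape (F_out, M)
```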
3. Discretization and Empirical Implementation
Due to the infinite-dimensional nature of $\mathcal{H}$ and the fact that the covariance operator $C$ is unknown, HVF-based architectures require a principled discretization:
- Empirical Covariance: Given sampled signals $x_1, \dots, x_N \in \mathcal{H}$, the empirical covariance operator is $\hat{C} = \frac{1}{N} \sum_{n=1}^{N} \langle \cdot, x_n \rangle_{\mathcal{H}}\, x_n$.
- Bounded Linear Discretization: Bounded linear operators $\mathcal{D}: \mathcal{H} \to \mathbb{R}^{M}$ (e.g., built from Riesz-representable functionals or bin averaging) project $\mathcal{H}$-valued signals onto finite-dimensional vectors, with a companion lifting operator $\mathcal{L}: \mathbb{R}^{M} \to \mathcal{H}$ mapping back to $\mathcal{H}$.
The finite-dimensional empirical covariance matrix is $\hat{\mathbf{C}} = \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}_n \mathbf{x}_n^{\top}$, with $\mathbf{x}_n = \mathcal{D} x_n \in \mathbb{R}^{M}$. Post-discretization, all HVF and HVN operations can be performed within $\mathbb{R}^{M}$, followed by potential lifting back to $\mathcal{H}$.
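A minimal sketch of the bin-averaging discretization and the resulting empirical covariance matrix; the bin count, signal model, and helper name `bin_average` are chosen purely for illustration.

```python
import numpy as np

def bin_average(samples, M):
    """Discretize functional samples by bin averaging.
    samples: (N, T) array, each row a densely observed signal;
    returns an (N, M) matrix of averages over M contiguous bins."""
    N, T = samples.shape
    edges = np.linspace(0, T, M + 1).astype(int)
    return np.stack(
        [samples[:, edges[m]:edges[m + 1]].mean(axis=1) for m in range(M)], axis=1
    )

# Example: N densely sampled signals, reduced to M bins.
rng = np.random.default_rng(1)
N, T, M = 300, 512, 32
raw = np.cumsum(rng.standard_normal((N, T)), axis=1)   # random-walk-like signals

Xd = bin_average(raw, M)                 # discretized signals, shape (N, M)
Xd = Xd - Xd.mean(axis=0)                # center (zero-mean assumption)
C_hat = Xd.T @ Xd / N                    # finite-dimensional empirical covariance
```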
4. Connection to Functional Principal Component Analysis (FPCA)
A key result is that empirical HVFs can recover the FPCA of filtered signals. For any distinct eigenvalue $\lambda$ of the empirical covariance operator $\hat{C}$, there exists a polynomial filter $h_\lambda$ of degree at most the number of distinct eigenvalues such that $h_\lambda(\hat{C}) = P_\lambda$, where $P_\lambda$ projects onto the $\lambda$-eigenspace. For any eigenfunction $\varphi$ in that eigenspace,
$$\big\langle h_\lambda(\hat{C})\, x, \varphi \big\rangle_{\mathcal{H}} \;=\; \langle x, \varphi \rangle_{\mathcal{H}}.$$
Thus, HVFs can recover the FPCA scores, and hence the variance-maximizing directions in the sample, unifying spectral dimensionality reduction and harmonic filtering in the operator framework.
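A numerical illustration of this recovery, assuming the polynomial is built by Lagrange interpolation over the distinct eigenvalues of the empirical covariance matrix (one standard way to realize such a filter; the paper's exact construction may differ in details).

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 16, 500
X = rng.standard_normal((N, M)) @ rng.standard_normal((M, M)) / np.sqrt(M)
C_hat = X.T @ X / N                               # empirical covariance matrix

eigvals, eigvecs = np.linalg.eigh(C_hat)
distinct = np.unique(np.round(eigvals, 10))       # distinct eigenvalues
lam = distinct[-1]                                # target the largest eigenvalue

def h_lam(z):
    """Lagrange polynomial with h_lam(lam) = 1 and h_lam(mu) = 0 for the other
    distinct eigenvalues mu, so that h_lam(C_hat) equals the projector P_lam."""
    out = np.ones_like(z)
    for mu in distinct:
        if not np.isclose(mu, lam):
            out *= (z - mu) / (lam - mu)
    return out

# Evaluate the polynomial filter via the spectral decomposition
# (equivalent to applying the polynomial directly to C_hat).
P_lam = eigvecs @ np.diag(h_lam(eigvals)) @ eigvecs.T

# The FPCA score along the leading eigenvector is preserved by the filter output.
x = X[0]
phi = eigvecs[:, -1]
score_direct = x @ phi
score_filtered = (P_lam @ x) @ phi
assert np.isclose(score_direct, score_filtered)
```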
5. Applications and Empirical Validation
HVFs and HVNs are applicable in:
- Functional data analysis: Signals modeled in $L^2$ function spaces.
- RKHS-based learning: Point-evaluation functionals naturally define bounded projections compatible with HVFs.
- Time-series and sequence modeling: For both multivariate and infinite sequences, bin averaging or direct sequence representation enables HVF pipelines.
- Cross-domain generality: The framework accommodates various domains (multivariate functions, signal processing, kernel-defined features).
Empirical validation demonstrates that HVNs surpass MLP and FPCA-based classifiers on both synthetic classification tasks (constructed from discriminative Gaussian-process-generated time series) and real-world ones (e.g., ECG5000), across different discretization resolutions. The robustness of HVFs is attributed to their ability to globally process cross-component covariance rather than relying solely on local or component-wise statistics (Battiloro et al., 16 Sep 2025).
6. Theoretical and Practical Implications
HVFs extend graph signal processing and spectral networks to continuous and functional domains, unifying classical filtering, kernel learning, and functional data analysis under the spectral action of the covariance operator. The framework's spectral and polynomial filter construction enables stable, consistent recovery of principal components (including in sample-dependent eigenspaces) and supports noise-robust, structure-preserving feature learning.
In summary, Hilbert coVariance Filters represent a mathematically principled, highly general, and practically effective approach to filtering and learning over infinite-dimensional spaces, enabling new architectures and solutions for high-dimensional, functional, and kernelized data models. The methodology ensures both theoretical soundness (via operator spectral theory and careful discretization) and empirical improvement, opening pathways for further development in functional learning and operator-theoretic machine learning.