RR-FBTC: Bayesian Tensor Completion
- The paper introduces a variational Bayesian method that integrates functional tensor modeling with ARD priors to automatically reveal the effective tensor rank.
- It employs multi-output Gaussian process priors with closed-form variational updates, enabling probabilistic uncertainty quantification and robust missing data recovery.
- Empirical studies on synthetic and real-world datasets, such as MRI and image inpainting, demonstrate state-of-the-art recovery accuracy and computational scalability.
Rank-Revealing Functional Bayesian Tensor Completion (RR-FBTC) is a variational Bayesian framework for low-rank tensor completion that enables rigorous automatic rank determination in both discrete and functional (continuous-indexed) tensor settings. RR-FBTC integrates functional tensor modeling with multi-output Gaussian process (MOGP) priors or hierarchical sparsity priors, providing a unified approach for rank-adaptive structure learning, probabilistic uncertainty quantification, and principled missing data recovery across discrete, continuous, and hybrid tensor domains (Li et al., 25 Dec 2025, Zhao et al., 2015, Bazerque et al., 2013).
1. Mathematical Model: Functional Tensor Completion
RR-FBTC models a $D$-mode tensor (potentially defined on continuous domains) via the functional CANDECOMP/PARAFAC (CP) decomposition:

$$\mathcal{X}(x_1, \ldots, x_D) \;=\; \sum_{r=1}^{R} \prod_{d=1}^{D} f_r^{(d)}(x_d),$$

where each index $x_d$ may be either discrete or real-valued, the $f_r^{(d)}$ are latent mode-$d$ factor functions, and $R$ is the tensor rank to be inferred. Given noisy observations $y_n = \mathcal{X}(x_{n,1}, \ldots, x_{n,D}) + \varepsilon_n$ with $\varepsilon_n \sim \mathcal{N}(0, \tau^{-1})$, the likelihood is Gaussian with respect to the observed entries.
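To make the functional CP form concrete, the following is a minimal sketch (not the released implementation) of evaluating such a model at continuous indices and scoring observations under the Gaussian likelihood; the names `cp_value`, `factors`, and `noise_var` are illustrative.

```python
import numpy as np

def cp_value(x, factors):
    """Evaluate a functional CP model at a continuous index x = (x_1, ..., x_D).

    factors[d][r] is a callable giving the r-th mode-d factor function.
    Returns sum_r prod_d f_r^{(d)}(x_d).
    """
    D = len(factors)
    R = len(factors[0])
    return sum(np.prod([factors[d][r](x[d]) for d in range(D)]) for r in range(R))

def gaussian_log_likelihood(observations, factors, noise_var):
    """Gaussian log-likelihood over observed entries {(x_n, y_n)}."""
    resid = np.array([y - cp_value(x, factors) for x, y in observations])
    n = len(resid)
    return -0.5 * (n * np.log(2 * np.pi * noise_var) + np.sum(resid ** 2) / noise_var)
```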
Each mode-$d$ factor $f_r^{(d)}$ is governed by a multi-output Gaussian process prior:

$$f_r^{(d)}(\cdot) \;\sim\; \mathcal{GP}\!\bigl(0,\; \lambda_r^{-1}\, k_d(\cdot,\cdot)\bigr), \qquad r = 1, \ldots, R,$$

where $k_d$ is a specified kernel and $\lambda_r$ parameterizes an automatic relevance determination (ARD) shrinkage prior. This induces a matrix-normal prior distribution over the discretization $\mathbf{F}^{(d)} \in \mathbb{R}^{I_d \times R}$. The ARD hyperpriors

$$\lambda_r \;\sim\; \mathrm{Gamma}(a_0, b_0), \qquad r = 1, \ldots, R,$$

promote column-wise sparsity, which underpins the rank-revealing property of RR-FBTC (Li et al., 25 Dec 2025).
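A hedged sketch of this prior construction, using the induced per-column form in which component $r$'s discretized column has covariance $K_d / \lambda_r$ (a simplification of the full MOGP construction); the kernel choice, function names, and Gamma hyperparameter values are illustrative.

```python
import numpy as np

def rbf_kernel(s, t, lengthscale=1.0):
    """Squared-exponential kernel on a 1-D mode domain."""
    d = np.subtract.outer(s, t)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_mode_factors(locations, ard_precisions, lengthscale=1.0, jitter=1e-6, rng=None):
    """Draw a discretized mode-d factor matrix F (I_d x R) from the matrix-normal
    prior induced by a GP over 'locations' (rows) and ARD precisions lambda_r
    (columns): column r has covariance K / lambda_r."""
    rng = np.random.default_rng() if rng is None else rng
    K = rbf_kernel(locations, locations, lengthscale) + jitter * np.eye(len(locations))
    L = np.linalg.cholesky(K)
    cols = [L @ rng.standard_normal(len(locations)) / np.sqrt(lam) for lam in ard_precisions]
    return np.stack(cols, axis=1)

# Illustrative ARD hyperprior draw for the column precisions.
rng = np.random.default_rng(0)
lambdas = rng.gamma(shape=2.0, scale=1.0, size=8)
F = sample_mode_factors(np.linspace(0.0, 1.0, 50), lambdas, rng=rng)
```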
2. Rank-Revealing Mechanism
The RR-FBTC framework exploits variational Bayesian inference with shrinkage-inducing (ARD) priors to reveal the effective tensor rank during learning. Posterior updates for the hyperparameters $\{\lambda_r\}$ ensure that components of negligible explanatory power are pruned automatically:
- If $\lambda_r \to \infty$, the posterior variance of the corresponding mode-$d$ factors collapses, effectively removing component $r$ and decrementing the rank (Li et al., 25 Dec 2025, Zhao et al., 2015, Bazerque et al., 2013).
- The number of finite $\lambda_r$ determines the inferred rank, and iterative elimination is integrated into each variational inference step.
This form of automatic model selection subsumes fixed-rank methods and eliminates manual grid search or cross-validation over the rank parameter.
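A minimal sketch of the pruning step, assuming discretized factor matrices and using a large precision threshold as the divergence criterion; the threshold value and names are illustrative.

```python
import numpy as np

def prune_components(factors, lambdas, threshold=1e6):
    """Drop CP components whose ARD precision has diverged (lambda_r > threshold),
    i.e. whose posterior factor columns have collapsed toward zero.

    factors: list of D arrays, each I_d x R; lambdas: length-R array.
    Returns pruned factors, pruned precisions, and the new rank.
    """
    keep = np.flatnonzero(lambdas < threshold)
    factors = [F[:, keep] for F in factors]
    return factors, lambdas[keep], len(keep)
```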
3. Theoretical Guarantees: Expressiveness and Approximation
RR-FBTC, when equipped with universal kernel families (e.g., product Matérn or RBF kernels), achieves a universal approximation property on compact domains. Specifically, for any continuous target function $g$ on a compact domain $\Omega = \Omega_1 \times \cdots \times \Omega_D$ and any $\epsilon > 0$, there exists a rank $R$ and mean functions $\{\mu_r^{(d)}\}$ such that the CP-form expansion satisfies

$$\sup_{x \in \Omega}\; \Bigl|\, g(x) \;-\; \sum_{r=1}^{R} \prod_{d=1}^{D} \mu_r^{(d)}(x_d) \Bigr| \;<\; \epsilon.$$
This ensures expressive capacity for both continuous and discretized multi-dimensional signals, and establishes a probabilistic functional generalization of classic low-rank tensor models (Li et al., 25 Dec 2025).
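As a loose numerical illustration of this approximation property (using a plain SVD of a discretized bivariate function rather than the paper's kernelized construction), the sup-norm error of a rank-$R$ sum-of-products approximant drops quickly with $R$:

```python
import numpy as np

# Approximate a smooth bivariate function g(x, y) by a rank-R sum of products of
# one-dimensional functions, obtained here via SVD of its discretization.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 200)
G = np.sin(2 * np.pi * np.add.outer(x, y)) * np.exp(-np.subtract.outer(x, y) ** 2)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
for R in (1, 2, 4, 8):
    G_R = (U[:, :R] * s[:R]) @ Vt[:R]          # rank-R CP-form approximant
    print(f"rank {R}: sup-norm error {np.max(np.abs(G - G_R)):.2e}")
```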
4. Variational Inference and Algorithmic Framework
RR-FBTC adopts mean-field variational Bayesian inference for the joint posterior over factor functions, ARD hyperparameters, and noise precision. The updates admit closed-form expressions:
- Gaussian variational posteriors $q(\mathbf{f}_r^{(d)})$ for each factor function, with conjugate (Gamma) updates for the ARD precisions $\lambda_r$ and the noise precision $\tau$.
The key update equations for variational means and covariances involve mode-wise kernel inverses and products over non-pruned components. Pruning is performed at each iteration for components whose ARD precision $\lambda_r$ exceeds a large threshold (Li et al., 25 Dec 2025, Zhao et al., 2015). The per-iteration cost is dominated by the mode-wise kernel matrix inversions, which scale cubically in the mode dimensions.
The RR-FBTC optimization admits efficient implementations, "embarrassingly" parallel mode updates, and optional conjugate-gradient linear solvers for large-scale settings.
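The alternating structure of these updates can be sketched as follows. This is a simplified MAP-style analogue (point estimates instead of full posterior covariances, a fully observed tensor, and a fixed noise precision $\tau$) meant only to mirror the kernel-regularized factor solves, the Gamma-style ARD updates, and the per-iteration pruning; all names and hyperparameter values are illustrative, not the paper's.

```python
import numpy as np

def rbf_gram(n, lengthscale=0.3, jitter=1e-6):
    """RBF Gram matrix over n evenly spaced locations on [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-0.5 * (np.subtract.outer(t, t) / lengthscale) ** 2)
    return K + jitter * np.eye(n)

def khatri_rao(mats):
    """Column-wise Kronecker product; the first matrix's row index varies slowest."""
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, M.shape[1])
    return out

def map_vb_sketch(Y, R=8, n_iters=30, tau=10.0, prune_tol=1e6):
    """Alternating MAP-style updates mirroring the closed-form VB structure:
    kernel-regularized factor solves, Gamma-style ARD updates, and pruning.
    Assumes a fully observed tensor and fixed tau for brevity."""
    shape, D = Y.shape, Y.ndim
    rng = np.random.default_rng(0)
    factors = [0.1 * rng.standard_normal((I, R)) for I in shape]
    K_inv = [np.linalg.inv(rbf_gram(I)) for I in shape]
    lam = np.ones(R)
    a0 = b0 = 1e-6                                    # vague Gamma hyperprior

    for _ in range(n_iters):
        R_cur = factors[0].shape[1]
        for d in range(D):
            Z = khatri_rao([factors[k] for k in range(D) if k != d])
            Y_d = np.moveaxis(Y, d, 0).reshape(shape[d], -1)
            # Solve tau * F (Z^T Z) + K^{-1} F Lambda = tau * Y_(d) Z via vectorization.
            A = tau * np.kron(Z.T @ Z, np.eye(shape[d])) + np.kron(np.diag(lam), K_inv[d])
            b = tau * (Y_d @ Z).reshape(-1, order='F')
            factors[d] = np.linalg.solve(A, b).reshape(shape[d], R_cur, order='F')
        # ARD update: components explaining little signal acquire large precision.
        quad = sum(np.einsum('ir,ij,jr->r', F, Kinv, F)
                   for F, Kinv in zip(factors, K_inv))
        lam = (a0 + 0.5 * sum(shape)) / (b0 + 0.5 * quad)
        # Prune components whose precision has diverged (rank revealing).
        keep = lam < prune_tol
        factors = [F[:, keep] for F in factors]
        lam = lam[keep]

    recon = khatri_rao(factors).sum(axis=1).reshape(shape)
    return factors, lam, recon
```

In the full variational treatment, the same mode-wise solves additionally propagate posterior covariances, the factor updates handle arbitrary observation masks, and a Gamma update of the noise precision replaces the fixed $\tau$ used in this sketch.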
5. Empirical Results and Benchmarking
Evaluations on synthetic and real-world data, including MRI, climate, oceanography, and high-resolution image inpainting, demonstrate robust rank identification and state-of-the-art recovery accuracy:
- Synthetic continuous tensors: low RRSE even at 90% missing entries, with rapid convergence to the true rank (Li et al., 25 Dec 2025, Zhao et al., 2015).
- Image inpainting: PSNR and SSIM exceed fixed-rank and non-Bayesian competitors over a wide range of missing ratios and noise conditions.
- MRI inpainting: low NMSE at 50% missing entries when cross-slice kernel learning is employed (Bazerque et al., 2013).
- Robustness: the rank estimate converges from its over-specified initialization to the true value within a few iterations.
- Ablation studies: Recovery accuracy and rank-selection stability remain high across kernel hyperparameters.
A summary comparison of core features:
| Method (reference) | Tensor Model | Rank Adaptation Mechanism | Type of Prior |
|---|---|---|---|
| (Li et al., 25 Dec 2025) | CP, functional | ARD shrinkage via $\lambda_r$ | MOGP (kernel) |
| (Zhao et al., 2015) | Tucker | Group-sparsity on factor columns | Student-t / Laplace |
| (Bazerque et al., 2013) | CP, PARAFAC | Frobenius/ARD penalty | Gaussian, kernel-enabled |
6. Connections to Related Bayesian Tensor Models
RR-FBTC generalizes Bayesian low-rank tensor models spanning canonical polyadic (CP), Tucker, and tensor ring decompositions:
- In Tucker models, group-sparsity priors on the factor-matrix columns enforce rank-revealing shrinkage of the multilinear ranks (Zhao et al., 2015).
- In tensor ring models, Student-t hierarchical priors on TR core tensors automatically drive redundant rank components to zero through precision variables (Long et al., 2020).
- In discrete PARAFAC formulations, Frobenius/ARD penalties on factor norms induce quasi-norm regularization for sparsity in component weights (Bazerque et al., 2013).
All these Bayesian models leverage variational inference schemes that yield posterior rank pruning, allow uncertainty quantification, and avoid tuning of rank or regularization hyperparameters.
7. Practical and Computational Considerations
Computational efficiency in RR-FBTC relies on one kernel matrix inversion per factor mode and per component retained after pruning. Scalability is achieved when the retained rank $R$ is substantially smaller than the ambient tensor dimensions. The framework supports non-Gaussian observation models (e.g., Poisson), non-Euclidean mode kernels (RKHS-structured priors), and empirical estimation of covariance matrices from training data (Bazerque et al., 2013).
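For larger mode dimensions, the dominant kernel solves can be performed iteratively rather than by explicit inversion, in line with the conjugate-gradient option noted above; a minimal sketch against an RBF Gram matrix (the size, lengthscale, and jitter are illustrative, and this is not tied to the released implementation).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 2000
t = np.linspace(0.0, 1.0, n)
K = np.exp(-0.5 * (np.subtract.outer(t, t) / 0.1) ** 2) + 1e-4 * np.eye(n)  # RBF Gram + jitter
rhs = np.random.default_rng(0).standard_normal(n)

# Matrix-free operator: conjugate gradients only needs matrix-vector products with K.
op = LinearOperator((n, n), matvec=lambda v: K @ v)
x, info = cg(op, rhs, maxiter=500)
print("CG exit code:", info, "| residual norm:", np.linalg.norm(K @ x - rhs))
```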
Open-source implementations, reproducibility scripts, data-generation procedures, and hyperparameter settings are detailed for various domains. Example code and datasets for the functional RR-FBTC implementation are available via https://github.com/OceanSTARLab/RR-FBTC (Li et al., 25 Dec 2025).
RR-FBTC establishes a principled, theoretically expressive, and computationally scalable approach for Bayesian tensor completion with automatic rank determination, providing a bridge between functional data analysis, tensor decompositions, and probabilistic machine learning (Li et al., 25 Dec 2025, Zhao et al., 2015, Bazerque et al., 2013).