RR-FBTC: Bayesian Tensor Completion

Updated 2 January 2026
  • The paper introduces a variational Bayesian method that integrates functional tensor modeling with ARD priors to automatically reveal the effective tensor rank.
  • It employs multi-output Gaussian process priors with closed-form variational updates, enabling probabilistic uncertainty quantification and robust missing data recovery.
  • Empirical studies on synthetic and real-world datasets, such as MRI and image inpainting, demonstrate state-of-the-art recovery accuracy and computational scalability.

Rank-Revealing Functional Bayesian Tensor Completion (RR-FBTC) is a variational Bayesian framework for low-rank tensor completion that enables rigorous automatic rank determination in both discrete and functional (continuous-indexed) tensor settings. RR-FBTC integrates functional tensor modeling with multi-output Gaussian process (MOGP) priors or hierarchical sparsity priors, providing a unified approach for rank-adaptive structure learning, probabilistic uncertainty quantification, and principled missing data recovery across discrete, continuous, and hybrid tensor domains (Li et al., 25 Dec 2025, Zhao et al., 2015, Bazerque et al., 2013).

1. Mathematical Model: Functional Tensor Completion

RR-FBTC models a $K$-mode tensor (potentially defined on continuous domains) via the functional CANDECOMP/PARAFAC (CP) decomposition:

$$x_{\mathbf{i}} = \sum_{r=1}^{R} \prod_{k=1}^{K} u^k_r(i_k),$$

where $i_k$ may be either discrete or real-valued, $u^k_r$ are latent mode-$k$ factor functions, and $R$ is the tensor rank to be inferred. Given noisy observations $\{(\mathbf{i}^n, y_n)\}_{n=1}^N$,

$$y_n = x_{\mathbf{i}^n} + w_n, \quad w_n \sim \mathcal{N}(0, \tau^{-1}),$$

the likelihood is Gaussian with respect to the observed entries.
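
For concreteness, the following minimal NumPy sketch evaluates a functional CP model at continuous multi-indices and generates noisy observations from it. The rank, factor functions, noise precision, and index distribution are illustrative placeholders, not settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rank-2, 3-mode functional CP model: each u_r^k is a smooth function
# of a continuous index (placeholder choices, not the paper's).
R, K = 2, 3
factor_fns = [
    [np.sin, np.cos],                          # mode 1: u_1^1, u_2^1
    [lambda t: np.exp(-t ** 2), np.tanh],      # mode 2
    [lambda t: t, lambda t: t ** 2],           # mode 3
]

def x_at(i):
    """Evaluate x_i = sum_r prod_k u_r^k(i_k) at a continuous multi-index i."""
    return sum(np.prod([factor_fns[k][r](i[k]) for k in range(K)]) for r in range(R))

# Noisy observations y_n = x_{i^n} + w_n with w_n ~ N(0, 1/tau).
tau = 100.0
indices = rng.uniform(-1.0, 1.0, size=(5, K))
y = np.array([x_at(i) for i in indices]) + rng.normal(0.0, tau ** -0.5, size=5)
print(np.round(y, 3))
```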

Each mode-$k$ factor $U^k$ is governed by a multi-output Gaussian process prior:

$$U^k(\cdot) \sim \mathcal{MGP}\bigl(0, \varsigma_k(\cdot,\cdot), \Gamma^{-1}\bigr),$$

where $\varsigma_k$ is a specified kernel and $\Gamma = \mathrm{diag}(\gamma_1, \ldots, \gamma_R)$ parameterizes an automatic relevance determination (ARD) shrinkage prior. This induces a matrix-normal prior distribution over the discretization $U^k \in \mathbb{R}^{N_k \times R}$. The ARD hyperpriors

$$p(\gamma_r) = \mathrm{Gamma}(\gamma_r \mid a_r, b_r)$$

promote column-wise sparsity, which underpins the rank-revealing property of RR-FBTC (Li et al., 25 Dec 2025).
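
To make the prior concrete, the sketch below draws a discretized mode factor $U^k \in \mathbb{R}^{N_k \times R}$ from the matrix-normal distribution implied by a GP prior with one ARD precision per column. The squared-exponential kernel, the grid, and the $\gamma_r$ values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(grid, lengthscale=0.2):
    """Squared-exponential Gram matrix over a 1-D grid (an assumed choice of varsigma_k)."""
    d = grid[:, None] - grid[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Matrix-normal draw: column r of U^k has covariance S_k / gamma_r, so large
# ARD precisions gamma_r shrink the corresponding component toward zero.
N_k, R = 50, 4
grid = np.linspace(0.0, 1.0, N_k)
S_k = rbf_kernel(grid) + 1e-8 * np.eye(N_k)
gamma = np.array([1.0, 1.0, 1e4, 1e4])        # illustrative ARD precisions

L = np.linalg.cholesky(S_k)
U_k = (L @ rng.standard_normal((N_k, R))) / np.sqrt(gamma)
print(np.round(np.linalg.norm(U_k, axis=0), 4))   # columns with huge gamma_r are near zero
```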

2. Rank-Revealing Mechanism

The RR-FBTC framework exploits variational Bayesian inference with shrinkage-inducing (ARD) priors to reveal the effective tensor rank during learning. Posterior updates for the $\gamma_r$ hyperparameters ensure that components of negligible explanatory power are pruned automatically:

  • If $\langle\gamma_r\rangle \to \infty$, the posterior variance of the corresponding mode-$k$ factors collapses, effectively removing component $r$ and decrementing the rank (Li et al., 25 Dec 2025, Zhao et al., 2015, Bazerque et al., 2013).
  • The number of finite $\gamma_r$ determines the inferred rank, and iterative elimination is integrated into each variational inference step.

This form of semi-automatic model selection subsumes the use of fixed-rank methods and eliminates manual hyperparameter grid search or cross-validation for the rank parameter.
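
A minimal sketch of this pruning step, assuming the variational updates have already produced posterior expectations $\langle\gamma_r\rangle$; the threshold and example values are placeholders, not the paper's settings.

```python
import numpy as np

def prune_components(factors, gamma_mean, eps_prune=1e6):
    """Drop CP components whose ARD precision has diverged (sketch of the rank-revealing step).

    factors    : list of K arrays, each N_k x R (current variational means of U^k)
    gamma_mean : length-R array of posterior expectations <gamma_r>
    eps_prune  : illustrative threshold; components with <gamma_r> above it are removed
    """
    keep = gamma_mean < eps_prune
    return [U[:, keep] for U in factors], gamma_mean[keep]

# Example: components 3 and 4 have effectively infinite precision and are pruned.
factors = [np.zeros((20, 4)), np.zeros((30, 4)), np.zeros((10, 4))]
gamma_mean = np.array([2.5, 4.0, 1e9, 3e8])
factors, gamma_mean = prune_components(factors, gamma_mean)
print(len(gamma_mean), "components retained")   # inferred rank = 2
```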

3. Theoretical Guarantees: Expressiveness and Approximation

RR-FBTC, when equipped with universal kernel families (e.g., product Matérn or RBF kernels), achieves a universal approximation property on compact domains. Specifically, for any continuous target function $g$ on a compact domain $\mathcal{Z} \subset \mathbb{R}^D$ and any $\epsilon > 0$, there exist a rank $R$ and mean functions $\bar{u}^k_r$ such that the CP-form expansion satisfies

$$\|f - g\|_\infty < \epsilon, \quad f(\mathbf{z}) = \sum_{r=1}^{R} \prod_{k=1}^{K} \bar{u}^k_r(z_k).$$

This ensures expressive capacity for both continuous and discretized multi-dimensional signals, and establishes a probabilistic functional generalization of classic low-rank tensor models (Li et al., 25 Dec 2025).
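
The following toy sketch (not the paper's construction) illustrates the flavor of this result: a non-separable continuous function on $[0,1]^2$ is approximated by a rank-$R$ sum of products of univariate functions, here obtained from a truncated SVD of its grid evaluation, and the sup-error shrinks as $R$ grows.

```python
import numpy as np

# Toy illustration of separable (CP-form) approximation of a continuous bivariate function.
n = 200
z = np.linspace(0.0, 1.0, n)
G = np.exp(-8.0 * (z[:, None] - z[None, :]) ** 2)      # target g(z1, z2), not separable

U, s, Vt = np.linalg.svd(G)
for R in (1, 2, 4, 8):
    F = (U[:, :R] * s[:R]) @ Vt[:R]                    # f(z1, z2) = sum_r u_r(z1) v_r(z2)
    print(R, float(np.max(np.abs(F - G))))             # sup-error decreases with R
```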

4. Variational Inference and Algorithmic Framework

RR-FBTC adopts mean-field variational Bayesian inference for the joint posterior over factor functions, ARD hyperparameters, and noise precision. The updates admit closed-form expressions:

  • $q(u^k_r) = \mathcal{N}(m^k_r, \Psi^k_r)$ for each factor function
  • $q(\gamma_r) = \mathrm{Gam}(\hat{a}_r, \hat{b}_r)$
  • $q(\tau) = \mathrm{Gam}(\hat{a}_0, \hat{b}_0)$

The key update equations for variational means and covariances involve mode-wise kernel inverses and products over non-pruned components. Pruning is performed at each iteration for components satisfying $\langle\gamma_r\rangle > \epsilon_\text{prune}$ (Li et al., 25 Dec 2025, Zhao et al., 2015). The closed-form updates cost $O(R \max_k N_k^3)$ per iteration, dominated by kernel matrix inversions.
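
Since the exact RR-FBTC update equations are not reproduced here, the sketch below instead runs the same style of closed-form mean-field updates on a deliberately simplified two-mode (matrix) analogue with identity kernels and a fixed, known noise precision. It is meant only to show the structure of the loop (factor updates, conjugate Gamma updates for $\gamma_r$, ARD-driven shrinkage), not the paper's algorithm; the rank criterion at the end is a relative one chosen for this toy example, whereas the text above describes an absolute threshold $\epsilon_\text{prune}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a rank-3 matrix observed with Gaussian noise, fitted with R = 10 components.
I, J, true_R, R = 40, 30, 3, 10
tau = 100.0                                   # noise precision, assumed known here
Y = rng.standard_normal((I, true_R)) @ rng.standard_normal((J, true_R)).T \
    + rng.normal(0.0, tau ** -0.5, (I, J))

a0 = b0 = 1e-6                                # near-flat Gamma hyperpriors on gamma_r
Mu, Mv = rng.standard_normal((I, R)), rng.standard_normal((J, R))
Su, Sv = np.eye(R), np.eye(R)                 # shared row covariances of q(U), q(V)
gamma = np.ones(R)

for _ in range(500):
    # q(U): rows i are N(m_i, Su) with Su = (tau * E[V^T V] + diag(gamma))^(-1)
    EVtV = Mv.T @ Mv + J * Sv
    Su = np.linalg.inv(tau * EVtV + np.diag(gamma))
    Mu = tau * Y @ Mv @ Su
    # q(V): symmetric closed-form update
    EUtU = Mu.T @ Mu + I * Su
    Sv = np.linalg.inv(tau * EUtU + np.diag(gamma))
    Mv = tau * Y.T @ Mu @ Sv
    # q(gamma_r): conjugate Gamma update; unused components get very large <gamma_r>
    EUtU = Mu.T @ Mu + I * Su
    EVtV = Mv.T @ Mv + J * Sv
    gamma = (a0 + 0.5 * (I + J)) / (b0 + 0.5 * (np.diag(EUtU) + np.diag(EVtV)))

print(np.round(np.sort(gamma), 1))            # active components keep small <gamma_r>
inferred_rank = int(np.sum(gamma < 100.0 * gamma.min()))
print("inferred rank:", inferred_rank)        # typically recovers true_R = 3
```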

The RR-FBTC optimization admits efficient implementations, "embarrassingly" parallel mode updates, and optional conjugate-gradient linear solvers for large-scale settings.
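
As an illustration of the last point, a kernel-plus-diagonal system of the kind that appears in such updates can be solved matrix-free with conjugate gradients instead of forming an explicit inverse. The kernel, lengthscale, diagonal term, and right-hand side below are placeholders.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)

# Placeholder kernel-plus-diagonal system A x = b with A = S + alpha * I.
N = 2000
grid = np.sort(rng.uniform(0.0, 1.0, N))
S = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.1) ** 2)
alpha = 1.0
b = rng.standard_normal(N)

# Matvec-only access: no explicit inverse is ever formed.
A = LinearOperator((N, N), matvec=lambda v: S @ v + alpha * v)
x, info = cg(A, b, maxiter=500)
print(info, float(np.linalg.norm(S @ x + alpha * x - b)))   # info == 0 on convergence
```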

5. Empirical Results and Benchmarking

Evaluations on synthetic and real-world data, including MRI, climate, oceanography, and high-resolution image inpainting, demonstrate robust rank identification and state-of-the-art recovery accuracy:

  • Synthetic continuous tensors: RRSE $< 0.2$ with 90% missing entries; rapid convergence to the true rank (Li et al., 25 Dec 2025, Zhao et al., 2015).
  • Image inpainting: PSNR and SSIM exceed fixed-rank and non-Bayesian competitors over a wide range of missing ratios and noise conditions.
  • MRI inpainting: achieved $-10.5$ dB NMSE at 50% missing entries with cross-slice kernel learning (Bazerque et al., 2013).
  • Robustness: the rank converges from the initialization $R_\text{init} = \max_k N_k$ to the true value within a few iterations.
  • Ablation studies: Recovery accuracy and rank-selection stability remain high across kernel hyperparameters.

A summary comparison of core features:

| Method | Tensor Model | Rank Adaptation Mechanism | Type of Prior |
|---|---|---|---|
| (Li et al., 25 Dec 2025) | CP, functional | ARD shrinkage via $\gamma_r$ | MOGP (kernel) |
| (Zhao et al., 2015) | Tucker | Group-sparsity on $\lambda^n_r$ | Student-t / Laplace |
| (Bazerque et al., 2013) | CP, PARAFAC | Frobenius/ARD penalty | Gaussian, kernel-enabled |

6. Relations to Other Bayesian Tensor Decompositions

RR-FBTC generalizes Bayesian low-rank tensor models across canonical polyadic (CP), Tucker, and tensor ring decompositions:

  • In Tucker models, group-sparsity priors on $\lambda^{(n)}_r$ enforce rank-revealing shrinkage of the multilinear ranks (Zhao et al., 2015).
  • In tensor ring models, Student-t hierarchical priors on the TR core tensors automatically drive redundant rank components to zero through precision variables $\lambda^{(n)}_r$ (Long et al., 2020).
  • In discrete PARAFAC formulations, Frobenius/ARD penalties on the factor norms induce $\ell_{2/3}$ quasi-norm regularization for sparsity in the component weights (Bazerque et al., 2013); see the sketch below.

All these Bayesian models leverage variational inference schemes that yield posterior rank pruning, allow uncertainty quantification, and avoid tuning of rank or regularization hyperparameters.
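
The Frobenius-to-quasi-norm connection noted for the PARAFAC case can be sketched with a standard argument (paraphrased here, not quoted from Bazerque et al., 2013). Writing each 3-way CP component with unit-norm factors and weight $\lambda_r = \prod_{k=1}^{3}\|u^k_r\|_2$, the arithmetic-geometric mean inequality gives

$$\frac{1}{2}\sum_{k=1}^{3}\|u^k_r\|_2^2 \;\geq\; \frac{3}{2}\Bigl(\prod_{k=1}^{3}\|u^k_r\|_2^2\Bigr)^{1/3} \;=\; \frac{3}{2}\,|\lambda_r|^{2/3},$$

with equality when the three factor norms are balanced. Minimizing the summed Frobenius penalties over all factorizations of a fixed tensor therefore acts as the sparsity-promoting penalty $\tfrac{3}{2}\sum_r |\lambda_r|^{2/3}$ on the component weights.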

7. Practical and Computational Considerations

Computational efficiency in RR-FBTC relies on kernel matrix inversion per factor and per component retained after pruning. Scalability is achieved when $R$ is substantially smaller than the ambient tensor dimensions. The framework supports non-Gaussian observation models (e.g., Poisson), non-Euclidean mode kernels (RKHS-structured priors), and empirical estimation of covariance matrices from training data (Bazerque et al., 2013).

Open-source implementations, reproducibility scripts, data-generation procedures, and hyperparameter settings are detailed for various domains. Example code and datasets for the functional RR-FBTC implementation are available via https://github.com/OceanSTARLab/RR-FBTC (Li et al., 25 Dec 2025).


RR-FBTC establishes a principled, theoretically expressive, and computationally scalable approach for Bayesian tensor completion with automatic rank determination, providing a bridge between functional data analysis, tensor decompositions, and probabilistic machine learning (Li et al., 25 Dec 2025, Zhao et al., 2015, Bazerque et al., 2013).
