Bias-Corrected Joint Spectral Embedding for Multilayer Networks with Invariant Subspace: Entrywise Eigenvector Perturbation and Inference (2406.07849v1)
Abstract: In this paper, we propose a novel bias-corrected joint spectral embedding algorithm to estimate the invariant subspace shared across multiple heterogeneous networks. The proposed algorithm recursively calibrates the diagonal bias of the sum of squared network adjacency matrices by leveraging the closed-form bias formula, and it iteratively updates the subspace estimator using the most recent estimated bias. Correspondingly, we establish a complete recipe for the entrywise subspace estimation theory of the proposed algorithm, including a sharp entrywise subspace perturbation bound and an entrywise eigenvector central limit theorem. Leveraging these results, we settle two inference problems for multilayer networks: exact community detection in multilayer stochastic block models and hypothesis testing of the equality of membership profiles in multilayer mixed membership models. Our proofs rely on delicate leave-one-out and leave-two-out analyses specifically tailored to block-wise symmetric random matrices, together with a martingale argument that is of fundamental interest for the entrywise eigenvector central limit theorem.
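To make the recursion concrete, the sketch below illustrates one plausible instantiation of the bias-corrected joint spectral embedding loop: it alternates between (i) eigendecomposing the diagonally debiased sum of squared adjacency matrices and (ii) re-estimating the diagonal bias from plug-in edge-probability estimates built from the current subspace. This is only an illustrative sketch, not the authors' implementation; in particular, the bias formula used here, $\sum_t \sum_j P_t(i,j)(1 - P_t(i,j))$ for the $i$-th diagonal entry under Bernoulli edges with zero self-loops, and the plug-in estimator `P_hat = U U' A_t U U'` are assumptions, and the function name `bias_corrected_joint_embedding` and parameters `d`, `n_iter` are hypothetical.

```python
# Minimal sketch of an iterative, diagonally debiased joint spectral embedding.
# Assumptions (not taken from the paper): undirected Bernoulli layers with zero
# diagonals, a shared rank-d invariant subspace, and the diagonal bias formula
# sum_t sum_j P_t[i, j] * (1 - P_t[i, j]) for entry (i, i) of sum_t A_t^2.
import numpy as np


def bias_corrected_joint_embedding(adjacencies, d, n_iter=10):
    """Estimate a common d-dimensional invariant subspace from layers A_1, ..., A_T."""
    n = adjacencies[0].shape[0]
    S = sum(A @ A for A in adjacencies)   # sum of squared adjacency matrices
    bias = np.zeros(n)                    # current estimate of the diagonal bias

    U = None
    for _ in range(n_iter):
        # Debias the diagonal with the most recent bias estimate, then re-embed.
        M = S - np.diag(bias)
        eigvals, eigvecs = np.linalg.eigh(M)
        top = np.argsort(np.abs(eigvals))[::-1][:d]
        U = eigvecs[:, top]               # top-d eigenvectors span the subspace estimate

        # Update the bias via plug-in edge-probability estimates P_t ~ U U' A_t U U'.
        bias = np.zeros(n)
        for A in adjacencies:
            P_hat = U @ (U.T @ A @ U) @ U.T
            P_hat = np.clip(P_hat, 0.0, 1.0)
            bias += np.sum(P_hat * (1.0 - P_hat), axis=1)
    return U


# Toy usage: three layers sharing the same two-block community structure.
rng = np.random.default_rng(0)
z = np.repeat([0, 1], 50)                            # community labels
B = np.array([[0.5, 0.1], [0.1, 0.4]])               # block connectivity matrix
layers = []
for _ in range(3):
    P = B[z][:, z]
    A = np.triu(rng.binomial(1, P), 1)
    layers.append((A + A.T).astype(float))           # symmetric, zero diagonal
U_hat = bias_corrected_joint_embedding(layers, d=2)  # shared subspace estimate
```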