Bias-Corrected Joint Spectral Embedding for Multilayer Networks with Invariant Subspace: Entrywise Eigenvector Perturbation and Inference (2406.07849v1)

Published 12 Jun 2024 in math.ST, stat.ML, and stat.TH

Abstract: In this paper, we propose to estimate the invariant subspace across heterogeneous multiple networks using a novel bias-corrected joint spectral embedding algorithm. The proposed algorithm recursively calibrates the diagonal bias of the sum of squared network adjacency matrices by leveraging the closed-form bias formula and iteratively updates the subspace estimator using the most recent estimated bias. Correspondingly, we establish a complete recipe for the entrywise subspace estimation theory for the proposed algorithm, including a sharp entrywise subspace perturbation bound and the entrywise eigenvector central limit theorem. Leveraging these results, we settle two multiple network inference problems: the exact community detection in multilayer stochastic block models and the hypothesis testing of the equality of membership profiles in multilayer mixed membership models. Our proof relies on delicate leave-one-out and leave-two-out analyses that are specifically tailored to block-wise symmetric random matrices and a martingale argument that is of fundamental interest for the entrywise eigenvector central limit theorem.
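The abstract describes a recursive procedure: form the sum of squared adjacency matrices, subtract an estimate of its diagonal bias, re-extract the leading subspace, and repeat with the refreshed bias estimate. The following is a minimal sketch of how such a recursive diagonal-bias calibration might look, not the paper's exact recipe: the plug-in estimate of each layer's edge-probability matrix (a rank-d projection of the adjacency matrix), the clipping to [0, 1], and the fixed iteration count are illustrative assumptions; only the standard Bernoulli-variance identity for the diagonal of a squared adjacency matrix is taken as given.

```python
import numpy as np

def bias_corrected_joint_embedding(A_list, d, n_iter=10):
    """Hypothetical sketch of a bias-corrected joint spectral embedding.

    A_list : list of (n, n) symmetric 0/1 adjacency matrices, one per layer
    d      : dimension of the invariant subspace to estimate
    n_iter : number of recursive bias-calibration steps (assumed fixed here)
    """
    n = A_list[0].shape[0]
    # Sum of squared adjacency matrices; its diagonal is biased because
    # (A_t^2)_{ii} accumulates the Bernoulli variances of row i.
    S = sum(A @ A for A in A_list)

    def top_d_eigvecs(M):
        # Leading d eigenvectors of a symmetric matrix.
        vals, vecs = np.linalg.eigh(M)
        return vecs[:, np.argsort(vals)[::-1][:d]]

    U = top_d_eigvecs(S)  # initial, uncorrected subspace estimate
    for _ in range(n_iter):
        bias = np.zeros(n)
        for A in A_list:
            # Plug-in estimate of the layer's edge-probability matrix from the
            # current subspace (an illustrative choice, not the paper's formula).
            P_hat = U @ (U.T @ A @ U) @ U.T
            P_hat = np.clip(P_hat, 0.0, 1.0)
            # Diagonal bias of A^2 under independent Bernoulli edges without
            # self-loops: E[(A^2)_{ii}] - (P^2)_{ii} = sum_j P_{ij} (1 - P_{ij}).
            bias += (P_hat * (1.0 - P_hat)).sum(axis=1)
        # Re-embed the bias-corrected matrix using the most recent bias estimate.
        U = top_d_eigvecs(S - np.diag(bias))
    return U
```

In this sketch the returned orthonormal matrix U plays the role of the joint subspace estimator; downstream tasks such as community detection in multilayer stochastic block models would then cluster its rows (e.g., with k-means), mirroring the inference applications described in the abstract.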

