Distributional Convergence of the Empirical Laplacians with Integral Kernels on Domains with Boundaries (2503.05633v1)
Abstract: Motivated by the problem of understanding theoretical bounds for the performance of the Belkin--Niyogi Laplacian eigencoordinate approach to dimension reduction in machine learning problems, we consider the convergence of random graph Laplacian operators to a Laplacian-type operator on a manifold. For $\{X_j\}$ i.i.d.\ random variables taking values in $\mathbb{R}^d$ and $K$ a kernel with suitable integrability we define random graph Laplacians \begin{equation*} D_{\epsilon,n}f(p)=\frac{1}{n\epsilon^{d+2}}\sum_{j=1}^nK\left(\frac{p-X_j}{\epsilon}\right)(f(X_j)-f(p)) \end{equation*} and study their convergence as $\epsilon=\epsilon_n\to0$ and $n\to\infty$ to a second-order elliptic operator of the form \begin{align*} \Delta_K f(p) &= \sum_{i,j=1}^d\frac{\partial f}{\partial x_i}(p)\frac{\partial g}{\partial x_j}(p)\int_{\mathbb{R}^d}K(-t)t_it_j\,d\lambda(t)\\ &\quad +\frac{g(p)}{2}\sum_{i,j=1}^d\frac{\partial^2 f}{\partial x_i\partial x_j}(p)\int_{\mathbb{R}^d}K(-t)t_it_j\,d\lambda(t), \end{align*} where $g$ denotes the density of the $X_j$. Our results provide conditions that guarantee that $D_{\epsilon_n,n}f(p)-\Delta_K f(p)$ converges to zero in probability as $n\to\infty$ and can be rescaled by $\sqrt{n\epsilon_n^{d+2}}$ to satisfy a central limit theorem. They generalize the work of Gin\'e--Koltchinskii~\cite{gine2006empirical} and Belkin--Niyogi~\cite{belkin2008towards} to allow manifolds with boundary and a wider choice of kernels $K$, and to prove convergence under weaker smoothness assumptions and a correspondingly more precise choice of conditions on the asymptotics of $\epsilon_n$ as $n\to\infty$.
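To make the estimator concrete, here is a minimal numerical sketch of $D_{\epsilon,n}f(p)$ in Python. It assumes a Gaussian kernel, for which $\int K(-t)t_it_j\,d\lambda(t)=\delta_{ij}$, and samples drawn uniformly on $[-1,1]^2$, so that $g\equiv 1/4$ and the $\nabla g$ term of $\Delta_K$ vanishes at interior points; the bandwidth rate `eps = n**(-1/(d+6))` is an illustrative choice, not one of the paper's conditions on $\epsilon_n$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

def K(u):
    # Gaussian kernel on R^d: integrates to 1, identity second-moment matrix
    return np.exp(-0.5 * np.sum(u**2, axis=-1)) / (2 * np.pi) ** (d / 2)

def f(x):
    # a smooth test function with Euclidean Laplacian identically 2
    return x[..., 0] ** 2 + x[..., 0] * x[..., 1]

def D_eps_n(p, X, eps):
    # empirical graph Laplacian D_{eps,n} f(p) from the abstract
    n = len(X)
    w = K((p - X) / eps)
    return np.sum(w * (f(X) - f(p))) / (n * eps ** (d + 2))

n = 2_000_000
eps = n ** (-1 / (d + 6))                  # illustrative bandwidth rate (assumption)
X = rng.uniform(-1.0, 1.0, size=(n, d))    # density g = 1/4 on [-1,1]^2
p = np.zeros(d)                            # interior point: grad-g term is zero

# here Delta_K f(p) = (g(p)/2) * (Euclidean Laplacian of f) = (1/8) * 2 = 0.25
print(D_eps_n(p, X, eps), "vs", 0.25)
```

For this choice the printed value should be close to $0.25$, with random fluctuations of order $(n\epsilon_n^{d+2})^{-1/2}$, consistent with the rescaling in the central limit theorem described above.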