Instance-dependent Convergence Theory for Diffusion Models (2410.13738v2)
Abstract: Score-based diffusion models have demonstrated outstanding empirical performance in machine learning and artificial intelligence, particularly in generating high-quality new samples from complex probability distributions. Improving the theoretical understanding of diffusion models, with a particular focus on convergence analysis, has attracted significant attention. In this work, we develop a convergence rate that adapts to the smoothness of the target distribution, referred to as an instance-dependent bound. Specifically, we establish an iteration complexity of $\min\{d,\, d^{2/3}L^{1/3},\, d^{1/3}L\}\,\varepsilon^{-2/3}$ (up to logarithmic factors), where $d$ denotes the data dimension and $\varepsilon$ quantifies the output accuracy in terms of total variation (TV) distance. In addition, $L$ represents a relaxed Lipschitz constant which, in the case of Gaussian mixture models, scales only logarithmically with the number of components, the dimension, and the number of iterations, demonstrating the broad applicability of the bound.
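For intuition about the three-way minimum, here is a minimal numeric sketch (illustrative, not from the paper) that evaluates the stated bound and reports which term is active; the function name `iteration_complexity` and the example values of $d$, $L$, and $\varepsilon$ are assumptions, and absolute constants and logarithmic factors are ignored.

```python
import math

# Illustrative sketch (assumed helper, not from the paper): evaluate the
# abstract's bound min{d, d^(2/3) L^(1/3), d^(1/3) L} * eps^(-2/3),
# ignoring absolute constants and logarithmic factors.

def iteration_complexity(d: float, L: float, eps: float) -> tuple[float, str]:
    terms = {
        "d": d,
        "d^(2/3) L^(1/3)": d ** (2 / 3) * L ** (1 / 3),
        "d^(1/3) L": d ** (1 / 3) * L,
    }
    name, value = min(terms.items(), key=lambda kv: kv[1])
    return value * eps ** (-2 / 3), name

# Example regimes (d and eps are arbitrary illustrative values):
d, eps = 1e4, 1e-2
for L in (1.0, math.sqrt(d), d, 10 * d):  # L << sqrt(d), = sqrt(d), = d, > d
    bound, active = iteration_complexity(d, L, eps)
    print(f"L = {L:8.0f}: active term {active:16s} -> ~{bound:.2e} iterations")
```

Working out the crossovers from the formula itself: the $d^{1/3}L$ term is smallest when $L \le \sqrt{d}$, the $d^{2/3}L^{1/3}$ term when $\sqrt{d} \le L \le d$, and for $L \ge d$ the bound reduces to the purely dimension-dependent rate $d\,\varepsilon^{-2/3}$.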