Boosting a near-correct sampler to improve epsilon dependence
Develop a generic boosting procedure for generative modeling that, given a sampling oracle whose output distribution is already sufficiently close (e.g., in total variation distance) to a target distribution, refines the oracle to achieve arbitrarily small accuracy epsilon in total variation, analogous to boosting in supervised learning. Such a procedure would make it possible to improve the current exponential dependence on 1/epsilon in diffusion-model-based learning of Gaussian mixtures to a logarithmic dependence on 1/epsilon, matching the rates achieved by prior work.
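To make the desired contract concrete, the following is a hypothetical Python sketch of the interface such a boosting procedure would expose. The names `boost_sampler`, `Sampler`, `oracle_tv_bound`, and `target_eps` are illustrative assumptions, not part of any known construction; whether a generic procedure meeting this contract exists is precisely the open question.

```python
# Hypothetical sketch of the desired boosting interface -- illustrative only.
# The names (boost_sampler, oracle_tv_bound, target_eps) are assumptions made
# for exposition; no generic procedure realizing this contract is known.
from typing import Callable

import numpy as np

# A sampling oracle: given n, returns n i.i.d. draws from its output distribution.
Sampler = Callable[[int], np.ndarray]


def boost_sampler(oracle: Sampler, oracle_tv_bound: float, target_eps: float) -> Sampler:
    """Desired contract: if the oracle's output distribution is within
    `oracle_tv_bound` of the target in total variation, return a refined
    sampler whose output is within `target_eps` of the target, using only
    sample access to the oracle. This stub only pins down the interface;
    constructing such a procedure is the open problem.
    """
    raise NotImplementedError("open problem: generic boosting of samplers")
```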
References
Open question: $\epsilon$ dependence and boosting? One shortcoming of our result is the exponential dependence on $1/\epsilon$ instead of $\log(1/\epsilon)$ as in previous works. This raises an interesting fundamental question: given a sampling oracle for a distribution $\widehat{\mathcal{M}}$ which is sufficiently close to a target distribution $\mathcal{M}$, can we refine the accuracy of the oracle, analogous to boosting in supervised learning? If so, this would give a generic way to improve our $\epsilon$ dependence to match the rate achieved by prior work.