
A Mean-Field Theory for Kernel Alignment with Random Features in Generative and Discriminative Models (1909.11820v3)

Published 25 Sep 2019 in cs.LG and stat.ML

Abstract: We propose a novel supervised learning method to optimize the kernel in maximum mean discrepancy generative adversarial networks (MMD GANs) and in kernel support vector machines (SVMs). Specifically, we characterize a distributionally robust optimization problem that computes a good distribution for the random feature model of Rahimi and Recht. Because this distributional optimization is infinite-dimensional, we consider a Monte Carlo sample average approximation (SAA) that yields a more tractable finite-dimensional optimization problem, which we then solve with a particle stochastic gradient descent (SGD) method. Based on a mean-field analysis, we prove that the empirical distribution of the interacting particle system at each SGD iteration follows the path of the gradient descent flow on the Wasserstein manifold. We also establish the non-asymptotic consistency of the finite-sample estimator. We evaluate our kernel learning method on the hypothesis testing problem via the kernel MMD statistic, and show that the learned kernel attains higher test power at larger threshold values than an untrained kernel. Moreover, our empirical evaluation on benchmark datasets shows the advantage of our kernel learning approach over alternative kernel learning methods.
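
To make the pipeline concrete, here is a minimal numpy sketch of the ingredients the abstract describes: random Fourier features in the style of Rahimi and Recht define a kernel through a set of frequency particles, the squared kernel MMD is estimated from those features, and plain gradient ascent on the particles stands in for the particle SGD. This is an illustrative reconstruction under simplifying assumptions, not the authors' implementation; the distributionally robust objective and SAA details are omitted, and the function names (`random_features`, `mmd2`, `mmd_grad`) are our own.

```python
import numpy as np

def random_features(X, W, b):
    """Random Fourier features of Rahimi and Recht: phi(x) = sqrt(2/m) cos(Wx + b)."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

def mmd2(X, Y, W, b):
    """Plug-in estimate of the squared kernel MMD under the random-feature kernel."""
    diff = random_features(X, W, b).mean(0) - random_features(Y, W, b).mean(0)
    return float(diff @ diff)

def mmd_grad(X, Y, W, b):
    """Gradient of the squared MMD with respect to the frequency particles W."""
    c = np.sqrt(2.0 / W.shape[0])
    diff = random_features(X, W, b).mean(0) - random_features(Y, W, b).mean(0)  # (m,)
    # d phi_j(x) / d W_j = -c * sin(W_j . x + b_j) * x
    dX = (-c * np.sin(X @ W.T + b))[:, :, None] * X[:, None, :]  # (n, m, d)
    dY = (-c * np.sin(Y @ W.T + b))[:, :, None] * Y[:, None, :]
    return 2.0 * diff[:, None] * (dX.mean(0) - dY.mean(0))       # (m, d)

# Toy two-sample problem: maximize the MMD statistic over the particles.
rng = np.random.default_rng(0)
n, d, m = 500, 2, 256
X = rng.normal(0.0, 1.0, size=(n, d))      # sample from P
Y = rng.normal(0.5, 1.0, size=(n, d))      # sample from Q (shifted mean)
W = rng.normal(size=(m, d))                # particles: initial frequency draws
b = rng.uniform(0.0, 2.0 * np.pi, size=m)  # random phases, held fixed
for _ in range(200):
    W += 0.5 * mmd_grad(X, Y, W, b)        # particle gradient ascent on MMD^2
print("trained MMD^2:", mmd2(X, Y, W, b))
```

In the paper's mean-field picture, each row of `W` is one particle and the update above is the finite-particle counterpart of the gradient flow on the Wasserstein manifold; raising the MMD statistic in this way is what drives the improved test power reported for the learned kernel.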
