Distributional Autoencoders Know the Score (2502.11583v2)

Published 17 Feb 2025 in stat.ML and cs.LG

Abstract: This work presents novel and desirable properties of a recently introduced class of autoencoders, the Distributional Principal Autoencoder (DPA), which combines distributionally correct reconstruction with principal-components-like interpretability of the encodings. First, we show formally that the level sets of the encoder orient themselves exactly according to the score of the data distribution. This both explains the method's often remarkable performance in disentangling the factors of variation of the data and opens up the possibility of recovering the data distribution given access to samples only. In settings where the score itself has physical meaning, such as when the data obeys the Boltzmann distribution, we demonstrate that the method can recover scientifically important quantities such as the minimum free energy path. Second, we prove that if the data lies on a manifold that can be approximated by the encoder, the components of the optimal encoder beyond the dimension of the manifold carry no additional information about the data distribution. This suggests new ways of determining the number of relevant dimensions of the data. The results thus demonstrate that the DPA elegantly combines two often disparate goals of unsupervised learning: learning the data distribution and learning the intrinsic data dimensionality.
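The claim that the encoder's level sets orient themselves with respect to the score suggests a simple numerical diagnostic: compare the gradient of an encoder component (the normal direction of its level set) with the score of a density known in closed form. The sketch below is a minimal, hypothetical illustration of that check, assuming a toy linear encoder and a Gaussian density; the names (`score`, `encoder_first_component`) are illustrative only, and the DPA training objective itself is not implemented here.

```python
# Hypothetical diagnostic sketch: how well does an encoder component's gradient
# align with the score of a known density? This does NOT implement the DPA;
# the encoder here is a fixed toy projection used only to illustrate the check.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: zero-mean anisotropic Gaussian N(0, diag(4, 1)).
cov = np.diag([4.0, 1.0])
cov_inv = np.linalg.inv(cov)
x = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=1000)

def score(x):
    """Score of N(0, cov): grad_x log p(x) = -cov^{-1} x (known in closed form)."""
    return -x @ cov_inv

def encoder_first_component(x):
    """Toy 1-D 'encoder': projection onto the leading principal direction.
    Stands in for a first encoding component; its gradient (the level-set
    normal) is simply the projection vector, identical at every point."""
    w = np.array([1.0, 0.0])  # leading eigenvector of cov
    return x @ w, np.tile(w, (len(x), 1))

_, grads = encoder_first_component(x)
s = score(x)

# Cosine similarity between the level-set normal (encoder gradient) and the score.
cos = np.abs(np.sum(grads * s, axis=1)) / (
    np.linalg.norm(grads, axis=1) * np.linalg.norm(s, axis=1) + 1e-12
)
print(f"mean |cos(angle)| between encoder gradient and score: {cos.mean():.3f}")
```

For a fixed projection like this one the alignment is only partial; under the paper's result, the same diagnostic applied to an optimally trained DPA encoder would be the quantity of interest.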
