Understanding Masked Autoencoders via Hierarchical Latent Variable Models (2306.04898v1)

Published 8 Jun 2023 in cs.LG and cs.CV

Abstract: Masked autoencoder (MAE), a simple and effective self-supervised learning framework based on the reconstruction of masked image regions, has recently achieved prominent success in a variety of vision tasks. Despite the emergence of intriguing empirical observations on MAE, a theoretically principled understanding is still lacking. In this work, we formally characterize and justify existing empirical insights and provide theoretical guarantees for MAE. We formulate the underlying data-generating process as a hierarchical latent variable model and show that, under reasonable assumptions, MAE provably identifies a set of latent variables in the hierarchical model, explaining why MAE can extract high-level information from pixels. Further, we show how key hyperparameters in MAE (the masking ratio and the patch size) determine which true latent variables are recovered, and therefore the level of semantic information in the representation. Specifically, extremely large or small masking ratios inevitably lead to low-level representations. Our theory offers coherent explanations of existing empirical observations and provides insights into potential empirical improvements and fundamental limitations of the masking-reconstruction paradigm. We conduct extensive experiments to validate our theoretical insights.
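For context, the masking step the abstract refers to can be sketched as follows. This is a minimal illustrative implementation of MAE-style random patch masking (not code from the paper); the function name and defaults are assumptions, with the standard MAE settings of 16x16 patches and a 0.75 masking ratio used as an example.

```python
import numpy as np

def mask_patches(image, patch_size=16, mask_ratio=0.75, rng=None):
    """Split an image into non-overlapping patches and randomly mask a
    fraction of them, as in MAE-style masked autoencoding.

    Returns the flattened patches plus the indices of visible and
    masked patches. (Illustrative sketch, not the paper's code.)
    """
    rng = np.random.default_rng(rng)
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # Rearrange (H, W, C) into (num_patches, patch_size*patch_size*C).
    patches = (image
               .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, patch_size * patch_size * c))
    num_patches = patches.shape[0]
    num_masked = int(round(mask_ratio * num_patches))
    # A random permutation decides which patches the encoder sees.
    perm = rng.permutation(num_patches)
    masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]
    return patches, visible_idx, masked_idx

# A 224x224 RGB image with 16x16 patches yields 196 patches;
# a 0.75 masking ratio leaves 49 of them visible to the encoder.
img = np.zeros((224, 224, 3), dtype=np.float32)
patches, visible, masked = mask_patches(img, patch_size=16, mask_ratio=0.75, rng=0)
```

The paper's analysis concerns exactly these two knobs, `patch_size` and `mask_ratio`: together they control how much shared information remains between the visible and masked patches, which in turn determines the level of the latent hierarchy the representation can recover.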

Authors (7)
  1. Lingjing Kong (13 papers)
  2. Martin Q. Ma (9 papers)
  3. Guangyi Chen (45 papers)
  4. Eric P. Xing (192 papers)
  5. Yuejie Chi (109 papers)
  6. Louis-Philippe Morency (123 papers)
  7. Kun Zhang (353 papers)
Citations (19)
