
Learning Structured Latent Factors from Dependent Data: A Generative Model Framework from Information-Theoretic Perspective (2007.10623v2)

Published 21 Jul 2020 in cs.LG and stat.ML

Abstract: Learning controllable and generalizable representation of multivariate data with desired structural properties remains a fundamental problem in machine learning. In this paper, we present a novel framework for learning generative models with various underlying structures in the latent space. We represent the inductive bias in the form of mask variables to model the dependency structure in the graphical model and extend the theory of multivariate information bottleneck to enforce it. Our model provides a principled approach to learn a set of semantically meaningful latent factors that reflect various types of desired structures like capturing correlation or encoding invariance, while also offering the flexibility to automatically estimate the dependency structure from data. We show that our framework unifies many existing generative models and can be applied to a variety of tasks including multi-modal data modeling, algorithmic fairness, and invariant risk minimization.
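
For context, the multivariate information bottleneck that the framework extends is the objective of Friedman et al. (2001). The formulation below is that standard version, using the notation of the original IB literature rather than this paper's own:

```latex
% Standard multivariate information bottleneck objective (Friedman et al., 2001).
% G_in specifies the dependencies to be compressed away; G_out specifies the
% dependencies the learned factors should preserve; beta trades the two off.
\min_{q}\; \mathcal{I}_{G_{\mathrm{in}}} - \beta\, \mathcal{I}_{G_{\mathrm{out}}},
\qquad
\mathcal{I}_{G} = \sum_{i} I\big(X_i;\, \mathrm{Pa}^{G}_{X_i}\big)
```

Here $\mathcal{I}_G$ is the multi-information of a Bayesian network $G$, i.e. the sum of mutual informations between each variable and its parents in $G$; the paper's mask variables can be read as switching individual parent edges in such graphs on or off. A minimal sketch of that mask idea follows, with all names, shapes, and values purely illustrative, not the paper's implementation:

```python
import numpy as np

# Hypothetical illustration: a binary mask matrix M encodes which latent
# factors each observed variable may depend on, i.e. the edge set of the
# latent-to-observed part of the graphical model.
n_obs, n_latent, dim = 3, 4, 8
M = np.array([[1, 1, 0, 0],   # x_0 depends on factors z_0, z_1
              [0, 1, 1, 0],   # x_1 depends on factors z_1, z_2
              [0, 0, 1, 1]])  # x_2 depends on factors z_2, z_3

z = np.random.randn(n_latent, dim)  # sampled latent factors

# The decoder for x_i sees only the factors selected by its mask row;
# treating M as a learnable variable corresponds to estimating the
# dependency structure from data, as the abstract describes.
decoder_inputs = [z[M[i].astype(bool)].ravel() for i in range(n_obs)]
```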

Authors (3)
  1. Ruixiang Zhang (69 papers)
  2. Masanori Koyama (29 papers)
  3. Katsuhiko Ishiguro (8 papers)
Citations (5)
