Prediction-Constrained Training for Semi-Supervised Mixture and Topic Models (1707.07341v1)

Published 23 Jul 2017 in stat.ML, cs.AI, and cs.LG

Abstract: Supervisory signals have the potential to make low-dimensional data representations, like those learned by mixture and topic models, more interpretable and useful. We propose a framework for training latent variable models that explicitly balances two goals: recovery of faithful generative explanations of high-dimensional data, and accurate prediction of associated semantic labels. Existing approaches fail to achieve these goals due to an incomplete treatment of a fundamental asymmetry: the intended application is always predicting labels from data, not data from labels. Our prediction-constrained objective for training generative models coherently integrates loss-based supervisory signals while enabling effective semi-supervised learning from partially labeled data. We derive learning algorithms for semi-supervised mixture and topic models using stochastic gradient descent with automatic differentiation. We demonstrate improved prediction quality compared to several previous supervised topic models, achieving predictions competitive with high-dimensional logistic regression on text sentiment analysis and electronic health records tasks while simultaneously learning interpretable topics.
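To make the abstract's core idea concrete, below is a minimal sketch of a prediction-constrained objective for a toy semi-supervised Gaussian mixture, trained with stochastic gradients and automatic differentiation as the abstract describes. It uses PyTorch, and all names (`pc_weight`, the responsibility-weighted label predictor, the toy data) are illustrative assumptions rather than the paper's implementation: the generative negative log-likelihood over all data is penalized by a weighted prediction loss computed only on the labeled subset.

```python
# Sketch of prediction-constrained (PC) training for a toy semi-supervised
# Gaussian mixture, assuming PyTorch for autodiff. Hypothetical names and data;
# not the authors' code.
import torch

torch.manual_seed(0)

# Toy data: 200 unlabeled points and 20 labeled points with binary labels.
shift = torch.tensor([[2.0, 0.0]])
x_unlabeled = torch.randn(200, 2) + shift * torch.randint(0, 2, (200, 1)).float()
y_labeled = torch.arange(20).remainder(2).float()
x_labeled = torch.randn(20, 2) + shift * y_labeled.unsqueeze(1)

K = 2  # number of mixture components
mu = torch.randn(K, 2, requires_grad=True)         # component means
log_sigma = torch.zeros(K, 2, requires_grad=True)  # per-dimension log std devs
logit_pi = torch.zeros(K, requires_grad=True)      # mixture weight logits
eta = torch.zeros(K, requires_grad=True)           # per-component label logits

def log_marginal(x):
    """log p(x) under the diagonal-covariance Gaussian mixture."""
    log_pi = torch.log_softmax(logit_pi, dim=0)                   # (K,)
    normal = torch.distributions.Normal(mu, log_sigma.exp())      # batch (K, 2)
    log_comp = normal.log_prob(x.unsqueeze(1)).sum(-1)            # (N, K)
    return torch.logsumexp(log_pi + log_comp, dim=1)              # (N,)

def predict_logits(x):
    """Label logits: responsibility-weighted per-component logits,
    one reading of 'predict labels from data' (an assumption)."""
    log_pi = torch.log_softmax(logit_pi, dim=0)
    normal = torch.distributions.Normal(mu, log_sigma.exp())
    log_comp = normal.log_prob(x.unsqueeze(1)).sum(-1)
    resp = torch.softmax(log_pi + log_comp, dim=1)                # (N, K)
    return resp @ eta                                             # (N,)

pc_weight = 10.0  # multiplier trading off generative fit against prediction loss
opt = torch.optim.Adam([mu, log_sigma, logit_pi, eta], lr=0.05)

for step in range(500):
    opt.zero_grad()
    nll_gen = -log_marginal(torch.cat([x_unlabeled, x_labeled])).mean()
    pred_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        predict_logits(x_labeled), y_labeled)
    # Penalized form of the prediction-constrained objective: generative
    # likelihood on all data plus a weighted label loss on labeled data only.
    loss = nll_gen + pc_weight * pred_loss
    loss.backward()
    opt.step()
```

The asymmetry the abstract emphasizes shows up in the loss: only the labels-from-data prediction term is constrained, so unlabeled data still contributes through the generative likelihood, which is what enables semi-supervised learning from partially labeled corpora.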

Authors (7)
  1. Michael C. Hughes (39 papers)
  2. Leah Weiner (3 papers)
  3. Gabriel Hope (4 papers)
  4. Thomas H. McCoy Jr. (1 paper)
  5. Roy H. Perlis (4 papers)
  6. Erik B. Sudderth (18 papers)
  7. Finale Doshi-Velez (134 papers)
Citations (10)
