
VICE: Variational Interpretable Concept Embeddings (2205.00756v8)

Published 2 May 2022 in cs.LG, stat.AP, and stat.ML

Abstract: A central goal in the cognitive sciences is the development of numerical models for mental representations of object concepts. This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task. VICE uses variational inference to obtain sparse, non-negative representations of object concepts with uncertainty estimates for the embedding values. These estimates are used to automatically select the dimensions that best explain the data. We derive a PAC learning bound for VICE that can be used to estimate generalization performance or determine a sufficient sample size for experimental design. VICE rivals or outperforms its predecessor, SPoSE, at predicting human behavior in the triplet odd-one-out task. Furthermore, VICE's object representations are more reproducible and consistent across random initializations, highlighting the unique advantage of using VICE for deriving interpretable embeddings from human behavior.
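The abstract's triplet odd-one-out task can be made concrete with a short sketch. Assuming the standard softmax choice model over pairwise dot-product similarities used by SPoSE and VICE (with non-negative embeddings), the object outside the most similar pair is judged the odd one out; the object names and dimensions below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 objects embedded in 3 sparse, non-negative dimensions,
# mirroring VICE's non-negativity constraint on concept embeddings.
n_objects, n_dims = 5, 3
X = np.abs(rng.normal(size=(n_objects, n_dims)))

def odd_one_out_probs(X, i, j, k):
    """Softmax choice model over pairwise dot-product similarities.

    The odd one out is the object NOT in the most similar pair, so
    p(i is odd) is driven by the similarity of the remaining pair (j, k).
    Returns [p(i odd), p(j odd), p(k odd)].
    """
    s_ij = X[i] @ X[j]
    s_ik = X[i] @ X[k]
    s_jk = X[j] @ X[k]
    logits = np.array([s_jk, s_ik, s_ij])
    e = np.exp(logits - logits.max())  # stable softmax
    return e / e.sum()

p = odd_one_out_probs(X, 0, 1, 2)
print(p)  # three probabilities summing to 1
```

Fitting the embedding matrix `X` to human triplet choices by maximizing this likelihood (plus a variational sparsity-inducing prior) is, at a high level, what VICE does; the sketch above only illustrates the forward choice model.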

Authors (6)
  1. Lukas Muttenthaler (12 papers)
  2. Charles Y. Zheng (6 papers)
  3. Patrick McClure (11 papers)
  4. Robert A. Vandermeulen (23 papers)
  5. Martin N. Hebart (6 papers)
  6. Francisco Pereira (23 papers)
Citations (15)