
Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering (1902.07864v2)

Published 21 Feb 2019 in cs.LG, cs.AI, cs.CV, and stat.ML

Abstract: We propose a new class of probabilistic neural-symbolic models that have symbolic functional programs as a latent, stochastic variable. Instantiated in the context of visual question answering, our probabilistic formulation offers two key conceptual advantages over prior neural-symbolic models for VQA. Firstly, the programs generated by our model are more understandable while requiring fewer teaching examples. Secondly, we show that one can pose counterfactual scenarios to the model, to probe its beliefs about the programs that could lead to a specified answer given an image. Our results on the CLEVR and SHAPES datasets verify our hypotheses, showing that the model achieves better program (and answer) prediction accuracy even in the low-data regime, and allows one to probe the coherence and consistency of the reasoning performed.
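The abstract's framing of symbolic programs as a latent, stochastic variable can be read as a standard latent-variable factorization. As a minimal sketch (the notation below is illustrative, not taken from the paper), let $q$ be the question, $x$ the image, $a$ the answer, and $z$ a symbolic functional program drawn from a program space $\mathcal{Z}$:

$$p(a \mid x, q) \;=\; \sum_{z \in \mathcal{Z}} p_\theta(z \mid q)\; p_\phi(a \mid x, z)$$

Here $p_\theta(z \mid q)$ is a program-generation distribution conditioned on the question, and $p_\phi(a \mid x, z)$ executes the program against the image to produce an answer. Under this reading, counterfactual probing amounts to inspecting the posterior over programs given a specified answer and image.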

Authors (6)
  1. Ramakrishna Vedantam (19 papers)
  2. Karan Desai (9 papers)
  3. Stefan Lee (62 papers)
  4. Marcus Rohrbach (75 papers)
  5. Dhruv Batra (160 papers)
  6. Devi Parikh (129 papers)
Citations (81)