Visual Concept-Metaconcept Learning (2002.01464v1)

Published 4 Feb 2020 in cs.CV, cs.AI, cs.CL, cs.LG, and stat.ML

Abstract: Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., the color). In this paper, we propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts. Knowing that red and green describe the same property of objects, we generalize to the fact that cube and sphere also describe the same property of objects, since they both categorize the shape of objects. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data. From just a few examples of purple cubes we can understand a new color, purple, which resembles the hue of the cubes rather than their shape. Evaluation on both synthetic and real-world datasets validates our claims.
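
The abstract describes a bidirectional setup: visual concepts are grounded embeddings, and metaconcept relations (such as "describe the same property") are predicted over pairs of those embeddings. Below is a minimal, hypothetical sketch of that idea in PyTorch; the class names (ConceptSpace, MetaconceptHead), the cosine-similarity grounding, and the MLP relation head are illustrative assumptions, not the authors' VCML implementation.

```python
import torch
import torch.nn as nn

class ConceptSpace(nn.Module):
    """One embedding per visual concept, e.g. "red", "green", "cube"."""
    def __init__(self, num_concepts: int, dim: int = 64):
        super().__init__()
        self.embeddings = nn.Embedding(num_concepts, dim)

    def ground(self, object_features: torch.Tensor, concept_id: torch.Tensor) -> torch.Tensor:
        # Score how well a concept applies to an object: cosine similarity
        # between the object's visual features and the concept embedding.
        emb = self.embeddings(concept_id)
        return torch.cosine_similarity(object_features, emb, dim=-1)

class MetaconceptHead(nn.Module):
    """Scores a metaconcept relation (e.g. same-property) for a concept pair."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([emb_a, emb_b], dim=-1)).squeeze(-1)

# Usage sketch: score whether concepts 0 ("red") and 1 ("green")
# describe the same property of objects.
concepts = ConceptSpace(num_concepts=10)
meta = MetaconceptHead()
red = concepts.embeddings(torch.tensor(0))
green = concepts.embeddings(torch.tensor(1))
same_property_logit = meta(red, green)
```

Training both modules jointly against question-answer supervision is what would let metaconcept answers shape the concept embeddings (helping with limited or biased visual data) and let the embeddings, in turn, generalize metaconcept relations to unseen concept pairs, as the abstract claims.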

Authors (5)
  1. Chi Han (30 papers)
  2. Jiayuan Mao (55 papers)
  3. Chuang Gan (195 papers)
  4. Joshua B. Tenenbaum (257 papers)
  5. Jiajun Wu (249 papers)
Citations (58)
