VLGrammar: Grounded Grammar Induction of Vision and Language (2103.12975v1)

Published 24 Mar 2021 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Cognitive grammar suggests that the acquisition of language grammar is grounded within visual structures. While grammar is an essential representation of natural language, it also exists ubiquitously in vision to represent the hierarchical part-whole structure. In this work, we study grounded grammar induction of vision and language in a joint learning framework. Specifically, we present VLGrammar, a method that uses compound probabilistic context-free grammars (compound PCFGs) to induce the language grammar and the image grammar simultaneously. We propose a novel contrastive learning framework to guide the joint learning of both modules. To provide a benchmark for the grounded grammar induction task, we collect a large-scale dataset, PartIt, which contains human-written sentences that describe part-level semantics for 3D objects. Experiments on the PartIt dataset show that VLGrammar outperforms all baselines in image grammar induction and language grammar induction. The learned VLGrammar naturally benefits related downstream tasks. Specifically, it improves the image unsupervised clustering accuracy by 30%, and performs well in image retrieval and text retrieval. Notably, the induced grammar shows superior generalizability by easily generalizing to unseen categories.
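
The abstract describes a contrastive objective that ties the language-grammar and image-grammar modules together during joint training. As a rough illustration only, the Python sketch below shows a generic symmetric InfoNCE-style alignment loss over paired text/image representations; the function name, embedding dimensions, and temperature are hypothetical assumptions for this sketch, not the paper's actual loss, which is defined over the induced parse structures.

    # Hypothetical sketch: a generic contrastive alignment loss between
    # pooled text and image representations. Not the paper's implementation.
    import torch
    import torch.nn.functional as F

    def contrastive_alignment_loss(text_emb: torch.Tensor,
                                   image_emb: torch.Tensor,
                                   temperature: float = 0.1) -> torch.Tensor:
        """Symmetric InfoNCE loss over a batch of paired (text, image) embeddings.

        text_emb, image_emb: (batch, dim) pooled representations, e.g.
        derived from the induced language and image parses (assumption).
        """
        # Cosine similarities between every text/image pair in the batch.
        text_emb = F.normalize(text_emb, dim=-1)
        image_emb = F.normalize(image_emb, dim=-1)
        logits = text_emb @ image_emb.t() / temperature  # (batch, batch)
        # Matched pairs sit on the diagonal of the similarity matrix.
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_t2i = F.cross_entropy(logits, targets)
        loss_i2t = F.cross_entropy(logits.t(), targets)
        return (loss_t2i + loss_i2t) / 2

    # Example usage with random embeddings:
    # loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))

The symmetric form penalizes mismatches in both retrieval directions, which is consistent with the abstract's report that the learned model performs well in both image retrieval and text retrieval.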

Authors (4)
  1. Yining Hong (23 papers)
  2. Qing Li (430 papers)
  3. Song-Chun Zhu (216 papers)
  4. Siyuan Huang (123 papers)
Citations (24)
