Dialog without Dialog Data: Learning Visual Dialog Agents from VQA Data (2007.12750v1)

Published 24 Jul 2020 in cs.CV, cs.AI, and cs.CL

Abstract: Can we develop visually grounded dialog agents that can efficiently adapt to new tasks without forgetting how to talk to people? Such agents could leverage a larger variety of existing data to generalize to new tasks, minimizing expensive data collection and annotation. In this work, we study a setting we call "Dialog without Dialog", which requires agents to develop visually grounded dialog models that can adapt to new tasks without language level supervision. By factorizing intention and language, our model minimizes linguistic drift after fine-tuning for new tasks. We present qualitative results, automated metrics, and human studies that all show our model can adapt to new tasks and maintain language quality. Baselines either fail to perform well at new tasks or experience language drift, becoming unintelligible to humans. Code has been made available at https://github.com/mcogswell/dialog_without_dialog
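The abstract's central mechanism, factorizing intention and language so that fine-tuning for a new task updates only the intent policy while the language module stays fixed, can be illustrated with a toy sketch. Everything below (the `IntentPolicy` class, the template-based speaker, the score-based update) is an illustrative assumption for exposition, not the paper's actual architecture:

```python
# Toy sketch of intent/language factorization (illustrative only):
# a task-tunable policy selects a discrete "intent", and a frozen
# speaker maps intents to fixed question templates. Because fine-tuning
# touches only the policy, the generated language cannot drift.

FROZEN_SPEAKER = {                      # pretrained language module (never updated)
    "ask_color": "What color is the object?",
    "ask_count": "How many objects are there?",
    "ask_location": "Where is the object?",
}

class IntentPolicy:
    """Task-specific module: scores each discrete intent."""
    def __init__(self):
        # intent scores: the only parameters fine-tuning may change
        self.scores = {z: 0.0 for z in FROZEN_SPEAKER}

    def act(self):
        # pick the highest-scoring intent, then decode with the frozen speaker
        z = max(self.scores, key=self.scores.get)
        return z, FROZEN_SPEAKER[z]

    def finetune(self, rewards):
        # crude stand-in for an RL update: add task reward to each intent's score
        for z, r in rewards.items():
            self.scores[z] += r

policy = IntentPolicy()
# suppose the new task rewards counting questions and penalizes color questions
policy.finetune({"ask_count": 1.0, "ask_color": -0.5})
intent, question = policy.act()
print(intent, "->", question)
```

After fine-tuning, the policy's *choice* of question changes, but every emitted question is still verbatim output of the frozen speaker, which is the sense in which the factorization prevents linguistic drift.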

Authors (6)
  1. Michael Cogswell (19 papers)
  2. Jiasen Lu (32 papers)
  3. Rishabh Jain (44 papers)
  4. Stefan Lee (62 papers)
  5. Devi Parikh (129 papers)
  6. Dhruv Batra (160 papers)
Citations (15)