
VISITRON: Visual Semantics-Aligned Interactively Trained Object-Navigator (2105.11589v2)

Published 25 May 2021 in cs.CV, cs.AI, cs.CL, cs.LG, and cs.RO

Abstract: Interactive robots navigating photo-realistic environments need to be trained to effectively leverage and handle the dynamic nature of dialogue in addition to the challenges underlying vision-and-language navigation (VLN). In this paper, we present VISITRON, a multi-modal Transformer-based navigator better suited to the interactive regime inherent to Cooperative Vision-and-Dialog Navigation (CVDN). VISITRON is trained to: i) identify and associate object-level concepts and semantics between the environment and dialogue history, and ii) decide when to interact vs. navigate via imitation learning with a binary classification head. We perform extensive pre-training and fine-tuning ablations with VISITRON to gain empirical insights and improve performance on CVDN. VISITRON's ability to identify when to interact leads to a natural generalization of the game-play mode introduced by Roman et al. (arXiv:2005.00728) for enabling the use of such models in different environments. VISITRON is competitive with models on the static CVDN leaderboard and attains state-of-the-art performance on the Success weighted by Path Length (SPL) metric.
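The SPL metric on which VISITRON reports state-of-the-art results is the standard Success weighted by Path Length measure from Anderson et al. (2018). A minimal sketch of how it is computed, assuming an illustrative per-episode record format (the field names below are assumptions, not VISITRON's actual data format):

```python
def spl(episodes):
    """Success weighted by Path Length (Anderson et al., 2018).

    episodes: list of dicts with illustrative keys:
      'success'       -> 1 if the agent stopped at the goal, else 0
      'shortest_path' -> geodesic distance from start to goal
      'agent_path'    -> length of the path the agent actually took
    """
    total = 0.0
    for ep in episodes:
        l = ep["shortest_path"]
        p = ep["agent_path"]
        # Success is discounted by how much longer the agent's path
        # was than the shortest path; max() guards against p < l.
        total += ep["success"] * l / max(p, l)
    return total / len(episodes)
```

For example, a successful episode whose path is 12.5 units against a 10-unit shortest path scores 10 / 12.5 = 0.8, while any failed episode scores 0 regardless of path length.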

Authors (7)
  1. Ayush Shrivastava (8 papers)
  2. Karthik Gopalakrishnan (34 papers)
  3. Yang Liu (2253 papers)
  4. Robinson Piramuthu (36 papers)
  5. Devi Parikh (129 papers)
  6. Dilek Hakkani-Tür (164 papers)
  7. Gokhan Tür (2 papers)
Citations (14)
