Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation (1906.00347v3)

Published 2 Jun 2019 in cs.CL

Abstract: Vision-and-Language Navigation (VLN) requires grounding instructions, such as "turn right and stop at the door", to routes in a visual environment. The actual grounding can connect language to the environment through multiple modalities, e.g. "stop at the door" might ground into visual objects, while "turn right" might rely only on the geometric structure of a route. We investigate where the natural language empirically grounds under two recent state-of-the-art VLN models. Surprisingly, we discover that visual features may actually hurt these models: models which only use route structure, ablating visual features, outperform their visual counterparts in unseen environments on the benchmark Room-to-Room dataset. To better use all the available modalities, we propose to decompose the grounding procedure into a set of expert models with access to different modalities (including object detections) and ensemble them at prediction time, improving the performance of state-of-the-art models on the VLN task.
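The abstract names the key mechanism, prediction-time ensembling of modality-specific experts, but does not spell out the combination rule. A minimal sketch, assuming a simple probability-level mixture over per-expert action distributions (all function and variable names below are hypothetical illustrations, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def ensemble_action_probs(expert_logits, weights=None):
    """Mix per-expert action distributions at prediction time.

    expert_logits: list of [num_actions] tensors, one per modality expert
        (e.g., a visual-appearance expert, a route-structure expert, and
        an object-detection expert).
    weights: optional per-expert mixture weights; uniform if None.
    """
    if weights is None:
        weights = [1.0 / len(expert_logits)] * len(expert_logits)
    # Convert each expert's logits into a distribution, then average them.
    return sum(w * F.softmax(logits, dim=-1)
               for w, logits in zip(weights, expert_logits))

# Example: three hypothetical experts scoring 4 candidate navigation actions.
visual = torch.tensor([1.2, 0.3, -0.5, 0.0])
structure = torch.tensor([0.4, 1.1, -0.2, 0.6])
objects = torch.tensor([0.9, 0.2, 0.1, -1.0])
mixed = ensemble_action_probs([visual, structure, objects])
next_action = int(mixed.argmax())  # take the highest-probability action
```

Averaging probabilities (rather than summing logits) caps each expert's influence at its mixture weight, so one overconfident modality cannot veto the others; other combination rules are equally consistent with the abstract alone.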

Authors (6)
  1. Ronghang Hu (26 papers)
  2. Daniel Fried (69 papers)
  3. Anna Rohrbach (53 papers)
  4. Dan Klein (99 papers)
  5. Trevor Darrell (324 papers)
  6. Kate Saenko (178 papers)
Citations (93)