
Transferable Representation Learning in Vision-and-Language Navigation (1908.03409v2)

Published 9 Aug 2019 in cs.CV, cs.CL, cs.LG, and cs.RO

Abstract: Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals. The overall task requires competence in several perception problems: successful agents combine spatio-temporal, vision, and language understanding to produce appropriate action sequences. Our approach adapts pre-trained vision and language representations to relevant in-domain tasks, making them more effective for VLN. Specifically, the representations are adapted to solve both a cross-modal sequence alignment and a sequence coherence task. In the sequence alignment task, the model determines whether an instruction corresponds to a sequence of visual frames. In the sequence coherence task, the model determines whether the perceptual sequences are predictive sequentially in the instruction-conditioned latent space. By transferring the domain-adapted representations, we improve competitive agents in R2R as measured by the success rate weighted by path length (SPL) metric.
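The cross-modal sequence alignment task described above can be sketched as a binary classifier over (instruction, frame-sequence) pairs. The sketch below is a toy illustration only, not the paper's architecture: the actual model uses learned neural encoders, whereas here pooling, the bilinear scorer, and all function names (`encode_instruction`, `encode_frames`, `alignment_score`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_instruction(token_embs):
    # Toy instruction encoder: mean-pool token embeddings into one vector.
    return token_embs.mean(axis=0)

def encode_frames(frame_feats):
    # Toy trajectory encoder: mean-pool per-frame visual features.
    return frame_feats.mean(axis=0)

def alignment_score(instr_vec, traj_vec, W):
    # Bilinear compatibility followed by a sigmoid: probability that the
    # instruction describes this sequence of visual frames.
    logit = instr_vec @ W @ traj_vec
    return 1.0 / (1.0 + np.exp(-logit))

d = 8
W = rng.normal(scale=0.1, size=(d, d))   # hypothetical learned weights
instr = rng.normal(size=(5, d))          # 5 instruction token embeddings
frames = rng.normal(size=(10, d))        # 10 visual frames along a path

p_match = alignment_score(encode_instruction(instr),
                          encode_frames(frames), W)
print(p_match)
```

Training such a classifier would pair each instruction with its true trajectory as a positive example and with frames from a different path as a negative, which is one plausible way to realize the alignment objective the abstract describes.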

Authors (7)
  1. Haoshuo Huang (5 papers)
  2. Vihan Jain (16 papers)
  3. Harsh Mehta (34 papers)
  4. Alexander Ku (15 papers)
  5. Gabriel Magalhaes (2 papers)
  6. Jason Baldridge (45 papers)
  7. Eugene Ie (26 papers)
Citations (83)