Explore the Potential Performance of Vision-and-Language Navigation Model: a Snapshot Ensemble Method (2111.14267v1)

Published 28 Nov 2021 in cs.CV, cs.AI, cs.LG, and cs.MM

Abstract: Vision-and-Language Navigation (VLN) is a challenging task in the field of artificial intelligence. Although massive progress has been made in this task over the past few years attributed to breakthroughs in deep vision and LLMs, it remains tough to build VLN models that can generalize as well as humans. In this paper, we provide a new perspective to improve VLN models. Based on our discovery that snapshots of the same VLN model behave significantly differently even when their success rates are relatively the same, we propose a snapshot-based ensemble solution that leverages predictions among multiple snapshots. Constructed on the snapshots of the existing state-of-the-art (SOTA) model $\circlearrowright$BERT and our past-action-aware modification, our proposed ensemble achieves the new SOTA performance in the R2R dataset challenge in Navigation Error (NE) and Success weighted by Path Length (SPL).
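The abstract describes an ensemble that combines the predictions of several snapshots of the same VLN agent. The fusion details are not given in the abstract, so the following is a minimal sketch under stated assumptions: each snapshot exposes a hypothetical `step` method returning action logits over the current navigable candidates, and the ensemble fuses them by simple averaging.

```python
# Minimal sketch of snapshot ensembling for a VLN agent.
# Assumptions (not taken from the paper): each snapshot has a
# hypothetical `step(observation, state)` method that returns action
# logits over navigable candidates plus an updated recurrent state,
# and the ensemble fuses snapshots by averaging their logits.
import torch


def ensemble_action(snapshots, observation, states):
    """Pick the next navigation action by averaging snapshot predictions."""
    logits_sum = None
    new_states = []
    for model, state in zip(snapshots, states):
        with torch.no_grad():
            logits, new_state = model.step(observation, state)  # hypothetical API
        logits_sum = logits if logits_sum is None else logits_sum + logits
        new_states.append(new_state)
    avg_logits = logits_sum / len(snapshots)
    return avg_logits.argmax(dim=-1), new_states
```

Averaging logits is only one plausible fusion rule; the paper's ensemble may weight or select snapshot predictions differently.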

Authors (3)
  1. Wenda Qin (2 papers)
  2. Teruhisa Misu (27 papers)
  3. Derry Wijaya (31 papers)
Citations (4)