
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation (2211.16649v1)

Published 30 Nov 2022 in cs.CV, cs.AI, cs.CL, and cs.RO

Abstract: Household environments are visually diverse. Embodied agents performing Vision-and-Language Navigation (VLN) in the wild must be able to handle this diversity, while also following arbitrary language instructions. Recently, Vision-Language models like CLIP have shown great performance on the task of zero-shot object recognition. In this work, we ask if these models are also capable of zero-shot language grounding. In particular, we utilize CLIP to tackle the novel problem of zero-shot VLN using natural language referring expressions that describe target objects, in contrast to past work that used simple language templates describing object classes. We examine CLIP's capability to make sequential navigational decisions without any dataset-specific finetuning, and study how it influences the path that an agent takes. Our results on the coarse-grained instruction following task of REVERIE demonstrate the navigational capability of CLIP, surpassing the supervised baseline in terms of both success rate (SR) and success weighted by path length (SPL). More importantly, we quantitatively show that our CLIP-based zero-shot approach generalizes better, delivering consistent performance across environments compared to SOTA fully supervised approaches, as measured by Relative Change in Success (RCS).
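
The abstract's key mechanism is using an off-the-shelf CLIP model, with no dataset-specific finetuning, to ground a free-form referring expression against candidate views and decide where to go next. The sketch below illustrates that idea with the Hugging Face CLIP API; it is not the authors' implementation, and the checkpoint name, the discretized candidate views, and the greedy argmax decision rule are assumptions made for illustration.

```python
# Minimal sketch of zero-shot grounding for navigation with off-the-shelf CLIP.
# Not the authors' code: the checkpoint, view discretization, and greedy
# argmax decision rule are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(MODEL_ID).eval()
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def choose_direction(instruction: str, candidate_views: list[Image.Image]) -> int:
    """Return the index of the candidate view that CLIP matches most
    strongly to the referring expression (zero-shot, no finetuning)."""
    inputs = processor(text=[instruction], images=candidate_views,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_text has shape (1, num_views): similarity of the one
    # instruction to each candidate view.
    return int(out.logits_per_text.argmax(dim=-1).item())

# Hypothetical usage: a panorama split into four discrete headings.
# views = [Image.open(f"heading_{i}.png") for i in range(4)]
# step = choose_direction("Go to the blue bathroom with a round mirror", views)
```

Repeating this scoring at each navigation node yields the sequential decision process the abstract describes; the paper's full pipeline is more involved than this single greedy step.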

Authors (5)
  1. Vishnu Sashank Dorbala (10 papers)
  2. Gunnar Sigurdsson (5 papers)
  3. Robinson Piramuthu (36 papers)
  4. Jesse Thomason (65 papers)
  5. Gaurav S. Sukhatme (88 papers)
Citations (46)
