
Target-Driven Structured Transformer Planner for Vision-Language Navigation (2207.11201v1)

Published 19 Jul 2022 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Vision-language navigation is the task of directing an embodied agent to navigate in 3D scenes with natural language instructions. For the agent, inferring the long-term navigation target from visual-linguistic clues is crucial for reliable path planning, yet this has rarely been studied in the literature. In this article, we propose a Target-Driven Structured Transformer Planner (TD-STP) for long-horizon goal-guided and room layout-aware navigation. Specifically, we devise an Imaginary Scene Tokenization mechanism for explicit estimation of the long-term target (even when it is located in unexplored environments). In addition, we design a Structured Transformer Planner which elegantly incorporates the explored room layout into a neural attention architecture for structured and global planning. Experimental results demonstrate that our TD-STP substantially improves the success rate of the previous best methods by 2% and 5% on the test sets of the R2R and REVERIE benchmarks, respectively. Our code is available at https://github.com/YushengZhao/TD-STP .
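
To make the two ideas in the abstract concrete, the sketch below shows one plausible way to (a) append learnable "imaginary scene" tokens to the tokens of explored viewpoints so the model can estimate a long-term target, and (b) restrict attention among explored viewpoints with a mask derived from the explored room-layout graph. This is a minimal illustrative sketch, not the authors' implementation; all module names, dimensions, and heads (e.g. StructuredPlannerSketch, d_model, n_imaginary, the 3-D target head) are assumptions for illustration only. See the paper and the linked repository for the actual TD-STP architecture.

```python
# Illustrative sketch only (not the authors' code): imaginary scene tokens +
# layout-masked attention in a single transformer layer.
import torch
import torch.nn as nn


class StructuredPlannerSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_imaginary=8):
        super().__init__()
        # Learnable "imaginary scene" tokens, meant to absorb information
        # about the (possibly unexplored) long-term target.
        self.imaginary_tokens = nn.Parameter(torch.randn(n_imaginary, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.target_head = nn.Linear(d_model, 3)  # e.g. regress a 3-D goal position (assumed)
        self.score_head = nn.Linear(d_model, 1)   # score each explored node as the next step

    def forward(self, node_feats, adjacency):
        """
        node_feats: (B, N, d) features of explored viewpoints
        adjacency:  (B, N, N) 1 where two viewpoints are connected in the layout graph
        """
        B, N, _ = node_feats.shape
        device = node_feats.device
        imag = self.imaginary_tokens.unsqueeze(0).expand(B, -1, -1)  # (B, M, d)
        tokens = torch.cat([node_feats, imag], dim=1)                # (B, N+M, d)

        # Structured attention mask: explored nodes attend along layout edges
        # (plus self-loops) and to the imaginary tokens; imaginary tokens
        # attend everywhere. True entries are blocked.
        total = tokens.size(1)
        mask = torch.zeros(B, total, total, dtype=torch.bool, device=device)
        eye = torch.eye(N, dtype=torch.bool, device=device).unsqueeze(0)
        mask[:, :N, :N] = ~(adjacency.bool() | eye)

        # nn.MultiheadAttention expects a (B * n_heads, L, L) mask when 3-D.
        attn_mask = mask.repeat_interleave(self.attn.num_heads, dim=0)

        h, _ = self.attn(tokens, tokens, tokens, attn_mask=attn_mask)
        tokens = self.norm1(tokens + h)
        tokens = self.norm2(tokens + self.ffn(tokens))

        node_out, imag_out = tokens[:, :N], tokens[:, N:]
        target_pred = self.target_head(imag_out.mean(dim=1))  # estimated long-term goal
        next_scores = self.score_head(node_out).squeeze(-1)   # which explored node to move to
        return target_pred, next_scores
```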

Authors (8)
  1. Yusheng Zhao (37 papers)
  2. Jinyu Chen (18 papers)
  3. Chen Gao (136 papers)
  4. Wenguan Wang (103 papers)
  5. Lirong Yang (6 papers)
  6. Haibing Ren (8 papers)
  7. Huaxia Xia (8 papers)
  8. Si Liu (130 papers)
Citations (49)