Target-Driven Structured Transformer Planner for Vision-Language Navigation (2207.11201v1)
Abstract: Vision-language navigation is the task of directing an embodied agent to navigate in 3D scenes following natural language instructions. For the agent, inferring the long-term navigation target from visual-linguistic clues is crucial for reliable path planning, which, however, has rarely been studied in the literature. In this article, we propose a Target-Driven Structured Transformer Planner (TD-STP) for long-horizon goal-guided and room layout-aware navigation. Specifically, we devise an Imaginary Scene Tokenization mechanism for explicit estimation of the long-term target (even when it is located in unexplored environments). In addition, we design a Structured Transformer Planner that elegantly incorporates the explored room layout into a neural attention architecture for structured and global planning. Experimental results demonstrate that our TD-STP substantially improves the previous best methods' success rate by 2% and 5% on the test sets of the R2R and REVERIE benchmarks, respectively. Our code is available at https://github.com/YushengZhao/TD-STP .
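To make the two ideas named in the abstract concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: (1) learnable "imaginary" tokens appended to the explored-viewpoint tokens so the model can regress a long-term target even for unexplored regions, and (2) a structured attention bias derived from the explored navigation graph. All module names, dimensions, the 2D target head, and the distance-based bias scale are illustrative assumptions.

```python
# Hypothetical sketch of imaginary scene tokens + structured attention.
# Not the TD-STP implementation; see the official repo for the real model.
import torch
import torch.nn as nn

class StructuredPlannerSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_imaginary=4):
        super().__init__()
        # Learnable placeholder tokens standing in for unexplored regions.
        self.imaginary_tokens = nn.Parameter(torch.randn(n_imaginary, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Head regressing a 2D long-term target position (an assumption).
        self.target_head = nn.Linear(d_model, 2)
        self.n_heads = n_heads

    def forward(self, node_feats, graph_dist):
        # node_feats: (B, N, d) features of explored viewpoints.
        # graph_dist: (B, N, N) shortest-path hops on the explored layout.
        B, N, _ = node_feats.shape
        img = self.imaginary_tokens.unsqueeze(0).expand(B, -1, -1)
        x = torch.cat([node_feats, img], dim=1)  # (B, N + K, d)
        K = img.shape[1]

        # Structured bias: attenuate attention between graph-distant nodes;
        # imaginary tokens attend freely (bias 0). The -0.1 scale is arbitrary.
        bias = x.new_zeros(B, N + K, N + K)
        bias[:, :N, :N] = -0.1 * graph_dist
        mask = bias.repeat_interleave(self.n_heads, dim=0)  # (B*h, L, L)

        out, _ = self.attn(x, x, x, attn_mask=mask)
        out = self.norm(out + x)

        # Predict the long-term target from the first imaginary token.
        target_xy = self.target_head(out[:, N])
        node_scores = out[:, :N].mean(-1)  # toy per-node planning score
        return target_xy, node_scores

# Toy usage with random features and a random explored-graph distance matrix.
model = StructuredPlannerSketch()
feats = torch.randn(2, 6, 256)
dist = torch.randint(0, 5, (2, 6, 6)).float()
target_xy, scores = model(feats, dist)
print(target_xy.shape, scores.shape)  # torch.Size([2, 2]) torch.Size([2, 6])
```

The design choice sketched here, injecting layout structure as an additive attention mask rather than hard-pruning edges, lets the planner still attend globally while preferring nearby viewpoints; whether TD-STP uses this exact mechanism is not stated in the abstract.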
- Yusheng Zhao (37 papers)
- Jinyu Chen (18 papers)
- Chen Gao (136 papers)
- Wenguan Wang (103 papers)
- Lirong Yang (6 papers)
- Haibing Ren (8 papers)
- Huaxia Xia (8 papers)
- Si Liu (130 papers)