Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs (2407.07775v2)

Published 10 Jul 2024 in cs.RO and cs.AI

Abstract: An elusive goal in navigation research is to build an intelligent agent that can understand multimodal instructions, including natural language and images, and perform useful navigation. To achieve this, we study a widely useful category of navigation tasks we call Multimodal Instruction Navigation with demonstration Tours (MINT), in which the environment prior is provided through a previously recorded demonstration video. Recent advances in Vision-Language Models (VLMs) have shown a promising path toward achieving this goal, as they demonstrate capabilities in perceiving and reasoning about multimodal inputs. However, VLMs are typically trained to predict textual output, and how best to utilize them in navigation remains an open research question. To solve MINT, we present Mobility VLA, a hierarchical Vision-Language-Action (VLA) navigation policy that combines the environment understanding and common-sense reasoning power of long-context VLMs with a robust low-level navigation policy based on topological graphs. The high-level policy consists of a long-context VLM that takes the demonstration tour video and the multimodal user instruction as input to find the goal frame in the tour video. Next, a low-level policy uses the goal frame and an offline-constructed topological graph to generate robot actions at every timestep. We evaluated Mobility VLA in an 836 m² real-world environment and show that Mobility VLA has a high end-to-end success rate on previously unsolved multimodal instructions such as "Where should I return this?" while holding a plastic bin. A video demonstrating Mobility VLA can be found here: https://youtu.be/-Tof__Q8_5s

Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs

The paper "Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs" addresses the challenge of building intelligent agents capable of understanding multimodal user instructions, combining natural language and images, to perform navigation tasks. This category of tasks, referred to as Multimodal Instruction Navigation with Tours (MINT), leverages demonstration videos as environment priors to bypass the traditional exploration phase, thereby simplifying the navigation process.

The core contribution of the paper is a hierarchical Vision-Language-Action (VLA) navigation policy, named Mobility VLA, which integrates the environment understanding and reasoning capabilities of long-context Vision-Language Models (VLMs) with a robust low-level navigation policy based on topological graphs. This dual-layered approach pairs high-fidelity comprehension of complex multimodal instructions with precise navigation actions.
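
As a rough illustration of the high-level step, the sketch below frames goal finding as asking a long-context VLM to return the index of the tour frame that best satisfies the instruction. The `LongContextVLM` interface, the `MultimodalInstruction` type, the prompt wording, and the reply parsing are hypothetical placeholders rather than the paper's actual implementation; the paper only specifies that the VLM receives the tour video and the multimodal instruction and outputs a goal frame.

```python
import re
from dataclasses import dataclass
from typing import List, Optional, Protocol


class LongContextVLM(Protocol):
    """Hypothetical interface for a long-context VLM client (placeholder, not a real API)."""
    def generate(self, text_parts: List[str], images: List[bytes]) -> str: ...


@dataclass
class MultimodalInstruction:
    text: str                      # e.g. "Where should I return this?"
    image: Optional[bytes] = None  # e.g. a photo of the object the user is holding


def find_goal_frame(vlm: LongContextVLM,
                    tour_frames: List[bytes],
                    instruction: MultimodalInstruction) -> int:
    """Ask the VLM which tour frame best answers the user's instruction."""
    prompt = [
        "You are shown every frame of a demonstration tour of a building, "
        f"numbered 0 to {len(tour_frames) - 1}, followed by a user instruction.",
        f"Instruction: {instruction.text}",
        "Reply with only the index of the single frame the robot should navigate to.",
    ]
    images = list(tour_frames)
    if instruction.image is not None:
        images.append(instruction.image)   # multimodal instructions include a user image
    reply = vlm.generate(text_parts=prompt, images=images)
    match = re.search(r"\d+", reply)       # be tolerant of extra words in the reply
    if match is None:
        raise ValueError(f"VLM did not return a frame index: {reply!r}")
    return int(match.group())
```

The returned frame index is then handed to the low-level policy described under Key Components below.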

Key Components

  1. Demonstration Tour Video:
    • The demonstration tour provides a comprehensive prior of the environment. It is essential as it helps in creating the topological graph and aids the high-level policy in goal identification. The tour can be recorded via teleoperation or a standard smartphone, making it accessible for end-users.
  2. Topological Graph:
    • Constructed offline using COLMAP, a structure-from-motion pipeline, the topological graph encapsulates connections between frames captured in the tour. This graph mitigates the limitations of VLMs, which typically struggle with out-of-distribution robot action queries.
  3. Long-Context Vision-Language Model (VLM):
    • The high-level policy employs a long-context VLM to interpret the multimodal user instruction and identify the goal frame from the tour video. The model's extensive context window allows it to ingest the entire demonstration tour alongside the instruction, significantly improving the fidelity of environment understanding.
  4. Hierarchical Localization and Control:
    • At each timestep the system localizes the current camera observation to a vertex of the topological graph, and the low-level policy then produces a waypoint action toward the goal vertex along the graph, ensuring precise navigation (see the sketch after this list).
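
The following is a minimal sketch of how the graph-based components above could fit together: localize the live frame to its nearest graph vertex, compute a shortest path to the goal vertex, and emit a waypoint action toward the next vertex. The `match_score` retrieval function, the distance-based edge rule, and the (Δx, Δy, Δθ) waypoint format are illustrative assumptions; the paper builds the graph offline with COLMAP and does not prescribe this exact code.

```python
import math
from typing import Dict, List, Tuple

import networkx as nx

Pose = Tuple[float, float, float]  # (x, y, yaw) of a tour frame, e.g. estimated by COLMAP


def match_score(frame_a: bytes, frame_b: bytes) -> float:
    """Placeholder visual similarity (e.g. feature matching); an assumption, not the paper's method."""
    raise NotImplementedError


def localize(current_frame: bytes, tour_frames: List[bytes]) -> int:
    """Return the index of the tour frame most similar to the live camera image."""
    return max(range(len(tour_frames)), key=lambda i: match_score(current_frame, tour_frames[i]))


def build_graph(poses: Dict[int, Pose], radius: float = 2.0) -> nx.Graph:
    """Connect tour frames whose estimated poses lie within `radius` meters (illustrative rule)."""
    g = nx.Graph()
    g.add_nodes_from(poses)
    for i, (xi, yi, _) in poses.items():
        for j, (xj, yj, _) in poses.items():
            d = math.hypot(xi - xj, yi - yj)
            if i < j and d <= radius:
                g.add_edge(i, j, weight=d)
    return g


def waypoint_action(graph: nx.Graph, poses: Dict[int, Pose],
                    current_vertex: int, goal_vertex: int) -> Tuple[float, float, float]:
    """One step of the low-level policy: head toward the next vertex on the shortest path."""
    if current_vertex == goal_vertex:
        return 0.0, 0.0, 0.0
    path = nx.shortest_path(graph, current_vertex, goal_vertex, weight="weight")
    next_vertex = path[1]
    x, y, yaw = poses[current_vertex]
    gx, gy, _ = poses[next_vertex]
    dx, dy = gx - x, gy - y
    dtheta = math.atan2(dy, dx) - yaw  # turn to face the next vertex
    return dx, dy, dtheta
```

In deployment, `localize` would run on every incoming camera frame, and the resulting waypoint would be passed to the robot's local controller, mirroring the per-timestep loop described above.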

Experimental Results

The authors validate Mobility VLA in a real-world office environment of 836 square meters and compare its performance against baselines such as CLIP-based retrieval and text-only approaches. Key findings include:

  • High End-to-End Success Rate: The approach achieved high success rates (80% to 90% in most instruction categories) and demonstrated significant improvements over baseline methods. The gains over the baselines were largest on Reasoning-Required and Multimodal instructions, underscoring the effectiveness of integrating long-context VLMs and topological graphs.
  • Low-Level Policy Robustness: The low-level policy maintained a 100% success rate in reaching the identified goal, even when using demonstration tours recorded months earlier, indicating robustness against environmental changes.
  • Generalization and Ease of Deployment: Proof-of-concept experiments in a home-like environment using a smartphone for the tour collection revealed a 100% success rate with high SPL, showcasing the system's flexibility and user-friendliness.
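
For reference, SPL (Success weighted by Path Length) is a standard navigation efficiency metric; the sketch below uses its usual definition (per-episode success indicator times the shortest-path length divided by the longer of the shortest and actual path lengths, averaged over episodes), which is assumed here rather than quoted from the paper.

```python
def spl(successes, shortest_lengths, actual_lengths):
    """Success weighted by Path Length: (1/N) * sum(S_i * l_i / max(p_i, l_i))."""
    terms = [
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, actual_lengths)
    ]
    return sum(terms) / len(terms)


# Example: 3 successful episodes, one of which took a 25% longer path than optimal.
print(spl([1, 1, 1], [10.0, 8.0, 12.0], [10.0, 10.0, 12.0]))  # ≈ 0.933
```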

Implications and Future Directions

The Mobility VLA system presents notable practical and theoretical implications:

  • Practical Usability: By enabling multimodal instruction navigation, the system enhances the natural interaction between humans and robots. The ability to use smartphones to record tour videos for navigation setup significantly lowers the barrier to deployment.
  • Scalability and Adaptability: The hierarchical approach can be adapted to different robotic embodiments, as the primary requirement is only RGB camera observations.
  • Further Research: Enhancements could include integrating active exploration mechanisms to extend beyond pre-defined tours and optimizing VLM inference times for more fluid user interactions. Additionally, the potential to expand beyond navigation tasks to more complex multimodal commands presents intriguing future research avenues.

Conclusion

The paper introduces Mobility VLA, achieving a significant advance in solving MINT tasks through a fusion of long-context VLMs and topological-graph-based navigation. Its robust performance in real-world environments and its ease of use mark a substantial step forward in robot usability and human-robot interaction in complex, everyday scenarios.

Authors (22)
  1. Hao-Tien Lewis Chiang
  2. Zhuo Xu
  3. Zipeng Fu
  4. Mithun George Jacob
  5. Tingnan Zhang
  6. Tsang-Wei Edward Lee
  7. Wenhao Yu
  8. Connor Schenck
  9. David Rendleman
  10. Dhruv Shah
  11. Fei Xia
  12. Jasmine Hsu
  13. Jonathan Hoech
  14. Pete Florence
  15. Sean Kirmani
  16. Sumeet Singh
  17. Vikas Sindhwani
  18. Carolina Parada
  19. Chelsea Finn
  20. Peng Xu