Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill (2309.10309v2)

Published 19 Sep 2023 in cs.RO

Abstract: Zero-shot object navigation is a challenging task for home-assistance robots. This task emphasizes visual grounding, commonsense inference and locomotion abilities, where the first two are inherent in foundation models. But for the locomotion part, most works still depend on map-based planning approaches. The gap between RGB space and map space makes it difficult to directly transfer the knowledge from foundation models to navigation tasks. In this work, we propose a Pixel-guided Navigation skill (PixNav), which bridges the gap between the foundation models and the embodied navigation task. It is straightforward for recent foundation models to indicate an object by pixels, and with pixels as the goal specification, our method becomes a versatile navigation policy towards all different kinds of objects. Besides, our PixNav is a pure RGB-based policy that can reduce the cost of home-assistance robots. Experiments demonstrate the robustness of the PixNav which achieves 80+% success rate in the local path-planning task. To perform long-horizon object navigation, we design an LLM-based planner to utilize the commonsense knowledge between objects and rooms to select the best waypoint. Evaluations across both photorealistic indoor simulators and real-world environments validate the effectiveness of our proposed navigation strategy. Code and video demos are available at https://github.com/wzcai99/Pixel-Navigator.


Summary

  • The paper presents PixNav, a novel pixel-guided navigation skill that replaces map-based planning with a simple, efficient RGB-only policy.
  • It integrates foundation models and large language models to transform visual data into actionable textual plans for effective zero-shot navigation.
  • Empirical results demonstrate competitive success rates and robust performance in long-horizon, real-world environments with cost-effective hardware.

Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill

The paper under review introduces a novel approach to zero-shot object navigation, a task of considerable importance in the development of home-assistance robots. The focus lies on bridging the gap between foundation models, known for their visual and language perception capabilities, and robot locomotion, which has traditionally relied on map-based planning methods. The proposed solution, termed Pixel-guided Navigation skill (PixNav), is a pure RGB-based navigation policy, in contrast to map-based systems that require depth sensing and can be cost-prohibitive.

Overview of Contributions

The core contributions of this paper are primarily centered around three areas:

  1. Pixel Navigation: The authors propose a pixel-guided navigation policy as a substitute for traditional path-planning methods in map-based navigation tasks. PixNav relies solely on RGB input, simplifying hardware requirements without sacrificing navigational efficacy.
  2. Integration with Foundation Models: The research leverages the strong zero-shot recognition capabilities of foundation models to enhance navigation. The proposed system aligns these models' robust visual perception with the pixel-based goal specification used by the navigation policy.
  3. Utilization of LLMs: A hierarchical policy is introduced in which an LLM serves as the planner, using commonsense priors about which objects and rooms co-occur to select the next waypoint. Visual observations are transformed into textual inputs, enabling this high-level decision-making; a minimal sketch of the resulting pipeline follows this list.
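
The sketch below shows how these three pieces could compose into one control loop: an LLM planner proposes a waypoint, a foundation model grounds it to a goal pixel, and the pixel-guided skill drives toward that pixel until replanning is needed. It is an illustrative reconstruction under stated assumptions, not the authors' code; the injected callables (plan_waypoint, ground_waypoint, pixnav_step, and the rest) are hypothetical stand-ins for the paper's components.

```python
from typing import Callable, Optional, Tuple

import numpy as np

Pixel = Tuple[int, int]
Image = np.ndarray


def hierarchical_navigate(
    goal_object: str,
    get_rgb: Callable[[], Image],                      # RGB-only sensing
    plan_waypoint: Callable[[str, Image], str],        # captioner + LLM planner -> waypoint description
    ground_waypoint: Callable[[str, Image], Pixel],    # foundation model -> goal pixel (row, col)
    pixnav_step: Callable[[Image, Pixel], Tuple[str, bool]],  # learned skill -> (action, reached)
    execute: Callable[[str], None],                    # send the chosen action to the robot
    object_visible: Callable[[str, Image], bool],      # open-vocabulary detection check
    max_steps: int = 500,
) -> bool:
    """Sketch of the hierarchical loop: plan a waypoint, ground it to a pixel,
    let the pixel-guided skill walk toward it, then replan until the goal object
    is actually seen. Hypothetical interfaces, not the authors' code."""
    goal_pixel: Optional[Pixel] = None
    for _ in range(max_steps):
        rgb = get_rgb()
        if goal_pixel is None:
            waypoint = plan_waypoint(goal_object, rgb)
            goal_pixel = ground_waypoint(waypoint, rgb)
        action, reached = pixnav_step(rgb, goal_pixel)
        execute(action)
        if reached:
            if object_visible(goal_object, get_rgb()):
                return True            # goal object found in view
            goal_pixel = None          # waypoint reached but goal not visible: replan
    return False
```

Passing the components in as callables keeps the loop agnostic to which detector, vision-language model, or LLM backs each stage, mirroring the paper's claim that the pixel goal acts as a versatile interface between foundation models and locomotion.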

Methodological Details

PixNav recasts object navigation as pixel-targeting: navigating to an object is redefined as reaching a designated pixel in the RGB image, which removes the need for depth perception. Training data acquisition also becomes notably more efficient, since diverse training trajectories can be generated by specifying different pixels as goals, rather than the single trajectory per episode that object-goal supervision provides. The geometric intuition behind a pixel goal is illustrated below.
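
PixNav learns the mapping from RGB frames and a goal pixel to actions end to end, but the reason a pixel is a usable goal can be shown with elementary pinhole-camera geometry: given a known horizontal field of view, the goal pixel's column already encodes a bearing relative to the camera's optical axis, with no depth required. The snippet below is only that geometric illustration, not the learned policy; the 79-degree field of view in the example is an assumed simulator setting.

```python
import math


def pixel_to_bearing(u: float, image_width: int, hfov_deg: float) -> float:
    """Bearing (radians, positive to the right) of pixel column `u` for a
    pinhole camera with horizontal field of view `hfov_deg`.

    Illustrative only: PixNav learns its policy end to end; this just shows
    that a goal pixel already encodes a direction without any depth input.
    """
    fx = (image_width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)  # focal length in pixels
    return math.atan2(u - image_width / 2.0, fx)


# With a 640-px-wide image and an assumed 79-degree HFOV, a goal pixel at
# column 480 lies roughly 22 degrees to the right of the camera axis.
print(round(math.degrees(pixel_to_bearing(480, 640, 79.0)), 1))
```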

In practical implementations, PixNav is coupled with a vision-language model, LLaMA-Adapter, to convert panoramic visual observations into detailed textual descriptions. These descriptions let an LLM craft the navigation plan: the planning framework summarizes and clusters the spatial environment into a structured format that guides efficient room-to-room navigation, along the lines of the hypothetical prompt sketched below.
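
The paper's exact prompt is not reproduced here; the sketch below only illustrates the general pattern implied by the description above, in which per-direction captions from a vision-language model are assembled into a structured prompt asking the LLM to pick the next waypoint. The prompt wording, heading labels, and example captions are all hypothetical.

```python
def build_planner_prompt(goal_object: str, direction_captions: dict[str, str]) -> str:
    """Assemble a waypoint-selection prompt from per-direction captions.

    `direction_captions` maps a heading label (e.g. "0 deg") to a caption
    produced by a vision-language model for that view. The layout is a
    hypothetical illustration, not the paper's exact prompt.
    """
    lines = [
        f"You are helping a robot find a {goal_object} in a house.",
        "The robot's panoramic view has been summarized per direction:",
    ]
    for heading, caption in direction_captions.items():
        lines.append(f"- {heading}: {caption}")
    lines.append(
        "Using commonsense about which rooms and objects co-occur, "
        f"reply with the single direction most likely to lead to the {goal_object}."
    )
    return "\n".join(lines)


# Example usage with made-up captions:
prompt = build_planner_prompt(
    "toilet",
    {
        "0 deg": "a hallway with a door ajar",
        "90 deg": "a kitchen counter and fridge",
        "180 deg": "a sofa facing a TV",
        "270 deg": "a tiled corridor with a sink visible",
    },
)
print(prompt)
```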

Evaluation and Findings

Empirical evaluations on the HM3D dataset demonstrate the effectiveness of the proposed method. PixNav's ability to generalize across varying RGB camera settings indicates robustness and potential applicability in varied real-world contexts. Compared to conventional zero-shot object navigation baselines, PixNav achieves competitive success rates and promising SPL (Success weighted by Path Length) scores.
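
For readers unfamiliar with the metric, SPL weights each successful episode by the ratio of the shortest-path length to the length of the path the agent actually took, so inefficient successes score below one. A small helper makes the standard definition concrete; the example numbers are made up.

```python
def spl(successes: list[bool], shortest_paths: list[float], agent_paths: list[float]) -> float:
    """Success weighted by Path Length, averaged over episodes:
    SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i),
    where S_i is the success indicator, l_i the shortest-path length, and
    p_i the length of the path the agent actually took.
    """
    terms = [
        float(s) * l / max(p, l)
        for s, l, p in zip(successes, shortest_paths, agent_paths)
    ]
    return sum(terms) / len(terms)


# Example: two episodes, one success with a 25% longer path than optimal, one failure.
print(spl([True, False], [5.0, 4.0], [6.25, 9.0]))  # -> 0.4
```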

In the context of long-horizon navigation, the integration of an LLM-based planner proves beneficial. Through structured prompting, the planner selects informative waypoints in complex environments, exploiting commonsense reasoning about room and object layouts for spatial exploration.

Implications and Future Directions

The implications of this research are multifaceted. Practically, PixNav offers an accessible and cost-effective navigation solution by eliminating the need for complex sensory inputs beyond RGB. Theoretically, this work opens avenues for further exploration of non-traditional sensory inputs in robotics, particularly the role of pixels as a target in navigation systems.

Future developments might focus on fine-tuning data-driven policies on large-scale, diverse datasets, potentially enhancing PixNav's long-horizon navigation capabilities. Additionally, extending the methodology to other modalities such as LiDAR or multispectral imaging could yield valuable insights for more complex navigational environments.

In summary, the paper presents a compelling argument for the viability of pixel-guided navigation in zero-shot object navigation tasks. It leverages the capabilities of foundation models and LLMs, presenting a significant step forward in the quest for efficient, versatile, and scalable navigation systems for home-assistance robots.
