Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models (2405.15143v3)

Published 24 May 2024 in cs.LG, cs.AI, and cs.CL

Abstract: Go-Explore is a powerful family of algorithms designed to solve hard-exploration problems built on the principle of archiving discovered states, and iteratively returning to and exploring from the most promising states. This approach has led to superhuman performance across a wide variety of challenging problems including Atari games and robotic control, but requires manually designing heuristics to guide exploration (i.e., determine which states to save and explore from, and what actions to consider next), which is time-consuming and infeasible in general. To resolve this, we propose Intelligent Go-Explore (IGE) which greatly extends the scope of the original Go-Explore by replacing these handcrafted heuristics with the intelligence and internalized human notions of interestingness captured by giant pretrained foundation models (FMs). This provides IGE with a human-like ability to instinctively identify how interesting or promising any new state is (e.g., discovering new objects, locations, or behaviors), even in complex environments where heuristics are hard to define. Moreover, IGE offers the exciting opportunity to recognize and capitalize on serendipitous discoveries: states encountered during exploration that are valuable in terms of exploration, yet where what makes them interesting was not anticipated by the human user. We evaluate our algorithm on a diverse range of language and vision-based tasks that require search and exploration. Across these tasks, IGE strongly exceeds classic reinforcement learning and graph search baselines, and also succeeds where prior state-of-the-art FM agents like Reflexion completely fail. Overall, Intelligent Go-Explore combines the tremendous strengths of FMs and the powerful Go-Explore algorithm, opening up a new frontier of research into creating more generally capable agents with impressive exploration capabilities.

Intelligent Go-Explore: Leveraging Giant Foundation Models for Enhanced Exploration Capabilities

The paper "Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models" introduces a novel method for improving the exploration capabilities of reinforcement learning (RL) agents. Building on the Go-Explore framework, this approach integrates the intelligence and human-like reasoning of large pretrained foundation models (FMs) to tackle hard-exploration problems more effectively. The research addresses significant shortcomings in classical Go-Explore by replacing manually designed heuristics with the adaptive and contextual understanding provided by FMs.

Overview of Go-Explore and Its Limitations

Go-Explore is a well-established algorithm family in deep RL that excels in solving complex exploration problems such as Atari games and robotic control. The core mechanism involves archiving novel states and iteratively returning to and exploring from the most promising ones. This method relies heavily on domain-specific heuristics to guide state selection, action choices, and the criteria for archiving new states.
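For intuition, the kind of handcrafted machinery Go-Explore depends on looks roughly like the sketch below: a cell representation that buckets visually similar observations together, and a count-based weight for choosing which cell to return to. This is an illustrative Python sketch loosely modeled on the Atari setup from the original Go-Explore work; the exact grid size, quantization levels, and weighting formula here are assumptions, not the authors' precise choices.

```python
import numpy as np

def cell_key(frame: np.ndarray, grid=(11, 8), levels=8) -> bytes:
    """Handcrafted cell representation (assumed parameters): downscale and
    quantize a grayscale frame so similar states collapse to one archive key."""
    h, w = grid
    # Crop to a multiple of the grid, then block-average down to h x w.
    cropped = frame[: frame.shape[0] // h * h, : frame.shape[1] // w * w]
    coarse = cropped.reshape(h, cropped.shape[0] // h,
                             w, cropped.shape[1] // w).mean(axis=(1, 3))
    # Quantize to a few intensity levels and serialize into a hashable key.
    return (coarse / 256 * levels).astype(np.uint8).tobytes()

def selection_weight(visit_count: int) -> float:
    """Count-based heuristic: cells visited less often look more promising."""
    return 1.0 / np.sqrt(visit_count + 1.0)
```

Every new domain needs its own versions of these functions, and tuning them is exactly the manual effort the paper argues is infeasible in general.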

However, the necessity of manually designing these heuristics poses substantial limitations, making the approach impractical or infeasible for more complex and less well-defined problems. Human players, in contrast, possess an intuitive sense of state interestingness and potential, which classical Go-Explore lacks.

Introduction to Intelligent Go-Explore (IGE)

Intelligent Go-Explore (IGE) extends the Go-Explore framework by incorporating the sophisticated reasoning capabilities of giant foundation models. These models, trained on vast internet-scale datasets, can understand and contextualize complex state information, which allows them to:

  1. Select Promising States: IGE uses the FM to choose the most promising states to return to from the archive, leveraging its internalized notions of interest and value.
  2. Select Actions: Instead of relying on random action sampling, IGE queries the FM to determine the best actions to explore from a given state.
  3. Archive New States: The FM also judges whether newly discovered states are sufficiently interesting to be archived, thereby recognizing and prioritizing serendipitous discoveries.

By employing FM intelligence in these stages, IGE automates the exploration process, reducing the need for domain-specific heuristics and enabling more effective exploration in previously challenging environments.
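The resulting loop can be pictured with the minimal sketch below. Everything here is illustrative rather than the authors' implementation: `query_fm` stands in for any chat-style FM call, and the environment's `restore`, `actions`, and `step` methods are hypothetical interface names.

```python
def query_fm(prompt: str) -> str:
    """Placeholder for a pretrained foundation-model call
    (e.g., a chat-completions API); swap in any FM client here."""
    raise NotImplementedError

def intelligent_go_explore(env, initial_state, num_steps: int):
    archive = [initial_state]
    for _ in range(num_steps):
        # 1. State selection: the FM picks the most promising archived state.
        idx = int(query_fm(
            f"Archived states:\n{archive}\n"
            "Reply with only the index of the most promising state to explore from."))
        env.restore(archive[idx])  # hypothetical: reset the env to a saved state

        # 2. Action selection: the FM replaces random action sampling.
        action = query_fm(
            f"Current state: {archive[idx]}\nAvailable actions: {env.actions()}\n"
            "Reply with only the most promising action to try next.")
        new_state, reward, done = env.step(action)

        # 3. Archive filtering: the FM judges whether the new state is
        #    interesting enough to keep, catching serendipitous discoveries.
        verdict = query_fm(
            f"Archive:\n{archive}\nCandidate state: {new_state}\n"
            "Is this state interestingly new? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            archive.append(new_state)
        if done and reward > 0:
            return new_state, archive  # success
    return None, archive
```

Because the paper's evaluation environments are text-based, states and actions can be serialized directly into the prompt, as done above.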

Empirical Evaluation

IGE's efficacy is demonstrated across a variety of text-based environments that require exploration and search:

  1. Game of 24: This mathematical reasoning task requires the agent to combine four given numbers with the basic arithmetic operations to produce 24 (a toy enumeration of this search space follows the list). IGE reached a 100% success rate 70.8% faster than the best graph search baseline, showcasing its ability to leverage the FM's intuitive problem-solving skills.
  2. BabyAI-Text: This environment involves a partially observable gridworld where agents follow language instructions. IGE outperformed state-of-the-art methods with significantly fewer online samples, excelling in particular on the more complex tasks involving sequential and temporal reasoning.
  3. TextWorld: In this text-based game environment requiring long-horizon exploration and commonsense reasoning, IGE successfully solved tasks with complex state transitions and partial observability. Notably, IGE was the only algorithm to succeed in finding optimal solutions in the Coin Collector domain.
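To make the Game of 24 search space concrete, the toy brute-force enumerator below solves the task by exhaustive search (a sketch of the task itself, not of IGE or the paper's baselines): it tries every ordered pair of remaining numbers under the four arithmetic operations, branching into nearly ten thousand expression orderings for four inputs.

```python
from itertools import permutations
from operator import add, sub, mul, truediv

OPS = [(add, "+"), (sub, "-"), (mul, "*"), (truediv, "/")]

def solve24(nums, exprs=None):
    """Return one expression using all of `nums` that evaluates to 24, or None."""
    exprs = exprs or [str(n) for n in nums]
    if len(nums) == 1:
        return exprs[0] if abs(nums[0] - 24) < 1e-6 else None
    for (i, a), (j, b) in permutations(list(enumerate(nums)), 2):
        rest = [n for k, n in enumerate(nums) if k not in (i, j)]
        rest_exprs = [e for k, e in enumerate(exprs) if k not in (i, j)]
        for op, sym in OPS:
            if sym == "/" and abs(b) < 1e-9:
                continue  # avoid division by zero
            found = solve24(rest + [op(a, b)],
                            rest_exprs + [f"({exprs[i]} {sym} {exprs[j]})"])
            if found:
                return found
    return None

print(solve24([4, 7, 8, 8]))  # prints one valid expression, e.g. ((7 - (8 / 8)) * 4)
```

An uninformed search wades through this space blindly; IGE instead asks the FM which partial results look promising, which is plausibly why it reaches solutions so much faster than blind graph search.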

Analysis and Implications

The analysis section of the paper substantiates the importance of FM intelligence in different stages of Go-Explore, demonstrating substantial performance improvements when FM-based decision-making is used. Furthermore, integrating FMs significantly reduces the archive size by filtering uninteresting states, thereby focusing computational resources more effectively.

Future Directions and Speculations

The research opens new avenues for advancing autonomous agents across diverse and complex domains. As foundation models continue to improve, the capabilities and performance of IGE are expected to scale correspondingly. Future work could explore multimodal environments and expand the scope of applications to scientific discovery and innovation.

Conclusion

Intelligent Go-Explore represents a significant step forward in enhancing exploration capabilities through the integration of foundation models. By reducing reliance on manually designed heuristics and leveraging the contextual understanding of FMs, IGE offers a robust, scalable, and efficient approach to solving hard-exploration problems in RL. This work lays a strong foundation for future research into more generally capable and intelligent autonomous agents.

Authors (3)
  1. Cong Lu
  2. Shengran Hu
  3. Jeff Clune