In-Memory Learning: A Declarative Learning Framework for Large Language Models (2403.02757v1)
Abstract: Whether agents can align with their environment without relying on human-labeled data is an intriguing research question. Drawing inspiration from the alignment process observed in intelligent organisms, in which declarative memory plays a pivotal role in summarizing past experiences, we propose a novel learning framework. Agents distill insights from past experiences, refining and updating existing notes to enhance their performance in the environment. Because this entire process takes place within the memory components and is implemented through natural language, we characterize this framework as In-Memory Learning. We also examine the key features of benchmarks designed to evaluate the self-improvement process. Through systematic experiments, we demonstrate the effectiveness of our framework and provide insights into this problem.
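The loop the abstract describes, distilling insights from experience and refining a natural-language note store, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the `llm` callable, and the append-only note update are all assumptions made for the example.

```python
def distill_insight(llm, trajectory, notes):
    # Prompt the model to turn one episode into a natural-language lesson,
    # conditioned on the notes accumulated so far. (Prompt wording is
    # illustrative, not taken from the paper.)
    prompt = (
        "Notes so far:\n" + "\n".join(notes) +
        "\nEpisode:\n" + trajectory +
        "\nDistill one lesson:"
    )
    return llm(prompt)

def update_notes(notes, insight):
    # Refine the declarative memory. Here we simply append; a fuller
    # system would also merge, rewrite, or discard conflicting entries.
    return notes + [insight]

def in_memory_learning(llm, episodes):
    # The entire learning signal lives in `notes`, a list of
    # natural-language strings: no gradient updates, no labels.
    notes = []
    for trajectory in episodes:
        insight = distill_insight(llm, trajectory, notes)
        notes = update_notes(notes, insight)
    return notes

# Toy stand-in for a language model so the sketch runs end to end.
def toy_llm(prompt):
    return "lesson distilled from the latest episode"
```

With a real model behind `llm`, the returned notes would then be injected into the agent's prompt on subsequent episodes, which is what lets performance improve without any parameter updates.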