TaskWeaver: A Code-First Agent Framework (2311.17541v3)
Abstract: LLMs have shown impressive abilities in natural language understanding and generation, leading to their widespread use in applications such as chatbots and virtual assistants. However, existing LLM frameworks face limitations in handling domain-specific data analytics tasks with rich data structures, and they lack the flexibility to meet diverse user requirements. To address these issues, TaskWeaver is proposed as a code-first framework for building LLM-powered autonomous agents. It converts user requests into executable code and treats user-defined plugins as callable functions. TaskWeaver provides support for rich data structures, flexible plugin usage, and dynamic plugin selection, and leverages the coding capabilities of LLMs to handle complex logic. It also incorporates domain-specific knowledge through examples and ensures the secure execution of generated code. TaskWeaver offers a powerful and flexible framework for creating intelligent conversational agents that can handle complex tasks and adapt to domain-specific scenarios. The code is open-sourced at https://github.com/microsoft/TaskWeaver/.
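The core idea of "user-defined plugins as callable functions" can be illustrated with a minimal sketch. Note this is not TaskWeaver's actual API: the names `register_plugin`, `PLUGINS`, `anomaly_count`, and `run_generated_code` are hypothetical, and a real framework would sandbox the execution step rather than call `exec` directly.

```python
# Sketch: expose user-defined plugins as callables that
# LLM-generated code can invoke by name. All identifiers
# here are illustrative, not TaskWeaver's real interface.

PLUGINS = {}

def register_plugin(func):
    """Register a user-defined function so generated code can call it."""
    PLUGINS[func.__name__] = func
    return func

@register_plugin
def anomaly_count(values, threshold):
    """Example domain-specific plugin: count values above a threshold."""
    return sum(1 for v in values if v > threshold)

def run_generated_code(code: str):
    """Execute generated code with plugins in scope.
    A production framework would run this in a secured sandbox."""
    namespace = dict(PLUGINS)
    exec(code, namespace)
    return namespace.get("result")

# The agent would emit code like this from a user request such as
# "how many readings exceed 4?":
generated = "result = anomaly_count([1, 5, 9, 2], threshold=4)"
print(run_generated_code(generated))  # 2
```

The registry pattern is what lets the framework select plugins dynamically: only the functions injected into the execution namespace are visible to the generated code.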