
State Machine of Thoughts: Leveraging Past Reasoning Trajectories for Enhancing Problem Solving (2312.17445v2)

Published 29 Dec 2023 in cs.AI

Abstract: Current LLM-based agents reason within an exploration-evaluation framework, navigating problem-solving processes in a tree-like manner. However, these methods often neglect successful reasoning trajectories once a problem is resolved, leading to inefficient use of these trajectories for future analogous problems. To address this inefficiency, we adopt a state machine to record experience derived from previous reasoning trajectories. Within the state machine, states represent decomposed sub-problems, while state transitions reflect the dependencies among sub-problems. The state machine records both successful and failed trajectories. Utilizing the experience from the state machine, our proposed State Machine of Thoughts (SMoT) selects the optimal sub-solutions and avoids incorrect ones. Our experiments show that SMoT can significantly improve problem-solving abilities in two exploration-intensive problems: the 24-point game and a taxi navigation reinforcement learning game.
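
The mechanism described in the abstract, a state machine whose states are decomposed sub-problems and whose recorded experience marks which sub-solutions succeeded or failed, can be sketched as a small experience store that an agent consults before exploring anew. The sketch below is a hypothetical illustration: the class and method names (StateMachineOfThoughts, record_trajectory, best_action) and the example string keys are assumptions for clarity, not the paper's actual implementation or prompts.

```python
# Minimal sketch of a "state machine of thoughts" experience store.
# Names and string keys are illustrative assumptions, not the paper's code.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class SubProblemState:
    """One decomposed sub-problem; tallies of experienced sub-solutions."""
    successes: dict = field(default_factory=lambda: defaultdict(int))
    failures: dict = field(default_factory=lambda: defaultdict(int))


class StateMachineOfThoughts:
    """Records reasoning trajectories and reuses them on analogous problems."""

    def __init__(self):
        # Map each sub-problem key to its state; dependencies between
        # sub-problems are implied here by the order of steps in a trajectory.
        self.states = defaultdict(SubProblemState)

    def record_trajectory(self, steps, solved):
        """Store a trajectory given as (sub_problem, sub_solution) steps,
        counting each step as a success or failure of that sub-solution."""
        for sub_problem, sub_solution in steps:
            state = self.states[sub_problem]
            if solved:
                state.successes[sub_solution] += 1
            else:
                state.failures[sub_solution] += 1

    def best_action(self, sub_problem):
        """Prefer a previously successful sub-solution; returning None means
        no usable experience, so the agent falls back to fresh tree search."""
        state = self.states.get(sub_problem)
        if state is None or not state.successes:
            return None
        return max(state.successes, key=state.successes.get)


# Illustrative usage on the 24-point game:
smot = StateMachineOfThoughts()
smot.record_trajectory([("reduce {4, 6} to 24", "4 * 6 = 24")], solved=True)
smot.record_trajectory([("reduce {4, 6} to 24", "4 + 6 = 10")], solved=False)
print(smot.best_action("reduce {4, 6} to 24"))  # -> 4 * 6 = 24
```

In this reading, reusing recorded successes and skipping recorded failures is what lets the approach prune exploration on analogous problems instead of re-searching the full tree each time.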

