Plansformer: Generating Symbolic Plans using Transformers (2212.08681v1)

Published 16 Dec 2022 in cs.AI

Abstract: LLMs have been the subject of active research, significantly advancing the field of NLP. From BERT to BLOOM, LLMs have surpassed state-of-the-art results in various natural language tasks such as question answering, summarization, and text generation. Many ongoing efforts focus on understanding LLMs' capabilities, including their knowledge of the world, syntax, and semantics. However, extending the textual prowess of LLMs to symbolic reasoning has been slow and has predominantly focused on mathematical problems. In this paper, we explore the use of LLMs for automated planning - a branch of AI concerned with the realization of action sequences (plans) to achieve a goal, typically executed by intelligent agents, autonomous robots, and unmanned vehicles. We introduce Plansformer, an LLM fine-tuned on planning problems and capable of generating plans with favorable behavior in terms of correctness and length, with reduced knowledge-engineering effort. We also demonstrate the adaptability of Plansformer in solving different planning domains of varying complexity, owing to the transfer-learning abilities of LLMs. For one configuration of Plansformer, we achieve ~97% valid plans, of which ~95% are optimal, for Towers of Hanoi - a puzzle-solving domain.
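The validity and optimality figures quoted above rest on checking each generated plan against the domain's rules. As an illustrative sketch (not the paper's evaluation code), the check for Towers of Hanoi can be done by simulating the moves and comparing the plan length to the known optimum of 2^n - 1:

```python
# Illustrative sketch: scoring a generated Towers of Hanoi plan for
# validity (every move legal, goal reached) and optimality (2^n - 1 moves).
# A plan is a list of (from_peg, to_peg) moves over pegs 'A', 'B', 'C'.

def is_valid_plan(n_disks, plan, start='A', goal='C'):
    """Simulate the plan; True iff every move is legal and the goal peg
    ends up holding all disks largest-to-smallest."""
    pegs = {'A': [], 'B': [], 'C': []}
    pegs[start] = list(range(n_disks, 0, -1))  # largest disk at the bottom
    for src, dst in plan:
        if not pegs[src]:
            return False                       # nothing to move from src
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                       # larger disk onto a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[goal] == list(range(n_disks, 0, -1))

def is_optimal_plan(n_disks, plan, start='A', goal='C'):
    """An optimal Hanoi plan uses exactly 2^n - 1 moves."""
    return is_valid_plan(n_disks, plan, start, goal) and len(plan) == 2**n_disks - 1

def hanoi(n, src='A', aux='B', dst='C'):
    """Classic recursive solver, used here as a reference plan generator."""
    if n == 0:
        return []
    return hanoi(n - 1, src, dst, aux) + [(src, dst)] + hanoi(n - 1, aux, src, dst)
```

Under this scheme, each model output is parsed into a move list and scored with the two predicates; the ~97%/~95% figures then correspond to the fraction of outputs passing each check.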

Authors (8)
  1. Vishal Pallagani (17 papers)
  2. Bharath Muppasani (9 papers)
  3. Keerthiram Murugesan (38 papers)
  4. Francesca Rossi (55 papers)
  5. Lior Horesh (52 papers)
  6. Biplav Srivastava (57 papers)
  7. Francesco Fabiano (16 papers)
  8. Andrea Loreggia (20 papers)
Citations (33)