
Can LLMs Fix Issues with Reasoning Models? Towards More Likely Models for AI Planning (2311.13720v2)

Published 22 Nov 2023 in cs.AI

Abstract: This is the first work to look at the application of LLMs for the purpose of model space edits in automated planning tasks. To set the stage for this union, we introduce two different flavors of model space problems that have been studied in the AI planning literature and explore the effect of an LLM on those tasks. We empirically demonstrate how the performance of an LLM contrasts with combinatorial search (CS) -- an approach that has been traditionally used to solve model space tasks in planning -- both with the LLM in the role of a standalone model space reasoner and in the role of a statistical signal used in concert with the CS approach as part of a two-stage process. Our experiments show promising results, suggesting further forays of LLMs into the exciting world of model space reasoning for planning tasks in the future.
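The two-stage process mentioned in the abstract can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's actual algorithm: here a "model" is a set of action preconditions, a model-space edit removes one precondition, and `llm_score` is a stand-in for an LLM ranking which edits look most plausible before a small combinatorial search tries edit subsets in that order.

```python
from itertools import combinations

# Toy illustration (hypothetical; not the paper's actual method):
# a "model" is a set of preconditions, and an edit removes one of them.

MODEL = {"has_key", "door_unlocked", "at_door", "lamp_on"}
GOAL = {"at_door"}          # preconditions the edited model must keep
MAX_EDITS = 2               # edit-distance budget for the search

def llm_score(precondition):
    """Stand-in for an LLM judging how 'unlikely' a precondition is.
    Here: longer names score higher -- a purely illustrative proxy."""
    return len(precondition)

def is_valid(model):
    """A candidate model is valid if it keeps the goal preconditions."""
    return GOAL <= model

def two_stage_search(model):
    # Stage 1: rank candidate edits by the statistical signal.
    ranked = sorted(model - GOAL, key=llm_score, reverse=True)
    # Stage 2: combinatorial search over edit subsets, best-ranked first.
    for k in range(1, MAX_EDITS + 1):
        for edits in combinations(ranked, k):
            candidate = model - set(edits)
            if is_valid(candidate):
                return sorted(edits)
    return None

print(two_stage_search(MODEL))  # -> ['door_unlocked']
```

The point of the sketch is the division of labor: the (stubbed) LLM only orders candidates, while the combinatorial search retains responsibility for verifying validity, mirroring the LLM-as-signal role the abstract describes.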

Authors (5)
  1. Turgay Caglar (19 papers)
  2. Sirine Belhaj (1 paper)
  3. Tathagata Chakraborti (33 papers)
  4. Michael Katz (21 papers)
  5. Sarath Sreedharan (41 papers)
Citations (1)
