LTLCrit: A Temporal Logic-based LLM Critic for Safe and Efficient Embodied Agents (2507.03293v1)

Published 4 Jul 2025 in cs.AI, cs.CL, cs.LG, cs.SY, and eess.SY

Abstract: LLMs have demonstrated promise in reasoning tasks and general decision-making in static environments. In long-term planning tasks, however, errors tend to accumulate, often leading to unsafe or inefficient behavior, limiting their use in general-purpose settings. We propose a modular actor-critic architecture in which an LLM actor is guided by LTLCrit, a trajectory-level LLM critic that communicates via linear temporal logic (LTL). Our setup combines the reasoning strengths of LLMs with the guarantees of formal logic. The actor selects high-level actions from natural language observations, while the critic analyzes full trajectories and proposes new LTL constraints that shield the actor from future unsafe or inefficient behavior. The architecture supports both fixed, hand-specified safety constraints and adaptive, learned soft constraints that promote long-term efficiency. Our architecture is model-agnostic: any LLM-based planner can serve as the actor, and LTLCrit serves as a logic-generating wrapper. We formalize planning as graph traversal under symbolic constraints, allowing LTLCrit to analyze failed or suboptimal trajectories and generate new temporal logic rules that improve future behavior. We evaluate our system on the Minecraft diamond-mining benchmark, achieving 100% completion rates and improving efficiency compared to baseline LLM planners. Our results suggest that enabling LLMs to supervise each other through logic is a powerful and flexible paradigm for safe, generalizable decision making.

Summary

  • The paper introduces LTLCrit, a modular LLM-based actor-critic framework that integrates LTL constraints to improve planning safety and efficiency in embodied agents.
  • It employs a two-timescale framework where an online actor validates actions against LTL constraints and an offline critic induces new constraints from trajectory data.
  • Empirical evaluations on a Minecraft benchmark show reduced action counts and unsafe behaviors, significantly enhancing task success rates.

LTLCrit: A Temporal Logic-Based LLM Critic for Safe and Efficient Embodied Agents

The paper introduces LTLCrit, a modular actor-critic architecture that integrates LLMs with formal symbolic reasoning via Linear Temporal Logic (LTL) to address the challenges of safe and efficient long-horizon planning in embodied environments. The architecture is designed to overcome the limitations of LLMs in sequential decision-making tasks, particularly their tendency to accumulate errors and violate safety or efficiency constraints over extended trajectories.

Architecture and Methodology

LTLCrit employs a two-timescale actor-critic framework:

  • Online Actor Loop: An LLM-based actor receives natural language state descriptions and proposes high-level actions. Each action is verified against a set of LTL constraints using a Büchi automaton. If the action is valid, it is executed; otherwise, the actor is prompted to replan.
  • Offline Critic Loop: An LLM-based critic periodically reviews complete trajectories, identifies unsafe or inefficient behaviors, and proposes new LTL constraints. These constraints are immediately incorporated into the verifier, shaping future behavior.

This modular decomposition leverages the local reasoning strengths of LLMs while addressing their weaknesses in maintaining long-term consistency and safety. The architecture is model-agnostic, allowing any LLM-based planner to serve as the actor, with LTLCrit acting as a logic-generating wrapper.
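The online verification step can be illustrated with a minimal sketch. The `Verifier` and `select_action` names, and the simple condition/allowed-actions encoding of constraints, are hypothetical stand-ins for the paper's Büchi-automaton check:

```python
class Verifier:
    """Toy stand-in for the LTL verifier: each constraint says
    'when condition(state) holds, the next action must be in allowed'."""
    def __init__(self):
        self.constraints = []  # list of (condition, allowed_actions)

    def add(self, condition, allowed):
        self.constraints.append((condition, allowed))

    def satisfies(self, state, action):
        # An action is valid iff every triggered constraint permits it.
        return all(action in allowed
                   for condition, allowed in self.constraints
                   if condition(state))


def select_action(candidates, verifier, state):
    """Online actor loop: return the first proposed action that passes
    verification; None signals that the actor must replan."""
    for action in candidates(state):
        if verifier.satisfies(state, action):
            return action
    return None
```

For instance, a constraint added as `verifier.add(lambda s: "no_pickaxe" in s, {"craft_pickaxe"})` would force the actor to craft a pickaxe before any mining action is accepted.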

Planning as Graph Traversal under Symbolic Constraints

The planning problem is formalized as a shortest path search in a symbolic state-action graph, where nodes represent abstracted states and state-action pairs, and edges are pruned by LTL constraints. This bipartite graph representation makes explicit the roles of the actor (selecting actions) and the critic (pruning actions via constraints). The exponential complexity of the state space is mitigated by the LLM's semantic reasoning and the critic's data-driven constraint induction.
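As a minimal illustration of this view, consider a breadth-first shortest-path search over a toy symbolic graph in which critic-induced constraints prune edges before expansion. The graph, action names, and `blocked` predicate below are invented for the example:

```python
from collections import deque

def shortest_plan(graph, start, goal, blocked):
    """BFS shortest path over symbolic states; edges failing the
    critic-induced `blocked` predicate are pruned before expansion."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in graph.get(state, []):
            if blocked(state, action):   # constraint prunes this edge
                continue
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable under current constraints

# Toy graph: without pruning, "mine_stone_by_hand" is the shortest route;
# a constraint blocking it forces the two-step tool route instead.
graph = {
    "no_tools": [("mine_stone_by_hand", "has_stone"),
                 ("craft_wood_pickaxe", "has_pickaxe")],
    "has_pickaxe": [("mine_stone", "has_stone")],
}
blocked = lambda state, action: action == "mine_stone_by_hand"
```

This makes the division of labor concrete: the actor explores outgoing edges, while the critic's constraints shrink the set of edges worth exploring.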

LTL as a Communication Protocol

A key innovation is the use of LTL as the communication protocol between actor and critic. LTL constraints are machine-checkable, interpretable, and reusable across similar states and tasks. The critic generates constraints of the form:

G(φ_s ⇒ X(φ_a))

where φ_s is a boolean condition over symbolic state features and φ_a is a boolean formula over actions. Constraints are induced from three sources:

  1. Environment Feedback: Encoding failures (e.g., attempting illegal actions) as permanent constraints.
  2. Graph-Based Efficiency Analysis: Promoting efficient action sequences and eliminating redundant behaviors.
  3. Overconstrained States: Detecting and refining overly restrictive constraint sets to avoid deadlocks.

All induced constraints are grounded in observed trajectory data, ensuring traceability and preventing overgeneralization.
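Checking whether a recorded trajectory satisfies a constraint of the form G(φ_s ⇒ X(φ_a)) reduces to a scan over state-action pairs. The encoding below, with predicates as Python callables over a toy trajectory, is an illustrative assumption rather than the paper's implementation:

```python
def globally_satisfied(trajectory, phi_s, phi_a):
    """trajectory: list of (state, action) pairs.
    Returns True iff, at every step where phi_s holds on the state,
    the action taken satisfies phi_a -- the G(phi_s => X(phi_a)) check."""
    return all(phi_a(action)
               for state, action in trajectory
               if phi_s(state))

# Example constraint: "whenever the agent is near lava, it must not
# mine downward." Both predicates are invented for illustration.
phi_s = lambda state: "near_lava" in state
phi_a = lambda action: action != "mine_down"
```

A critic could run such checks when deciding whether a candidate constraint is consistent with the observed trajectory data it is grounded in.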

Empirical Evaluation

The architecture is evaluated on the Minecraft diamond-mining benchmark, a long-horizon, partially observable task with complex dependencies. Two LLM-based planners, SayCan and InnerMonologue, are used as actors, both with and without LTLCrit augmentation.

Key empirical findings:

  • Efficiency: LTLCrit consistently reduces the number of primitive actions required to reach subgoals. For example, InnerMonologue's average action count to obtain a diamond drops from 45.5 to 35.8, and its success rate increases from 80% to 100%. SayCan alone fails to complete the task, but with LTLCrit achieves 100% success.
  • Safety: The number of failed or unsafe actions is significantly reduced. InnerMonologue's failed action rate drops from 23% to 4.5% with LTLCrit, and 15% of unsafe actions are proactively blocked by the critic.

Average primitive actions per milestone (success rate in parentheses):

Method                    | Wooden Tool | Stone Tool | Iron Tool   | Diamond
SayCan                    | N/A (0/5)   | N/A (0/5)  | N/A (0/5)   | N/A (0/5)
SayCan + LTLCrit          | 12.6 (5/5)  | 17.6 (5/5) | 37.8 (5/5)  | 45.4 (5/5)
InnerMonologue            | 12.2 (5/5)  | 18.2 (5/5) | 43.25 (4/5) | 45.5 (4/5)
InnerMonologue + LTLCrit  | 9.4 (5/5)   | 14.4 (5/5) | 32.0 (5/5)  | 35.8 (5/5)

Failure and safety statistics:

Method                    | Failed Actions | Critic-Blocked Unsafe Actions
InnerMonologue            | 23%            | N/A
InnerMonologue + LTLCrit  | 4.5%           | 15%

Design Considerations and Limitations

  • Atomic Propositions: The expressivity and precision of LTL constraints are determined by the choice of atomic propositions. Natural language-grounded, interpretable propositions facilitate LLM reasoning and human editability.
  • Constraint Induction: The critic's ability to generalize from observed data enables efficient pruning of the action space, but the quality of constraints is limited by the symbolic abstraction. Missing or poorly defined propositions can hinder performance.
  • Offline Critic: The critic operates offline, requiring multiple trajectories to converge on effective rules. Online operation risks over-constraining the agent.
  • Manual Overhead: While constraints are human-editable, this introduces a new form of manual intervention in maintaining the constraint set.
  • Assumptions: The approach assumes access to structured state descriptors and symbolic task structure, which may not be available in all environments.

Implications and Future Directions

LTLCrit demonstrates that formal logic-based supervision can substantially improve the safety and efficiency of LLM-driven agents in complex, dynamic environments. The symbolic actor-critic framework bridges the gap between natural language reasoning and formal planning, providing interpretable, verifiable, and generalizable decision-making.

Practical implications include:

  • Enhanced reliability and trustworthiness of LLM agents in safety-critical domains such as robotics, healthcare, and autonomous systems.
  • Modular integration with existing LLM planners, enabling rapid adoption in diverse embodied environments.
  • Human-in-the-loop verification and editability of agent behavior via interpretable LTL constraints.

Future research directions:

  • Automatic discovery of meaningful atomic propositions to further reduce manual engineering.
  • Extension to multi-agent coordination and real-world human-robot teaming.
  • Online constraint induction and adaptation for real-time applications.
  • Application to environments lacking explicit symbolic structure, potentially via learned abstractions.

Conclusion

LTLCrit provides a principled approach to integrating LLMs with formal logic, yielding agents that are not only capable of complex reasoning but also verifiably safe and efficient. The empirical results in Minecraft highlight the potential of this paradigm for advancing the deployment of trustworthy, general-purpose embodied agents.
