
Cook2LTL: Translating Recipes into LTL Plans

Updated 21 December 2025
  • Cook2LTL is a system that translates unstructured cooking recipes into temporally precise, robot-executable plans using Linear Temporal Logic.
  • It combines a lightweight semantic parser, pretrained large language models, and dynamic caching to efficiently reduce high-level instructions into kitchen primitives.
  • Empirical evaluations in simulation demonstrate that action caching significantly reduces API calls, latency, and cost while improving plan reliability.

Cook2LTL is a system for translating unstructured natural language cooking recipes into unambiguous, temporally precise robot-executable plans specified in Linear Temporal Logic (LTL). Developed to address the linguistic and temporal complexity of everyday recipes and the vast action space inherent to cooking, Cook2LTL combines a lightweight semantic parser, pretrained LLMs, a dynamic action decomposition library with caching, and a formal LTL schema to generate grounded kitchen task plans (Mavrogiannis et al., 2023).

1. System Architecture and Workflow

The Cook2LTL architecture integrates four primary components to translate recipe instructions into LTL end to end:

  • Cooking Domain Knowledge Base: Captures the set of kitchen-relevant primitive actions \mathcal{A} (e.g., pick, place, turn_on, wait) and the semantic schema \mathcal{C} for instruction parsing, comprising the categories Verb, What?, Where?, How?, Time, and Temperature.
  • Semantic-Parsing Module: Processes each recipe step r_i, segmenting imperative sentences into their semantic constituents according to \mathcal{C} and emitting function-style action descriptors a = \text{Verb}(\ldots).
  • LLM-Based Translator: For descriptors a \notin \mathcal{A}, generates reductions into sequences of primitives in \mathcal{A} using “prog-prompt”-style few-shot prompting, and produces an initial LTL “skeleton” from the parsed descriptors.
  • Dynamic Action Library \mathbb{A} and Caching: Each new action decomposition a \rightarrow [a_1, \ldots, a_k] is stored in a runtime cache, enabling O(1) lookups and bypassing redundant LLM queries for repeated high-level actions.

Algorithm 1 formalizes the full pipeline: preprocess the instruction; semantically parse it to obtain descriptors; check for known primitive mappings (via \mathcal{A} or the cache \mathbb{A}); reduce composite actions with the LLM and update the cache; finally, construct the full LTL expression over the sequence of reduced actions (Mavrogiannis et al., 2023).
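
The control flow of Algorithm 1 can be sketched as follows. This is an illustrative reconstruction, not the paper's code: parse() and llm_reduce() are stubs standing in for the fine-tuned tagger and the LLM, and the primitive names are examples.

```python
# Hypothetical primitive set A for a toy kitchen domain.
PRIMITIVES = {"pick", "place", "slice", "turn_on", "wait", "wait_until"}

def parse(step):
    """Stub semantic parser: recipe step -> function-style descriptors."""
    return ["chop(onion)"]

def llm_reduce(descriptor):
    """Stub LLM reduction of a composite action into kitchen primitives."""
    return ["pick(onion)", "place(on_chopping_board)", "slice(onion)"]

def reduce_step(step, cache):
    """Parse a step and reduce every descriptor to primitives (Algorithm 1)."""
    reduced = []
    for a in parse(step):
        verb = a.split("(", 1)[0]
        if verb in PRIMITIVES:       # already a primitive in A
            reduced.append(a)
        elif a in cache:             # cache hit in the action library
            reduced.extend(cache[a])
        else:                        # unseen composite action: query LLM, cache
            cache[a] = llm_reduce(a)
            reduced.extend(cache[a])
    return reduced

cache = {}
print(reduce_step("Chop the onion.", cache))
# ['pick(onion)', 'place(on_chopping_board)', 'slice(onion)']
```

On a second occurrence of the same descriptor, the `elif` branch fires and the LLM is never queried, which is where the cost savings reported below come from.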

2. Mapping Natural Language to Primitives

2.1 Annotation and Parsing Process

The approach uses a manually annotated corpus of 1,000 recipe steps from Recipe1M+, where spans are labeled according to \mathcal{C} = \{\text{Verb}, \text{What?}, \text{Where?}, \text{How?}, \text{Time}, \text{Temperature}\}. A sequence tagger (NER-style) is fine-tuned to segment each instruction r_i into function-style descriptors, e.g., \mathtt{chop(onion)}.
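
The mapping from labeled spans to a descriptor can be illustrated as below; the span labels here are hand-assigned, whereas the real system produces them with the fine-tuned tagger.

```python
# Semantic schema C from the paper; to_descriptor is an illustrative helper.
CATEGORIES = ("Verb", "What?", "Where?", "How?", "Time", "Temperature")

def to_descriptor(spans):
    """Labeled spans -> function-style action descriptor Verb(args)."""
    assert set(spans) <= set(CATEGORIES)
    # Time/Temperature spans are kept as side constraints rather than arguments.
    args = ", ".join(spans[c] for c in ("What?", "Where?") if c in spans)
    return f"{spans['Verb']}({args})"

spans = {"Verb": "chop", "What?": "onion", "Time": "until translucent"}
print(to_descriptor(spans))  # chop(onion)
```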

2.2 Action Reduction by LLM

If a semantic action descriptor is an element of the primitive set \mathcal{A}, it is mapped directly; otherwise, Cook2LTL constructs a “prog-prompt” for the LLM, supplying previous examples and the parser-extracted target action for decomposition. The LLM emits a procedural breakdown as primitive API calls (e.g., pick, place, turn_on), which are cached for future reuse.

A concrete example is the reduction of “chop the onion until translucent” into the primitives pick(onion), place(on_chopping_board), slice(onion), wait_until(translucent) (Mavrogiannis et al., 2023).

3. Linear Temporal Logic Formalization

Cook2LTL generates LTL specifications using the standard grammar:

\phi ::= p \mid \neg\phi \mid \phi \wedge \phi \mid \phi \vee \phi \mid \mathbf{X}\,\phi \mid \mathbf{F}\,\phi \mid \mathbf{G}\,\phi \mid \phi\,\mathbf{U}\,\phi

where p ranges over atomic propositions corresponding to primitive-action completion (“chop_onions”, etc.), and the temporal operators denote behavioral constraints:

  • \mathbf{G}\,\phi: Globally; \phi holds at every time step.
  • \mathbf{F}\,\phi: Eventually; \phi holds at some future step.
  • \mathbf{X}\,\phi: Next; \phi holds at the next step.
  • \phi_1\,\mathbf{U}\,\phi_2: Until; \phi_1 holds at every step until \phi_2 holds.

Key formula patterns include safety properties (\mathbf{G}(\neg burn\_pan)), ordering constraints (\mathbf{G}(chop\_onions \rightarrow \mathbf{F}\,add\_onions)), and multi-step sequencing using nested eventually operators (Mavrogiannis et al., 2023).
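
These operators can be given a concrete semantics with a small evaluator over finite traces. This is a sketch for intuition only: LTL is normally defined over infinite traces, and here each operator is read over the remaining finite suffix, a common convention for task plans.

```python
# Formulas are nested tuples: ("ap", name), ("not", f), ("and", f, g),
# ("or", f, g), ("X", f), ("F", f), ("G", f), ("U", f, g).
# A trace is a list of sets of atomic propositions, one set per time step.
def holds(f, trace, i=0):
    op, *args = f
    if op == "ap":  return args[0] in trace[i]
    if op == "not": return not holds(args[0], trace, i)
    if op == "and": return holds(args[0], trace, i) and holds(args[1], trace, i)
    if op == "or":  return holds(args[0], trace, i) or holds(args[1], trace, i)
    if op == "X":   return i + 1 < len(trace) and holds(args[0], trace, i + 1)
    if op == "F":   return any(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "G":   return all(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "U":   return any(holds(args[1], trace, j)
                               and all(holds(args[0], trace, k) for k in range(i, j))
                               for j in range(i, len(trace)))
    raise ValueError(op)

# Ordering constraint G(chop_onions -> F add_onions),
# with a -> b encoded as (not a) or b:
ordering = ("G", ("or", ("not", ("ap", "chop_onions")),
                        ("F", ("ap", "add_onions"))))
print(holds(ordering, [{"chop_onions"}, set(), {"add_onions"}]))  # True
print(holds(ordering, [{"add_onions"}, {"chop_onions"}]))         # False
```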

4. Example Translations

Single-Step Example:

Instruction: “Chop the onion until translucent.”

  • Semantic parse: a_1 = chop(onion), Time = “until translucent”
  • Action reduction: [pick(onion), place(on_chopping_board), slice(onion), wait_until(translucent)]
  • LTL formula:

\psi_1 = \mathbf{F}(\texttt{pick\_onion} \wedge \mathbf{F}(\texttt{place\_on\_board} \wedge \mathbf{F}(\texttt{slice\_onion} \wedge \mathbf{F}(\texttt{wait\_until(translucent)}))))

Multi-Step Example:

Instruction: “Add oil, then cook for 5 min at medium heat.”

  • Parses to a_1 = add(oil), a_2 = cook(duration=5m, temperature=med); both reduce to primitives.
  • LTL formula:

\phi = \mathbf{F}(\texttt{add\_oil} \wedge \mathbf{F}(\texttt{turn\_on\_stove\_med} \wedge \mathbf{F}(\texttt{wait(300s)})))

These constructions directly encode instructional temporality and facilitate execution by downstream planners (Mavrogiannis et al., 2023).
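
The nested-eventually pattern in \psi_1 and \phi above can be generated mechanically from an ordered list of reduced primitives. The helper below is illustrative, not the paper's code:

```python
def sequence_to_ltl(actions):
    """[a1, ..., an] -> F(a1 & F(a2 & ... F(an)...)), enforcing execution order."""
    formula = actions[-1]
    for a in reversed(actions[:-1]):  # wrap from the last action outward
        formula = f"{a} & F({formula})"
    return f"F({formula})"

print(sequence_to_ltl(["add_oil", "turn_on_stove_med", "wait(300s)"]))
# F(add_oil & F(turn_on_stove_med & F(wait(300s))))
```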

5. Runtime Caching and Efficiency

The dynamic action-reduction cache \mathbb{A} is core to Cook2LTL's efficiency. At each parsing step, the cache is checked for previously reduced high-level actions before the LLM is invoked. This yields O(1) decomposition for actions already encountered, sharply reducing redundant API requests, cost, and conversion time.
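
The cache's effect on LLM usage can be demonstrated with a memoizing wrapper; llm_reduce is a stand-in for the real LLM call, and the action names are made up.

```python
llm_calls = 0

def llm_reduce(action):                 # stub for the real LLM query
    global llm_calls
    llm_calls += 1
    return ["pick(x)", "place(y)"]      # placeholder reduction

cache = {}

def reduce_action(action):
    if action not in cache:             # miss: query the LLM once, store result
        cache[action] = llm_reduce(action)
    return cache[action]                # hit: O(1) lookup, no API cost

for a in ["saute(onion)", "saute(onion)", "saute(onion)", "whisk(eggs)"]:
    reduce_action(a)
print(llm_calls)  # 2: only the two distinct actions reached the LLM
```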

Empirical results on 50 held-out recipes from Recipe1M+ demonstrate:

| System | Executability | Time | Cost | API Calls |
|---|---|---|---|---|
| AR* (no cache) | 91% | 14.85 min | $0.19 | 275 |
| AR (primitives) | 92% | 9.89 min | $0.16 | 231 |
| Cook2LTL (+\mathbb{A}) | 94% | 6.05 min | $0.11 | 134 |

In the AI2-THOR simulation environment, Cook2LTL yields significant reductions in LLM API calls (-51%), latency (-59%), and cost (-42%), relative to a baseline with no cache (Mavrogiannis et al., 2023).
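
The percentage reductions quoted here follow directly from the no-cache and Cook2LTL rows of the table above:

```python
no_cache = {"api_calls": 275, "time_min": 14.85, "cost_usd": 0.19}
cook2ltl = {"api_calls": 134, "time_min": 6.05,  "cost_usd": 0.11}

for k in no_cache:
    drop = 100 * (1 - cook2ltl[k] / no_cache[k])
    print(f"{k}: -{drop:.0f}%")
# api_calls: -51%
# time_min: -59%
# cost_usd: -42%
```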

6. Experimental Evaluation in Simulated Environments

Evaluations are conducted in AI2-THOR, with the primitive set \mathcal{A} limited by the simulator's kitchen-manipulation API. Four representative tasks were used (microwave potato, chop tomato, cut bread, refrigerate apple), with the following results:

| Task | SR (AR) | Time (AR) | SR (Cook2LTL) | Time (Cook2LTL) |
|---|---|---|---|---|
| Microwave the potato | 5.4% | 27.3 s | 8% | 3.3 s |
| Chop the tomato | 2.4% | 16.0 s | 4% | 1.6 s |
| Cut the bread | 9% | 12.9 s | 8% | 1.1 s |
| Refrigerate the apple | 7.6% | 14.6 s | 8% | 1.6 s |

Metrics were success rate (SR) and LLM-induced latency per episode. Action decomposition caching cuts per-episode latency by roughly an order of magnitude while keeping success rates comparable; the gains are attributed to the elimination of redundant LLM calls and streamlined action translation (Mavrogiannis et al., 2023).

7. Strengths, Limitations, and Future Directions

Strengths:

  • Robust open-vocabulary parsing of free-form web recipes.
  • Flexible decomposition into any user-defined primitive set \mathcal{A}.
  • Rigorous, temporally rich LTL specifications readily consumable by planning modules.
  • Substantial reduction in LLM-driven cost and latency via dynamic action caching.

Limitations and Directions:

  • The current semantic parser is trained on only 1,000 steps; larger-scale annotation or self-supervised augmentation is needed for robustness on diverse, real-world recipe corpora.
  • AI2-THOR supports only rudimentary "toy" action primitives; extension to more realistic simulators and physical robot trials (e.g., with YCB objects) is identified as future work.
  • No runtime verification of LLM-generated reductions is performed; incorporation of environment-based feedback or symbolic plan checking (cf. AutoTAMP) is suggested to strengthen reliability.

The system demonstrates the capacity of LLM-guided pipelines, in concert with formal methods and adaptive caching, to transform unstructured instructional text into high-fidelity, robot-executable temporal plans, offering a foundation for further advances in automated task understanding and planning in robotics (Mavrogiannis et al., 2023).
