Reward Machines in MiniGrid Levels
- Reward Machines are automata-based formalisms that decompose sparse, long-horizon reinforcement learning tasks into explicit sub-goals with event-triggered rewards.
- Automatic synthesis via foundation models and passive trace-based inference enables precise RM construction, reducing sample complexity in MiniGrid environments.
- Embedding natural language instructions in RM states allows for zero-shot transfer and structured policy decomposition, enhancing learning in complex tasks.
Reward Machines (RMs) offer an automata-based formalism for structuring and specifying reward functions in reinforcement learning, particularly in sequential and compositional tasks such as those encountered in the MiniGrid suite. By decomposing sparse long-horizon objectives into explicit sub-goals and transitions over well-defined event predicates, RMs endow the underlying decision process, which is often partially observable and carries a non-Markovian reward over raw observations, with a reward structure that enables scalable and sample-efficient learning through augmented state representations. Recent advances leverage foundation models and passive automata inference to acquire RMs automatically, yielding substantial improvements in multiple MiniGrid environments and supporting robust zero-shot generalization, memoryless policy decomposition, and sample complexity reductions (Castanyer et al., 16 Oct 2025, Icarte et al., 2021, Wu et al., 3 Aug 2025).
1. Formal Definition of Reward Machines
An RM is defined as a tuple $\mathcal{R} = \langle U, u_0, \Sigma, \delta, r, F, L \rangle$, with:
- $U$: finite set of automaton (RM) states, each representing a unique sub-goal.
- $u_0 \in U$: unique initial state.
- $\Sigma$: finite alphabet of Boolean event symbols (e.g., $\mathsf{has\_red\_key}$, $\mathsf{door\_opened\_red}$).
- $\delta : U \times \Sigma \to U$: deterministic transition function.
- $r : U \times \Sigma \to \mathbb{R}$: scalar reward function associated with each transition.
- $F \subseteq U$: set of accepting/final states (absorbing).
- $L$: labeling function mapping MDP transitions $(s, a, s')$ to event symbols in $\Sigma$.
For each event $\sigma \in \Sigma$ and state $u \in U$, a transition yields the next state $u' = \delta(u, \sigma)$ and a reward $r(u, \sigma)$. Unspecified transitions default to self-loops with zero reward. In some formulations, $\delta$ and $r$ may accept sets of simultaneously holding predicates (elements of $2^{\mathcal{AP}}$), but MiniGrid RMs typically process a single Boolean event per step for practical clarity (Castanyer et al., 16 Oct 2025, Icarte et al., 2021, Wu et al., 3 Aug 2025).
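The tuple above maps naturally onto a small data structure. The following is a minimal sketch, not any paper's reference implementation; the event names and reward values are illustrative only.

```python
# Minimal sketch of a Reward Machine; event names and rewards are illustrative.
from dataclasses import dataclass, field

@dataclass
class RewardMachine:
    states: set                                        # U
    initial: str                                       # u_0
    transitions: dict = field(default_factory=dict)    # (u, sigma) -> (u', reward)
    final: set = field(default_factory=set)            # F (absorbing)

    def step(self, u, sigma):
        """Advance on event sigma; unspecified transitions self-loop with zero reward."""
        return self.transitions.get((u, sigma), (u, 0.0))

# Two-step task: "get a key, then open a door".
rm = RewardMachine(
    states={"u0", "u1", "u2"},
    initial="u0",
    transitions={
        ("u0", "has_key"): ("u1", 0.5),
        ("u1", "door_open"): ("u2", 1.0),
    },
    final={"u2"},
)
u, reward = rm.step("u0", "has_key")   # -> ("u1", 0.5)
```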
2. Automatic Synthesis and Inference of Reward Machines
Automated RM synthesis addresses the challenge of precise reward specification. Two principal methodologies are prominent:
- Foundation Model-Aided Synthesis (ARM-FM):
- A generator FM is prompted with a natural-language mission, the MiniGrid environment API, and an explicit RM template. The FM returns a succinct RM specification: the state set $U$, initial state $u_0$, transition function $\delta$, reward function $r$, and accepting states $F$.
- A parser ingests the FM's output, constructing a programmatic representation and deriving executable labeling functions via code-specialized FMs.
- Generator and critic FMs co-train in a loop wherein the critic enforces predicate correctness, minimality, coverage, and consistent reward allocation, driving iterative improvement until logical correctness is certified.
- This process enables RM construction directly from intuitive task descriptions and symbol detectors (Castanyer et al., 16 Oct 2025); a schematic parsing sketch follows this list.
- Passive Trace-Based Inference (DB-RPNI for DBMM):
- The Dual-Behavior Mealy Machine (DBMM) formalism generalizes RMs for both reward- and transition-based abstractions. The DB-RPNI algorithm infers minimal DBMMs through a two-phase process: sample-set construction (from labeled MiniGrid trajectories) and state merging over a prefix-tree structure, subject to local output compatibility.
- For MiniGrid, the atomic proposition set is constructed from events such as $\mathsf{has\_red\_key}$ and $\mathsf{door\_opened\_red}$ (see the example in Section 5). Labeled event trajectories are collected (typically 1,000–10,000 traces), preprocessed to remove redundant or trivial events, and converted into RM sample sequences, as in the preprocessing sketch at the end of this section. The algorithm iteratively merges states with compatible output histories, yielding a compact RM that encapsulates all necessary event/reward dynamics (Wu et al., 3 Aug 2025).
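As a concrete illustration of the ARM-FM parsing step described above, the sketch below assumes the generator FM emits a JSON RM specification and reuses the RewardMachine sketch from Section 1; `call_generator_fm`, `call_critic_fm`, and the JSON schema are hypothetical placeholders, not the paper's API.

```python
# Hypothetical ARM-FM parsing sketch: JSON schema and FM call names are placeholders.
import json

def call_generator_fm(mission, api_doc):
    """Placeholder for the generator FM call (not a real API)."""
    raise NotImplementedError

def call_critic_fm(rm, mission):
    """Placeholder for the critic FM call (not a real API)."""
    raise NotImplementedError

def parse_rm_spec(fm_output: str) -> RewardMachine:
    spec = json.loads(fm_output)
    transitions = {
        (t["from"], t["event"]): (t["to"], float(t["reward"]))
        for t in spec["transitions"]
    }
    return RewardMachine(
        states=set(spec["states"]),
        initial=spec["initial"],
        transitions=transitions,
        final=set(spec["final"]),
    )

def synthesize(mission: str, api_doc: str, max_rounds: int = 5) -> RewardMachine:
    """Generator/critic loop (schematic): re-prompt until the critic accepts the RM."""
    for _ in range(max_rounds):
        rm = parse_rm_spec(call_generator_fm(mission, api_doc))
        ok, feedback = call_critic_fm(rm, mission)
        if ok:
            return rm
        mission = mission + "\nCritic feedback: " + feedback
    return rm
```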
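For the DB-RPNI trace preprocessing described above, the following sketch (which assumes a simple per-step trace format of event labels and rewards) drops no-event steps and compresses runs of identical symbols into the sample sequences that state merging consumes.

```python
# Sketch of DB-RPNI-style trace preprocessing; the trace format is assumed.
from itertools import groupby

def preprocess_trace(events, rewards):
    """events: per-step event symbol or None; rewards: per-step observed reward."""
    # Drop steps with no detected event, then compress consecutive repeats,
    # keeping the reward observed at the first occurrence of each run.
    paired = [(e, r) for e, r in zip(events, rewards) if e is not None]
    compressed = [next(run) for _, run in groupby(paired, key=lambda x: x[0])]
    inputs = [e for e, _ in compressed]
    outputs = [r for _, r in compressed]
    return inputs, outputs

events  = [None, "has_red_key", "has_red_key", None, "door_opened_red"]
rewards = [0.0, 0.5, 0.0, 0.0, 1.0]
print(preprocess_trace(events, rewards))  # (['has_red_key', 'door_opened_red'], [0.5, 1.0])
```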
3. Embedding Event Abstraction and Language Alignment
To support generalization and subpolicy composition, each RM state $u$ is augmented with an FM-generated English instruction. This instruction is mapped to an embedding via a pretrained text encoder (e.g., from the Qwen/Mistral families). During RL, the policy is conditioned on these semantically aligned embeddings, facilitating:
- Zero-shot transfer to structurally similar unseen RMs by leveraging clusterings of embeddings for semantically related instructions (e.g., “pick up blue key” and “pick up red key”).
- Faster convergence on related subtasks in procedurally generated and held-out MiniGrid environments (Castanyer et al., 16 Oct 2025).
Empirical results show that embeddings of start, middle, and end sub-tasks form distinct clusters, with semantically similar sub-tasks grouped together, supporting efficient policy re-use.
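A minimal sketch of this instruction-embedding step follows; the paper's encoders come from the Qwen/Mistral families, so the sentence-transformers model here is a stand-in purely for illustration, and the instructions are hypothetical sub-goals.

```python
# Illustrative only: a sentence-transformers encoder stands in for the paper's
# Qwen/Mistral-family text encoders; the instructions are hypothetical sub-goals.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
instructions = ["Pick up the blue key.", "Pick up the red key.", "Open the red door."]
embeddings = encoder.encode(instructions, normalize_embeddings=True)

# Semantically related sub-goals yield high cosine similarity, which is what
# lets a policy conditioned on these embeddings re-use subpolicies zero-shot.
print(np.round(embeddings @ embeddings.T, 2))
```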
4. RM-Based Decomposition and Policy Learning
Recasting RL with RMs entails augmenting observations $s$ with the current RM state $u$, creating a Markov process over the product space $S \times U$. Each RM state $u$ defines a memoryless subpolicy $\pi_u(a \mid s)$ and a corresponding Q-function $Q_u(s, a)$. At each experience $(s, a, s')$ with abstract event $\sigma = L(s, a, s')$, the update is
$$Q_u(s, a) \leftarrow Q_u(s, a) + \alpha \left[ r(u, \sigma) + \gamma \max_{a'} Q_{\delta(u, \sigma)}(s', a') - Q_u(s, a) \right].$$
Thus, complex long-horizon tasks are decomposed into structured subtasks, each directly shaped by intermediate reward signals. This dramatically improves sample efficiency compared to flat or extrinsic-only rewards, which are often sparse and delayed in MiniGrid domains (Icarte et al., 2021).
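A tabular sketch of this update is given below; `rm` is the RewardMachine sketch from Section 1, and the state/action encodings are placeholders, so this illustrates the update rule rather than reproducing the QRM implementation.

```python
# Tabular sketch of the RM-conditioned Q-update above; `rm` is the RewardMachine
# sketch from Section 1 and the state/action types are placeholders.
from collections import defaultdict

alpha, gamma = 0.1, 0.99
Q = defaultdict(lambda: defaultdict(float))          # Q[u][(s, a)]

def rm_q_update(u, s, a, s_next, sigma, actions):
    """One update for the current RM state; returns the next RM state."""
    u_next, reward = rm.step(u, sigma)               # RM supplies reward and next RM state
    best_next = max(Q[u_next][(s_next, a2)] for a2 in actions)
    td_target = reward + gamma * best_next
    Q[u][(s, a)] += alpha * (td_target - Q[u][(s, a)])
    return u_next
```

In full QRM, the same environment transition is additionally replayed through the other RM states, so every $Q_u$ is updated from each experience rather than only the current one (Icarte et al., 2021).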
5. Practical Application in MiniGrid Environments
RMs are applied to standard MiniGrid levels such as DoorKey, BlockedUnlockPickup, UnlockToUnlock, KeyCorridor, MultiRoom, and ObstructedMaze. The process involves:
- A mission description and API details are provided to an FM; an RM with labeled event symbols and shaping rewards for intermediate sub-goals is synthesized.
- Labeling functions are implemented efficiently (typically <10 lines each), mapping environment transitions to symbols for event detection.
- The RL agent (DQN+RM) operates over the joint state $(s, u)$, or, in the case of QRM, over a separate Q-function for each RM state.
- Benchmarks consistently show that DQN+RM or LRM+DDQN surpass vanilla RL methods, ICM, LLM-policy, and CLIP-reward baselines, reaching high rewards by several hundred thousand steps—even in procedurally generated or long-horizon levels where all baselines fail to learn (Castanyer et al., 16 Oct 2025).
Feature extraction (e.g., from partial FoV images with object and location channel encoding) is coupled with RM-state indicators, yielding networks that abstract over events rather than raw grid positions.
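A sketch of this joint encoding, assuming MiniGrid's standard $(7, 7, 3)$ partial observation tensor, is shown below; the function name and shapes are illustrative.

```python
# Sketch of concatenating the partial-view encoding with a one-hot RM-state
# indicator; assumes MiniGrid's standard (7, 7, 3) partial observation tensor.
import numpy as np

def encode_observation(fov_image: np.ndarray, rm_state_idx: int, num_rm_states: int) -> np.ndarray:
    rm_onehot = np.zeros(num_rm_states, dtype=np.float32)
    rm_onehot[rm_state_idx] = 1.0
    return np.concatenate([fov_image.astype(np.float32).ravel(), rm_onehot])

x = encode_observation(np.zeros((7, 7, 3)), rm_state_idx=1, num_rm_states=4)
print(x.shape)   # (151,) = 7*7*3 + 4
```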
Example: “Pick up the red key and then open the door”
Given the mission, “Pick up the red key and then open the door,” the synthesized RM for MiniGrid can be specified as:
- States $U = \{u_0, u_1, u_2\}$, with $u_2$ accepting; instructions: “Pick up the red key.”, “Open the red door.”, “Done.”
- State transitions:
- $\delta(u_0, \mathsf{has\_red\_key}) = u_1$, with a positive intermediate reward
- $\delta(u_1, \mathsf{door\_opened\_red}) = u_2$, with the final task reward
- Else, self-loops with zero reward
- Labeling (in Python):

```python
def has_red_key(env):
    # True once the agent is carrying the red key.
    carrying = env.carrying
    return carrying is not None and carrying.type == "key" and carrying.color == "red"

def door_opened_red(env):
    # True once any red door in the grid is open; env.grid.grid holds the cell objects.
    return any(
        obj is not None and obj.type == "door" and obj.color == "red" and obj.is_open
        for obj in env.grid.grid
    )
```
- Transition function in LaTeX:
$$\delta(u, \sigma) = \begin{cases} u_1 & \text{if } u = u_0,\ \sigma = \mathsf{has\_red\_key} \\ u_2 & \text{if } u = u_1,\ \sigma = \mathsf{door\_opened\_red} \\ u & \text{otherwise} \end{cases}$$
This RM provides dense reward shaping, improving credit assignment by splitting the overall task into short-horizon subgoals and enabling the agent to learn key pickup and door opening orders-of-magnitude faster than under sparse goal-only rewards (Castanyer et al., 16 Oct 2025).
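Wiring the labeling functions and the synthesized RM into the training loop can be done with a standard Gymnasium wrapper. The sketch below assumes the RewardMachine class from Section 1 and the two labeling functions above; it is a simplified illustration, not the benchmark code.

```python
# Sketch of an RM wrapper; assumes the RewardMachine sketch from Section 1 and
# the labeling functions defined above. Simplified illustration only.
import gymnasium as gym

class RMWrapper(gym.Wrapper):
    def __init__(self, env, rm, labelers):
        super().__init__(env)
        self.rm, self.labelers = rm, labelers
        self.u = rm.initial

    def reset(self, **kwargs):
        self.u = self.rm.initial
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, extrinsic, terminated, truncated, info = self.env.step(action)
        # Evaluate all labeling predicates, then fire the first event that has a
        # defined transition out of the current RM state.
        true_events = [name for name, fn in self.labelers.items() if fn(self.env.unwrapped)]
        event = next((e for e in true_events if (self.u, e) in self.rm.transitions), None)
        shaped = 0.0
        if event is not None:
            self.u, shaped = self.rm.step(self.u, event)
        terminated = terminated or self.u in self.rm.final
        info["rm_state"] = self.u
        return obs, extrinsic + shaped, terminated, truncated, info

# Usage (hypothetical RM and level choice):
# env = RMWrapper(gym.make("MiniGrid-DoorKey-6x6-v0"), rm,
#                 {"has_red_key": has_red_key, "door_opened_red": door_opened_red})
```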
6. Inference Algorithms and Computational Considerations
Passive state-merging inference for RMs, as in the DB-RPNI algorithm, is both efficient and provably correct under structure completeness. The computational cost of obtaining a minimal correct automaton is polynomial in the size of the input sample, and empirical results show that RMs for MiniGrid (typically 4–8 states) can be inferred in minutes on CPU hardware (Wu et al., 3 Aug 2025).
Algorithmic steps for MiniGrid RM inference comprise collecting sufficiently diverse labeled traces (ensuring coverage of all relevant event sequences), constructing prefix-tree transducers from symbol-labeled trajectories, and merging states under local compatibility checks over output histories. Event detector specification and preprocessing (e.g., compressing runs of identical symbols) are critical for sample efficiency and accuracy, while hyperparameter choices (confidence threshold, maximum automaton size) directly influence automaton minimality and merging fidelity.
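The local compatibility condition that gates state merging can be illustrated as follows; the data structures are simplified placeholders rather than the DB-RPNI implementation.

```python
# Schematic local-output-compatibility check used to gate state merging:
# two candidate states may merge only if they agree on the reward emitted for
# every event symbol they have both been observed on. Simplified placeholder.
def locally_compatible(outputs_p: dict, outputs_q: dict) -> bool:
    shared = set(outputs_p) & set(outputs_q)
    return all(outputs_p[e] == outputs_q[e] for e in shared)

# Both states emit 1.0 on "door_opened_red" and disagree on nothing else -> mergeable.
print(locally_compatible({"door_opened_red": 1.0, "has_red_key": 0.0},
                         {"door_opened_red": 1.0}))   # True
```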
7. Limitations, Extensions, and Impact
Key limitations for RM approaches in MiniGrid include the requirement for a well-specified event detector set, possible labeling imperfections in highly stochastic or information-lossy environments, and the need for sufficient coverage in observed traces to guarantee structure completeness (Icarte et al., 2021). Extensions proposed in the literature include combining RM inference with intrinsic exploration, on-the-fly automata learning, and interactive or expert-driven event set reduction.
The impact is marked: experimental findings demonstrate that RM-augmented RL achieves near-optimal sample efficiency and task completion rates on MiniGrid levels where sparse extrinsic reward or memory-augmented baselines stagnate. Structured, compositional reward design thus enables the systematic transformation of intractable long-horizon or procedural tasks into learnable curricula, establishing RMs as critical instruments for reinforcement learning in abstract, partially observable, and compositional domains (Castanyer et al., 16 Oct 2025, Icarte et al., 2021, Wu et al., 3 Aug 2025).