Pre$^3$: Enabling Deterministic Pushdown Automata for Faster Structured LLM Generation (2506.03887v1)

Published 4 Jun 2025 in cs.CL

Abstract: Extensive LLM applications demand efficient structured generations, particularly for LR(1) grammars, to produce outputs in specified formats (e.g., JSON). Existing methods primarily parse LR(1) grammars into a pushdown automaton (PDA), leading to runtime execution overhead for context-dependent token processing, especially inefficient under large inference batches. To address these issues, we propose Pre$^3$ that exploits deterministic pushdown automata (DPDA) to optimize the constrained LLM decoding efficiency. First, by precomputing prefix-conditioned edges during the preprocessing, Pre$^3$ enables ahead-of-time edge analysis and thus makes parallel transition processing possible. Second, by leveraging the prefix-conditioned edges, Pre$^3$ introduces a novel approach that transforms LR(1) transition graphs into DPDA, eliminating the need for runtime path exploration and achieving edge transitions with minimal overhead. Pre$^3$ can be seamlessly integrated into standard LLM inference frameworks, reducing time per output token (TPOT) by up to 40% and increasing throughput by up to 36% in our experiments. Our code is available at https://github.com/ModelTC/lightllm.

Summary

  • The paper introduces a DPDA-based framework that deterministically transforms LR(1) grammars to eliminate runtime ambiguity and inefficiencies in structured LLM generation.
  • It leverages prefix-conditioned edges and cycle-aware DPDA construction to precompute unique transition paths, enhancing processing efficiency.
  • Experiments show up to 40% faster token generation and 36% higher throughput compared to state-of-the-art baselines, demonstrating practical scalability.

The paper "Pre3^3: Enabling Deterministic Pushdown Automata for Faster Structured LLM Generation" (2506.03887) addresses the computational inefficiencies of existing methods for generating structured output from LLMs, particularly when adhering to grammars like LR(1) (commonly used for formats like JSON). Current state-of-the-art approaches typically parse LR(1) grammars into a Pushdown Automaton (PDA). While PDAs handle the recursive nature of context-free grammars, their non-deterministic nature leads to significant runtime overhead, especially under large inference batch sizes. This overhead stems from the need for context-dependent token processing, requiring backtracking, speculative exploration, and complex management of a persistent stack.

Pre$^3$ proposes a novel solution by leveraging the properties of deterministic pushdown automata (DPDA). The core idea is to transform the LR(1) grammar into a DPDA during a preprocessing step. Because the resulting automaton is deterministic, transition paths can be fully precomputed, eliminating the runtime ambiguity and associated overhead of traditional PDA-based methods.
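To make the runtime contrast concrete, here is a minimal, self-contained sketch (not the paper's implementation) using a toy balanced-parentheses grammar. With a deterministic automaton, testing whether a candidate token is grammatically valid is a straight-line walk over its characters, with no backtracking or path exploration; all class and function names below are illustrative.

```python
# Toy illustration of deterministic constrained decoding: exactly one rule
# applies per input character, so validity checking never searches or backtracks.

class ToyDPDA:
    """Deterministic automaton for balanced parentheses (stand-in for the
    DPDA that Pre^3 derives from an LR(1) grammar)."""

    def __init__(self):
        self.stack = []

    def accepts_char(self, ch):
        """Deterministically decide whether `ch` can extend the current
        prefix, updating the stack as a side effect."""
        if ch == "(":
            self.stack.append("(")   # shift: push onto the stack
            return True
        if ch == ")" and self.stack and self.stack[-1] == "(":
            self.stack.pop()         # reduce: pop the matching "("
            return True
        return False

def valid_token_mask(dpda_stack, vocab):
    """Per decoding step, test each vocabulary token against a snapshot of
    the automaton state; each test is a bounded, deterministic walk."""
    mask = []
    for token in vocab:
        probe = ToyDPDA()
        probe.stack = list(dpda_stack)   # cheap state snapshot
        mask.append(all(probe.accepts_char(c) for c in token))
    return mask

vocab = ["(", ")", "()", "(("]
print(valid_token_mask(["("], vocab))    # [True, True, True, True]
print(valid_token_mask([], vocab))       # [True, False, True, True]
```

In a real system the automaton is derived from the LR(1) grammar rather than hand-written, and the per-token checks are batched, but the single-successor property shown here is what removes the runtime search.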

The key technical contributions of Pre$^3$ are:

  1. Prefix-conditioned Edges: Unlike standard PDA transitions, which depend only on the current state, input symbol, and top of the stack, Pre$^3$ introduces "prefix-conditioned edges." Such an edge fires only when the input symbol matches and a specific prefix of symbols already on the stack (representing the parsing history) matches as well. This guarantees that for any given state, input, and stack configuration, the next transition is uniquely determined, which is crucial for enabling ahead-of-time analysis and parallel processing of transitions (see the sketch following this list).
  2. Cycle-aware DPDA Construction: The paper presents an algorithm to build the DPDA directly from the LR(1) state transition graph. This involves defining two types of edges:
    • Acceptance Edges: Directly derived from LR(1) shift operations, corresponding to pushing state information onto the stack.
    • Reduction Edges: Explicitly added to handle reduction operations (replacing a sequence of symbols with a non-terminal). This process resolves non-determinism by merging epsilon-reduction edges with compatible acceptance edges and incorporating prefix-conditioned stack matching. A key challenge is handling cycles in the LR(1) graph, which could lead to infinite reduction paths during construction. Pre$^3$ addresses this by modifying back-edges in cycles to include stack pop operations that remove the redundant states corresponding to a full cycle traversal, so that the stack reflects only the net effect of cycle traversals.
  3. Edge Optimization with Prefix-condition: The deterministic and precomputed nature of DPDA edges allows for structural optimizations during preprocessing. These include:
    • Edge Aggregation: Merging edges that have the same stack prefix condition and operations but accept different symbols (e.g., aggregating edges for digits 0-9).
    • Edge Merging: Connecting edges that share a matched stack prefix and operations, potentially reducing the number of steps needed to reach a state. These optimizations simplify the automaton and improve runtime efficiency; the sketch below also illustrates digit-range aggregation.
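The following sketch shows one plausible representation of prefix-conditioned edges and edge aggregation. The `Edge` dataclass, state names, and stack symbols are hypothetical, not the paper's data structures; the point demonstrated is that prefix-conditioning makes each transition a bounded lookup (at most one edge can match) and that aggregation collapses ten parallel digit edges into one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    symbols: frozenset     # accepted input symbols (aggregated, e.g. digits 0-9)
    stack_prefix: tuple    # required top-of-stack symbols, innermost last
    pop: int               # number of stack entries the transition pops
    push: tuple            # stack entries the transition pushes
    target: str            # destination state

def matches(edge, symbol, stack):
    """An edge fires only if the input symbol AND the stack prefix match."""
    top = tuple(stack[-len(edge.stack_prefix):]) if edge.stack_prefix else ()
    return symbol in edge.symbols and top == edge.stack_prefix

def step(state, symbol, stack, edges):
    """Deterministic transition: prefix-conditioning guarantees at most one
    edge of `state` matches the (symbol, stack) pair."""
    candidates = [e for e in edges[state] if matches(e, symbol, stack)]
    assert len(candidates) <= 1, "DPDA property: no ambiguous edges"
    if not candidates:
        return None        # token would violate the grammar -> mask it out
    e = candidates[0]
    new_stack = stack[:len(stack) - e.pop] + list(e.push)
    return e.target, new_stack

# Edge aggregation: one edge accepting all of 0-9 instead of ten edges
# with identical stack behaviour.
digit_edge = Edge(symbols=frozenset("0123456789"),
                  stack_prefix=("VALUE",), pop=0, push=("NUM",),
                  target="in_number")
edges = {"start": [digit_edge]}

print(step("start", "7", ["VALUE"], edges))   # ('in_number', ['VALUE', 'NUM'])
print(step("start", "x", ["VALUE"], edges))   # None
```

A `None` result corresponds to masking the token out at decode time; because the lookup is deterministic, it can be precomputed and applied in parallel across a batch.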

Pre$^3$ is implemented in roughly 2000 lines of Python and 1000 lines of C++ and integrated into the LightLLM inference framework. The DPDA construction is a one-time preprocessing step, reported to take only a few seconds even for complex grammars such as JSON, and its results are cacheable.
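Since the construction is a one-time cost with cacheable output, a natural deployment pattern is to key the cache on a hash of the grammar text. The sketch below is a hedged illustration of that pattern, not LightLLM's actual interface; `build_dpda` is a stand-in for the real construction pipeline.

```python
import hashlib
import os
import pickle

def build_dpda(grammar_text):
    # Placeholder for the real construction pipeline (LR(1) graph ->
    # prefix-conditioned DPDA -> edge optimization); returns a picklable object.
    return {"grammar_sha": hashlib.sha256(grammar_text.encode()).hexdigest()}

def get_dpda(grammar_text, cache_dir=".dpda_cache"):
    """Build the automaton once per grammar and reuse it across requests."""
    key = hashlib.sha256(grammar_text.encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)       # cache hit: skip the seconds-long build
    dpda = build_dpda(grammar_text)     # one-time preprocessing cost
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(dpda, f)
    return dpda
```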

The practical benefits of Pre$^3$ are demonstrated through extensive evaluation against state-of-the-art baselines like XGrammar, Outlines, and llama.cpp on various models (Llama-3-8B, Llama-2-70B, DeepSeek-V2-Lite-Chat, Qwen2-14B) and grammars (JSON, Chain-of-Thought). The experiments show:

  • Lower per-step decoding overhead compared to baselines.
  • Significant reductions in time per output token (TPOT), achieving up to a 40% improvement over XGrammar, particularly noticeable at larger batch sizes (e.g., 29-40% reduction for batch sizes 256-512).
  • Increased throughput in real-world serving simulations, showing up to 36% higher throughput compared to XGrammar at higher concurrency levels. The performance gains are more pronounced as batch size and concurrency increase, highlighting Pre$^3$'s superior scalability.

While Pre$^3$ demonstrates significant advancements for LR(1) grammars, the authors note limitations such as potential challenges with more complex LR(k) grammars ($k > 1$) and that the current implementation is a research prototype that could benefit from production-level hardware and system optimizations. However, the efficiency of the preprocessing step makes it practical for real-world deployment.
