AdapTrack: Constrained Decoding without Distorting LLM's Output Intent (2510.17376v1)

Published 20 Oct 2025 in cs.SE

Abstract: LLM-based code generation and completion tools have been widely adopted, but they sometimes produce code that violates necessary constraints, such as syntactic correctness or API existence. Constrained decoding techniques help the model generate constraint-compliant code by greedily eliminating generation options that violate the constraints at each step of the generation process. However, constrained decoding has a severe limitation: it distorts the model's output intent, forcing the model to produce code that may satisfy the constraints but does not match the development intent and is therefore incorrect. In response to this challenge, we propose AdapTrack. By incorporating backtracking into the generation process, AdapTrack avoids distorting the model's output intent, producing results that are not only constraint-compliant but also more semantically aligned with the model's output intent. On our synthetic API completion dataset, AdapTrack achieves up to a 360.87% improvement over constrained decoding; on a real-world API completion dataset we collect that exhibits similar issues, it achieves up to a 38.93% improvement; on general code generation benchmarks, it achieves up to a 7.84% improvement on HumanEval and up to a 6.42% improvement on MBPP over constrained decoding. This indicates that, simply by better adhering to the model's output intent, AdapTrack achieves significant improvements. We provide a theoretical proof that the distribution produced by AdapTrack aligns with the model's distribution conditioned on the generated tokens, ensuring that the model's output intent is not distorted. Experiments on DSL problems show that, compared to existing methods, our approach yields generation results that are more consistent with the LLM's distribution.
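
The contrast the abstract draws is easy to sketch. Below is a minimal, illustrative Python sketch of constrained decoding with backtracking in the spirit described above; it is an assumption-laden sketch, not the paper's method. `model.next_token_probs`, `model.eos`, and `is_viable_prefix` are hypothetical placeholders, and the simple exclude-and-renormalize bookkeeping only approximates AdapTrack's provably distribution-preserving procedure.

```python
# Illustrative sketch only: `model` and `is_viable_prefix` are hypothetical
# placeholders, not the paper's actual interface. A greedy constrained
# decoder would mask every constraint-violating token at every step; this
# variant instead samples from the model's own distribution and backtracks
# only when a prefix is shown to be a dead end, excluding just the
# offending token.
import random

def sample(dist):
    """Draw a token from a {token: probability} dict."""
    tokens = list(dist)
    return random.choices(tokens, weights=[dist[t] for t in tokens])[0]

def backtracking_decode(model, prompt, is_viable_prefix, max_len=64):
    tokens = []            # accepted tokens so far
    excluded = [set()]     # excluded[i]: tokens ruled out at position i
    while len(tokens) < max_len:
        probs = dict(model.next_token_probs(prompt, tokens))
        for t in excluded[len(tokens)]:
            probs[t] = 0.0  # drop only tokens proven to fail here
        total = sum(probs.values())
        if total == 0.0:
            # Every continuation from this prefix fails: backtrack one
            # step and rule out the token that led into the dead end.
            if not tokens:
                raise RuntimeError("no constraint-satisfying output exists")
            bad = tokens.pop()
            excluded.pop()
            excluded[-1].add(bad)
            continue
        # Renormalize over the remaining tokens; the rest of the model's
        # distribution is left untouched, unlike greedy per-step masking.
        token = sample({t: p / total for t, p in probs.items()})
        if not is_viable_prefix(tokens + [token]):
            excluded[len(tokens)].add(token)  # rule it out and resample
            continue
        tokens.append(token)
        excluded.append(set())
        if token == model.eos:  # assume the checker validates completion
            break
    return tokens
```

The sketch captures only the control flow: by deferring intervention until a violation is actually observed, the sampled tokens stay close to the model's own distribution, whereas the paper's theorem additionally guarantees that the final outputs follow the model's distribution conditioned on the generated tokens.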
