A Note on Hybrid Online Reinforcement and Imitation Learning for LLMs: Formulations and Algorithms
Published 28 Dec 2025 in cs.LG, cs.AI, and cs.CL | arXiv:2512.23097v1
Abstract: We present a unified framework for LLM fine-tuning that integrates Imitation Learning and Reinforcement Learning. By analyzing the gradient of a composite objective combining trajectory-level KL divergence with task rewards, we derive a natural decomposition into two components: (1) an analytically computable Dense Gradient for token-level imitation, and (2) a Monte Carlo-estimated Sparse Gradient for long-horizon reward optimization. The Dense Gradient admits a closed-form logit-level formula, enabling efficient GPU implementation.
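The two gradient components can be illustrated at a single token position with a toy numpy sketch. Everything below is an assumption for illustration, not the paper's implementation: it uses the forward KL D(p_ref || p_theta), whose logit gradient has the well-known closed form softmax(z) - p_ref (a "dense" gradient, verified here against finite differences), and a score-function (REINFORCE-style) Monte Carlo estimator for a per-token reward standing in for the "sparse" long-horizon reward term.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the vocabulary dimension.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
V = 8                                  # toy vocabulary size
z = rng.normal(size=V)                 # policy logits at one position
p_theta = softmax(z)                   # policy distribution
p_ref = softmax(rng.normal(size=V))    # reference (teacher) distribution

# --- Dense gradient (analytic) ---
# For forward KL, D(p_ref || p_theta(z)), the gradient w.r.t. logits
# is the closed form: softmax(z) - p_ref.
dense_grad = p_theta - p_ref

# Check the closed form against central finite differences.
def kl_fwd(zz):
    q = softmax(zz)
    return np.sum(p_ref * (np.log(p_ref) - np.log(q)))

eps = 1e-6
num_grad = np.zeros(V)
for k in range(V):
    zp, zm = z.copy(), z.copy()
    zp[k] += eps
    zm[k] -= eps
    num_grad[k] = (kl_fwd(zp) - kl_fwd(zm)) / (2 * eps)
assert np.allclose(dense_grad, num_grad, atol=1e-4)

# --- Sparse gradient (Monte Carlo) ---
# Score-function estimator: grad_z E_a[R_a] ~ mean_a R_a * grad_z log p(a),
# where grad_z log p(a) = onehot(a) - p_theta for softmax policies.
R = rng.normal(size=V)                 # toy per-token reward
exact_grad = p_theta * (R - p_theta @ R)   # exact gradient of E[R]

samples = rng.choice(V, size=200_000, p=p_theta)
onehot = np.eye(V)[samples]
mc_grad = ((onehot - p_theta) * R[samples][:, None]).mean(axis=0)
assert np.allclose(mc_grad, exact_grad, atol=0.02)
```

The dense term is exact and available at every token, while the sparse term is only an unbiased sample estimate, which is the practical motivation the abstract gives for computing the imitation component analytically.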