Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods (2502.01384v2)
Published 3 Feb 2025 in stat.ML, cs.AI, cs.CL, and cs.LG
Abstract: Discrete diffusion models have recently gained significant attention due to their ability to process complex discrete structures for language modeling. However, fine-tuning these models with policy gradient methods, as is commonly done in Reinforcement Learning from Human Feedback (RLHF), remains a challenging task. We propose an efficient, broadly applicable, and theoretically justified policy gradient algorithm, called Score Entropy Policy Optimization (SEPO), for fine-tuning discrete diffusion models over non-differentiable rewards. Our numerical experiments across several discrete generative tasks demonstrate the scalability and efficiency of our method. Our code is available at https://github.com/ozekri/SEPO.
- Oussama Zekri (4 papers)
- Nicolas Boullé (32 papers)
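To illustrate the general idea behind the abstract, the sketch below shows a generic REINFORCE-style policy-gradient update on a toy discrete sampler with a non-differentiable reward; gradients flow through the log-probabilities of sampled outputs rather than through the reward itself. This is not the authors' SEPO algorithm or their codebase: `ToySampler`, `reward_fn`, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed names, not the paper's implementation): a REINFORCE-style
# policy-gradient step for a toy discrete sequence sampler with a non-differentiable reward.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, BATCH = 8, 16, 32

class ToySampler(nn.Module):
    """Stand-in for a discrete generative model: per-position categorical logits."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(SEQ_LEN, VOCAB))

    def sample(self, batch_size):
        dist = torch.distributions.Categorical(logits=self.logits)
        tokens = dist.sample((batch_size,))            # (batch, seq_len)
        log_prob = dist.log_prob(tokens).sum(dim=-1)   # sequence log-probability, (batch,)
        return tokens, log_prob

def reward_fn(tokens):
    # Non-differentiable reward: fraction of tokens equal to an arbitrary target id.
    return (tokens == 3).float().mean(dim=-1)

model = ToySampler()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    tokens, log_prob = model.sample(BATCH)
    with torch.no_grad():
        r = reward_fn(tokens)
        baseline = r.mean()                            # simple variance-reduction baseline
    loss = -((r - baseline) * log_prob).mean()         # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting, the toy per-position sampler would be replaced by the denoising trajectory of a discrete diffusion model, which is the part SEPO addresses; the sketch only conveys how a policy gradient can use a reward that cannot be backpropagated through.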