Aligning CodeLLMs with Direct Preference Optimization (2410.18585v1)

Published 24 Oct 2024 in cs.AI and cs.LG

Abstract: The last year has witnessed rapid progress of LLMs across diverse domains. Among them, CodeLLMs have garnered particular attention because they can not only assist in completing various programming tasks but also reflect the decision-making and logical reasoning capabilities of LLMs. However, current CodeLLMs mainly focus on the pre-training and supervised fine-tuning stages, leaving the alignment stage, which is important for post-training LLMs, under-explored. This work first identifies that the commonly used PPO algorithm may be suboptimal for aligning CodeLLMs because the reward rules involved are typically coarse-grained and potentially flawed. We then advocate addressing this with the DPO algorithm. Using only preference data pairs, DPO lets the model rank data automatically, giving rise to a fine-grained rewarding pattern that is more robust than human-designed reward rules. We also contribute a pipeline for collecting preference pairs for DPO on CodeLLMs. Studies show that our method significantly improves the performance of existing CodeLLMs on benchmarks such as MBPP and HumanEval.
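The abstract does not spell out the training objective, but the DPO loss it builds on is the standard one from Rafailov et al. (2023). The sketch below is a minimal, generic PyTorch implementation of that objective over (prompt, chosen, rejected) preference pairs, assuming summed sequence log-probabilities as inputs; the function name, the beta value, and the toy numbers are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of the standard DPO objective, not the paper's implementation.
# Inputs are per-example summed log-probabilities of the chosen and rejected
# completions under the trainable policy and a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss for a batch of preference pairs (all inputs are 1-D tensors)."""
    # Implicit rewards: how much more the policy favors each completion
    # than the reference model does, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference likelihood; minimize its negative log.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with made-up log-probabilities.
if __name__ == "__main__":
    pc = torch.tensor([-12.0, -8.5])   # policy, chosen
    pr = torch.tensor([-15.0, -9.0])   # policy, rejected
    rc = torch.tensor([-13.0, -8.7])   # reference, chosen
    rr = torch.tensor([-14.0, -8.8])   # reference, rejected
    print(dpo_loss(pc, pr, rc, rr))
```

For code models, one natural (here purely hypothetical) way to obtain such pairs is to sample several completions per programming problem and label those that pass the accompanying unit tests as chosen and those that fail as rejected; the paper's actual collection pipeline is described in the full text.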
