
Stable Code Technical Report (2404.01226v1)

Published 1 Apr 2024 in cs.CL

Abstract: We introduce Stable Code, the first in our new generation of code language models, which serves as a general-purpose base code LLM targeting code completion, reasoning, math, and other software engineering-based tasks. Additionally, we introduce an instruction variant named Stable Code Instruct that allows conversing with the model in a natural chat interface for performing question-answering and instruction-based tasks. In this technical report, we detail the data and training procedure leading to both models. Their weights are available via Hugging Face for anyone to download and use at https://huggingface.co/stabilityai/stable-code-3b and https://huggingface.co/stabilityai/stable-code-instruct-3b. This report contains thorough evaluations of the models, including multilingual programming benchmarks and the MT benchmark focusing on multi-turn dialogues. At the time of its release, Stable Code is the state-of-the-art open model under 3B parameters, and it even performs comparably to larger models of 7 billion and 15 billion parameters on the popular Multi-PL benchmark. Stable Code Instruct also exhibits state-of-the-art performance on the MT-Bench coding tasks and on Multi-PL completion compared to other instruction-tuned models. Given its appealingly small size, we also provide throughput measurements on a number of edge devices. In addition, we open-source several quantized checkpoints and provide their performance metrics compared to the original model.

Exploring Stable Code: A New Benchmark in Code Language Modeling

Introduction to Stable Code

Stable Code is a notable advancement in the domain of code LLMs, aimed at code completion, reasoning, mathematical problem-solving, and broad software engineering tasks. Alongside Stable Code, the report introduces Stable Code Instruct, designed for natural-language interfacing, enabling question-answering and instruction-following tasks. The report details the models' training regime, datasets, and evaluations, and releases both models to the research community through Hugging Face. Stable Code sets new benchmarks for open models on multilingual programming tasks and even matches the performance of much larger models in the field.
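
For readers who want to try the released checkpoints, here is a minimal usage sketch with the Hugging Face `transformers` library; the model id comes from the links above, while the prompt and generation settings are illustrative:

```python
# Minimal sketch: load the released 3B base model and complete a prompt.
# Requires `transformers` and `torch`; older transformers versions may need
# trust_remote_code=True for the StableLM architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```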

Training Data and Architecture

The report details a comprehensive data sourcing and preparation strategy, blending code repositories, technical documents, mathematical texts, and web data to build an understanding relevant to software development. This blend not only broadens the model's comprehension of code but also gives it a versatile conversational ability, enhancing its applicability across a wide range of software engineering queries. A rough sketch of such a weighted blend appears below.
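
As an illustration only, the following sketch samples training documents from weighted sources; the category names and weights are hypothetical assumptions, not the report's actual proportions:

```python
import random

# Hypothetical mixture weights: the report blends code, technical documents,
# math, and web data, but these exact proportions are illustrative assumptions.
DATA_MIXTURE = {
    "code_repositories": 0.80,
    "technical_documents": 0.10,
    "math_texts": 0.05,
    "web_text": 0.05,
}

def sample_source(rng: random.Random) -> str:
    """Pick the source of the next training document by mixture weight."""
    sources, weights = zip(*DATA_MIXTURE.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
print([sample_source(rng) for _ in range(5)])
```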

The model's architecture builds on the Stable LM 3B framework, incorporating adjustments such as rotary position embeddings, LayerNorm modifications, and refined bias configurations. These choices emphasize efficiency and performance, leveraging backend optimizations and reflecting current advances in the LLM landscape.
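
Rotary position embeddings encode position by rotating pairs of query/key dimensions through position-dependent angles, so relative offsets become phase differences in attention dot products. Below is a minimal sketch of the idea, not the model's exact implementation (Stable LM-style models apply the rotation to only a fraction of the head dimensions):

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate a (seq_len, head_dim) tensor by position-dependent angles.

    head_dim must be even; dimension pairs (i, i + head_dim/2) are rotated
    together by an angle that grows with position and shrinks with frequency.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(16, 64)        # 16 positions, head dimension 64
print(apply_rope(q).shape)     # torch.Size([16, 64])
```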

Training Methodology and Model Initialization

The report outlines a multi-stage training approach enriched with Fill in the Middle (FIM) objectives. This methodology addresses a limitation of traditional left-to-right causal language modeling by exposing the model to diverse structural patterns, boosting its ability to comprehend and predict code from both preceding and following context.
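
Concretely, FIM training (Bavarian et al., 2022) splits a document into a prefix, middle, and suffix, then reorders the pieces so the model learns to generate the middle conditioned on both sides. The sketch below uses the common `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinel convention; the exact tokens Stable Code uses may differ:

```python
import random

def to_fim_example(document: str, rng: random.Random) -> str:
    """Split a document into (prefix, middle, suffix) and reorder it so the
    model learns to fill in the middle from its surrounding context."""
    a, b = sorted(rng.sample(range(len(document)), 2))
    prefix, middle, suffix = document[:a], document[a:b], document[b:]
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

rng = random.Random(0)
print(to_fim_example("def add(x, y):\n    return x + y\n", rng))
```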

Moreover, the training section presents an insightful comparison between models trained from scratch versus those initialized from pre-trained LMs. The findings compellingly advocate for pre-trained initialization, spotlighting the beneficial crossover between natural language processing and code comprehension abilities.
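
In `transformers` terms, the two regimes being compared look roughly like the sketch below; the base-model id is taken from the Stable LM 3B lineage the report builds on, and the rest is illustrative:

```python
from transformers import AutoConfig, AutoModelForCausalLM

base_id = "stabilityai/stablelm-3b-4e1t"  # pre-trained natural-language LM

# From scratch: same architecture, randomly initialized weights.
config = AutoConfig.from_pretrained(base_id)
scratch_model = AutoModelForCausalLM.from_config(config)

# From a pre-trained checkpoint: code training continues from weights that
# already encode natural-language knowledge, which the report finds beneficial.
warm_model = AutoModelForCausalLM.from_pretrained(base_id)
```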

Fine-Tuning and Alignment

After base-model training, Stable Code Instruct undergoes a fine-tuning regimen that leverages a curated blend of datasets tailored to enhance conversational interactivity and response quality. The fine-tuning phase follows established practice: supervised fine-tuning followed by Direct Preference Optimization (DPO), a deliberate effort to refine the model's conversational capabilities.
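
For reference, here is a minimal sketch of the DPO objective (Rafailov et al., 2023) used in this alignment stage. The inputs are summed log-probabilities of the preferred ("chosen") and dispreferred ("rejected") responses under the trained policy and a frozen reference model; `beta` is the usual DPO temperature:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen: torch.Tensor, policy_rejected: torch.Tensor,
             ref_chosen: torch.Tensor, ref_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss: push the policy's preference margin above the reference's."""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()
```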

Performance Evaluations

The evaluation benchmarks attest to the models' capabilities. In code completion tasks, Stable Code achieves impressive parity with much larger models across various programming languages. On specialized tasks such as Fill in the Middle (FIM) and SQL queries, the models likewise perform strongly, reflecting a nuanced understanding of code context and database queries.
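
A sketch of querying the base model in FIM mode follows; the sentinel tokens again follow the common convention, so check the model card for the exact tokens Stable Code expects:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Ask the model to fill in the body between a function header (prefix)
# and its final return statement (suffix).
prompt = (
    "<fim_prefix>def fib(n):\n"
    "<fim_suffix>\n    return fib(n - 1) + fib(n - 2)\n"
    "<fim_middle>"
)
out = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=48)
print(tokenizer.decode(out[0]))
```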

On instruction-based tasks, Stable Code Instruct likewise performs strongly, underscoring the successful integration of conversational ability through fine-tuning. Collectively, these evaluations position the models as competitive, if not superior, alternatives in the landscape of code LMs.
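
Conversational use of the instruct variant goes through the tokenizer's chat template, as is standard for instruct-tuned models on Hugging Face; a minimal sketch, assuming the released tokenizer ships such a template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-instruct-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user",
             "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```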

Throughput and Quantization Considerations

The report also gives notable attention to throughput measurements and quantization strategies, showing the models' practicality in real-world scenarios, especially on edge devices. It details the substantial throughput gains achievable through reduced-precision inference, an important consideration for developers deploying these models in varied computing environments.
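
Below is a rough sketch of a decode-throughput measurement (tokens per second) with a 4-bit quantized load. The `bitsandbytes` path shown here is one common option and an assumption of this sketch; the report itself releases its own quantized checkpoints:

```python
import time
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "stabilityai/stable-code-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # needs bitsandbytes
    device_map="auto",
)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
start = time.time()
out = model.generate(**inputs, max_new_tokens=128)
elapsed = time.time() - start
new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/sec")
```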

Conclusions and Implications

The Stable Code series marks a pivotal advancement in the code LM domain, primarily by marrying the robustness of LLMs with the specificity of software engineering tasks. The detailed account of data sourcing, training methodologies, and fine-tuning strategies underlines a comprehensive effort to develop models that are not just cutting-edge in technology but also versatile in application. The performance metrics reinforce the models' competitiveness, making them valuable assets for researchers and practitioners alike.

Looking forward, the implications of Stable Code and Stable Code Instruct extend beyond mere code completion. They promise advancements in the way we interact with and conceptualize the development of software, paving the way for models that are increasingly in tune with the multifaceted needs of developers. As the field progresses, one can anticipate further refinements and applications stemming from this groundbreaking work.

Authors (11)
  1. Nikhil Pinnaparaju
  2. Reshinth Adithyan
  3. Duy Phung
  4. Jonathan Tow
  5. James Baicoianu
  6. Ashish Datta
  7. Maksym Zhuravinskyi
  8. Dakota Mahan
  9. Marco Bellagente
  10. Carlos Riquelme
  11. Nathan Cooper