Model Cascading for Code: Reducing Inference Costs with Model Cascading for LLM Based Code Generation (2405.15842v1)

Published 24 May 2024 in cs.SE and cs.LG

Abstract: The rapid development of LLMs has led to significant advances in code completion tasks. While larger models achieve higher accuracy, they also cost much more to run. Meanwhile, model cascading has proven effective at conserving computational resources while improving accuracy for LLMs on natural language generation tasks: output is generated with the smallest model in a set, and larger models are queried only when that output fails to meet predefined quality criteria. However, this strategy has not been applied to code completion, primarily because assessing the quality of code completions differs substantially from assessing natural language, relying heavily on functional correctness. To address this, we propose letting each model generate and execute a set of test cases for its solutions, and using the test results as the cascading criterion. We show that our model cascading strategy reduces computational costs while increasing accuracy compared to generating the output with a single model. We also introduce a heuristic for determining the optimal combination of the number of solutions, test cases, and test lines each model should generate, given a budget. Compared to speculative decoding, our method works on black-box models, achieves the same level of cost-accuracy trade-off, and provides many more choices based on the server's budget. Ours is the first work to optimize the cost-accuracy trade-off for LLM code generation with model cascading.
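
To make the cascading loop concrete, below is a minimal sketch of the test-driven escalation described in the abstract. The helpers generate_solutions, generate_tests, and run_tests, as well as the pass_threshold and per-model counts, are hypothetical placeholders for black-box LLM calls and a sandboxed test executor; the paper's budget heuristic for choosing how many solutions, tests, and test lines each model produces is not reproduced here.

# Minimal sketch of test-driven model cascading for code generation.
# generate_solutions, generate_tests, and run_tests are hypothetical helpers
# standing in for black-box code-LLM calls and a sandboxed test executor.
from typing import Callable, List, Optional

def cascade_generate(
    prompt: str,
    models: List[str],                                   # ordered smallest -> largest
    generate_solutions: Callable[[str, str, int], List[str]],
    generate_tests: Callable[[str, str, int], List[str]],
    run_tests: Callable[[str, List[str]], float],        # returns pass rate in [0, 1]
    num_solutions: int = 5,
    num_tests: int = 5,
    pass_threshold: float = 0.8,
) -> Optional[str]:
    """Query models from cheapest to most expensive; accept the first
    candidate whose self-generated tests pass often enough."""
    best_solution, best_rate = None, -1.0
    for model in models:
        solutions = generate_solutions(model, prompt, num_solutions)
        tests = generate_tests(model, prompt, num_tests)
        for solution in solutions:
            rate = run_tests(solution, tests)
            if rate > best_rate:
                best_solution, best_rate = solution, rate
        # Stop cascading as soon as a cheaper model clears the threshold.
        if best_rate >= pass_threshold:
            return best_solution
    # No model met the threshold; fall back to the best candidate seen.
    return best_solution

In practice the per-model numbers of solutions and tests would be chosen by the paper's budget heuristic rather than fixed defaults, and a larger model is only ever invoked when every cheaper model's candidates fail their own tests.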

Authors (5)
  1. Boyuan Chen (75 papers)
  2. Mingzhi Zhu (6 papers)
  3. Brendan Dolan-Gavitt (24 papers)
  4. Muhammad Shafique (204 papers)
  5. Siddharth Garg (99 papers)
Citations (1)