
CoCoST: Automatic Complex Code Generation with Online Searching and Correctness Testing (2403.13583v3)

Published 20 Mar 2024 in cs.SE, cs.CL, and cs.LG

Abstract: LLMs have revolutionized code generation by converting natural language descriptions into executable code. However, generating complex code in real-world scenarios remains challenging due to intricate structures, subtle bugs, the need to understand advanced data types, and a lack of supplementary content. To address these challenges, we introduce the CoCoST framework, which enhances complex code generation through online searching for additional information with planned queries and correctness testing for code refinement. Moreover, CoCoST serializes complex inputs and outputs to improve comprehension and generates test cases to ensure adaptability to real-world applications. CoCoST is validated through rigorous experiments on the DS-1000 and ClassEval datasets. Experimental results show that CoCoST substantially improves the quality of complex code generation, highlighting its potential to enhance the practicality of LLMs in generating complex code.
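The abstract describes a pipeline of planned search queries, code generation, test-case generation, and correctness-testing-driven refinement. The following is a minimal sketch of that generate-test-refine loop, not the authors' implementation: every function name is illustrative, and the LLM calls (query planning, retrieval, generation, refinement) are replaced with fixed stubs so the control flow can run end to end.

```python
# Hedged sketch of a CoCoST-style loop: plan queries -> search -> generate
# code -> generate tests -> execute -> refine on failures. All names and the
# stubbed "LLM" behavior are assumptions for illustration only.

def plan_queries(task: str) -> list[str]:
    # CoCoST plans search queries with an LLM; stubbed to one query here.
    return [f"how to implement: {task}"]

def online_search(query: str) -> str:
    # Stand-in for retrieving supplementary content from the web.
    return f"(search results for '{query}')"

def generate_code(task: str, context: str) -> str:
    # Stand-in for LLM generation conditioned on retrieved context.
    return "def add(a, b):\n    return a - b\n"  # deliberately buggy draft

def generate_tests(task: str) -> list[tuple[tuple, object]]:
    # CoCoST also generates test cases; fixed (args, expected) pairs here.
    return [((1, 2), 3), ((0, 0), 0)]

def run_tests(code: str, tests) -> list[str]:
    ns: dict = {}
    exec(code, ns)  # execute the candidate code in a fresh namespace
    failures = []
    for args, expected in tests:
        got = ns["add"](*args)
        if got != expected:
            failures.append(f"add{args} == {got}, expected {expected}")
    return failures

def refine(code: str, failures: list[str]) -> str:
    # Stand-in for LLM refinement given execution-failure feedback.
    return code.replace("a - b", "a + b")

def cocost_loop(task: str, max_rounds: int = 3) -> str:
    context = "\n".join(online_search(q) for q in plan_queries(task))
    code = generate_code(task, context)
    tests = generate_tests(task)
    for _ in range(max_rounds):
        failures = run_tests(code, tests)
        if not failures:
            return code  # all generated tests pass
        code = refine(code, failures)
    return code
```

In the actual framework the refinement step would feed the failure messages back to the model; the loop structure above just illustrates why executing generated test cases gives the model a concrete correctness signal to iterate on.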

