Eliciting Instruction-tuned Code Language Models' Capabilities to Utilize Auxiliary Function for Code Generation (2409.13928v1)
Abstract: We study the code generation behavior of instruction-tuned models built on top of code pre-trained LLMs when they can access an auxiliary function while implementing a target function. We design several ways to provide auxiliary functions to the models, either by adding them to the query or by supplying a response prefix, so as to combine the ability to utilize auxiliary functions with instruction-following capability. Our experimental results show the effectiveness of combining the base models' auxiliary function utilization ability with their instruction-following ability. In particular, open-source LLMs adopting our approaches surpass recent powerful proprietary LLMs, i.e., gpt-4o.
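As a rough illustration of the two prompt designs the abstract mentions (auxiliary function appended to the query vs. seeded as a response prefix), here is a minimal sketch. The helper function, prompt wording, and chat-message format are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of two ways to expose an auxiliary function to an instruction-tuned
# code LLM. The function names, prompt wording, and chat format below are
# assumptions for illustration, not the paper's exact setup.

AUXILIARY_FUNCTION = '''\
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))
'''

TARGET_SPEC = (
    "Write a function `count_primes(nums)` that returns how many "
    "numbers in `nums` are prime."
)


def prompt_with_auxiliary_in_query() -> list[dict]:
    """Variant 1: append the auxiliary function to the user query."""
    user = (
        f"{TARGET_SPEC}\n\n"
        f"You may use the following helper function:\n"
        f"```python\n{AUXILIARY_FUNCTION}```"
    )
    return [{"role": "user", "content": user}]


def prompt_with_response_prefix() -> list[dict]:
    """Variant 2: seed the assistant's turn with the auxiliary function,
    so the model continues from code that already contains the helper."""
    prefix = f"```python\n{AUXILIARY_FUNCTION}\n"
    return [
        {"role": "user", "content": TARGET_SPEC},
        {"role": "assistant", "content": prefix},  # model continues from here
    ]
```

Either message list would then be passed to a chat-completion API or a local chat template; the response-prefix variant relies on the serving stack supporting continuation from a partial assistant message.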