The Heap: A Contamination-Free Multilingual Code Dataset for Evaluating Large Language Models (2501.09653v1)
Published 16 Jan 2025 in cs.CL and cs.AI
Abstract: The recent rise in the popularity of LLMs has spurred the development of the extensive code datasets needed to train them. This leaves limited code available for collection and use in downstream investigations of specific behaviors, or for evaluating LLMs without suffering from data contamination. To address this problem, we release The Heap, a large multilingual dataset covering 57 programming languages that has been deduplicated with respect to other open datasets of code, enabling researchers to conduct fair evaluations of LLMs without significant data cleaning overhead.
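The deduplication against other open code datasets can be illustrated with a minimal sketch. The snippet below is not the paper's pipeline (the abstract does not detail it); it assumes exact deduplication by normalized content hash against a reference corpus, and the names `content_fingerprint` and `deduplicate_against` are hypothetical helpers introduced for illustration.

```python
import hashlib


def content_fingerprint(source: str) -> str:
    """Hash of a code file after light normalization (per-line whitespace stripped)."""
    normalized = "\n".join(line.strip() for line in source.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def deduplicate_against(candidates: dict[str, str], reference_corpus: list[str]) -> dict[str, str]:
    """Keep only candidate files whose fingerprint is absent from the reference corpus."""
    seen = {content_fingerprint(src) for src in reference_corpus}
    return {path: src for path, src in candidates.items()
            if content_fingerprint(src) not in seen}


if __name__ == "__main__":
    # Hypothetical reference corpus standing in for an existing open code dataset.
    reference = ["def add(a, b):\n    return a + b\n"]
    new_files = {
        "dup.py": "def add(a, b):\n  return a + b\n",      # same code, different indentation
        "fresh.py": "def mul(a, b):\n    return a * b\n",   # genuinely new file
    }
    # Prints ['fresh.py']: the normalized hash of dup.py matches the reference entry.
    print(sorted(deduplicate_against(new_files, reference)))
```

In practice, exact hashing is usually complemented by near-duplicate detection (e.g., MinHash over token shingles) in code-dataset pipelines, since trivially edited copies evade exact matching; whether and how The Heap applies such techniques is described in the paper itself, not in this abstract.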