The PLLuM Instruction Corpus (2511.17161v1)
Published 21 Nov 2025 in cs.CL and cs.AI
Abstract: This paper describes the instruction dataset used to fine-tune a set of transformer-based LLMs developed in the PLLuM (Polish Large Language Model) project. We present a functional typology of the organic, converted, and synthetic instructions used in PLLuM and share observations on the implications of using human-authored versus synthetic instruction datasets in the linguistic adaptation of base LLMs. Additionally, we release the first representative subset of the PLLuM instruction corpus (PLLuMIC), which we believe will be useful in guiding and planning the development of similar datasets for other LLMs.