Prompting Encoder Models for Zero-Shot Classification: A Cross-Domain Study in Italian (2407.20654v1)
Abstract: Addressing the challenge of limited annotated data in specialized fields and low-resource languages is crucial for the effective use of Language Models (LMs). While most LMs are trained on general-purpose English corpora, there is a notable gap in models specifically tailored for Italian, particularly for its technical and bureaucratic jargon. This paper explores the feasibility of employing smaller, domain-specific encoder LMs together with prompting techniques to enhance performance in these specialized contexts. Our study concentrates on Italian bureaucratic and legal language, experimenting with both general-purpose and further pre-trained encoder-only models. We evaluated the models on downstream tasks such as document classification and entity typing, and conducted intrinsic evaluations using Pseudo-Log-Likelihood. The results indicate that, while further pre-trained models may show diminished robustness on general knowledge, they exhibit superior adaptability to domain-specific tasks, even in a zero-shot setting. Furthermore, the application of calibration techniques and in-domain verbalizers significantly enhances the efficacy of encoder models. These domain-specialized models prove particularly advantageous in scenarios where in-domain resources or expertise are scarce. In conclusion, our findings offer new insights into the use of Italian models in specialized contexts, which may have a significant impact on both research and industrial applications in the digital transformation era.
- Serena Auriemma
- Martina Miliani
- Mauro Madeddu
- Alessandro Bondielli
- Lucia Passaro
- Alessandro Lenci
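To make the techniques named in the abstract concrete, here is a minimal sketch of zero-shot classification with an encoder-only model via masked-language-model prompting, combined with a verbalizer, a common calibration recipe (contextual calibration in the style of Zhao et al., 2021, using a content-free input), and Pseudo-Log-Likelihood scoring (Salazar et al., 2020). This is not the authors' code: the checkpoint, prompt template, class labels, and verbalizer words are illustrative assumptions, and the verbalizer words are assumed to map to single tokens in the model's vocabulary.

```python
# Sketch of zero-shot classification with an encoder LM via MLM prompting.
# Checkpoint, template, labels, and verbalizer are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "dbmdz/bert-base-italian-xxl-cased"  # assumption: any Italian BERT-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Verbalizer: one Italian word per (hypothetical) class, assumed single-token.
VERBALIZER = {"legale": "LEGAL", "amministrativo": "ADMIN"}
label_token_ids = [tokenizer.convert_tokens_to_ids(w) for w in VERBALIZER]

def label_logits(text: str) -> torch.Tensor:
    """Fill a cloze prompt and return the logits of the verbalizer words
    at the [MASK] position."""
    prompt = f"{text} Questo documento parla di {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Position of the first [MASK] token in the (single) sequence.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return logits[0, mask_pos, label_token_ids]

# Contextual calibration: estimate the model's label bias from a content-free
# input (an empty string here, a simplification) and divide it out.
calib = torch.softmax(label_logits(""), dim=-1)

def classify(text: str) -> str:
    # Dividing by the calibration prior rescales class probabilities;
    # the argmax of the ratio gives the calibrated prediction.
    probs = torch.softmax(label_logits(text), dim=-1) / calib
    return list(VERBALIZER.values())[int(probs.argmax())]

def pseudo_log_likelihood(text: str) -> float:
    """Intrinsic score: mask each token in turn and sum the log-probability
    the model assigns to the original token at that position."""
    ids = tokenizer(text, return_tensors="pt", truncation=True)["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        total += torch.log_softmax(logits[0, i], dim=-1)[ids[i]].item()
    return total

print(classify("Il ricorso è stato depositato presso il tribunale."))
print(pseudo_log_likelihood("Il ricorso è stato depositato presso il tribunale."))
```

The same `label_logits` helper serves both the uncalibrated and calibrated paths, so swapping in an in-domain verbalizer or a different prompt template only requires changing `VERBALIZER` and the cloze sentence.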