LTG at SemEval-2025 Task 10: Optimizing Context for Classification of Narrative Roles (2506.05976v1)
Published 6 Jun 2025 in cs.CL
Abstract: Our contribution to the SemEval 2025 shared task 10, subtask 1 on entity framing, tackles the challenge of providing the necessary segments from longer documents as context for classification with a masked language model. We show that a simple entity-oriented heuristic for context selection can enable text classification using models with a limited context window. Our context selection approach combined with the XLM-RoBERTa language model is on par with, or outperforms, supervised fine-tuning with larger generative LLMs.
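The abstract describes selecting entity-relevant segments from a long document so they fit a limited context window. The sketch below is a minimal illustration of such an entity-oriented selection heuristic, not the authors' released code: it keeps sentences mentioning the target entity (plus neighbouring sentences) up to a token budget. The function name, parameters, and budget are assumptions for illustration only.

```python
# Minimal sketch (not the authors' implementation) of an entity-oriented
# context-selection heuristic: keep sentences that mention the target entity,
# padded with neighbouring sentences, until a rough token budget is reached.

def select_entity_context(sentences, entity, max_tokens=512, window=1):
    """Return a shortened document built around mentions of `entity`.

    sentences  : list of sentence strings (pre-split document)
    entity     : surface form of the target entity
    max_tokens : approximate whitespace-token budget for the selected context
    window     : number of neighbouring sentences kept on each side of a hit
    """
    # Indices of sentences that mention the entity (case-insensitive match).
    hits = [i for i, s in enumerate(sentences) if entity.lower() in s.lower()]

    # Expand each hit by `window` sentences on either side, preserving order.
    keep = sorted({
        j
        for i in hits
        for j in range(max(0, i - window), min(len(sentences), i + window + 1))
    })

    # Greedily add selected sentences until the token budget is exhausted.
    context, used = [], 0
    for j in keep:
        n = len(sentences[j].split())
        if used + n > max_tokens:
            break
        context.append(sentences[j])
        used += n
    return " ".join(context)


if __name__ == "__main__":
    doc = [
        "The report covers several regional actors.",
        "Acme Corp was accused of blocking the aid convoy.",
        "Observers criticised the decision.",
        "Unrelated background on the region follows.",
    ]
    print(select_entity_context(doc, "Acme Corp", max_tokens=64))
```

The selected context could then be paired with the entity mention and passed to a classifier such as XLM-RoBERTa; the exact segment granularity and budget used in the paper may differ.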