CoreLM: Coreference-aware Language Model Fine-Tuning (2111.02687v1)

Published 4 Nov 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Language models underpin all modern NLP tasks. The introduction of the Transformer architecture has contributed significantly to making language modeling effective across many NLP tasks, leading to major advancements in the field. However, Transformers come with a large computational cost, which grows quadratically with respect to the input length. This presents a challenge, since understanding long texts requires a lot of context. In this paper, we propose a fine-tuning framework, named CoreLM, that extends the architecture of current pretrained language models so that they incorporate explicit entity information. By introducing entity representations, we make information from outside the model's contextual space available, which results in a better language model for a fraction of the computational cost. We implement our approach using GPT2 and compare the fine-tuned model to the original. Our proposed model achieves lower perplexity on the GUMBY and LAMBADA datasets than both GPT2 and a fine-tuned version of GPT2 without any architectural changes. We also compare the models' performance in terms of accuracy on LAMBADA and the Children's Book Test, with and without the use of model-created coreference annotations.
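
To make the abstract's central idea concrete, here is a minimal PyTorch sketch of how explicit entity (coreference-cluster) representations could be fused with a pretrained decoder's hidden states before the language-modeling head. This is an illustrative assumption of one possible design, not the paper's implementation: the module name EntityAugmentedHead, the gating formula, and all dimensions are hypothetical.

```python
# Minimal sketch (not the authors' code): mixing per-token entity representations
# into decoder hidden states before predicting the next token.
import torch
import torch.nn as nn

class EntityAugmentedHead(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int, num_entities: int):
        super().__init__()
        # One learned vector per coreference cluster ("entity"); index 0 = no entity.
        self.entity_embeddings = nn.Embedding(num_entities + 1, hidden_size, padding_idx=0)
        # Gate decides how much entity information to mix into each token state.
        self.gate = nn.Linear(2 * hidden_size, hidden_size)
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, hidden_states, entity_ids):
        # hidden_states: (batch, seq_len, hidden) from a pretrained decoder such as GPT2
        # entity_ids:    (batch, seq_len) coreference-cluster id per token, 0 if none
        ent = self.entity_embeddings(entity_ids)
        g = torch.sigmoid(self.gate(torch.cat([hidden_states, ent], dim=-1)))
        fused = hidden_states + g * ent     # entity info from outside the local context window
        return self.lm_head(fused)          # next-token logits

if __name__ == "__main__":
    head = EntityAugmentedHead(hidden_size=768, vocab_size=50257, num_entities=128)
    h = torch.randn(2, 16, 768)             # stand-in for decoder hidden states
    e = torch.randint(0, 129, (2, 16))      # stand-in coreference annotations
    print(head(h, e).shape)                 # torch.Size([2, 16, 50257])
```

The gating design here is only one way to inject entity information; the point it illustrates is that coreference annotations let the model condition on entities mentioned beyond its quadratic-cost attention window during fine-tuning.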

Authors (2)
  1. Nikolaos Stylianou (4 papers)
  2. Ioannis Vlahavas (12 papers)
Citations (2)