Efficient and Interpretable Neural Models for Entity Tracking (2208.14252v1)

Published 30 Aug 2022 in cs.CL

Abstract: What would it take for a natural language model to understand a novel, such as The Lord of the Rings? Among other things, such a model must be able to: (a) identify and record new characters (entities) and their attributes as they are introduced in the text, and (b) identify subsequent references to the characters previously introduced and update their attributes. This problem of entity tracking is essential for language understanding, and thus useful for a wide array of downstream NLP applications, such as question answering and summarization. In this thesis, we focus on two key problems in facilitating the use of entity tracking models: (i) scaling entity tracking models to long documents, such as a novel, and (ii) integrating entity tracking into language models. Applying language technologies to long documents has garnered interest recently, but computational constraints are a significant bottleneck in scaling up current methods. In this thesis, we argue that computationally efficient entity tracking models can be developed by representing entities with rich, fixed-dimensional vector representations derived from pretrained language models, and by exploiting the ephemeral nature of entities. We also argue for the integration of entity tracking into language models, as this allows for: (i) wider application, given the current ubiquitous use of pretrained language models in NLP, and (ii) easier adoption, since it is much easier to swap in a new pretrained language model than to integrate a separate standalone entity tracking model.
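The abstract's core efficiency argument, that each entity can live in a single fixed-dimensional vector slot and that stale ("ephemeral") entities can be evicted to keep memory bounded, can be illustrated with a short sketch. The PyTorch snippet below is a minimal sketch under stated assumptions, not the thesis's actual architecture: the `EntityMemory` class, the cosine-similarity routing threshold, the gated update, and the age-based eviction rule are all hypothetical choices, included only to show how constant per-entity memory plus eviction keeps the footprint bounded as the document grows.

```python
import torch

class EntityMemory:
    """Hypothetical sketch of a fixed-size entity memory.

    Each tracked entity is one d-dimensional vector. A mention embedding
    (e.g. from a pretrained encoder) either updates an existing slot via a
    gated interpolation or claims a new slot; slots untouched for `max_age`
    steps are evicted, mirroring the "ephemeral" entities the abstract
    alludes to. All design choices here are illustrative assumptions.
    """

    def __init__(self, dim: int = 768, max_age: int = 50):
        self.dim = dim
        self.max_age = max_age
        self.slots: dict[int, torch.Tensor] = {}   # entity id -> vector
        self.last_seen: dict[int, int] = {}        # entity id -> last step
        self.next_id = 0
        self.step = 0

    def observe(self, mention_vec: torch.Tensor, threshold: float = 0.5) -> int:
        """Route a mention embedding to an entity slot; create one if none match."""
        self.step += 1
        best_id, best_sim = None, threshold
        for eid, vec in self.slots.items():
            sim = torch.cosine_similarity(mention_vec, vec, dim=0).item()
            if sim > best_sim:
                best_id, best_sim = eid, sim
        if best_id is None:
            # No slot is similar enough: treat this as a new entity.
            best_id = self.next_id
            self.next_id += 1
            self.slots[best_id] = mention_vec.clone()
        else:
            # Coreferent mention: gated update of the existing entity vector.
            g = torch.sigmoid(torch.dot(mention_vec, self.slots[best_id]) / self.dim ** 0.5)
            self.slots[best_id] = g * mention_vec + (1 - g) * self.slots[best_id]
        self.last_seen[best_id] = self.step
        self._evict()
        return best_id

    def _evict(self) -> None:
        """Drop entities not mentioned recently, keeping memory bounded."""
        stale = [eid for eid, t in self.last_seen.items() if self.step - t > self.max_age]
        for eid in stale:
            del self.slots[eid], self.last_seen[eid]

# Toy usage: in practice mention vectors would come from a pretrained
# encoder's span representations; random vectors here are placeholders.
memory = EntityMemory(dim=768)
entity_id = memory.observe(torch.randn(768))
```

Because every entity occupies a constant-size slot and inactive slots are reclaimed, memory use stays bounded regardless of document length, which is what makes scaling to novel-length inputs plausible in the first place.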
