
Multi-Sense Language Modelling (2012.05776v3)

Published 10 Dec 2020 in cs.CL

Abstract: The effectiveness of a language model is influenced by its token representations, which must encode contextual information and handle the same word form having a plurality of meanings (polysemy). Currently, none of the common language modelling architectures explicitly model polysemy. We propose a language model which not only predicts the next word, but also its sense in context. We argue that this higher prediction granularity may be useful for end tasks such as assistive writing, and allow for a more precise linking of language models with knowledge bases. We find that multi-sense language modelling requires architectures that go beyond standard language models, and here propose a structured prediction framework that decomposes the task into a word prediction followed by a sense prediction task. To aid sense prediction, we utilise a Graph Attention Network, which encodes definitions and example uses of word senses. Overall, we find that multi-sense language modelling is a highly challenging task, and suggest that future work focus on the creation of more annotated training datasets.
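To make the structured prediction idea concrete, below is a minimal, hypothetical sketch of a "word then sense" prediction head: stage one predicts the next word from a contextual state, stage two predicts that word's sense conditioned on the context and the chosen word. All module names, dimensions, and the flat sense inventory are illustrative assumptions; the paper's actual architecture, including its Graph Attention Network encoder over sense definitions and example uses, is not reproduced here.

```python
# Hypothetical sketch only: not the authors' implementation.
import torch
import torch.nn as nn

class WordThenSenseHead(nn.Module):
    """Two-stage head: predict the next word, then its sense in context."""

    def __init__(self, hidden_dim: int, vocab_size: int, num_senses: int):
        super().__init__()
        # Stage 1: standard next-word prediction over the vocabulary.
        self.word_head = nn.Linear(hidden_dim, vocab_size)
        # Stage 2: sense prediction conditioned on context + predicted word.
        self.word_emb = nn.Embedding(vocab_size, hidden_dim)
        self.sense_head = nn.Linear(2 * hidden_dim, num_senses)

    def forward(self, context_state: torch.Tensor):
        # context_state: (batch, hidden_dim) from some base language model.
        word_logits = self.word_head(context_state)           # (B, V)
        word_ids = word_logits.argmax(dim=-1)                 # (B,)
        # Condition the sense predictor on the chosen word's embedding.
        conditioned = torch.cat(
            [context_state, self.word_emb(word_ids)], dim=-1  # (B, 2H)
        )
        sense_logits = self.sense_head(conditioned)           # (B, S)
        return word_logits, sense_logits

# Usage example with dummy contextual states.
head = WordThenSenseHead(hidden_dim=256, vocab_size=10_000, num_senses=500)
h = torch.randn(4, 256)
word_logits, sense_logits = head(h)
print(word_logits.shape, sense_logits.shape)
# torch.Size([4, 10000]) torch.Size([4, 500])
```

Decomposing the task this way keeps the word-level softmax unchanged while letting the sense predictor exploit the identity of the predicted word; a richer sense encoder (such as the paper's Graph Attention Network over sense glosses) would replace the flat sense inventory assumed above.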

Authors (3)
  1. Andrea Lekkas (1 paper)
  2. Peter Schneider-Kamp (31 papers)
  3. Isabelle Augenstein (131 papers)
Citations (2)
