CoSimLex: A Resource for Evaluating Graded Word Similarity in Context (1912.05320v3)

Published 11 Dec 2019 in cs.CL

Abstract: State-of-the-art natural language processing tools are built on context-dependent word embeddings, but no direct method for evaluating these representations currently exists. Standard tasks and datasets for intrinsic evaluation of embeddings are based on judgements of similarity, but ignore context; standard tasks for word sense disambiguation take account of context but do not provide continuous measures of meaning similarity. This paper describes an effort to build a new dataset, CoSimLex, intended to fill this gap. Building on the standard pairwise similarity task of SimLex-999, it provides context-dependent similarity measures; covers not only discrete differences in word sense but more subtle, graded changes in meaning; and covers not only a well-resourced language (English) but a number of less-resourced languages. We define the task and evaluation metrics, outline the dataset collection methodology, and describe the status of the dataset so far.
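
As a rough sketch of the kind of evaluation CoSimLex enables, the snippet below scores a SimLex-style word pair by the cosine similarity of the two words' contextual embeddings within a shared sentence, so the same pair can receive different scores in different contexts. The model choice, example sentences, and helper function are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative sketch: contextual similarity of a word pair inside a shared
# sentence, the kind of graded, context-dependent measure CoSimLex targets.
# The multilingual BERT checkpoint is an assumed choice, not the paper's.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()


def contextual_similarity(context: str, word1: str, word2: str) -> float:
    """Cosine similarity between the contextual embeddings of two words."""
    enc = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)

    def word_vector(word: str) -> torch.Tensor:
        # Locate the word's subword pieces in the context and average them.
        piece_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
        ids = enc["input_ids"][0].tolist()
        for i in range(len(ids) - len(piece_ids) + 1):
            if ids[i:i + len(piece_ids)] == piece_ids:
                return hidden[i:i + len(piece_ids)].mean(dim=0)
        raise ValueError(f"'{word}' not found in context")

    v1, v2 = word_vector(word1), word_vector(word2)
    return torch.nn.functional.cosine_similarity(v1, v2, dim=0).item()


# The same pair in two contexts: the financial sense of "bank" should sit
# closer to "money" than the riverside sense does.
ctx_a = "He went to the bank to deposit the money from his paycheck."
ctx_b = "Coins of old money were found buried in the muddy bank of the river."
print(contextual_similarity(ctx_a, "bank", "money"))
print(contextual_similarity(ctx_b, "bank", "money"))
```

Model scores obtained this way could then be compared against the dataset's human similarity judgements collected per context, which is the graded, context-sensitive evaluation the abstract describes.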

Authors (8)
  1. Carlos Santos Armendariz (1 paper)
  2. Matthew Purver (32 papers)
  3. Senja Pollak (37 papers)
  4. Nikola Ljubešić (24 papers)
  5. Marko Robnik-Šikonja (39 papers)
  6. Mark Granroth-Wilding (3 papers)
  7. Kristiina Vaik (2 papers)
  8. Matej Ulčar (8 papers)
Citations (33)