Evaluating Neural Word Representations in Tensor-Based Compositional Settings (1408.6179v1)

Published 26 Aug 2014 in cs.CL

Abstract: We provide a comparative study between neural word representations and traditional vector spaces based on co-occurrence counts, in a number of compositional tasks. We use three different semantic spaces and implement seven tensor-based compositional models, which we then test (together with simpler additive and multiplicative approaches) in tasks involving verb disambiguation and sentence similarity. To check their scalability, we additionally evaluate the spaces using simple compositional methods on larger-scale tasks with less constrained language: paraphrase detection and dialogue act tagging. In the more constrained tasks, co-occurrence vectors are competitive, although choice of compositional method is important; on the larger-scale tasks, they are outperformed by neural word embeddings, which show robust, stable performance across the tasks.
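The abstract mentions simple additive and multiplicative approaches alongside the tensor-based models. A minimal sketch of these two point-wise composition operators (with random placeholder vectors standing in for the paper's co-occurrence or neural embeddings) might look like:

```python
import numpy as np

# Hypothetical word vectors for illustration only; in the paper these come
# from co-occurrence-based or neural semantic spaces.
rng = np.random.default_rng(0)
dim = 5
verb = rng.random(dim)
obj = rng.random(dim)

# Additive composition: element-wise sum of the word vectors.
additive = verb + obj

# Multiplicative composition: element-wise (Hadamard) product.
multiplicative = verb * obj

def cosine(a, b):
    """Cosine similarity, a standard measure for comparing composed vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Composed phrase vectors built this way are then compared with cosine similarity in tasks such as verb disambiguation and sentence similarity; the tensor-based models replace these point-wise operators with learned matrices or higher-order tensors for relational words.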

Authors (4)
  1. Dmitrijs Milajevs (2 papers)
  2. Dimitri Kartsaklis (24 papers)
  3. Mehrnoosh Sadrzadeh (51 papers)
  4. Matthew Purver (32 papers)
Citations (114)