Multiple Word Embeddings for Increased Diversity of Representation (2009.14394v2)

Published 30 Sep 2020 in cs.CL

Abstract: Most state-of-the-art models in NLP are neural models built on top of large, pre-trained, contextual language models that generate representations of words in context and are fine-tuned for the task at hand. The improvements afforded by these "contextual embeddings" come with a high computational cost. In this work, we explore a simple technique that substantially and consistently improves performance over a strong baseline with negligible increase in run time. We concatenate multiple pre-trained embeddings to strengthen our representation of words. We show that this concatenation technique works across many tasks, datasets, and model types. We analyze aspects of pre-trained embedding similarity and vocabulary coverage and find that the representational diversity between different pre-trained embeddings is the driving force of why this technique works. We provide open source implementations of our models in both TensorFlow and PyTorch.
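
The abstract's core technique, concatenating several pre-trained embedding tables into one wider word representation, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released implementation; the ConcatEmbeddings class, the toy matrices, and the dimensions are assumptions made for the example, and it presumes the embedding tables have already been aligned to a shared vocabulary.

    # Minimal sketch (not the paper's code): look each token up in every
    # pre-trained table and concatenate along the feature dimension.
    import torch
    import torch.nn as nn

    class ConcatEmbeddings(nn.Module):
        def __init__(self, pretrained_matrices):
            """pretrained_matrices: list of float tensors, each [vocab_size, dim_i],
            all indexed by the same shared vocabulary (an assumption here)."""
            super().__init__()
            self.embeddings = nn.ModuleList(
                [nn.Embedding.from_pretrained(m, freeze=False) for m in pretrained_matrices]
            )

        def forward(self, token_ids):
            # Output dimension is the sum of the individual embedding dims.
            return torch.cat([emb(token_ids) for emb in self.embeddings], dim=-1)

    # Usage with two toy "pre-trained" tables over a 10-word vocabulary.
    glove_like = torch.randn(10, 50)    # stand-in for 50-d GloVe vectors
    w2v_like = torch.randn(10, 100)     # stand-in for 100-d word2vec vectors
    layer = ConcatEmbeddings([glove_like, w2v_like])
    batch = torch.tensor([[1, 4, 7], [2, 3, 9]])  # [batch, seq_len] token ids
    print(layer(batch).shape)           # torch.Size([2, 3, 150])

In practice each table would be loaded from its own pre-trained source (e.g. GloVe, word2vec, fastText), and downstream task layers simply consume the wider concatenated vectors, which is why the run-time overhead stays negligible.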

Authors (5)
  1. Brian Lester (21 papers)
  2. Daniel Pressel (8 papers)
  3. Amy Hemmeter (3 papers)
  4. Sagnik Ray Choudhury (17 papers)
  5. Srinivas Bangalore (11 papers)
Citations (7)
