Multiple Word Embeddings for Increased Diversity of Representation (2009.14394v2)
Abstract: Most state-of-the-art models in NLP are neural models built on top of large, pre-trained, contextual language models that generate representations of words in context and are fine-tuned for the task at hand. The improvements afforded by these "contextual embeddings" come with a high computational cost. In this work, we explore a simple technique that substantially and consistently improves performance over a strong baseline with negligible increase in run time. We concatenate multiple pre-trained embeddings to strengthen our representation of words. We show that this concatenation technique works across many tasks, datasets, and model types. We analyze aspects of pre-trained embedding similarity and vocabulary coverage and find that the representational diversity between different pre-trained embeddings is the driving force behind why this technique works. We provide open source implementations of our models in both TensorFlow and PyTorch.
- Brian Lester
- Daniel Pressel
- Amy Hemmeter
- Sagnik Ray Choudhury
- Srinivas Bangalore
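
Below is a minimal PyTorch sketch of the concatenation idea described in the abstract: each token is looked up in several pre-trained embedding tables and the resulting vectors are concatenated along the feature dimension before being fed to a task model. It is not the authors' released implementation; the module name `ConcatEmbeddings`, the random stand-in matrices, and the shared-vocabulary assumption are illustrative choices.

```python
import torch
import torch.nn as nn


class ConcatEmbeddings(nn.Module):
    """Concatenate several pre-trained embedding tables along the feature axis.

    Assumes all tables have been re-indexed to a single shared vocabulary,
    so one tensor of token ids can index every table.
    """

    def __init__(self, embedding_matrices, freeze=True):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding.from_pretrained(w, freeze=freeze) for w in embedding_matrices
        )
        # Downstream layers see the sum of the individual embedding sizes.
        self.output_dim = sum(e.embedding_dim for e in self.embeddings)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, seq_len, output_dim)
        return torch.cat([emb(token_ids) for emb in self.embeddings], dim=-1)


# Random matrices stand in for pre-trained vectors (e.g., GloVe 300d, word2vec 300d).
vocab_size = 10_000
glove_like = torch.randn(vocab_size, 300)
word2vec_like = torch.randn(vocab_size, 300)

embedder = ConcatEmbeddings([glove_like, word2vec_like])
ids = torch.randint(0, vocab_size, (2, 8))  # batch of 2 sentences, 8 tokens each
vectors = embedder(ids)
print(vectors.shape)  # torch.Size([2, 8, 600])
```

In practice the stand-in matrices would be replaced by actual pre-trained tables mapped onto a common vocabulary, and the 600-dimensional output would feed the downstream tagger or classifier in place of a single embedding layer.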