Lightweight Adaptation of Neural Language Models via Subspace Embedding (2308.08688v1)

Published 16 Aug 2023 in cs.CL and cs.AI

Abstract: Traditional neural word embeddings usually depend on a rich and diverse vocabulary. However, language models tend to cover major vocabularies through their word embedding parameters; in particular, for multilingual language models these parameters generally account for a significant share of the overall learned parameters. In this work, we present a new compact embedding structure that reduces the memory footprint of pre-trained language models at a cost of up to 4% absolute accuracy. Embedding vectors are reconstructed from a set of subspace embeddings together with an assignment procedure derived from the contextual relationships among tokens in the pre-trained model. The subspace embedding structure is calibrated to masked language models, and we evaluate it on similarity, textual entailment, sentence, and paraphrase tasks. Our experimental evaluation shows that the subspace embeddings achieve compression rates beyond 99.8% relative to the original embeddings on the XNLI and GLUE benchmark suites.
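A minimal sketch of how such a subspace reconstruction might look, assuming a product-quantization-style design: each token's full embedding is rebuilt by concatenating codewords drawn from small shared subspace codebooks, with one index per subspace stored per token. All names, shapes, and the random assignments below are illustrative assumptions; the paper derives its assignments from contextual relationships among tokens, which this sketch does not model.

```python
import numpy as np

# Illustrative shapes (assumptions, not the paper's configuration).
V, d = 250_000, 768          # vocabulary size, embedding dimension
g, k = 4, 16                 # number of subspaces, codewords per subspace

rng = np.random.default_rng(0)

# Shared subspace codebooks: g codebooks of k vectors, each of size d // g.
codebooks = rng.normal(size=(g, k, d // g)).astype(np.float32)

# Per-token assignments: one codeword index per subspace. In the paper these
# come from contextual relationships; here they are random placeholders.
assignments = rng.integers(0, k, size=(V, g))

def reconstruct(token_id: int) -> np.ndarray:
    """Rebuild a full d-dim embedding by concatenating the chosen codewords."""
    parts = [codebooks[s, assignments[token_id, s]] for s in range(g)]
    return np.concatenate(parts)

emb = reconstruct(42)        # shape (768,)

# Rough parameter count vs. a dense V x d embedding matrix
# (float codebook entries and integer indices counted together).
dense = V * d
compact = codebooks.size + assignments.size
print(f"compression: {1 - compact / dense:.4%}")  # well above 99% here
```

The arithmetic illustrates why compression rates beyond 99.8% are plausible: the dense matrix stores V x d floats, while the compact form stores only g small codebooks plus g integer indices per token.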

Authors (2)
  1. Amit Kumar Jaiswal (14 papers)
  2. Haiming Liu (10 papers)
Citations (2)