LLM Vocabulary Compression for Low-Compute Environments (2411.06371v1)
Published 10 Nov 2024 in cs.CL and cs.LG
Abstract: We present a method to compress the final linear layer of LLMs, reducing memory usage by up to 3.4x without significant performance loss. By grouping tokens based on Byte Pair Encoding (BPE) merges, we prevent materialization of the memory-intensive logits tensor. Evaluations on the TinyStories dataset show that our method performs on par with GPT-Neo and GPT-2 while improving throughput by up to 3x, making it suitable for low-compute environments.
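The core idea is that, instead of projecting the hidden state onto the full vocabulary at once, the output head can score a group of tokens first and then a token within that group, so the full (sequence x vocabulary) logits tensor is never materialized. Below is a minimal sketch of such a grouped output head, assuming the token-to-group partition (derived from BPE merges in the paper) is given as input; all class names, shapes, and the loss weighting are illustrative assumptions, not the paper's reference implementation.

```python
# Hedged sketch of a grouped LM head: predict a group, then a token within the
# predicted group, so only per-group logits are ever materialized.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupedLMHead(nn.Module):
    def __init__(self, hidden_dim: int, group_sizes: list[int]):
        super().__init__()
        # One small projection per token group instead of a single
        # hidden_dim x vocab_size matrix.
        self.group_proj = nn.Linear(hidden_dim, len(group_sizes))
        self.token_proj = nn.ModuleList(
            nn.Linear(hidden_dim, g) for g in group_sizes
        )

    def forward(self, h: torch.Tensor, group_id: torch.Tensor, token_in_group: torch.Tensor):
        # h: (batch, hidden_dim); group_id, token_in_group: (batch,)
        group_loss = F.cross_entropy(self.group_proj(h), group_id)
        token_loss = h.new_zeros(())
        for g, proj in enumerate(self.token_proj):
            mask = group_id == g
            if mask.any():
                # Only the logits for this group's tokens are computed.
                token_loss = token_loss + F.cross_entropy(
                    proj(h[mask]), token_in_group[mask]
                ) * mask.float().mean()
        return group_loss + token_loss
```

With, say, 256 groups of roughly equal size over a 50k-token vocabulary, each projection is around 200x narrower than a full vocabulary head, which is the kind of memory saving the abstract describes; the exact grouping by BPE merges and the reported 3.4x figure come from the paper itself.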