Extreme compression of sentence-transformer ranker models: faster inference, longer battery life, and less storage on edge devices (2207.12852v1)

Published 29 Jun 2022 in cs.LG, cs.CL, and cs.IR

Abstract: Modern search systems use several large ranker models with transformer architectures. These models require substantial computational resources and are not suitable for use on devices with limited resources. Knowledge distillation is a popular compression technique that can reduce the resource needs of such models: a large teacher model transfers its knowledge to a small student model. To drastically reduce memory requirements and energy consumption, we propose two extensions to a popular sentence-transformer distillation procedure: generation of an optimally sized vocabulary and dimensionality reduction of the teacher's embedding dimension prior to distillation. We evaluate these extensions on two different types of ranker models. The result is extremely compressed student models whose evaluation on a test dataset demonstrates the significance and utility of the proposed extensions.
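As a rough illustration of the second extension (reducing the teacher's embedding dimension before distillation), the sketch below projects teacher sentence embeddings to a lower dimension with PCA and then trains a small student to regress onto the reduced embeddings with an MSE objective, in the spirit of sentence-transformer distillation. This is not the paper's implementation: the model (`TinyStudent`), the stand-in data, the dimensions, and the training settings are all placeholder assumptions.

```python
# Hypothetical sketch: reduce the teacher's embedding dimension with PCA,
# then distill a smaller student against the reduced embeddings.
# TinyStudent, the stand-in embeddings, and all dimensions are placeholders,
# not the paper's actual models or data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Pretend these are precomputed teacher sentence embeddings (e.g. 768-dim).
rng = np.random.default_rng(0)
teacher_emb = rng.normal(size=(1000, 768)).astype(np.float32)  # stand-in data

# 1) Dimensionality reduction of the teacher embeddings prior to distillation.
target_dim = 128
pca = PCA(n_components=target_dim)
reduced_teacher = pca.fit_transform(teacher_emb).astype(np.float32)

# 2) Distillation: train a small student to reproduce the reduced embeddings.
class TinyStudent(nn.Module):
    """Placeholder student; the paper uses small sentence-transformer students
    that encode raw text. Here the input is a stand-in feature vector."""
    def __init__(self, in_dim=768, out_dim=target_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, x):
        return self.net(x)

student = TinyStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.from_numpy(teacher_emb)      # student input (stand-in features)
y = torch.from_numpy(reduced_teacher)  # distillation target

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(student(x), y)      # match reduced teacher embeddings
    loss.backward()
    opt.step()
```

In this setup the student's output dimension equals the reduced teacher dimension, so memory and compute shrink on both the embedding table and the downstream similarity computations; the vocabulary-size extension described in the abstract would additionally shrink the student's tokenizer and embedding matrix, which is not shown here.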

Authors (7)
  1. Amit Chaulwar (3 papers)
  2. Lukas Malik (2 papers)
  3. Maciej Krajewski (1 paper)
  4. Felix Reichel (8 papers)
  5. Leif-Nissen Lundbæk (3 papers)
  6. Michael Huth (50 papers)
  7. Bartlomiej Matejczyk (1 paper)
Citations (3)