Leaner and Faster: Two-Stage Model Compression for Lightweight Text-Image Retrieval (2204.13913v1)

Published 29 Apr 2022 in cs.CV and cs.CL

Abstract: Current text-image retrieval approaches (e.g., CLIP) typically adopt a dual-encoder architecture built on pre-trained vision-language representations. However, these models still impose non-trivial memory requirements and substantial incremental indexing time, which makes them less practical on mobile devices. In this paper, we present an effective two-stage framework for compressing a large pre-trained dual-encoder into a lightweight text-image retrieval model. The resulting model is smaller (39% of the original) and faster (1.6x/2.9x for processing images/text, respectively), yet performs on par with or better than the original full model on the Flickr30K and MSCOCO benchmarks. We also open-source an accompanying realistic mobile image search application.
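To make the dual-encoder setting concrete, below is a minimal sketch of CLIP-style text-image retrieval: images are encoded once into an index, and a text query is encoded at search time and matched by cosine similarity. It uses the standard Hugging Face CLIP checkpoint and hypothetical file names purely for illustration; it is not the authors' compressed model or their two-stage compression procedure.

```python
# Minimal dual-encoder retrieval sketch (CLIP-style), assuming the
# openai/clip-vit-base-patch32 checkpoint; file names are hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Offline: index images once with the image encoder.
images = [Image.open(p) for p in ["cat.jpg", "dog.jpg"]]  # hypothetical files
with torch.no_grad():
    image_inputs = processor(images=images, return_tensors="pt")
    image_embs = model.get_image_features(**image_inputs)
    image_embs = image_embs / image_embs.norm(dim=-1, keepdim=True)

# Online: encode the text query and rank indexed images by cosine similarity.
with torch.no_grad():
    text_inputs = processor(text=["a photo of a cat"],
                            return_tensors="pt", padding=True)
    text_emb = model.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

scores = text_emb @ image_embs.T          # shape: (1, num_images)
print("best match index:", scores.argmax(dim=-1).item())
```

The cost the paper targets is visible here: the image encoder dominates both memory and incremental indexing time, which is why shrinking the dual encoder matters for on-device search.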

Authors (2)
  1. Siyu Ren (24 papers)
  2. Kenny Q. Zhu (50 papers)
Citations (6)
