Cross-Modal Fusion Distillation for Fine-Grained Sketch-Based Image Retrieval (2210.10486v1)

Published 19 Oct 2022 in cs.CV and cs.LG

Abstract: Representation learning for sketch-based image retrieval has mostly been tackled by learning embeddings that discard modality-specific information. As instances from different modalities can often provide complementary information describing the underlying concept, we propose a cross-attention framework for Vision Transformers (XModalViT) that fuses modality-specific information instead of discarding it. Our framework first maps paired datapoints from the individual photo and sketch modalities to fused representations that unify information from both modalities. We then decouple the input space of the aforementioned modality fusion network into independent encoders of the individual modalities via contrastive and relational cross-modal knowledge distillation. Such encoders can then be applied to downstream tasks like cross-modal retrieval. We demonstrate the expressive capacity of the learned representations by performing a wide range of experiments and achieving state-of-the-art results on three fine-grained sketch-based image retrieval benchmarks: Shoe-V2, Chair-V2, and Sketchy. Implementation is available at https://github.com/abhrac/xmodal-vit.
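The abstract describes two stages: cross-attention fusion of paired photo and sketch token sequences into a unified "teacher" representation, followed by contrastive and relational knowledge distillation of that teacher into independent single-modality encoders. As a rough illustration only, here is a minimal PyTorch sketch of those two pieces. All module names, dimensions, pooling choices, and loss forms below are assumptions for exposition, not the paper's actual implementation; see the linked repository for that.

```python
# Hypothetical sketch of the XModalViT idea: cross-attention fusion plus
# contrastive/relational distillation. Shapes and design choices are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionFusion(nn.Module):
    """Fuses photo and sketch token sequences via bidirectional cross-attention
    (an assumed design, not the paper's exact architecture)."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.photo_to_sketch = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sketch_to_photo = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, photo_tokens: torch.Tensor, sketch_tokens: torch.Tensor) -> torch.Tensor:
        # Each modality queries the other, so the fused output keeps
        # modality-specific detail instead of discarding it.
        p, _ = self.photo_to_sketch(photo_tokens, sketch_tokens, sketch_tokens)
        s, _ = self.sketch_to_photo(sketch_tokens, photo_tokens, photo_tokens)
        fused = torch.cat([p.mean(dim=1), s.mean(dim=1)], dim=-1)  # pool tokens
        return self.proj(fused)  # unified cross-modal representation

def contrastive_distillation(student: torch.Tensor, teacher: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss pulling each student embedding toward its paired
    fused teacher embedding; a stand-in for the paper's contrastive KD."""
    s = F.normalize(student, dim=-1)
    t = F.normalize(teacher, dim=-1)
    logits = s @ t.t() / temperature
    targets = torch.arange(len(s), device=s.device)
    return F.cross_entropy(logits, targets)

def relational_distillation(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    """Matches pairwise similarity structure between student and teacher
    batches, one common form of relational KD."""
    s = F.normalize(student, dim=-1)
    t = F.normalize(teacher, dim=-1)
    return F.mse_loss(s @ s.t(), t @ t.t())

# Usage with dummy shapes: fuse paired ViT tokens into a teacher embedding,
# then distill a sketch-only encoder's output toward it.
fusion = CrossAttentionFusion()
photo_tokens = torch.randn(8, 197, 256)    # batch of photo ViT tokens
sketch_tokens = torch.randn(8, 197, 256)   # paired sketch ViT tokens
teacher = fusion(photo_tokens, sketch_tokens).detach()
student = torch.randn(8, 256, requires_grad=True)  # sketch-encoder embeddings
loss = contrastive_distillation(student, teacher) + relational_distillation(student, teacher)
```

After distillation, only the single-modality student encoders would be needed at retrieval time, which is what makes the decoupling step useful for cross-modal retrieval.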

Authors (5)
  1. Abhra Chaudhuri (10 papers)
  2. Massimiliano Mancini (66 papers)
  3. Yanbei Chen (167 papers)
  4. Zeynep Akata (144 papers)
  5. Anjan Dutta (41 papers)
Citations (4)
