FashionBERT: Text and Image Matching with Adaptive Loss for Cross-modal Retrieval (2005.09801v2)

Published 20 May 2020 in cs.IR, cs.CV, cs.LG, and eess.IV

Abstract: In this paper, we address text and image matching for cross-modal retrieval in the fashion industry. Unlike matching in the general domain, fashion matching must pay much more attention to fine-grained information in fashion images and texts. Pioneering approaches detect regions of interest (RoIs) in images and use the RoI embeddings as image representations. In general, RoIs tend to capture "object-level" information in fashion images, whereas fashion texts tend to describe more detailed information, e.g., styles and attributes. RoIs are thus not fine-grained enough for fashion text and image matching. To this end, we propose FashionBERT, which leverages image patches as image features. With a pre-trained BERT model as the backbone network, FashionBERT learns high-level representations of texts and images. Meanwhile, we propose an adaptive loss to trade off the multitask learning objectives in FashionBERT. Two tasks (text and image matching, and cross-modal retrieval) are used to evaluate FashionBERT. Experiments on a public dataset demonstrate that FashionBERT achieves significant performance improvements over baseline and state-of-the-art approaches. In practice, FashionBERT is deployed in a concrete cross-modal retrieval application; we provide a detailed analysis of its matching performance and inference efficiency.
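
As a rough illustration of the patch-based fusion the abstract describes, the sketch below cuts an image into non-overlapping patches, projects them into the same embedding space as the text tokens, and runs the joint sequence through a Transformer encoder topped with a text-image matching head. The class name, patch size, layer sizes, and the simple matching head are illustrative assumptions; the pre-trained BERT weights and the adaptive multitask loss from the paper are not reproduced here.

```python
# Minimal sketch of a FashionBERT-style patch + text fusion model (assumptions only,
# not the paper's exact architecture or training setup).
import torch
import torch.nn as nn

class FashionBERTSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, patch=64, img_size=256, layers=4):
        super().__init__()
        self.patch = patch
        num_patches = (img_size // patch) ** 2
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        # Flattened patch pixels are linearly projected into the text embedding space.
        self.patch_proj = nn.Linear(patch * patch * 3, hidden)
        self.pos_emb = nn.Embedding(512 + num_patches, hidden)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.match_head = nn.Linear(hidden, 2)  # matched / mismatched logits

    def forward(self, token_ids, images):
        b = token_ids.size(0)
        txt = self.tok_emb(token_ids)                                   # (B, T, H)
        # Split each image into non-overlapping patches and embed them.
        p = images.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        p = p.contiguous().view(b, 3, -1, self.patch, self.patch)       # (B, 3, P, ph, pw)
        p = p.permute(0, 2, 1, 3, 4).reshape(b, -1, 3 * self.patch * self.patch)
        img = self.patch_proj(p)                                        # (B, P, H)
        # Concatenate text and patch tokens, add positions, and encode jointly.
        seq = torch.cat([txt, img], dim=1)
        pos = torch.arange(seq.size(1), device=seq.device)
        out = self.encoder(seq + self.pos_emb(pos))
        # Use the first text token as a [CLS]-style summary for the matching task.
        return self.match_head(out[:, 0])

model = FashionBERTSketch()
scores = model(torch.randint(0, 30522, (2, 16)), torch.rand(2, 3, 256, 256))
print(scores.shape)  # torch.Size([2, 2])
```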

Authors (8)
  1. Dehong Gao (26 papers)
  2. Linbo Jin (7 papers)
  3. Ben Chen (23 papers)
  4. Minghui Qiu (58 papers)
  5. Peng Li (390 papers)
  6. Yi Wei (60 papers)
  7. Yi Hu (130 papers)
  8. Hao Wang (1124 papers)
Citations (124)
