
e-CLIP: Large-Scale Vision-Language Representation Learning in E-commerce (2207.00208v2)

Published 1 Jul 2022 in cs.LG and cs.CV

Abstract: Understanding vision and language representations of product content is vital for search and recommendation applications in e-commerce. As the backbone of online shopping platforms, and inspired by recent successes in representation learning research, we propose a contrastive learning framework that aligns language and visual models using unlabeled raw product text and images. We present the techniques we used to train large-scale representation learning models and share solutions that address domain-specific challenges. We evaluate our pre-trained model as a backbone for diverse downstream tasks, including category classification, attribute extraction, product matching, product clustering, and adult product recognition. Experimental results show that the proposed method outperforms the baseline on every downstream task, in both single-modality and multi-modality settings.
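The framework described in the abstract follows the CLIP recipe: an image encoder and a text encoder are trained jointly so that embeddings of matching product image-text pairs are pulled together and non-matching pairs are pushed apart. Below is a minimal sketch of the symmetric contrastive (InfoNCE) objective such a model optimizes; the function name, embedding size, and temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of (image, text) pairs.

    image_emb, text_emb: (batch, dim) embeddings from separate encoders.
    Matching pairs share the same row index; every other row in the
    batch serves as an in-batch negative.
    """
    # L2-normalize so the dot product equals cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for the outputs of a
# product-image encoder and a product-text encoder.
if __name__ == "__main__":
    batch, dim = 8, 512
    img = torch.randn(batch, dim)
    txt = torch.randn(batch, dim)
    print(clip_contrastive_loss(img, txt).item())
```

Because the negatives come for free from other items in the batch, this objective scales to unlabeled catalog data, which is why the paper can train on raw product text and images without manual annotation.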

Authors (6)
  1. Wonyoung Shin (4 papers)
  2. Jonghun Park (8 papers)
  3. Taekang Woo (4 papers)
  4. Yongwoo Cho (2 papers)
  5. Kwangjin Oh (3 papers)
  6. Hwanjun Song (44 papers)
Citations (14)