Cross-Modal Scene Networks (1610.09003v1)

Published 27 Oct 2016 in cs.CV, cs.LG, and cs.MM

Abstract: People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.
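To make the regularization idea concrete, below is a minimal sketch (not the authors' exact method) of a cross-modal CNN with modality-specific lower layers, a shared upper representation, and an alignment penalty that encourages the shared features to be modality-agnostic. The two-modality setup ("natural" images and "sketch" as a stand-in second modality), the layer sizes, and the MSE alignment term are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Modality-specific lower layers (one copy per modality)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
    def forward(self, x):
        return torch.flatten(self.conv(x), 1)  # (B, 64*4*4)

class CrossModalSceneNet(nn.Module):
    """Shared upper layers produce a modality-agnostic scene representation."""
    def __init__(self, num_scenes=205):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "natural": ModalityEncoder(),   # hypothetical modality names
            "sketch": ModalityEncoder(),
        })
        self.shared = nn.Sequential(nn.Linear(64 * 4 * 4, 256), nn.ReLU())
        self.classifier = nn.Linear(256, num_scenes)

    def forward(self, x, modality):
        h = self.shared(self.encoders[modality](x))
        return h, self.classifier(h)

def training_loss(model, img_a, img_b, labels, align_weight=1.0):
    # Classify scenes in each modality and pull the shared representations
    # of paired samples together (an assumed MSE alignment regularizer).
    h_a, logits_a = model(img_a, "natural")
    h_b, logits_b = model(img_b, "sketch")
    cls = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    align = F.mse_loss(h_a, h_b)
    return cls + align_weight * align
```

Under this setup, the shared 256-dimensional embedding can be used directly for the cross-modal retrieval application the abstract mentions: embed a query from one modality and rank items from another modality by distance in the shared space.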

Authors (5)
  1. Yusuf Aytar (36 papers)
  2. Lluis Castrejon (11 papers)
  3. Carl Vondrick (93 papers)
  4. Hamed Pirsiavash (50 papers)
  5. Antonio Torralba (178 papers)
Citations (112)