Order-Free RNN with Visual Attention for Multi-Label Classification (1707.05495v3)

Published 18 Jul 2017 in cs.CV

Abstract: In this paper, we propose joint learning of attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes, without prior knowledge of any particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
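
To make the described architecture concrete, below is a minimal sketch of an attention-LSTM decoder that emits labels one at a time without a fixed label order, stopping on a confidence threshold. This is an illustrative assumption in PyTorch, not the authors' released code: the module names, dimensions, start-token handling, and the greedy decoding loop (the paper itself uses a beam-search variant) are all hypothetical.

```python
# Sketch of an order-free attention-LSTM multi-label decoder (assumed PyTorch style).
# Module names, dimensions, and decoding details are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionLSTMDecoder(nn.Module):
    def __init__(self, num_labels, feat_dim=512, hidden_dim=512, embed_dim=256):
        super().__init__()
        # Extra embedding row serves as a <start> token.
        self.label_embed = nn.Embedding(num_labels + 1, embed_dim)
        self.attn = nn.Linear(feat_dim + hidden_dim, 1)   # additive-style attention score
        self.lstm = nn.LSTMCell(feat_dim + embed_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def step(self, prev_label, feats, state):
        # feats: (B, R, feat_dim) regional image features; state: (h, c)
        h, c = state
        # Attention over image regions, conditioned on the current LSTM hidden state.
        h_exp = h.unsqueeze(1).expand(-1, feats.size(1), -1)
        scores = self.attn(torch.cat([feats, h_exp], dim=-1))   # (B, R, 1)
        alpha = F.softmax(scores, dim=1)
        context = (alpha * feats).sum(dim=1)                    # (B, feat_dim)
        emb = self.label_embed(prev_label)                      # (B, embed_dim)
        h, c = self.lstm(torch.cat([context, emb], dim=-1), (h, c))
        return self.classifier(h), (h, c)                       # logits over labels


@torch.no_grad()
def greedy_decode(decoder, feats, max_labels=5, threshold=0.5):
    """Order-free greedy decoding: at each step, take the highest-scoring label
    not yet emitted; stop adding labels once confidence drops below the threshold.
    (A simplification of the beam-search prediction described in the abstract.)"""
    B = feats.size(0)
    h = feats.new_zeros(B, decoder.lstm.hidden_size)
    c = feats.new_zeros(B, decoder.lstm.hidden_size)
    start_idx = decoder.label_embed.num_embeddings - 1
    prev = torch.full((B,), start_idx, dtype=torch.long, device=feats.device)
    predicted = [set() for _ in range(B)]
    for _ in range(max_labels):
        logits, (h, c) = decoder.step(prev, feats, (h, c))
        probs = torch.sigmoid(logits)
        for i in range(B):
            for lbl in predicted[i]:
                probs[i, lbl] = 0.0          # never repeat an already-emitted label
        best = probs.argmax(dim=-1)
        for i in range(B):
            if probs[i, best[i]] >= threshold:
                predicted[i].add(best[i].item())
        prev = best
    return predicted
```

In this sketch, the per-step visual attention lets the recurrent decoder focus on differently sized regions for each emitted label, while the LSTM state carries label co-occurrence information across steps; replacing the greedy loop with beam search over candidate label sequences recovers the multi-label prediction scheme the abstract describes.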

Authors (4)
  1. Shang-Fu Chen (6 papers)
  2. Yi-Chen Chen (14 papers)
  3. Chih-Kuan Yeh (23 papers)
  4. Yu-Chiang Frank Wang (88 papers)
Citations (139)
