
An Integration of Bottom-up and Top-Down Salient Cues on RGB-D Data: Saliency from Objectness vs. Non-Objectness (1807.01532v1)

Published 4 Jul 2018 in cs.CV

Abstract: Bottom-up and top-down visual cues are two types of information that help visual saliency models. These salient cues can come from spatial distributions of the features (space-based saliency) or from contextual / task-dependent features (object-based saliency). Saliency models generally incorporate salient cues in either a bottom-up or a top-down manner, but not both. In this work, we combine bottom-up and top-down cues from both space-based and object-based salient features on RGB-D data. In addition, we investigate the ability of various pre-trained convolutional neural networks to extract top-down saliency on color images based on object-dependent feature activation. We demonstrate that combining salient features from color and depth through bottom-up and top-down methods yields significant improvement in salient object detection with space-based and object-based salient cues. The RGB-D saliency integration framework yields promising results compared with several state-of-the-art models.
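The abstract describes fusing bottom-up (RGB and depth) and top-down (CNN-activation-based) saliency maps into a single map. The paper does not specify its fusion rule here, so the sketch below uses a simple normalized weighted average as an illustrative assumption; the function names and weights are hypothetical, not the authors' method.

```python
import numpy as np

def normalize(s):
    """Rescale a saliency map to the [0, 1] range."""
    s = s.astype(np.float64)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_saliency(bottom_up_rgb, bottom_up_depth, top_down, weights=(1.0, 1.0, 1.0)):
    """Fuse normalized bottom-up (color, depth) and top-down cue maps
    by a weighted average (an assumed fusion rule, for illustration)."""
    maps = [normalize(m) for m in (bottom_up_rgb, bottom_up_depth, top_down)]
    w = np.asarray(weights, dtype=np.float64)
    fused = sum(wi * m for wi, m in zip(w, maps)) / w.sum()
    return normalize(fused)

# Example: three random per-pixel cue maps of the same size.
rng = np.random.default_rng(0)
h, w = 48, 64
fused = fuse_saliency(rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
```

In practice the per-cue weights could be tuned on a validation set, or the averaging replaced by multiplicative or learned fusion.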

Authors (7)
  1. Nevrez Imamoglu (16 papers)
  2. Wataru Shimoda (10 papers)
  3. Chi Zhang (567 papers)
  4. Yuming Fang (53 papers)
  5. Asako Kanezaki (25 papers)
  6. Keiji Yanai (9 papers)
  7. Yoshifumi Nishida (2 papers)
Citations (7)