
Incorporating Network Built-in Priors in Weakly-supervised Semantic Segmentation (1706.02189v1)

Published 6 Jun 2017 in cs.CV

Abstract: Pixel-level annotations are expensive and time consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recently, CNN-based methods have proposed to fine-tune pre-trained networks using image tags. Without additional information, this leads to poor localization accuracy. This problem, however, was alleviated by making use of objectness priors to generate foreground/background masks. Unfortunately these priors either require pixel-level annotations/bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract accurate masks from networks pre-trained for the task of object recognition, thus forgoing external objectness modules. We first show how foreground/background masks can be obtained from the activations of higher-level convolutional layers of a network. We then show how to obtain multi-class masks by the fusion of foreground/background ones with information extracted from a weakly-supervised localization network. Our experiments evidence that exploiting these masks in conjunction with a weakly-supervised training loss yields state-of-the-art tag-based weakly-supervised semantic segmentation results.
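The core idea in the abstract can be sketched in two steps: (1) aggregate the activations of higher-level convolutional layers into a foreground/background mask, and (2) fuse that mask with per-class heatmaps from a weakly-supervised localization network to obtain multi-class masks. The following is a minimal, hypothetical numpy sketch of those two steps; the function names, the sum-and-threshold aggregation, and the argmax fusion are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def foreground_mask(activations, threshold=0.5):
    """Aggregate high-level conv activations of shape (C, H, W) into a
    binary foreground/background mask.

    Assumption: channels are summed and min-max normalized, then
    thresholded; the paper's aggregation may differ.
    """
    fused = activations.sum(axis=0)
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
    return fused > threshold

def multiclass_mask(fg_mask, class_heatmaps):
    """Fuse a fg/bg mask with per-class localization heatmaps of shape
    (K, H, W): pixels outside the mask become background (label 0),
    pixels inside take the argmax class (labels 1..K).
    """
    labels = class_heatmaps.argmax(axis=0) + 1
    return np.where(fg_mask, labels, 0)
```

For example, an activation tensor whose top-left quadrant is strongly activated yields a foreground mask covering that quadrant, and fusing it with class heatmaps assigns the dominant class label there while the rest stays background.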

Authors (6)
  1. Fatemeh Sadat Saleh (10 papers)
  2. Mohammad Sadegh Aliakbarian (7 papers)
  3. Mathieu Salzmann (185 papers)
  4. Lars Petersson (88 papers)
  5. Jose M. Alvarez (90 papers)
  6. Stephen Gould (104 papers)
Citations (37)
