Position-Sensitive Score Maps in IS-FCN
- Position-sensitive score maps are a set of k² specialized score maps that partition object proposals into spatial subregions for precise instance segmentation.
- They modify the traditional FCN architecture by replacing the final 1×1 convolution with one that generates k² channels, enabling fast, end-to-end training and inference.
- This approach delivers competitive segmentation performance with reduced computational overhead and memory usage compared to conventional per-proposal methods.
Position-sensitive score maps are a mechanism introduced within the Instance-Sensitive Fully Convolutional Networks (IS-FCN) architecture to facilitate instance-level segmentation using fully convolutional networks. Unlike conventional FCNs that produce a single per-pixel score map per class, position-sensitive score maps decompose the prediction process into a set of score maps, each responsible for modeling the likelihood that a pixel belongs to a specific relative spatial subregion (cell) within any candidate object bounding box. This design enables the efficient assembly of instance-level mask proposals directly from shared, low-dimensional output tensors, eliminating the need for high-dimensional per-proposal computation and supporting fast end-to-end training and inference (Dai et al., 2016).
1. Definition of Position-Sensitive Score Maps
Let $k$ denote the side of a uniform $k \times k$ grid, and $k^2$ the total number of spatial cells used to tile the interior of any object bounding box. The IS-FCN outputs $k^2$ individual score maps (optionally including a $(k^2+1)$-th “background” channel). Each score map $S_c$, for $c \in \{0, \dots, k^2 - 1\}$, encodes the likelihood that pixel $(x, y)$ falls into relative cell $c$ across all object instances in the image. The mapping from channel index $c$ to cell $(i, j)$ is
$$c = i \cdot k + j, \qquad i = \lfloor c / k \rfloor, \quad j = c \bmod k,$$
so that score map $S_{ik+j}$ is responsible for relative grid location $(i, j)$. Figure 1 in (Dai et al., 2016) illustrates with $k = 3$ how each of the $9$ score maps “lights up” one spatial sub-cell of each instance.
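The channel-to-cell correspondence can be sketched in a few lines of Python; the function names are illustrative, not from the paper, and channels are 0-indexed:

```python
def cell_to_channel(i: int, j: int, k: int) -> int:
    """Map relative grid cell (i, j) of a k x k grid to its score-map channel."""
    return i * k + j

def channel_to_cell(c: int, k: int) -> tuple[int, int]:
    """Inverse map: channel index back to relative grid cell (row, col)."""
    return c // k, c % k

# With k = 3 there are 9 score maps; channel 5 handles cell (1, 2).
```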
2. Network Architecture and Output Head Modification
In classical FCNs for semantic segmentation, the final layer typically employs a $1 \times 1$ convolution to output $C$ channels (one per class). For position-sensitive score maps, this convolution is replaced with one generating $k^2$ output channels (or $k^2 + 1$ with background). If the backbone produces features $F \in \mathbb{R}^{H \times W \times D}$, the head is
$$S = \mathrm{Conv}_{1 \times 1}(F) \in \mathbb{R}^{H \times W \times k^2},$$
where $S_{:,:,c}$ represents the response for grid cell $c$. This enables a single FCN forward pass to produce all position-sensitive maps for the entire image.
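Because a 1×1 convolution is just a per-pixel linear map over channels, the head modification amounts to changing the shape of one weight matrix. A minimal NumPy sketch (shapes and names are illustrative assumptions):

```python
import numpy as np

def position_sensitive_head(features, weights, bias):
    """features: (H, W, D) backbone output; weights: (D, k*k); bias: (k*k,).
    Returns (H, W, k*k) position-sensitive score maps in one pass."""
    # A 1x1 convolution is equivalent to a channel-wise matrix multiply.
    return features @ weights + bias

k, H, W, D = 3, 8, 8, 16
rng = np.random.default_rng(0)
feats = rng.normal(size=(H, W, D))
w = rng.normal(size=(D, k * k))
b = rng.normal(size=(k * k,))
scores = position_sensitive_head(feats, w, b)  # shape (8, 8, 9)
```

Swapping a `C`-way classifier for a `k*k`-way one only changes the second dimension of `weights`, which is why the modification adds essentially no inference cost.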
3. Pixel-wise Labeling and Mathematical Formulation
Given an image $I$ and ground-truth instance bounding boxes $B_n = (x_1, y_1, x_2, y_2)$, training proceeds by assigning each pixel $(x, y)$ within any $B_n$ a ground-truth label corresponding to its spatial cell in the relative $k \times k$ grid. The process is as follows:
- Compute normalized offsets within the bounding box: $u = \frac{x - x_1}{x_2 - x_1}$, $v = \frac{y - y_1}{y_2 - y_1}$.
- Quantize these offsets to grid bins: $j = \min(\lfloor u \cdot k \rfloor,\, k - 1)$, $i = \min(\lfloor v \cdot k \rfloor,\, k - 1)$.
- The ground-truth channel index is $c^* = i \cdot k + j$.
Pixels not contained in any instance are labeled as background (channel $0$ or $k^2$, depending on where the background channel is placed).
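The labeling steps above can be written as a small function. This is an illustrative sketch, not the paper's code; it adopts the convention of background at channel 0, so cell channels are shifted to $1, \dots, k^2$:

```python
def pixel_label(x, y, box, k):
    """box = (x1, y1, x2, y2); returns the target channel in {1, ..., k*k}
    for a pixel inside the box, or 0 (background) otherwise."""
    x1, y1, x2, y2 = box
    if not (x1 <= x < x2 and y1 <= y < y2):
        return 0                       # outside every instance: background
    u = (x - x1) / (x2 - x1)           # normalized horizontal offset
    v = (y - y1) / (y2 - y1)           # normalized vertical offset
    j = min(int(u * k), k - 1)         # quantize to grid column
    i = min(int(v * k), k - 1)         # quantize to grid row
    return i * k + j + 1               # shift by 1 past the background channel
```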
4. Assembly of Instance Masks from Score Maps
Inference involves constructing instance candidates from the position-sensitive maps. For each candidate bounding box $B = (x_1, y_1, x_2, y_2)$ with width $w = x_2 - x_1$ and height $h = y_2 - y_1$ (obtained via sliding windows or region proposals), the following procedure is followed:
- For each cell $(i, j)$ in a $k \times k$ grid ($0 \le i, j < k$):
  - Select map $S_c$ with $c = i \cdot k + j$.
  - Map grid cell $(i, j)$ to the full-image region $R_{ij} = \left[x_1 + \frac{j}{k} w,\, x_1 + \frac{j+1}{k} w\right] \times \left[y_1 + \frac{i}{k} h,\, y_1 + \frac{i+1}{k} h\right]$.
  - Extract the values of $S_c$ over $R_{ij}$.
- Assemble the $k^2$ extracted tiles as the mask $m$ (optionally upsample to $B$'s full resolution).
- Threshold $m$ (e.g., at $0.5$ after a sigmoid) for a binary mask.
- Aggregate mask scores to score the box: $s(B) = \frac{1}{|B|} \sum_{p \in B} m(p)$.
- Apply class-agnostic non-maximum suppression (NMS) to select high-scoring masks.
This pipeline is shown in Figure 2 of (Dai et al., 2016).
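The copy-and-paste assembly step can be sketched in NumPy as below. Function and variable names are assumptions for illustration; boxes are assumed axis-aligned with integer pixel coordinates, and scoring is taken as the mean of the assembled values:

```python
import numpy as np

def assemble_mask(score_maps, box, k):
    """score_maps: (H, W, k*k) array; box = (x1, y1, x2, y2) in pixels.
    Returns the assembled (y2-y1, x2-x1) mask for the box."""
    x1, y1, x2, y2 = box
    mask = np.empty((y2 - y1, x2 - x1))
    # Integer boundaries of the k x k grid laid over the box.
    xs = np.linspace(x1, x2, k + 1).astype(int)
    ys = np.linspace(y1, y2, k + 1).astype(int)
    for i in range(k):              # grid row
        for j in range(k):          # grid column
            c = i * k + j           # channel responsible for cell (i, j)
            tile = score_maps[ys[i]:ys[i+1], xs[j]:xs[j+1], c]
            mask[ys[i]-y1:ys[i+1]-y1, xs[j]-x1:xs[j+1]-x1] = tile
    return mask

def box_score(mask):
    """Score a candidate box as the mean of its assembled mask values."""
    return float(mask.mean())
```

Because each output pixel is a single copy from one shared map, per-proposal cost is linear in the box area with no extra convolutions.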
5. Training Objective and Optimization
Training is performed end-to-end with a per-pixel softmax cross-entropy loss over $k^2 + 1$ channels (including background, here taken as channel $0$). The loss function is
$$\mathcal{L} = -\sum_{p \in \mathcal{P}_{\text{pos}} \cup \mathcal{P}_{\text{neg}}} \log \operatorname{softmax}\!\big(S(p)\big)_{c^*(p)} + \lambda \lVert \theta \rVert_2^2,$$
where:
- $c^*(p)$ is the target channel of pixel $p$,
- $\mathcal{P}_{\text{pos}}$ are pixels within instances (each with $c^*(p) \in \{1, \dots, k^2\}$),
- $\mathcal{P}_{\text{neg}}$ are sampled background pixels (target $0$), typically balancing pos:neg as $1:3$,
- $\lambda \lVert \theta \rVert_2^2$ is weight decay regularization.
This loss encourages each position-sensitive map to specialize in recognizing relative locations of objects and discourages false activations outside object regions.
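The data term of this loss reduces to a standard softmax cross-entropy over sampled pixels. A minimal NumPy sketch (weight decay omitted, as it is typically handled by the optimizer; background sampling is assumed to have happened already):

```python
import numpy as np

def softmax_xent(logits, targets):
    """logits: (N, k*k + 1) per-pixel scores (channel 0 = background);
    targets: (N,) integer target channels. Returns mean cross-entropy."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].mean())
```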
6. Relation to Prior Methods
The method is compared with R-FCN (Dai et al., 2016) and DeepMask (Pinheiro et al., 2015). R-FCN also uses position-sensitive maps, but only for classification and bounding-box regression; IS-FCN extends the idea to pixel-accurate segmentation masks. In contrast, DeepMask applies a proposal-specific prediction head at every window location, incurring high computational and memory overhead. The position-sensitive mapping approach in IS-FCN requires only a single pass to compute shared maps and assembles per-proposal masks via efficient cropping and interpolation, yielding an order-of-magnitude reduction in per-proposal computation and significant memory savings.
Empirical results show that this architecture permits end-to-end training for both localization and segmentation and provides competitive instance segmentation performance on PASCAL VOC and MS COCO benchmarks (see Table 1 in (Dai et al., 2016) for an ablation over $k$).
7. Practical Considerations and Impact
The primary advantages of position-sensitive score maps are computational efficiency, low memory overhead, and the capacity for precise spatial localization within object proposals. By transforming the segmentation problem into the assembly of local spatial cues from compact, shared maps, IS-FCN and similar architectures enable scalable instance segmentation with relatively modest architectural modifications. A plausible implication is that this strategy facilitates the extension of FCN-based semantic segmentation architectures to instance-aware tasks with minimal increase in inference cost, and provides a general-purpose template for designing parsimonious prediction heads for spatially-structured outputs (Dai et al., 2016).