
Select, Attend, and Transfer: Light, Learnable Skip Connections (1804.05181v3)

Published 14 Apr 2018 in cs.CV

Abstract: Skip connections in deep networks have improved both segmentation and classification performance by facilitating the training of deeper network architectures and reducing the risk of vanishing gradients. They equip encoder-decoder-like networks with richer feature representations, but at the cost of higher memory usage and computation, and they may transfer non-discriminative feature maps. In this paper, we focus on improving the skip connections used in segmentation networks (e.g., the U-Net, V-Net, and The One Hundred Layers Tiramisu (DenseNet) architectures). We propose light, learnable skip connections which learn to first select the most discriminative channels and then attend to the most discriminative regions of the selected feature maps. The output of the proposed skip connections is a unique feature map which not only substantially reduces memory usage and network parameters, but also improves segmentation accuracy. We evaluate the proposed method on three different 2D and volumetric datasets and demonstrate that the proposed light, learnable skip connections can outperform the traditional heavy skip connections in terms of segmentation accuracy, memory usage, and number of network parameters.
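The abstract describes a two-stage skip connection: a learned channel-selection step followed by spatial attention over the selected channels, collapsing the encoder block into a single feature map. The sketch below illustrates that idea in PyTorch; the squeeze-and-excitation-style channel gate, the 1x1-conv spatial attention, and the final mean-collapse are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SelectAttendSkip(nn.Module):
    """Sketch of a light, learnable skip connection:
    (1) select discriminative channels with a learned gate,
    (2) attend to discriminative regions of the selected channels,
    (3) collapse to a single feature map for the decoder.
    The specific gate and attention forms here are assumptions."""

    def __init__(self, in_channels: int, reduction: int = 4):
        super().__init__()
        # Channel gate (squeeze-and-excitation style): global average
        # pooling followed by a small bottleneck that scores each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // reduction, in_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 1x1 conv collapses the gated channels
        # into a single-channel saliency map.
        self.spatial_attend = nn.Sequential(
            nn.Conv2d(in_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        selected = x * self.channel_gate(x)        # soft channel selection
        attention = self.spatial_attend(selected)  # (N, 1, H, W) saliency
        # Return one attended map instead of the full encoder block,
        # so the decoder concatenates far fewer channels.
        return (selected * attention).mean(dim=1, keepdim=True)

# Example: a 64-channel encoder feature map is reduced to one channel.
skip = SelectAttendSkip(in_channels=64)
out = skip(torch.randn(2, 64, 32, 32))  # -> shape (2, 1, 32, 32)
```

Because the skip output is a single map rather than the full channel stack, the decoder-side concatenation shrinks accordingly, which is consistent with the memory and parameter savings the abstract claims.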

Authors (11)
  1. Saeid Asgari Taghanaki (22 papers)
  2. Aicha Bentaieb (5 papers)
  3. Anmol Sharma (4 papers)
  4. S. Kevin Zhou (165 papers)
  5. Yefeng Zheng (197 papers)
  6. Bogdan Georgescu (23 papers)
  7. Puneet Sharma (42 papers)
  8. Sasa Grbic (24 papers)
  9. Zhoubing Xu (21 papers)
  10. Dorin Comaniciu (40 papers)
  11. Ghassan Hamarneh (64 papers)
Citations (19)
