FEANet: Feature-Enhanced Attention Network for RGB-Thermal Real-time Semantic Segmentation (2110.08988v1)
Abstract: RGB-Thermal (RGB-T) information has been extensively explored for semantic segmentation in recent years. However, most existing RGB-T semantic segmentation methods compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. To better extract detailed spatial information, we propose a two-stage Feature-Enhanced Attention Network (FEANet) for the RGB-T semantic segmentation task. Specifically, we introduce a Feature-Enhanced Attention Module (FEAM) to excavate and enhance multi-level features from both the channel and spatial views. Benefiting from the proposed FEAM, our FEANet preserves spatial information and shifts more attention to high-resolution features from the fused RGB-T images. Extensive experiments on the urban scene dataset demonstrate that our FEANet outperforms other state-of-the-art (SOTA) RGB-T methods in terms of both objective metrics and subjective visual comparison (+2.6% in global mAcc and +0.8% in global mIoU). For 480 x 640 RGB-T test images, our FEANet runs at real-time speed on an NVIDIA GeForce RTX 2080 Ti card.
- Fuqin Deng (10 papers)
- Hua Feng (101 papers)
- Mingjian Liang (3 papers)
- Hongmin Wang (9 papers)
- Yong Yang (237 papers)
- Yuan Gao (336 papers)
- Junfeng Chen (26 papers)
- Junjie Hu (111 papers)
- Xiyue Guo (8 papers)
- Tin Lun Lam (36 papers)
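
The abstract describes FEAM as enhancing multi-level features from both the channel and spatial views. Below is a minimal PyTorch sketch of such a channel-plus-spatial attention block; the specific layer layout (CBAM-style pooling, the 7x7 spatial gate, the `ChannelSpatialAttention` name, and the reduction ratio) is an assumption for illustration and not the authors' FEAM implementation.

```python
# Minimal sketch of a channel + spatial attention block (assumed structure,
# not the paper's FEAM implementation).
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel view: global pooling + bottleneck MLP yields per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial view: a 7x7 convolution over pooled channel statistics
        # yields a per-pixel weight map.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight channels first, then re-weight spatial locations.
        x = x * self.channel_mlp(x)
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x


if __name__ == "__main__":
    # A fused RGB-T feature map of shape (batch, channels, H, W).
    feats = torch.randn(1, 64, 120, 160)
    enhanced = ChannelSpatialAttention(64)(feats)
    print(enhanced.shape)  # torch.Size([1, 64, 120, 160])
```

Applied to fused RGB-T feature maps at multiple encoder levels, a block of this kind re-weights informative channels and high-resolution spatial locations, which is consistent with the abstract's claim of preserving spatial detail.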