
AttendSeg: A Tiny Attention Condenser Neural Network for Semantic Segmentation on the Edge (2104.14623v1)

Published 29 Apr 2021 in cs.CV and cs.LG

Abstract: In this study, we introduce AttendSeg, a low-precision, highly compact deep neural network tailored for on-device semantic segmentation. AttendSeg possesses a self-attention network architecture comprising lightweight attention condensers for improved spatial-channel selective attention at very low complexity. The unique macro-architecture and micro-architecture design properties of AttendSeg strike a strong balance between representational power and efficiency, achieved via a machine-driven design exploration strategy tailored specifically for the task at hand. Experimental results demonstrate that the proposed AttendSeg can achieve segmentation accuracy comparable to much larger, more complex deep neural networks while possessing significantly lower architectural and computational complexity (requiring as much as >27x fewer MACs, >72x fewer parameters, and >288x lower weight memory), making it well-suited for TinyML applications on the edge.
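
The attention condensers referenced in the abstract are self-attention modules that learn a condensed embedding over joint spatial-channel activations and use it to selectively re-weight the input features. The abstract does not include reference code, so the following is a minimal PyTorch sketch of the general attention condenser pattern (condense, embed, expand, scale); the module name, layer choices, and sizes here (max-pooling condensation, a 1x1 bottleneck embedding with a `reduction` factor, nearest-neighbor expansion) are illustrative assumptions, not the machine-designed configuration produced by the authors' design exploration.

```python
import torch
import torch.nn as nn

class AttentionCondenser(nn.Module):
    """Illustrative sketch of an attention condenser (hypothetical
    configuration): learn a condensed spatial-channel embedding from
    a downsampled input and use it to selectively scale the input."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.condense = nn.MaxPool2d(kernel_size=2)  # condensation layer
        self.embed = nn.Sequential(                  # condensed embedding
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.expand = nn.Upsample(scale_factor=2, mode="nearest")  # expansion layer
        self.scale = nn.Sigmoid()  # attention values in (0, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes even spatial dimensions so expansion restores the input size.
        attention = self.scale(self.expand(self.embed(self.condense(x))))
        return x * attention  # selective attention via element-wise scaling

# Usage example on a dummy feature map
x = torch.randn(1, 32, 64, 64)
y = AttentionCondenser(32)(x)
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Operating on a condensed (downsampled) embedding rather than the full-resolution feature map is what keeps the attention mechanism cheap, which is consistent with the abstract's emphasis on low MAC and parameter counts for edge deployment.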

Authors (4)
  1. Xiaoyu Wen (4 papers)
  2. Mahmoud Famouri (13 papers)
  3. Andrew Hryniowski (12 papers)
  4. Alexander Wong (230 papers)
Citations (7)
