Attend and Rectify: a Gated Attention Mechanism for Fine-Grained Recovery (1807.07320v2)

Published 19 Jul 2018 in cs.CV

Abstract: We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. It learns to attend to lower-level feature activations without requiring part annotations and uses these activations to update and rectify the output likelihood distribution. In contrast to other approaches, the proposed mechanism is modular, architecture-independent, and efficient in terms of both parameters and computation. Experiments show that networks augmented with our approach systematically improve their classification accuracy and become more robust to clutter. As a result, Wide Residual Networks augmented with our proposal surpass state-of-the-art classification accuracies on CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC Food-100.
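
The abstract does not spell out the exact formulation, but a minimal sketch can illustrate the general idea: an attention head pools a lower-level feature map with a learned spatial attention map, proposes its own class distribution, and a learned gate controls how much that proposal rectifies the backbone's output. Everything below is an illustrative assumption, not the paper's implementation: the PyTorch framing, the single 1x1-convolution attention map per head, the tanh gate, and the additive correction of logits are all choices made for the sketch.

```python
# Minimal sketch of a gated attention head with output rectification,
# assuming PyTorch and an additive gated correction of the backbone logits.
# The paper's actual number of heads, gating function, and insertion points
# may differ; this only illustrates the mechanism described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttentionHead(nn.Module):
    """Attends to one lower-level feature map and proposes a class
    distribution plus a scalar gate (confidence) for that proposal."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # 1x1 conv produces a single spatial attention map over activations.
        self.attn = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Per-head classifier on the attention-pooled feature vector.
        self.classifier = nn.Linear(in_channels, num_classes)
        # Gate deciding how strongly this head rectifies the output.
        self.gate = nn.Linear(in_channels, 1)

    def forward(self, feats: torch.Tensor):
        b, c, h, w = feats.shape
        # Spatial softmax so attention weights over locations sum to 1.
        a = self.attn(feats).view(b, -1)
        a = F.softmax(a, dim=1).view(b, 1, h, w)
        # Attention-weighted pooling of the feature map.
        pooled = (feats * a).sum(dim=(2, 3))      # (b, c)
        logits = self.classifier(pooled)          # (b, num_classes)
        gate = torch.tanh(self.gate(pooled))      # (b, 1), in [-1, 1]
        return logits, gate


def rectified_prediction(backbone_logits, head_logits, head_gates):
    """Combine the backbone's output with gated head proposals
    (assumed additive form of the rectification)."""
    out = backbone_logits
    for logits, gate in zip(head_logits, head_gates):
        out = out + gate * logits
    return F.softmax(out, dim=1)


if __name__ == "__main__":
    head = GatedAttentionHead(in_channels=256, num_classes=10)
    feats = torch.randn(4, 256, 8, 8)        # a lower-level feature map
    backbone_logits = torch.randn(4, 10)     # unmodified network output
    logits, gate = head(feats)
    probs = rectified_prediction(backbone_logits, [logits], [gate])
    print(probs.shape)                       # torch.Size([4, 10])
```

Because each head only reads an existing feature map and adds a gated correction to the logits, the module can be attached to intermediate layers of any backbone, which matches the abstract's claim of modularity and architecture independence.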

Authors (5)
  1. Pau Rodríguez (47 papers)
  2. Josep M. Gonfaus (5 papers)
  3. Guillem Cucurull (9 papers)
  4. F. Xavier Roca (2 papers)
  5. Jordi Gonzàlez (12 papers)
Citations (38)