
Attention mechanisms and deep learning for machine vision: A survey of the state of the art (2106.07550v1)

Published 3 Jun 2021 in cs.CV, cs.AI, and cs.LG

Abstract: With the advent of state-of-the-art, nature-inspired, purely attention-based models, i.e. transformers, and their success in NLP, their extension to machine vision (MV) tasks was inevitable and much anticipated. Subsequently, vision transformers (ViTs) were introduced, and they now pose a serious challenge to established deep-learning-based machine vision techniques. However, purely attention-based models/architectures like transformers require huge datasets, long training times, and large computational resources. Some recent works suggest that combining these two varied fields can yield systems with the advantages of both. Accordingly, this state-of-the-art survey paper is introduced, which will hopefully help readers find useful information about this interesting and promising research area. A gentle introduction to attention mechanisms is given, followed by a discussion of popular attention-based deep architectures. Subsequently, the major categories at the intersection of attention mechanisms and deep learning for machine vision are discussed. Afterwards, the major algorithms, issues, and trends within the scope of the paper are discussed.
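The core operation behind the attention mechanisms the survey introduces is scaled dot-product attention, the building block of transformers and ViTs. As a hedged illustration (not code from the paper), a minimal NumPy sketch might look like this; the function name and toy shapes are assumptions for demonstration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value arrays."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: each row sums to 1
    return weights @ V, weights                    # weighted sum of values

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)         # (2, 4): one output vector per query
print(w.sum(axis=-1))    # each attention row sums to 1
```

In a ViT, the queries, keys, and values are linear projections of image-patch embeddings, so each patch attends to every other patch in the image.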

Authors (3)
  1. Abdul Mueed Hafiz (9 papers)
  2. Shabir Ahmad Parah (2 papers)
  3. Rouf Ul Alam Bhat (2 papers)
Citations (38)