
AGSFCOS: Based on attention mechanism and Scale-Equalizing pyramid network of object detection (2105.09596v1)

Published 20 May 2021 in cs.CV and cs.AI

Abstract: Recently, anchor-free object detection models have shown the potential to exceed anchor-based detectors in both accuracy and speed. This article therefore studies two issues: (1) how to help the backbone network of an anchor-free detector learn better feature extraction, and (2) how to make better use of the feature pyramid network. To address these problems, the authors design an attention mechanism module that captures contextual information well and improves detection accuracy, and adopt the SEPC (Scale-Equalizing Pyramid Convolution) network to balance abstract and detailed information and reduce the semantic gap across feature pyramid levels. Experiments on the COCO dataset show accuracy improvements over popular detection models, including the anchor-based YOLOv3 and Faster R-CNN and the anchor-free FoveaBox, FSAF, and FCOS; the optimal model reaches 39.5% COCO AP with a ResNet50 backbone.
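The abstract does not specify the internals of the attention module, but the "contextual information" it describes is typically captured by non-local-style spatial self-attention, where every feature-map position attends to all others. The following is a minimal NumPy sketch of that idea; the function name and the random projection weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(feat, w_q, w_k, w_v):
    """Non-local-style spatial attention over an (H, W, C) feature map.

    Each position attends to every other position, so the output mixes
    global context into each location -- the kind of contextual
    information the abstract's attention module aims to capture.
    (Illustrative sketch, not the paper's actual module.)
    """
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)            # flatten spatial positions
    q, k, v = x @ w_q, x @ w_k, x @ w_v   # query/key/value projections
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    out = attn @ v                        # context-weighted sum of values
    return out.reshape(h, w, -1)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))   # toy feature map
w_q, w_k, w_v = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
ctx = spatial_self_attention(feat, w_q, w_k, w_v)
print(ctx.shape)  # (8, 8, 16)
```

In a detector this enriched map would replace or augment the backbone features feeding the pyramid; SEPC then applies shared convolutions across pyramid levels to narrow the semantic gap between them.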

Authors (5)
  1. Li Wang (470 papers)
  2. Wei Xiang (106 papers)
  3. Ruhui Xue (2 papers)
  4. Kaida Zou (1 paper)
  5. Laili Zhu (2 papers)
Citations (1)
