
Industrial Anomaly Detection and Localization Using Weakly-Supervised Residual Transformers (2306.03492v5)

Published 6 Jun 2023 in cs.CV

Abstract: Recent advancements in industrial Anomaly Detection (AD) have shown that incorporating a few anomalous samples during training can significantly boost accuracy. However, this performance improvement comes at a high cost: extensive annotation efforts, which are often impractical in real-world applications. In this work, we propose a novel framework called "Weakly-supervised RESidual Transformer" (WeakREST), which aims to achieve high AD accuracy while minimizing the need for extensive annotations. First, we reformulate the pixel-wise anomaly localization task into a block-wise classification problem. By shifting the focus to the block level, we can drastically reduce the amount of required annotation without compromising the accuracy of anomaly detection. Second, we design a residual-based transformer model, termed "Positional Fast Anomaly Residuals" (PosFAR), to classify the image blocks in real time. We further propose to label the anomalous regions using only bounding boxes or image tags as weaker labels, leading to a semi-supervised learning setting. On the benchmark dataset MVTec-AD, our proposed WeakREST framework achieves a remarkable Average Precision (AP) of 83.0%, significantly outperforming the previous best result of 75.8% in the unsupervised setting. In the supervised AD setting, WeakREST further improves performance, attaining an AP of 87.6% compared to the previous best of 78.6%. Notably, even when utilizing weaker labels based on bounding boxes, WeakREST surpasses recent leading methods that rely on pixel-wise supervision, achieving an AP of 87.1% against the prior best of 78.6% on MVTec-AD. This precision advantage is also consistently observed on other well-known AD datasets, such as BTAD and KSDD2.
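The core reformulation, turning a pixel-wise localization target into block-wise classification labels, can be illustrated with a minimal sketch. The block size, threshold, and function name below are illustrative assumptions, not the paper's exact implementation; the sketch only shows how a dense mask collapses into far fewer block-level annotations.

```python
import numpy as np

def mask_to_block_labels(mask: np.ndarray, block_size: int = 16, thresh: float = 0.0) -> np.ndarray:
    """Convert a pixel-wise anomaly mask (H x W, values in {0, 1}) into a grid of
    block-wise labels: a block is marked anomalous if the fraction of anomalous
    pixels inside it exceeds `thresh`. (Illustrative sketch, not the paper's code.)"""
    h, w = mask.shape
    hb, wb = h // block_size, w // block_size
    # Crop to a multiple of block_size and view as (hb, block, wb, block)
    blocks = mask[:hb * block_size, :wb * block_size].reshape(hb, block_size, wb, block_size)
    anomalous_fraction = blocks.mean(axis=(1, 3))
    return (anomalous_fraction > thresh).astype(np.int64)

# Example: a 256x256 mask with one small anomalous patch
mask = np.zeros((256, 256), dtype=np.uint8)
mask[40:60, 100:130] = 1
block_labels = mask_to_block_labels(mask, block_size=16)
print(block_labels.shape)  # (16, 16) block grid instead of 65,536 pixel labels
print(block_labels.sum())  # number of blocks flagged as anomalous
```

Under this view, a classifier such as the proposed PosFAR only needs to predict one label per block, which is what allows coarser supervision (bounding boxes or image tags) to substitute for pixel-accurate masks.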

Authors (7)
  1. Hanxi Li (15 papers)
  2. Jingqi Wu (2 papers)
  3. Hao Chen (1005 papers)
  4. Mingwen Wang (17 papers)
  5. Lin Yuanbo Wu (12 papers)
  6. Deyin Liu (13 papers)
  7. Peng Wang (831 papers)
Citations (4)