
Coincident Learning for Unsupervised Anomaly Detection (2301.11368v2)

Published 26 Jan 2023 in cs.LG

Abstract: Anomaly detection is an important task for complex systems (e.g., industrial facilities, manufacturing, large-scale science experiments), where failures in a sub-system can lead to low yield, faulty products, or even damage to components. While complex systems often have a wealth of data, labeled anomalies are typically rare (or even nonexistent) and expensive to acquire. Unsupervised approaches are therefore common and typically search for anomalies either by distance or density of examples in the input feature space (or some associated low-dimensional representation). This paper presents a novel approach called CoAD, which is specifically designed for multi-modal tasks and identifies anomalies based on \textit{coincident} behavior across two different slices of the feature space. We define an \textit{unsupervised} metric, $\hat{F}_\beta$, out of analogy to the supervised classification $F_\beta$ statistic. CoAD uses $\hat{F}_\beta$ to train an anomaly detection algorithm on \textit{unlabeled data}, based on the expectation that anomalous behavior in one feature slice is coincident with anomalous behavior in the other. The method is illustrated using a synthetic outlier data set and an MNIST-based image data set, and is compared to prior state-of-the-art on two real-world tasks: a metal milling data set and a data set from a particle accelerator.
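To make the coincidence idea concrete, below is a minimal Python sketch of an $F_\beta$-style score computed from two detectors' binary flags on disjoint feature slices, with coincident flags standing in for true positives. This is an assumption-based reading of the abstract, not the paper's exact definition of $\hat{F}_\beta$; the function and variable names (coincident_f_beta, flags_a, flags_b, beta) are hypothetical, and the paper uses the metric as a training objective rather than only as a post-hoc score.

```python
import numpy as np

def coincident_f_beta(flags_a: np.ndarray, flags_b: np.ndarray, beta: float = 1.0) -> float:
    """Score agreement between two binary anomaly flags without labels.

    Coincident flags (both detectors fire on the same sample) play the role
    of true positives; each detector's flag count plays the role of predicted
    positives, by analogy with the supervised F_beta statistic. Illustrative
    sketch only; not the paper's exact formula.
    """
    flags_a = flags_a.astype(bool)
    flags_b = flags_b.astype(bool)
    n_coincident = np.sum(flags_a & flags_b)      # proxy for true positives
    n_a, n_b = flags_a.sum(), flags_b.sum()       # proxies for predicted positives
    if n_a == 0 or n_b == 0:
        return 0.0
    precision_proxy = n_coincident / n_a          # fraction of A's flags confirmed by B
    recall_proxy = n_coincident / n_b             # fraction of B's flags confirmed by A
    if precision_proxy + recall_proxy == 0:
        return 0.0
    return ((1 + beta**2) * precision_proxy * recall_proxy
            / (beta**2 * precision_proxy + recall_proxy))

# Example: two noisy detectors that mostly agree on the same hidden anomalies.
rng = np.random.default_rng(0)
hidden = rng.random(1000) < 0.05                  # unknown ground truth, unused by the metric
flags_a = hidden ^ (rng.random(1000) < 0.02)      # noisy flags from feature slice A
flags_b = hidden ^ (rng.random(1000) < 0.02)      # noisy flags from feature slice B
print(f"coincidence-based F_beta estimate: {coincident_f_beta(flags_a, flags_b):.3f}")
```

In CoAD itself, a score of this kind would be maximized while training the per-slice detectors on unlabeled data, so that each detector learns to flag samples whose anomalous behavior is corroborated by the other feature slice.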

Authors (5)
  1. Ryan Humble (6 papers)
  2. Zhe Zhang (182 papers)
  3. Finn O'Shea (2 papers)
  4. Eric Darve (72 papers)
  5. Daniel Ratner (29 papers)
