
Combining pre-trained Vision Transformers and CIDER for Out Of Domain Detection (2309.03047v1)

Published 6 Sep 2023 in cs.CV and cs.AI

Abstract: Out-of-domain (OOD) detection is a crucial component in industrial applications, as it helps identify when a model encounters inputs that lie outside the training distribution. Most industrial pipelines rely on pre-trained models, such as CNNs or Vision Transformers, for downstream tasks. This paper investigates the performance of those models on the task of out-of-domain detection. Our experiments demonstrate that pre-trained transformer models achieve higher detection performance out of the box. Furthermore, we show that pre-trained ViTs and CNNs can be combined with refinement methods such as CIDER to improve their OOD detection performance even further. Our results suggest that transformers are a promising approach for OOD detection and set a stronger baseline for this task in many contexts.
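The abstract describes combining a pre-trained backbone (ViT or CNN) with a CIDER-style refinement for OOD detection. The sketch below is a minimal illustration of that idea, not the authors' exact pipeline: it extracts features with an off-the-shelf pre-trained ViT (via `timm`, an assumed choice) and scores inputs by their maximum cosine similarity to in-distribution class prototypes, which is the typical inference-time scoring rule for CIDER-style hyperspherical embeddings. The model name and tensor shapes are illustrative assumptions.

```python
# Hedged sketch: pre-trained ViT features + CIDER-style prototype scoring.
# Assumptions: timm is installed, class prototypes were precomputed from
# in-distribution training features; this is not the paper's exact method.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
model.eval()

@torch.no_grad()
def embed(x):
    """Return L2-normalized ViT features for a batch of images (B, 3, 224, 224)."""
    feats = model(x)
    return torch.nn.functional.normalize(feats, dim=-1)

@torch.no_grad()
def ood_score(x, prototypes):
    """Negative maximum cosine similarity to class prototypes: higher = more OOD.

    `prototypes` is a (num_classes, dim) tensor of L2-normalized per-class mean
    embeddings computed on in-distribution data (assumed precomputed).
    """
    z = embed(x)                    # (B, dim) normalized embeddings
    sims = z @ prototypes.T         # (B, num_classes) cosine similarities
    return -sims.max(dim=1).values  # (B,) OOD scores
```

A threshold on this score (chosen on a held-out in-distribution set, e.g. at 95% TPR) would then flag inputs as out-of-domain.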

Authors (5)
  1. Grégor Jouet (1 paper)
  2. Clément Duhart (2 papers)
  3. Francis Rousseaux (7 papers)
  4. Julio Laborde (2 papers)
  5. Cyril de Runz (5 papers)
