
Distribution Calibration for Out-of-Domain Detection with Bayesian Approximation (2209.06612v1)

Published 14 Sep 2022 in cs.CL

Abstract: Out-of-Domain (OOD) detection is a key component of a task-oriented dialog system; it aims to identify whether a query falls outside the predefined set of supported intents. Previous softmax-based detection algorithms have been shown to be overconfident on OOD samples. In this paper, we show that OOD overconfidence stems from distribution uncertainty caused by the mismatch between the training and test distributions, which prevents the model from making confident predictions and can therefore produce abnormal softmax scores. We propose a Bayesian OOD detection framework that calibrates distribution uncertainty using Monte-Carlo Dropout. Our method is flexible, plugs easily into existing softmax-based baselines, and gains a 33.33% OOD F1 improvement over MSP while adding only 0.41% inference time. Further analyses show the effectiveness of Bayesian learning for OOD detection.
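The core mechanism the abstract describes, Monte-Carlo Dropout, keeps dropout active at inference and averages several stochastic forward passes, so the spread across passes approximates distribution uncertainty. The sketch below illustrates this idea on a toy single-layer classifier; the weights, dropout rate, and number of passes are illustrative assumptions, not the paper's actual BERT-based intent model or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, W, b, p=0.5, T=32):
    """Run T stochastic forward passes with dropout kept active at
    inference time (Monte-Carlo Dropout). Returns the averaged class
    distribution and the per-class variance across passes."""
    probs = []
    for _ in range(T):
        # inverted dropout: zero out inputs with prob p, rescale the rest
        mask = rng.binomial(1, 1 - p, size=x.shape) / (1 - p)
        probs.append(softmax((x * mask) @ W + b))
    probs = np.stack(probs)        # shape (T, n_classes)
    mean = probs.mean(axis=0)      # calibrated class distribution
    var = probs.var(axis=0)        # proxy for distribution uncertainty
    return mean, var

# hypothetical toy weights; the paper plugs this into softmax-based baselines
x = rng.normal(size=8)
W = rng.normal(size=(8, 3))
b = np.zeros(3)
mean, var = mc_dropout_predict(x, W, b)
score = mean.max()  # an MSP-style confidence taken on the averaged distribution
```

A low `score` or a high `var` would then flag the query as OOD; thresholding the averaged distribution rather than a single softmax pass is what makes the approach pluggable into existing MSP-style detectors.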

Authors (6)
  1. Yanan Wu (40 papers)
  2. Zhiyuan Zeng (23 papers)
  3. Keqing He (47 papers)
  4. Yutao Mou (16 papers)
  5. Pei Wang (240 papers)
  6. Weiran Xu (58 papers)
Citations (7)