Evaluation of Error Probability of Classification Based on the Analysis of the Bayes Code: Extension and Example (1910.03257v8)

Published 8 Oct 2019 in cs.IT and math.IT

Abstract: Suppose that we have two training sequences generated by parametrized distributions $P_{\theta^*}$ and $P_{\xi^*}$, where $\theta^*$ and $\xi^*$ are unknown true parameters. Given the training sequences, we study the problem of classifying whether a test sequence was generated according to $P_{\theta^*}$ or $P_{\xi^*}$. This problem can be thought of as a hypothesis testing problem, and our aim is to analyze the weighted sum of the type-I and type-II error probabilities. Utilizing the analysis of the codeword lengths of the Bayes code, our previous study derived more refined bounds on the error probability than were known previously. However, our previous study had the following deficiencies: i) the prior distributions of $\theta$ and $\xi$ are the same; ii) the prior distributions of the two hypotheses are uniform; iii) no numerical calculation at finite blocklength. This study solves these problems. We remove the restrictions i) and ii) and derive more general results than obtained previously. To deal with problem iii), we perform a numerical calculation for a concrete model.
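The classification rule the abstract describes can be illustrated with a small sketch. The following is a hedged, minimal example for a Bernoulli model with Beta priors: each hypothesis is scored by the Bayes-code length of the test sequence given its training sequence (equivalently, the log conditional marginal likelihood), with optional non-uniform log prior weights on the two hypotheses. All function names, the choice of model, and the Jeffreys prior defaults are illustrative assumptions, not the paper's concrete model.

```python
from math import lgamma

def log_marginal(ones, zeros, a=0.5, b=0.5):
    # Log Bayes-mixture (marginal) likelihood of a binary sequence with
    # `ones` ones and `zeros` zeros, under a Beta(a, b) prior on the
    # Bernoulli parameter. Jeffreys prior (a = b = 0.5) is an assumption.
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + ones) + lgamma(b + zeros)
            - lgamma(a + b + ones + zeros))

def classify(train0, train1, test, a=0.5, b=0.5,
             log_prior0=0.0, log_prior1=0.0):
    """Return 0 or 1 for the hypothesis with the larger posterior score.

    The score of each hypothesis is the log conditional marginal
    likelihood of `test` given that hypothesis's training sequence
    (the negative Bayes codeword-length difference), plus a log prior
    weight on the hypothesis, which need not be uniform.
    """
    m1, m0 = sum(test), len(test) - sum(test)

    def score(train):
        n1, n0 = sum(train), len(train) - sum(train)
        # log P(test | train) under the Bayes mixture
        return (log_marginal(n1 + m1, n0 + m0, a, b)
                - log_marginal(n1, n0, a, b))

    s0 = score(train0) + log_prior0
    s1 = score(train1) + log_prior1
    return 0 if s0 >= s1 else 1
```

A test sequence dominated by ones is then attributed to the training source that also produced mostly ones, and vice versa; tilting `log_prior0` / `log_prior1` trades type-I error against type-II error, matching the weighted-sum criterion in the abstract.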

Citations (5)