Learning, complexity and information density (0908.4494v1)

Published 31 Aug 2009 in cs.IT, cs.CC, math.IT, and math.PR

Abstract: What is the relationship between the complexity of a learner and the randomness of his mistakes? This question was posed in \cite{rat0903}, which showed that the more complex the learner, the higher the likelihood that his mistakes deviate from a truly random sequence. In the current paper we report on an empirical investigation of this problem. We investigate two characteristics of randomness: the stochastic and algorithmic complexity of the binary sequence of mistakes. A learner with a Markov model of order $k$ is trained on a finite binary sequence produced by a Markov source of order $k^{*}$ and is tested on a different random sequence. As a measure of the learner's complexity we define a quantity called the \emph{sysRatio}, denoted by $\rho$, which is the ratio between the compressed and uncompressed lengths of the binary string whose $i^{\text{th}}$ bit represents the maximum \emph{a posteriori} decision made at state $i$ of the learner's model. The quantity $\rho$ is a measure of information density. The main result of the paper shows that this ratio is crucial in answering the question posed above. The result indicates that there is a critical threshold $\rho^{*}$ such that when $\rho\leq\rho^{*}$ the sequence of mistakes possesses the following features: (1) low divergence $\Delta$ from a random sequence, (2) low variance in algorithmic complexity. When $\rho>\rho^{*}$, the characteristics of the mistake sequence change sharply towards a high $\Delta$ and high variance in algorithmic complexity.
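The sysRatio lends itself to a direct computation: estimate the order-$k$ Markov model from the training sequence, record the MAP next-bit decision at each of the $2^k$ states, and take the ratio of the compressed to uncompressed length of that decision string. The sketch below illustrates this under stated assumptions; it is not the authors' code. The paper does not specify the compressor or the bit encoding, so zlib over an ASCII '0'/'1' string is used as a stand-in, and states never visited in training default to the decision 0.

```python
import random
import zlib
from collections import Counter

def sys_ratio(train_bits: str, k: int) -> float:
    """rho = compressed length / uncompressed length of the string whose
    i-th bit is the MAP next-bit decision at state i of the order-k model."""
    # Empirical next-bit counts for every length-k context in the training data.
    counts = {}
    for i in range(len(train_bits) - k):
        state = train_bits[i : i + k]
        counts.setdefault(state, Counter())[train_bits[i + k]] += 1

    # MAP decision at each of the 2^k states, enumerated in binary order.
    # Unseen states default to '0' (an assumption made for this sketch).
    decisions = "".join(
        counts.get(format(s, f"0{k}b"), Counter({"0": 1})).most_common(1)[0][0]
        for s in range(2 ** k)
    )

    raw = decisions.encode("ascii")
    return len(zlib.compress(raw)) / len(raw)

random.seed(0)

# Strongly structured order-1 source: the next bit repeats the previous
# one 95% of the time, so MAP decisions form a very regular pattern.
bits = ["0"]
for _ in range(50_000):
    bits.append(bits[-1] if random.random() < 0.95 else str(1 - int(bits[-1])))
structured = "".join(bits)

# Memoryless fair-coin source for contrast: MAP decisions at each state
# are close to coin flips, so the decision string resists compression.
fair = "".join(random.choice("01") for _ in range(50_000))

print(sys_ratio(structured, k=8))  # low rho
print(sys_ratio(fair, k=8))        # higher rho
```

Note that encoding each bit as an ASCII character inflates compressibility uniformly, so only the relative comparison of the two printed values is meaningful here; the numerical value of a threshold such as $\rho^{*}$ would depend on the compressor and encoding actually used.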
