Net-negative effect of error-correction on the learning coefficient for Turing machines
Determine whether there exist Turing machines augmented with run-time error-correction (correcting all error syndromes of weight at most C) for which introducing error-correction reduces the local learning coefficient λ([M], q) of the average negative log-likelihood L around the code [M], despite the increase in complexity from using more transition tuples. Concretely, ascertain whether error-correction can be engineered so that the net effect, combining the tendency of added tuples to raise λ with the upper bound λ([M], q) ≤ d/(2(C+1)) for machines that correct errors of weight at most C, is a strictly smaller λ([M], q) than that of the original machine without error-correction.
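As a sketch of the quantitative comparison at stake (the symbols M′ and d′ below are illustrative labels for the error-correcting machine and the dimension of its parameter space, and are not fixed by the statement above): if M′ corrects all error syndromes of weight at most C, the bound gives λ([M′], q) ≤ d′/(2(C+1)), so λ([M′], q) < λ([M], q) is guaranteed whenever d′/(2(C+1)) < λ([M], q). A net-negative construction therefore has to keep the growth in d′ (and in the realized λ([M′], q)) from outweighing the 1/(C+1) suppression coming from the bound; whether such an M′ can actually be constructed is exactly the open question.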
References
It is an important open question whether error-correction can be made "net negative" in this sense for Turing machines. ... Answering this open question would have important implications for the character of programs that are given high probability by the posterior.