
Well-Ordering Principles in Proof Theory and Reverse Mathematics (2010.12453v1)

Published 23 Oct 2020 in math.LO

Abstract: Several theorems about the equivalence of familiar theories of reverse mathematics with certain well-ordering principles have been proved by recursion-theoretic and combinatorial methods (Friedman, Marcone, Montalban et al.) and with far-reaching results by proof-theoretic technology (Afshari, Freund, Girard, Rathjen, Thomson, Valencia Vizcano, Weiermann et al.), employing deduction search trees and cut elimination theorems in infinitary logics with ordinal bounds in the latter case. At type level one, the well-ordering principles are of the form (*) "if X is well-ordered then f(X) is well-ordered", where f is a standard proof-theoretic function from ordinals to ordinals (such f's are always dilators). One aim of the paper is to present a general methodology underlying these results that enables one to construct omega-models of particular theories from (*), and even beta-models from the type 2 version of (*). As (*) is of complexity Pi-1-2, such a principle cannot characterize stronger comprehensions at the level of Pi-1-1-comprehension. This requires a higher order version of (*) that employs ideas from ordinal representation systems with collapsing functions used in impredicative proof theory. The simplest one is the Bachmann construction. Relativizing the latter construction to any dilator f and asserting that this always yields a well-ordering turns out to be equivalent to Pi-1-1-comprehension. This result has been conjectured and a proof strategy adumbrated roughly 10 years ago, but the proof has only been worked out in recent years.
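The type-one principle described in the abstract can be sketched in standard notation (a paraphrase for orientation, not the paper's exact formalism; WO(X) abbreviates "X is a well-ordering"):

```latex
% Well-ordering principle for a proof-theoretic function f on ordinals
% (such f's are dilators):
\[
  \mathrm{WOP}(f) \;:\quad
  \forall X \,\bigl(\, \mathrm{WO}(X) \;\rightarrow\; \mathrm{WO}(f(X)) \,\bigr)
\]
% WOP(f) is a Pi^1_2 statement, so it cannot by itself characterize
% Pi^1_1-comprehension; the paper's higher-order version instead relativizes
% the Bachmann construction to an arbitrary dilator f and asserts that the
% result is always a well-ordering.
```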
