
Oracle modalities (2406.05818v1)

Published 9 Jun 2024 in math.LO, cs.LO, and math.CT

Abstract: We give a new formulation of Turing reducibility in terms of higher modalities, inspired by an embedding of the Turing degrees in the lattice of subtoposes of the effective topos discovered by Hyland. In this definition, higher modalities play a similar role to I/O monads or dialogue trees in allowing a function to receive input from an external oracle. However, in homotopy type theory they have better logical properties than monads: they are compatible with higher types, and each modality corresponds to a reflective subuniverse that under suitable conditions is itself a model of homotopy type theory. We give synthetic proofs of some basic results about Turing reducibility in cubical type theory, making use of two axioms: Markov induction and computable choice. Both axioms are variants of axioms already studied in the effective topos. We show they hold in certain reflective subuniverses of cubical assemblies, demonstrate their use in some simple proofs in synthetic computability theory using modalities, and show they are downwards absolute for oracle modalities. These results have been formalised using the cubical mode of the Agda proof assistant. We explore some first connections between Turing reducibility and homotopy theory. This includes a synthetic proof that two Turing degrees are equal as soon as they induce isomorphic permutation groups on the natural numbers, making essential use of both Markov induction and the formulation of groups in HoTT as pointed, connected, 1-truncated types. We also give some simple non-topological examples of modalities in cubical assemblies based on these ideas, to illustrate what we expect higher dimensional analogues of the Turing degrees to look like.
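The abstract contrasts higher modalities with the I/O-monad/dialogue-tree approach to oracle access. A minimal sketch of that latter idea (not the paper's modality construction) can be given as a free-monad-style tree whose nodes query an external oracle; the type and function names here are illustrative, not taken from the paper's Agda formalisation.

```haskell
-- A dialogue tree over queries q with answers a, producing a result r.
-- A computation either finishes, or asks the oracle a question and
-- continues with the answer it receives.
data Dialogue q a r
  = Done r                        -- finished with result r
  | Query q (a -> Dialogue q a r) -- ask q, then continue with the answer

-- Run a dialogue against a concrete oracle, modelled as a total
-- function from queries to answers.
runWith :: (q -> a) -> Dialogue q a r -> r
runWith _      (Done r)    = r
runWith oracle (Query q k) = runWith oracle (k (oracle q))

-- Example: viewing the oracle as the characteristic function of a set
-- of naturals, decide whether it contains both 2 and 3.
example :: Dialogue Int Bool Bool
example = Query 2 (\b2 -> Query 3 (\b3 -> Done (b2 && b3)))
```

A function defined this way is Turing reducible to the oracle in the classical sense: it only interacts with the oracle through finitely many queries. The paper's point is that in homotopy type theory, replacing this monadic encoding with a modality yields a reflective subuniverse that interacts well with higher types.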

Citations (3)

Authors (1)
