Audited calibration under regime shift as a computational test of support-structured broadcast
Abstract: A central prediction of the accompanying theoretical framework is that metacognitive calibration can vary even when content-level performance is held approximately fixed, depending on whether support structure is preserved in a globally reusable broadcast state. We provide a minimal computational test of this claim using a two-channel probabilistic cue-integration task with regime shifts that induce systematic miscalibration in one channel. We compare content-dominated architectures, in which confidence is computed by a single global mapping from evidence strength to probability, with an auditor architecture that learns a regime-conditioned calibration mapping from an audit trail of outcomes. We then couple confidence to control by implementing a policy that either acts immediately or requests one additional sample when confidence falls below a threshold. Across matched evidence streams, the auditor substantially improves calibration, particularly in the degraded regime, and produces qualitatively different control behavior by selectively requesting additional evidence under low-support conditions. These results demonstrate a concrete, testable dissociation between content performance and system-level confidence and control policy that arises from globally reusable support summaries.
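The contrast the abstract describes can be illustrated with a minimal sketch. All task parameters below (Gaussian cue model, noise levels, the 0.75 request threshold) are illustrative assumptions, not values from the paper; the auditor's audit-trail learning is idealized here as access to regime-conditioned noise estimates. A single global confidence mapping that assumes the normal-regime noise level becomes miscalibrated when a degraded regime raises the noise, while the regime-conditioned mapping stays calibrated and selectively triggers extra-sample requests in the degraded regime:

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(x, mu, sigma):
    """Posterior p(s=+1 | x) for x ~ N(s*mu, sigma^2) with a uniform prior."""
    return 1.0 / (1.0 + np.exp(-2.0 * mu * x / sigma**2))

# Hypothetical task parameters (illustrative, not from the paper).
mu = 1.0
sigma_normal, sigma_degraded = 1.0, 3.0
n = 20000

# Half the trials come from a degraded regime with higher cue noise.
regime = rng.integers(0, 2, n)                      # 0 = normal, 1 = degraded
sigma_true = np.where(regime == 1, sigma_degraded, sigma_normal)
s = rng.choice([-1.0, 1.0], n)                      # latent state
x = s * mu + sigma_true * rng.normal(size=n)        # observed evidence

# Content-dominated architecture: one global mapping that assumes
# normal-regime noise everywhere.
conf_global = confidence(x, mu, sigma_normal)

# Auditor architecture: regime-conditioned mapping using noise levels
# learned from an audit trail (idealized here as the true sigmas).
conf_auditor = confidence(x, mu, sigma_true)

def ece(conf, correct, bins=10):
    """Expected calibration error over winning-side confidence bins."""
    conf_win = np.maximum(conf, 1 - conf)           # confidence in chosen side
    edges = np.linspace(0.5, 1.0, bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf_win >= lo) & (conf_win < hi)
        if m.any():
            err += m.mean() * abs(conf_win[m].mean() - correct[m].mean())
    return err

# Content decision is identical for both architectures (monotone in x).
choice = np.where(x >= 0, 1.0, -1.0)
correct = choice == s
print("ECE global :", round(ece(conf_global, correct), 3))
print("ECE auditor:", round(ece(conf_auditor, correct), 3))

# Confidence-coupled control: request one extra sample when winning-side
# confidence falls below a threshold.
thr = 0.75
request = np.maximum(conf_auditor, 1 - conf_auditor) < thr
print("extra-sample rate, normal  :", round(request[regime == 0].mean(), 3))
print("extra-sample rate, degraded:", round(request[regime == 1].mean(), 3))
```

Because both architectures make the same content-level choice on every trial, any gap in calibration error or in request rates is attributable to the confidence mapping alone, which is the dissociation the abstract claims.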