Some Simple Economics of AGI
This presentation examines a fundamental reframing of the AGI transition: the core economic constraint is not intelligence or compute, but verification. As AI makes execution nearly free, the Measurability Gap—the structural difference between what can be automated and what can be affordably verified—becomes the defining bottleneck. This gap determines where value accumulates, how risks propagate, and whether the economy evolves toward augmented prosperity or hollow collapse. The talk reveals why human-in-the-loop oversight is dynamically unstable, how alignment decays without adaptive investment, and what this means for labor, markets, and policy.

Script
As artificial intelligence drives the cost of execution toward zero, we face a paradox: our economy could produce anything, yet we cannot verify what matters. This paper identifies the Measurability Gap as the binding constraint on the AGI transition, and it changes everything we thought we knew about the economics of advanced AI.
The framework is elegant. Automation cost plummets as compute scales, following a power law. But verification cost is bottlenecked by something that does not scale: human bandwidth, experience, and the slow accumulation of judgment. The distance between these two curves is the Measurability Gap, and it is growing exponentially.
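The divergence of these two curves can be sketched numerically. This is an illustrative toy model, not the paper's actual equations: all parameter names and values (the power-law exponent, the compute growth rate, the flat verification cost) are assumptions chosen only to show the shape of the gap.

```python
# Toy sketch of the Measurability Gap: automation cost falls as a power
# law in compute, while verification cost stays pinned to fixed human
# bandwidth. Parameters are illustrative assumptions, not paper values.

C0 = 100.0      # initial cost per executed task (arbitrary units)
ALPHA = 0.5     # power-law exponent of automation cost in compute
GROWTH = 2.0    # compute doubles each period
V_COST = 50.0   # verification cost, bottlenecked by human bandwidth

def automation_cost(t):
    """Cost of executing one task at period t: C0 * compute(t)^(-ALPHA)."""
    compute = GROWTH ** t
    return C0 * compute ** (-ALPHA)

for t in range(0, 11, 2):
    gap = V_COST - automation_cost(t)
    print(f"t={t:2d}  exec={automation_cost(t):7.2f}  "
          f"verify={V_COST:6.2f}  gap={gap:7.2f}")
```

Under these assumptions, execution cost halves every two periods while verification cost never moves, so the gap widens without bound as compute scales.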
This gap does not just slow things down—it destabilizes the entire system.
The authors identify three forces that make human-in-the-loop oversight dynamically unstable. First, when AI handles all measurable work, humans lose the practice needed to stay competent verifiers. Second, the act of verification itself creates training data that accelerates the next wave of automation. Third, alignment is not a static property—it requires continuous maintenance, and when the Measurability Gap widens faster than oversight can adapt, alignment collapses.
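The first two forces form a feedback loop that a small dynamical sketch can make concrete. This is my own illustrative model of the loop described above, not the authors' system: the update rule, rates, and variable names are all assumptions.

```python
# Illustrative feedback loop (assumed model, not the paper's): as the
# automated share of work rises, humans get less hands-on practice, so
# verifier skill decays -- and each verified batch trains the next wave
# of automation, pushing the automated share higher still.

auto_share = 0.5        # fraction of measurable work done by AI
skill = 1.0             # human verifier competence (1.0 = fully practiced)

PRACTICE_RATE = 0.2     # skill regained per unit of hands-on work
DECAY_RATE = 0.3        # skill lost per unit of delegated work
AUTOMATION_PUSH = 0.05  # automation gained per verified batch (force 2)

history = []
for step in range(20):
    practice = 1.0 - auto_share
    skill += PRACTICE_RATE * practice - DECAY_RATE * auto_share
    skill = max(0.0, min(1.0, skill))               # clamp to [0, 1]
    auto_share = min(1.0, auto_share + AUTOMATION_PUSH)
    history.append((auto_share, skill))

print(f"final automated share: {history[-1][0]:.2f}, "
      f"verifier skill: {history[-1][1]:.2f}")
```

With these (assumed) rates, skill loss outpaces practice from the very first step, and the loop drives verifier competence to zero well before automation saturates: the instability the three forces describe.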
This shift has profound implications for where value lives in the economy. Execution becomes abundant and cheap. The scarcity migrates entirely to verification: the people who can audit, underwrite, and vouch for outputs in high-stakes domains. Firms are no longer valued on what they produce, but on their capacity to absorb and properly price the risks of agentic activity.
Left unaddressed, this creates a market failure the authors call the Trojan Horse externality.
Deploying AI agents without verification is profitable for individual firms but imposes unbounded risk on society. The system appears productive by conventional metrics, yet it is accumulating hidden debt in the form of misaligned outputs, undetected failures, and catastrophic tail risk. The only solution is institutional: strict liability, mandatory insurance, and funding verification infrastructure as a public good.
The future bifurcates. Without investment in verification-scale infrastructure and human augmentation, we drift toward a hollow economy: impressive activity metrics masking fundamental loss of control. But with deliberate policy and technical investment in synthetic practice, cryptographic provenance, and open ground truth registries, we can sustain an augmented economy where human agency and AI capability compound rather than compete.
The paper's simulations confirm the intuition with precision. The automation frontier expands inexorably, but the verified share hits a ceiling determined by human bandwidth. Maintaining expertise requires synthetic practice; there is no path around it. And delegating verification to AI itself is a trap: correlated errors mean that measured oversight and realized oversight diverge catastrophically.
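The correlated-errors trap lends itself to a quick Monte Carlo sketch. This is an assumed model, not the paper's simulation: the error rates and the correlation parameter are made up, but they show why an AI verifier that shares blind spots with the AI executor lets through far more failures than its measured accuracy suggests.

```python
import random

# Monte Carlo sketch (illustrative assumptions, not paper parameters):
# compare an independent verifier with one whose errors are correlated
# with the executor's -- i.e., it tends to miss exactly the outputs the
# executor got wrong.

random.seed(0)
N = 100_000
EXEC_ERROR = 0.10      # executor produces a bad output 10% of the time
VERIFIER_MISS = 0.10   # verifier's nominal miss rate on bad outputs
CORRELATION = 0.8      # fraction of executor blind spots the verifier shares

missed_independent = 0
missed_correlated = 0
for _ in range(N):
    if random.random() >= EXEC_ERROR:
        continue  # output is fine; only bad outputs can be missed
    # Independent verifier: misses with its nominal rate.
    if random.random() < VERIFIER_MISS:
        missed_independent += 1
    # Correlated verifier: shared blind spots dominate the miss rate.
    p_miss = CORRELATION + (1 - CORRELATION) * VERIFIER_MISS
    if random.random() < p_miss:
        missed_correlated += 1

print(f"bad outputs missed (independent): {missed_independent / N:.3%}")
print(f"bad outputs missed (correlated):  {missed_correlated / N:.3%}")
```

Both verifiers report the same nominal 10% miss rate, yet under these assumptions the correlated one silently passes roughly eight times as many bad outputs: measured oversight and realized oversight diverge.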
The binding constraint on AGI is not how smart it gets, but whether we can verify what it does. That constraint defines the economy we inherit. Visit EmergentMind.com to explore this paper further and create your own research videos.