Some Simple Economics of AGI
This presentation examines a rigorous economic framework that reframes the AGI transition around a critical insight: the core constraint is not intelligence or compute, but verification. As AI systems automate execution at near-zero cost, a structural gap emerges between what can be automated and what can be verified by finite human bandwidth. This Measurability Gap drives fundamental shifts in labor markets, firm structure, and systemic risk, ultimately determining whether we achieve an augmented economy of trust and discovery or a hollow economy of unverified output and hidden catastrophic risk.
The fundamental bottleneck in the age of artificial general intelligence isn't compute power or algorithmic sophistication. It's something far more constrained: our human capacity to verify what these systems actually do.
The authors model this through two competing cost curves. Automation costs plummet as compute scales, making even complex tasks nearly free to execute. But verification costs stay stubbornly high, bounded by the wage, attention, and accumulated experience of human experts. The structural difference between these curves is what they call the Measurability Gap, and it's growing fast.
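The two curves above can be sketched as a toy model. Everything here is illustrative: the power-law decay of execution cost, the flat expert-wage floor on verification, and all the numbers are assumptions for intuition, not the authors' actual functional forms.

```python
def automation_cost(compute: float, base: float = 100.0) -> float:
    """Per-task execution cost: assumed to fall as a power law in compute (toy assumption)."""
    return base / (1.0 + compute) ** 0.5

def verification_cost(expert_wage: float = 50.0, minutes_per_check: float = 30.0) -> float:
    """Per-task verification cost: bounded below by expert wage and attention,
    so it stays roughly flat no matter how cheap execution gets."""
    return expert_wage * minutes_per_check / 60.0

def measurability_gap(compute: float) -> float:
    """The structural difference between the two curves."""
    return verification_cost() - automation_cost(compute)

for c in (0, 10, 100, 1000):
    print(f"compute={c:5d}  exec=${automation_cost(c):6.2f}  "
          f"verify=${verification_cost():6.2f}  gap=${measurability_gap(c):6.2f}")
```

At low compute the gap is negative (execution costs more than checking it), but once execution becomes nearly free, the gap converges to the full cost of human verification and stays there.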
This gap doesn't just widen passively. It triggers three interlocking dynamics that destabilize the entire system.
First, when you automate measurable work, you destroy the pipeline that creates new experts—the apprenticeship loop collapses. Second, every time an expert verifies agent output, they inadvertently generate training data that accelerates the next wave of automation, eroding their own irreplaceability. Third, alignment isn't a static achievement. It's a maintenance process, and when the Measurability Gap widens faster than humans can audit, alignment drifts toward failure. Using AI to verify AI only masks this with correlated failure modes.
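The three dynamics interlock, and a minimal simulation makes the feedback visible. This is a sketch under loudly stated assumptions: the attrition rate, the pipeline size, and the rate at which verification labels feed automation are all invented parameters, not estimates from the paper.

```python
def simulate(years: int = 60):
    """Toy feedback loop: automation shrinks the apprenticeship pipeline,
    while expert verification generates the training data that advances automation."""
    experts, automation = 100.0, 0.10  # hypothetical starting pool and automation share
    history = []
    for _ in range(years):
        # 1) Apprenticeship collapse: automated entry-level work trains fewer new experts.
        new_experts = 10.0 * (1.0 - automation)
        # 2) Every expert verification labels agent output, pushing automation forward.
        automation = min(0.99, automation + 0.05 * experts / 100.0)
        # 3) Attrition plus a shrinking pipeline erodes the expert pool.
        experts = 0.95 * experts + new_experts
        history.append((experts, automation))
    return history

for year, (e, a) in enumerate(simulate()):
    if year % 10 == 0:
        print(f"year {year:2d}: experts={e:6.1f}  automation={a:.2f}")
```

The qualitative shape, not the numbers, is the point: the expert pool can even grow briefly while verification work is plentiful, but each verification accelerates the automation that starves the pipeline, so the pool eventually decays mechanically.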
Value migrates from execution to verification. Skills and credentials in automatable domains lose their wage premium entirely. The new economic moats are verification-grade network effects: curated ground truth, incident logs, and the institutional capacity to absorb liability. Firms become valued not on output volume, but on their ability to underwrite the tail risks of agentic systems.
Here's the market failure at the core. Deploying agentic systems without verification delivers immediate private returns, but offloads unbounded risk onto society. The result is a systemic accumulation of hidden debt—what the authors call counterfeit utility—outputs that satisfy narrow proxies but diverge catastrophically from real human intent. High measured productivity becomes a dangerous illusion.
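The wedge between private and social payoffs can be shown with a few lines of expected-value arithmetic. The revenue, verification spend, failure probabilities, and tail-loss figure below are hypothetical placeholders chosen only to make the divergence vivid.

```python
def private_return(revenue: float, verification_spend: float) -> float:
    """Deployer's payoff: revenue minus whatever verification they choose to buy.
    Tail risk does not appear here because it is borne by society."""
    return revenue - verification_spend

def social_value(revenue: float, verification_spend: float,
                 tail_loss: float = 10_000.0) -> float:
    """Society's expected payoff: verification lowers the chance of a large hidden loss."""
    p_failure = 0.10 if verification_spend == 0 else 0.01  # assumed probabilities
    return revenue - verification_spend - p_failure * tail_loss

# Skipping verification maximizes the deployer's private payoff...
print(private_return(100, 0), private_return(100, 20))
# ...while destroying expected social value.
print(social_value(100, 0), social_value(100, 20))
```

With these numbers the unverified deployment earns the deployer 100 versus 80, yet its expected social value is negative 900 versus negative 20: measured productivity is high precisely where hidden debt accumulates fastest.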
The system bifurcates. Without intervention, we drift toward a hollow economy: explosive activity with no one at the helm, risk hidden under metrics, and a slow erosion of meaningful human agency. But there's another path. Aggressive investment in synthetic practice to maintain expertise, cryptographic provenance to make verification cheap and auditable, and strict liability to internalize risk can steer us toward an augmented economy where trust compounds and prosperity is real.
The policy implications are sharp. Markets alone will not solve this. Liability must be enforceable and comprehensive. Verification-grade ground truth must be treated as public infrastructure, not proprietary data. And we must fund synthetic practice environments at industrial scale, because without them, the human capacity to verify collapses mechanically as automation advances.
What this paper makes undeniable is that the binding constraint on AGI-enabled economies is not the speed of inference or the depth of reasoning. It's the scaling law for verification. Human bandwidth to audit, underwrite, and steer is finite and slow to grow. Unless we build infrastructure that extends this capacity, every gain in automation becomes a step toward systemic fragility. The real race is for scalable, verifiable trust.
The Measurability Gap isn't a distant theoretical concern. It's the structural reality shaping labor, risk, and control right now. Visit EmergentMind.com to explore this framework further and create your own research videos.