
Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy (1407.6845v2)

Published 25 Jul 2014 in cs.PL and cs.GT

Abstract: Mechanism design is the study of algorithm design in which the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. Unlike typical programmatic properties, it is not sufficient for algorithms to merely satisfy the property---incentive properties are only useful if the strategic agents also believe this fact. Verification is an attractive way to convince agents that the incentive properties actually hold, but mechanism design poses several unique challenges: interesting properties can be sophisticated relational properties of probabilistic computations involving expected values, and mechanisms may rely on other probabilistic properties, like differential privacy, to achieve their goals. We introduce a relational refinement type system, called $\mathsf{HOARe}2$, for verifying mechanism design and differential privacy. We show that $\mathsf{HOARe}2$ is sound w.r.t. a denotational semantics, and correctly models $(\epsilon,\delta)$-differential privacy; moreover, we show that it subsumes DFuzz, an existing linear dependent type system for differential privacy. Finally, we develop an SMT-based implementation of $\mathsf{HOARe}2$ and use it to verify challenging examples of mechanism design, including auctions and aggregative games, and new proposed examples from differential privacy.
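For readers unfamiliar with the privacy notion the abstract refers to, the standard definition of $(\epsilon,\delta)$-differential privacy (due to Dwork et al., not specific to this paper) can be stated as follows. A randomized mechanism $M : \mathcal{D} \to \mathcal{R}$ is $(\epsilon,\delta)$-differentially private if, for all pairs of adjacent databases $D, D'$ (differing in one individual's record) and all measurable sets $S \subseteq \mathcal{R}$:

$$\Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S] + \delta.$$

This is inherently a relational property, relating the output distributions of $M$ on two related inputs, which is why a relational type system such as $\mathsf{HOARe}^2$ is a natural fit for verifying it.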

Citations (87)
