Equilibrium Selection in Information Elicitation without Verification via Information Monotonicity (1603.07751v1)

Published 24 Mar 2016 in cs.GT

Abstract: Peer-prediction is a mechanism which elicits privately-held, non-verifiable information from self-interested agents---formally, truth-telling is a strict Bayes Nash equilibrium of the mechanism. The original Peer-prediction mechanism suffers from two main limitations: (1) the mechanism must know the "common prior" of agents' signals; (2) additional undesirable and non-truthful equilibria exist, which often have a greater expected payoff than the truth-telling equilibrium. A series of results has successfully weakened the known common prior assumption. However, the equilibrium multiplicity issue remains a challenge. In this paper, we address both problems. In the setting where a common prior exists but is not known to the mechanism, we (1) show a general negative result, applying to a large class of mechanisms, that truth-telling can never pay strictly more in expectation than a particular set of equilibria in which agents collude to "relabel" the signals and report truthfully after relabeling; (2) provide a mechanism that has no information about the common prior but in which truth-telling pays as much in expectation as any relabeling equilibrium and strictly more than any other symmetric equilibrium; and (3) show, moreover, that in our mechanism, if the number of agents is sufficiently large, truth-telling pays similarly to any equilibrium close to a "relabeling" equilibrium and strictly more than any equilibrium that is not close to a relabeling equilibrium.
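
The abstract's "relabeling" equilibria can be illustrated with a small sketch. The toy output-agreement scoring rule, signal alphabet, and prior below are illustrative assumptions, not the paper's actual mechanism; the point is only that a bijective relabeling applied by all agents preserves every pairwise agreement, so agreement-based scores cannot distinguish it from truth-telling.

```python
# Minimal sketch (assumed toy setup, not the paper's mechanism): a
# "relabeling" strategy is a permutation of the signal alphabet applied
# by every agent before reporting. Under an agreement-based score, such
# a strategy profile pays exactly as much as truth-telling.

import random

SIGNALS = [0, 1, 2]  # hypothetical signal alphabet

def sample_signals(n_agents, rng):
    """Toy common prior: a latent state biases all agents toward one signal."""
    state = rng.choice(SIGNALS)
    return [state if rng.random() < 0.7 else rng.choice(SIGNALS)
            for _ in range(n_agents)]

def agreement_payoffs(reports):
    """Toy output-agreement score: each agent earns the fraction of peers
    whose report matches their own."""
    n = len(reports)
    return [sum(1 for j, p in enumerate(reports) if j != i and p == r) / (n - 1)
            for i, r in enumerate(reports)]

def relabel(reports, perm):
    """Apply a signal permutation (a 'relabeling' strategy) to all reports."""
    mapping = dict(zip(SIGNALS, perm))
    return [mapping[r] for r in reports]

rng = random.Random(0)
n_agents, n_rounds = 10, 2000
perm = [1, 2, 0]  # one fixed non-identity relabeling, chosen arbitrarily
truthful_total = relabeled_total = 0.0

for _ in range(n_rounds):
    signals = sample_signals(n_agents, rng)
    truthful_total += sum(agreement_payoffs(signals))
    relabeled_total += sum(agreement_payoffs(relabel(signals, perm)))

# Because the relabeling is a bijection applied by everyone, agreement
# counts are preserved round by round, so the two totals coincide exactly.
print(truthful_total, relabeled_total)
```

This is why the paper's negative result targets the whole set of relabeling equilibria rather than truth-telling alone: within a large class of mechanisms, no scoring rule can make truth-telling pay strictly more than these colluding-but-consistent strategies.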

Citations (34)
