Sequential Bayesian Learning with A Self-Interested Coordinator (2305.06793v1)

Published 11 May 2023 in cs.GT, cs.SY, and eess.SY

Abstract: Social learning refers to the process by which networked strategic agents learn an unknown state of the world by observing private state-related signals as well as other agents' actions. In their classic work, Bikhchandani, Hirshleifer, and Welch showed that information cascades occur in social learning: agents blindly follow others' behavior, and consequently, the actions in a cascade reveal no further information about the state. In this paper, we consider the introduction of an information coordinator to mitigate information cascades. The coordinator commits to a mechanism, a contract that agents may choose to accept or not. If an agent enters the mechanism, she pays a fee and sends a message to the coordinator indicating her private signal (not necessarily truthfully). The coordinator, in turn, suggests an action to the agents according to his knowledge and interest. We study a class of mechanisms that possess properties such as individual rationality for agents (i.e., agents are willing to enter), truth-telling, and profit maximization for the coordinator. We prove that the coordinator, without loss of optimality, can adopt a summary-based mechanism that depends on the complete observation history only through an appropriate sufficient statistic. Furthermore, we show the existence of a mechanism that strictly improves social welfare and yields strictly positive profit, so that it is acceptable to both the agents and the coordinator and beneficial to the agent community. Finally, we analyze the performance of this mechanism and show significant gains on both aforementioned metrics.
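As a concrete illustration of the cascades the paper sets out to mitigate, below is a minimal Python sketch of the classic Bikhchandani-Hirshleifer-Welch sequential learning model. This is not code from the paper: the signal accuracy `p`, the uniform prior, and the tie-breaking rule (an indifferent agent follows her own signal) are standard textbook assumptions we supply for the sketch.

```python
import random

def run_bhw(p=0.7, n_agents=30, seed=0):
    """Simulate the BHW sequential social-learning model.

    p:       probability that each private signal matches the true state (p > 0.5)
    returns: (true state, list of actions, index where a cascade began, or None)
    """
    rng = random.Random(seed)
    state = rng.choice([0, 1])   # unknown binary state, uniform prior
    diff = 0                     # inferred (#1-signals - #0-signals) from informative actions
    cascade_start = None
    actions = []

    for i in range(n_agents):
        signal = state if rng.random() < p else 1 - state  # private signal
        if abs(diff) >= 2:
            # Public evidence outweighs any single signal: the agent herds,
            # and her action reveals nothing about her private signal.
            action = 1 if diff > 0 else 0
            if cascade_start is None:
                cascade_start = i
        else:
            # |diff| <= 1: the Bayes-optimal action equals the private signal
            # (ties broken toward one's own signal), so observers can infer it.
            action = signal
            diff += 1 if signal == 1 else -1
        actions.append(action)

    return state, actions, cascade_start

if __name__ == "__main__":
    wrong = sum(1 for s in range(1000)
                if run_bhw(seed=s)[1][-1] != run_bhw(seed=s)[0])
    print(f"runs ending in a cascade on the wrong action: {wrong}/1000")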

