Distributed computation of equilibria in misspecified convex stochastic Nash games (1308.5448v6)

Published 25 Aug 2013 in math.OC

Abstract: The distributed computation of Nash equilibria is assuming growing relevance in engineering, where such problems emerge in the context of distributed control. Accordingly, we present schemes for computing equilibria of two classes of static stochastic convex games complicated by a parametric misspecification, a natural concern in the control of large-scale networked engineered systems. In both schemes, players learn the equilibrium strategy while resolving the misspecification: (1) Monotone stochastic Nash games: We present a set of coupled stochastic approximation schemes distributed across agents, in which the first scheme updates each agent's strategy via a projected (stochastic) gradient step while the second scheme updates every agent's belief regarding its misspecified parameter using an independently specified learning problem. We proceed to show that the produced sequences converge in an almost-sure sense to the true equilibrium strategy and the true parameter, respectively. Surprisingly, convergence in the equilibrium strategy achieves the optimal rate of convergence in a mean-squared sense, with a quantifiable degradation in the rate constant; (2) Stochastic Nash-Cournot games with unobservable aggregate output: We refine (1) to a Cournot setting in which the tuple of strategies is unobservable while payoff functions and strategy sets are public knowledge through a common knowledge assumption. By utilizing observations of noise-corrupted prices, iterative fixed-point schemes are developed, allowing agents to simultaneously learn the equilibrium strategies and the misspecified parameter in an almost-sure sense.
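
The abstract's scheme (1) couples two stochastic-approximation updates per agent: a projected stochastic-gradient step on the strategy and a separate learning step on the agent's belief about the misspecified parameter. The snippet below is a minimal illustrative sketch of that coupling on a toy quadratic monotone game; the game data, learning problem, strategy sets, step sizes, and all function names (sampled_gradient, sampled_learning_gradient, project_strategy) are illustrative assumptions and not taken from the paper.

```python
import numpy as np

# Sketch of coupled stochastic-approximation updates: each agent i holds a
# strategy x_i and a belief theta_i about the misspecified parameter.
# All problem data below is a toy example, not the paper's model.

rng = np.random.default_rng(0)

N = 3          # number of agents (assumed)
dim_x = 2      # dimension of each agent's strategy (assumed)
dim_theta = 1  # dimension of the misspecified parameter (assumed)
T = 5000       # number of iterations

theta_true = np.array([0.5])   # unknown true parameter (assumed)

def sampled_gradient(i, x, theta_i):
    """Noisy sample of agent i's partial gradient at the joint strategy x,
    evaluated with the agent's current belief theta_i.
    Toy game: cost_i(x) = 0.5*||x_i||^2 + theta * x_i . sum_{j != i} x_j."""
    noise = rng.normal(scale=0.1, size=dim_x)
    return x[i] + theta_i * (x.sum(axis=0) - x[i]) + noise

def sampled_learning_gradient(theta_i):
    """Noisy gradient of an independently specified learning problem whose
    minimizer is theta_true (here a simple stochastic least-squares problem)."""
    obs = theta_true + rng.normal(scale=0.1, size=dim_theta)
    return theta_i - obs

def project_strategy(x_i):
    """Projection onto the (assumed) strategy set, here the box [0, 1]^dim_x."""
    return np.clip(x_i, 0.0, 1.0)

x = rng.uniform(size=(N, dim_x))     # initial strategies
theta = np.zeros((N, dim_theta))     # initial parameter beliefs

for k in range(1, T + 1):
    gamma_k = 1.0 / k   # diminishing step size for the strategy update
    alpha_k = 1.0 / k   # diminishing step size for the learning update
    x_new = x.copy()
    for i in range(N):
        # (a) projected stochastic-gradient step on the strategy
        x_new[i] = project_strategy(x[i] - gamma_k * sampled_gradient(i, x, theta[i]))
        # (b) stochastic-approximation step on the parameter belief
        theta[i] = theta[i] - alpha_k * sampled_learning_gradient(theta[i])
    x = x_new

print("final beliefs:", theta.ravel(), "(true parameter:", theta_true, ")")
print("final strategies:", x)
```

In this toy setup both updates use diminishing step sizes, mirroring the standard stochastic-approximation requirement; the beliefs theta_i converge to theta_true while the strategies are driven toward the equilibrium of the correctly specified game.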
