
Dynamic Random Subjective Expected Utility (1808.00296v1)

Published 1 Aug 2018 in econ.TH

Abstract: Dynamic Random Subjective Expected Utility (DR-SEU) allows us to model choice data observed from an agent, or a population of agents, whose beliefs about objective payoff-relevant states and whose tastes can both evolve stochastically. Our observable, the augmented Stochastic Choice Function (aSCF), allows, in contrast to previous work in decision theory, for a direct test of whether the agent's beliefs reflect the true data-generating process conditional on their private information, as well as for identification of possibly incorrect beliefs. We give an axiomatic characterization of when an agent satisfies the model, both in a static and in a dynamic setting. We consider both the case in which the agent has correct beliefs about the evolution of objective states and the case in which her beliefs are incorrect but unforeseen contingencies are impossible. We also distinguish two subvariants of the dynamic model that coincide in the static setting: Evolving SEU, where a sophisticated agent's utility evolves according to a Bellman equation, and Gradual Learning, where the agent learns about her taste. We prove simple and natural comparative statics results on the degree of belief incorrectness as well as on the speed of learning about taste. Auxiliary results contained in the online appendix extend previous decision-theoretic work in the menu-choice and stochastic-choice literatures from both a technical and a conceptual perspective.
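To give a rough sense of the kind of recursion the Evolving SEU variant refers to, the following is a minimal sketch of a generic dynamic subjective-expected-utility (Bellman-type) value equation; the notation (value function $V_t$, menu $A_t$, felicity $u_t$, belief $\mu_t$, objective state $s_t$, discount factor $\delta$) is illustrative and not taken from the paper itself.

$$
V_t(A_t, \mu_t) \;=\; \max_{f \in A_t} \; \mathbb{E}_{\mu_t}\!\left[ \, u_t\big(f(s_t)\big) \;+\; \delta \, \mathbb{E}\big[\, V_{t+1}(A_{t+1}, \mu_{t+1}) \,\big|\, s_t, \mu_t \,\big] \right]
$$

Read loosely: in each period the agent chooses an act $f$ from her current menu $A_t$, evaluates its payoff in the realized objective state $s_t$ under her current taste $u_t$ and belief $\mu_t$, and anticipates how menus, tastes, and beliefs evolve going forward. The paper's actual formulation and axioms should be consulted for the precise objects involved.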
