
Amortized Bayesian Inference for Models of Cognition (2005.03899v3)

Published 8 May 2020 in stat.ML and cs.LG

Abstract: As models of cognition grow in complexity and number of parameters, Bayesian inference with standard methods can become intractable, especially when the data-generating model is of unknown analytic form. Recent advances in simulation-based inference using specialized neural network architectures circumvent many previous problems of approximate Bayesian computation. Moreover, due to the properties of these special neural network estimators, the effort of training the networks via simulations amortizes over subsequent evaluations, which can reuse the same network for multiple datasets and across multiple researchers. However, these methods have so far been largely underutilized in cognitive science and psychology, even though they are well suited for tackling a wide variety of modeling problems. With this work, we provide a general introduction to amortized Bayesian parameter estimation and model comparison and demonstrate the applicability of the proposed methods on a well-known class of intractable response-time models.
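To make the amortization idea concrete, here is a minimal sketch of neural posterior estimation in PyTorch: parameters are drawn from a prior, datasets are simulated, and a network is trained once to map data summaries to an approximate posterior, after which inference on any new dataset is a single forward pass. This is not the paper's method: the toy Gaussian simulator, the Gaussian posterior head, and all names (`simulate`, `AmortizedPosterior`) are illustrative assumptions standing in for the specialized architectures the abstract refers to.

```python
# Sketch of amortized neural posterior estimation on a toy Gaussian model.
# Assumption: a simple Gaussian posterior head replaces the paper's
# specialized (invertible) architectures; all names here are hypothetical.
import torch
import torch.nn as nn

def simulate(batch_size, n_obs=50):
    """Draw parameters from the prior and simulate datasets."""
    mu = torch.randn(batch_size, 1)               # prior: mu ~ N(0, 1)
    log_sigma = 0.5 * torch.randn(batch_size, 1)  # prior: log_sigma ~ N(0, 0.25)
    theta = torch.cat([mu, log_sigma], dim=1)
    x = mu + log_sigma.exp() * torch.randn(batch_size, n_obs)
    # Fixed-size summary statistics of each dataset (mean, log-std)
    summaries = torch.stack([x.mean(1), x.std(1).log()], dim=1)
    return theta, summaries

class AmortizedPosterior(nn.Module):
    """Maps data summaries to a Gaussian approximate posterior over theta."""
    def __init__(self, summary_dim=2, theta_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(summary_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * theta_dim),  # posterior means and log-stds
        )

    def forward(self, summaries):
        mean, log_std = self.net(summaries).chunk(2, dim=1)
        return torch.distributions.Normal(mean, log_std.exp())

model = AmortizedPosterior()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training cost amortizes over simulations: maximize log q(theta | summaries).
for step in range(2000):
    theta, summaries = simulate(batch_size=128)
    loss = -model(summaries).log_prob(theta).sum(1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# A new dataset needs no refitting -- just one forward pass through the network.
theta_true, new_summaries = simulate(batch_size=1)
posterior = model(new_summaries)
print("posterior mean:", posterior.mean, "true theta:", theta_true)
```

The same trained network can be applied to any number of observed datasets drawn from the same model family, which is what makes the up-front simulation-and-training effort worthwhile across datasets and researchers.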

Authors (4)
  1. Andreas Voss (10 papers)
  2. Eva Marie Wieschen (1 paper)
  3. Paul-Christian Bürkner (58 papers)
  4. Stefan T. Radev (31 papers)
Citations (4)
