Bayesian Semiparametric Longitudinal Inverse-Probit Mixed Models for Category Learning (2112.04626v4)

Published 8 Dec 2021 in stat.ME

Abstract: Understanding how the adult human brain learns novel categories is an important problem in neuroscience. Drift-diffusion models are popular in such contexts for their ability to mimic the underlying neural mechanisms. One such model for gradual longitudinal learning was recently developed by Paulon et al. (2021). Fitting conventional drift-diffusion models, however, requires data on both category responses and associated response times. In practice, category response accuracies are often the only reliable measure recorded by behavioral scientists to describe human learning. To our knowledge, however, drift-diffusion models for such scenarios have never been considered in the literature. To address this gap, in this article we build carefully on Paulon et al. (2021), but now with latent response times integrated out, to derive a novel biologically interpretable class of 'inverse-probit' categorical probability models for observed categories alone. This new marginal model, however, presents significant identifiability and inferential challenges not encountered originally for the joint model by Paulon et al. (2021). We address these new challenges using a novel projection-based approach with a symmetry-preserving identifiability constraint that allows us to work with conjugate priors in an unconstrained space. We adapt the model for group- and individual-level inference in longitudinal settings. Building again on the model's latent variable representation, we design an efficient Markov chain Monte Carlo algorithm for posterior computation. We evaluate the empirical performance of the method through simulation experiments. The practical efficacy of the method is illustrated in applications to longitudinal tone learning studies.
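
To make the marginal construction concrete, the sketch below (not the authors' code) approximates the 'inverse-probit' category probabilities by Monte Carlo. In the drift-diffusion race formulation of Paulon et al. (2021), each category d has a drift mu_d and boundary b_d, so its first-passage time is tau_d ~ Inverse-Gaussian(mean = b_d/mu_d, shape = b_d^2) under unit diffusion; integrating the latent times out amounts to recording which accumulator finishes first. The drift and boundary values here are illustrative assumptions, not estimates from the paper.

```python
# A minimal sketch of the marginal 'inverse-probit' category probabilities:
# simulate the inverse-Gaussian race and integrate response times out by
# keeping only the identity of the winning accumulator.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)

def category_probs(mu, b, n_sim=100_000):
    """Estimate P(y = d) = P(tau_d = min_d' tau_d') by simulation.

    mu : per-category drifts (illustrative values below)
    b  : per-category decision boundaries
    """
    mu, b = np.asarray(mu, float), np.asarray(b, float)
    mean, shape = b / mu, b ** 2          # IG(mean, shape) per category
    # scipy's invgauss(mu=m, scale=s) is IG(mean = m * s, shape = s)
    taus = invgauss.rvs(mu=mean / shape, scale=shape,
                        size=(n_sim, len(mu)), random_state=rng)
    winners = taus.argmin(axis=1)         # first accumulator to hit its boundary
    return np.bincount(winners, minlength=len(mu)) / n_sim

# Example: three categories; a higher drift relative to its boundary wins more often.
print(category_probs(mu=[1.5, 1.0, 0.8], b=[2.0, 2.0, 2.0]))
```

Paulon et al. (2021) fit this race jointly to observed categories and response times; the marginal model studied here conditions only on the winning category, which is what gives rise to the identifiability challenges the abstract describes.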
