Immersive virtual games: winners for deep cognitive assessment (2502.10290v2)
Abstract: Studies of human cognition often rely on brief, controlled tasks that emphasize group-level effects but poorly capture individual variability. A suite of minigames on the novel PixelDOPA platform was designed to overcome these limitations by embedding classic cognitive tasks in a 3D virtual environment with continuous behavior logging. Four minigames target constructs that overlap with NIH Toolbox tasks: processing speed, rule shifting, inhibitory control, and working memory. In a clinical sample of 60 participants tested outside a controlled lab setting, large correlations (r=0.42-0.93) were found between PixelDOPA tasks and their NIH Toolbox counterparts, despite differences in stimuli and task structure. Process-informed metrics (e.g., gaze-based response times) improved task convergence and data quality. Test-retest analyses showed high reliability (ICC=0.52-0.83) for all minigames. Beyond endpoint metrics, movement and gaze trajectories revealed stable, idiosyncratic gameplay strategy profiles, with unsupervised clustering differentiating participants by navigational and viewing behaviors. These trajectory-based features showed lower within-person than between-person variability, enabling participant identification across sessions. Game-based tasks can therefore retain the psychometric rigor of standard cognitive assessments while providing insight into dynamic, individual-specific behaviors. By pairing an engaging, customizable game engine with comprehensive behavioral tracking, power to detect individual differences can be boosted without sacrificing group-level inference. These findings point toward cognitive measures that are both robust and ecologically valid, even in less-than-ideal data collection settings.
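The abstract's two key quantitative ideas, test-retest reliability (ICC) and trajectory-based participant identification, can be illustrated with a short sketch. The Python code below is a minimal illustration on synthetic data, not the paper's actual pipeline: the ICC(2,1) implementation follows the standard Shrout-Fleiss two-way random-effects formula (the abstract does not specify which ICC variant was used), and the feature set, participant count per feature, cluster count, and noise scale are all hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure.

    scores: (n_subjects, k_sessions) array holding one metric per session.
    """
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # sessions
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subject mean square
    msc = ss_cols / (k - 1)                 # between-session mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)

# Hypothetical per-session trajectory features (e.g., path length, turn rate,
# gaze dispersion) for 60 participants over 2 sessions: a stable per-person
# "style" plus smaller session-to-session noise.
n_sub, n_feat = 60, 6
style = rng.normal(size=(n_sub, n_feat))
sessions = style[:, None, :] + 0.3 * rng.normal(size=(n_sub, 2, n_feat))

# Test-retest reliability of a single endpoint metric (here, feature 0).
icc = icc2_1(sessions[:, :, 0])

# Unsupervised strategy clusters from session-1 features.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sessions[:, 0])

# Re-identification: match each session-2 profile to the nearest session-1 profile.
dists = np.linalg.norm(sessions[:, 1][:, None] - sessions[:, 0][None], axis=-1)
hit_rate = np.mean(dists.argmin(axis=1) == np.arange(n_sub))
print(f"ICC(2,1)={icc:.2f}, cross-session identification rate={hit_rate:.2f}")
```

Nearest-profile matching succeeds exactly when within-person variability (the 0.3 noise scale here) is small relative to between-person spread, which is the property the abstract reports for the trajectory-based features.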