Discovering group dynamics in coordinated time series via hierarchical recurrent switching-state models (2401.14973v2)

Published 26 Jan 2024 in stat.ML and cs.LG

Abstract: We seek a computationally efficient model for a collection of time series arising from multiple interacting entities (a.k.a. "agents"). Recent models of spatiotemporal patterns across individuals fail to incorporate explicit system-level collective behavior that can influence the trajectories of individual entities. To address this gap in the literature, we present a new hierarchical switching-state model that can be trained in an unsupervised fashion to simultaneously learn both system-level and individual-level dynamics. We employ a latent system-level discrete state Markov chain that provides top-down influence on latent entity-level chains which in turn govern the emission of each observed time series. Recurrent feedback from the observations to the latent chains at both entity and system levels allows recent situational context to inform how dynamics unfold at all levels in bottom-up fashion. We hypothesize that including both top-down and bottom-up influences on group dynamics will improve interpretability of the learned dynamics and reduce error when forecasting. Our hierarchical switching recurrent dynamical model can be learned via closed-form variational coordinate ascent updates to all latent chains that scale linearly in the number of entities. This is asymptotically no more costly than fitting a separate model for each entity. Analysis of both synthetic data and real basketball team movements suggests our lean parametric model can achieve competitive forecasts compared to larger neural network models that require far more computational resources. Further experiments on soldier data as well as a synthetic task with 64 cooperating entities show how our approach can yield interpretable insights about team dynamics over time.
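To make the generative structure described in the abstract concrete, here is a minimal sketch of sampling from a simplified hierarchical recurrent switching-state model. All dimensions, parameter names, and distributional choices below are illustrative assumptions, not the paper's actual specification: a system-level discrete Markov chain selects per-state transition matrices for the entity-level chains (top-down influence), entity transitions are additionally modulated by the previous observation (bottom-up recurrence), and each entity state indexes a linear dynamical emission. The paper's system-level recurrence is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): K system states, L entity
# states, J entities, D-dimensional observations, T time steps.
K, L, J, D, T = 3, 4, 5, 2, 50

# System-level transition matrix; entity-level transition matrices,
# one L x L matrix per system state k (the top-down influence).
Pi_sys = rng.dirichlet(np.ones(K), size=K)        # (K, K)
Pi_ent = rng.dirichlet(np.ones(L), size=(K, L))   # (K, L, L)

# Recurrence weights: previous observation shifts entity transition logits.
W = rng.normal(0.0, 0.1, size=(L, L, D))

# Per-entity-state linear emission dynamics: x_t = A x_{t-1} + b + noise.
A = rng.normal(0.0, 0.1, size=(L, D, D))
b = rng.normal(0.0, 0.1, size=(L, D))


def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()


def sample(T):
    """Sample one trajectory from the simplified hierarchical model."""
    z = np.zeros(T, dtype=int)        # system-level discrete chain
    s = np.zeros((T, J), dtype=int)   # entity-level discrete chains
    x = np.zeros((T, J, D))           # observed time series per entity
    x[0] = rng.normal(size=(J, D))
    for t in range(1, T):
        z[t] = rng.choice(K, p=Pi_sys[z[t - 1]])
        for j in range(J):
            # Top-down: transition matrix chosen by the system state.
            # Bottom-up: logits shifted by the previous observation.
            logits = (np.log(Pi_ent[z[t], s[t - 1, j]])
                      + W[s[t - 1, j]] @ x[t - 1, j])
            s[t, j] = rng.choice(L, p=softmax(logits))
            mean = A[s[t, j]] @ x[t - 1, j] + b[s[t, j]]
            x[t, j] = mean + rng.normal(0.0, 0.05, size=D)
    return z, s, x


z, s, x = sample(T)
print(x.shape)  # (50, 5, 2)
```

Note that inference in the paper proceeds by closed-form variational coordinate ascent over the latent chains, which this forward-sampling sketch does not attempt to reproduce.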

