A Variational Autoencoder for Neural Temporal Point Processes with Dynamic Latent Graphs (2312.16083v2)

Published 26 Dec 2023 in cs.LG and stat.ML

Abstract: Continuously observed event occurrences often exhibit self- and mutually-exciting effects, which can be well modeled using temporal point processes. Beyond that, these event dynamics may also change over time, with certain periodic trends. We propose a novel variational auto-encoder to capture such a mixture of temporal dynamics. More specifically, the whole time interval of the input sequence is partitioned into a set of sub-intervals. The event dynamics are assumed to be stationary within each sub-interval, but could be changing across those sub-intervals. In particular, we use a sequential latent variable model to learn a dependency graph between the observed dimensions, for each sub-interval. The model predicts the future event times by using the learned dependency graph to remove the noncontributing influences of past events. By doing so, the proposed model demonstrates its higher accuracy in predicting inter-event times and event types for several real-world event sequences, compared with existing state-of-the-art neural point processes.


Summary

  • The paper introduces VAETPP, a novel method leveraging a variational autoencoder with dynamic latent graphs to capture evolving event dependencies.
  • It employs a graph recurrent neural network and a log-normal mixture decoder to effectively predict inter-event times and event types.
  • Empirical evaluations on real-world datasets show VAETPP outperforms existing models in negative log-likelihood, RMSE, and prediction accuracy.

A Variational Autoencoder for Neural Temporal Point Processes with Dynamic Latent Graphs: An Overview

The paper introduces a novel approach to model temporal point processes (TPPs) by incorporating dynamic latent graphs using a variational auto-encoder (VAE). This approach, termed variational autoencoder temporal point process (VAETPP), is specifically designed to handle sequences of events with complicated time-varying dependencies.

Model Overview

The temporal dynamics of event occurrences, such as customer interactions or social media activities, often exhibit self- and mutually-exciting effects. Traditional models like Hawkes processes (HPs) capture these mutual excitations but do not account for latent state transitions. Neural TPPs, which use neural networks to capture dependencies, typically rely on static graphs that do not reflect how dependencies change over time.
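As background, the conditional intensity of a classical univariate Hawkes process with an exponential kernel is λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)). The minimal sketch below (with illustrative parameter values, not taken from the paper) shows how a burst of recent events raises the intensity:

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an exponential kernel:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    mu, alpha, beta are illustrative values, not parameters from the paper."""
    past = np.asarray(event_times)
    past = past[past < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

# Intensity shortly after a burst of events is elevated relative to the baseline mu.
print(hawkes_intensity(5.0, [1.0, 4.2, 4.8]))
```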

VAETPP addresses these limitations by modeling dynamic dependencies among event types. The event sequence timeline is partitioned into regularly spaced sub-intervals; events within each sub-interval are assumed to follow stationary dynamics, while those dynamics can change across sub-intervals. The model uses a VAE framework to learn a distinct latent graph for each sub-interval, effectively capturing the temporal evolution of dependencies.
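As a rough illustration of the partitioning step, the sketch below splits a sequence observed on [0, t_max) into equally spaced sub-intervals; the function and variable names are placeholders, not the authors' implementation:

```python
import numpy as np

def partition_events(event_times, event_types, t_max, num_subintervals):
    """Split a marked event sequence on [0, t_max) into equally spaced sub-intervals.
    Within each sub-interval the dynamics are treated as stationary; a separate
    latent dependency graph would then be inferred per sub-interval."""
    edges = np.linspace(0.0, t_max, num_subintervals + 1)
    buckets = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (event_times >= lo) & (event_times < hi)
        buckets.append((event_times[mask], event_types[mask]))
    return buckets

times = np.array([0.3, 1.1, 1.7, 2.4, 3.9])
types = np.array([0, 2, 1, 0, 2])
for k, (t_k, m_k) in enumerate(partition_events(times, types, t_max=4.0, num_subintervals=4)):
    print(f"sub-interval {k}: times={t_k}, types={m_k}")
```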

Technical Contributions

  1. Dynamic Graph Learning: VAETPP introduces a sequential latent variable model to learn dependency graphs between observed event dimensions within each sub-interval. This allows the model to adapt to changing dependencies over time.
  2. Variational Autoencoder Framework: The VAE framework encodes observed event sequences in order to generate a dynamic latent graph, which removes non-contributing influences of past events. This is achieved through the use of a graph recurrent neural network (GRNN), which evolves the latent embeddings over time.
  3. Log-normal Mixture Decoder: The model predicts inter-event times using a log-normal mixture distribution, which provides flexibility and accuracy in capturing the distribution of inter-event times; see the sketch after this list.
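The log-normal mixture component is the easiest part to make concrete. The sketch below is a minimal PyTorch illustration rather than the authors' code: it builds a K-component log-normal mixture over positive inter-event times and evaluates the negative log-likelihood term that would enter the training objective. In the actual model, the mixture parameters would be produced by the decoder conditioned on the history embedding and the per-interval latent graph.

```python
import torch
from torch.distributions import Categorical, LogNormal, MixtureSameFamily

def lognormal_mixture(logits, loc, log_scale):
    """Build a K-component log-normal mixture over positive inter-event times.
    logits, loc, log_scale have shape (..., K); in the full model they would be
    outputs of the decoder network (names here are illustrative)."""
    return MixtureSameFamily(
        mixture_distribution=Categorical(logits=logits),
        component_distribution=LogNormal(loc, log_scale.exp()),
    )

# Toy check with K = 3 components for a batch of 2 events.
K = 3
logits, loc, log_scale = torch.zeros(2, K), torch.zeros(2, K), torch.zeros(2, K)
dist = lognormal_mixture(logits, loc, log_scale)
tau = torch.tensor([0.5, 2.0])        # observed inter-event times
nll = -dist.log_prob(tau).mean()      # negative log-likelihood term of the objective
print(nll.item())
```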

Empirical Evaluation

The proposed VAETPP was evaluated on multiple real-world datasets, such as New York Motor Vehicle Collisions (NYMVC) and various Stack Exchange datasets (MathOF, AskUbuntu, SuperUser). The evaluation metrics included negative log-likelihood (NLL), root mean square error (RMSE) for event time prediction, and accuracy for event type prediction.
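For concreteness, the three reported metrics can be computed as in the following sketch; the array names are placeholders, and this is not the paper's evaluation code:

```python
import numpy as np

def evaluate(per_event_log_lik, pred_times, true_times, pred_types, true_types):
    """Illustrative versions of the three reported metrics."""
    nll = -np.mean(per_event_log_lik)                        # negative log-likelihood
    rmse = np.sqrt(np.mean((pred_times - true_times) ** 2))  # inter-event time error
    acc = np.mean(pred_types == true_types)                  # event-type accuracy
    return nll, rmse, acc

print(evaluate(np.array([-1.2, -0.7]),
               np.array([0.9, 1.8]), np.array([1.0, 2.0]),
               np.array([0, 1]), np.array([0, 2])))
```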

The VAETPP demonstrated superior performance compared to existing models such as RMTPP, FullyNN, LogNormMix, and THP. Specifically, VAETPP consistently achieved the lowest NLL values across all datasets, indicative of its strong capability in modeling inter-event times. It also exhibited better performance in event time and type prediction tasks, showcasing its efficacy in capturing dynamic dependencies.

Theoretical and Practical Implications

VAETPP advances the theoretical understanding of TPPs by integrating dynamic latent graphs. This approach is particularly relevant for applications where event dependencies evolve over time, such as social media interactions, recommendation systems, and network security monitoring. The model's ability to handle periodically changing dynamics enhances its practical applicability to real-world scenarios.

Future Directions

Future research could explore extending VAETPP to automatically infer the sub-intervals for dynamic graph learning, potentially leading to models that can more flexibly adapt to non-stationary event sequences. Additionally, integrating more sophisticated graph neural network architectures might further enhance the model's capacity to capture complex dependency structures.

In conclusion, VAETPP represents a significant advancement in the modeling of temporal point processes. By incorporating dynamic latent graphs within a variational autoencoder framework, it effectively captures evolving dependencies, leading to improved predictive performance in real-world event sequences.
