Towards Controllable Time Series Generation (2403.03698v1)

Published 6 Mar 2024 in cs.LG, cs.AI, and cs.DB

Abstract: Time Series Generation (TSG) has emerged as a pivotal technique in synthesizing data that accurately mirrors real-world time series, becoming indispensable in numerous applications. Despite significant advancements in TSG, its efficacy frequently hinges on having large training datasets. This dependency presents a substantial challenge in data-scarce scenarios, especially when dealing with rare or unique conditions. To confront these challenges, we explore a new problem of Controllable Time Series Generation (CTSG), aiming to produce synthetic time series that can adapt to various external conditions, thereby tackling the data scarcity issue. In this paper, we propose Controllable Time Series (CTS), an innovative VAE-agnostic framework tailored for CTSG. A key feature of CTS is that it decouples the mapping process from standard VAE training, enabling precise learning of a complex interplay between latent features and external conditions. Moreover, we develop a comprehensive evaluation scheme for CTSG. Extensive experiments across three real-world time series datasets showcase CTS's exceptional capabilities in generating high-quality, controllable outputs. This underscores its adeptness in seamlessly integrating latent features with external conditions. Extending CTS to the image domain highlights its remarkable potential for explainability and further reinforces its versatility across different modalities.
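The decoupled, two-stage design the abstract describes can be made concrete with a small sketch. The following is a hypothetical illustration of that idea, not the paper's implementation: a standard VAE is trained on time series and then frozen, after which a separate network learns to map external conditions into the VAE's latent space, so decoding a mapped condition yields a condition-controlled series. All names, architectures, and the MSE matching objective are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage idea from the abstract (not the
# authors' code): (1) train a standard VAE on time series, (2) afterwards
# learn a separate mapping from external conditions to the frozen latent
# space, so generation can be steered by conditions.
import torch
import torch.nn as nn

class TimeSeriesVAE(nn.Module):
    """Plain VAE over fixed-length series; architecture is illustrative."""
    def __init__(self, seq_len: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, seq_len)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

class ConditionMapper(nn.Module):
    """Stage 2: maps external conditions into the VAE's latent space."""
    def __init__(self, cond_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )

    def forward(self, c):
        return self.net(c)

# Stage 2 training step: with the VAE frozen (stage-1 training omitted),
# fit the mapper so conditions land on the latent codes of matching real
# series; decoding mapper(c) then generates a series for a new condition c.
vae = TimeSeriesVAE(seq_len=24, latent_dim=8)
mapper = ConditionMapper(cond_dim=3, latent_dim=8)
opt = torch.optim.Adam(mapper.parameters(), lr=1e-3)
x = torch.randn(32, 24)  # toy batch of series
c = torch.randn(32, 3)   # matching external conditions
with torch.no_grad():
    target_mu, _ = vae.encode(x)  # frozen VAE's latent codes
loss = nn.functional.mse_loss(mapper(c), target_mu)
opt.zero_grad()
loss.backward()
opt.step()
synthetic = vae.decoder(mapper(c))  # condition-controlled generation
```

Keeping the condition-to-latent mapping outside VAE training is what would make such an approach VAE-agnostic: any pretrained encoder/decoder pair could, in principle, be swapped in without retraining the mapper's host model.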

