Modeling Spatio-temporal Dynamical Systems with Neural Discrete Learning and Levels-of-Experts (2402.05970v1)

Published 6 Feb 2024 in cs.LG and cs.AI

Abstract: In this paper, we address the issue of modeling and estimating changes in the state of spatio-temporal dynamical systems based on a sequence of observations such as video frames. Traditional numerical simulation systems depend largely on the initial settings and the correctness of the constructed partial differential equations (PDEs). Despite recent efforts yielding significant success in discovering data-driven PDEs with neural networks, the limitations posed by singular scenarios and the absence of local insights prevent them from performing effectively in broader real-world contexts. To this end, this paper proposes a universal expert module -- namely, an optical flow estimation component -- to capture the evolution laws of general physical processes in a data-driven fashion. To enhance local insight, we painstakingly design a finer-grained physical pipeline, since local characteristics may be influenced by various internal contextual information that can contradict the macroscopic properties of the whole system. Further, we harness currently popular neural discrete learning to unveil the important underlying features in the latent space; this process injects interpretability and helps us obtain a powerful prior over these discrete random variables. We conduct extensive experiments and ablations to demonstrate that the proposed framework achieves large performance margins compared with existing SOTA baselines.
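
The abstract names two core mechanisms: an optical-flow "expert" that captures the evolution of the observed field in a data-driven way, and a neural discrete learning bottleneck (vector quantization in the style of VQ-VAE) over the latent space. The paper's own implementation is not reproduced here, so the snippet below is only a minimal, hypothetical PyTorch sketch of those two ingredients; the function and class names (warp_with_flow, VectorQuantizer), the codebook size, and the commitment weight are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the paper's implementation): a flow-based warping
# step plus a VQ-style discrete bottleneck, the two ingredients the abstract
# describes. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_flow(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Advect `frame` (B, C, H, W) along a dense flow field `flow` (B, 2, H, W)."""
    b, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x-coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y-coordinates
    # Normalize to [-1, 1] as grid_sample expects.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(frame, grid, align_corners=True)


class VectorQuantizer(nn.Module):
    """Minimal VQ bottleneck in the style of van den Oord et al. (2017)."""

    def __init__(self, num_codes: int = 512, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z: torch.Tensor):
        # z: (B, D, H, W) continuous latents -> nearest codebook entries.
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)       # (B*H*W, D)
        dists = torch.cdist(flat, self.codebook.weight)   # (B*H*W, K)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)
        # Codebook + commitment losses, with a straight-through estimator.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()
        return z_q, idx.view(b, h, w), loss


if __name__ == "__main__":
    frame = torch.randn(2, 3, 64, 64)
    flow = torch.randn(2, 2, 64, 64)      # stand-in for an estimated flow field
    advected = warp_with_flow(frame, flow)

    vq = VectorQuantizer(num_codes=512, dim=64)
    latent = torch.randn(2, 64, 16, 16)   # stand-in for an encoder's output
    quantized, codes, vq_loss = vq(latent)
    print(advected.shape, quantized.shape, codes.shape, float(vq_loss))
```

In a full model of the kind the abstract describes, the flow field would come from a learned estimator rather than random noise, and the quantized codes would feed a decoder that predicts the next observation; the sketch only illustrates the warping and quantization operations themselves.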
