
Understanding the Impact of Synchronous, Asynchronous, and Hybrid In-Situ Techniques in Computational Fluid Dynamics Applications (2407.20717v1)

Published 30 Jul 2024 in cs.PF and cs.CE

Abstract: High-Performance Computing (HPC) systems provide input/output (IO) performance that grows slowly relative to peak computational performance, and they have limited storage capacity. Computational Fluid Dynamics (CFD) applications aiming to leverage the full power of Exascale HPC systems, such as the solver Nek5000, will generate massive data for further processing. These data need to be stored efficiently via the IO subsystem, but limited IO performance and storage capacity can create bottlenecks for performance and, in turn, scientific discovery. Compared to traditional post-processing methods, in-situ techniques can reduce or avoid writing and reading data through the IO subsystem, making them a promising solution to these problems. In this paper, we study the performance and resource usage of three in-situ use cases: data compression, image generation, and uncertainty quantification. We furthermore analyze three approaches in which these in-situ tasks and the simulation are executed: synchronously, asynchronously, or in a hybrid manner. In-situ compression can reduce IO time and storage requirements while maintaining data accuracy, and in-situ visualization and analysis can save terabytes of data from being routed through the IO subsystem to storage. However, the overall efficiency depends crucially on the characteristics of both the in-situ task and the simulation, and in some cases the overhead introduced by the in-situ tasks can be substantial. It is therefore essential to choose the proper in-situ approach, synchronous, asynchronous, or hybrid, to minimize overhead and maximize the benefits of concurrent execution.
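
To make the synchronous/asynchronous distinction concrete, here is a minimal sketch in Python (not from the paper; Nek5000 itself is a Fortran/MPI code, and the paper's asynchronous tasks run on separate cores or nodes rather than a thread). The functions `simulation_step` and `in_situ_task` are hypothetical placeholders for a solver time step and an in-situ analysis such as compression or image generation.

```python
import concurrent.futures
import numpy as np

def simulation_step(step: int) -> np.ndarray:
    """Advance the solver one time step; placeholder for a CFD field update."""
    return np.random.rand(64, 64)

def in_situ_task(field: np.ndarray) -> float:
    """Stand-in for an in-situ analysis: a cheap reduction over the field."""
    return float(field.mean())

def run_synchronous(n_steps: int) -> None:
    # Synchronous: the solver blocks until the in-situ task finishes,
    # so the task's cost adds directly to the time per step.
    for step in range(n_steps):
        field = simulation_step(step)
        in_situ_task(field)

def run_asynchronous(n_steps: int) -> None:
    # Asynchronous: the in-situ task runs on separate resources (here a
    # worker thread), overlapping with the next simulation step at the
    # cost of extra resources and a copy of the field.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        pending = None
        for step in range(n_steps):
            field = simulation_step(step)
            if pending is not None:
                pending.result()  # wait for the previous task to finish
            # Copy so the solver can reuse its buffer while the task runs.
            pending = pool.submit(in_situ_task, field.copy())
        if pending is not None:
            pending.result()

if __name__ == "__main__":
    run_synchronous(10)
    run_asynchronous(10)
```

In the synchronous variant every in-situ operation lengthens each step; in the asynchronous variant the analysis overlaps the next step at the price of dedicated resources and data copies. A hybrid scheme mixes the two, running cheap tasks inline and expensive ones concurrently, which mirrors the trade-off the paper evaluates.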

