
Task-parallelism in SWIFT for heterogeneous compute architectures (2505.14538v1)

Published 20 May 2025 in cs.PF and astro-ph.IM

Abstract: This paper highlights the first steps towards enabling graphics processing unit (GPU) acceleration of the smoothed particle hydrodynamics (SPH) solver for cosmology SWIFT and creating a hydrodynamics solver capable of fully leveraging the hardware available on heterogeneous exascale machines composed of central and graphics processing units (CPUs and GPUs). Exploiting the existing task-based parallelism in SWIFT, novel combinations of algorithms are presented which enable SWIFT to function as truly heterogeneous software, leveraging CPUs for memory-bound computations concurrently with GPUs for compute-bound computations in a manner which minimises the effects of CPU-GPU communication latency. These algorithms are validated in extensive testing, which shows that the GPU acceleration methodology is capable of delivering up to 3.5x speedups for SWIFT's SPH hydrodynamics computation kernels when including the time required to prepare the computations and unpack the results on the CPU. Speedups of 7.5x are demonstrated when not including the CPU data preparation and unpacking times. Whilst these measured speedups are substantial, it is shown that the overall performance of the hydrodynamic solver for a full simulation, when accelerated on the GPU of state-of-the-art superchips, is only marginally faster than the code's performance when using the Grace Hopper superchip's fully parallelised CPU capabilities. This is shown to be mostly due to excessive fine-graining of the tasks prior to offloading on the GPU. Fine-graining introduces significant overheads associated with task management on the CPU hosting the simulation and also introduces unnecessary duplication of CPU-GPU communications of the same data.
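The gap between the two quoted speedups implies how costly the CPU-side pack/unpack stage is. As a back-of-envelope reading (our assumption, not a figure from the paper: both speedups refer to the same workload, so the kernel-only and end-to-end timings differ only by the pack/unpack overhead), the implied overhead can be derived as follows:

```python
# Back-of-envelope: relate the abstract's two quoted speedups.
# Assumption (ours): both figures describe the same workload, so
#   t_cpu / t_gpu                = 7.5  (GPU kernel only)
#   t_cpu / (t_gpu + t_overhead) = 3.5  (including CPU pack/unpack)

kernel_speedup = 7.5   # GPU kernel vs CPU kernel
overall_speedup = 3.5  # including CPU-side pack/unpack time

# Normalise the CPU kernel time to 1.
t_gpu = 1 / kernel_speedup        # GPU compute time
t_total = 1 / overall_speedup     # GPU compute + pack/unpack
t_overhead = t_total - t_gpu      # implied pack/unpack cost

print(f"pack/unpack as fraction of CPU kernel time: {t_overhead:.3f}")
print(f"pack/unpack relative to GPU compute: {t_overhead / t_gpu:.2f}x")
```

Under this reading, the pack/unpack stage costs roughly 15% of the original CPU kernel time, i.e. slightly more time than the GPU compute itself, which is consistent with the abstract's point that fine-graining the tasks multiplies this per-offload overhead.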
