Scaling Up Large-Scale Graph Processing for GPU-Accelerated Heterogeneous Systems (1806.00762v1)

Published 3 Jun 2018 in cs.DC

Abstract: GPU-accelerated heterogeneous architectures offer not only large host memory for supporting large-scale graph processing but also great potential for high-performance computing. However, few existing heterogeneous systems can exploit both hardware advantages to achieve scale-up performance for graph processing, due to limited CPU-GPU transmission efficiency. In this paper, we investigate the transmission-inefficiency problem of heterogeneous graph systems. Our key insight is that transmission efficiency for heterogeneous graph processing can be greatly improved by simply iterating each subgraph multiple times (rather than only once, as in prior work) on the GPU, which further allows the overall efficiency of heterogeneous graph systems to improve as GPU processing capability grows. We therefore present Seraph, featuring {\em pipelined} subgraph iterations and {\em predictive} vertex updating, which cooperatively maximize the effective GPU computation on graph processing. Our evaluation on a wide variety of large graph datasets shows that Seraph outperforms state-of-the-art heterogeneous graph systems by 5.42x (vs. Graphie) and 3.05x (vs. Garaph). Further, Seraph scales up significantly better than Graphie when given more computing power for large-scale graph processing.
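The pipelining idea in the abstract can be illustrated with a minimal sketch: while the GPU runs several iterations over the subgraph it already holds, the transfer of the next subgraph from host memory proceeds in parallel, hiding PCIe latency behind useful computation. This is not Seraph's actual implementation; all names (transfer_subgraph, gpu_iterate, NUM_SUBGRAPHS, ITERS_PER_SUBGRAPH) and the simulated timings are illustrative assumptions.

```python
# Sketch of pipelined subgraph iterations: overlap the CPU->GPU transfer of
# the next subgraph with multiple GPU iterations on the current one.
import time
from concurrent.futures import ThreadPoolExecutor

NUM_SUBGRAPHS = 4        # hypothetical number of host-resident partitions
ITERS_PER_SUBGRAPH = 3   # key idea: iterate each subgraph several times, not once

def transfer_subgraph(i):
    """Simulate copying subgraph i from host memory to the GPU."""
    time.sleep(0.3)      # stand-in for PCIe transfer latency
    return f"subgraph-{i}"

def gpu_iterate(subgraph, it):
    """Simulate one GPU iteration (one round of vertex updates)."""
    time.sleep(0.1)      # stand-in for kernel execution time
    print(f"iteration {it} on {subgraph}")

with ThreadPoolExecutor(max_workers=1) as copier:
    pending = copier.submit(transfer_subgraph, 0)      # start first transfer
    for i in range(NUM_SUBGRAPHS):
        subgraph = pending.result()                    # wait for current transfer
        if i + 1 < NUM_SUBGRAPHS:
            pending = copier.submit(transfer_subgraph, i + 1)  # overlap next transfer...
        for it in range(ITERS_PER_SUBGRAPH):           # ...with repeated iterations
            gpu_iterate(subgraph, it)
```

With only one iteration per subgraph (as in prior work), the GPU would sit idle waiting on transfers; raising ITERS_PER_SUBGRAPH lets added GPU capability translate into more effective computation per byte transferred, which is the scaling behavior the paper targets.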
