Efficient and Scalable Graph Pattern Mining on GPUs (2112.09761v3)

Published 17 Dec 2021 in cs.DC

Abstract: Graph Pattern Mining (GPM) extracts higher-order information from a large graph by searching for small patterns of interest. GPM applications are computationally expensive, and thus attractive for GPU acceleration. Unfortunately, due to the complexity of GPM algorithms and parallel hardware, hand-optimizing GPM applications suffers from high programming complexity, while existing GPM frameworks sacrifice efficiency for programmability. Moreover, little work has been done on GPUs to scale GPM computation to large problem sizes. We describe G2Miner, the first GPM framework that runs on multiple GPUs. G2Miner uses pattern-aware, input-aware, and architecture-aware search strategies to achieve high efficiency on GPUs. To simplify programming, it provides a code generator that automatically generates pattern-aware CUDA code. G2Miner flexibly supports both breadth-first search (BFS) and depth-first search (DFS) to maximize memory utilization and generate sufficient parallelism for GPUs. To scale G2Miner, we use a customized scheduling policy to balance work among multiple GPUs. Experiments on a V100 GPU show that G2Miner achieves average speedups of 5.4x and 7.2x over two state-of-the-art single-GPU systems, Pangolin and PBE, respectively. In the multi-GPU setting, G2Miner achieves linear speedups from 1 to 8 GPUs, for various patterns and data graphs. We also show that G2Miner on a V100 GPU is 48.3x and 15.2x faster than the state-of-the-art CPU-based systems, Peregrine and GraphZero, on a 56-core CPU machine.
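
In most GPM systems, extending a partial match reduces to set intersections over sorted adjacency lists, with a vertex ordering (symmetry breaking) so that each subgraph is counted exactly once. The sketch below is a minimal illustration of that idea, not the authors' code: triangle counting, the simplest GPM pattern, on a CSR graph with one CUDA thread per vertex. All names, the tiny example graph, and the launch configuration are hypothetical.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Count elements common to two sorted adjacency ranges that are > lower.
// The lower bound enforces the ordering u < v < w, so each triangle is
// counted exactly once.
__device__ unsigned long long intersect_gt(const int* col, int a, int a_end,
                                           int b, int b_end, int lower) {
    unsigned long long c = 0;
    while (a < a_end && b < b_end) {
        if (col[a] < col[b]) ++a;
        else if (col[a] > col[b]) ++b;
        else { if (col[a] > lower) ++c; ++a; ++b; }
    }
    return c;
}

// One thread per vertex u: for each neighbor v > u, count common
// neighbors w > v of u and v.
__global__ void triangle_count(const int* row_ptr, const int* col_idx,
                               int n, unsigned long long* total) {
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    if (u >= n) return;
    unsigned long long local = 0;
    for (int e = row_ptr[u]; e < row_ptr[u + 1]; ++e) {
        int v = col_idx[e];
        if (v <= u) continue;  // symmetry breaking: only extend with v > u
        local += intersect_gt(col_idx, row_ptr[u], row_ptr[u + 1],
                              row_ptr[v], row_ptr[v + 1], v);
    }
    if (local) atomicAdd(total, local);
}

int main() {
    // Tiny example graph: a 4-clique on vertices 0..3, which has 4 triangles.
    int h_row[] = {0, 3, 6, 9, 12};
    int h_col[] = {1, 2, 3,  0, 2, 3,  0, 1, 3,  0, 1, 2};
    int *d_row, *d_col; unsigned long long *d_cnt, h_cnt = 0;
    cudaMalloc(&d_row, sizeof(h_row));
    cudaMalloc(&d_col, sizeof(h_col));
    cudaMalloc(&d_cnt, sizeof(h_cnt));
    cudaMemcpy(d_row, h_row, sizeof(h_row), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col, h_col, sizeof(h_col), cudaMemcpyHostToDevice);
    cudaMemcpy(d_cnt, &h_cnt, sizeof(h_cnt), cudaMemcpyHostToDevice);
    triangle_count<<<1, 64>>>(d_row, d_col, 4, d_cnt);
    cudaMemcpy(&h_cnt, d_cnt, sizeof(h_cnt), cudaMemcpyDeviceToHost);
    printf("triangles: %llu\n", h_cnt);  // expected: 4
    return 0;
}
```

A pattern-aware code generator would emit kernels of this shape specialized for larger patterns. The per-thread loop above is a DFS-style traversal that keeps memory use bounded; a BFS-style variant would instead materialize a frontier of partial matches to expose more parallelism, which is the trade-off the abstract refers to.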

Authors (2)
  1. Xuhao Chen (13 papers)
  2. Arvind (76 papers)
Citations (21)
