Doing More for Less -- Cache-Aware Parallel Contraction Hierarchies Preprocessing (1208.2543v1)

Published 13 Aug 2012 in cs.DS and cs.PF

Abstract: Contraction Hierarchies is a successful speedup technique for Dijkstra's seminal shortest path algorithm that offers a convenient trade-off between preprocessing and query times. We investigate a shared-memory parallel implementation that uses $O(n+m)$ space for storing the graph and $O(1)$ space per core during preprocessing. The presented data structures and algorithms exploit cache locality and thus exhibit competitive preprocessing times. The presented implementation is especially suitable for preprocessing graphs of planet-wide scale in practice. Our experiments also show that data structures that are optimal in the PRAM model can be beaten in practice by exploiting memory cache hierarchies.
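The $O(n+m)$ graph storage mentioned in the abstract is typically realized as an adjacency array (CSR) layout, in which all out-edges of a node lie contiguously in memory, so a neighbor scan is a sequential, cache-friendly read. The sketch below is illustrative only and not taken from the paper; the names `CsrGraph`, `for_each_out_edge`, and `build_csr` are assumptions introduced for this example.

```cpp
#include <cstdint>
#include <tuple>
#include <vector>

// Minimal adjacency-array (CSR) graph: O(n + m) words in total.
// first_out has n + 1 entries; head/weight have one entry per edge.
// Scanning a node's out-edges is one contiguous read, which is the
// kind of cache locality the preprocessing relies on.
struct CsrGraph {
    std::vector<uint32_t> first_out;  // size n + 1, prefix sums of out-degrees
    std::vector<uint32_t> head;       // size m, target of each edge
    std::vector<uint32_t> weight;     // size m, edge weights

    uint32_t num_nodes() const {
        return static_cast<uint32_t>(first_out.size() - 1);
    }

    // Visit all out-edges of u, stored in the slice [first_out[u], first_out[u+1]).
    template <class F>
    void for_each_out_edge(uint32_t u, F f) const {
        for (uint32_t e = first_out[u]; e < first_out[u + 1]; ++e)
            f(head[e], weight[e]);
    }
};

// Hypothetical helper: build the CSR layout from an (u, v, w) edge list
// in two passes (count out-degrees, then scatter edges).
CsrGraph build_csr(uint32_t n,
                   const std::vector<std::tuple<uint32_t, uint32_t, uint32_t>>& edges) {
    CsrGraph g;
    g.first_out.assign(n + 1, 0);
    for (const auto& [u, v, w] : edges) ++g.first_out[u + 1];
    for (uint32_t i = 0; i < n; ++i) g.first_out[i + 1] += g.first_out[i];
    g.head.resize(edges.size());
    g.weight.resize(edges.size());
    std::vector<uint32_t> pos(g.first_out.begin(), g.first_out.end() - 1);
    for (const auto& [u, v, w] : edges) {
        uint32_t e = pos[u]++;
        g.head[e] = v;
        g.weight[e] = w;
    }
    return g;
}
```

Because shortcuts added during node contraction only append edges, implementations built on such a flat layout can keep per-core working state constant-sized while streaming over edge slices, consistent with the $O(1)$ per-core space claim.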

Citations (2)
