Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (2504.17033v1)
Abstract: We give a deterministic $O(m\log^{2/3}n)$-time algorithm for single-source shortest paths (SSSP) on directed graphs with real non-negative edge weights in the comparison-addition model. This is the first result to break the $O(m+n\log n)$ time bound of Dijkstra's algorithm on sparse graphs, showing that Dijkstra's algorithm is not optimal for SSSP.
Summary
- The paper presents a deterministic SSSP algorithm achieving O(m log^{2/3} n) time, surpassing Dijkstra's O(m + n log n) bound for sparse graphs.
- It employs a recursive divide-and-conquer strategy that combines elements of Dijkstra’s and Bellman-Ford methods to reduce the frontier size.
- The work introduces innovative techniques like FindPivots and specialized data structures to efficiently manage relaxations in the comparison-addition model.
This paper, "Breaking the Sorting Barrier for Directed Single-Source Shortest Paths" (2504.17033), presents a deterministic algorithm for solving the single-source shortest path (SSSP) problem on directed graphs with non-negative real edge weights in O(m log^{2/3} n) time. This is a significant result because it is the first algorithm to break the O(m + n log n) time complexity of Dijkstra's algorithm on sparse graphs (m ≈ n), which has long been considered a sorting barrier for SSSP in the comparison-addition model.
The standard Dijkstra's algorithm, when implemented with efficient priority queues like Fibonacci heaps or relaxed heaps, achieves O(m + n log n) time. This runtime is dominated by the n log n term in sparse graphs, which arises from extracting the minimum-distance vertex from a priority queue n times. This process implicitly sorts the vertices by their distance from the source. Recent work has shown that if the algorithm is required to output this sorted order, Dijkstra's is indeed optimal [HHRTT24]. This paper shows that if only the distances are required, a faster deterministic algorithm exists for directed graphs. Previous work had achieved faster-than-sorting randomized SSSP for undirected graphs [DMSY23].
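For reference, a minimal Dijkstra baseline might look like the sketch below. This is the binary-heap variant, so it runs in O((m + n) log n) rather than the Fibonacci-heap bound quoted above, and the adjacency-dict representation is an assumption made for illustration only.

```python
import heapq

def dijkstra(adj, source):
    """Baseline Dijkstra with a binary heap.
    adj: dict mapping vertex -> list of (neighbor, non-negative weight)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]               # (distance estimate, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry; u was already settled
        for v, w in adj.get(u, []):
            nd = d + w                    # only additions and comparisons on weights
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With a Fibonacci heap in place of `heapq`, decrease-key becomes O(1) amortized and the bound improves to the O(m + n log n) discussed above.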
The core technical approach of the new algorithm is a sophisticated divide-and-conquer strategy that merges ideas from Dijkstra's algorithm and the Bellman-Ford algorithm. Dijkstra's algorithm proceeds by always exploring from the vertex with the smallest current distance estimate, effectively sorting vertices. Bellman-Ford, conversely, performs relaxations iteratively over all edges, making progress on paths up to a certain number of edges or vertices. The proposed algorithm aims to compute distances in increasing order, but crucially avoids fully sorting the vertices by their distances.
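For contrast with the Dijkstra sketch above, the Bellman-Ford ingredient is a round-based relaxation loop. The sketch below (same assumed graph representation, again only a baseline illustration) relaxes all edges for a fixed number of rounds.

```python
def bellman_ford_rounds(adj, dist, rounds):
    """Baseline Bellman-Ford relaxation: after i rounds, every vertex whose
    shortest path from the vertices with correct initial estimates uses at
    most i edges has a correct distance estimate.
    adj: vertex -> list of (neighbor, weight); dist is modified in place."""
    INF = float("inf")
    for _ in range(rounds):
        updated = False
        for u, nbrs in adj.items():
            du = dist.get(u, INF)
            if du == INF:
                continue
            for v, w in nbrs:
                if du + w < dist.get(v, INF):
                    dist[v] = du + w
                    updated = True
        if not updated:
            break                 # no change in a full round: estimates are final
    return dist
```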
At a high level, the algorithm processes vertices in stages, defined by distance bounds. Suppose we have computed all shortest paths shorter than some value b. The goal is to find shortest paths for vertices with true distances between b and some larger bound B. In a Dijkstra-like approach, one would use a priority queue containing vertices whose current distance estimates fall within [b, B). The bottleneck is when this "frontier" of vertices is large, requiring Ω(n log n) time to extract the minimum repeatedly.
The key innovation is a technique to reduce the size of this frontier. The algorithm maintains a set of "incomplete" vertices whose true distances are less than the current upper bound B, but whose shortest paths are not yet finalized. These incomplete vertices must have their shortest path pass through some "complete" vertex on the current frontier S. The algorithm aims to limit the size of this frontier S relative to the set U of "vertices of interest" (those with true distance < B whose shortest path goes through S) to roughly |U|/k, where k = log^{1/3} n is a parameter.
This is achieved using a subroutine called FindPivots. Given a bound B and a frontier S, FindPivots performs k steps of Bellman-Ford-like relaxation only from vertices in S (and vertices reached within those k steps). After k steps, any vertex v ∈ U whose shortest path passes through a complete vertex u ∈ S and uses at most k edges involving other vertices in U will have its distance finalized and be marked as complete. The vertices in S that are roots of shortest-path trees within U containing at least k vertices are designated as "pivots". The crucial property is that the number of such pivots is at most |U|/k, and any vertex in U that is still incomplete after this process must depend on a pivot. This significantly reduces the size of the set of "important" frontier vertices.
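As a rough illustration of this idea (not the paper's exact procedure), one can run k rounds of Bellman-Ford-style relaxation out of S, remember which frontier vertex each relaxation descends from, and keep as pivots the roots whose relaxation trees reach at least k vertices. All names below are illustrative, and the distances of the vertices in S are assumed to be correct on entry.

```python
def find_pivots_sketch(adj, S, dist, B, k):
    """Illustrative FindPivots-style sketch: k rounds of relaxation out of the
    frontier S, tracking for each relaxed vertex which frontier root it
    descends from, then keeping as pivots the roots of large trees."""
    INF = float("inf")
    root = {u: u for u in S}            # frontier root each relaxation came from
    frontier, reached = set(S), set(S)
    for _ in range(k):
        nxt = set()
        for u in frontier:
            for v, w in adj.get(u, []):
                nd = dist[u] + w
                if nd < B and nd < dist.get(v, INF):
                    dist[v] = nd
                    root[v] = root[u]
                    nxt.add(v)
        reached |= nxt
        frontier = nxt
    # Pivots: frontier roots whose relaxation trees contain at least k vertices.
    tree_size = {}
    for v in reached:
        tree_size[root[v]] = tree_size.get(root[v], 0) + 1
    pivots = {u for u in S if tree_size.get(u, 0) >= k}
    return pivots, reached
```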
The overall algorithm is a recursive procedure, BMSSP(l, B, S), which computes shortest paths bounded by B starting from the set S at recursion level l.
- Base Case (l=0): S is a singleton {x}. It runs a small Dijkstra-like process from x up to a limit of k+1 vertices or the bound B.
- Recursive Step (l>0), whose structure is sketched in code after this list:
  - Call FindPivots(B, S) to get a set of pivots P ⊆ S and a set W of vertices completed within k steps; |P| is small relative to |U|.
  - Initialize a specialized data structure (described below) with the pivots P. This structure is designed for partial sorting of distance values.
  - Iteratively:
    - Pull a subset S_i of M = 2^{(l-1)t} vertices with the smallest distances from the data structure, obtaining a new bound B_i.
    - Recursively call BMSSP(l-1, B_i, S_i). This finds distances for vertices reachable through S_i with distances less than B_i.
    - Relax edges from the complete vertices U_i returned by the recursive call. Neighbors whose distances are updated are either inserted back into the data structure (if their new distance is in [B_i, B)) or "batch prepended" (if their new distance is in [B_i', B_i), where B_i' is the bound returned by the recursive call). Batch prepending is an optimized insertion for elements smaller than all others currently in the structure.
    - Continue iterating until the distance bound B is reached or a workload limit (|U| > k · 2^{lt}) is hit, indicating a "partial execution".
  - Return the computed boundary B' and the set U of complete vertices found within that bound.
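A heavily simplified structural sketch of this recursion is given below. It replaces the specialized data structure with a plain binary heap and omits pivot selection, so it does not achieve the paper's running time; it only illustrates the recursion shape, the batch pulls, and the edge relaxations out of completed vertices (all names are assumptions, not the paper's code).

```python
import heapq

def bmssp_sketch(l, B, S, dist, adj, k, t):
    """Structural sketch of BMSSP(l, B, S); returns (B', U)."""
    INF = float("inf")

    if l == 0:
        # Base case: a small Dijkstra-like run from the single vertex in S,
        # settling at most k + 1 vertices with distances below B.
        (x,) = tuple(S)
        heap, U = [(dist[x], x)], set()
        while heap and len(U) <= k:
            d, u = heapq.heappop(heap)
            if u in U or d >= B or d > dist.get(u, INF):
                continue
            U.add(u)
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < min(dist.get(v, INF), B):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        leftover = [d for d, u in heap if u not in U]
        return (min(leftover) if leftover else B), U

    # Recursive step; the real algorithm would call FindPivots here (P = S below).
    D = [(dist[p], p) for p in S]
    heapq.heapify(D)
    U, B_prime, M = set(), B, 2 ** ((l - 1) * t)
    while D and len(U) <= k * 2 ** (l * t):
        batch = [heapq.heappop(D) for _ in range(min(M, len(D)))]
        S_i = {u for _, u in batch}
        B_i = D[0][0] if D else B
        B_i_prime, U_i = bmssp_sketch(l - 1, B_i, S_i, dist, adj, k, t)
        U |= U_i
        for u in U_i:                      # relax edges out of completed vertices
            for v, w in adj.get(u, []):
                nd = dist[u] + w
                if nd < min(dist.get(v, INF), B):
                    dist[v] = nd
                    # Insert and Batch Prepend are collapsed into one heap push here.
                    heapq.heappush(D, (nd, v))
        B_prime = min(B_prime, B_i_prime)
    return B_prime, U
```

A top-level call would look like `bmssp_sketch(L, float("inf"), {s}, {s: 0.0}, adj, k, t)` with L the number of recursion levels.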
The specialized data structure (given by the paper's partitioning lemma) is crucial for managing the frontier vertices in the structure D at each level. It needs to efficiently handle insertions of new vertices with various distance values and also efficiently pull out a small set of vertices with the minimum current distance values (Pull). The Batch Prepend operation is needed because distances to neighbors of a completed vertex u ∈ U_i might be smaller than many values currently in the structure D, effectively belonging to an earlier "batch" of distances. The lemma describes a block-based linked-list structure augmented with a binary search tree to support these operations with amortized costs depending on the total number of elements N, the batch size M, and the number of batch-prepended elements L.
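A naive stand-in for this interface, shown below, uses a binary heap and therefore does not match the lemma's amortized bounds; it only illustrates the Insert / Batch Prepend / Pull contract that the recursion relies on (class and method names are assumptions).

```python
import heapq

class PartialSortDS:
    """Naive stand-in for the paper's block-based structure: a binary heap
    gives O(log N) per operation instead of the lemma's amortized bounds."""

    def __init__(self, M, bound):
        self.M = M              # number of smallest keys returned by each Pull
        self.bound = bound      # global upper bound B on keys of interest
        self.heap = []          # (key, vertex) pairs, possibly with stale duplicates
        self.best = {}          # current best key per vertex

    def insert(self, v, key):
        if key < self.best.get(v, float("inf")):
            self.best[v] = key
            heapq.heappush(self.heap, (key, v))

    def batch_prepend(self, pairs):
        # In the real structure these keys are smaller than everything stored,
        # so a whole block can be attached cheaply; the stand-in just inserts.
        for v, key in pairs:
            self.insert(v, key)

    def _drop_stale(self):
        while self.heap and self.best.get(self.heap[0][1]) != self.heap[0][0]:
            heapq.heappop(self.heap)

    def pull(self):
        """Remove up to M vertices with the smallest keys; return (B_i, S_i)
        where B_i separates the removed keys from those still stored."""
        out = set()
        while self.heap and len(out) < self.M:
            self._drop_stale()
            if not self.heap:
                break
            key, v = heapq.heappop(self.heap)
            out.add(v)
            del self.best[v]
        self._drop_stale()
        B_i = self.heap[0][0] if self.heap else self.bound
        return B_i, out

    def empty(self):
        return not self.best
```

In BMSSP, `insert` would be used for neighbors whose new distance lands in [B_i, B), while `batch_prepend` would receive those landing in [B_i', B_i).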
The parameters are set to k = ⌊log^{1/3} n⌋ and t = ⌊log^{2/3} n⌋. The number of recursion levels is O(log n / t) = O(log^{1/3} n). The size of S at level l is bounded by 2^{lt}.
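For a sense of scale (assuming base-2 logarithms here, which the asymptotics do not depend on), these parameters stay tiny even for large n:

```python
import math

def params(n):
    """Parameter choices: k = floor(log^{1/3} n), t = floor(log^{2/3} n)."""
    lg = math.log2(n)
    k = math.floor(lg ** (1 / 3))
    t = math.floor(lg ** (2 / 3))
    levels = math.ceil(lg / t)      # O(log n / t) = O(log^{1/3} n) recursion levels
    return k, t, levels

print(params(10 ** 6))              # n = 10^6 gives k = 2, t = 7, 3 levels
```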
The total time complexity is derived by summing the costs across all levels of the recursion tree. The dominant costs come from the FindPivots calls and the operations on the specialized data structure.
- FindPivots on a call (l, B, S) takes O(k · min{k|S|, |U|}) time. Summing this cost over all nodes in the recursion tree gives O(n · k · (log n)/t) = O(n · log^{1/3} n · log^{1/3} n) = O(n log^{2/3} n). The constant-degree graph transformation ensures the number of edges processed per vertex relaxation is constant.
- Operations on the data structure: Insert takes amortized O(t) time, Batch Prepend takes amortized O(log k) per element, and Pull takes O(M). The total number of insertions is related to the number of edge relaxations: each edge relaxation (u, v) can potentially insert v into the data structure. An edge (u, v) can cause v to be inserted via Insert only once across all levels, but can cause v to be inserted via Batch Prepend multiple times if d(v) is updated below the B_i bounds in recursive calls. The analysis shows that the total data structure time is bounded by O(m log^{2/3} n + n log^{2/3} n log log n) = O(m log^{2/3} n).
The overall time complexity combines these costs, resulting in the claimed O(m log^{2/3} n). The transformation to a constant-degree graph, a standard technique, takes O(m) time and space. Handling non-unique path lengths adds only a constant factor overhead to comparisons. The algorithm works in the comparison-addition model, appropriate for real-valued edge weights.
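The constant-degree reduction can be realized by the standard vertex-splitting trick sketched below (a common variant, not necessarily the paper's exact construction): each vertex becomes a zero-weight cycle of copies, one copy per incident edge.

```python
from collections import defaultdict

def to_constant_degree(edges):
    """Split each original vertex x with d incident edges into d copies
    (x, 0), ..., (x, d-1) joined by a zero-weight directed cycle, and re-attach
    every original edge to one copy at each endpoint. Each new vertex then has
    in- and out-degree at most 2, and distances are preserved because the
    cycle edges cost 0. edges: list of (u, v, w); returns the new edge list."""
    slots = defaultdict(list)                 # vertex -> its incident edge endpoints
    for i, (u, v, w) in enumerate(edges):
        slots[u].append(("out", i))
        slots[v].append(("in", i))

    new_edges, attach = [], {}
    for x, endpoints in slots.items():
        d = len(endpoints)
        for j, ep in enumerate(endpoints):
            attach[ep] = (x, j)               # copy of x handling this endpoint
            if d > 1:                         # zero-weight cycle through the copies
                new_edges.append(((x, j), (x, (j + 1) % d), 0.0))

    for i, (u, v, w) in enumerate(edges):     # re-attach each original edge
        new_edges.append((attach[("out", i)], attach[("in", i)], w))
    return new_edges
```

Running SSSP from any copy of the source, e.g. (s, 0), then yields the original distance of each vertex x as the distance to any copy of x, since the copies of a vertex are connected by a zero-cost cycle.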
In summary, the paper provides a significant theoretical advancement by presenting the first deterministic SSSP algorithm faster than Dijkstra's for sparse directed graphs with real non-negative weights in the comparison-addition model. The key is a recursive divide-and-conquer approach that intelligently combines ideas from Dijkstra and Bellman-Ford, using a frontier reduction technique based on limited relaxations and a specialized data structure for managing potential frontier vertices without full sorting. While the constants hidden in the O(m log^{2/3} n) bound may mean Dijkstra's algorithm remains faster in practice for typical graph sizes, this work fundamentally changes our understanding of the lower bounds for SSSP when vertex ordering is not required.
HackerNews
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (17 points, 2 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (3 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (3 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths [pdf] (2 points, 1 comment)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (2 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (2 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (2 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (2 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (2 points, 0 comments)
- Breaking the Sorting Barrier for Directed Single-Source Shortest Paths (1 point, 0 comments)
- New algorithm beats Dijkstra's time for shortest paths in directed graphs (1313 points, 125 comments)
- New algorithm beats Dijkstra's time for shortest paths in directed graphs (990 points, 56 comments)
- New algorithm beats Dijkstra's time for shortest paths in directed graphs (127 points, 6 comments)