Fast Fibonacci heaps with worst case extensions (1911.11637v1)

Published 25 Nov 2019 in cs.DS

Abstract: We concentrate on reducing the overhead of comparison-based heaps with optimal worst-case behaviour. The paper is inspired by Strict Fibonacci Heaps [1], where G. S. Brodal, G. Lagogiannis, and R. E. Tarjan implemented a heap with the DecreaseKey and Meld interface in asymptotically optimal worst-case times (measured in key comparisons). In [2], these ideas were elaborated, and it was shown that the same asymptotic times can be achieved with a strategy that loses much less information from previous comparisons. Maintaining the violation lists in these heaps incurs a large overhead. We propose a simple alternative that reduces this overhead. It allows us to implement fast amortized Fibonacci heaps in which the user can call some methods in variants that guarantee worst-case time. If such a variant is used, the heap is not guaranteed to be Fibonacci until an amortized version of a method is called again. One could call the worst-case versions all the time, but since the guarantee carries an overhead, the amortized versions are the preferred choice when the complexity of an individual operation is not the concern. We show that the full DecreaseKey-Meld interface can be implemented, but Meld is not natural for these heaps, so a much simpler implementation suffices if Meld is not needed. As we are not aware of an application requiring Meld, we concentrate on the no-Meld variant, but we also show how the changes can be applied to the Meld-supporting variant. The papers [1], [2] showed that the heaps can be implemented in the pointer machine model. For fast practical implementations we would rather use arrays; our goal is to reduce the number of pointer manipulations, and maintaining ranks via pointers to rank lists would be unnecessary overhead.
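
For readers unfamiliar with the baseline interface the abstract builds on, the following is a compact Python sketch of the classical amortized Fibonacci heap (Fredman-Tarjan) with Insert, ExtractMin and DecreaseKey. It is a reference sketch only, not the paper's data structure: it has no worst-case method variants, no Meld, and uses pointers rather than the array layout the authors advocate; all names are illustrative.

class Node:                               # illustrative sketch, not the paper's structure
    def __init__(self, key):
        self.key, self.degree, self.mark = key, 0, False
        self.parent = self.child = None
        self.left = self.right = self     # circular doubly linked sibling ring

class FibHeap:                            # classical amortized Fibonacci heap
    def __init__(self):
        self.min, self.n = None, 0        # minimum root, number of stored keys

    @staticmethod
    def _splice(a, b):                    # insert lone node b into a's ring
        b.left, b.right = a, a.right
        a.right.left = a.right = b

    @staticmethod
    def _unlink(x):                       # remove x from its sibling ring
        x.left.right, x.right.left = x.right, x.left
        x.left = x.right = x

    def insert(self, key):                # O(1)
        node = Node(key)
        if self.min is not None:
            self._splice(self.min, node)
        if self.min is None or key < self.min.key:
            self.min = node
        self.n += 1
        return node                       # handle for later decrease_key calls

    def extract_min(self):                # O(log n) amortized
        z = self.min
        if z is None:
            return None
        while z.child is not None:        # promote z's children to roots
            c = z.child
            z.child = None if c.right is c else c.right
            self._unlink(c)
            c.parent, c.mark = None, False
            self._splice(z, c)
        succ = z.right
        self._unlink(z)
        self.n -= 1
        if z is succ:                     # z was the only root
            self.min = None
        else:
            self.min = succ
            self._consolidate()
        return z.key

    def _consolidate(self):               # link roots until all degrees differ
        roots, c = [self.min], self.min.right
        while c is not self.min:
            roots.append(c)
            c = c.right
        slots = {}                        # degree -> root with that degree
        for x in roots:
            d = x.degree
            while d in slots:
                y = slots.pop(d)
                if y.key < x.key:
                    x, y = y, x
                self._link(y, x)          # larger-key root becomes a child
                d += 1
            slots[d] = x
        self.min = min(slots.values(), key=lambda r: r.key)

    def _link(self, y, x):                # make root y a child of root x
        self._unlink(y)
        y.parent, y.mark = x, False
        if x.child is None:
            x.child = y
        else:
            self._splice(x.child, y)
        x.degree += 1

    def decrease_key(self, node, new_key):  # O(1) amortized
        assert new_key <= node.key, "new key must not exceed the current key"
        node.key = new_key
        p = node.parent
        if p is not None and node.key < p.key:
            self._cut(node, p)
            self._cascading_cut(p)
        if node.key < self.min.key:
            self.min = node

    def _cut(self, x, p):                 # move x from under p to the root list
        if p.child is x:
            p.child = None if x.right is x else x.right
        self._unlink(x)
        p.degree -= 1
        x.parent, x.mark = None, False
        self._splice(self.min, x)

    def _cascading_cut(self, y):          # cut marked ancestors as well
        p = y.parent
        if p is not None and y.mark:
            self._cut(y, p)
            self._cascading_cut(p)
        elif p is not None:
            y.mark = True

The heaps proposed in the paper expose, on top of such an amortized interface, variants of the operations with worst-case guarantees; as the abstract notes, after a worst-case call the structure is only guaranteed to be a Fibonacci heap again once an amortized version of a method has been invoked.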

Citations (1)
